property headers: Dict[str, str]#
Get the token.
property request_url: str#
Get the request url.
property table_info: str#
Information about all tables in the database.
pydantic model langchain.utilities.PythonREPL[source]#
Simulates a standalone Python REPL.
field globals: Optional[Dict] [Optional] (alias '_globals')#
field locals: Optional[Dict] [Optional] (alias '_locals')#
run(command: str) → str[source]#
Run command with own globals/locals and return anything printed.
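Example
A minimal usage sketch:
from langchain.utilities import PythonREPL
repl = PythonREPL()
repl.run("print(1 + 1)")  # returns the captured stdout, here '2\n'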
pydantic model langchain.utilities.SearxSearchWrapper[source]#
Wrapper for Searx API.
To use you need to provide the searx host by passing the named parameter
searx_host or exporting the environment variable SEARX_HOST.
In some situations you might want to disable SSL verification, for example
if you are running searx locally. You can do this by passing the named parameter
unsecure. You can also pass the host url scheme as http to disable SSL.
Example
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://localhost:8888")
Example with SSL disabled:
from langchain.utilities import SearxSearchWrapper
# note the unsecure parameter is not needed if you pass the url scheme as
# http
searx = SearxSearchWrapper(searx_host="http://localhost:8888",
unsecure=True)
Validators
disable_ssl_warnings » unsecure
validate_params » all fields
field aiosession: Optional[Any] = None#
field categories: Optional[List[str]] = []#
field engines: Optional[List[str]] = []#
field headers: Optional[dict] = None#
field k: int = 10#
field params: dict [Optional]#
field query_suffix: Optional[str] = ''#
field searx_host: str = ''#
field unsecure: bool = False#
async aresults(query: str, num_results: int, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]#
Asynchronously query with json results.
Uses aiohttp. See results for more info.
async arun(query: str, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → str[source]#
Asynchronous version of run.
results(query: str, num_results: int, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]#
Run query through Searx API and return the results with metadata.
Parameters
query – The query to search for.
query_suffix – Extra suffix appended to the query.
num_results – Limit the number of results to return.
engines – List of engines to use for the query.
categories – List of categories to use for the query.
**kwargs – extra parameters to pass to the searx API.
Returns
{snippet: The description of the result.
title: The title of the result.
link: The link to the result.
engines: The engines used for the result.
category: Searx category of the result.
}
Return type
Dict with the following keys
run(query: str, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → str[source]#
Run query through Searx API and parse results.
You can pass any other params to the searx query API.
Parameters
query – The query to search for.
query_suffix – Extra suffix appended to the query.
engines – List of engines to use for the query.
categories – List of categories to use for the query.
**kwargs – extra parameters to pass to the searx API.
Returns
The result of the query.
Return type
str
Raises
ValueError – If an error occurred with the query.
Example
This will make a query to the qwant engine:
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://my.searx.host")
searx.run("what is the weather in France ?", engine="qwant")
# the same result can be achieved using the `!` syntax of searx
# to select the engine using `query_suffix`
searx.run("what is the weather in France ?", query_suffix="!qwant")
pydantic model langchain.utilities.SerpAPIWrapper[source]#
Wrapper around SerpAPI.
To use, you should have the google-search-results python package installed,
and the environment variable SERPAPI_API_KEY set with your API key, or pass
serpapi_api_key as a named parameter to the constructor.
Example
from langchain import SerpAPIWrapper
serpapi = SerpAPIWrapper()
field aiosession: Optional[aiohttp.client.ClientSession] = None#
field params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}#
field serpapi_api_key: Optional[str] = None#
async aresults(query: str) → dict[source]#
Use aiohttp to run query through SerpAPI and return the results async.
async arun(query: str, **kwargs: Any) → str[source]#
Run query through SerpAPI and parse result async.
get_params(query: str) → Dict[str, str][source]#
Get parameters for SerpAPI.
results(query: str) → dict[source]#
Run query through SerpAPI and return the raw result.
run(query: str, **kwargs: Any) → str[source]#
Run query through SerpAPI and parse result.
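The default params can be overridden to target a different SerpAPI engine or locale; a minimal sketch (the values shown are illustrative):
from langchain.utilities import SerpAPIWrapper
params = {"engine": "bing", "gl": "us", "hl": "en"}
search = SerpAPIWrapper(params=params)
search.run("what is the capital of France?")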
pydantic model langchain.utilities.TextRequestsWrapper[source]#
Lightweight wrapper around requests library.
The main purpose of this wrapper is to always return a text output.
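Example
A minimal usage sketch (the URL is illustrative):
from langchain.utilities import TextRequestsWrapper
requests_wrapper = TextRequestsWrapper()
body = requests_wrapper.get("https://example.com")  # response body as text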
field aiosession: Optional[aiohttp.client.ClientSession] = None#
field headers: Optional[Dict[str, str]] = None#
async adelete(url: str, **kwargs: Any) → str[source]#
DELETE the URL and return the text asynchronously.
async aget(url: str, **kwargs: Any) → str[source]#
GET the URL and return the text asynchronously.
async apatch(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
PATCH the URL and return the text asynchronously.
async apost(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
POST to the URL and return the text asynchronously.
async aput(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
PUT the URL and return the text asynchronously.
delete(url: str, **kwargs: Any) → str[source]#
DELETE the URL and return the text.
get(url: str, **kwargs: Any) → str[source]#
GET the URL and return the text.
patch(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
PATCH the URL and return the text.
post(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
POST to the URL and return the text.
put(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
PUT the URL and return the text.
property requests: langchain.requests.Requests#
pydantic model langchain.utilities.WikipediaAPIWrapper[source]#
Wrapper around WikipediaAPI.
To use, you should have the wikipedia python package installed.
This wrapper will use the Wikipedia API to conduct searches and
fetch page summaries. By default, it will return the page summaries
of the top-k results of an input search.
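Example
A minimal usage sketch, assuming the wikipedia package is installed:
from langchain.utilities import WikipediaAPIWrapper
wikipedia = WikipediaAPIWrapper(top_k_results=1)
wikipedia.run("Python (programming language)")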
field lang: str = 'en'#
field top_k_results: int = 3#
fetch_formatted_page_summary(page: str) → Optional[str][source]#
run(query: str) → str[source]#
Run Wikipedia search and get page summaries.
pydantic model langchain.utilities.WolframAlphaAPIWrapper[source]#
Wrapper for Wolfram Alpha.
Docs for using:
Go to wolfram alpha and sign up for a developer account
Create an app and get your APP ID
Save your APP ID into WOLFRAM_ALPHA_APPID env variable
pip install wolframalpha
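Example
A minimal usage sketch, assuming WOLFRAM_ALPHA_APPID is set:
from langchain.utilities import WolframAlphaAPIWrapper
wolfram = WolframAlphaAPIWrapper()
wolfram.run("What is 2x+5 = -3x + 7?")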
field wolfram_alpha_appid: Optional[str] = None#
run(query: str) → str[source]#
Run query through WolframAlpha and parse result.
Output Parsers#
pydantic model langchain.output_parsers.CommaSeparatedListOutputParser[source]#
Parse out comma separated lists.
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → List[str][source]#
Parse the output of an LLM call.
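Example
A minimal usage sketch:
from langchain.output_parsers import CommaSeparatedListOutputParser
parser = CommaSeparatedListOutputParser()
parser.parse("red, green, blue")  # ['red', 'green', 'blue']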
pydantic model langchain.output_parsers.GuardrailsOutputParser[source]#
field guard: Any = None#
classmethod from_rail(rail_file: str, num_reasks: int = 1) → langchain.output_parsers.rail_parser.GuardrailsOutputParser[source]#
classmethod from_rail_string(rail_str: str, num_reasks: int = 1) → langchain.output_parsers.rail_parser.GuardrailsOutputParser[source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → Dict[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
pydantic model langchain.output_parsers.ListOutputParser[source]#
Class to parse the output of an LLM call to a list.
abstract parse(text: str) → List[str][source]#
Parse the output of an LLM call.
pydantic model langchain.output_parsers.OutputFixingParser[source]#
Wraps a parser and tries to fix parsing errors.
field parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T] [Required]#
field retry_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'error', 'instructions'], output_parser=None, partial_variables={}, template='Instructions:\n--------------\n{instructions}\n--------------\nCompletion:\n--------------\n{completion}\n--------------\n\nAbove, the Completion did not satisfy the constraints given in the Instructions.\nError:\n--------------\n{error}\n--------------\n\nPlease try again. Please only respond with an answer that satisfies the constraints laid out in the Instructions:', template_format='f-string', validate_template=True)) → langchain.output_parsers.fix.OutputFixingParser[langchain.output_parsers.fix.T][source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(completion: str) → langchain.output_parsers.fix.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
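Example
A minimal sketch, assuming OPENAI_API_KEY is set; the wrapped parser and schema are illustrative:
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import OutputFixingParser, PydanticOutputParser
from pydantic import BaseModel

class Actor(BaseModel):
    name: str

base_parser = PydanticOutputParser(pydantic_object=Actor)
fixing_parser = OutputFixingParser.from_llm(llm=ChatOpenAI(), parser=base_parser)
# single-quoted pseudo-JSON fails the base parser, so it is sent to the LLM for repair
fixing_parser.parse("{'name': 'Tom Hanks'}")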
pydantic model langchain.output_parsers.PydanticOutputParser[source]#
field pydantic_object: Type[langchain.output_parsers.pydantic.T] [Required]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → langchain.output_parsers.pydantic.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
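Example
A minimal sketch; the schema is illustrative:
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field

class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

parser = PydanticOutputParser(pydantic_object=Joke)
parser.get_format_instructions()  # JSON-schema-based instructions to embed in a prompt
parser.parse('{"setup": "Why did the chicken cross the road?", "punchline": "To get to the other side."}')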
pydantic model langchain.output_parsers.RegexDictParser[source]#
Class to parse the output into a dictionary.
field no_update_value: Optional[str] = None#
field output_key_to_format: Dict[str, str] [Required]#
field regex_pattern: str = "{}:\\s?([^.'\\n']*)\\.?"#
parse(text: str) → Dict[str, str][source]#
Parse the output of an LLM call.
pydantic model langchain.output_parsers.RegexParser[source]#
Class to parse the output into a dictionary.
field default_output_key: Optional[str] = None#
field output_keys: List[str] [Required]#
field regex: str [Required]#
parse(text: str) → Dict[str, str][source]#
Parse the output of an LLM call.
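Example
A minimal sketch; the pattern and keys are illustrative:
from langchain.output_parsers import RegexParser
parser = RegexParser(
    regex=r"Score: (\d+)\nReason: (.*)",
    output_keys=["score", "reason"],
)
parser.parse("Score: 9\nReason: Clear and concise.")  # {'score': '9', 'reason': 'Clear and concise.'}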
pydantic model langchain.output_parsers.ResponseSchema[source]#
field description: str [Required]#
field name: str [Required]#
pydantic model langchain.output_parsers.RetryOutputParser[source]#
Wraps a parser and tries to fix parsing errors.
Does this by passing the original prompt and the completion to another
LLM, and telling it the completion did not satisfy criteria in the prompt.
field parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]#
field retry_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nPlease try again:', template_format='f-string', validate_template=True)) → langchain.output_parsers.retry.RetryOutputParser[langchain.output_parsers.retry.T][source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(completion: str) → langchain.output_parsers.retry.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
parse_with_prompt(completion: str, prompt_value: langchain.schema.PromptValue) → langchain.output_parsers.retry.T[source]#
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – output of language model
prompt – prompt value
Returns
structured output
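Example
A minimal sketch, assuming OPENAI_API_KEY is set; the schema and prompt are illustrative:
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser, RetryOutputParser
from langchain.prompts import PromptTemplate
from pydantic import BaseModel

class Answer(BaseModel):
    winner: str

base_parser = PydanticOutputParser(pydantic_object=Answer)
prompt = PromptTemplate.from_template("Answer as JSON: {query}")
prompt_value = prompt.format_prompt(query="Who won the 2018 World Cup?")
retry_parser = RetryOutputParser.from_llm(llm=ChatOpenAI(), parser=base_parser)
# the truncated completion is re-sent together with the original prompt
retry_parser.parse_with_prompt('{"winner": ', prompt_value)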
pydantic model langchain.output_parsers.RetryWithErrorOutputParser[source]#
Wraps a parser and tries to fix parsing errors.
Does this by passing the original prompt, the completion, AND the error
that was raised to another language model and telling it that the completion
did not work, and raised the given error. Differs from RetryOutputParser
in that this implementation provides the error that was raised back to the
LLM, which in theory should give it more information on how to fix it.
field parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]#
field retry_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'error', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nDetails: {error}\nPlease try again:', template_format='f-string', validate_template=True)) → langchain.output_parsers.retry.RetryWithErrorOutputParser[langchain.output_parsers.retry.T][source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(completion: str) → langchain.output_parsers.retry.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
parse_with_prompt(completion: str, prompt_value: langchain.schema.PromptValue) → langchain.output_parsers.retry.T[source]#
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – output of language model
prompt – prompt value
Returns
structured output
pydantic model langchain.output_parsers.StructuredOutputParser[source]#
field response_schemas: List[langchain.output_parsers.structured.ResponseSchema] [Required]#
classmethod from_response_schemas(response_schemas: List[langchain.output_parsers.structured.ResponseSchema]) → langchain.output_parsers.structured.StructuredOutputParser[source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → Any[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
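Example
A minimal sketch; the schemas are illustrative:
from langchain.output_parsers import ResponseSchema, StructuredOutputParser
schemas = [
    ResponseSchema(name="answer", description="answer to the user's question"),
    ResponseSchema(name="source", description="source used to answer the question"),
]
parser = StructuredOutputParser.from_response_schemas(schemas)
parser.get_format_instructions()  # asks the LLM for a fenced JSON snippet with these keys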
Docstore#
Wrappers on top of docstores.
class langchain.docstore.InMemoryDocstore(_dict: Dict[str, langchain.schema.Document])[source]#
Simple in memory docstore in the form of a dict.
add(texts: Dict[str, langchain.schema.Document]) → None[source]#
Add texts to in memory dictionary.
search(search: str) → Union[str, langchain.schema.Document][source]#
Search via direct lookup.
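Example
A minimal usage sketch:
from langchain.docstore import InMemoryDocstore
from langchain.schema import Document
store = InMemoryDocstore({"1": Document(page_content="hello world")})
store.add({"2": Document(page_content="another document")})
store.search("2")  # returns the Document; an unknown ID returns an error string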
class langchain.docstore.Wikipedia[source]#
Wrapper around wikipedia API.
search(search: str) → Union[str, langchain.schema.Document][source]#
Try to search for wiki page.
If page exists, return the page summary, and a PageWithLookups object.
If page does not exist, return similar entries.
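Example
A minimal usage sketch, assuming the wikipedia package is installed:
from langchain.docstore import Wikipedia
docstore = Wikipedia()
result = docstore.search("Alan Turing")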
SerpAPI#
For backwards compatibility.
pydantic model langchain.serpapi.SerpAPIWrapper[source]#
Wrapper around SerpAPI.
To use, you should have the google-search-results python package installed,
and the environment variable SERPAPI_API_KEY set with your API key, or pass
serpapi_api_key as a named parameter to the constructor.
Example
from langchain import SerpAPIWrapper
serpapi = SerpAPIWrapper()
field aiosession: Optional[aiohttp.client.ClientSession] = None#
field params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}#
field serpapi_api_key: Optional[str] = None#
async aresults(query: str) → dict[source]#
Use aiohttp to run query through SerpAPI and return the results async.
async arun(query: str, **kwargs: Any) → str[source]#
Run query through SerpAPI and parse result async.
get_params(query: str) → Dict[str, str][source]#
Get parameters for SerpAPI.
results(query: str) → dict[source]#
Run query through SerpAPI and return the raw result.
run(query: str, **kwargs: Any) → str[source]#
Run query through SerpAPI and parse result.
Experimental Modules#
This module contains experimental modules and reproductions of existing work using LangChain primitives.
Autonomous Agents#
Here, we document the BabyAGI and AutoGPT classes from the langchain.experimental module.
class langchain.experimental.BabyAGI(*, memory: Optional[langchain.schema.BaseMemory] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, verbose: bool = None, task_list: collections.deque = None, task_creation_chain: langchain.chains.base.Chain, task_prioritization_chain: langchain.chains.base.Chain, execution_chain: langchain.chains.base.Chain, task_id_counter: int = 1, vectorstore: langchain.vectorstores.base.VectorStore, max_iterations: Optional[int] = None)[source]#
Controller model for the BabyAGI agent.
model Config[source]#
Configuration for this pydantic object.
arbitrary_types_allowed = True#
execute_task(objective: str, task: str, k: int = 5) → str[source]#
Execute a task.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, verbose: bool = False, task_execution_chain: Optional[langchain.chains.base.Chain] = None, **kwargs: Dict[str, Any]) → langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI[source]#
Initialize the BabyAGI Controller.
get_next_task(result: str, task_description: str, objective: str) → List[Dict][source]#
Get the next task.
property input_keys: List[str]#
Input keys this chain expects.
property output_keys: List[str]#
Output keys this chain expects.
prioritize_tasks(this_task_id: int, objective: str) → List[Dict][source]#
Prioritize tasks.
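Example
A minimal sketch of driving the controller, assuming OPENAI_API_KEY is set and the faiss-cpu package is installed:
import faiss
from langchain import OpenAI
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.experimental import BabyAGI
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()
index = faiss.IndexFlatL2(1536)  # dimensionality of the OpenAI embeddings
vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})
baby_agi = BabyAGI.from_llm(
    llm=OpenAI(temperature=0),
    vectorstore=vectorstore,
    max_iterations=3,  # stop after three task loops
)
baby_agi({"objective": "Write a weather report for SF today"})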
class langchain.experimental.AutoGPT(ai_name: str, memory: langchain.vectorstores.base.VectorStoreRetriever, chain: langchain.chains.llm.LLMChain, output_parser: langchain.experimental.autonomous_agents.autogpt.output_parser.BaseAutoGPTOutputParser, tools: List[langchain.tools.base.BaseTool], feedback_tool: Optional[langchain.tools.human.tool.HumanInputRun] = None)[source]#
Agent class for interacting with Auto-GPT.
Generative Agents#
Here, we document the GenerativeAgent and GenerativeAgentMemory classes from the langchain.experimental module.
class langchain.experimental.GenerativeAgent(*, name: str, age: Optional[int] = None, traits: str = 'N/A', status: str, memory: langchain.experimental.generative_agents.memory.GenerativeAgentMemory, llm: langchain.base_language.BaseLanguageModel, verbose: bool = False, summary: str = '', summary_refresh_seconds: int = 3600, last_refreshed: datetime.datetime = None, daily_summaries: List[str] = None)[source]#
A character with memory and innate characteristics.
model Config[source]#
Configuration for this pydantic object.
arbitrary_types_allowed = True#
field age: Optional[int] = None#
The optional age of the character.
field daily_summaries: List[str] [Optional]#
Summary of the events in the plan that the agent took.
generate_dialogue_response(observation: str) → Tuple[bool, str][source]#
React to a given observation.
generate_reaction(observation: str) → Tuple[bool, str][source]#
React to a given observation.
get_full_header(force_refresh: bool = False) → str[source]#
Return a full header of the agent’s status, summary, and current time.
get_summary(force_refresh: bool = False) → str[source]#
Return a descriptive summary of the agent.
field last_refreshed: datetime.datetime [Optional]#
The last time the character’s summary was regenerated.
field llm: langchain.base_language.BaseLanguageModel [Required]#
The underlying language model.
field memory: langchain.experimental.generative_agents.memory.GenerativeAgentMemory [Required]#
The memory object that combines relevance, recency, and ‘importance’.
field name: str [Required]#
The character’s name.
field status: str [Required]#
The current status of the character.
summarize_related_memories(observation: str) → str[source]#
Summarize memories that are most relevant to an observation.
field summary: str = ''#
Stateful self-summary generated via reflection on the character’s memory.
field summary_refresh_seconds: int = 3600#
How frequently to re-generate the summary.
field traits: str = 'N/A'#
Permanent traits to ascribe to the character.
class langchain.experimental.GenerativeAgentMemory(*, llm: langchain.base_language.BaseLanguageModel, memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever, verbose: bool = False, reflection_threshold: Optional[float] = None, current_plan: List[str] = [], importance_weight: float = 0.15, aggregate_importance: float = 0.0, max_tokens_limit: int = 1200, queries_key: str = 'queries', most_recent_memories_token_key: str = 'recent_memories_token', add_memory_key: str = 'add_memory', relevant_memories_key: str = 'relevant_memories', relevant_memories_simple_key: str = 'relevant_memories_simple', most_recent_memories_key: str = 'most_recent_memories')[source]#
add_memory(memory_content: str) → List[str][source]#
Add an observation or memory to the agent’s memory.
field aggregate_importance: float = 0.0#
Track the sum of the ‘importance’ of recent memories.
Triggers reflection when it reaches reflection_threshold.
clear() → None[source]#
Clear memory contents.
field current_plan: List[str] = []#
The current plan of the agent.
fetch_memories(observation: str) → List[langchain.schema.Document][source]#
Fetch related memories.
field importance_weight: float = 0.15#
How much weight to assign the memory importance.
field llm: langchain.base_language.BaseLanguageModel [Required]#
The core language model.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]#
Return key-value pairs given the text input to the chain.
field memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever [Required]#
The retriever to fetch related memories.
property memory_variables: List[str]#
Input keys this memory class will load dynamically.
pause_to_reflect() → List[str][source]#
Reflect on recent observations and generate ‘insights’.
field reflection_threshold: Optional[float] = None#
When aggregate_importance exceeds reflection_threshold, stop to reflect.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Save the context of this model run to memory.
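Example
A minimal sketch of wiring the two classes together, assuming OPENAI_API_KEY is set and faiss-cpu is installed; the relevance function and character details are illustrative:
import math
import faiss
from langchain.chat_models import ChatOpenAI
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.experimental import GenerativeAgent, GenerativeAgentMemory
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()
index = faiss.IndexFlatL2(1536)
vectorstore = FAISS(
    embeddings.embed_query, index, InMemoryDocstore({}), {},
    relevance_score_fn=lambda score: 1.0 - score / math.sqrt(2),  # rescale distance to [0, 1]
)
retriever = TimeWeightedVectorStoreRetriever(
    vectorstore=vectorstore, other_score_keys=["importance"], k=15
)
llm = ChatOpenAI(model_name="gpt-3.5-turbo")
memory = GenerativeAgentMemory(llm=llm, memory_retriever=retriever, reflection_threshold=8)
tommie = GenerativeAgent(
    name="Tommie", age=25, traits="anxious, likes design",
    status="looking for a job", llm=llm, memory=memory,
)
memory.add_memory("Tommie remembers his dog, Bruno, from when he was a kid")
tommie.get_summary()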
Chat Models#
pydantic model langchain.chat_models.AzureChatOpenAI[source]#
Wrapper around Azure OpenAI Chat Completion API. To use this class you
must have a deployed model on Azure OpenAI. Use deployment_name in the
constructor to refer to the “Model deployment name” in the Azure portal.
In addition, you should have the openai python package installed, and the
following environment variables set or passed in constructor in lower case:
- OPENAI_API_TYPE (default: azure)
- OPENAI_API_KEY
- OPENAI_API_BASE
- OPENAI_API_VERSION
For example, if you have gpt-35-turbo deployed, with the deployment name
35-turbo-dev, the constructor should look like:
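AzureChatOpenAI(
    deployment_name="35-turbo-dev",
    openai_api_version="2023-03-15-preview",
)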
Be aware the API version may change.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
field deployment_name: str = ''#
field openai_api_base: str = ''#
field openai_api_key: str = ''#
field openai_api_type: str = 'azure'#
field openai_api_version: str = ''#
field openai_organization: str = ''#
pydantic model langchain.chat_models.ChatAnthropic[source]#
Wrapper around Anthropic’s large language model.
To use, you should have the anthropic python package installed, and the
environment variable ANTHROPIC_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
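A minimal sketch (the model name is a placeholder):
from langchain.chat_models import ChatAnthropic
chat = ChatAnthropic(model="<model_name>", anthropic_api_key="my-api-key")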
field callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None#
field callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None#
field verbose: bool [Optional]#
Whether to print out response text.
pydantic model langchain.chat_models.ChatGooglePalm[source]#
Wrapper around Google’s PaLM Chat API.
To use you must have the google.generativeai Python package installed and
either:
The GOOGLE_API_KEY environment variable set with your API key, or
Pass your API key using the google_api_key kwarg to the ChatGooglePalm
constructor.
Example
from langchain.chat_models import ChatGooglePalm
chat = ChatGooglePalm()
field google_api_key: Optional[str] = None#
field model_name: str = 'models/chat-bison-001'#
Model name to use.
field n: int = 1#
Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated.
field temperature: Optional[float] = None#
Run inference with this temperature. Must be in the closed
interval [0.0, 1.0].
field top_k: Optional[int] = None#
Decode using top-k sampling: consider the set of top_k most probable tokens.
Must be positive.
field top_p: Optional[float] = None#
Decode using nucleus sampling: consider the smallest set of tokens whose
probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].
pydantic model langchain.chat_models.ChatOpenAI[source]#
Wrapper around OpenAI Chat large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.chat_models import ChatOpenAI
openai = ChatOpenAI(model_name="gpt-3.5-turbo")
field max_retries: int = 6#
Maximum number of retries to make when generating.
field max_tokens: Optional[int] = None#
Maximum number of tokens to generate.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'gpt-3.5-turbo'#
Model name to use.
field n: int = 1#
Number of chat completions to generate for each prompt.
field openai_api_key: Optional[str] = None#
field openai_organization: Optional[str] = None#
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout for requests to OpenAI completion API. Default is 600 seconds.
field streaming: bool = False#
Whether to stream the results or not.
field temperature: float = 0.7#
What sampling temperature to use.
completion_with_retry(**kwargs: Any) → Any[source]#
Use tenacity to retry the completion call.
get_num_tokens(text: str) → int[source]#
Calculate num tokens with tiktoken package.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int[source]#
Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.
Official documentation: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
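A minimal sketch of counting tokens locally, assuming the tiktoken package is installed (no API call is made):
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage
chat = ChatOpenAI(model_name="gpt-3.5-turbo", openai_api_key="my-api-key")
chat.get_num_tokens_from_messages([HumanMessage(content="Hello, world!")])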
pydantic model langchain.chat_models.PromptLayerChatOpenAI[source]#
Wrapper around OpenAI Chat large language models and PromptLayer.
To use, you should have the openai and promptlayer python
package installed, and the environment variable OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your openAI API key and
promptlayer key respectively.
All parameters that can be passed to the OpenAI LLM can also
be passed here. The PromptLayerChatOpenAI adds two optional parameters:
pl_tags – List of strings to tag the request with.
return_pl_id – If True, the PromptLayer request ID will be
returned in the generation_info field of the
Generation object.
Example
from langchain.chat_models import PromptLayerChatOpenAI
openai = PromptLayerChatOpenAI(model_name="gpt-3.5-turbo")
field pl_tags: Optional[List[str]] = None#
field return_pl_id: Optional[bool] = False#
Embeddings#
Wrappers around embedding modules.
pydantic model langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding[source]#
Wrapper for Aleph Alpha’s Asymmetric Embeddings
AA provides you with an endpoint to embed a document and a query.
The models were optimized to make the embeddings of documents and
the query for a document as similar as possible.
To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/
Example
from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding
embeddings = AlephAlphaAsymmetricSemanticEmbedding()
document = "This is a content of the document"
query = "What is the content of the document?"
doc_result = embeddings.embed_documents([document])
query_result = embeddings.embed_query(query)
field compress_to_size: Optional[int] = 128#
Should the returned embeddings come back as an original 5120-dim vector,
or should it be compressed to 128-dim.
field contextual_control_threshold: Optional[int] = None#
Attention control parameters only apply to those tokens that have
explicitly been set in the request.
field control_log_additive: Optional[bool] = True#
Apply controls on prompt items by adding the log(control_factor)
to attention scores.
field hosting: Optional[str] = 'https://api.aleph-alpha.com'#
Optional parameter that specifies which datacenters may process the request.
field model: Optional[str] = 'luminous-base'#
Model name to use.
field normalize: Optional[bool] = True#
Should returned embeddings be normalized
embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to Aleph Alpha’s asymmetric Document endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to Aleph Alpha’s asymmetric, query embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding[source]#
The symmetric version of the Aleph Alpha’s semantic embeddings.
The main difference is that here, both the documents and
queries are embedded with a SemanticRepresentation.Symmetric
Example
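A minimal sketch, mirroring the asymmetric example above:
from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding
embeddings = AlephAlphaSymmetricSemanticEmbedding()
text = "This is a test text"
doc_result = embeddings.embed_documents([text])
query_result = embeddings.embed_query(text)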
embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to Aleph Alpha’s Document endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to Aleph Alpha’s symmetric, query embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.CohereEmbeddings[source]#
Wrapper around Cohere embedding models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import CohereEmbeddings
cohere = CohereEmbeddings(model="medium", cohere_api_key="my-api-key")
field model: str = 'large'#
Model name to use.
field truncate: Optional[str] = None#
Truncate embeddings that are too long from start or end (“NONE”|”START”|”END”)
embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to Cohere’s embedding endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to Cohere’s embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.FakeEmbeddings[source]#
embed_documents(texts: List[str]) → List[List[float]][source]#
Embed search docs.
embed_query(text: str) → List[float][source]#
Embed query text.
pydantic model langchain.embeddings.HuggingFaceEmbeddings[source]#
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers python package installed.
Example
from langchain.embeddings import HuggingFaceEmbeddings
model_name = "sentence-transformers/all-mpnet-base-v2"
model_kwargs = {'device': 'cpu'}
hf = HuggingFaceEmbeddings(model_name=model_name, model_kwargs=model_kwargs)
field cache_folder: Optional[str] = None#
Path to store models.
Can also be set by the SENTENCE_TRANSFORMERS_HOME environment variable.
field encode_kwargs: Dict[str, Any] [Optional]#
Key word arguments to pass when calling the encode method of the model.
field model_kwargs: Dict[str, Any] [Optional]#
Key word arguments to pass to the model.
field model_name: str = 'sentence-transformers/all-mpnet-base-v2'#
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace transformer model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.HuggingFaceHubEmbeddings[source]#
Wrapper around HuggingFaceHub embedding models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import HuggingFaceHubEmbeddings
repo_id = "sentence-transformers/all-mpnet-base-v2"
hf = HuggingFaceHubEmbeddings(
repo_id=repo_id,
task="feature-extraction",
huggingfacehub_api_token="my-api-key",
)
field model_kwargs: Optional[dict] = None#
Key word arguments to pass to the model.
field repo_id: str = 'sentence-transformers/all-mpnet-base-v2'#
Model name to use.
field task: Optional[str] = 'feature-extraction'#
Task to call the model with.
embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to HuggingFaceHub’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to HuggingFaceHub’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.HuggingFaceInstructEmbeddings[source]#
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers
and InstructorEmbedding python packages installed.
Example
from langchain.embeddings import HuggingFaceInstructEmbeddings
model_name = "hkunlp/instructor-large"
model_kwargs = {'device': 'cpu'}
hf = HuggingFaceInstructEmbeddings(
model_name=model_name, model_kwargs=model_kwargs
)
field cache_folder: Optional[str] = None#
Path to store models.
Can also be set by the SENTENCE_TRANSFORMERS_HOME environment variable.
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction to use for embedding documents.
field model_kwargs: Dict[str, Any] [Optional]#
Key word arguments to pass to the model.
field model_name: str = 'hkunlp/instructor-large'#
Model name to use.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction to use for embedding query.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace instruct model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.LlamaCppEmbeddings[source]#
Wrapper around llama.cpp embedding models.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: https://github.com/abetlen/llama-cpp-python
Example
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/path/to/model.bin")
field f16_kv: bool = False#
Use half-precision for key/value cache.
field logits_all: bool = False#
Return logits for all tokens, not just the last token.
field n_batch: Optional[int] = 8#
Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
field n_ctx: int = 512#
Token context window.
field n_parts: int = -1#
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
field n_threads: Optional[int] = None#
Number of threads to use. If None, the number
of threads is automatically determined.
field seed: int = -1#
Seed. If -1, a random seed is used.
field use_mlock: bool = False#
Force system to keep model in RAM.
field vocab_only: bool = False#
Only load the vocabulary, no weights.
embed_documents(texts: List[str]) → List[List[float]][source]#
Embed a list of documents using the Llama model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Embed a query using the Llama model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.OpenAIEmbeddings[source]#
Wrapper around OpenAI embedding models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import OpenAIEmbeddings
openai = OpenAIEmbeddings(openai_api_key="my-api-key")
In order to use the library with Microsoft Azure endpoints, you need to set
the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and optionally the
OPENAI_API_VERSION.
The OPENAI_API_TYPE must be set to ‘azure’ and the others correspond to
the properties of your endpoint.
In addition, the deployment name must be passed as the model parameter.
Example
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
deployment="your-embeddings-deployment-name",
model="your-embeddings-model-name",
api_base="https://your-endpoint.openai.azure.com/",
api_type="azure",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
field chunk_size: int = 1000#
Maximum number of texts to embed in each batch
field max_retries: int = 6#
Maximum number of retries to make when generating.
embed_documents(texts: List[str], chunk_size: Optional[int] = 0) → List[List[float]][source]#
Call out to OpenAI’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to OpenAI’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embedding for the text.
pydantic model langchain.embeddings.SagemakerEndpointEmbeddings[source]#
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
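Example
A sketch of a content handler and endpoint wiring; the endpoint name, region, and JSON keys are illustrative and must match your model’s request/response format:
import json
from typing import Dict, List
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler

class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompts: List[str], model_kwargs: Dict) -> bytes:
        # serialize the batch of texts into the payload the endpoint expects
        return json.dumps({"text_inputs": prompts, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> List[List[float]]:
        # pull the vectors back out of the endpoint's JSON response
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["embedding"]

se = SagemakerEndpointEmbeddings(
    endpoint_name="my-embeddings-endpoint",
    region_name="us-west-2",
    content_handler=ContentHandler(),
)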
field content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]#
The content handler class that provides an input and
output transform functions to handle formats between LLM
and the endpoint.
field credentials_profile_name: Optional[str] = None#
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
field endpoint_kwargs: Optional[Dict] = None#
Optional attributes passed to the invoke_endpoint
function. See the boto3 docs for more info:
https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
field endpoint_name: str = ''#
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
field model_kwargs: Optional[Dict] = None#
Key word arguments to pass to the model.
field region_name: str = ''#
The AWS region where the Sagemaker model is deployed, e.g. us-west-2.
embed_documents(texts: List[str], chunk_size: int = 64) → List[List[float]][source]#
Compute doc embeddings using a SageMaker Inference Endpoint.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size defines how many input texts will
be grouped together in one request. If None, will use the
chunk size specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a SageMaker inference endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.SelfHostedEmbeddings[source]#
Runs custom embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example using a model load function:
from langchain.embeddings import SelfHostedEmbeddings
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
def get_pipeline():
model_id = "facebook/bart-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
return pipeline("feature-extraction", model=model, tokenizer=tokenizer)
embeddings = SelfHostedEmbeddings(
model_load_fn=get_pipeline,
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Example passing in a pipeline path:
from langchain.embeddings import SelfHostedHFEmbeddings
import pickle
import runhouse as rh
from transformers import pipeline
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
pipeline = pipeline(model="bert-base-uncased", task="feature-extraction")
rh.blob(pickle.dumps(pipeline),
path="models/pipeline.pkl").save().to(gpu, path="models")
embeddings = SelfHostedHFEmbeddings.from_pipeline(
pipeline="models/pipeline.pkl",
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Validators
raise_deprecation » all fields
set_verbose » verbose
field inference_fn: Callable = <function _embed_documents>#
Inference function to extract the embeddings on the remote hardware.
field inference_kwargs: Any = None#
Any kwargs to pass to the model’s inference function.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace transformer model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.SelfHostedHuggingFaceEmbeddings[source]#
Runs sentence_transformers embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_name = "sentence-transformers/all-mpnet-base-v2"
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu)
Validators
raise_deprecation » all fields
set_verbose » verbose
field hardware: Any = None#
Remote hardware to send the inference function to.
field inference_fn: Callable = <function _embed_documents>#
Inference function to extract the embeddings.
field load_fn_kwargs: Optional[dict] = None#
Key word arguments to pass to the model load function.
field model_id: str = 'sentence-transformers/all-mpnet-base-v2'#
Model name to use.
field model_load_fn: Callable = <function load_embedding_model>#
Function to load the model remotely on the server.
field model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']#
Requirements to install on hardware to inference the model.
pydantic model langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings[source]#
Runs InstructorEmbedding embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings
import runhouse as rh
model_name = "hkunlp/instructor-large"
gpu = rh.cluster(name='rh-a10x', instance_type='A100:1')
hf = SelfHostedHuggingFaceInstructEmbeddings(
model_name=model_name, hardware=gpu)
Validators
raise_deprecation » all fields
set_verbose » verbose
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction to use for embedding documents.
field model_id: str = 'hkunlp/instructor-large'#
Model name to use.
field model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']#
Requirements to install on hardware to inference the model.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction to use for embedding query.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace instruct model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
langchain.embeddings.SentenceTransformerEmbeddings#
alias of langchain.embeddings.huggingface.HuggingFaceEmbeddings
pydantic model langchain.embeddings.TensorflowHubEmbeddings[source]#
Wrapper around tensorflow_hub embedding models.
To use, you should have the tensorflow_text python package installed.
Example
from langchain.embeddings import TensorflowHubEmbeddings
url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
tf = TensorflowHubEmbeddings(model_url=url)
field model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3'#
Model URL to use.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a TensorflowHub embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a TensorflowHub embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.