pydantic model langchain.memory.ConversationEntityMemory[source]#
Entity extractor & summarizer to memory.
field entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True)#
field entity_store: langchain.memory.entity.BaseEntityStore [Optional]#
field entity_summarization_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the "Entity" section based on the last line of your conversation with the human. If you are writing the summary for the first time, return a single sentence.\nThe update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.\n\nIf there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.\n\nFull conversation history (for context):\n{history}\n\nEntity to summarize:\n{entity}\n\nExisting summary of {entity}:\n{summary}\n\nLast line of conversation:\nHuman: {input}\nUpdated summary:', template_format='f-string', validate_template=True)#
field human_prefix: str = 'Human'#
field k: int = 3#
field llm: langchain.base_language.BaseLanguageModel [Required]#
clear() → None[source]#
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]#
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Save context from this conversation to buffer.
property buffer: List[langchain.schema.BaseMessage]#
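Example (a minimal sketch, not part of the upstream reference; assumes an OpenAI API key is configured, though any BaseLanguageModel works):
from langchain.llms import OpenAI
from langchain.memory import ConversationEntityMemory

memory = ConversationEntityMemory(llm=OpenAI(temperature=0))
# The LLM extracts entities ("Deven", "Sam") and keeps a summary per entity
# in entity_store.
memory.save_context(
    {"input": "Deven and Sam are working on a hackathon project."},
    {"output": "That sounds like fun! What are they building?"},
)
print(memory.load_memory_variables({"input": "Who is Deven?"}))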
pydantic model langchain.memory.ConversationKGMemory[source]#
Knowledge graph memory for storing conversation memory.
Integrates with external knowledge graph to store and retrieve
information about knowledge triples in the conversation.
field ai_prefix: str = 'AI'#
field entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True)#
field human_prefix: str = 'Human'#
field k: int = 2#
Number of previous utterances to include in the context.
field kg: langchain.graphs.networkx_graph.NetworkxEntityGraph [Optional]#
field knowledge_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\n\nEXAMPLE\nConversation history:\nPerson #1: Did you hear aliens landed in Area 51?\nAI: No, I didn't hear that. What do you know about Area 51?\nPerson #1: It's a secret military base in Nevada.\nAI: What do you know about Nevada?\nLast line of conversation:\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\n\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: Hello.\nAI: Hi! How are you?\nPerson #1: I'm good. How are you?\nAI: I'm good too.\nLast line of conversation:\nPerson #1: I'm going to the store.\n\nOutput: NONE\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: What do you know about Descartes?\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\nLast line of conversation:\nPerson #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:", template_format='f-string', validate_template=True)#
field llm: langchain.base_language.BaseLanguageModel [Required]#
field summary_message_cls: Type[langchain.schema.BaseMessage] = <class 'langchain.schema.SystemMessage'>#
The message class used when writing knowledge summaries into the context.
clear() → None[source]#
Clear memory contents.
get_current_entities(input_string: str) → List[str][source]#
get_knowledge_triplets(input_string: str) → List[langchain.graphs.networkx_graph.KnowledgeTriple][source]#
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]#
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Save context from this conversation to buffer.
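Example (a minimal sketch; assumes an OpenAI API key is configured):
from langchain.llms import OpenAI
from langchain.memory import ConversationKGMemory

memory = ConversationKGMemory(llm=OpenAI(temperature=0))
memory.save_context({"input": "Sam lives in Berlin."}, {"output": "Good to know."})
# Triples such as (Sam, lives in, Berlin) are written to the NetworkX graph.
print(memory.get_current_entities("What do you know about Sam?"))
print(memory.get_knowledge_triplets("Sam lives in Berlin."))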
pydantic model langchain.memory.ConversationStringBufferMemory[source]#
Buffer for storing conversation memory.
field ai_prefix: str = 'AI'#
Prefix to use for AI generated responses.
field buffer: str = ''#
field human_prefix: str = 'Human'#
field input_key: Optional[str] = None#
field output_key: Optional[str] = None#
clear() → None[source]#
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]#
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Save context from this conversation to buffer.
property memory_variables: List[str]#
Will always return list of memory variables.
pydantic model langchain.memory.ConversationSummaryBufferMemory[source]#
Buffer with summarizer for storing conversation memory.
field max_token_limit: int = 2000#
field memory_key: str = 'history'#
field moving_summary_buffer: str = ''#
clear() → None[source]#
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]#
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Save context from this conversation to buffer.
property buffer: List[langchain.schema.BaseMessage]#
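Example (a minimal sketch; the llm field, inherited from the summarizer mixin and not listed above, provides the summarization model):
from langchain.llms import OpenAI
from langchain.memory import ConversationSummaryBufferMemory

memory = ConversationSummaryBufferMemory(llm=OpenAI(), max_token_limit=40)
memory.save_context({"input": "Hi there!"}, {"output": "Hello! How can I help?"})
memory.save_context(
    {"input": "Tell me about LangChain memory."},
    {"output": "It keeps conversational state between chain calls."},
)
# Once the buffer exceeds max_token_limit, older turns are condensed into
# moving_summary_buffer and pruned from the raw message buffer.
print(memory.load_memory_variables({}))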
pydantic model langchain.memory.ConversationSummaryMemory[source]#
Conversation summarizer to memory.
field buffer: str = ''#
clear() → None[source]#
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]#
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Save context from this conversation to buffer.
pydantic model langchain.memory.ConversationTokenBufferMemory[source]#
Buffer for storing conversation memory.
field ai_prefix: str = 'AI'#
field human_prefix: str = 'Human'#
field llm: langchain.base_language.BaseLanguageModel [Required]#
field max_token_limit: int = 2000#
field memory_key: str = 'history'#
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]#
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Save context from this conversation to buffer, pruning the oldest messages if the buffer exceeds max_token_limit.
property buffer: List[langchain.schema.BaseMessage]#
String buffer of memory.
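Example (a minimal sketch; the llm is used only to count tokens here):
from langchain.llms import OpenAI
from langchain.memory import ConversationTokenBufferMemory

memory = ConversationTokenBufferMemory(llm=OpenAI(), max_token_limit=60)
memory.save_context({"input": "Hello"}, {"output": "Hi! How can I help?"})
# save_context drops the oldest messages once the buffer exceeds
# max_token_limit, as measured by the llm's tokenizer.
print(memory.load_memory_variables({}))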
class langchain.memory.CosmosDBChatMessageHistory(cosmos_endpoint: str, cosmos_database: str, cosmos_container: str, session_id: str, user_id: str, credential: Optional[Any] = None, connection_string: Optional[str] = None, ttl: Optional[int] = None)[source]#
Chat history backed by Azure CosmosDB.
add_ai_message(message: str) → None[source]#
Add an AI message to the memory.
add_user_message(message: str) → None[source]#
Add a user message to the memory.
clear() → None[source]#
Clear session memory from this memory and cosmos.
load_messages() → None[source]#
Retrieve the messages from CosmosDB.
messages: List[BaseMessage]#
prepare_cosmos() → None[source]#
Prepare the CosmosDB client.
Use this function or the context manager to make sure your database is ready.
upsert_messages(new_message: Optional[langchain.schema.BaseMessage] = None) → None[source]#
Update or insert (upsert) the CosmosDB item.
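Example (a sketch only; the endpoint, database, container, and connection string below are placeholders for your own Azure resources):
from langchain.memory import CosmosDBChatMessageHistory

history = CosmosDBChatMessageHistory(
    cosmos_endpoint="https://<account>.documents.azure.com:443/",
    cosmos_database="chat",
    cosmos_container="messages",
    session_id="session-1",
    user_id="user-1",
    connection_string="<connection-string>",  # placeholder
)
history.prepare_cosmos()  # ensure the client, database, and container are ready
history.add_user_message("hi!")
history.add_ai_message("hello, how can I help?")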
class langchain.memory.DynamoDBChatMessageHistory(table_name: str, session_id: str)[source]#
Chat message history that stores history in AWS DynamoDB.
This class expects that a DynamoDB table named table_name
with a partition key of SessionId already exists.
Parameters
table_name – name of the DynamoDB table
session_id – arbitrary key that is used to store the messages
of a single chat session.
add_ai_message(message: str) → None[source]#
Add an AI message to the store
add_user_message(message: str) → None[source]#
Add a user message to the store
append(message: langchain.schema.BaseMessage) → None[source]#
Append the message to the record in DynamoDB
clear() → None[source]#
Clear session memory from DynamoDB
property messages: List[langchain.schema.BaseMessage]#
Retrieve the messages from DynamoDB
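Example (a sketch; "SessionTable" is a placeholder name for an existing table whose partition key is SessionId):
from langchain.memory import DynamoDBChatMessageHistory

history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="session-1")
history.add_user_message("hi!")
history.add_ai_message("hello, how can I help?")
print(history.messages)  # reads the full message list back from DynamoDB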
class langchain.memory.InMemoryEntityStore[source]#
Basic in-memory entity store.
clear() → None[source]#
Delete all entities from store.
delete(key: str) → None[source]#
Delete entity value from store.
exists(key: str) → bool[source]#
Check if entity exists in store.
get(key: str, default: Optional[str] = None) → Optional[str][source]#
Get entity value from store.
set(key: str, value: Optional[str]) → None[source]#
Set entity value in store.
store: Dict[str, Optional[str]] = {}#
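Example (a minimal sketch of the store interface):
from langchain.memory import InMemoryEntityStore

store = InMemoryEntityStore()
store.set("Deven", "Deven is working on a hackathon project.")
assert store.exists("Deven")
print(store.get("Deven", default="No summary yet."))
store.delete("Deven")  # a subsequent get() falls back to the default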
class langchain.memory.PostgresChatMessageHistory(session_id: str, connection_string: str = 'postgresql://postgres:mypassword@localhost/chat_history', table_name: str = 'message_store')[source]#
add_ai_message(message: str) → None[source]#
Add an AI message to the store
add_user_message(message: str) → None[source]#
Add a user message to the store
append(message: langchain.schema.BaseMessage) → None[source]#
Append the message to the record in PostgreSQL
clear() → None[source]#
Clear session memory from PostgreSQL
property messages: List[langchain.schema.BaseMessage]#
Retrieve the messages from PostgreSQL
pydantic model langchain.memory.ReadOnlySharedMemory[source]#
A memory wrapper that is read-only and cannot be changed.
field memory: langchain.schema.BaseMemory [Required]#
clear() → None[source]#
Nothing to clear, got a memory like a vault.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]#
Load memory variables from memory.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Nothing should be saved or changed
property memory_variables: List[str]#
Return memory variables.
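Example (a minimal sketch; ConversationBufferMemory stands in for any BaseMemory you want to share without granting write access):
from langchain.memory import ConversationBufferMemory, ReadOnlySharedMemory

shared = ConversationBufferMemory(memory_key="chat_history")
readonly = ReadOnlySharedMemory(memory=shared)
readonly.save_context({"input": "hi"}, {"output": "hello"})  # no-op by design
print(readonly.load_memory_variables({}))  # reads through to the wrapped memory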
class langchain.memory.RedisChatMessageHistory(session_id: str, url: str = 'redis://localhost:6379/0', key_prefix: str = 'message_store:', ttl: Optional[int] = None)[source]#
add_ai_message(message: str) → None[source]#
Add an AI message to the store
add_user_message(message: str) → None[source]#
Add a user message to the store
append(message: langchain.schema.BaseMessage) → None[source]#
Append the message to the record in Redis
clear() → None[source]#
Clear session memory from Redis
property key: str#
Construct the record key to use
property messages: List[langchain.schema.BaseMessage]#
Retrieve the messages from Redis
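Example (a minimal sketch; assumes a Redis server at the default local URL):
from langchain.memory import RedisChatMessageHistory

history = RedisChatMessageHistory(session_id="session-1", ttl=600)
history.add_user_message("hi!")
history.add_ai_message("hello, how can I help?")
print(history.messages)  # messages expire after ttl seconds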
class langchain.memory.RedisEntityStore(session_id: str = 'default', url: str = 'redis://localhost:6379/0', key_prefix: str = 'memory_store', ttl: Optional[int] = 86400, recall_ttl: Optional[int] = 259200, *args: Any, **kwargs: Any)[source]#
Redis-backed Entity store. Entities get a TTL of 1 day by default, and
that TTL is extended by 3 days every time the entity is read back.
clear() → None[source]#
Delete all entities from store.
delete(key: str) → None[source]#
Delete entity value from store.
exists(key: str) → bool[source]#
Check if entity exists in store.
property full_key_prefix: str#
get(key: str, default: Optional[str] = None) → Optional[str][source]#
Get entity value from store.
key_prefix: str = 'memory_store'#
recall_ttl: Optional[int] = 259200#
redis_client: Any#
session_id: str = 'default'#
set(key: str, value: Optional[str]) → None[source]#
Set entity value in store.
ttl: Optional[int] = 86400#
pydantic model langchain.memory.SimpleMemory[source]#
Simple memory for storing context or other bits of information that shouldn’t
ever change between prompts.
field memories: Dict[str, Any] = {}#
clear() → None[source]#
Nothing to clear, got a memory like a vault.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]#
Return key-value pairs given the text input to the chain.
If None, return all memories
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Nothing should be saved or changed, my memory is set in stone.
property memory_variables: List[str]#
Input keys this memory class will load dynamically.
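Example (a minimal sketch):
from langchain.memory import SimpleMemory

memory = SimpleMemory(memories={"project": "LangChain", "deadline": "Friday"})
# The same key-value pairs are returned for every prompt; save_context is a no-op.
print(memory.load_memory_variables({}))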
pydantic model langchain.memory.VectorStoreRetrieverMemory[source]#
Class for a VectorStore-backed memory object.
field input_key: Optional[str] = None#
Key name to index the inputs to load_memory_variables.
field memory_key: str = 'history'#
Key name to locate the memories in the result of load_memory_variables.
field retriever: langchain.vectorstores.base.VectorStoreRetriever [Required]#
VectorStoreRetriever object to connect to.
field return_docs: bool = False#
Whether or not to return the result of querying the database directly.
clear() → None[source]#
Nothing to clear.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Union[List[langchain.schema.Document], str]][source]#
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Save context from this conversation to buffer.
property memory_variables: List[str]#
The list of keys emitted from the load_memory_variables method.
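Example (a minimal sketch; assumes the faiss package and an OpenAI API key, but any VectorStoreRetriever works):
from langchain.embeddings import OpenAIEmbeddings
from langchain.memory import VectorStoreRetrieverMemory
from langchain.vectorstores import FAISS

vectorstore = FAISS.from_texts(["My favorite color is blue."], OpenAIEmbeddings())
memory = VectorStoreRetrieverMemory(
    retriever=vectorstore.as_retriever(search_kwargs={"k": 1})
)
memory.save_context({"input": "I love hiking"}, {"output": "Noted!"})
# load_memory_variables embeds the query and returns the most relevant snippets.
print(memory.load_memory_variables({"input": "What is my favorite color?"}))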
Tools#
Core toolkit implementations.
pydantic model langchain.tools.AIPluginTool[source]#
field api_spec: str [Required]#
field args_schema: Type[AIPluginToolSchema] = <class 'langchain.tools.plugin.AIPluginToolSchema'>#
Pydantic model class to validate and parse the tool’s input arguments.
field plugin: AIPlugin [Required]#
classmethod from_plugin_url(url: str) → langchain.tools.plugin.AIPluginTool[source]#
pydantic model langchain.tools.APIOperation[source]#
A model for a single API operation.
field base_url: str [Required]#
The base URL of the operation.
field description: Optional[str] = None#
The description of the operation.
field method: langchain.tools.openapi.utils.openapi_utils.HTTPVerb [Required]#
The HTTP method of the operation.
field operation_id: str [Required]#
The unique identifier of the operation.
field path: str [Required]#
The path of the operation.
field properties: Sequence[langchain.tools.openapi.utils.api_models.APIProperty] [Required]#
field request_body: Optional[langchain.tools.openapi.utils.api_models.APIRequestBody] = None#
The request body of the operation.
classmethod from_openapi_spec(spec: langchain.tools.openapi.utils.openapi_utils.OpenAPISpec, path: str, method: str) → langchain.tools.openapi.utils.api_models.APIOperation[source]#
Create an APIOperation from an OpenAPI spec.
classmethod from_openapi_url(spec_url: str, path: str, method: str) → langchain.tools.openapi.utils.api_models.APIOperation[source]#
Create an APIOperation from an OpenAPI URL.
to_typescript() → str[source]#
Get a TypeScript string representation of the operation.
static ts_type_from_python(type_: Union[str, Type, tuple, None, enum.Enum]) → str[source]#
property body_params: List[str]#
property path_params: List[str]#
property query_params: List[str]#
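Example (a sketch; the spec URL and path below are hypothetical):
from langchain.tools import APIOperation

op = APIOperation.from_openapi_url("https://example.com/openapi.json", "/pets", "get")
print(op.operation_id, op.base_url)
print(op.to_typescript())  # TypeScript-style signature, useful inside prompts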
pydantic model langchain.tools.BaseTool[source]#
Interface LangChain tools must implement.
field args_schema: Optional[Type[pydantic.main.BaseModel]] = None#
Pydantic model class to validate and parse the tool’s input arguments.
field callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None#
Deprecated. Please use callbacks instead.
field callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None#
Callbacks to be called during tool execution.
field description: str [Required]#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str [Required]#
The unique name of the tool that clearly communicates its purpose.
field return_direct: bool = False#
Whether to return the tool’s output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
field verbose: bool = False#
Whether to log the tool’s progress.
async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Any[source]#
Run the tool asynchronously.
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Any[source]#
Run the tool.
property args: dict#
property is_single_input: bool#
Whether the tool only accepts a single input.
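Example (a sketch of a custom tool; WordCountTool is hypothetical, and the private _run/_arun hooks implemented here are the standard extension points even though only the public run/arun wrappers are listed above):
from langchain.tools import BaseTool

class WordCountTool(BaseTool):
    name = "word_count"
    description = "Counts the words in the input text."

    def _run(self, tool_input: str) -> str:
        # Synchronous implementation invoked by run().
        return str(len(tool_input.split()))

    async def _arun(self, tool_input: str) -> str:
        # Asynchronous implementation invoked by arun().
        return self._run(tool_input)

print(WordCountTool().run("one two three"))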
pydantic model langchain.tools.BingSearchResults[source]#
Tool that queries the Bing Search API and returns the results as JSON.
field api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]#
field num_results: int = 4#
pydantic model langchain.tools.BingSearchRun[source]#
Tool that adds the capability to query the Bing search API.
field api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]#
pydantic model langchain.tools.ClickTool[source]#
field args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.click.ClickToolInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Click on an element with the given CSS selector'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'click_element'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.CopyFileTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.copy.FileCopyInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Create a copy of a file in a specified location'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'copy_file'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.CurrentWebPageTool[source]#
field args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Returns the URL of the current page'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'current_webpage'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.DeleteFileTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.delete.FileDeleteInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Delete a file'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'file_delete'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.DuckDuckGoSearchResults[source]#
Tool that queries the DuckDuckGo Search API and returns the results as JSON.
field api_wrapper: langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper [Optional]#
field num_results: int = 4#
pydantic model langchain.tools.DuckDuckGoSearchRun[source]#
Tool that adds the capability to query the DuckDuckGo search API.
field api_wrapper: langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper [Optional]#
pydantic model langchain.tools.ExtractHyperlinksTool[source]#
Extract all hyperlinks on the page.
field args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Extract all hyperlinks on the current webpage'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'extract_hyperlinks'#
The unique name of the tool that clearly communicates its purpose.
static scrape_page(page: Any, html_content: str, absolute_urls: bool) → str[source]#
pydantic model langchain.tools.ExtractTextTool[source]#
field args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Extract all the text on the current webpage'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'extract_text'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.FileSearchTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.file_search.FileSearchInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Recursively search for files in a subdirectory that match the regex pattern'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'file_search'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.GetElementsTool[source]#
field args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.get_elements.GetElementsToolInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Retrieve elements in the current web page matching the given CSS selector'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'get_elements'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.GooglePlacesTool[source]#
Tool that adds the capability to query the Google places API.
field api_wrapper: langchain.utilities.google_places_api.GooglePlacesAPIWrapper [Optional]#
pydantic model langchain.tools.GoogleSearchResults[source]#
Tool that queries the Google Search API and returns the results as JSON.
field api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required]#
field num_results: int = 4#
pydantic model langchain.tools.GoogleSearchRun[source]#
Tool that adds the capability to query the Google search API.
field api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required]#
pydantic model langchain.tools.HumanInputRun[source]#
Tool that adds the capability to ask user for input.
field input_func: Callable [Optional]#
field prompt_func: Callable[[str], None] [Optional]#
pydantic model langchain.tools.IFTTTWebhook[source]#
IFTTT Webhook.
Parameters
name – name of the tool
description – description of the tool
url – url to hit with the json event.
field url: str [Required]#
pydantic model langchain.tools.ListDirectoryTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.list_dir.DirectoryListingInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'List files and directories in a specified folder'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'list_directory'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.MoveFileTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.move.FileMoveInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Move or rename a file from one location to another'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'move_file'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.NavigateBackTool[source]#
Navigate back to the previous page in the browser history.
field args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Navigate back to the previous page in the browser history'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'previous_webpage'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.NavigateTool[source]#
field args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.navigate.NavigateToolInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Navigate a browser to the specified URL'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'navigate_browser'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.OpenAPISpec[source]#
OpenAPI Model that removes misformatted parts of the spec.
field components: Optional[openapi_schema_pydantic.v3.v3_1_0.components.Components] = None#
An element to hold various schemas for the document.
field externalDocs: Optional[openapi_schema_pydantic.v3.v3_1_0.external_documentation.ExternalDocumentation] = None#
Additional external documentation.
field info: openapi_schema_pydantic.v3.v3_1_0.info.Info [Required]#
REQUIRED. Provides metadata about the API. The metadata MAY be used by tooling as required.
field jsonSchemaDialect: Optional[str] = None#
The default value for the $schema keyword within [Schema Objects](#schemaObject)
contained within this OAS document. This MUST be in the form of a URI.
field openapi: str = '3.1.0'#
REQUIRED. This string MUST be the [version number](#versions)
of the OpenAPI Specification that the OpenAPI document uses.
The openapi field SHOULD be used by tooling to interpret the OpenAPI document.
This is not related to the API [info.version](#infoVersion) string.
field paths: Optional[Dict[str, openapi_schema_pydantic.v3.v3_1_0.path_item.PathItem]] = None#
The available paths and operations for the API.
field security: Optional[List[Dict[str, List[str]]]] = None#
A declaration of which security mechanisms can be used across the API.
The list of values includes alternative security requirement objects that can be used.
Only one of the security requirement objects need to be satisfied to authorize a request.
Individual operations can override this definition.
To make security optional, an empty security requirement ({}) can be included in the array.
field servers: List[openapi_schema_pydantic.v3.v3_1_0.server.Server] = [Server(url='/', description=None, variables=None)]#
An array of Server Objects, which provide connectivity information to a target server.
If the servers property is not provided, or is an empty array,
the default value would be a [Server Object](#serverObject) with a [url](#serverUrl) value of /.
field tags: Optional[List[openapi_schema_pydantic.v3.v3_1_0.tag.Tag]] = None#
A list of tags used by the document with additional metadata.
The order of the tags can be used to reflect on their order by the parsing tools.
Not all tags that are used by the [Operation Object](#operationObject) must be declared.
The tags that are not declared MAY be organized randomly or based on the tools’ logic.
Each tag name in the list MUST be unique.
field webhooks: Optional[Dict[str, Union[openapi_schema_pydantic.v3.v3_1_0.path_item.PathItem, openapi_schema_pydantic.v3.v3_1_0.reference.Reference]]] = None#
The incoming webhooks that MAY be received as part of this API and that the API consumer MAY choose to implement.
Closely related to the callbacks feature, this section describes requests initiated other than by an API call,
for example by an out of band registration.
The key name is a unique string to refer to each webhook,
while the (optionally referenced) Path Item Object describes a request
that may be initiated by the API provider and the expected responses.
An [example](../examples/v3.1/webhook-example.yaml) is available.
classmethod from_file(path: Union[str, pathlib.Path]) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
Get an OpenAPI spec from a file path.
classmethod from_spec_dict(spec_dict: dict) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
Get an OpenAPI spec from a dict.
classmethod from_text(text: str) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
Get an OpenAPI spec from a text.
classmethod from_url(url: str) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
Get an OpenAPI spec from a URL.
static get_cleaned_operation_id(operation: openapi_schema_pydantic.v3.v3_1_0.operation.Operation, path: str, method: str) → str[source]#
Get a cleaned operation id from an operation id.
get_methods_for_path(path: str) → List[str][source]#
Return a list of valid methods for the specified path.
get_operation(path: str, method: str) → openapi_schema_pydantic.v3.v3_1_0.operation.Operation[source]#
Get the operation object for a given path and HTTP method.
get_parameters_for_operation(operation: openapi_schema_pydantic.v3.v3_1_0.operation.Operation) → List[openapi_schema_pydantic.v3.v3_1_0.parameter.Parameter][source]#
Get the components for a given operation.
get_referenced_schema(ref: openapi_schema_pydantic.v3.v3_1_0.reference.Reference) → openapi_schema_pydantic.v3.v3_1_0.schema.Schema[source]#
Get a schema (or nested reference) or err.
get_request_body_for_operation(operation: openapi_schema_pydantic.v3.v3_1_0.operation.Operation) → Optional[openapi_schema_pydantic.v3.v3_1_0.request_body.RequestBody][source]#
Get the request body for a given operation.
classmethod parse_obj(obj: dict) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
property base_url: str#
Get the base url.
pydantic model langchain.tools.ReadFileTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.read.ReadFileInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Read file from disk'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'read_file'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.SceneXplainTool[source]#
Tool that adds the capability to explain images.
field api_wrapper: langchain.utilities.scenexplain.SceneXplainAPIWrapper [Optional]#
pydantic model langchain.tools.ShellTool[source]#
Tool to run shell commands.
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.shell.tool.ShellInput'>#
Schema for input arguments.
field description: str = 'Run shell commands on this Linux machine.'#
Description of tool.
field name: str = 'terminal'#
Name of tool.
field process: langchain.utilities.bash.BashProcess [Optional]#
Bash process to run commands.
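Example (a sketch; the ShellInput schema accepts a list of commands that are run in sequence):
from langchain.tools import ShellTool

shell_tool = ShellTool()
print(shell_tool.run({"commands": ["echo 'Hello World!'", "pwd"]}))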
pydantic model langchain.tools.StructuredTool[source]#
Tool that can operate on any number of inputs.
field args_schema: Type[pydantic.main.BaseModel] [Required]#
The input arguments’ schema (the tool schema).
field coroutine: Optional[Callable[[...], Awaitable[Any]]] = None#
The asynchronous version of the function.
field description: str = ''#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field func: Callable[[...], Any] [Required]#
The function to run when the tool is called.
classmethod from_function(func: Callable, name: Optional[str] = None, description: Optional[str] = None, return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, infer_schema: bool = True, **kwargs: Any) → langchain.tools.base.StructuredTool[source]#
property args: dict#
The tool’s input arguments.
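Example (a sketch; multiply is a hypothetical function, and with infer_schema=True the args schema and description are derived from its signature and docstring):
from langchain.tools import StructuredTool

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

calculator = StructuredTool.from_function(multiply)
print(calculator.run({"a": 6, "b": 7}))  # structured tools take a dict of arguments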
pydantic model langchain.tools.Tool[source]#
Tool that takes in a function or coroutine directly.
Validators
raise_deprecation » all fields
validate_func_not_partial » func
field args_schema: Optional[Type[pydantic.main.BaseModel]] = None#
Pydantic model class to validate and parse the tool’s input arguments.
field callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None#
Deprecated. Please use callbacks instead.
field callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None#
Callbacks to be called during tool execution.
field coroutine: Optional[Callable[[...], Awaitable[str]]] = None#
The asynchronous version of the function.
field description: str = ''#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field func: Callable[[...], str] [Required]#
The function to run when the tool is called.
field name: str [Required]#
The unique name of the tool that clearly communicates its purpose.
field return_direct: bool = False#
Whether to return the tool’s output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
field verbose: bool = False#
Whether to log the tool’s progress.
classmethod from_function(func: Callable, name: str, description: str, return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, **kwargs: Any) → langchain.tools.base.Tool[source]#
Initialize tool from a function.
property args: dict#
The tool’s input arguments.
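Example (a sketch; search_api is a hypothetical single-input function):
from langchain.tools import Tool

def search_api(query: str) -> str:
    return f"Results for {query}"

search_tool = Tool.from_function(
    func=search_api,
    name="search",
    description="Searches a hypothetical API for the query.",
)
print(search_tool.run("langchain"))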
pydantic model langchain.tools.VectorStoreQATool[source]#
Tool for the VectorDBQA chain. To be initialized with name and chain.
field llm: langchain.base_language.BaseLanguageModel [Optional]#
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
static get_description(name: str, description: str) → str[source]#
pydantic model langchain.tools.VectorStoreQAWithSourcesTool[source]#
Tool for the VectorDBQAWithSources chain.
field llm: langchain.base_language.BaseLanguageModel [Optional]#
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
static get_description(name: str, description: str) → str[source]#
pydantic model langchain.tools.WikipediaQueryRun[source]#
Tool that adds the capability to search using the Wikipedia API.
field api_wrapper: langchain.utilities.wikipedia.WikipediaAPIWrapper [Required]#
pydantic model langchain.tools.WolframAlphaQueryRun[source]#
Tool that adds the capability to query using the Wolfram Alpha SDK.
field api_wrapper: langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper [Required]#
pydantic model langchain.tools.WriteFileTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.write.WriteFileInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Write file to disk'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'write_file'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.ZapierNLAListActions[source]#
Returns a list of all exposed (enabled) actions associated with the current user (associated with the set api_key). Change your exposed
actions here: https://nla.zapier.com/demo/start/
The returned list can be empty if no actions are exposed. Otherwise it will contain
a list of action objects:
[{"id": str,
"description": str,
"params": Dict[str, str]
}]
params will always contain an instructions key, the only required
param. All others are optional and, if provided, will override any AI guesses
(see "understanding the AI guessing flow" here:
https://nla.zapier.com/api/v1/docs)
field api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]#
pydantic model langchain.tools.ZapierNLARunAction[source]#
Executes an action that is identified by action_id, which must be exposed (enabled) by the current user (associated with the set api_key). Change
your exposed actions here: https://nla.zapier.com/demo/start/
The return JSON is guaranteed to be less than ~500 words (350
tokens), making it safe to inject into the prompt of another LLM
call.
Parameters
action_id – a specific action ID (from list actions) of the action to execute
(the set api_key must be associated with the action owner)
instructions – a natural language instruction string for using the action
(e.g. "get the latest email from Mike Knoop" for the "Gmail: find email" action)
params – a dict, optional. Any params provided will override AI guesses
from instructions (see “understanding the AI guessing flow” here:
https://nla.zapier.com/api/v1/docs)
field action_id: str [Required]#
field api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]#
field params: Optional[dict] = None#
field params_schema: Dict[str, str] [Optional]#
field zapier_description: str [Required]#
langchain.tools.tool(*args: Union[str, Callable], return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, infer_schema: bool = True) → Callable[source]#
Make tools out of functions; can be used with or without arguments.
Parameters
*args – The arguments to the tool.
return_direct – Whether to return directly from the tool rather
than continuing the agent loop.
args_schema – optional argument schema for user to specify
infer_schema – Whether to infer the schema of the arguments from
the function’s signature. This also makes the resultant tool
accept a dictionary input to its run() function.
Requires:
Function must be of type (str) -> str
Function must have a docstring
Examples
@tool
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return

@tool("search", return_direct=True)
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return
LLMs#
Wrappers around large language model APIs.
pydantic model langchain.llms.AI21[source]#
Wrapper around AI21 large language models.
To use, you should have the environment variable AI21_API_KEY
set with your API key.
Example
from langchain.llms import AI21
ai21 = AI21(model="j2-jumbo-instruct")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field base_url: Optional[str] = None#
Base url to use, if None decides based on model name.
field countPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens according to count.
field frequencyPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens according to frequency.
field logitBias: Optional[Dict[str, float]] = None#
Adjust the probability of specific tokens being generated.
field maxTokens: int = 256#
The maximum number of tokens to generate in the completion.
field minTokens: int = 0#
The minimum number of tokens to generate in the completion.
field model: str = 'j2-jumbo-instruct'#
Model name to use.
field numResults: int = 1#
How many completions to generate for each prompt.
field presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens.
field temperature: float = 0.7#
What sampling temperature to use.
field topP: float = 1.0#
Total probability mass of tokens to consider at each step.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
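Example (a sketch of batched generation; assumes AI21_API_KEY is set):
from langchain.llms import AI21

ai21 = AI21(maxTokens=32)
result = ai21.generate(["Tell me a joke.", "Tell me a fact."])
print(len(result.generations))  # one list of generations per input prompt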
pydantic model langchain.llms.AlephAlpha[source]#
Wrapper around Aleph Alpha large language models.
To use, you should have the aleph_alpha_client python package installed, and the
environment variable ALEPH_ALPHA_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Parameters are explained more in depth here:
Aleph-Alpha/aleph-alpha-client
Example
from langchain.llms import AlephAlpha
aleph_alpha = AlephAlpha(aleph_alpha_api_key="my-api-key")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field aleph_alpha_api_key: Optional[str] = None#
API key for Aleph Alpha API.
field best_of: Optional[int] = None#
Generates best_of completions and returns the one with the
highest log probability per token.
field completion_bias_exclusion_first_token_only: bool = False#
Only consider the first token for the completion_bias_exclusion.
field contextual_control_threshold: Optional[float] = None#
If set to None, attention control parameters only apply to those tokens that have
explicitly been set in the request.
If set to a non-None value, control parameters are also applied to similar tokens.
field control_log_additive: Optional[bool] = True#
True: apply control by adding the log(control_factor) to attention scores.
False: (attention_scores - attention_scores.min(-1)) * control_factor
field echo: bool = False#
Echo the prompt in the completion.
field frequency_penalty: float = 0.0#
Penalizes repeated tokens according to frequency.
field log_probs: Optional[int] = None#
Number of top log probabilities to be returned for each generated token.
field logit_bias: Optional[Dict[int, float]] = None#
The logit bias allows you to influence the likelihood of generating specific tokens.
field maximum_tokens: int = 64#
The maximum number of tokens to be generated.
field minimum_tokens: Optional[int] = 0#
Generate at least this number of tokens.
field model: Optional[str] = 'luminous-base'#
Model name to use.
field n: int = 1#
How many completions to generate for each prompt.
field penalty_bias: Optional[str] = None#
Penalty bias for the completion.
field penalty_exceptions: Optional[List[str]] = None#
List of strings that may be generated without penalty,
regardless of other penalty settings
field penalty_exceptions_include_stop_sequences: Optional[bool] = None#
Whether stop_sequences should be included in penalty_exceptions.
field presence_penalty: float = 0.0#
Penalizes repeated tokens.
field raw_completion: bool = False#
Force the raw completion of the model to be returned.
field repetition_penalties_include_completion: bool = True#
Flag deciding whether presence penalty or frequency penalty
are updated from the completion.
field repetition_penalties_include_prompt: Optional[bool] = False#
Flag deciding whether presence penalty or frequency penalty are
updated from the prompt.
field stop_sequences: Optional[List[str]] = None#
Stop sequences to use.
field temperature: float = 0.0#
A non-negative float that tunes the degree of randomness in generation.
field tokens: Optional[bool] = False#
Whether to return the tokens of the completion.
field top_k: int = 0#
Number of most likely tokens to consider at each step.
field top_p: float = 0.0#
Total probability mass of tokens to consider at each step.
field use_multiplicative_presence_penalty: Optional[bool] = False#
Flag deciding whether presence penalty is applied
multiplicatively (True) or additively (False).
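As noted under logit_bias above, a short construction sketch follows; the token ids and bias values here are illustrative placeholders, not recommendations:
from langchain.llms import AlephAlpha
# keys are token ids, values are biases applied to those tokens (placeholder values)
aleph_alpha = AlephAlpha(
    maximum_tokens=64,
    logit_bias={42: -10.0, 1337: 5.0},
)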
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
    llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Anthropic[source]#
Wrapper around Anthropic’s large language models.
To use, you should have the anthropic python package installed, and the
environment variable ANTHROPIC_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
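The example block is missing here in the original reference; a minimal usage sketch, where the API key is a placeholder (it may also be read from the ANTHROPIC_API_KEY environment variable):
from langchain.llms import Anthropic
# "my-api-key" is a placeholder
model = Anthropic(model="claude-v1", anthropic_api_key="my-api-key")
response = model("\n\nHuman: Tell me a joke.\n\nAssistant:")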
Validators
raise_deprecation » all fields
raise_warning » all fields
set_verbose » verbose
validate_environment » all fields
field default_request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout for requests to Anthropic Completion API. Default is 600 seconds.
field max_tokens_to_sample: int = 256#
Denotes the number of tokens to predict per generation.
field model: str = 'claude-v1'#
Model name to use.
field streaming: bool = False#
Whether to stream the results.
field temperature: Optional[float] = None#
A non-negative float that tunes the degree of randomness in generation.
field top_k: Optional[int] = None#
Number of most likely tokens to consider at each step.
field top_p: Optional[float] = None#
Total probability mass of tokens to consider at each step.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
    llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) → Generator[source]#
Call Anthropic completion_stream and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt – The prompt to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from Anthropic.
Example
prompt = "Write a poem about a stream."
prompt = f"\n\nHuman: {prompt}\n\nAssistant:"
generator = anthropic.stream(prompt)
for token in generator:
    yield token
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.AzureOpenAI[source]#
Wrapper around Azure-specific OpenAI large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import AzureOpenAI
openai = AzureOpenAI(model_name="text-davinci-003")
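Azure serves models through named deployments, so a fuller construction typically sets deployment_name as well; a sketch with placeholder names:
from langchain.llms import AzureOpenAI
# "my-deployment" is a placeholder for your Azure deployment name
openai = AzureOpenAI(deployment_name="my-deployment", model_name="text-davinci-003")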
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#
Set of special tokens that are allowed.
field batch_size: int = 20#
Batch size to use when passing multiple documents to generate.
field best_of: int = 1#
Generates best_of completions server-side and returns the “best”.
field deployment_name: str = ''#
Deployment name to use.
field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#
Set of special tokens that are not allowed.
field frequency_penalty: float = 0#
Penalizes repeated tokens according to frequency.
field logit_bias: Optional[Dict[str, float]] [Optional]#
Adjust the probability of specific tokens being generated.
field max_retries: int = 6#
Maximum number of retries to make when generating.
field max_tokens: int = 256#
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'text-davinci-003'#
Model name to use.
field n: int = 1#
How many completions to generate for each prompt.
field presence_penalty: float = 0#
Penalizes repeated tokens.
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout for requests to OpenAI completion API. Default is 600 seconds.
field streaming: bool = False#
Whether to stream the results or not.
field temperature: float = 0.7#
What sampling temperature to use.
field top_p: float = 1#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → langchain.schema.LLMResult#
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Calculate num tokens with tiktoken package.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]#
Get the sub prompts for llm call.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
max_tokens_for_prompt(prompt: str) → int#
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
modelname_to_contextsize(modelname: str) → int#
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]#
Prepare the params for streaming.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
    llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) → Generator#
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt – The prompts to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
    yield token
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Banana[source]#
Wrapper around Banana large language models.
To use, you should have the banana-dev python package installed,
and the environment variable BANANA_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
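The example block is missing here in the original reference; a minimal sketch, assuming BANANA_API_KEY is set in the environment and the model key below stands in for a real Banana model endpoint key:
from langchain.llms import Banana
# model_key is a placeholder for your Banana model endpoint key
banana = Banana(model_key="my-model-key")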
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field model_key: str = ''#
Model endpoint to use.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
    llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.CerebriumAI[source]#
Wrapper around CerebriumAI large language models.
To use, you should have the cerebrium python package installed, and the
environment variable CEREBRIUMAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
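The example block is missing here in the original reference; a minimal sketch, assuming CEREBRIUMAI_API_KEY is set in the environment and the endpoint URL below is a placeholder for a deployed model:
from langchain.llms import CerebriumAI
# endpoint_url is a placeholder for your deployed model endpoint
cerebrium = CerebriumAI(endpoint_url="https://run.cerebrium.ai/my-model/predict")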
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field endpoint_url: str = ''#
Model endpoint to use.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
    llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Cohere[source]#
Wrapper around Cohere large language models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
from langchain.llms import Cohere
cohere = Cohere(model="gptd-instruct-tft", cohere_api_key="my-api-key")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field frequency_penalty: float = 0.0#
Penalizes repeated tokens according to frequency. Between 0 and 1.
field k: int = 0#
Number of most likely tokens to consider at each step.
field max_tokens: int = 256#
Denotes the number of tokens to predict per generation.
field model: Optional[str] = None#
Model name to use.
field p: int = 1#
Total probability mass of tokens to consider at each step.
field presence_penalty: float = 0.0#
Penalizes repeated tokens. Between 0 and 1.
field temperature: float = 0.75#
A non-negative float that tunes the degree of randomness in generation.
field truncate: Optional[str] = None#
Specify how the client handles inputs longer than the maximum token
length: truncate from START, END, or NONE.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
    llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.DeepInfra[source]#
Wrapper around DeepInfra deployed models.
To use, you should have the requests python package installed, and the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import DeepInfra
di = DeepInfra(model_id="google/flan-t5-xl",
               deepinfra_api_token="my-api-key")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
    llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.ForefrontAI[source]#
Wrapper around ForefrontAI large language models.
To use, you should have the environment variable FOREFRONTAI_API_KEY
set with your API key.
Example
from langchain.llms import ForefrontAI
forefrontai = ForefrontAI(endpoint_url="")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field base_url: Optional[str] = None#
Base URL to use; if None, it is chosen based on the model name.
field endpoint_url: str = ''#
Endpoint URL to use.
field length: int = 256#
The maximum number of tokens to generate in the completion.
field repetition_penalty: int = 1#
Penalizes repeated tokens according to frequency.
field temperature: float = 0.7#
What sampling temperature to use.
field top_k: int = 40#
The number of highest probability vocabulary tokens to
keep for top-k-filtering.
field top_p: float = 1.0#
Total probability mass of tokens to consider at each step.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
    llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.GPT4All[source]#
Wrapper around GPT4All language models.
To use, you should have the pygpt4all python package installed, the
pre-trained model file, and the model’s config information.
Example
from langchain.llms import GPT4All
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)
# Simplest invocation
response = model("Once upon a time, ")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field echo: Optional[bool] = False#
Whether to echo the prompt.
field embedding: bool = False#
Use embedding mode only.
field f16_kv: bool = False#
Use half-precision for key/value cache.
field logits_all: bool = False#
Return logits for all tokens, not just the last token.
field model: str [Required]#
Path to the pre-trained GPT4All model file.
field n_batch: int = 1#
Batch size for prompt processing.
field n_ctx: int = 512#
Token context window.
field n_parts: int = -1#
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
field n_predict: Optional[int] = 256#
The maximum number of tokens to generate.
field n_threads: Optional[int] = 4#
Number of threads to use.
field repeat_last_n: Optional[int] = 64#
Last n tokens to penalize.
field repeat_penalty: Optional[float] = 1.3#
The penalty to apply to repeated tokens.
field seed: int = 0#
Seed. If -1, a random seed is used.
field stop: Optional[List[str]] = []#
A list of strings to stop generation when encountered.
field streaming: bool = False#
Whether to stream the results or not.
field temp: Optional[float] = 0.8#
The temperature to use for sampling.
field top_k: Optional[int] = 40#
The top-k value to use for sampling.
field top_p: Optional[float] = 0.95#
The top-p value to use for sampling.
field use_mlock: bool = False#
Force system to keep model in RAM.
field vocab_only: bool = False#
Only load the vocabulary, no weights.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
    llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.GooglePalm[source]#
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
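No description or usage example accompanies this class in the original reference; a minimal sketch, assuming the key is passed as google_api_key (it may also be read from the GOOGLE_API_KEY environment variable):
from langchain.llms import GooglePalm
# "my-api-key" is a placeholder
palm = GooglePalm(google_api_key="my-api-key", temperature=0.7)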
field max_output_tokens: Optional[int] = None#
Maximum number of tokens to include in a candidate. Must be greater than zero.
If unset, will default to 64.
field model_name: str = 'models/text-bison-001'#
Model name to use.
field n: int = 1#
Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated.
field temperature: float = 0.7#
Run inference with this temperature. Must be in the closed interval
[0.0, 1.0].
field top_k: Optional[int] = None#
Decode using top-k sampling: consider the set of top_k most probable tokens.
Must be positive.
field top_p: Optional[float] = None#
Decode using nucleus sampling: consider the smallest set of tokens whose
probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
    llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.GooseAI[source]#
Wrapper around GooseAI large language models.
To use, you should have the openai python package installed, and the
environment variable GOOSEAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
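The example block is missing here in the original reference; a minimal sketch using the class's default model name, assuming GOOSEAI_API_KEY is set in the environment:
from langchain.llms import GooseAI
gooseai = GooseAI(model_name="gpt-neo-20b")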
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field frequency_penalty: float = 0#
Penalizes repeated tokens according to frequency.
field logit_bias: Optional[Dict[str, float]] [Optional]#
Adjust the probability of specific tokens being generated.
field max_tokens: int = 256#
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
field min_tokens: int = 1#
The minimum number of tokens to generate in the completion.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'gpt-neo-20b'#
Model name to use.
field n: int = 1#
How many completions to generate for each prompt.
field presence_penalty: float = 0#
Penalizes repeated tokens.
field temperature: float = 0.7#
What sampling temperature to use.
field top_p: float = 1#
Total probability mass of tokens to consider at each step.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.HuggingFaceEndpoint[source]#
Wrapper around HuggingFaceHub Inference Endpoints.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import HuggingFaceEndpoint
endpoint_url = (
"https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud"
)
hf = HuggingFaceEndpoint(
endpoint_url=endpoint_url,
huggingfacehub_api_token="my-api-key"
)
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field endpoint_url: str = ''#
Endpoint URL to use.
field model_kwargs: Optional[dict] = None#
Keyword arguments to pass to the model.
field task: Optional[str] = None#
Task to call the model with. Should be a task that returns generated_text.
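A usage sketch, assuming the hf endpoint instance from the example above; __call__ checks the cache and returns the completion string (the prompt and stop words here are illustrative):
# hf is the HuggingFaceEndpoint constructed above.
completion = hf("What is deep learning?", stop=["\n\n"])
print(completion)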
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.HuggingFaceHub[source]#
Wrapper around HuggingFaceHub models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import HuggingFaceHub
hf = HuggingFaceHub(repo_id="gpt2", huggingfacehub_api_token="my-api-key")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field model_kwargs: Optional[dict] = None#
Keyword arguments to pass to the model.
field repo_id: str = 'gpt2'#
Model name to use.
field task: Optional[str] = None#
Task to call the model with. Should be a task that returns generated_text.
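A usage sketch, assuming the hf instance from the example above; generate batches prompts and returns an LLMResult (the prompts here are illustrative):
# Each prompt gets its own list of generations in the result.
result = hf.generate(["Tell me a joke.", "Tell me a fact."])
print(result.generations[0][0].text)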
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.HuggingFacePipeline[source]#
Wrapper around HuggingFace Pipeline API.
To use, you should have the transformers python package installed.
Only supports text-generation and text2text-generation for now.
Example using from_model_id:
from langchain.llms import HuggingFacePipeline
hf = HuggingFacePipeline.from_model_id(
    model_id="gpt2", task="text-generation"
)
Example passing pipeline in directly:
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
"text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
)
hf = HuggingFacePipeline(pipeline=pipe)
Validators
raise_deprecation » all fields
set_verbose » verbose
field model_id: str = 'gpt2'#
Model name to use.
field model_kwargs: Optional[dict] = None#
Keyword arguments to pass to the model.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
classmethod from_model_id(model_id: str, task: str, device: int = -1, model_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.llms.base.LLM[source]#
Construct the pipeline object from model_id and task.
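A minimal sketch of from_model_id; device=-1 (the default) runs on CPU, while a non-negative integer selects that CUDA device index (the prompt below is illustrative):
# task must be "text-generation" or "text2text-generation".
hf = HuggingFacePipeline.from_model_id(
    model_id="gpt2", task="text-generation", device=-1
)
print(hf("Once upon a time"))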
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.LlamaCpp[source]#
Wrapper around the llama.cpp model.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: abetlen/llama-cpp-python
Example
from langchain.llms import LlamaCpp
llm = LlamaCpp(model_path="/path/to/llama/model")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field echo: Optional[bool] = False#
Whether to echo the prompt.
field f16_kv: bool = True#
Use half-precision for key/value cache.
field last_n_tokens_size: Optional[int] = 64#
The number of tokens to look back when applying the repeat_penalty.
field logits_all: bool = False#
Return logits for all tokens, not just the last token.
field logprobs: Optional[int] = None#
The number of logprobs to return. If None, no logprobs are returned.
field lora_base: Optional[str] = None#
The path to the Llama LoRA base model.
field lora_path: Optional[str] = None#
The path to the Llama LoRA. If None, no LoRA is loaded.
field max_tokens: Optional[int] = 256#
The maximum number of tokens to generate.
field model_path: str [Required]#
The path to the Llama model file.
field n_batch: Optional[int] = 8#
Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
field n_ctx: int = 512#
Token context window.
field n_parts: int = -1#
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
field n_threads: Optional[int] = None#
Number of threads to use.
If None, the number of threads is automatically determined.
field repeat_penalty: Optional[float] = 1.1#
The penalty to apply to repeated tokens.
field seed: int = -1#
Seed. If -1, a random seed is used.
field stop: Optional[List[str]] = []#
A list of strings to stop generation when encountered.
field streaming: bool = True#
Whether to stream the results, token by token.
field suffix: Optional[str] = None#
A suffix to append to the generated text. If None, no suffix is appended.
field temperature: Optional[float] = 0.8#
The temperature to use for sampling.
field top_k: Optional[int] = 40#
The top-k value to use for sampling.
field top_p: Optional[float] = 0.95#
The top-p value to use for sampling.
field use_mlock: bool = False#
Force system to keep model in RAM.
field use_mmap: Optional[bool] = True#
Whether to keep the model loaded in RAM.
field vocab_only: bool = False#
Only load the vocabulary, no weights.
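A sketch combining the sampling fields above (the model path, prompt, and stop words are placeholders):
from langchain.llms import LlamaCpp
# temperature, top_k, top_p and repeat_penalty jointly control sampling.
llm = LlamaCpp(
    model_path="/path/to/llama/model.bin",
    temperature=0.8,
    top_k=40,
    top_p=0.95,
    repeat_penalty=1.1,
)
print(llm("Q: What is the capital of France? A:", stop=["\n"]))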
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[langchain.callbacks.manager.CallbackManagerForLLMRun] = None) → Generator[Dict, None, None][source]#
Yields result objects as they are generated in real time.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
It also calls the callback manager’s on_llm_new_token event with
similar parameters to the OpenAI LLM class method of the same name.
Parameters
prompt – The prompt to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens being generated.
Yields
A dictionary-like object containing a string token and metadata.
See llama-cpp-python docs and below for more.
Example:
from langchain.llms import LlamaCpp
llm = LlamaCpp(
    model_path="/path/to/local/model.bin",
    temperature=0.5
)
for chunk in llm.stream("Ask 'Hi, how are you?' like a pirate:'",
                        stop=["'", "\n"]):
    result = chunk["choices"][0]
    print(result["text"], end='', flush=True)
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Modal[source]#
Wrapper around Modal large language models.
To use, you should have the modal-client python package installed.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
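The Example section is empty in this extract; a minimal sketch, assuming a deployed Modal web endpoint (the URL is a placeholder):
from langchain.llms import Modal
# endpoint_url points at your deployed Modal web endpoint.
modal = Modal(endpoint_url="https://your-workspace--your-app.modal.run")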
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
field endpoint_url: str = ''#
Model endpoint to use.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.NLPCloud[source]#
Wrapper around NLPCloud large language models.
To use, you should have the nlpcloud python package installed, and the
environment variable NLPCLOUD_API_KEY set with your API key.
Example
from langchain.llms import NLPCloud
nlpcloud = NLPCloud(model="gpt-neox-20b")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field bad_words: List[str] = []#
List of tokens not allowed to be generated.
field do_sample: bool = True#
Whether to use sampling (True) or greedy decoding.
field early_stopping: bool = False#
Whether to stop beam search at num_beams sentences.
field length_no_input: bool = True#
Whether min_length and max_length should include the length of the input.
field length_penalty: float = 1.0#
Exponential penalty to the length.
field max_length: int = 256#
The maximum number of tokens to generate in the completion.
field min_length: int = 1#
The minimum number of tokens to generate in the completion.
field model_name: str = 'finetuned-gpt-neox-20b'#
Model name to use.
field num_beams: int = 1#
Number of beams for beam search.
field num_return_sequences: int = 1#
How many completions to generate for each prompt.
field remove_end_sequence: bool = True#
Whether or not to remove the end sequence token.
field remove_input: bool = True#
Remove input text from API response.
field repetition_penalty: float = 1.0#
Penalizes repeated tokens. 1.0 means no penalty.
field temperature: float = 0.7#
What sampling temperature to use.
field top_k: int = 50#
The number of highest probability tokens to keep for top-k filtering.
field top_p: int = 1#
Total probability mass of tokens to consider at each step.
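A usage sketch combining the decoding fields above; assumes NLPCLOUD_API_KEY is set in the environment and the prompt is illustrative:
from langchain.llms import NLPCloud
# Sampling is on by default (do_sample=True); max_length caps the completion.
nlpcloud = NLPCloud(temperature=0.7, max_length=128)
print(nlpcloud("Write a haiku about the sea."))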
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.OpenAI[source]#
Wrapper around OpenAI large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import OpenAI
openai = OpenAI(model_name="text-davinci-003")
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field verbose: bool [Optional]#
Whether to print out response text.
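A usage sketch, assuming the openai instance from the example above; generate batches prompts into one LLMResult whose llm_output carries token usage:
# Requires OPENAI_API_KEY in the environment; prompts are illustrative.
result = openai.generate(["Tell me a joke.", "Tell me a poem."])
print(result.generations[0][0].text)
print(result.llm_output["token_usage"])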
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → langchain.schema.LLMResult#
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Calculate num tokens with tiktoken package.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]#
Get the sub prompts for llm call.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
max_tokens_for_prompt(prompt: str) → int#
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
modelname_to_contextsize(modelname: str) → int#
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]#
Prepare the params for streaming.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) → Generator#
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt – The prompt to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.OpenAIChat[source]#
Wrapper around OpenAI Chat large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import OpenAIChat
openaichat = OpenAIChat(model_name="gpt-3.5-turbo")
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#
Set of special tokens that are allowed.
field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#
Set of special tokens that are not allowed.
field max_retries: int = 6#
Maximum number of retries to make when generating.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'gpt-3.5-turbo'#
Model name to use.
field prefix_messages: List [Optional]#
Series of messages for Chat input.
field streaming: bool = False#
Whether to stream the results or not.
field verbose: bool [Optional]#
Whether to print out response text.
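A sketch of prefix_messages, which seeds every call with fixed chat turns before the user prompt (assumes OPENAI_API_KEY is set; the system message is illustrative):
from langchain.llms import OpenAIChat
# Each prefix message is an OpenAI-style dict with "role" and "content".
openaichat = OpenAIChat(
    model_name="gpt-3.5-turbo",
    prefix_messages=[{"role": "system", "content": "You are a terse assistant."}],
)
print(openaichat("What is LangChain?"))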