id stringlengths 14 16 | source stringlengths 49 117 | text stringlengths 16 2.73k |
|---|---|---|
9d3e0c332542-11 | https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html | langchain.agents.agent_toolkits.create_pbi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManage... |
9d3e0c332542-12 | https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html | [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', examples: Optional[str] = None, input_variables: Opti... |
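The Thought/Action/Action Input/Observation template in this prompt is what the agent expects the LLM to emit on every turn. A minimal sketch of parsing one such step — an illustrative regex, not LangChain's actual output parser:

```python
import re

# Matches "Action: <tool>\nAction Input: <input>" in one LLM turn.
ACTION_RE = re.compile(r"Action: (.*?)\nAction Input: (.*)", re.DOTALL)

def parse_step(llm_output: str):
    """Return ("final", answer) or ("action", tool_name, tool_input)."""
    if "Final Answer:" in llm_output:
        return ("final", llm_output.split("Final Answer:")[-1].strip())
    match = ACTION_RE.search(llm_output)
    if not match:
        raise ValueError(f"Could not parse step: {llm_output!r}")
    return ("action", match.group(1).strip(), match.group(2).strip())
```

For example, `parse_step("Thought: need the schema\nAction: list_tables\nAction Input: all")` yields `("action", "list_tables", "all")`, and the agent loop repeats until a `Final Answer:` line appears.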
9d3e0c332542-13 | https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html | Construct a pbi agent from an LLM and tools. |
9d3e0c332542-14 | https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html | langchain.agents.agent_toolkits.create_pbi_chat_agent(llm: langchain.chat_models.base.BaseChatModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackMa... |
9d3e0c332542-15 | https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html | else):\n\n{{{{input}}}}\n", examples: Optional[str] = None, input_variables: Optional[List[str]] = None, memory: Optional[langchain.memory.chat_memory.BaseChatMemory] = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agen... |
9d3e0c332542-16 | https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html | Construct a pbi agent from a Chat LLM and tools.
If you supply only a toolkit and no powerbi dataset, the same LLM is used for both.
langchain.agents.agent_toolkits.create_python_agent(llm: langchain.base_language.BaseLanguageModel, tool: langchain.tools.python.tool.PythonREPLTool, callback_manager: Optional[langchain... |
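The `toolkit` and `powerbi` parameters of these constructors are alternatives: per the note above, when only the dataset is supplied the agent builds a toolkit around the same LLM. A toy sketch of that either/or argument resolution — hypothetical names, not LangChain's code:

```python
def resolve_toolkit(toolkit=None, powerbi=None, llm=None):
    """Require at least one of toolkit/powerbi; when only the dataset is
    given, build a default toolkit from it using the same LLM."""
    if toolkit is None:
        if powerbi is None:
            raise ValueError("Provide either a toolkit or a powerbi dataset")
        # stand-in for PowerBIToolkit(powerbi=powerbi, llm=llm)
        toolkit = {"powerbi": powerbi, "llm": llm}
    return toolkit
```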
9d3e0c332542-17 | https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html | langchain.agents.agent_toolkits.create_spark_dataframe_agent(llm: langchain.llms.base.BaseLLM, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = '\nYou are working with a spark dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below t... |
9d3e0c332542-18 | https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html | langchain.agents.agent_toolkits.create_spark_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with Sp... |
9d3e0c332542-19 | https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html | Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_... |
9d3e0c332542-20 | https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html | Construct a sql agent from an LLM and tools. |
9d3e0c332542-21 | https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html | langchain.agents.agent_toolkits.create_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with a SQL datab... |
9d3e0c332542-22 | https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html | Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_... |
9d3e0c332542-23 | https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html | Construct a sql agent from an LLM and tools.
langchain.agents.agent_toolkits.create_vectorstore_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: ... |
9d3e0c332542-24 | https://python.langchain.com/en/latest/reference/modules/agent_toolkits.html | Construct a vectorstore router agent from an LLM and tools.
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 04, 2023. |
de6bf77f44a2-0 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | Document Loaders#
All different types of document loaders.
class langchain.document_loaders.AZLyricsLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Loader that loads AZLyrics webpages.
load() → List[langchain.schema.Document][source]#
Load webpage.
cla... |
de6bf77f44a2-1 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.AzureBlobStorageFileLoader(conn_str: str, container: str, blob_name: str)[source]#
Loading logic for loading documents from Azure Blob Storage.
load() → List[langchain.schema.Document][source]#
Load documents.
class langc... |
de6bf77f44a2-2 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | See https://bibtexparser.readthedocs.io/en/master/
Parameters
file_path – the path to the bibtex file
Returns
a list of documents with the document.page_content in text format
class langchain.document_loaders.BigQueryLoader(query: str, project: Optional[str] = None, page_content_columns: Optional[List[str]] = None, met... |
de6bf77f44a2-3 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | loader = BlackboardLoader(
blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1",
bbrouter="expires:12345...",
)
documents = loader.load()
base_url: str#
check_bs4() → None[source]#
Check if BeautifulSoup4 is install... |
de6bf77f44a2-4 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | If get_all_tokens is set to True, the loader will get all tokens
on the contract. Note that for contracts with a large number of tokens,
this may take a long time (e.g. 10k tokens is 100 requests).
Default value is false for this reason.
The max_execution_time (sec) can be set to limit the execution time
of the loader... |
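The 10k-tokens-is-100-requests arithmetic above implies a page size of 100 per request. A self-contained sketch of that kind of paging with an execution-time cap — illustrative, not the loader's actual code:

```python
import time

PAGE_SIZE = 100  # 10k tokens => 100 requests, per the note above

def fetch_all_tokens(fetch_page, max_execution_time=None):
    """Page through results until a short page arrives, or raise once
    the optional time budget (seconds) is exhausted.

    fetch_page(offset) is assumed to return up to PAGE_SIZE token ids.
    """
    start = time.time()
    tokens, offset = [], 0
    while True:
        if max_execution_time and time.time() - start > max_execution_time:
            raise RuntimeError("Loader exceeded max_execution_time")
        page = fetch_page(offset)
        tokens.extend(page)
        if len(page) < PAGE_SIZE:  # short page means no more results
            return tokens
        offset += PAGE_SIZE
```

With a fake `fetch_page` backed by 250 tokens, this makes exactly three requests (offsets 0, 100, 200).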
de6bf77f44a2-5 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.CoNLLULoader(file_path: str)[source]#
Load CoNLL-U files.
load() → List[langchain.schema.Document][source]#
Load from file path.
class langchain.document_loaders.CollegeConfidentialLoader(web_path: Union[... |
de6bf77f44a2-6 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | Hint: space_key and page_id can both be found in the URL of a page in Confluence
- https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>
Example
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
url="https://yoursite.atlassian.com/wiki",
username="me",
api_k... |
de6bf77f44a2-7 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | load(space_key: Optional[str] = None, page_ids: Optional[List[str]] = None, label: Optional[str] = None, cql: Optional[str] = None, include_restricted_content: bool = False, include_archived_content: bool = False, include_attachments: bool = False, include_comments: bool = False, limit: Optional[int] = 50, max_pages: O... |
de6bf77f44a2-8 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | package, we don’t get the “next” values from the “_links” key because
they only return the value from the results key. So here, the pagination
starts from 0 and goes until the max_pages, getting the limit number
of pages with each request. We have to manually check if there
are more docs based on the length of the retu... |
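The manual pagination described above — start at 0, request `limit` documents at a time, stop when a batch comes back short or `max_pages` is reached — can be sketched as follows (hypothetical fetch function, not the loader's code):

```python
def paginate(fetch_batch, limit=50, max_pages=1000):
    """fetch_batch(start, limit) returns up to `limit` results; the API
    exposes no "next" link, so we stop when a batch comes back short."""
    docs, start = [], 0
    while start < max_pages:
        batch = fetch_batch(start, limit)
        docs.extend(batch)
        if len(batch) < limit:  # fewer than requested: nothing left
            break
        start += limit
    return docs
```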
de6bf77f44a2-9 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | load() → List[langchain.schema.Document][source]#
Load from the dataframe.
class langchain.document_loaders.DiffbotLoader(api_token: str, urls: List[str], continue_on_failure: bool = True)[source]#
Loader that loads Diffbot file json.
load() → List[langchain.schema.Document][source]#
Extract text from Diffbot on all th... |
de6bf77f44a2-10 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | field access_token: Optional[str] = None#
field api: str = 'https://api.docugami.com/v1preview1'#
field docset_id: Optional[str] = None#
field document_ids: Optional[Sequence[str]] = None#
field file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = None#
field min_chunk_size: int = 32#
load() → List[langchain.sche... |
de6bf77f44a2-11 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | Loads an EverNote notebook export file, e.g. my_notebook.enex, into Documents.
Instructions on producing this file can be found at
https://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML
Currently only the plain text in the note is extracted and stored as the contents
of the Docum... |
de6bf77f44a2-12 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | class langchain.document_loaders.GCSFileLoader(project_name: str, bucket: str, blob: str)[source]#
Loading logic for loading documents from GCS.
load() → List[langchain.schema.Document][source]#
Load documents.
pydantic model langchain.document_loaders.GitHubIssuesLoader[source]#
Validators
validate_environment » all f... |
de6bf77f44a2-13 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | Default is ‘created’.
field state: Optional[Literal['open', 'closed', 'all']] = None#
Filter on issue state. Can be one of: ‘open’, ‘closed’, ‘all’.
lazy_load() → Iterator[langchain.schema.Document][source]#
Get issues of a GitHub repository.
Returns
page_content
metadata
url
title
creator
created_at
last_update_time
c... |
de6bf77f44a2-14 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | Load data into document objects.
class langchain.document_loaders.GitbookLoader(web_page: str, load_all_paths: bool = False, base_url: Optional[str] = None, content_selector: str = 'main')[source]#
Load GitBook data.
Load from either a single page, or load all (relative) paths in the navbar.
load() → List[langchain.sch... |
de6bf77f44a2-15 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | class langchain.document_loaders.GoogleApiYoutubeLoader(google_api_client: langchain.document_loaders.youtube.GoogleApiClient, channel_name: Optional[str] = None, video_ids: Optional[List[str]] = None, add_video_info: bool = True, captions_language: str = 'en', continue_on_failure: bool = False)[source]#
Loader that lo... |
de6bf77f44a2-16 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | field credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')#
field document_ids: Optional[List[str]] = None#
field file_ids: Optional[List[str]] = None#
field file_types: Optional[Sequence[str]] = None#
field folder_id: Optional[str] = None#
field load_trashed_files: bool = False#
field... |
de6bf77f44a2-17 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | class langchain.document_loaders.HuggingFaceDatasetLoader(path: str, page_content_column: str = 'text', name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, cache_dir: Optional[str] = None, keep_in_memory: Optional[b... |
de6bf77f44a2-18 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | static load_suggestions(query: str = '', doc_type: str = 'all') → List[langchain.schema.Document][source]#
class langchain.document_loaders.IMSDbLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Loader that loads IMSDb webpages.
load() → List[langchain.schema.Document][source]#
Lo... |
de6bf77f44a2-19 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | class langchain.document_loaders.JoplinLoader(access_token: Optional[str] = None, port: int = 41184, host: str = 'localhost')[source]#
Loader that fetches notes from Joplin.
In order to use this loader, you need to have Joplin running with the
Web Clipper enabled (look for “Web Clipper” in the app settings).
To get the... |
de6bf77f44a2-20 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | class langchain.document_loaders.MastodonTootsLoader(mastodon_accounts: Sequence[str], number_toots: Optional[int] = 100, exclude_replies: bool = False, access_token: Optional[str] = None, api_base_url: str = 'https://mastodon.social')[source]#
Mastodon toots loader.
load() → List[langchain.schema.Document][source]#
Lo... |
de6bf77f44a2-21 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | project – A project is a basic organizational unit of MaxCompute, which is
similar to a database.
access_id – MaxCompute access ID. Should be passed in directly or set as the
environment variable MAX_COMPUTE_ACCESS_ID.
secret_access_key – MaxCompute secret access key. Should be passed in
directly or set as the environm... |
de6bf77f44a2-22 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | :rtype: List[Document]
load_page(page_id: str) → langchain.schema.Document[source]#
Read a page.
class langchain.document_loaders.NotionDirectoryLoader(path: str)[source]#
Loader that loads Notion directory dump.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.Obsidian... |
de6bf77f44a2-23 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.OutlookMessageLoader(file_path: str)[source]#
Loader that loads Outlook Message files using extract_msg.
TeamMsgExtractor/msg-extractor
load() → List[langchain.schema.Document][source]#
Load data into document objects.
cl... |
de6bf77f44a2-24 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | If True, continue loading other URLs on failure.
Type
bool
headless#
If True, the browser will run in headless mode.
Type
bool
load() → List[langchain.schema.Document][source]#
Load the specified URLs using Playwright and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
L... |
de6bf77f44a2-25 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | class langchain.document_loaders.PyPDFium2Loader(file_path: str)[source]#
Loads a PDF with pypdfium2 and chunks at character level.
lazy_load() → Iterator[langchain.schema.Document][source]#
Lazy load given path as pages.
load() → List[langchain.schema.Document][source]#
Load given path as pages.
class langchain.docume... |
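The `lazy_load`/`load` pair above is a common loader pattern: `lazy_load` yields one page at a time, and `load` simply materializes that iterator. A minimal sketch with a toy page source (not pypdfium2):

```python
from typing import Iterator, List

class PagedLoader:
    """Toy loader: lazy_load yields pages one at a time; load collects them."""

    def __init__(self, pages: List[str]):
        self.pages = pages

    def lazy_load(self) -> Iterator[str]:
        for page in self.pages:
            yield page  # a real loader would parse each page only on demand

    def load(self) -> List[str]:
        return list(self.lazy_load())
```

The lazy variant keeps memory flat for large files; `load` trades that for a plain list.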
de6bf77f44a2-26 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | First you need to go to
https://www.reddit.com/prefs/apps/
and create your application
load() → List[langchain.schema.Document][source]#
Load reddits.
class langchain.document_loaders.RoamLoader(path: str)[source]#
Loader that loads Roam files from disk.
load() → List[langchain.schema.Document][source]#
Load documents.... |
de6bf77f44a2-27 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | binary_location#
The location of the browser binary.
Type
Optional[str]
executable_path#
The path to the browser executable.
Type
Optional[str]
headless#
If True, the browser will run in headless mode.
Type
bool
arguments [List[str]]
List of arguments to pass to the browser.
load() → List[langchain.schema.Document][sou... |
de6bf77f44a2-28 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.TelegramChatApiLoader(chat_entity: Optional[EntityLike] = None, api_id: Optional[int] = None, api_hash: Optional[str] = None, username: Optional[str] = None, file_path: str = 'telegram_data.json')[source]... |
de6bf77f44a2-29 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | load() → List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.TomlLoader(source: Union[str, pathlib.Path])[source]#
A TOML document loader that inherits from the BaseLoader class.
This class can be initialized with either a single source file or a source
directory containing TOML files.
... |
de6bf77f44a2-30 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | card_filter – Filter on card status. Valid values are “closed”, “open”,
“all”.
extra_metadata – List of additional metadata fields to include as document
metadata. Valid values are “due_date”, “labels”, “list”, “closed”.
load() → List[langchain.schema.Document][source]#
Loads all cards from the specified Trello board.
Y... |
de6bf77f44a2-31 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | class langchain.document_loaders.UnstructuredAPIFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]#
Loader that uses the unstructured web API to load file IO objects.
class langchain.docume... |
de6bf77f44a2-32 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | class langchain.document_loaders.UnstructuredHTMLLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load HTML files.
class langchain.document_loaders.UnstructuredImageLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstru... |
de6bf77f44a2-33 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | load() → List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.UnstructuredWordDocumentLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load word documents.
class langchain.document_loaders.WeatherDataLoad... |
de6bf77f44a2-34 | https://python.langchain.com/en/latest/reference/modules/document_loaders.html | Max number of concurrent requests to make.
scrape(parser: Optional[str] = None) → Any[source]#
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any][source]#
Fetch all urls, then return soups for all results.
property web_path: str#
web_pat... |
cf8fdbccf568-0 | https://python.langchain.com/en/latest/reference/modules/document_transformers.html | Document Transformers#
Transform documents
pydantic model langchain.document_transformers.EmbeddingsRedundantFilter[source]#
Filter that drops redundant documents by comparing their embeddings.
field embeddings: langchain.embeddings.base.Embeddings [Required]#
Embeddings to use for embed... |
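Dropping redundant documents by comparing embeddings usually means checking cosine similarity against the documents already kept. A minimal sketch with hand-rolled math — illustrative, not the filter's implementation (which also has a configurable threshold):

```python
import math
from typing import List, Sequence

def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def drop_redundant(embeddings: List[Sequence[float]],
                   threshold: float = 0.95) -> List[int]:
    """Return indices to keep, skipping near-duplicates of kept vectors."""
    kept: List[int] = []
    for i, emb in enumerate(embeddings):
        if all(cosine(emb, embeddings[j]) < threshold for j in kept):
            kept.append(i)
    return kept
```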
1ae35bf65e45-0 | https://python.langchain.com/en/latest/reference/modules/llms.html | LLMs#
Wrappers on top of large language models APIs.
pydantic model langchain.llms.AI21[source]#
Wrapper around AI21 large language models.
To use, you should have the environment variable AI21_API_KEY
set with your API key.
Example
from langchain.llms import AI21
ai21 = AI21(model="j2-jumbo-instruct")
V... |
1ae35bf65e45-1 | https://python.langchain.com/en/latest/reference/modules/llms.html | field presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens.
field temperature: float = 0.7#
What sampling temperature to use.
field topP: float = 1.0#
... |
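The `temperature` and `topP` fields above control how the next-token distribution is reshaped before sampling. A self-contained sketch of the standard math — illustrative, not AI21's implementation:

```python
import math
from typing import Dict

def apply_temperature_top_p(logits: Dict[str, float], temperature: float = 0.7,
                            top_p: float = 1.0) -> Dict[str, float]:
    """Softmax with temperature, then keep the smallest set of tokens whose
    cumulative probability reaches top_p (nucleus sampling), renormalized."""
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    exps = {t: math.exp(v - m) for t, v in scaled.items()}
    z = sum(exps.values())
    probs = {t: e / z for t, e in exps.items()}
    kept, cum = {}, 0.0
    for t, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[t] = p
        cum += p
        if cum >= top_p:
            break
    z = sum(kept.values())
    return {t: p / z for t, p in kept.items()}
```

Lower temperature sharpens the distribution; lower `top_p` truncates its tail before sampling.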
1ae35bf65e45-2 | https://python.langchain.com/en/latest/reference/modules/llms.html | classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values... |
1ae35bf65e45-3 | https://python.langchain.com/en/latest/reference/modules/llms.html | get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIn... |
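The `get_token_ids`/`get_num_tokens` pair above maps text to token IDs and counts them; the real base class delegates to a tokenizer, but the relationship can be sketched with a toy whitespace tokenizer (illustrative only):

```python
from typing import Dict, List

VOCAB: Dict[str, int] = {}  # toy vocabulary, grown on first sight of a word

def get_token_ids(text: str) -> List[int]:
    """Toy tokenizer: one id per whitespace-separated word."""
    return [VOCAB.setdefault(w, len(VOCAB)) for w in text.split()]

def get_num_tokens(text: str) -> int:
    return len(get_token_ids(text))
```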
1ae35bf65e45-4 | https://python.langchain.com/en/latest/reference/modules/llms.html | Wrapper around Aleph Alpha large language models.
To use, you should have the aleph_alpha_client python package installed, and the
environment variable ALEPH_ALPHA_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Parameters are explained more in depth here:
Aleph-Alpha/aleph-alpha-clie... |
1ae35bf65e45-5 | https://python.langchain.com/en/latest/reference/modules/llms.html | The logit bias allows you to influence the likelihood of generating tokens.
field maximum_tokens: int = 64#
The maximum number of tokens to be generated.
field minimum_tokens: Optional[int] = 0#
Generate at least this number of tokens.
field model: Optional[str] = 'luminous-base'#
Model name to use.
field n: int = 1#
How m... |
1ae35bf65e45-6 | https://python.langchain.com/en/latest/reference/modules/llms.html | multiplicatively (True) or additively (False).
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cac... |
1ae35bf65e45-7 | https://python.langchain.com/en/latest/reference/modules/llms.html | copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fie... |
1ae35bf65e45-8 | https://python.langchain.com/en/latest/reference/modules/llms.html | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... |
1ae35bf65e45-9 | https://python.langchain.com/en/latest/reference/modules/llms.html | model = Anthropic(model="<model_name>", anthropic_api_key="my-api-key")
# Simplest invocation, automatically wrapped with HUMAN_PROMPT
# and AI_PROMPT.
response = model("What are the biggest risks facing humanity?")
# Or if you want to use the chat mode, build a few-shot-prompt, or
# put words in the Assistant's mouth,... |
1ae35bf65e45-10 | https://python.langchain.com/en/latest/reference/modules/llms.html | async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langcha... |
1ae35bf65e45-11 | https://python.langchain.com/en/latest/reference/modules/llms.html | update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional... |
1ae35bf65e45-12 | https://python.langchain.com/en/latest/reference/modules/llms.html | Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[... |
1ae35bf65e45-13 | https://python.langchain.com/en/latest/reference/modules/llms.html | Service, or pass it as a named parameter to the constructor.
Example
from langchain.llms import Anyscale
anyscale = Anyscale(anyscale_service_url="SERVICE_URL",
anyscale_service_route="SERVICE_ROUTE",
anyscale_service_token="SERVICE_TOKEN")
# Use Ray for distributed processing
im... |
1ae35bf65e45-14 | https://python.langchain.com/en/latest/reference/modules/llms.html | async apredict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetS... |
1ae35bf65e45-15 | https://python.langchain.com/en/latest/reference/modules/llms.html | generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_n... |
1ae35bf65e45-16 | https://python.langchain.com/en/latest/reference/modules/llms.html | .. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.AzureOpenAI[source]#
Wrapper around Azure-specific OpenAI large language models.
To use, you sho... |
1ae35bf65e45-17 | https://python.langchain.com/en/latest/reference/modules/llms.html | -1 returns as many tokens as possible given the prompt and
the model's maximal context size.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'text-davinci-003' (alias 'model')#
Model name to use.
field n: int = 1#
How many ... |
1ae35bf65e45-18 | https://python.langchain.com/en/latest/reference/modules/llms.html | async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult... |
1ae35bf65e45-19 | https://python.langchain.com/en/latest/reference/modules/llms.html | create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → langchain.schema.LLMResult#
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[lang... |
1ae35bf65e45-20 | https://python.langchain.com/en/latest/reference/modules/llms.html | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... |
1ae35bf65e45-21 | https://python.langchain.com/en/latest/reference/modules/llms.html | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) → Generator#
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure ou... |
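The `stream()` method above returns a generator of completion chunks rather than a finished string. The consumption pattern can be sketched with a toy generator — not the OpenAI client, just the shape of the API:

```python
from typing import Generator

def stream(prompt: str) -> Generator[str, None, None]:
    """Toy stand-in: yield a canned completion piece by piece."""
    for piece in ("The", " answer", " is", " 42."):
        yield piece  # a real client yields chunks as the API sends them

pieces = []
for chunk in stream("What is the answer?"):
    pieces.append(chunk)  # e.g. print(chunk, end="") for live output
completion = "".join(pieces)
```

The caller sees partial output immediately and joins the chunks for the full text.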
1ae35bf65e45-22 | https://python.langchain.com/en/latest/reference/modules/llms.html | __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = N... |
1ae35bf65e45-23 | https://python.langchain.com/en/latest/reference/modules/llms.html | copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fie... |
1ae35bf65e45-24 | https://python.langchain.com/en/latest/reference/modules/llms.html | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... |
1ae35bf65e45-25 | https://python.langchain.com/en/latest/reference/modules/llms.html | The wrapper can then be called as follows, where the name, cpu, memory, gpu,
python version, and python packages can be updated accordingly. Once deployed,
the instance can be called.
Example
llm = Beam(model_name="gpt2",
name="langchain-gpt2",
cpu=8,
memory="32Gi",
gpu="A10G",
python_version="pytho... |
1ae35bf65e45-26 | https://python.langchain.com/en/latest/reference/modules/llms.html | async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult... |
1ae35bf65e45-27 | https://python.langchain.com/en/latest/reference/modules/llms.html | dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt an... |
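The `stop` parameter accepted by `generate()` follows the usual LLM convention: generation output is cut at the first occurrence of any stop sequence. A small hypothetical helper (not part of the LangChain API) illustrating that contract:

```python
def apply_stop(text, stop=None):
    """Cut generated text at the earliest stop sequence, if any.
    Illustrative sketch of common `stop` semantics."""
    if not stop:
        return text
    cut = len(text)
    for s in stop:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)   # keep the earliest match across all sequences
    return text[:cut]

out = apply_stop("Observation: 42\nThought: done", stop=["\nThought:"])
```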
1ae35bf65e45-28 | https://python.langchain.com/en/latest/reference/modules/llms.html | Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
run_creation() → None[source]#
Creates a Python file which will be deployed on beam.
save(file_path: Union[pathlib.Path, str]) → ... |
1ae35bf65e45-29 | https://python.langchain.com/en/latest/reference/modules/llms.html | Id of the model to call, e.g., amazon.titan-tg1-large, this is
equivalent to the modelId property in the list-foundation-models api
field model_kwargs: Optional[Dict] = None#
Keyword arguments to pass to the model.

field region_name: Optional[str] = None#
The AWS region, e.g., us-west-2. Falls back to AWS_DEFAULT_REGION... |
1ae35bf65e45-30 | https://python.langchain.com/en/latest/reference/modules/llms.html | classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values... |
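A sketch of what `construct()` does, using a plain class rather than pydantic: the normal constructor validates, while `construct()` trusts the caller and assigns fields directly, skipping validation. The `Model` class here is hypothetical, for illustration only.

```python
class Model:
    def __init__(self, temperature: float):
        # Normal path validates its input.
        if not 0.0 <= temperature <= 1.0:
            raise ValueError("temperature out of range")
        self.temperature = temperature

    @classmethod
    def construct(cls, **values):
        # pydantic-style construct(): set fields from trusted or
        # pre-validated data, performing no validation at all.
        self = object.__new__(cls)
        self.__dict__.update(values)
        return self

m = Model.construct(temperature=2.5)   # accepted: validation is skipped
```

This is why `construct()` should only ever be fed trusted or pre-validated data.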
1ae35bf65e45-31 | https://python.langchain.com/en/latest/reference/modules/llms.html | get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIn... |
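The token-counting contract of `get_token_ids()`/`get_num_tokens()` can be sketched with a naive whitespace-and-punctuation tokenizer; a real LLM would use its own subword tokenizer, so the counts here are purely illustrative.

```python
import re

def get_token_ids(text, vocab):
    """Naive stand-in tokenizer: split on words and punctuation and
    assign each distinct token an integer id via the shared vocab."""
    tokens = re.findall(r"\w+|[^\w\s]", text)
    return [vocab.setdefault(t, len(vocab)) for t in tokens]

def get_num_tokens(text, vocab):
    """Number of tokens present in the text."""
    return len(get_token_ids(text, vocab))

vocab = {}
n = get_num_tokens("Hello, world!", vocab)   # "Hello" "," "world" "!"
```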
1ae35bf65e45-32 | https://python.langchain.com/en/latest/reference/modules/llms.html | Wrapper around the C Transformers LLM interface.
To use, you should have the ctransformers python package installed.
See marella/ctransformers
Example
from langchain.llms import CTransformers
llm = CTransformers(model="/path/to/ggml-gpt-2.bin", model_type="gpt2")
Validators
raise_deprecation » all fields
set_verbose » ... |
1ae35bf65e45-33 | https://python.langchain.com/en/latest/reference/modules/llms.html | async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult... |
1ae35bf65e45-34 | https://python.langchain.com/en/latest/reference/modules/llms.html | generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.Prom... |
1ae35bf65e45-35 | https://python.langchain.com/en/latest/reference/modules/llms.html | predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
l... |
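The `save(file_path)` contract above can be sketched as serializing the LLM's parameter dictionary to a file. This stand-in writes JSON via the stdlib (LangChain itself also supports YAML); `save_llm` is a hypothetical helper, not the library function.

```python
import json
import pathlib
import tempfile

def save_llm(params: dict, file_path):
    """Sketch of save(): write the LLM's parameters to file_path,
    creating parent directories as needed."""
    path = pathlib.Path(file_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(params, indent=2))
    return path

with tempfile.TemporaryDirectory() as d:
    p = save_llm({"model_name": "gpt2", "temperature": 0.75}, f"{d}/llm.json")
    restored = json.loads(p.read_text())
```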
1ae35bf65e45-36 | https://python.langchain.com/en/latest/reference/modules/llms.html | async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langcha... |
1ae35bf65e45-37 | https://python.langchain.com/en/latest/reference/modules/llms.html | update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional... |
1ae35bf65e45-38 | https://python.langchain.com/en/latest/reference/modules/llms.html | Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[... |
1ae35bf65e45-39 | https://python.langchain.com/en/latest/reference/modules/llms.html | Model name to use.
field p: int = 1#
Total probability mass of tokens to consider at each step.
field presence_penalty: float = 0.0#
Penalizes repeated tokens. Between 0 and 1.
field temperature: float = 0.75#
A non-negative float that tunes the degree of randomness in generation.
field truncate: Optional[str] = None#
... |
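The `p` field above is the nucleus-sampling (top-p) parameter: at each step the model samples only from the smallest set of tokens whose cumulative probability reaches `p`. A stdlib sketch of that filtering step, with a made-up toy distribution:

```python
def top_p_filter(probs, p=1.0):
    """Keep the smallest set of tokens, in descending probability order,
    whose cumulative probability mass reaches p (nucleus sampling)."""
    total, kept = 0.0, []
    for tok, pr in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept.append(tok)
        total += pr
        if total >= p:
            break
    return kept

probs = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}
kept = top_p_filter(probs, p=0.75)
```

With `p=1.0` (the documented default) every token survives the filter, which is why `p=1` disables nucleus sampling.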
1ae35bf65e45-40 | https://python.langchain.com/en/latest/reference/modules/llms.html | classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values... |
1ae35bf65e45-41 | https://python.langchain.com/en/latest/reference/modules/llms.html | get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIn... |
1ae35bf65e45-42 | https://python.langchain.com/en/latest/reference/modules/llms.html | LLM wrapper around a Databricks serving endpoint or a cluster driver proxy app.
It supports two endpoint types:
Serving endpoint (recommended for both production and development).
We assume that an LLM was registered and deployed to a serving endpoint.
To wrap it as an LLM you must have “Can Query” permission to the en... |
1ae35bf65e45-43 | https://python.langchain.com/en/latest/reference/modules/llms.html | set_cluster_driver_port » cluster_driver_port
set_cluster_id » cluster_id
set_model_kwargs » model_kwargs
set_verbose » verbose
field api_token: str [Optional]#
Databricks personal access token.
If not provided, the default value is determined by
the DATABRICKS_API_TOKEN environment variable if present, or
an automatic... |
1ae35bf65e45-44 | https://python.langchain.com/en/latest/reference/modules/llms.html | field model_kwargs: Optional[Dict[str, Any]] = None#
Extra parameters to pass to the endpoint.
field transform_input_fn: Optional[Callable] = None#
A function that transforms {prompt, stop, **kwargs} into a JSON-compatible
request object that the endpoint accepts.
For example, you can apply a prompt template to the inp... |
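A hedged sketch of what a `transform_input_fn` might look like: it receives the prompt, stop sequences, and extra kwargs, and returns a JSON-compatible request body. The template and field names below are assumptions; a real endpoint defines its own schema.

```python
def transform_input(prompt, stop=None, **kwargs):
    """Hypothetical transform_input_fn: apply a prompt template and
    build the request object a serving endpoint might accept."""
    template = "Answer the question concisely.\n\nQuestion: {prompt}\nAnswer:"
    body = {"prompt": template.format(prompt=prompt)}
    if stop:
        body["stop"] = list(stop)
    body.update(kwargs)        # pass through any extra model parameters
    return body

req = transform_input("What is 2 + 2?", stop=["\n"], temperature=0.1)
```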
1ae35bf65e45-45 | https://python.langchain.com/en/latest/reference/modules/llms.html | classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values... |
1ae35bf65e45-46 | https://python.langchain.com/en/latest/reference/modules/llms.html | get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIn... |
1ae35bf65e45-47 | https://python.langchain.com/en/latest/reference/modules/llms.html | Wrapper around DeepInfra deployed models.
To use, you should have the requests python package installed, and the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.ll... |
1ae35bf65e45-48 | https://python.langchain.com/en/latest/reference/modules/llms.html | async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from t... |