980fab099ed2-0 | .rst
.pdf
Document Loaders
Document Loaders#
All different types of document loaders.
class langchain.document_loaders.AZLyricsLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Loader that loads AZLyrics webpages.
load() β List[langchain.schema.Document][source]#
Load webpage.
class langchain.document_loaders.AirbyteJSONLoader(file_path: str)[source]#
Loader that loads local airbyte json files.
load() β List[langchain.schema.Document][source]#
Load file.
pydantic model langchain.document_loaders.ApifyDatasetLoader[source]#
Logic for loading documents from Apify datasets.
field apify_client: Any = None#
field dataset_id: str [Required]#
The ID of the dataset on the Apify platform.
field dataset_mapping_function: Callable[[Dict], langchain.schema.Document] [Required]#
A custom function that takes a single dictionary (an Apify dataset item)
and converts it to an instance of the Document class.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.ArxivLoader(query: str, load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False)[source]#
Loads a query result from arxiv.org into a list of Documents.
Each retrieved paper becomes one Document.
The loader converts the original PDF content into text.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
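A minimal usage sketch, assuming the arxiv python package is installed; the query string and document count are illustrative:
from langchain.document_loaders import ArxivLoader

# "quantum computing" is an illustrative search query, not a required value.
loader = ArxivLoader(query="quantum computing", load_max_docs=2)
docs = loader.load()  # one Document per retrieved paper, PDF converted to text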
class langchain.document_loaders.AzureBlobStorageContainerLoader(conn_str: str, container: str, prefix: str = '')[source]#
Loading logic for loading documents from Azure Blob Storage.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.AzureBlobStorageFileLoader(conn_str: str, container: str, blob_name: str)[source]#
Loading logic for loading documents from Azure Blob Storage.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.BSHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]#
Loader that uses beautiful soup to parse HTML files.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.BibtexLoader(file_path: str, *, parser: Optional[langchain.utilities.bibtex.BibtexparserWrapper] = None, max_docs: Optional[int] = None, max_content_chars: Optional[int] = 4000, load_extra_metadata: bool = False, file_pattern: str = '[^:]+\\.pdf')[source]#
Loads a bibtex file into a list of Documents.
Each document represents one entry from the bibtex file.
If a PDF file is present in the file bibtex field, the original PDF
is loaded into the document text. If no such file entry is present,
the abstract field is used instead.
lazy_load() β Iterator[langchain.schema.Document][source]#
Load bibtex file using bibtexparser and get the article texts plus the
article metadata.
See https://bibtexparser.readthedocs.io/en/master/
Returns
a list of documents with the document.page_content in text format
load() → List[langchain.schema.Document][source]#
Load bibtex file documents from the given bibtex file path.
See https://bibtexparser.readthedocs.io/en/master/
Parameters
file_path β the path to the bibtex file
Returns
a list of documents with the document.page_content in text format
class langchain.document_loaders.BigQueryLoader(query: str, project: Optional[str] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]#
Loads a query result from BigQuery into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
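A minimal sketch of the column routing described above, assuming the google-cloud-bigquery package is installed and authenticated; the table and column names are hypothetical:
from langchain.document_loaders import BigQueryLoader

# Hypothetical table and columns, shown only to illustrate the routing.
query = "SELECT title, body, author FROM `my_project.my_dataset.articles`"
loader = BigQueryLoader(
    query=query,
    page_content_columns=["title", "body"],  # written into page_content
    metadata_columns=["author"],  # written into metadata
)
docs = loader.load()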
class langchain.document_loaders.BiliBiliLoader(video_urls: List[str])[source]#
Loader that loads bilibili transcripts.
load() β List[langchain.schema.Document][source]#
Load from bilibili url.
class langchain.document_loaders.BlackboardLoader(blackboard_course_url: str, bbrouter: str, load_all_recursively: bool = True, basic_auth: Optional[Tuple[str, str]] = None, cookies: Optional[dict] = None)[source]#
Loader that loads all documents from a Blackboard course.
This loader is not compatible with all Blackboard courses. It is only
compatible with courses that use the new Blackboard interface.
To use this loader, you must have the BbRouter cookie. You can get this
cookie by logging into the course and then copying the value of the
BbRouter cookie from the browser's developer tools.
Example
from langchain.document_loaders import BlackboardLoader
loader = BlackboardLoader(
blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1",
bbrouter="expires:12345...",
)
documents = loader.load()
base_url: str#
check_bs4() β None[source]#
Check if BeautifulSoup4 is installed.
Raises
ImportError β If BeautifulSoup4 is not installed.
download(path: str) β None[source]#
Download a file from a url.
Parameters
path β Path to the file.
folder_path: str#
load() β List[langchain.schema.Document][source]#
Load data into document objects.
Returns
List of documents.
load_all_recursively: bool#
parse_filename(url: str) β str[source]#
Parse the filename from a url.
Parameters
url β Url to parse the filename from.
Returns
The filename.
class langchain.document_loaders.BlockchainDocumentLoader(contract_address: str, blockchainType: langchain.document_loaders.blockchain.BlockchainType = BlockchainType.ETH_MAINNET, api_key: str = 'docs-demo', startToken: str = '', get_all_tokens: bool = False, max_execution_time: Optional[int] = None)[source]#
Loads elements from a blockchain smart contract into Langchain documents.
The supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet,
Polygon mainnet, and Polygon Mumbai testnet.
If no BlockchainType is specified, the default is Ethereum mainnet.
The Loader uses the Alchemy API to interact with the blockchain.
ALCHEMY_API_KEY environment variable must be set to use this loader.
The API returns 100 NFTs per request and can be paginated using the
startToken parameter.
If get_all_tokens is set to True, the loader will get all tokens
on the contract. Note that for contracts with a large number of tokens,
this may take a long time (e.g. 10k tokens is 100 requests).
Default value is false for this reason.
The max_execution_time (sec) can be set to limit the execution time
of the loader.
Future versions of this loader can:
Support additional Alchemy APIs (e.g. getTransactions, etc.)
Support additional blockchain APIs (e.g. Infura, Opensea, etc.)
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.CSVLoader(file_path: str, source_column: Optional[str] = None, csv_args: Optional[Dict] = None, encoding: Optional[str] = None)[source]#
Loads a CSV file into a list of documents.
Each document represents one row of the CSV file. Every row is converted into a
key/value pair and written to a new line in the document's page_content.
The source for each document loaded from csv is set to the value of the
file_path argument for all documents by default.
You can override this by setting the source_column argument to the
name of a column in the CSV file.
The source of each document will then be set to the value of the column
with the name specified in source_column.
Output Example:
column1: value1
column2: value2
column3: value3
load() → List[langchain.schema.Document][source]#
Load data into document objects.
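A short sketch of the source_column behavior described above; data.csv and its link column are hypothetical:
from langchain.document_loaders import CSVLoader

# Without source_column, every Document's source would be "data.csv";
# with it, each row's source comes from that row's "link" cell.
loader = CSVLoader(file_path="data.csv", source_column="link")
docs = loader.load()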
class langchain.document_loaders.ChatGPTLoader(log_file: str, num_logs: int = -1)[source]#
Loader that loads conversations from exported ChatGPT data.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.CoNLLULoader(file_path: str)[source]#
Load CoNLL-U files.
load() β List[langchain.schema.Document][source]#
Load from file path.
class langchain.document_loaders.CollegeConfidentialLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Loader that loads College Confidential webpages.
load() β List[langchain.schema.Document][source]#
Load webpage.
class langchain.document_loaders.ConfluenceLoader(url: str, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, cloud: Optional[bool] = True, number_of_retries: Optional[int] = 3, min_retry_seconds: Optional[int] = 2, max_retry_seconds: Optional[int] = 10, confluence_kwargs: Optional[dict] = None)[source]#
Load Confluence pages. Port of https://llamahub.ai/l/confluence
This currently supports both username/api_key and OAuth2 login.
Specify a list of page_ids and/or a space_key to load the corresponding pages into
Document objects; if both are specified, the union of both sets will be returned.
You can also specify a boolean include_attachments to include attachments. This
is set to False by default; if set to True, all attachments will be downloaded and
ConfluenceReader will extract the text from the attachments and add it to the
Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG,
SVG, Word and Excel.
Hint: space_key and page_id can both be found in the URL of a page in Confluence
- https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>
Example
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
url="https://yoursite.atlassian.com/wiki",
username="me",
api_key="12345"
)
documents = loader.load(space_key="SPACE",limit=50)
Parameters
url (str) – Base URL of the Confluence site, e.g. https://yoursite.atlassian.com/wiki
api_key (str, optional) – API key used together with username, defaults to None
username (str, optional) – Username used together with api_key, defaults to None
oauth2 (dict, optional) – OAuth2 credentials dict, defaults to {}
cloud (bool, optional) – Whether the instance is Confluence Cloud, defaults to True
number_of_retries (Optional[int], optional) β How many times to retry, defaults to 3
min_retry_seconds (Optional[int], optional) β defaults to 2
max_retry_seconds (Optional[int], optional) β defaults to 10
confluence_kwargs (dict, optional) β additional kwargs to initialize confluence with
Raises
ValueError β Errors while validating input
ImportError β Required dependencies not installed.
is_public_page(page: dict) β bool[source]#
Check if a page is publicly accessible.
load(space_key: Optional[str] = None, page_ids: Optional[List[str]] = None, label: Optional[str] = None, cql: Optional[str] = None, include_restricted_content: bool = False, include_archived_content: bool = False, include_attachments: bool = False, include_comments: bool = False, limit: Optional[int] = 50, max_pages: Optional[int] = 1000) β List[langchain.schema.Document][source]#
Parameters
space_key (Optional[str], optional) β Space key retrieved from a confluence URL, defaults to None
page_ids (Optional[List[str]], optional) β List of specific page IDs to load, defaults to None
label (Optional[str], optional) β Get all pages with this label, defaults to None
cql (Optional[str], optional) β CQL Expression, defaults to None
include_restricted_content (bool, optional) β defaults to False
include_archived_content (bool, optional) β Whether to include archived content,
defaults to False
include_attachments (bool, optional) β defaults to False
include_comments (bool, optional) β defaults to False
limit (int, optional) β Maximum number of pages to retrieve per request, defaults to 50
max_pages (int, optional) β Maximum number of pages to retrieve in total, defaults 1000
Raises
ValueError – Errors while validating input
ImportError – Required dependencies not installed.
Returns
A list of Document objects
Return type
List[Document]
paginate_request(retrieval_method: Callable, **kwargs: Any) β List[source]#
Paginate the various methods to retrieve groups of pages.
Unfortunately, due to page size, sometimes the Confluence API
doesn't match the limit value. If limit is >100, Confluence
seems to cap the response at 100. Also, due to the Atlassian Python
package, we don't get the "next" values from the "_links" key because
it only returns the value from the results key. So here, the pagination
starts from 0 and goes until max_pages, getting the limit number
of pages with each request. We have to manually check if there
are more docs based on the length of the returned list of pages, rather than
just checking for the presence of a next key in the response like this page
would have you do:
https://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/
Parameters
retrieval_method (callable) β Function used to retrieve docs
Returns
List of documents
Return type
List
process_attachment(page_id: str) β List[str][source]#
process_doc(link: str) β str[source]#
process_image(link: str) β str[source]#
process_page(page: dict, include_attachments: bool, include_comments: bool) β langchain.schema.Document[source]#
process_pages(pages: List[dict], include_restricted_content: bool, include_attachments: bool, include_comments: bool) β List[langchain.schema.Document][source]#
Process a list of pages into a list of documents.
process_pdf(link: str) β str[source]#
process_svg(link: str) β str[source]#
process_xls(link: str) β str[source]#
static validate_init_args(url: Optional[str] = None, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None) β Optional[List][source]#
Validates proper combinations of init arguments.
class langchain.document_loaders.DataFrameLoader(data_frame: Any, page_content_column: str = 'text')[source]#
Load Pandas DataFrames.
load() β List[langchain.schema.Document][source]#
Load from the dataframe.
class langchain.document_loaders.DiffbotLoader(api_token: str, urls: List[str], continue_on_failure: bool = True)[source]#
Loader that loads Diffbot file json.
load() β List[langchain.schema.Document][source]#
Extract text from Diffbot on all the URLs and return Document instances
class langchain.document_loaders.DirectoryLoader(path: str, glob: str = '**/[!.]*', silent_errors: bool = False, load_hidden: bool = False, loader_cls: typing.Union[typing.Type[langchain.document_loaders.unstructured.UnstructuredFileLoader], typing.Type[langchain.document_loaders.text.TextLoader], typing.Type[langchain.document_loaders.html_bs.BSHTMLLoader]] = <class 'langchain.document_loaders.unstructured.UnstructuredFileLoader'>, loader_kwargs: typing.Optional[dict] = None, recursive: bool = False, show_progress: bool = False, use_multithreading: bool = False, max_concurrency: int = 4)[source]#
Loading logic for loading documents from a directory.
load() β List[langchain.schema.Document][source]#
Load documents.
load_file(item: pathlib.Path, path: pathlib.Path, docs: List[langchain.schema.Document], pbar: Optional[Any]) β None[source]#
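A minimal sketch combining the glob, loader_cls and recursive parameters above; the directory path and pattern are illustrative:
from langchain.document_loaders import DirectoryLoader, TextLoader

# Load every Markdown file under ./docs recursively with TextLoader
# instead of the default UnstructuredFileLoader.
loader = DirectoryLoader("./docs", glob="**/*.md", loader_cls=TextLoader, recursive=True)
docs = loader.load()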
class langchain.document_loaders.DiscordChatLoader(chat_log: pd.DataFrame, user_id_col: str = 'ID')[source]#
Load Discord chat logs.
load() β List[langchain.schema.Document][source]#
Load all chat messages.
pydantic model langchain.document_loaders.DocugamiLoader[source]#
Loader that loads processed docs from Docugami.
To use, you should have the lxml python package installed.
field access_token: Optional[str] = None#
field api: str = 'https://api.docugami.com/v1preview1'#
field docset_id: Optional[str] = None#
field document_ids: Optional[Sequence[str]] = None#
field file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = None#
field min_chunk_size: int = 32#
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.Docx2txtLoader(file_path: str)[source]#
Loads a DOCX with docx2txt and chunks at character level.
Defaults to checking for a local file, but if the file is a web path, it downloads it
to a temporary file, uses that, and then cleans up the temporary file after completion.
load() β List[langchain.schema.Document][source]#
Load given path as single page.
class langchain.document_loaders.DuckDBLoader(query: str, database: str = ':memory:', read_only: bool = False, config: Optional[Dict[str, str]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]#
Loads a query result from DuckDB into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.EverNoteLoader(file_path: str, load_single_document: bool = True)[source]#
EverNote Loader.
Loads an EverNote notebook export file e.g. my_notebook.enex into Documents.
Instructions on producing this file can be found at
https://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML
Currently only the plain text in the note is extracted and stored as the contents
of the Document; any non-content metadata (e.g. 'author', 'created', 'updated', etc.,
but not 'content-raw' or 'resource') tags on the note will be extracted and stored
as metadata on the Document.
Parameters
file_path (str) – The path to the notebook export with a .enex extension
load_single_document (bool) – Whether or not to concatenate the content of all
notes into a single long Document. If this is set to True, the only metadata on
the document will be the 'source', which contains the file name of the export.
load() β List[langchain.schema.Document][source]#
Load documents from EverNote export file.
class langchain.document_loaders.FacebookChatLoader(path: str)[source]#
Loader that loads Facebook messages json directory dump.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.GCSDirectoryLoader(project_name: str, bucket: str, prefix: str = '')[source]#
Loading logic for loading documents from GCS.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.GCSFileLoader(project_name: str, bucket: str, blob: str)[source]#
Loading logic for loading documents from GCS.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.GitLoader(repo_path: str, clone_url: Optional[str] = None, branch: Optional[str] = 'main', file_filter: Optional[Callable[[str], bool]] = None)[source]#
Loads files from a Git repository into a list of documents.
Repository can be local on disk available at repo_path,
or remote at clone_url that will be cloned to repo_path.
Currently supports only text files.
Each document represents one file in the repository. The path points to
the local Git repository, and the branch specifies the branch to load
files from. By default, it loads from the main branch.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
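A sketch of cloning a remote repository and keeping only Python files; the repository URL, local path and branch are illustrative:
from langchain.document_loaders import GitLoader

loader = GitLoader(
    repo_path="./example_repo",  # where the clone will live locally
    clone_url="https://github.com/hwchase17/langchain",  # illustrative repo
    branch="master",
    file_filter=lambda file_path: file_path.endswith(".py"),  # only .py files
)
docs = loader.load()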
class langchain.document_loaders.GitbookLoader(web_page: str, load_all_paths: bool = False, base_url: Optional[str] = None, content_selector: str = 'main')[source]#
Load GitBook data.
Load from either a single page, or
load all (relative) paths in the navbar.
load() β List[langchain.schema.Document][source]#
Fetch text from one single GitBook page.
class langchain.document_loaders.GoogleApiClient(credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json'), service_account_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json'), token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json'))[source]#
A Generic Google Api Client.
To use, you should have the google_auth_oauthlib, youtube_transcript_api and google
python packages installed.
As the google api expects credentials, you need to set up a google account and
register your service. See https://developers.google.com/docs/api/quickstart/python
Example
from langchain.document_loaders import GoogleApiClient
google_api_client = GoogleApiClient(
service_account_path=Path("path_to_your_sec_file.json")
)
credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')#
service_account_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')#
token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')#
classmethod validate_channel_or_videoIds_is_set(values: Dict[str, Any]) β Dict[str, Any][source]#
Validate that either folder_id or document_ids is set, but not both.
class langchain.document_loaders.GoogleApiYoutubeLoader(google_api_client: langchain.document_loaders.youtube.GoogleApiClient, channel_name: Optional[str] = None, video_ids: Optional[List[str]] = None, add_video_info: bool = True, captions_language: str = 'en', continue_on_failure: bool = False)[source]#
Loader that loads all Videos from a Channel.
To use, you should have the googleapiclient and youtube_transcript_api
python packages installed.
As the service needs a google_api_client, you first have to initialize
the GoogleApiClient.
Additionally you have to either provide a channel name or a list of video ids.
See https://developers.google.com/docs/api/quickstart/python
Example
from langchain.document_loaders import GoogleApiClient
from langchain.document_loaders import GoogleApiYoutubeLoader
google_api_client = GoogleApiClient(
service_account_path=Path("path_to_your_sec_file.json")
)
loader = GoogleApiYoutubeLoader(
google_api_client=google_api_client,
channel_name = "CodeAesthetic"
)
loader.load()
add_video_info: bool = True#
captions_language: str = 'en'#
channel_name: Optional[str] = None#
continue_on_failure: bool = False#
google_api_client: langchain.document_loaders.youtube.GoogleApiClient#
load() β List[langchain.schema.Document][source]#
Load documents.
classmethod validate_channel_or_videoIds_is_set(values: Dict[str, Any]) β Dict[str, Any][source]#
Validate that either a channel name or a list of video ids is set, but not both.
video_ids: Optional[List[str]] = None#
pydantic model langchain.document_loaders.GoogleDriveLoader[source]#
Loader that loads Google Docs from Google Drive.
Validators
validate_credentials_path Β» credentials_path
validate_inputs Β» all fields
field credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')#
field document_ids: Optional[List[str]] = None#
field file_ids: Optional[List[str]] = None#
field file_types: Optional[Sequence[str]] = None#
field folder_id: Optional[str] = None#
field load_trashed_files: bool = False#
field recursive: bool = False#
field service_account_key: pathlib.Path = PosixPath('/home/docs/.credentials/keys.json')#
field token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')#
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.GutenbergLoader(file_path: str)[source]#
Loader that uses urllib to load .txt web files.
load() β List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.HNLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Load Hacker News data from either main page results or the comments page.
load() β List[langchain.schema.Document][source]#
Get important HN webpage information.
Components are:
title
content
source url,
time of post
author of the post
number of comments
rank of the post
load_comments(soup_info: Any) β List[langchain.schema.Document][source]#
Load comments from a HN post.
load_results(soup: Any) β List[langchain.schema.Document][source]#
Load items from an HN page.
class langchain.document_loaders.HuggingFaceDatasetLoader(path: str, page_content_column: str = 'text', name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, cache_dir: Optional[str] = None, keep_in_memory: Optional[bool] = None, save_infos: bool = False, use_auth_token: Optional[Union[bool, str]] = None, num_proc: Optional[int] = None)[source]#
Loading logic for loading documents from the Hugging Face Hub.
lazy_load() β Iterator[langchain.schema.Document][source]#
Load documents lazily.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.IFixitLoader(web_path: str)[source]#
Load iFixit repair guides, device wikis and answers.
iFixit is the largest, open repair community on the web. The site contains nearly
100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is
licensed under CC-BY.
This loader will allow you to download the text of a repair guide, text of Q&As
and wikis from devices on iFixit using their open APIs and web scraping.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
load_device(url_override: Optional[str] = None, include_guides: bool = True) β List[langchain.schema.Document][source]#
load_guide(url_override: Optional[str] = None) β List[langchain.schema.Document][source]#
load_questions_and_answers(url_override: Optional[str] = None) β List[langchain.schema.Document][source]#
static load_suggestions(query: str = '', doc_type: str = 'all') β List[langchain.schema.Document][source]#
class langchain.document_loaders.IMSDbLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Loader that loads IMSDb webpages.
load() β List[langchain.schema.Document][source]#
Load webpage.
class langchain.document_loaders.ImageCaptionLoader(path_images: Union[str, List[str]], blip_processor: str = 'Salesforce/blip-image-captioning-base', blip_model: str = 'Salesforce/blip-image-captioning-base')[source]#
Loader that loads the captions of an image.
load() → List[langchain.schema.Document][source]#
Load from a list of image files.
class langchain.document_loaders.JSONLoader(file_path: Union[str, pathlib.Path], jq_schema: str, content_key: Optional[str] = None, metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None, text_content: bool = True)[source]#
Loads a JSON file, using a provided jq schema to extract the text into
documents.
Example
[{"text": …}, {"text": …}, {"text": …}] -> schema = .[].text
{"key": [{"text": …}, {"text": …}, {"text": …}]} -> schema = .key[].text
["", "", ""] -> schema = .[]
load() β List[langchain.schema.Document][source]#
Load and return documents from the JSON file.
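A sketch matching the second schema example above, assuming the jq python package is installed and a hypothetical chat.json shaped like the second example:
from langchain.document_loaders import JSONLoader

# Extract the "text" field of every item under the top-level "key" array.
loader = JSONLoader(file_path="chat.json", jq_schema=".key[].text")
docs = loader.load()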
class langchain.document_loaders.JoplinLoader(access_token: Optional[str] = None, port: int = 41184, host: str = 'localhost')[source]#
Loader that fetches notes from Joplin.
In order to use this loader, you need to have Joplin running with the
Web Clipper enabled (look for "Web Clipper" in the app settings).
To get the access token, you need to go to the Web Clipper options and
under "Advanced Options" you will find the access token.
You can find more information about the Web Clipper service here:
https://joplinapp.org/clipper/
lazy_load() β Iterator[langchain.schema.Document][source]#
A lazy loader for document content.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.MWDumpLoader(file_path: str, encoding: Optional[str] = 'utf8')[source]#
Load MediaWiki dump from XML file.
Example
from langchain.document_loaders import MWDumpLoader
loader = MWDumpLoader(
file_path="myWiki.xml",
encoding="utf8"
)
docs = loader.load()
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000, chunk_overlap=0
)
texts = text_splitter.split_documents(docs)
Parameters
file_path (str) β XML local file path
encoding (str, optional) – Charset encoding, defaults to 'utf8'
load() β List[langchain.schema.Document][source]#
Load from file path.
class langchain.document_loaders.MastodonTootsLoader(mastodon_accounts: Sequence[str], number_toots: Optional[int] = 100, exclude_replies: bool = False, access_token: Optional[str] = None, api_base_url: str = 'https://mastodon.social')[source]#
Mastodon toots loader.
load() β List[langchain.schema.Document][source]#
Load toots into documents.
class langchain.document_loaders.MathpixPDFLoader(file_path: str, processed_file_format: str = 'mmd', max_wait_time_seconds: int = 500, should_clean_pdf: bool = False, **kwargs: Any)[source]#
clean_pdf(contents: str) β str[source]#
property data: dict#
get_processed_pdf(pdf_id: str) β str[source]#
property headers: dict#
load() β List[langchain.schema.Document][source]#
Load data into document objects.
send_pdf() β str[source]#
property url: str#
wait_for_processing(pdf_id: str) β None[source]# | https://python.langchain.com/en/latest/reference/modules/document_loaders.html |
980fab099ed2-19 | property url: str#
wait_for_processing(pdf_id: str) β None[source]#
class langchain.document_loaders.ModernTreasuryLoader(resource: str, organization_id: Optional[str] = None, api_key: Optional[str] = None)[source]#
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.NotebookLoader(path: str, include_outputs: bool = False, max_output_length: int = 10, remove_newline: bool = False, traceback: bool = False)[source]#
Loader that loads .ipynb notebook files.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.NotionDBLoader(integration_token: str, database_id: str, request_timeout_sec: Optional[int] = 10)[source]#
Notion DB Loader.
Reads content from pages within a Notion Database.
:param integration_token: Notion integration token.
:type integration_token: str
:param database_id: Notion database id.
:type database_id: str
:param request_timeout_sec: Timeout for Notion requests in seconds.
:type request_timeout_sec: int
load() β List[langchain.schema.Document][source]#
Load documents from the Notion database.
:returns: List of documents.
:rtype: List[Document]
load_page(page_id: str) β langchain.schema.Document[source]#
Read a page.
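A minimal sketch; both identifier values are placeholders to be taken from your Notion integration settings:
from langchain.document_loaders import NotionDBLoader

loader = NotionDBLoader(
    integration_token="NOTION_INTEGRATION_TOKEN",  # placeholder
    database_id="NOTION_DATABASE_ID",  # placeholder
    request_timeout_sec=30,
)
docs = loader.load()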
class langchain.document_loaders.NotionDirectoryLoader(path: str)[source]#
Loader that loads Notion directory dump.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.ObsidianLoader(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]#
Loader that loads Obsidian files from disk.
FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL)#
load() β List[langchain.schema.Document][source]#
Load documents.
pydantic model langchain.document_loaders.OneDriveLoader[source]#
field auth_with_token: bool = False#
field drive_id: str [Required]#
field folder_path: Optional[str] = None#
field object_ids: Optional[List[str]] = None#
field settings: langchain.document_loaders.onedrive._OneDriveSettings [Optional]#
load() β List[langchain.schema.Document][source]#
Loads all supported document files from the specified OneDrive drive
and returns a list of Document objects.
Returns
A list of Document objects
representing the loaded documents.
Return type
List[Document]
Raises
ValueError – If the specified drive ID
does not correspond to a drive in the OneDrive storage.
class langchain.document_loaders.OnlinePDFLoader(file_path: str)[source]#
Loader that loads online PDFs.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.OutlookMessageLoader(file_path: str)[source]#
Loader that loads Outlook Message files using extract_msg.
TeamMsgExtractor/msg-extractor
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.PDFMinerLoader(file_path: str)[source]#
Loader that uses PDFMiner to load PDF files.
lazy_load() β Iterator[langchain.schema.Document][source]#
Lazily load documents.
load() β List[langchain.schema.Document][source]#
Eagerly load the content.
class langchain.document_loaders.PDFMinerPDFasHTMLLoader(file_path: str)[source]#
Loader that uses PDFMiner to load PDF files as HTML content.
load() β List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.PDFPlumberLoader(file_path: str, text_kwargs: Optional[Mapping[str, Any]] = None)[source]#
Loader that uses pdfplumber to load PDF files.
load() β List[langchain.schema.Document][source]#
Load file.
langchain.document_loaders.PagedPDFSplitter#
alias of langchain.document_loaders.pdf.PyPDFLoader
class langchain.document_loaders.PlaywrightURLLoader(urls: List[str], continue_on_failure: bool = True, headless: bool = True, remove_selectors: Optional[List[str]] = None)[source]#
Loader that uses Playwright to load a page and unstructured to load the html.
This is useful for loading pages that require JavaScript to render.
urls#
List of URLs to load.
Type
List[str]
continue_on_failure#
If True, continue loading other URLs on failure.
Type
bool
headless#
If True, the browser will run in headless mode.
Type
bool
load() β List[langchain.schema.Document][source]#
Load the specified URLs using Playwright and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
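A minimal sketch, assuming the playwright and unstructured packages are installed; the URL and selectors are illustrative:
from langchain.document_loaders import PlaywrightURLLoader

# remove_selectors strips page chrome (here header/footer) before extraction.
loader = PlaywrightURLLoader(
    urls=["https://example.com"], remove_selectors=["header", "footer"]
)
docs = loader.load()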
class langchain.document_loaders.PsychicLoader(api_key: str, connector_id: str, connection_id: str)[source]#
Loader that loads documents from Psychic.dev.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.PyMuPDFLoader(file_path: str)[source]#
Loader that uses PyMuPDF to load PDF files.
load(**kwargs: Optional[Any]) β List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.PyPDFDirectoryLoader(path: str, glob: str = '**/[!.]*.pdf', silent_errors: bool = False, load_hidden: bool = False, recursive: bool = False)[source]#
Loads a directory with PDF files with pypdf and chunks at character level.
Loader also stores page numbers in metadatas.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.PyPDFLoader(file_path: str)[source]#
Loads a PDF with pypdf and chunks at character level.
Loader also stores page numbers in metadatas.
lazy_load() β Iterator[langchain.schema.Document][source]#
Lazy load given path as pages.
load() β List[langchain.schema.Document][source]#
Load given path as pages.
class langchain.document_loaders.PyPDFium2Loader(file_path: str)[source]#
Loads a PDF with pypdfium2 and chunks at character level.
lazy_load() β Iterator[langchain.schema.Document][source]#
Lazy load given path as pages.
load() β List[langchain.schema.Document][source]#
Load given path as pages.
class langchain.document_loaders.PythonLoader(file_path: str)[source]#
Load Python files, respecting any non-default encoding if specified.
class langchain.document_loaders.ReadTheDocsLoader(path: Union[str, pathlib.Path], encoding: Optional[str] = None, errors: Optional[str] = None, custom_html_tag: Optional[Tuple[str, dict]] = None, **kwargs: Optional[Any])[source]#
Loader that loads ReadTheDocs documentation directory dump.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.RedditPostsLoader(client_id: str, client_secret: str, user_agent: str, search_queries: Sequence[str], mode: str, categories: Sequence[str] = ['new'], number_posts: Optional[int] = 10)[source]#
Reddit posts loader.
Read posts on a subreddit.
First you need to go to
https://www.reddit.com/prefs/apps/
and create your application.
load() β List[langchain.schema.Document][source]#
Load reddits.
class langchain.document_loaders.RoamLoader(path: str)[source]#
Loader that loads Roam files from disk.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.S3DirectoryLoader(bucket: str, prefix: str = '')[source]#
Loading logic for loading documents from s3.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.S3FileLoader(bucket: str, key: str)[source]#
Loading logic for loading documents from s3.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.SRTLoader(file_path: str)[source]#
Loader for .srt (subtitle) files.
load() β List[langchain.schema.Document][source]#
Load the file using pysrt.
class langchain.document_loaders.SeleniumURLLoader(urls: List[str], continue_on_failure: bool = True, browser: Literal['chrome', 'firefox'] = 'chrome', binary_location: Optional[str] = None, executable_path: Optional[str] = None, headless: bool = True, arguments: List[str] = [])[source]#
Loader that uses Selenium to load a page and unstructured to load the html.
This is useful for loading pages that require JavaScript to render.
urls#
List of URLs to load.
Type
List[str]
continue_on_failure#
If True, continue loading other URLs on failure.
Type
bool
browser#
The browser to use, either 'chrome' or 'firefox'.
Type
str
binary_location#
The location of the browser binary.
Type
Optional[str]
executable_path#
The path to the browser executable.
Type
Optional[str]
headless#
If True, the browser will run in headless mode.
Type
bool
arguments [List[str]]
List of arguments to pass to the browser.
load() β List[langchain.schema.Document][source]#
Load the specified URLs using Selenium and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
class langchain.document_loaders.SitemapLoader(web_path: str, filter_urls: Optional[List[str]] = None, parsing_function: Optional[Callable] = None, blocksize: Optional[int] = None, blocknum: int = 0, meta_function: Optional[Callable] = None, is_local: bool = False)[source]#
Loader that fetches a sitemap and loads those URLs.
load() β List[langchain.schema.Document][source]#
Load sitemap.
parse_sitemap(soup: Any) β List[dict][source]#
Parse sitemap xml and load into a list of dicts.
class langchain.document_loaders.SlackDirectoryLoader(zip_path: str, workspace_url: Optional[str] = None)[source]#
Loader for loading documents from a Slack directory dump.
load() β List[langchain.schema.Document][source]#
Load and return documents from the Slack directory dump.
class langchain.document_loaders.SpreedlyLoader(access_token: str, resource: str)[source]#
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.StripeLoader(resource: str, access_token: Optional[str] = None)[source]#
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.TelegramChatApiLoader(chat_entity: Optional[EntityLike] = None, api_id: Optional[int] = None, api_hash: Optional[str] = None, username: Optional[str] = None, file_path: str = 'telegram_data.json')[source]#
Loader that loads Telegram chat json directory dump.
async fetch_data_from_telegram() β None[source]#
Fetch data from Telegram API and save it as a JSON file.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.TelegramChatFileLoader(path: str)[source]#
Loader that loads Telegram chat json directory dump.
load() β List[langchain.schema.Document][source]#
Load documents.
langchain.document_loaders.TelegramChatLoader#
alias of langchain.document_loaders.telegram.TelegramChatFileLoader
class langchain.document_loaders.TextLoader(file_path: str, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]#
Load text files.
Parameters
file_path – Path to the file to load.
encoding – File encoding to use. If None, the file will be loaded
with the default system encoding.
autodetect_encoding – Whether to try to autodetect the file encoding
if the specified encoding fails.
load() β List[langchain.schema.Document][source]#
Load from file path.
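A short sketch of the encoding fallback described above; notes.txt is a hypothetical file:
from langchain.document_loaders import TextLoader

# If utf-8 decoding fails, fall back to charset autodetection.
loader = TextLoader("notes.txt", encoding="utf-8", autodetect_encoding=True)
docs = loader.load()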
class langchain.document_loaders.ToMarkdownLoader(url: str, api_key: str)[source]#
Loader that loads HTML to markdown using 2markdown.
lazy_load() β Iterator[langchain.schema.Document][source]#
Lazily load the file.
load() β List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.TomlLoader(source: Union[str, pathlib.Path])[source]#
A TOML document loader that inherits from the BaseLoader class.
This class can be initialized with either a single source file or a source
directory containing TOML files.
lazy_load() β Iterator[langchain.schema.Document][source]#
Lazily load the TOML documents from the source file or directory.
load() β List[langchain.schema.Document][source]#
Load and return all documents.
class langchain.document_loaders.TwitterTweetLoader(auth_handler: Union[OAuthHandler, OAuth2BearerHandler], twitter_users: Sequence[str], number_tweets: Optional[int] = 100)[source]#
Twitter tweets loader.
Read tweets of user twitter handle.
First you need to go to
https://developer.twitter.com/en/docs/twitter-api/getting-started/getting-access-to-the-twitter-api
to get your token, and create a v2 version of the app.
classmethod from_bearer_token(oauth2_bearer_token: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) β langchain.document_loaders.twitter.TwitterTweetLoader[source]#
Create a TwitterTweetLoader from OAuth2 bearer token.
classmethod from_secrets(access_token: str, access_token_secret: str, consumer_key: str, consumer_secret: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) β langchain.document_loaders.twitter.TwitterTweetLoader[source]#
Create a TwitterTweetLoader from access tokens and secrets.
load() β List[langchain.schema.Document][source]#
Load tweets.
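A minimal sketch using the bearer-token constructor above; the token is a placeholder and the handle is illustrative:
from langchain.document_loaders import TwitterTweetLoader

loader = TwitterTweetLoader.from_bearer_token(
    oauth2_bearer_token="YOUR_BEARER_TOKEN",  # placeholder
    twitter_users=["hwchase17"],  # illustrative handle
    number_tweets=50,
)
docs = loader.load()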
class langchain.document_loaders.UnstructuredAPIFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]#
Loader that uses the unstructured web API to load file IO objects.
class langchain.document_loaders.UnstructuredAPIFileLoader(file_path: Union[str, List[str]] = '', mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]#
Loader that uses the unstructured web API to load files.
class langchain.document_loaders.UnstructuredEPubLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load epub files.
class langchain.document_loaders.UnstructuredEmailLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load email files.
class langchain.document_loaders.UnstructuredFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load file IO objects.
class langchain.document_loaders.UnstructuredFileLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load files.
class langchain.document_loaders.UnstructuredHTMLLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load HTML files.
class langchain.document_loaders.UnstructuredImageLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load image files, such as PNGs and JPGs.
class langchain.document_loaders.UnstructuredMarkdownLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load markdown files.
class langchain.document_loaders.UnstructuredODTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load open office ODT files.
class langchain.document_loaders.UnstructuredPDFLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load PDF files.
class langchain.document_loaders.UnstructuredPowerPointLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load powerpoint files.
class langchain.document_loaders.UnstructuredRTFLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load rtf files.
class langchain.document_loaders.UnstructuredURLLoader(urls: List[str], continue_on_failure: bool = True, mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load HTML files.
load() β List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.UnstructuredWordDocumentLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load word documents.
class langchain.document_loaders.WeatherDataLoader(client: langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper, places: Sequence[str])[source]#
Weather Reader.
Reads the forecast & current weather of any location using OpenWeatherMap's free
API. Check out https://openweathermap.org/appid for more on how to generate a free
OpenWeatherMap API key.
classmethod from_params(places: Sequence[str], *, openweathermap_api_key: Optional[str] = None) β langchain.document_loaders.weather.WeatherDataLoader[source]#
lazy_load() β Iterator[langchain.schema.Document][source]#
Lazily load weather data for the given locations.
load() β List[langchain.schema.Document][source]#
Load weather data for the given locations.
class langchain.document_loaders.WebBaseLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Loader that uses urllib and beautiful soup to load webpages.
aload() β List[langchain.schema.Document][source]#
Load text from the urls in web_path asynchronously into Documents.
default_parser: str = 'html.parser'#
Default parser to use for BeautifulSoup.
async fetch_all(urls: List[str]) β Any[source]#
Fetch all urls concurrently with rate limiting.
load() β List[langchain.schema.Document][source]#
Load text from the url(s) in web_path.
requests_per_second: int = 2#
Max number of concurrent requests to make.
scrape(parser: Optional[str] = None) β Any[source]#
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) β List[Any][source]#
Fetch all urls, then return soups for all results.
property web_path: str#
web_paths: List[str]#
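A minimal sketch of concurrent loading with rate limiting; the URLs are illustrative:
from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader(["https://example.com", "https://example.org"])
loader.requests_per_second = 1  # throttle concurrent requests
docs = loader.aload()  # fetch all urls concurrently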
class langchain.document_loaders.WhatsAppChatLoader(path: str)[source]#
Loader that loads WhatsApp messages text file.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.WikipediaLoader(query: str, lang: str = 'en', load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False)[source]#
Loads a query result from www.wikipedia.org into a list of Documents.
The hard limit on the number of downloaded Documents is 300 for now.
Each wiki page represents one Document.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.YoutubeLoader(video_id: str, add_video_info: bool = False, language: str = 'en', continue_on_failure: bool = False)[source]#
Loader that loads Youtube transcripts.
static extract_video_id(youtube_url: str) β str[source]#
Extract the video id from common YouTube urls.
classmethod from_youtube_url(youtube_url: str, **kwargs: Any) β langchain.document_loaders.youtube.YoutubeLoader[source]#
Given youtube URL, load video.
load() β List[langchain.schema.Document][source]#
Load documents.
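A minimal sketch using the from_youtube_url constructor above; the video URL is illustrative:
from langchain.document_loaders import YoutubeLoader

# from_youtube_url extracts the video id from the URL for you.
loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ", add_video_info=False
)
docs = loader.load()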
Document Transformers#
Transform documents
pydantic model langchain.document_transformers.EmbeddingsRedundantFilter[source]#
Filter that drops redundant documents by comparing their embeddings.
field embeddings: langchain.embeddings.base.Embeddings [Required]#
Embeddings to use for embedding document contents.
field similarity_fn: Callable = <function cosine_similarity>#
Similarity function for comparing documents. Function expected to take as input
two matrices (List[List[float]]) and return a matrix of scores where higher values
indicate greater similarity.
field similarity_threshold: float = 0.95#
Threshold for determining when two documents are similar enough
to be considered redundant.
async atransform_documents(documents: Sequence[langchain.schema.Document], **kwargs: Any) β Sequence[langchain.schema.Document][source]#
Asynchronously transform a list of documents.
transform_documents(documents: Sequence[langchain.schema.Document], **kwargs: Any) β Sequence[langchain.schema.Document][source]#
Filter down documents.
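A short sketch of dropping near-duplicate documents; OpenAIEmbeddings is just one possible Embeddings implementation and assumes OPENAI_API_KEY is set:
from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document

docs = [
    Document(page_content="LangChain loads documents."),
    Document(page_content="LangChain loads documents."),  # near-duplicate
]
redundant_filter = EmbeddingsRedundantFilter(embeddings=OpenAIEmbeddings())
unique_docs = redundant_filter.transform_documents(docs)  # duplicates dropped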
langchain.document_transformers.get_stateful_documents(documents: Sequence[langchain.schema.Document]) β Sequence[langchain.document_transformers._DocumentWithState][source]#
LLMs#
Wrappers on top of large language model APIs.
pydantic model langchain.llms.AI21[source]#
Wrapper around AI21 large language models.
To use, you should have the environment variable AI21_API_KEY
set with your API key.
Example
from langchain.llms import AI21
ai21 = AI21(model="j2-jumbo-instruct")
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field base_url: Optional[str] = None#
Base url to use, if None decides based on model name.
field countPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens according to count.
field frequencyPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens according to frequency.
field logitBias: Optional[Dict[str, float]] = None#
Adjust the probability of specific tokens being generated.
field maxTokens: int = 256#
The maximum number of tokens to generate in the completion.
field minTokens: int = 0#
The minimum number of tokens to generate in the completion.
field model: str = 'j2-jumbo-instruct'#
Model name to use.
field numResults: int = 1#
How many completions to generate for each prompt.
field presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens.
field temperature: float = 0.7#
What sampling temperature to use.
field topP: float = 1.0#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
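A sketch of the call interface, continuing the Example above; the prompt and stop sequence are illustrative and AI21_API_KEY must be set:
from langchain.llms import AI21

ai21 = AI21(model="j2-jumbo-instruct", temperature=0.7, maxTokens=100)
# __call__ checks the cache, runs the model, and returns the completion text.
text = ai21("Explain what a document loader does.", stop=["\n\n"])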
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
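A batch-generation sketch (the prompts are placeholders):
result = ai21.generate(["Tell me a joke.", "Tell me a poem."])
# result is an LLMResult with one list of generations per prompt
print(len(result.generations))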
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
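A sketch of message-based prediction (the message content is a placeholder):
from langchain.schema import HumanMessage

message = ai21.predict_messages([HumanMessage(content="Tell me a joke.")])
print(message.content)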
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.AlephAlpha[source]#
Wrapper around Aleph Alpha large language models.
To use, you should have the aleph_alpha_client python package installed, and the
environment variable ALEPH_ALPHA_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Parameters are explained more in depth here:
Aleph-Alpha/aleph-alpha-client
Example
from langchain.llms import AlephAlpha
aleph_alpha = AlephAlpha(aleph_alpha_api_key="my-api-key")
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field aleph_alpha_api_key: Optional[str] = None#
API key for Aleph Alpha API.
field best_of: Optional[int] = None#
returns the one with the "best of" results
(highest log probability per token)
field completion_bias_exclusion_first_token_only: bool = False#
Only consider the first token for the completion_bias_exclusion.
field contextual_control_threshold: Optional[float] = None#
If set to None, attention control parameters only apply to those tokens that have
explicitly been set in the request.
If set to a non-None value, control parameters are also applied to similar tokens.
field control_log_additive: Optional[bool] = True#
True: apply control by adding the log(control_factor) to attention scores.
False: (attention_scores - attention_scores.min(-1)) * control_factor
field echo: bool = False#
Echo the prompt in the completion.
field frequency_penalty: float = 0.0#
Penalizes repeated tokens according to frequency.
field log_probs: Optional[int] = None#
Number of top log probabilities to be returned for each generated token.
field logit_bias: Optional[Dict[int, float]] = None#
The logit bias allows you to influence the likelihood of generating tokens.
field maximum_tokens: int = 64#
The maximum number of tokens to be generated.
field minimum_tokens: Optional[int] = 0#
Generate at least this number of tokens.
field model: Optional[str] = 'luminous-base'#
Model name to use.
field n: int = 1#
How many completions to generate for each prompt.
field penalty_bias: Optional[str] = None#
Penalty bias for the completion.
field penalty_exceptions: Optional[List[str]] = None#
List of strings that may be generated without penalty,
regardless of other penalty settings
field penalty_exceptions_include_stop_sequences: Optional[bool] = None#
Whether stop_sequences should be included in penalty_exceptions.
field presence_penalty: float = 0.0#
Penalizes repeated tokens.
field raw_completion: bool = False#
Force the raw completion of the model to be returned.
field repetition_penalties_include_completion: bool = True#
Flag deciding whether presence penalty or frequency penalty
are updated from the completion.
field repetition_penalties_include_prompt: Optional[bool] = False#
Flag deciding whether presence penalty or frequency penalty are
updated from the prompt.
field stop_sequences: Optional[List[str]] = None#
Stop sequences to use.
field temperature: float = 0.0#
A non-negative float that tunes the degree of randomness in generation.
field tokens: Optional[bool] = False#
Return tokens of the completion.
field top_k: int = 0#
Number of most likely tokens to consider at each step.
field top_p: float = 0.0#
Total probability mass of tokens to consider at each step.
field use_multiplicative_presence_penalty: Optional[bool] = False#
Flag deciding whether presence penalty is applied
multiplicatively (True) or additively (False).
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Anthropic[source]#
Wrapper around Anthropic's large language models.
To use, you should have the anthropic python package installed, and the
environment variable ANTHROPIC_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
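The example snippet is missing from this entry; a minimal sketch consistent with the sibling entries (assumes ANTHROPIC_API_KEY is set; the prompt is a placeholder):
from langchain.llms import Anthropic

anthropic = Anthropic(model="claude-v1")
response = anthropic("How do rainbows form?")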
Validators
raise_deprecation Β» all fields
raise_warning Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field default_request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout for requests to Anthropic Completion API. Default is 600 seconds.
field max_tokens_to_sample: int = 256#
Denotes the number of tokens to predict per generation.
field model: str = 'claude-v1'#
Model name to use.
field streaming: bool = False#
Whether to stream the results.
field temperature: Optional[float] = None#
A non-negative float that tunes the degree of randomness in generation.
field top_k: Optional[int] = None#
Number of most likely tokens to consider at each step.
field top_p: Optional[float] = None#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int[source]#
Calculate number of tokens.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) β Generator[source]#
Call Anthropic completion_stream and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt β The prompt to pass into the model.
stop β Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from Anthropic.
Example
prompt = "Write a poem about a stream."
prompt = f"\n\nHuman: {prompt}\n\nAssistant:"
generator = anthropic.stream(prompt)
for token in generator:
    print(token)
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Anyscale[source]#
Wrapper around Anyscale Services.
To use, you should have the environment variable ANYSCALE_SERVICE_URL,
ANYSCALE_SERVICE_ROUTE and ANYSCALE_SERVICE_TOKEN set with your Anyscale
Service, or pass it as a named parameter to the constructor.
Example
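The example snippet is missing here; a minimal sketch (assumes ANYSCALE_SERVICE_URL, ANYSCALE_SERVICE_ROUTE and ANYSCALE_SERVICE_TOKEN are set in the environment; the prompt is a placeholder):
from langchain.llms import Anyscale

anyscale = Anyscale()
response = anyscale("Tell me a joke.")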
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field model_kwargs: Optional[dict] = None#
Keyword arguments to pass to the model. Reserved for future use.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.AzureOpenAI[source]#
Wrapper around Azure-specific OpenAI large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import AzureOpenAI
openai = AzureOpenAI(model_name="text-davinci-003")
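Because Azure serves models through named deployments, a sketch that also sets the deployment_name field ("my-deployment" is a placeholder):
openai = AzureOpenAI(deployment_name="my-deployment", model_name="text-davinci-003")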
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#
Set of special tokens that are allowed.
field batch_size: int = 20#
Batch size to use when passing multiple documents to generate.
field best_of: int = 1#
Generates best_of completions server-side and returns the "best".
field deployment_name: str = ''#
Deployment name to use.
field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#
Set of special tokens that are not allowed.
field frequency_penalty: float = 0#
Penalizes repeated tokens according to frequency.
field logit_bias: Optional[Dict[str, float]] [Optional]#
Adjust the probability of specific tokens being generated.
field max_retries: int = 6#
Maximum number of retries to make when generating.
field max_tokens: int = 256#
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'text-davinci-003' (alias 'model')#
Model name to use.
field n: int = 1#
How many completions to generate for each prompt.
field presence_penalty: float = 0#
Penalizes repeated tokens.
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout for requests to OpenAI completion API. Default is 600 seconds.
field streaming: bool = False#
Whether to stream the results or not.
field temperature: float = 0.7#
What sampling temperature to use.
field top_p: float = 1#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) β langchain.schema.LLMResult#
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) β List[List[str]]#
Get the sub prompts for llm call.
get_token_ids(text: str) β List[int]#
Get the token IDs using the tiktoken package.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
max_tokens_for_prompt(prompt: str) β int#
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt β The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
modelname_to_contextsize(modelname: str) β int#
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname β The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
prep_streaming_params(stop: Optional[List[str]] = None) β Dict[str, Any]#
Prepare the params for streaming.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) β Generator#
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt β The prompts to pass into the model.
stop β Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
    print(token)
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Banana[source]#
Wrapper around Banana large language models.
To use, you should have the banana-dev python package installed,
and the environment variable BANANA_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
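The example snippet is missing here; a minimal sketch (assumes BANANA_API_KEY is set; "my-model-key" is a placeholder for your model endpoint key):
from langchain.llms import Banana

banana = Banana(model_key="my-model-key")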
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field model_key: str = ''#
Model endpoint to use.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Beam[source]#
Wrapper around the Beam API for the gpt2 large language model.
To use, you should have the beam-sdk python package installed,
and the environment variable BEAM_CLIENT_ID set with your client id
and BEAM_CLIENT_SECRET set with your client secret. Information on how
to get these is available here: https://docs.beam.cloud/account/api-keys.
The wrapper can then be called as follows, where the name, cpu, memory, gpu,
python version, and python packages can be updated accordingly. Once deployed,
the instance can be called.
llm = Beam(model_name="gpt2",
           name="langchain-gpt2",
           cpu=8,
           memory="32Gi",
           gpu="A10G",
           python_version="python3.8",
           python_packages=[
               "diffusers[torch]>=0.10",
               "transformers",
               "torch",
               "pillow",
               "accelerate",
               "safetensors",
               "xformers"],
           max_length=50)
llm._deploy()
call_result = llm._call(input)
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
field url: str = ''#
Model endpoint to use.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
app_creation() β None[source]#
Creates a Python file which will contain your Beam app definition.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
run_creation() β None[source]#
Creates a Python file which will be deployed on beam.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.CTransformers[source]#
Wrapper around the C Transformers LLM interface.
To use, you should have the ctransformers python package installed.
See marella/ctransformers
Example
from langchain.llms import CTransformers
llm = CTransformers(model="/path/to/ggml-gpt-2.bin", model_type="gpt2")
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field config: Optional[Dict[str, Any]] = None#
The config parameters.
See marella/ctransformers
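A sketch of passing generation parameters through config (the keys follow the ctransformers documentation; the values and model repo are illustrative):
config = {"max_new_tokens": 256, "repetition_penalty": 1.1}
llm = CTransformers(model="marella/gpt-2-ggml", config=config)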
field lib: Optional[str] = None#
The path to a shared library or one of avx2, avx, basic.
field model: str [Required]#
The path to a model file or directory or the name of a Hugging Face Hub
model repo.
field model_file: Optional[str] = None#
The name of the model file in repo or directory.
field model_type: Optional[str] = None#
The model type.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.CerebriumAI[source]#
Wrapper around CerebriumAI large language models.
To use, you should have the cerebrium python package installed, and the
environment variable CEREBRIUMAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
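The example snippet is missing here; a minimal sketch (assumes CEREBRIUMAI_API_KEY is set; the endpoint URL is a placeholder):
from langchain.llms import CerebriumAI

cerebrium = CerebriumAI(endpoint_url="https://run.cerebrium.ai/my-endpoint")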
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field endpoint_url: str = ''#
Model endpoint to use.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Cohere[source]#
Wrapper around Cohere large language models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
from langchain.llms import Cohere
cohere = Cohere(model="gptd-instruct-tft", cohere_api_key="my-api-key")
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field frequency_penalty: float = 0.0#
Penalizes repeated tokens according to frequency. Between 0 and 1.
field k: int = 0#
Number of most likely tokens to consider at each step.
field max_tokens: int = 256#
Denotes the number of tokens to predict per generation.
field model: Optional[str] = None#
Model name to use.
field p: int = 1#
Total probability mass of tokens to consider at each step.
field presence_penalty: float = 0.0#
Penalizes repeated tokens. Between 0 and 1.
field temperature: float = 0.75#
A non-negative float that tunes the degree of randomness in generation.
field truncate: Optional[str] = None#
Specify how the client handles inputs longer than the maximum token
length: Truncate from START, END or NONE
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Databricks[source]#
LLM wrapper around a Databricks serving endpoint or a cluster driver proxy app.
It supports two endpoint types:
Serving endpoint (recommended for both production and development).
We assume that an LLM was registered and deployed to a serving endpoint.
To wrap it as an LLM you must have "Can Query" permission to the endpoint.
Set endpoint_name accordingly and do not set cluster_id and
cluster_driver_port.
The expected model signature is:
inputs:
[{"name": "prompt", "type": "string"},
{"name": "stop", "type": "list[string]"}]
outputs: [{"type": "string"}]
Cluster driver proxy app (recommended for interactive development).
One can load an LLM on a Databricks interactive cluster and start a local HTTP
server on the driver node to serve the model at / using HTTP POST method
with JSON input/output.
Please use a port number between 3000 and 8000, and have the server listen on
the driver IP address or simply 0.0.0.0 rather than localhost only.
To wrap it as an LLM you must have "Can Attach To" permission to the cluster.
Set cluster_id and cluster_driver_port and do not set endpoint_name.
The expected server schema (using JSON schema) is:
inputs:
{"type": "object",
"properties": {
"prompt": {"type": "string"},
"stop": {"type": "array", "items": {"type": "string"}}},
"required": ["prompt"]}`
outputs: {"type": "string"}
If the endpoint model signature is different or you want to set extra params,
you can use transform_input_fn and transform_output_fn to apply necessary
transformations before and after the query.
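Example
A minimal sketch of both connection modes (the endpoint name, cluster ID, and port below are illustrative placeholders, not real resources):
from langchain.llms import Databricks

# Serving endpoint mode: set endpoint_name only.
llm = Databricks(endpoint_name="my-llm-endpoint")

# Cluster driver proxy app mode: set cluster_id and cluster_driver_port only.
llm = Databricks(cluster_id="0123-456789-abcdefgh", cluster_driver_port="7777")

# Optionally adapt the request shape with transform_input_fn, which receives
# {prompt, stop, **kwargs} as keyword arguments and returns the request object.
def transform_input(**request):
    # Prepend a fixed instruction to every prompt before it is sent.
    request["prompt"] = f"Answer briefly: {request['prompt']}"
    return request

llm = Databricks(endpoint_name="my-llm-endpoint", transform_input_fn=transform_input)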
Validators
raise_deprecation Β» all fields
set_cluster_driver_port Β» cluster_driver_port
set_cluster_id Β» cluster_id
set_model_kwargs Β» model_kwargs
set_verbose Β» verbose
field api_token: str [Optional]#
Databricks personal access token.
If not provided, the default value is determined by
the DATABRICKS_API_TOKEN environment variable if present, or
an automatically generated temporary token if running inside a Databricks
notebook attached to an interactive cluster in "single user" or
"no isolation shared" mode.
field cluster_driver_port: Optional[str] = None#
The port number used by the HTTP server running on the cluster driver node.
The server should listen on the driver IP address, or simply 0.0.0.0, so that it can accept connections.
We recommend using a port number between 3000 and 8000 for the server.
field cluster_id: Optional[str] = None#
ID of the cluster if connecting to a cluster driver proxy app.
If neither endpoint_name nor cluster_id is provided and the code runs
inside a Databricks notebook attached to an interactive cluster in "single user"
or "no isolation shared" mode, the current cluster ID is used as default.
You must not set both endpoint_name and cluster_id.
field endpoint_name: Optional[str] = None#
Name of the model serving endpoint.
You must specify the endpoint name to connect to a model serving endpoint.
You must not set both endpoint_name and cluster_id.
field host: str [Optional]#
Databricks workspace hostname.
If not provided, the default value is determined by
the DATABRICKS_HOST environment variable if present, or
the hostname of the current Databricks workspace if running inside
a Databricks notebook attached to an interactive cluster in "single user"
or "no isolation shared" mode.
field model_kwargs: Optional[Dict[str, Any]] = None#
Extra parameters to pass to the endpoint.
field transform_input_fn: Optional[Callable] = None#
A function that transforms {prompt, stop, **kwargs} into a JSON-compatible
request object that the endpoint accepts.
For example, you can apply a prompt template to the input prompt.
field transform_output_fn: Optional[Callable[[...], str]] = None#
A function that transforms the output from the endpoint to the generated text.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.DeepInfra[source]#
Wrapper around DeepInfra deployed models.
To use, you should have the requests python package installed, and the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import DeepInfra
di = DeepInfra(model_id="google/flan-t5-xl",
deepinfra_api_token="my-api-key")
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.FakeListLLM[source]#
Fake LLM wrapper for testing purposes.
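Example
A minimal sketch for unit tests (the canned responses below are illustrative; they are returned in order on successive calls):
from langchain.llms import FakeListLLM

fake_llm = FakeListLLM(responses=["first canned reply", "second canned reply"])
fake_llm("any prompt")  # returns "first canned reply"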
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.ForefrontAI[source]#
Wrapper around ForefrontAI large language models.
To use, you should have the environment variable FOREFRONTAI_API_KEY
set with your API key.
Example
from langchain.llms import ForefrontAI
forefrontai = ForefrontAI(endpoint_url="")
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field base_url: Optional[str] = None#
Base URL to use; if None, it is decided based on the model name.
field endpoint_url: str = ''#
Endpoint URL to use.
field length: int = 256#
The maximum number of tokens to generate in the completion.
field repetition_penalty: int = 1#
Penalizes repeated tokens according to frequency.
field temperature: float = 0.7#
What sampling temperature to use.
field top_k: int = 40#
The number of highest probability vocabulary tokens to
keep for top-k-filtering.
field top_p: float = 1.0#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.GPT4All[source]#
Wrapper around GPT4All language models.
To use, you should have the gpt4all python package installed, the
pre-trained model file, and the model's config information.
Example
from langchain.llms import GPT4All
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)
# Simplest invocation
response = model("Once upon a time, ")
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field context_erase: float = 0.5#
Leave (n_ctx * context_erase) tokens,
starting from the beginning, if the context has run out.
field echo: Optional[bool] = False#
Whether to echo the prompt.
field embedding: bool = False#
Use embedding mode only.
field f16_kv: bool = False#
Use half-precision for key/value cache.
field logits_all: bool = False#
Return logits for all tokens, not just the last token.
field model: str [Required]#
Path to the pre-trained GPT4All model file.
field n_batch: int = 1#
Batch size for prompt processing.
field n_ctx: int = 512#
Token context window.
field n_parts: int = -1#
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
field n_predict: Optional[int] = 256#
The maximum number of tokens to generate.
field n_threads: Optional[int] = 4#
Number of threads to use.
field repeat_last_n: Optional[int] = 64#
Last n tokens to penalize.
field repeat_penalty: Optional[float] = 1.3#
The penalty to apply to repeated tokens.
field seed: int = 0#
Seed. If -1, a random seed is used.
field stop: Optional[List[str]] = []#
A list of strings to stop generation when encountered.
field streaming: bool = False#
Whether to stream the results or not.
field temp: Optional[float] = 0.8#
The temperature to use for sampling.
field top_k: Optional[int] = 40#
The top-k value to use for sampling.
field top_p: Optional[float] = 0.95#
The top-p value to use for sampling.
field use_mlock: bool = False#
Force system to keep model in RAM.
field verbose: bool [Optional]#
Whether to print out response text.
field vocab_only: bool = False#
Only load the vocabulary, no weights.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.GooglePalm[source]#
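Example
A minimal sketch (assumes the google.generativeai package is installed and the GOOGLE_API_KEY environment variable is set; parameter values are illustrative):
from langchain.llms import GooglePalm

palm = GooglePalm(temperature=0.2, max_output_tokens=128)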
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field max_output_tokens: Optional[int] = None#
Maximum number of tokens to include in a candidate. Must be greater than zero.
If unset, will default to 64.
field model_name: str = 'models/text-bison-001'#
Model name to use.
field n: int = 1#
Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated.
field temperature: float = 0.7#
Run inference with this temperature. Must be in the closed interval
[0.0, 1.0].
field top_k: Optional[int] = None#
Decode using top-k sampling: consider the set of top_k most probable tokens.
Must be positive.
field top_p: Optional[float] = None#
Decode using nucleus sampling: consider the smallest set of tokens whose
probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.GooseAI[source]#
Wrapper around GooseAI large language models.
To use, you should have the openai python package installed, and the
environment variable GOOSEAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
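A minimal sketch (assumes the GOOSEAI_API_KEY environment variable is set; the model name below is the class default and is illustrative):
from langchain.llms import GooseAI

gooseai = GooseAI(model_name="gpt-neo-20b")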
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field frequency_penalty: float = 0#
Penalizes repeated tokens according to frequency.
field logit_bias: Optional[Dict[str, float]] [Optional]#
Adjust the probability of specific tokens being generated.
field max_tokens: int = 256#
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
field min_tokens: int = 1#
The minimum number of tokens to generate in the completion.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'gpt-neo-20b'#
Model name to use.
field n: int = 1#
How many completions to generate for each prompt.
field presence_penalty: float = 0#
Penalizes repeated tokens.
field temperature: float = 0.7#
What sampling temperature to use.
field top_p: float = 1#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.HuggingFaceEndpoint[source]#
Wrapper around HuggingFaceHub Inference Endpoints.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import HuggingFaceEndpoint
endpoint_url = (
"https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud"
)
hf = HuggingFaceEndpoint(
endpoint_url=endpoint_url,
huggingfacehub_api_token="my-api-key"
)
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field endpoint_url: str = ''#
Endpoint URL to use.
field model_kwargs: Optional[dict] = None#
Keyword arguments to pass to the model.
field task: Optional[str] = None#
Task to call the model with.
Should be a task that returns generated_text or summary_text.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.