url | markdown | screenshotUrl | crawl | metadata | text
---|---|---|---|---|---
https://python.langchain.com/docs/integrations/chat_loaders/telegram/ | This notebook shows how to use the Telegram chat loader. This class helps map exported Telegram conversations to LangChain chat messages.
The process has three steps: 1. Export the chat .txt file by copying chats from the Telegram app and pasting them in a file on your local computer 2. Create the `TelegramChatLoader` with the file path pointing to the JSON file or directory of JSON files 3. Call `loader.load()` (or `loader.lazy_load()`) to perform the conversion. Optionally use `merge_chat_runs` to combine messages from the same sender in sequence, and/or `map_ai_messages` to convert messages from the specified sender to the “AIMessage” class.
Currently (2023/08/23) this loader best supports JSON files in the format generated by exporting your chat history from the [Telegram Desktop App](https://desktop.telegram.org/).
**Important:** There are ‘lite’ versions of Telegram, such as “Telegram for macOS”, that lack the export functionality. Please make sure you use the correct app to export the file.
To make the export: 1. Download and open Telegram Desktop 2. Select a conversation 3. Navigate to the conversation settings (currently the three dots in the top right corner) 4. Click “Export Chat History” 5. Unselect photos and other media. Select “Machine-readable JSON” format to export.
```
%%writefile telegram_conversation.json
{
    "name": "Jiminy",
    "type": "personal_chat",
    "id": 5965280513,
    "messages": [
        {
            "id": 1,
            "type": "message",
            "date": "2023-08-23T13:11:23",
            "date_unixtime": "1692821483",
            "from": "Jiminy Cricket",
            "from_id": "user123450513",
            "text": "You better trust your conscience",
            "text_entities": [
                {
                    "type": "plain",
                    "text": "You better trust your conscience"
                }
            ]
        },
        {
            "id": 2,
            "type": "message",
            "date": "2023-08-23T13:13:20",
            "date_unixtime": "1692821600",
            "from": "Batman & Robin",
            "from_id": "user6565661032",
            "text": "What did you just say?",
            "text_entities": [
                {
                    "type": "plain",
                    "text": "What did you just say?"
                }
            ]
        }
    ]
}
```
```
Overwriting telegram_conversation.json
```
All that’s required is the file path. You can optionally specify the user name that maps to an AI message, as well as configure whether to merge message runs.
```
from langchain_community.chat_loaders.telegram import TelegramChatLoader

loader = TelegramChatLoader(
    path="./telegram_conversation.json",
)
```
The `load()` (or `lazy_load`) methods return a list of “ChatSessions” that currently just contain a list of messages per loaded conversation.
```
from typing import List

from langchain_community.chat_loaders.base import ChatSession
from langchain_community.chat_loaders.utils import (
    map_ai_messages,
    merge_chat_runs,
)

raw_messages = loader.lazy_load()
# Merge consecutive messages from the same sender into a single message
merged_messages = merge_chat_runs(raw_messages)
# Convert messages from "Jiminy Cricket" to AI messages
messages: List[ChatSession] = list(
    map_ai_messages(merged_messages, sender="Jiminy Cricket")
)
```
You can then use these messages how you see fit, such as fine-tuning a model, selecting few-shot examples, or directly making predictions for the next message.
```
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()
for chunk in llm.stream(messages[0]["messages"]):
    print(chunk.content, end="", flush=True)
```
```
I said, "You better trust your conscience."
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:19.460Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat_loaders/telegram/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat_loaders/telegram/",
"description": "This notebook shows how to use the Telegram chat loader. This class",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"telegram\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:18 GMT",
"etag": "W/\"6657d74c2dc79111d493590dc2cc16bf\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::kgnwl-1713753498892-a57818d05b43"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat_loaders/telegram/",
"property": "og:url"
},
{
"content": "Telegram | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook shows how to use the Telegram chat loader. This class",
"property": "og:description"
}
],
"title": "Telegram | 🦜️🔗 LangChain"
} | This notebook shows how to use the Telegram chat loader. This class helps map exported Telegram conversations to LangChain chat messages.
The process has three steps: 1. Export the chat .txt file by copying chats from the Telegram app and pasting them in a file on your local computer 2. Create the TelegramChatLoader with the file path pointing to the JSON file or directory of JSON files 3. Call loader.load() (or loader.lazy_load()) to perform the conversion. Optionally use merge_chat_runs to combine messages from the same sender in sequence, and/or map_ai_messages to convert messages from the specified sender to the “AIMessage” class.
Currently (2023/08/23) this loader best supports JSON files in the format generated by exporting your chat history from the Telegram Desktop App.
Important: There are ‘lite’ versions of Telegram, such as “Telegram for macOS”, that lack the export functionality. Please make sure you use the correct app to export the file.
To make the export: 1. Download and open Telegram Desktop 2. Select a conversation 3. Navigate to the conversation settings (currently the three dots in the top right corner) 4. Click “Export Chat History” 5. Unselect photos and other media. Select “Machine-readable JSON” format to export.
%%writefile telegram_conversation.json
{
"name": "Jiminy",
"type": "personal_chat",
"id": 5965280513,
"messages": [
{
"id": 1,
"type": "message",
"date": "2023-08-23T13:11:23",
"date_unixtime": "1692821483",
"from": "Jiminy Cricket",
"from_id": "user123450513",
"text": "You better trust your conscience",
"text_entities": [
{
"type": "plain",
"text": "You better trust your conscience"
}
]
},
{
"id": 2,
"type": "message",
"date": "2023-08-23T13:13:20",
"date_unixtime": "1692821600",
"from": "Batman & Robin",
"from_id": "user6565661032",
"text": "What did you just say?",
"text_entities": [
{
"type": "plain",
"text": "What did you just say?"
}
]
}
]
}
Overwriting telegram_conversation.json
All that’s required is the file path. You can optionally specify the user name that maps to an AI message, as well as configure whether to merge message runs.
loader = TelegramChatLoader(
path="./telegram_conversation.json",
)
The load() (or lazy_load) methods return a list of “ChatSessions” that currently just contain a list of messages per loaded conversation.
from typing import List
from langchain_community.chat_loaders.base import ChatSession
from langchain_community.chat_loaders.utils import (
map_ai_messages,
merge_chat_runs,
)
raw_messages = loader.lazy_load()
# Merge consecutive messages from the same sender into a single message
merged_messages = merge_chat_runs(raw_messages)
# Convert messages from "Jiminy Cricket" to AI messages
messages: List[ChatSession] = list(
map_ai_messages(merged_messages, sender="Jiminy Cricket")
)
You can then use these messages how you see fit, such as fine-tuning a model, selecting few-shot examples, or directly making predictions for the next message.
from langchain_openai import ChatOpenAI
llm = ChatOpenAI()
for chunk in llm.stream(messages[0]["messages"]):
print(chunk.content, end="", flush=True)
I said, "You better trust your conscience." |
https://python.langchain.com/docs/integrations/chat_loaders/twitter/ | This notebook shows how to load chat messages from Twitter to fine-tune on. We do this by utilizing Apify.
First, use Apify to export tweets. An example follows:
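Before the filtering step below, the exported tweets need to be loaded into a `data` list. The following is a minimal sketch under stated assumptions: the filename is a placeholder for your own Apify export, and the `convert_message_to_dict` import path (from the LangChain OpenAI adapter) is an assumption based on how the snippet uses it.

```
import json

from langchain_community.adapters.openai import convert_message_to_dict  # assumed import path
from langchain_core.messages import AIMessage

# Placeholder path to the Apify export; replace with the file you downloaded.
with open("dataset_tweets.json") as f:
    data = json.load(f)
```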
```
# Filter out tweets that reference other tweets, because it's a bit weird
tweets = [d["full_text"] for d in data if "t.co" not in d["full_text"]]
# Create them as AI messages
messages = [AIMessage(content=t) for t in tweets]
# Add in a system message at the start
# TODO: we could try to extract the subject from the tweets, and put that in the system message.
system_message = {"role": "system", "content": "write a tweet"}
data = [[system_message, convert_message_to_dict(m)] for m in messages]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:20.062Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat_loaders/twitter/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat_loaders/twitter/",
"description": "This notebook shows how to load chat messages from Twitter to fine-tune",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3416",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"twitter\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:19 GMT",
"etag": "W/\"ad4ca0266b8ee886b098f17848feb49b\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::d7hcd-1713753499970-7925d9e0e807"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat_loaders/twitter/",
"property": "og:url"
},
{
"content": "Twitter (via Apify) | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook shows how to load chat messages from Twitter to fine-tune",
"property": "og:description"
}
],
"title": "Twitter (via Apify) | 🦜️🔗 LangChain"
} | This notebook shows how to load chat messages from Twitter to fine-tune on. We do this by utilizing Apify.
First, use Apify to export tweets. An example follows:
# Filter out tweets that reference other tweets, because it's a bit weird
tweets = [d["full_text"] for d in data if "t.co" not in d["full_text"]]
# Create them as AI messages
messages = [AIMessage(content=t) for t in tweets]
# Add in a system message at the start
# TODO: we could try to extract the subject from the tweets, and put that in the system message.
system_message = {"role": "system", "content": "write a tweet"}
data = [[system_message, convert_message_to_dict(m)] for m in messages] |
https://python.langchain.com/docs/integrations/chat_loaders/wechat/ | ## WeChat
There is not yet a straightforward way to export personal WeChat messages. However, if you need no more than a few hundred messages for model fine-tuning or few-shot examples, this notebook shows how to create your own chat loader that turns copy-pasted WeChat messages into a list of LangChain messages.
> Highly inspired by [https://python.langchain.com/docs/integrations/chat\_loaders/discord](https://python.langchain.com/docs/integrations/chat_loaders/discord)
The process has five steps: 1. Open your chat in the WeChat desktop app. Select messages you need by mouse-dragging or right-click. Due to restrictions, you can select up to 100 messages at a time. `CMD`/`Ctrl` + `C` to copy. 2. Create the chat .txt file by pasting selected messages in a file on your local computer. 3. Copy the chat loader definition from below to a local file. 4. Initialize the `WeChatChatLoader` with the file path pointing to the text file. 5. Call `loader.load()` (or `loader.lazy_load()`) to perform the conversion.
## 1\. Create message dump[](#create-message-dump "Direct link to 1. Create message dump")
This loader only supports .txt files in the format generated by copying messages in the app to your clipboard and pasting in a file. Below is an example.
```
%%writefile wechat_chats.txt
女朋友 2023/09/16 2:51 PM
天气有点凉
男朋友 2023/09/16 2:51 PM
珍簟凉风著,瑶琴寄恨生。嵇君懒书札,底物慰秋情。
女朋友 2023/09/16 3:06 PM
忙什么呢
男朋友 2023/09/16 3:06 PM
今天只干成了一件像样的事
那就是想你
女朋友 2023/09/16 3:06 PM
[动画表情]
```
```
Overwriting wechat_chats.txt
```
## 2\. Define chat loader[](#define-chat-loader "Direct link to 2. Define chat loader")
LangChain does not currently provide a built-in WeChat chat loader, so we define our own below.
```
import logging
import re
from typing import Iterator, List

from langchain_community.chat_loaders import base as chat_loaders
from langchain_core.messages import BaseMessage, HumanMessage

logger = logging.getLogger()


class WeChatChatLoader(chat_loaders.BaseChatLoader):
    def __init__(self, path: str):
        """
        Initialize the WeChat chat loader.

        Args:
            path: Path to the exported WeChat chat text file.
        """
        self.path = path
        self._message_line_regex = re.compile(
            r"(?P<sender>.+?) (?P<timestamp>\d{4}/\d{2}/\d{2} \d{1,2}:\d{2} (?:AM|PM))",  # noqa
            # flags=re.DOTALL,
        )

    def _append_message_to_results(
        self,
        results: List,
        current_sender: str,
        current_timestamp: str,
        current_content: List[str],
    ):
        content = "\n".join(current_content).strip()
        # skip non-text messages like stickers, images, etc.
        if not re.match(r"\[.*\]", content):
            results.append(
                HumanMessage(
                    content=content,
                    additional_kwargs={
                        "sender": current_sender,
                        "events": [{"message_time": current_timestamp}],
                    },
                )
            )
        return results

    def _load_single_chat_session_from_txt(
        self, file_path: str
    ) -> chat_loaders.ChatSession:
        """
        Load a single chat session from a text file.

        Args:
            file_path: Path to the text file containing the chat messages.

        Returns:
            A `ChatSession` object containing the loaded chat messages.
        """
        with open(file_path, "r", encoding="utf-8") as file:
            lines = file.readlines()

        results: List[BaseMessage] = []
        current_sender = None
        current_timestamp = None
        current_content = []
        for line in lines:
            if re.match(self._message_line_regex, line):
                if current_sender and current_content:
                    results = self._append_message_to_results(
                        results, current_sender, current_timestamp, current_content
                    )
                current_sender, current_timestamp = re.match(
                    self._message_line_regex, line
                ).groups()
                current_content = []
            else:
                current_content.append(line.strip())

        if current_sender and current_content:
            results = self._append_message_to_results(
                results, current_sender, current_timestamp, current_content
            )
        return chat_loaders.ChatSession(messages=results)

    def lazy_load(self) -> Iterator[chat_loaders.ChatSession]:
        """
        Lazy load the messages from the chat file and yield them
        in the required format.

        Yields:
            A `ChatSession` object containing the loaded chat messages.
        """
        yield self._load_single_chat_session_from_txt(self.path)
```
## 3\. Create loader[](#create-loader "Direct link to 3. Create loader")
We will point to the file we just wrote to disk.
```
loader = WeChatChatLoader(
    path="./wechat_chats.txt",
)
```
## 4\. Load Messages[](#load-messages "Direct link to 4. Load Messages")
Assuming the format is correct, the loader will convert the chats to LangChain messages.
```
from typing import List

from langchain_community.chat_loaders.base import ChatSession
from langchain_community.chat_loaders.utils import (
    map_ai_messages,
    merge_chat_runs,
)

raw_messages = loader.lazy_load()
# Merge consecutive messages from the same sender into a single message
merged_messages = merge_chat_runs(raw_messages)
# Convert messages from "男朋友" to AI messages
messages: List[ChatSession] = list(map_ai_messages(merged_messages, sender="男朋友"))
```
```
[{'messages': [HumanMessage(content='天气有点凉', additional_kwargs={'sender': '女朋友', 'events': [{'message_time': '2023/09/16 2:51 PM'}]}, example=False),
  AIMessage(content='珍簟凉风著,瑶琴寄恨生。嵇君懒书札,底物慰秋情。', additional_kwargs={'sender': '男朋友', 'events': [{'message_time': '2023/09/16 2:51 PM'}]}, example=False),
  HumanMessage(content='忙什么呢', additional_kwargs={'sender': '女朋友', 'events': [{'message_time': '2023/09/16 3:06 PM'}]}, example=False),
  AIMessage(content='今天只干成了一件像样的事\n那就是想你', additional_kwargs={'sender': '男朋友', 'events': [{'message_time': '2023/09/16 3:06 PM'}]}, example=False)]}]
```
### Next Steps[](#next-steps "Direct link to Next Steps")
You can then use these messages how you see fit, such as fine-tuning a model, selecting few-shot examples, or directly making predictions for the next message.
```
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()
for chunk in llm.stream(messages[0]["messages"]):
    print(chunk.content, end="", flush=True)
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:20.493Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat_loaders/wechat/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat_loaders/wechat/",
"description": "There is not yet a straightforward way to export personal WeChat",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3779",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"wechat\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:20 GMT",
"etag": "W/\"0baac65e2d1d1ad99b77390c618833a8\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::tqd6x-1713753500380-1d00c88557da"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat_loaders/wechat/",
"property": "og:url"
},
{
"content": "WeChat | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "There is not yet a straightforward way to export personal WeChat",
"property": "og:description"
}
],
"title": "WeChat | 🦜️🔗 LangChain"
} | WeChat
There is not yet a straightforward way to export personal WeChat messages. However, if you need no more than a few hundred messages for model fine-tuning or few-shot examples, this notebook shows how to create your own chat loader that turns copy-pasted WeChat messages into a list of LangChain messages.
Highly inspired by https://python.langchain.com/docs/integrations/chat_loaders/discord
The process has five steps: 1. Open your chat in the WeChat desktop app. Select messages you need by mouse-dragging or right-click. Due to restrictions, you can select up to 100 messages at a time. CMD/Ctrl + C to copy. 2. Create the chat .txt file by pasting selected messages in a file on your local computer. 3. Copy the chat loader definition from below to a local file. 4. Initialize the WeChatChatLoader with the file path pointing to the text file. 5. Call loader.load() (or loader.lazy_load()) to perform the conversion.
1. Create message dump
This loader only supports .txt files in the format generated by copying messages in the app to your clipboard and pasting in a file. Below is an example.
%%writefile wechat_chats.txt
女朋友 2023/09/16 2:51 PM
天气有点凉
男朋友 2023/09/16 2:51 PM
珍簟凉风著,瑶琴寄恨生。嵇君懒书札,底物慰秋情。
女朋友 2023/09/16 3:06 PM
忙什么呢
男朋友 2023/09/16 3:06 PM
今天只干成了一件像样的事
那就是想你
女朋友 2023/09/16 3:06 PM
[动画表情]
Overwriting wechat_chats.txt
2. Define chat loader
LangChain does not currently provide a built-in WeChat chat loader, so we define our own below.
import logging
import re
from typing import Iterator, List
from langchain_community.chat_loaders import base as chat_loaders
from langchain_core.messages import BaseMessage, HumanMessage
logger = logging.getLogger()
class WeChatChatLoader(chat_loaders.BaseChatLoader):
def __init__(self, path: str):
"""
Initialize the Discord chat loader.
Args:
path: Path to the exported Discord chat text file.
"""
self.path = path
self._message_line_regex = re.compile(
r"(?P<sender>.+?) (?P<timestamp>\d{4}/\d{2}/\d{2} \d{1,2}:\d{2} (?:AM|PM))", # noqa
# flags=re.DOTALL,
)
def _append_message_to_results(
self,
results: List,
current_sender: str,
current_timestamp: str,
current_content: List[str],
):
content = "\n".join(current_content).strip()
# skip non-text messages like stickers, images, etc.
if not re.match(r"\[.*\]", content):
results.append(
HumanMessage(
content=content,
additional_kwargs={
"sender": current_sender,
"events": [{"message_time": current_timestamp}],
},
)
)
return results
def _load_single_chat_session_from_txt(
self, file_path: str
) -> chat_loaders.ChatSession:
"""
Load a single chat session from a text file.
Args:
file_path: Path to the text file containing the chat messages.
Returns:
A `ChatSession` object containing the loaded chat messages.
"""
with open(file_path, "r", encoding="utf-8") as file:
lines = file.readlines()
results: List[BaseMessage] = []
current_sender = None
current_timestamp = None
current_content = []
for line in lines:
if re.match(self._message_line_regex, line):
if current_sender and current_content:
results = self._append_message_to_results(
results, current_sender, current_timestamp, current_content
)
current_sender, current_timestamp = re.match(
self._message_line_regex, line
).groups()
current_content = []
else:
current_content.append(line.strip())
if current_sender and current_content:
results = self._append_message_to_results(
results, current_sender, current_timestamp, current_content
)
return chat_loaders.ChatSession(messages=results)
def lazy_load(self) -> Iterator[chat_loaders.ChatSession]:
"""
Lazy load the messages from the chat file and yield them in the required format.
Yields:
A `ChatSession` object containing the loaded chat messages.
"""
yield self._load_single_chat_session_from_txt(self.path)
3. Create loader
We will point to the file we just wrote to disk.
loader = WeChatChatLoader(
path="./wechat_chats.txt",
)
4. Load Messages
Assuming the format is correct, the loader will convert the chats to LangChain messages.
from typing import List
from langchain_community.chat_loaders.base import ChatSession
from langchain_community.chat_loaders.utils import (
map_ai_messages,
merge_chat_runs,
)
raw_messages = loader.lazy_load()
# Merge consecutive messages from the same sender into a single message
merged_messages = merge_chat_runs(raw_messages)
# Convert messages from "男朋友" to AI messages
messages: List[ChatSession] = list(map_ai_messages(merged_messages, sender="男朋友"))
[{'messages': [HumanMessage(content='天气有点凉', additional_kwargs={'sender': '女朋友', 'events': [{'message_time': '2023/09/16 2:51 PM'}]}, example=False),
AIMessage(content='珍簟凉风著,瑶琴寄恨生。嵇君懒书札,底物慰秋情。', additional_kwargs={'sender': '男朋友', 'events': [{'message_time': '2023/09/16 2:51 PM'}]}, example=False),
HumanMessage(content='忙什么呢', additional_kwargs={'sender': '女朋友', 'events': [{'message_time': '2023/09/16 3:06 PM'}]}, example=False),
AIMessage(content='今天只干成了一件像样的事\n那就是想你', additional_kwargs={'sender': '男朋友', 'events': [{'message_time': '2023/09/16 3:06 PM'}]}, example=False)]}]
Next Steps
You can then use these messages how you see fit, such as fine-tuning a model, selecting few-shot examples, or directly making predictions for the next message.
from langchain_openai import ChatOpenAI
llm = ChatOpenAI()
for chunk in llm.stream(messages[0]["messages"]):
print(chunk.content, end="", flush=True) |
https://python.langchain.com/docs/integrations/chat_loaders/whatsapp/ | This notebook shows how to use the WhatsApp chat loader. This class helps map exported WhatsApp conversations to LangChain chat messages.
The process has three steps: 1. Export the chat conversations to your computer 2. Create the `WhatsAppChatLoader` with the file path pointing to the exported chat file (or directory of exported files) 3. Call `loader.load()` (or `loader.lazy_load()`) to perform the conversion.
To make the export of your WhatsApp conversation(s), complete the following steps:
```
%%writefile whatsapp_chat.txt
[8/15/23, 9:12:33 AM] Dr. Feather: Messages and calls are end-to-end encrypted. No one outside of this chat, not even WhatsApp, can read or listen to them.
[8/15/23, 9:12:43 AM] Dr. Feather: I spotted a rare Hyacinth Macaw yesterday in the Amazon Rainforest. Such a magnificent creature!
[8/15/23, 9:12:48 AM] Dr. Feather: image omitted
[8/15/23, 9:13:15 AM] Jungle Jane: That's stunning! Were you able to observe its behavior?
[8/15/23, 9:13:23 AM] Dr. Feather: image omitted
[8/15/23, 9:14:02 AM] Dr. Feather: Yes, it seemed quite social with other macaws. They're known for their playful nature.
[8/15/23, 9:14:15 AM] Jungle Jane: How's the research going on parrot communication?
[8/15/23, 9:14:30 AM] Dr. Feather: image omitted
[8/15/23, 9:14:50 AM] Dr. Feather: It's progressing well. We're learning so much about how they use sound and color to communicate.
[8/15/23, 9:15:10 AM] Jungle Jane: That's fascinating! Can't wait to read your paper on it.
[8/15/23, 9:15:20 AM] Dr. Feather: Thank you! I'll send you a draft soon.
[8/15/23, 9:25:16 PM] Jungle Jane: Looking forward to it! Keep up the great work.
```
```
Writing whatsapp_chat.txt
```
The WhatsAppChatLoader accepts the resulting zip file, unzipped directory, or the path to any of the chat `.txt` files therein.
Provide that as well as the user name you want to take on the role of “AI” when fine-tuning.
The `load()` (or `lazy_load`) methods return a list of “ChatSessions” that currently store the list of messages per loaded conversation.
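The loader-creation and mapping code is not shown in this extract; below is a minimal sketch that follows the same pattern as the Telegram and WeChat examples above. The `WhatsAppChatLoader` import path and the choice of “Dr. Feather” as the AI-side sender are assumptions for illustration.

```
from typing import List

from langchain_community.chat_loaders.base import ChatSession
from langchain_community.chat_loaders.utils import map_ai_messages, merge_chat_runs
from langchain_community.chat_loaders.whatsapp import WhatsAppChatLoader  # assumed import path

loader = WhatsAppChatLoader(path="./whatsapp_chat.txt")
raw_messages = loader.lazy_load()
# Merge consecutive messages from the same sender into a single message
merged_messages = merge_chat_runs(raw_messages)
# Treat "Dr. Feather" as the AI side of the conversation
messages: List[ChatSession] = list(
    map_ai_messages(merged_messages, sender="Dr. Feather")
)
messages
```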
```
[{'messages': [AIMessage(content='I spotted a rare Hyacinth Macaw yesterday in the Amazon Rainforest. Such a magnificent creature!', additional_kwargs={'sender': 'Dr. Feather', 'events': [{'message_time': '8/15/23, 9:12:43 AM'}]}, example=False),
  HumanMessage(content="That's stunning! Were you able to observe its behavior?", additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:13:15 AM'}]}, example=False),
  AIMessage(content="Yes, it seemed quite social with other macaws. They're known for their playful nature.", additional_kwargs={'sender': 'Dr. Feather', 'events': [{'message_time': '8/15/23, 9:14:02 AM'}]}, example=False),
  HumanMessage(content="How's the research going on parrot communication?", additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:14:15 AM'}]}, example=False),
  AIMessage(content="It's progressing well. We're learning so much about how they use sound and color to communicate.", additional_kwargs={'sender': 'Dr. Feather', 'events': [{'message_time': '8/15/23, 9:14:50 AM'}]}, example=False),
  HumanMessage(content="That's fascinating! Can't wait to read your paper on it.", additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:15:10 AM'}]}, example=False),
  AIMessage(content="Thank you! I'll send you a draft soon.", additional_kwargs={'sender': 'Dr. Feather', 'events': [{'message_time': '8/15/23, 9:15:20 AM'}]}, example=False),
  HumanMessage(content='Looking forward to it! Keep up the great work.', additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:25:16 PM'}]}, example=False)]}]
```
You can then use these messages how you see fit, such as fine-tuning a model, selecting few-shot examples, or directly making predictions for the next message.
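As on the Telegram and WeChat pages above, a sketch of streaming the next message with a chat model (the model choice is up to you; `ChatOpenAI` is used here for consistency with the other examples):

```
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()
for chunk in llm.stream(messages[0]["messages"]):
    print(chunk.content, end="", flush=True)
```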
```
Thank you for the encouragement! I'll do my best to continue studying and sharing fascinating insights about parrot communication.
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:21.026Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat_loaders/whatsapp/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat_loaders/whatsapp/",
"description": "This notebook shows how to use the WhatsApp chat loader. This class",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3416",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"whatsapp\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:20 GMT",
"etag": "W/\"be235adf68acfd62200b4af17397b964\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::zvcms-1713753500463-9fa9af04e1f4"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat_loaders/whatsapp/",
"property": "og:url"
},
{
"content": "WhatsApp | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook shows how to use the WhatsApp chat loader. This class",
"property": "og:description"
}
],
"title": "WhatsApp | 🦜️🔗 LangChain"
} | This notebook shows how to use the WhatsApp chat loader. This class helps map exported WhatsApp conversations to LangChain chat messages.
The process has three steps: 1. Export the chat conversations to your computer 2. Create the WhatsAppChatLoader with the file path pointing to the exported chat file (or directory of exported files) 3. Call loader.load() (or loader.lazy_load()) to perform the conversion.
To make the export of your WhatsApp conversation(s), complete the following steps:
%%writefile whatsapp_chat.txt
[8/15/23, 9:12:33 AM] Dr. Feather: Messages and calls are end-to-end encrypted. No one outside of this chat, not even WhatsApp, can read or listen to them.
[8/15/23, 9:12:43 AM] Dr. Feather: I spotted a rare Hyacinth Macaw yesterday in the Amazon Rainforest. Such a magnificent creature!
[8/15/23, 9:12:48 AM] Dr. Feather: image omitted
[8/15/23, 9:13:15 AM] Jungle Jane: That's stunning! Were you able to observe its behavior?
[8/15/23, 9:13:23 AM] Dr. Feather: image omitted
[8/15/23, 9:14:02 AM] Dr. Feather: Yes, it seemed quite social with other macaws. They're known for their playful nature.
[8/15/23, 9:14:15 AM] Jungle Jane: How's the research going on parrot communication?
[8/15/23, 9:14:30 AM] Dr. Feather: image omitted
[8/15/23, 9:14:50 AM] Dr. Feather: It's progressing well. We're learning so much about how they use sound and color to communicate.
[8/15/23, 9:15:10 AM] Jungle Jane: That's fascinating! Can't wait to read your paper on it.
[8/15/23, 9:15:20 AM] Dr. Feather: Thank you! I'll send you a draft soon.
[8/15/23, 9:25:16 PM] Jungle Jane: Looking forward to it! Keep up the great work.
Writing whatsapp_chat.txt
The WhatsAppChatLoader accepts the resulting zip file, unzipped directory, or the path to any of the chat .txt files therein.
Provide that as well as the user name you want to take on the role of “AI” when fine-tuning.
The load() (or lazy_load) methods return a list of “ChatSessions” that currently store the list of messages per loaded conversation.
[{'messages': [AIMessage(content='I spotted a rare Hyacinth Macaw yesterday in the Amazon Rainforest. Such a magnificent creature!', additional_kwargs={'sender': 'Dr. Feather', 'events': [{'message_time': '8/15/23, 9:12:43 AM'}]}, example=False),
HumanMessage(content="That's stunning! Were you able to observe its behavior?", additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:13:15 AM'}]}, example=False),
AIMessage(content="Yes, it seemed quite social with other macaws. They're known for their playful nature.", additional_kwargs={'sender': 'Dr. Feather', 'events': [{'message_time': '8/15/23, 9:14:02 AM'}]}, example=False),
HumanMessage(content="How's the research going on parrot communication?", additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:14:15 AM'}]}, example=False),
AIMessage(content="It's progressing well. We're learning so much about how they use sound and color to communicate.", additional_kwargs={'sender': 'Dr. Feather', 'events': [{'message_time': '8/15/23, 9:14:50 AM'}]}, example=False),
HumanMessage(content="That's fascinating! Can't wait to read your paper on it.", additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:15:10 AM'}]}, example=False),
AIMessage(content="Thank you! I'll send you a draft soon.", additional_kwargs={'sender': 'Dr. Feather', 'events': [{'message_time': '8/15/23, 9:15:20 AM'}]}, example=False),
HumanMessage(content='Looking forward to it! Keep up the great work.', additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:25:16 PM'}]}, example=False)]}]
You can then use these messages how you see fit, such as fine-tuning a model, selecting few-shot examples, or directly making predictions for the next message.
Thank you for the encouragement! I'll do my best to continue studying and sharing fascinating insights about parrot communication. |
https://python.langchain.com/docs/integrations/chat/ | ## Chat models
## Features (natively supported)[](#features-natively-supported "Direct link to Features (natively supported)")
All ChatModels implement the Runnable interface, which comes with default implementations of all methods, i.e. `ainvoke`, `batch`, `abatch`, `stream`, `astream`. This gives all ChatModels basic support for async, streaming and batch, which by default is implemented as below:
* _Async_ support defaults to calling the respective sync method in asyncio's default thread pool executor. This lets other async functions in your application make progress while the ChatModel is being executed, by moving this call to a background thread.
* _Streaming_ support defaults to returning an `Iterator` (or `AsyncIterator` in the case of async streaming) of a single value, the final result returned by the underlying ChatModel provider. This obviously doesn't give you token-by-token streaming, which requires native support from the ChatModel provider, but ensures your code that expects an iterator of tokens can work for any of our ChatModel integrations.
* _Batch_ support defaults to calling the underlying ChatModel in parallel for each input by making use of a thread pool executor (in the sync batch case) or `asyncio.gather` (in the async batch case). The concurrency can be controlled with the `max_concurrency` key in `RunnableConfig`.
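For example, a minimal sketch of batching with a concurrency cap (assuming the `langchain-openai` package is installed and an `OPENAI_API_KEY` is set):

```
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()
# Both prompts are sent through the model, with at most 2 calls in flight at once.
results = llm.batch(
    ["Tell me a joke about bears", "Write a haiku about autumn"],
    config={"max_concurrency": 2},
)
print([r.content for r in results])
```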
Each ChatModel integration can optionally provide native implementations to truly enable async or streaming. The table shows, for each integration, which features have been implemented with native support. Yellow circles (🟡) indicate partial support - for example, if the model supports tool calling but not tool messages for agents.
| Model | Invoke | Async invoke | Stream | Async stream | [Tool calling](https://python.langchain.com/docs/modules/model_io/chat/function_calling/) | [Structured output](https://python.langchain.com/docs/modules/model_io/chat/structured_output/) | Python Package |
| --- | --- | --- | --- | --- | --- | --- | --- |
| AzureChatOpenAI | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | langchain-openai |
| BedrockChat | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | langchain-community |
| ChatAnthropic | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | langchain-anthropic |
| ChatAnyscale | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | langchain-community |
| ChatBaichuan | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | langchain-community |
| ChatCohere | ✅ | ❌ | ❌ | ❌ | 🟡 | ✅ | langchain-cohere |
| ChatDatabricks | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | langchain-community |
| ChatDeepInfra | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | langchain-community |
| ChatEverlyAI | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | langchain-community |
| ChatFireworks | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | langchain-fireworks |
| ChatFriendli | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | langchain-community |
| ChatGooglePalm | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | langchain-community |
| ChatGroq | ✅ | ❌ | ❌ | ❌ | 🟡 | ✅ | langchain-groq |
| ChatHuggingFace | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | langchain-community |
| ChatHunyuan | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | langchain-community |
| ChatJavelinAIGateway | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | langchain-community |
| ChatKinetica | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | langchain-community |
| ChatKonko | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | langchain-community |
| ChatLiteLLM | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | langchain-community |
| ChatLiteLLMRouter | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | langchain-community |
| ChatMLX | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | langchain-community |
| ChatMLflowAIGateway | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | langchain-community |
| ChatMaritalk | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | langchain-community |
| ChatMistralAI | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | langchain-mistralai |
| ChatMlflow | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | langchain-community |
| ChatOctoAI | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | langchain-community |
| ChatOllama | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | langchain-community |
| ChatOpenAI | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | langchain-openai |
| ChatPerplexity | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | langchain-community |
| ChatPremAI | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | langchain-community |
| ChatSparkLLM | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | langchain-community |
| ChatTongyi | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | langchain-community |
| ChatVertexAI | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | langchain-google-vertexai |
| ChatYandexGPT | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | langchain-community |
| ChatYuan2 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | langchain-community |
| ChatZhipuAI | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | langchain-community |
| ErnieBotChat | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | langchain-community |
| GPTRouter | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | langchain-community |
| GigaChat | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | langchain-community |
| JinaChat | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | langchain-community |
| LlamaEdgeChatService | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | langchain-community |
| MiniMaxChat | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | langchain-community |
| PaiEasChatEndpoint | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | langchain-community |
| PromptLayerChatOpenAI | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | langchain-community |
| QianfanChatEndpoint | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | langchain-community |
| SolarChat | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | langchain-community |
| VolcEngineMaasChat | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | langchain-community | | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:21.731Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/",
"description": "Features (natively supported)",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3417",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"chat\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:21 GMT",
"etag": "W/\"90043556824ea0ad8d00579b65f72ac1\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::l9cgv-1713753501573-900bdc2018da"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/",
"property": "og:url"
},
{
"content": "Chat models | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Features (natively supported)",
"property": "og:description"
}
],
"title": "Chat models | 🦜️🔗 LangChain"
} | Chat models
Features (natively supported)
All ChatModels implement the Runnable interface, which comes with default implementations of all methods, i.e. ainvoke, batch, abatch, stream, astream. This gives all ChatModels basic support for async, streaming and batch, which by default is implemented as below:
Async support defaults to calling the respective sync method in asyncio's default thread pool executor. This lets other async functions in your application make progress while the ChatModel is being executed, by moving this call to a background thread.
Streaming support defaults to returning an Iterator (or AsyncIterator in the case of async streaming) of a single value, the final result returned by the underlying ChatModel provider. This obviously doesn't give you token-by-token streaming, which requires native support from the ChatModel provider, but ensures your code that expects an iterator of tokens can work for any of our ChatModel integrations.
Batch support defaults to calling the underlying ChatModel in parallel for each input by making use of a thread pool executor (in the sync batch case) or asyncio.gather (in the async batch case). The concurrency can be controlled with the max_concurrency key in RunnableConfig.
Each ChatModel integration can optionally provide native implementations to truly enable async or streaming. The table shows, for each integration, which features have been implemented with native support. Yellow circles (🟡) indicate partial support - for example, if the model supports tool calling but not tool messages for agents.
Model | Invoke | Async invoke | Stream | Async stream | Tool calling | Structured output | Python Package
AzureChatOpenAI ✅ ❌ ❌ ❌ ✅ ✅ langchain-openai
BedrockChat ✅ ❌ ✅ ❌ ❌ ❌ langchain-community
ChatAnthropic ✅ ❌ ❌ ❌ ✅ ✅ langchain-anthropic
ChatAnyscale ✅ ✅ ✅ ✅ ❌ ❌ langchain-community
ChatBaichuan ✅ ❌ ✅ ❌ ❌ ❌ langchain-community
ChatCohere ✅ ❌ ❌ ❌ 🟡 ✅ langchain-cohere
ChatDatabricks ✅ ❌ ✅ ❌ ❌ ❌ langchain-community
ChatDeepInfra ✅ ✅ ✅ ✅ ❌ ❌ langchain-community
ChatEverlyAI ✅ ✅ ✅ ✅ ❌ ❌ langchain-community
ChatFireworks ✅ ❌ ❌ ❌ ✅ ✅ langchain-fireworks
ChatFriendli ✅ ✅ ✅ ✅ ❌ ❌ langchain-community
ChatGooglePalm ✅ ✅ ❌ ❌ ❌ ❌ langchain-community
ChatGroq ✅ ❌ ❌ ❌ 🟡 ✅ langchain-groq
ChatHuggingFace ✅ ✅ ✅ ✅ ❌ ❌ langchain-community
ChatHunyuan ✅ ❌ ✅ ❌ ❌ ❌ langchain-community
ChatJavelinAIGateway ✅ ✅ ❌ ❌ ❌ ❌ langchain-community
ChatKinetica ✅ ❌ ❌ ❌ ❌ ❌ langchain-community
ChatKonko ✅ ❌ ❌ ❌ ❌ ❌ langchain-community
ChatLiteLLM ✅ ✅ ✅ ✅ ❌ ❌ langchain-community
ChatLiteLLMRouter ✅ ✅ ✅ ✅ ❌ ❌ langchain-community
ChatMLX ✅ ✅ ✅ ❌ ❌ ❌ langchain-community
ChatMLflowAIGateway ✅ ❌ ❌ ❌ ❌ ❌ langchain-community
ChatMaritalk ✅ ❌ ❌ ❌ ❌ ❌ langchain-community
ChatMistralAI ✅ ❌ ❌ ❌ ✅ ✅ langchain-mistralai
ChatMlflow ✅ ❌ ✅ ❌ ❌ ❌ langchain-community
ChatOctoAI ✅ ✅ ✅ ✅ ❌ ❌ langchain-community
ChatOllama ✅ ✅ ✅ ✅ ❌ ❌ langchain-community
ChatOpenAI ✅ ❌ ❌ ❌ ✅ ✅ langchain-openai
ChatPerplexity ✅ ❌ ✅ ❌ ❌ ❌ langchain-community
ChatPremAI ✅ ❌ ✅ ❌ ❌ ❌ langchain-community
ChatSparkLLM ✅ ❌ ✅ ❌ ❌ ❌ langchain-community
ChatTongyi ✅ ✅ ✅ ✅ ❌ ❌ langchain-community
ChatVertexAI ✅ ❌ ❌ ❌ ✅ ✅ langchain-google-vertexai
ChatYandexGPT ✅ ✅ ❌ ❌ ❌ ❌ langchain-community
ChatYuan2 ✅ ✅ ✅ ✅ ❌ ❌ langchain-community
ChatZhipuAI ✅ ✅ ✅ ✅ ❌ ❌ langchain-community
ErnieBotChat ✅ ❌ ❌ ❌ ❌ ❌ langchain-community
GPTRouter ✅ ✅ ✅ ✅ ❌ ❌ langchain-community
GigaChat ✅ ✅ ✅ ✅ ❌ ❌ langchain-community
JinaChat ✅ ✅ ✅ ✅ ❌ ❌ langchain-community
LlamaEdgeChatService ✅ ❌ ✅ ❌ ❌ ❌ langchain-community
MiniMaxChat ✅ ✅ ❌ ❌ ❌ ❌ langchain-community
PaiEasChatEndpoint ✅ ❌ ❌ ✅ ❌ ❌ langchain-community
PromptLayerChatOpenAI ✅ ❌ ❌ ❌ ❌ ❌ langchain-community
QianfanChatEndpoint ✅ ✅ ✅ ✅ ❌ ❌ langchain-community
SolarChat ✅ ✅ ✅ ✅ ❌ ❌ langchain-community
VolcEngineMaasChat ✅ ❌ ✅ ❌ ❌ ❌ langchain-community |
https://python.langchain.com/docs/integrations/chat/ai21/ | ## ChatAI21
This notebook covers how to get started with AI21 chat models.
## Installation[](#installation "Direct link to Installation")
```
!pip install -qU langchain-ai21
```
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
We’ll need to get an [AI21 API key](https://docs.ai21.com/) and set the `AI21_API_KEY` environment variable:
```
import os
from getpass import getpass

os.environ["AI21_API_KEY"] = getpass()
```
## Usage[](#usage "Direct link to Usage")
```
from langchain_ai21 import ChatAI21
from langchain_core.prompts import ChatPromptTemplate

chat = ChatAI21(model="j2-ultra")

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant that translates English to French."),
        ("human", "Translate this sentence from English to French. {english_text}."),
    ]
)

chain = prompt | chat
chain.invoke({"english_text": "Hello, how are you?"})
```
```
AIMessage(content='Bonjour, comment vas-tu?')
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:22.425Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/ai21/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/ai21/",
"description": "This notebook covers how to get started with AI21 chat models.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"ai21\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:21 GMT",
"etag": "W/\"ab03433518dd6f171cb1eb3e78a04c34\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::rbgqn-1713753501652-88e4b0c7b750"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/ai21/",
"property": "og:url"
},
{
"content": "ChatAI21 | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook covers how to get started with AI21 chat models.",
"property": "og:description"
}
],
"title": "ChatAI21 | 🦜️🔗 LangChain"
} | ChatAI21
This notebook covers how to get started with AI21 chat models.
Installation
!pip install -qU langchain-ai21
Environment Setup
We’ll need to get an AI21 API key and set the AI21_API_KEY environment variable:
import os
from getpass import getpass
os.environ["AI21_API_KEY"] = getpass()
Usage
from langchain_ai21 import ChatAI21
from langchain_core.prompts import ChatPromptTemplate
chat = ChatAI21(model="j2-ultra")
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant that translates English to French."),
("human", "Translate this sentence from English to French. {english_text}."),
]
)
chain = prompt | chat
chain.invoke({"english_text": "Hello, how are you?"})
AIMessage(content='Bonjour, comment vas-tu?') |
https://python.langchain.com/docs/integrations/chat/alibaba_cloud_pai_eas/ | ## Alibaba Cloud PAI EAS
> [Alibaba Cloud PAI (Platform for AI)](https://www.alibabacloud.com/help/en/pai/?spm=a2c63.p38356.0.0.c26a426ckrxUwZ) is a lightweight and cost-efficient machine learning platform that uses cloud-native technologies. It provides you with an end-to-end modelling service. It accelerates model training based on tens of billions of features and hundreds of billions of samples in more than 100 scenarios.
> [Machine Learning Platform for AI of Alibaba Cloud](https://www.alibabacloud.com/help/en/machine-learning-platform-for-ai/latest/what-is-machine-learning-pai) is a machine learning or deep learning engineering platform intended for enterprises and developers. It provides easy-to-use, cost-effective, high-performance, and easy-to-scale plug-ins that can be applied to various industry scenarios. With over 140 built-in optimization algorithms, `Machine Learning Platform for AI` provides whole-process AI engineering capabilities including data labelling (`PAI-iTAG`), model building (`PAI-Designer` and `PAI-DSW`), model training (`PAI-DLC`), compilation optimization, and inference deployment (`PAI-EAS`).
>
> `PAI-EAS` supports different types of hardware resources, including CPUs and GPUs, and features high throughput and low latency. It allows you to deploy large-scale complex models with a few clicks and perform elastic scale-ins and scale-outs in real-time. It also provides a comprehensive O&M and monitoring system.
## Setup EAS Service[](#setup-eas-service "Direct link to Setup EAS Service")
Set up environment variables to init EAS service URL and token. Use [this document](https://www.alibabacloud.com/help/en/pai/user-guide/service-deployment/) for more information.
```
export EAS_SERVICE_URL=XXX
export EAS_SERVICE_TOKEN=XXX
```
Another option is to use this code:
```
import os

from langchain_community.chat_models import PaiEasChatEndpoint
from langchain_core.messages import HumanMessage

os.environ["EAS_SERVICE_URL"] = "Your_EAS_Service_URL"
os.environ["EAS_SERVICE_TOKEN"] = "Your_EAS_Service_Token"

chat = PaiEasChatEndpoint(
    eas_service_url=os.environ["EAS_SERVICE_URL"],
    eas_service_token=os.environ["EAS_SERVICE_TOKEN"],
)
```
## Run Chat Model[](#run-chat-model "Direct link to Run Chat Model")
You can use the default settings to call EAS service as follows:
```
output = chat([HumanMessage(content="write a funny joke")])
print("output:", output)
```
Or, call EAS service with new inference params:
```
kwargs = {"temperature": 0.8, "top_p": 0.8, "top_k": 5}output = chat([HumanMessage(content="write a funny joke")], **kwargs)print("output:", output)
```
Or, run a stream call to get a stream response:
```
outputs = chat.stream([HumanMessage(content="hi")], streaming=True)
for output in outputs:
    print("stream output:", output)
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:22.600Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/alibaba_cloud_pai_eas/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/alibaba_cloud_pai_eas/",
"description": "[Alibaba Cloud PAI (Platform for",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3418",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"alibaba_cloud_pai_eas\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:22 GMT",
"etag": "W/\"0a8d33669b009d655d93bd8707ce3c44\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::cc8bg-1713753502444-d8b3b1fa2c4b"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/alibaba_cloud_pai_eas/",
"property": "og:url"
},
{
"content": "Alibaba Cloud PAI EAS | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "[Alibaba Cloud PAI (Platform for",
"property": "og:description"
}
],
"title": "Alibaba Cloud PAI EAS | 🦜️🔗 LangChain"
} | Alibaba Cloud PAI EAS
Alibaba Cloud PAI (Platform for AI) is a lightweight and cost-efficient machine learning platform that uses cloud-native technologies. It provides you with an end-to-end modelling service. It accelerates model training based on tens of billions of features and hundreds of billions of samples in more than 100 scenarios.
Machine Learning Platform for AI of Alibaba Cloud is a machine learning or deep learning engineering platform intended for enterprises and developers. It provides easy-to-use, cost-effective, high-performance, and easy-to-scale plug-ins that can be applied to various industry scenarios. With over 140 built-in optimization algorithms, Machine Learning Platform for AI provides whole-process AI engineering capabilities including data labelling (PAI-iTAG), model building (PAI-Designer and PAI-DSW), model training (PAI-DLC), compilation optimization, and inference deployment (PAI-EAS).
PAI-EAS supports different types of hardware resources, including CPUs and GPUs, and features high throughput and low latency. It allows you to deploy large-scale complex models with a few clicks and perform elastic scale-ins and scale-outs in real-time. It also provides a comprehensive O&M and monitoring system.
Setup EAS Service
Set up environment variables to init EAS service URL and token. Use this document for more information.
export EAS_SERVICE_URL=XXX
export EAS_SERVICE_TOKEN=XXX
Another option is to use this code:
import os
from langchain_community.chat_models import PaiEasChatEndpoint
from langchain_core.messages import HumanMessage
os.environ["EAS_SERVICE_URL"] = "Your_EAS_Service_URL"
os.environ["EAS_SERVICE_TOKEN"] = "Your_EAS_Service_Token"
chat = PaiEasChatEndpoint(
eas_service_url=os.environ["EAS_SERVICE_URL"],
eas_service_token=os.environ["EAS_SERVICE_TOKEN"],
)
Run Chat Model
You can use the default settings to call EAS service as follows:
output = chat([HumanMessage(content="write a funny joke")])
print("output:", output)
Or, call EAS service with new inference params:
kwargs = {"temperature": 0.8, "top_p": 0.8, "top_k": 5}
output = chat([HumanMessage(content="write a funny joke")], **kwargs)
print("output:", output)
Or, run a stream call to get a stream response:
outputs = chat.stream([HumanMessage(content="hi")], streaming=True)
for output in outputs:
print("stream output:", output) |
https://python.langchain.com/docs/integrations/chat/anthropic/ | ## ChatAnthropic
This notebook covers how to get started with Anthropic chat models.
## Setup[](#setup "Direct link to Setup")
For setup instructions, please see the Installation and Environment Setup sections of the [Anthropic Platform page](https://python.langchain.com/docs/integrations/platforms/anthropic/).
```
%pip install -qU langchain-anthropic
```
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
We’ll need to get an [Anthropic](https://console.anthropic.com/settings/keys) API key and set the `ANTHROPIC_API_KEY` environment variable:
```
import os
from getpass import getpass

os.environ["ANTHROPIC_API_KEY"] = getpass()
```
The code provided assumes that your ANTHROPIC\_API\_KEY is set in your environment variables. If you would like to manually specify your API key and also choose a different model, you can use the following code:
```
chat = ChatAnthropic(temperature=0, api_key="YOUR_API_KEY", model_name="claude-3-opus-20240229")
```
In these demos, we will use the Claude 3 Opus model, and you can also use the launch version of the Sonnet model with `claude-3-sonnet-20240229`.
You can check the model comparison doc [here](https://docs.anthropic.com/claude/docs/models-overview#model-comparison).
```
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
```
```
chat = ChatAnthropic(temperature=0, model_name="claude-3-opus-20240229")system = ( "You are a helpful assistant that translates {input_language} to {output_language}.")human = "{text}"prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])chain = prompt | chatchain.invoke( { "input_language": "English", "output_language": "Korean", "text": "I love Python", })
```
```
AIMessage(content='저는 파이썬을 사랑합니다.\n\nTranslation:\nI love Python.')
```
## `ChatAnthropic` also supports async and streaming functionality:[](#chatanthropic-also-supports-async-and-streaming-functionality "Direct link to chatanthropic-also-supports-async-and-streaming-functionality")
```
chat = ChatAnthropic(temperature=0, model_name="claude-3-opus-20240229")prompt = ChatPromptTemplate.from_messages([("human", "Tell me a joke about {topic}")])chain = prompt | chatawait chain.ainvoke({"topic": "bear"})
```
```
AIMessage(content='Sure, here\'s a joke about a bear:\n\nA bear walks into a bar and says to the bartender, "I\'ll have a pint of beer and a.......... packet of peanuts."\n\nThe bartender asks, "Why the big pause?"\n\nThe bear replies, "I don\'t know, I\'ve always had them!"')
```
```
chat = ChatAnthropic(temperature=0.3, model_name="claude-3-opus-20240229")
prompt = ChatPromptTemplate.from_messages(
    [("human", "Give me a list of famous tourist attractions in Japan")]
)
chain = prompt | chat
for chunk in chain.stream({}):
    print(chunk.content, end="", flush=True)
```
```
Here is a list of famous tourist attractions in Japan:

1. Tokyo Skytree (Tokyo)
2. Senso-ji Temple (Tokyo)
3. Meiji Shrine (Tokyo)
4. Tokyo DisneySea (Urayasu, Chiba)
5. Fushimi Inari Taisha (Kyoto)
6. Kinkaku-ji (Golden Pavilion) (Kyoto)
7. Kiyomizu-dera (Kyoto)
8. Nijo Castle (Kyoto)
9. Osaka Castle (Osaka)
10. Dotonbori (Osaka)
11. Hiroshima Peace Memorial Park (Hiroshima)
12. Itsukushima Shrine (Miyajima Island, Hiroshima)
13. Himeji Castle (Himeji)
14. Todai-ji Temple (Nara)
15. Nara Park (Nara)
16. Mount Fuji (Shizuoka and Yamanashi Prefectures)
17.
```
With Anthropic’s [tool-calling, or tool-use, API](https://docs.anthropic.com/claude/docs/functions-external-tools), you can define tools for the model to invoke. This is extremely useful for building tool-using chains and agents, as well as for getting structured outputs from a model.
note
Anthropic’s tool-calling functionality is still in beta.
### bind\_tools()[](#bind_tools "Direct link to bind_tools()")
With `ChatAnthropic.bind_tools`, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to Anthropic tool schemas, which look like:
```
{ "name": "...", "description": "...", "input_schema": {...} # JSONSchema}
```
and passed in every model invocation.
```
from langchain_core.pydantic_v1 import BaseModel, Fieldllm = ChatAnthropic(model="claude-3-opus-20240229", temperature=0)class GetWeather(BaseModel): """Get the current weather in a given location""" location: str = Field(..., description="The city and state, e.g. San Francisco, CA")llm_with_tools = llm.bind_tools([GetWeather])
```
```
/Users/bagatur/langchain/libs/core/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: The function `bind_tools` is in beta. It is actively being worked on, so the API may change. warn_beta(
```
```
ai_msg = llm_with_tools.invoke( "what is the weather like in San Francisco",)ai_msg
```
```
AIMessage(content=[{'text': '<thinking>\nThe user is asking about the current weather in a specific location, San Francisco. The relevant tool to answer this is the GetWeather function.\n\nLooking at the parameters for GetWeather:\n- location (required): The user directly provided the location in the query - "San Francisco"\n\nSince the required "location" parameter is present, we can proceed with calling the GetWeather function.\n</thinking>', 'type': 'text'}, {'id': 'toolu_01StzxdWQSZhAMbR1CCchQV9', 'input': {'location': 'San Francisco, CA'}, 'name': 'GetWeather', 'type': 'tool_use'}], response_metadata={'id': 'msg_01HepCTzqXJed5iNuLgV1VCZ', 'model': 'claude-3-opus-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 487, 'output_tokens': 143}}, id='run-1a1b3289-ba2c-47ae-8be1-8929d7cc547e-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'San Francisco, CA'}, 'id': 'toolu_01StzxdWQSZhAMbR1CCchQV9'}])
```
Notice that the output message content is a list that contains a text block and then a tool\_use block:
```
[{'text': '<thinking>\nThe user is asking about the current weather in a specific location, San Francisco. The relevant tool to answer this is the GetWeather function.\n\nLooking at the parameters for GetWeather:\n- location (required): The user directly provided the location in the query - "San Francisco"\n\nSince the required "location" parameter is present, we can proceed with calling the GetWeather function.\n</thinking>', 'type': 'text'}, {'id': 'toolu_01StzxdWQSZhAMbR1CCchQV9', 'input': {'location': 'San Francisco, CA'}, 'name': 'GetWeather', 'type': 'tool_use'}]
```
Crucially, the tool calls are also extracted into the `tool_calls` where they are in a standardized, model-agnostic format:
```
[{'name': 'GetWeather', 'args': {'location': 'San Francisco, CA'}, 'id': 'toolu_01StzxdWQSZhAMbR1CCchQV9'}]
```
tip
ChatAnthropic model outputs are always a single AI message that can have either a single string or a list of content blocks. The content blocks can be text blocks or tool-duse blocks. There can be multiple of each and they can be interspersed.
### Parsing tool calls[](#parsing-tool-calls "Direct link to Parsing tool calls")
The `langchain_anthropic.output_parsers.ToolsOutputParser` makes it easy to parse the tool calls from an Anthropic AI message into Pydantic objects if we’d like:
```
from langchain_anthropic.output_parsers import ToolsOutputParser
```
```
parser = ToolsOutputParser(pydantic_schemas=[GetWeather])chain = llm_with_tools | parserchain.invoke("What is the weather like in nyc, la, sf and cleveland")
```
```
[GetWeather(location='New York City, NY'), GetWeather(location='Los Angeles, CA'), GetWeather(location='San Francisco, CA'), GetWeather(location='Cleveland, OH')]
```
### with\_structured\_output()[](#with_structured_output "Direct link to with_structured_output()")
The [BaseChatModel.with\_structured\_output interface](https://python.langchain.com/docs/modules/model_io/chat/structured_output/) makes it easy to get structured output from chat models. You can use `ChatAnthropic.with_structured_output`, which uses tool-calling under the hood), to get the model to more reliably return an output in a specific format:
```
structured_llm = llm.with_structured_output(GetWeather)structured_llm.invoke( "what is the weather like in San Francisco",)
```
```
GetWeather(location='San Francisco, CA')
```
The main difference between using
```
llm.with_structured_output(GetWeather)
```
vs
```
llm.bind_tools([GetWeather]) | ToolsOutputParser(pydantic_schemas=[GetWeather])
```
is that it will return only the first GetWeather call, whereas the second approach will return a list.
### Passing tool results to model[](#passing-tool-results-to-model "Direct link to Passing tool results to model")
We can use `ToolMessage`s with the appropriate `tool_call_id`s to pass tool results back to the model:
```
from langchain_core.messages import AIMessage, HumanMessage, ToolMessagemessages = [ HumanMessage("What is the weather like in San Francisco"), AIMessage( content=[ { "text": '<thinking>\nBased on the user\'s question, the relevant function to call is GetWeather, which requires the "location" parameter.\n\nThe user has directly specified the location as "San Francisco". Since San Francisco is a well known city, I can reasonably infer they mean San Francisco, CA without needing the state specified.\n\nAll the required parameters are provided, so I can proceed with the API call.\n</thinking>', "type": "text", }, { "type": "tool_use", "id": "toolu_01SCgExKzQ7eqSkMHfygvYuu", "name": "GetWeather", "input": {"location": "San Francisco, CA"}, "text": None, }, ], ), ToolMessage( "Rain. High 54F. Winds SW at 15 to 25 mph. Chance of rain 100%.", tool_call_id="toolu_01SCgExKzQ7eqSkMHfygvYuu", ),]llm_with_tools.invoke(messages)
```
```
AIMessage(content='Based on calling the GetWeather function, the weather in San Francisco, CA is:\nRain with a high temperature of 54°F and winds from the southwest at 15-25 mph. There is a 100% chance of rain.', response_metadata={'id': 'msg_01J7nWVRPPTgae4eDpf9yR3M', 'model': 'claude-3-opus-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 670, 'output_tokens': 56}}, id='run-44fcd34f-9c24-464f-94dd-63bd0d22870d-0')
```
### Streaming[](#streaming "Direct link to Streaming")
danger
Anthropic does not currently support streaming tool calls. Attempting to stream will yield a single final message.
```
list(llm_with_tools.stream("What's the weather in san francisco"))
```
```
/Users/bagatur/langchain/libs/partners/anthropic/langchain_anthropic/chat_models.py:328: UserWarning: stream: Tool use is not yet supported in streaming mode. warnings.warn("stream: Tool use is not yet supported in streaming mode.")
```
```
[AIMessage(content=[{'text': '<thinking>\nThe user is asking for the current weather in a specific location, San Francisco. The GetWeather function is the relevant tool to answer this request, as it returns the current weather for a given location.\n\nThe GetWeather function has one required parameter:\nlocation: The city and state, e.g. San Francisco, CA\n\nThe user provided the city San Francisco in their request. They did not specify the state, but it can be reasonably inferred that they are referring to San Francisco, California since that is the most well known city with that name.\n\nSince the required location parameter has been provided by the user, we can proceed with calling the GetWeather function.\n</thinking>', 'type': 'text'}, {'text': None, 'type': 'tool_use', 'id': 'toolu_01V9ZripoQzuY8HubspJy6fP', 'name': 'GetWeather', 'input': {'location': 'San Francisco, CA'}}], id='run-b825206b-5b6b-48bc-ad8d-802dee310c7f')]
```
## Multimodal[](#multimodal "Direct link to Multimodal")
Anthropic’s Claude-3 models are compatible with both image and text inputs. You can use this as follows:
```
# open ../../../static/img/brand/wordmark.png as base64 strimport base64from pathlib import Pathfrom IPython.display import HTMLimg_path = Path("../../../static/img/brand/wordmark.png")img_base64 = base64.b64encode(img_path.read_bytes()).decode("utf-8")# display b64 image in notebookHTML(f'<img src="data:image/png;base64,{img_base64}">')
```
![](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAABjIAAAD7CAYAAADAbQ2OAAAACXBIWXMAAAsSAAALEgHS3X78AAAgAElEQVR4nOzd7XEbN9fG8evO5Lv8FLAj3hVIKQAjpgIrFYiuwEwFoSuwXIGpCiJVkNWggEgVhJot4DYryPMBhzYtkxJfdvGy+//NaBw79gISyV0AB+fgP//++69Qnsq5kaSR/XZsv76RdP7sr17E6RGADDxJWjz7s9p+/SLpQZIa72sBAAAAAAAAhfgPgYx8Vc6tAhPnCkGL1a+n6XoFoEfu9S3AsZC0IMgBAAAAAACA3BDIyIRlWKyCFmP79SRhlwAM16NCcONB0gPBDQAAAAAAAKREICORyrn1gMVYZFkAyNujQpmqWlLdeP8laW8AAAAAAAAwGAQyIrEyUZcKQYuxCFwAKNsqsHFLxgYAAAAAAAC6RCCjQ1Yu6lLSRNJZ0s4AQHeWsqCGQmCDbA0AAAAAAAC0hkBGyyx4MVUIYJB1AWCI7kRQAwAAAAAAAC0hkNECMi8AYKs7SfPG+9vUHQEAAAAAAECZCGQcoXJuFbx4m7grAJC7paS5pOvG+0XargAAAAAAAKAkBDL2ZNkXE/uidBQA7O9eIUtjnrojAAAAAAAAyB+BjB1Vzo0VghdXaXsCAL2xlHStkKXBWRoAAAAAAADYiEDGK6x81FTSReq+AECP3UiaUXYKAAAAAAAAzxHI2KJybiJpJspHAUBMBDQAAAAAAADwHQIZzxDAAIAsENAAAAAAAACAJAIZX9kZGHMRwACAnBDQAAAAAAAAGLjBBzIsgDETZ2AAQM4+KQQ0OBQcAAAAAABgYAYbyKicGykEMK7S9gQAsKOlQjDjOnVHAAAAAAAAEM8gAxmVczNJU0knibsCANjfk6RJ432duiMAAAAAAADo3qACGZyDAQC9cqcQ0KDcFAAAAAAAQI8NIpBROfdG0rUoIwUAfUO5KQAAAAAAgJ7rfSCjcu5SIQuDMlIA0F/3CtkZi9QdAQAAAAAAQLt6G8iwLIy5pLeJuwIAiIPsDAAAAAAAgB7qZSCDLAwAGLR7SZecnQEAAAAAANAPvQpkWBbGTNL7xF0BAKS1VCg1dZu6IwAAAAAAADhObwIZlXPnClkYZ4m7AgDIx6fG+2nqTgAAAAAAAOBwvQhkVM5NJF2LUlIAgB89KpSaWqTuCAAAAAAAAPb3U+oOHKtybi7pswhiAAA2O5P0YOcnAQAAAAAAoDDFZmTYeRi1KCUFANjdh8b7WepOAAAAAAAAYHdFBjLsPIxaZGEAAPZ3p3AQ+JfUHQEAAAAAAMDriistZedh1CKIAQA4zFtJdeXcKHVHAAAAAAAA8LqiMjIq56aSPqbuBwCgF5aSxo33D6k7AgAAAAAAgO2KyciwQ70JYgAA2nKikJnBIeAAAAAAAAAZyz4jww71vpZ0lbovAIDeetd4P0/dCQAAAAAAAPwo60CGBTFqSWeJuwIA6D+CGQAAAAAAABnKtrQUQQwAQGSfK+euU3cCAAAAAAAA38syI4MgBgAgoZvG+0nqTgAAAAAAACDILiODIAYAILGryrl56k4AAAAAAAAgyCqQQRADAJAJghkAAAAAAACZyCaQQRADAJAZghkAAAAAAAAZyCKQQRADAJCpq8q5SepOAAAAAAAADFkWh31Xzj2IIEYb7hUCQg+Svkh6aLz/8vwvVc6N7T/HkkaSzsXPHwBe8q7xfp66EwAAAAAAAEOUPJBhZTuuknaibDeSbhvvb4+5iGXFjCVd2tfJ8V0DgF4hmAEAAAAAAJBA0kAGQYyjfJB0vSnjog2Vc6uABq8PAARLSePG+4fUHQEAAAAAABiSZIEMqzn+OUnjZbuXNGm8X8RorHJuJGkiaSqyNACAYAYAAAAAAEBkSQIZttv/z+gNl+/3xvvrFA1b6amZpPcp2geAjDwqBDM6yYgDAAAAAADA96IHMirnzhUOpGZ3/36yqM1ur9+1pIvUfQGAhO4b78epOwEAAAAAADAEUQMZtqv/QdJptEb7IYsgxrrKuamkj6n7AQAJ3TTeT1J3AgAAAAAAoO9+itzerQhi7Ov33IIYkmQlrn5RqBcPAEN0Zec9AQAAAAAAoEPRMjIq567F+Qr7yr50iWXZ1JLOEncFAFL5hcO/AQAAAAAAuhMlkMHh3gf7b+P9InUnXkMwA0juSdJC0heF8n3bjO3Xc3FOUZueJJ1z+PewVc7NJP3RYRMfGu9nHV4fAAC0yObJ55JWv2rt9ysjba5acf/s9+vj/MXqq4T1gi5Vzo0l/dVhE9lvLgVKVzk3knSpsF7xxr7O9O0++KCw5lgz58bPXTdgb8h51+300E0pg5LG+y82gKhFMAPo2pPsIa4weakPucjaxOpcYcAwFsGNQ50qPOcuE/cDAAAACVTOrcbV61/HjK0vNvzZ2w3tStKjQmBjtdj3wGIfgNxZmeaptq8jXqz9+t7+zY2kWSnrpWhf5xkZlXMPYnH7EEVkY6yzwVstFkOBtj0qLJTfdnlfsM/wRGFBnvOM9ve7nR+EASIjAwCA4Xi2g3is/ObAjwqBjVv1eBczGRlAeWzdYa7j1oo/KQQ0enlvw3adBjIiTOr76rHx/vz1v5Yfi6h+Tt0PoAeeFB7u8xRBTRtcTBUmaLlNzHLGeRkDRSADAIB+K3zTT5SNUbERyADKUjk3lfSxpcs9Sbpk/j0snZWWsgcKQYzDzFN34FCN93N77a9S9wUo1L1C8GKeshM2GJhYCaqpwqSttAlbCnN9q4EMAACAglnmxWpzT8lj4TOFxcOPlXOroMac3cwAYqmcm6vdtcJTSXXl3JhgxnD81MVFbeFr3sW1B6JO3YEjTRV2fADY3aOkXxvvx6mDGOsa77803s8a70eSPkhaJu5S7s4q5ygvBQAAULDKucvKuVrSPwq12UsOYjy3Cmr8r3JutRERADpj2etdbHg+UQhmsJlwIDoJZEiaqV8P+pjuS48k2q6OSep+AIVYSnrXeH9+6MHdsVhZm5FCPUps954JIQAAQHkq5yaVcwtJf2rzgdt9cyXpr8q5mvErgC5Uzl2q24o9J5LmtqkePdd6IMMefu/bvu5ALBWyGYpnwZgPqfsBZO5O0iinDIzXWIbGVNIvIvPqJQykAAAACrEWwPisYW7KvBABDQAti1ix50xhUz16rtVABiWljvIo6bz0bIxnrhUO3wHwvaWk3xrvL0utS9t4/9B4fy4CltucioEUAABA1qyE1ELDDWA8tx7QGKXuDIDiTRUyJmJ4z32r/9rOyJiJh/8h7iSNG+8XqTvSJlug7UWGCdCiVdDyNnVH2mDlpn4VZ2ds8p5anQAAAPmpnDu3MzD+FGsYm1xI+qdybkaWMYBD2L0j9prgLHJ7iKy1QIYt1lBSan83Je/Kfo0t1t6n7geQiRs7C2ORuiNtsrM9zkWpqU3mqTsAAACAb+zQ2b81jDMwjvWHpAfKTQE4wKXiZWOst4keazMj47rFaw3FTeP9JHUnIpil7gCQgQ99/rxbcGYsApfPn
VXOkZkGAACQmGVhPKjbQ2f76FSh3NQsdUcAFCVFUOGEwGu/tRLIqJybiN0M+7rr86LmOtutzeImhuydlWDqNTsIfCzpJnVfMkNKPgAAQEK2seRvhQNhcZg/KuceGNcC2NF4YO0igqMDGfYQIxtjP4+SJqk7EdksdQeARN413s9TdyImC9ISzPjmRDwnAQAAoquce1M5dyvpY+q+9MSZpAXnwAHYQeyyUivcn3qsjYyMmCfQ98FS0qSvZ2JsQ1YGBmpwQYwVC2bcJe5GTq6Y8AEAAMRjY69a0tvEXembE0m1VeYAgB8kLu9E1liPHRXIqJwbifqS+7puvH9I3YlEZqk7AEQ02CDGmok4AHwdWRkAAAAR2CJaLUpJdeVE0meCGQCAmI7NyJi10YkBeRpCnfxtyMrAgBDEUDgzQ6E+5TJxV3JxwcFjAAAA3bLF9b9E5YgYCGYAAKI5OJBhizFX7XVlEGapO5CBWeoOAB0jiLFmLZiBYJ66AwAAAH1li+qfU/djYAhmAHhuqJVo0LFjMjJmbXViIJaSblN3IjWyMtBzBDE2sHJ6v6fuRyZOmegBAAC0jyBGUgQzAHyV+FzgOmHb6NjPh/wjy8a4aLcrvXc7tAO+X3At3j/onyRBDDuraCzp3L5Gkk43/NVHSV8UHuoPkuqY96TG++vKuUvx2ZfCRoB54j4AAAD0BkGMLHyunFvY5kUAuFea+T/ZID12aEbGrM1ODMTgszFWGu9vJT2l7gfQoqhBjMq5N5Vz08q5B0n/KEza3isMEjYFMaRw0OGFpD8k/Snpf5Vzt5F3Tk3EeRkSWRkAAACtIYiRldvKufPUnQCQhVTroHWidhHB3oEMsjEOVqfuQGZmqTsAtCRaEKNyblQ5N5f0P0kfFYITx3gr2zlVOTernHtzbB9f0ni/UMjIAvdAAACAoxHEyM6JpHnX8woARUgRyLijGk6/HZKRMWu7EwPwyAfpe7bwS1YGShcliGEZGDOF7IurDpo4VcjUeLDyT51pvJ+Jz75EVgYAAMBRbOc/m2TycybKqAKDZxsZbyI3yzOh5/YKZNhAgWyM/S1SdyBTs9QdAI4QK4hxrlDj8Y+u21IIaPxpJae63EU16/DaJZmm7gAAAECJbKxaK2QAlOz+la9Sva2cY6wLYKZ45aXvOKOn//Y97JsH0WE4aGaDxvu5DW6OLY8DxBYriDFRmlT5t5IWlXPjxvvW71/22Z9p+3keQ3FmP+M6dUcAAAAKU6ucIMajwprA6mthO5V3ZoGbc0mrX8f2a84/g1nl3O2+3yuA/mi8X9jc/2PHTS3FmvUg7BzIqJwbqZuSJhi2qaS/UncC2EOsIMa1wgHeqZxIqivnJo33XdS2nIl6xlL4OYwT9wEAAKAYNk7OeTPck0Jt+FpS3UaZabtGbb/9Oja3dZpLhfHk22PbadmJQompcdpuAEip8f7azlvu8h41IWg6DPuUlpp01QkMl+1ELjllFsMSK4gxV9ogxsqJQqmpSdsX5pycry5sAgoAAIBX2HluOYyTn1tK+iTpl8b7UeP9tPH+tuuzMhvvF4331433l5L+T9I7hQyQXFxwLhwAhTXlru5N7zrafIkM7RPIIEUHXZmk7gCwg5hBjNyy367trI7Wr9vBNUs0S90BAACA3Fl5pXnqfjzzpDBPeGPBi2RlpRvvvzTezxvvzyX9qnw2DF53fP4egMxZUHcs6a7Fyy4VaZ0G+dgpkGER9JxrL6Jglv71IXU/gBcMOYghfSsz1fYEZN7y9Up1yeQOAADgVbfKZ11iFcAY5biI1nhfN96PFQIaqTM0TsTGHWDwLNh6Kel3HX8A+KOkcY73X3Rr14wMsjGOs3WBisWroPF+pvQDLGCToQcxVk60Vo+3DbYro80dGaU6UahtDAAAgA1sc+VF6n4oLL79nmsA4zkLaJyrnYXDY7ynnCoAKZyZIelc0s0B/3wVRD5PmQGHdF4NZFg5kZwP0irBxpIsVt/zf5VzBIqCidIOroDnCGJ876KD+xW1LAOeAwAAABvY5r8cSpLeSRrZIlxRrM9jpd08OEvYNoCM2Pk+E3072+dO28/QfFQ4g+jXUoLI6M5//v333xf/QuXctfI8TKsojff/Wf+9DcYW+pYaey9pYmWWBst22nxO3Q9ABDG2WSpM4Fo5uNDuhf9r41o98Au7SspVOTeT9EeHTXyw7EUAAAYlg/HyUtK0D4tnNva+Vbrslv+2ueZROTeW9Fdb19vg3kp0AQAysEtpqUnXnRgCy75YN9X39T0vJP1TOTff51DdyrmR/ZtelKiyweEh6WVAmwhibNdqjVsLiORyEGFqk9QdAAAAyIktVKccL/eqDrvVqB8r3Zx7lqhdAEAPvBjIsMX3XA7TKt3zQMZky9+7kvR35dyicu66cm5SOTdeBSoq587t97PKuQdJ/9i/mXfV8dgsvYy6+Ujld4IYr5q0HDytW7xWySapOwAAAJCZWcK27xWCGL3LmLU5d4pgxhVnZQAADvVaRsYkRicG4nItGDGSdPrK3z9VKOn1WSFV8n+Vc/9K+tt+/4e+P7vkre1W6YuJOPwb8d3EqHlbeBBDCgHuSYvXq1u8VslONmTvAQAADJKNi1KVQLppvB+3VU41RxbMSDHnniRoEwDQA1sDGbbo/jZiX/ruRN8Ocx111Ma8o+tGZwPGsQhmIJ57G8x3qgdBjJU2D6fu3S63IxDIAAAACFIdqn0TY16QibG2H7DblTbnEQCAAfn5hf/HYkr7/qicu1V3gYzTyrlpjB3lMTTef7Esk1rfZ58AbXtShHtej4IYUrjfnLeRam+f9Se9nqk2BDx7AXxl56a9UVho0tqvsj/fNj5aP3voi0LAePXros2DVnE42zh2bl9v1n6Vwnxh03PxSdJi7fe1/bp6bdkcEIll2Y/07XO5/vrt+vlcrH813tdt9rFklXMTpRkbDimIsRqHXypUfojlpHJu0pdzR/Z14L3/UeE5LvFcT2bttRs9+1o51+by/M+f3avXbyHu/WjREMaW//n33383/g9bcCcjo32PCm+K9x1dfylp1KcUWJskPIjzWtCdX7sePPQsiLHye1uB08q5WulKB+Tmt8b729SdwH4q52YKZR+78qHxftbh9ZGYBS3G+jb56HoTx73C+OpBUs0iSPdsg85Y4fUdq7ux7aPsdRWvbStsPjLWt89nl2OWJ9lrpwG/fpVzC8UPZDw23p9HbjMLEcYxz93boeNHsfvqX0f3Zruj+7l2/1h9dfG+Xurbff+BucTx1haEx/p2/+9yTepJ34/L6g7bQk8McWy5MZBhH9j/xe8OWtK7xQ6b3NcimIH2df556WkQQ5LuGu9bySConLtWdwHe0gxqJ2BfEMjAvmy8PVbIxLpU+jHOarPNPLedV6Wy13j1+o6V7jV+knQrXtu92OLA6vVLmTU6uNcvwuL0Jo8KB3v3ZkPgvirnHhS3EsJ/j10MyzWQYesXE6W9f9wpPNdvhxoQ3ZcFnVb3/Rw22d0p3P9vh3xvwjeMLbcHMi4l/RmzI2jVU+P9KHUn2kYwAx3o/LPS
4yCG1OLPL8EusJwtG+/fvP7XkBMCGdiVjbMvlfezYTU5uS5p8cOy+zqxz0KWLaxNlEeA6rknhXP15iW9trGsLT5OlN9rJ4XF9ut9S/J0+Nl4aLxv/byDBJm6S4UgxiACRdskCCAdnd2dUyBjbYFxqvxKY98pLIbPU3ckN2vBi4nye93WFf0aWrnASYdNTLu4hzO23EnUseW2MzKo0V220z7WnGy8f1g7MyO3Dy7KNOny4j0PYkjt7i5atHit0p1Uzo1JJwb6wxY3pgrPnRLOAzpVyJJ7Xzl3L2lWyD0p6e5Jm6TnuIC17lQh6PpH5dyNwqSzTtul9Ap57aTQv88WPJ/tMd/LYWfxTmxRMXZ/O1kAK03jfW33hVjzl4nSHejemrVn/FT5rlO8lfTWsuCvFQKig97hb2tLU5VTUr/013Ckbu/tXW0EzGFsOUndj1dEHVv+tOXPCWSUb5K6A12wAeZYYdcMcIy7Lm+uAwhiSPq6c7ENi5au0xc8h4EeqJwb2fNgoTDALyGI8dyFpL8q52qb9OOZyrmxlWT5rPwXwtddaeCvbeXcxM5iKO21O1UIaCwsy6tPZpHbu+vbBsAjzSK2ddbiXCIJCyouFJ7xuQYx1p0o9HVROTezIMyg2DO7VsjkKSWIsW7wr+EQPBtb5hzEeK7zseUPgQx7kJRwA8bLLkofFGxDMAMtaT0NfmUoQQzDwKkb49QdAHC4yrk3trjxoPA86MPYej2gMUrdmRzY63yrsBhS0iL4c6vXdj6UBZFnAYwSA4wrp5L+rJy77cNrt1aaJ5aleroB8FBWFuQmYpOTiG21xhYZFyongPHc+mL4JHFfongWwChpYXib9dews7UNxMXY8nWbMjLGbTaApCapO9AVghk40k1XtfsGFsRAd85YKATKZLujFyp3ceM1F5IeLFAzWGuvc4m7Obe5Us8XRCrnzm0hq/QAxnNvFT6XpW9ki137e1pYaZZYZhHbKi6jyEr7/KV+3ENOFLK7ertJwRaG5+pPAOO5E0kfK+f68AwYNMaWu9kUyCjuQYKtev1aEszAEWZdXJQgBlo2Tt0BALuziXIt6U/1M4Cx7kShDm5vFz5eYotYfX2dVwsivdjhv86Cb3+rnwtZUlhU/bvw3dUx56/3lJTazDZ83Udq7rSU58jac/596r50YLVJoVeB7LWF4SHM0c8UngG9eg2HgrHl7jYFMvo6sBui075HZAlm4AB3XWRjEMRAB8apOwBgN2sT5aGNo1cLH+PUHYnFnvd9XMR67q3CDrri5xKWhfGgkCU1BJ9LDGbYYnbMXaiziG2VaB6hjRtJ/+0qU75NtvhWq9/P+dViY/FlBteyMPq6MPySXm5G6DPGlvv5LpAxpEnIgExSd6BrBDOwp3nbF7RdD0MNYixSd6DHxqk7AOB1Pd9BtYsThRq4k9Qd6dCTNMhNCyeS6pJfW+t7rbLrTB+ixGBG7GyMOmJ7xbFslS7m10tJnxQCGJPCghhDuY9cKdz7i1wIt0XSWsN6Xj/3VgW/hkMy0LHlUdmjzzMyxsf0BlnqdXmpFYIZ2NF94/1tmxe0APDHNq9ZkhYnH+OWrtMnxaTaA0O0dhjfEHZQ7aLEhdNdLQY40VxZ1U+fpO7IvizI+FnDDTJeF5ZRE3PeOovYVsnanDctJX2QNGq8n5YQwFhTazhBjJUzFZiVZ3PzWsN7vTY5UwhmFPUaDsmAx5bSEWPL54EM3uD9M5iFMIIZ2OJJYdD838b7cZsXth0O2wb4nyTdtdlehh5Td2AAeC4DGVrbndmnw/jaUOSC9w7ONdyJ5koxr+1aSZGhBxlPJBVRXsT6GKtkD9kYu2sjkPEk6XeFAMaswMPVLzTcRfFVVl4R8xF7Rv2l4QavNzlTIc+BgWJsecDYkoyMYRhEVoZEMAPfuZP0a+P9atC86KCNa20eKL2znUaXkv6rEEjp43vyocVrjVq8Vp+MU3cAwPcGWGJiX8UseO+BRZEg+9d27fM59MWBlVOVkX0Qc756HbGtolkm+6FzmCeFOdGo8f66wAAGgiKCGfZs+py6H5k6FWWmkK+9x5ZfAxm2a59Bej+NU3cgJoIZg7c6NO6yy91Wlra6aZL8yWrKSgqllxrvZwoL9e/Ur/dl3eK1Ri1eq0+ynjQAQ0MQY2efc1/0wME+2+H22eHzuVUJmSnjSO08tV1mdgD2/Xnd61sAY95BfxBf1tldBDF2cqZ2S8UBbdprbLmekcFko7/GqTsQG8GMQbqX9EvEQ+NmG/7ssfF+uukvN95/scH8SP3J0KhbvBbPoM1ilVkA8AoWSffG7r/+mmcaqLoVn89SjSO1QzbG/nZd/LxXyIYfE8DopVO1O/drhS1+EsTYzYWdHQXkaOexJYGMYTjJdLLRKYIZg7GU9JsNmtssdbSVZWNsWmDeGMRYZwGNmcI9t+QzNO7bChiREfiyId6/gUxdi0XSfZyI3X99daIw4cwmUGVnYhD8L5CNc04jNcc9aU87lJe6UdhMNubskd47y2kh3O4d89T9KMx7W8sAcrPz2HI9kDHurDvIwTh1B1IgmNF7dwoHx8WelEw2/NnNPoN3Kzl1Kek3lfn+nLd4LRbqX8bPB0iscm4mau4f4qJy7tUgP4p0pkzOXuDzWbxxpHbuImVt99GmudaqnO8k1mYyZOF9DuUFbbHzVmyGO0RWGxGANTuNLcnIGI7Bvr4EM3ppqVB79TL2wXH20N80WT5od4oFYc4lPR7Tr8ieWk4ZH7d4rT4ape4AMGS2c+2P1P0o2IwJc28l39lpC2p8PssWa55KNsbhVj+7pUKJ3P9GLOeL/OSwED5XvEyuvjlVJhsRgA1eHVv+JH1dmCOS2W/j1B1IiWBGrzxKSll7dbzhzx6P2Ylk2RnnCjubSjBr+Xrjlq/XN+PUHQCGysbI89T9KNyJqEvfZ8kWtKw05TxF22jVOFI7BDIOZBuvflfIhJ8RwBi8pM91y/R8m6r9nnhvz1AgRy+OLVcZGYPdrT8gpxlEzZMimNELqyBGyvTlTam08zYu3Hg/kfSujWt1qNVsDBtAUXP+ZaPUHQAGbCZ2/LXhiglzb51qhzPCOjIXm/GKZvPTGPfYx9hZ3H3TeH/NzxBrrlJk5NlYYha73Z6ape4AsMWLY8tVIGMUpStIbfABK4IZRbtpvD/PYAC96XPU2g4vCxLkHMyYtHy95DVWC8AiKpCATdDfp+5Hj8xSdwCdmcbeMGXnYnC4d/koKwWUa5agzbkIYLeFTSbI2daxJYGMYRmn7kAOCGYU6cayFXLwPHvgqe30agtm/KL83qOf9jnQfEeTlq/XS5Vzgw9EAwmUUA7pUdL9s6/cnh0rTJj386TvX9ecz9I6UcSsDHsflXQuxvPP6FPa7mRlHKmdOlI7wKHWn+cf7OuTvr935OYi5sHftsEk9wD2Uj/e83N+fs9SdwBR9WJs+bP9ygLJMPA6m8b7B3sQ1iKin7tsghhbFpMXXbRl79GJpD+7uP4BHhvvW12koKzUXgZdGhCIze6/udyflgrjlQf7dbFLAN12MZ2vfV0q/ZhnIib
Nzz3q2+u72GXDgI1HRgqv61h5LOxMK+dilZ6ZR2hjX0/69jo+SHrY5Wdh85HVZ3Ss4WVhjmI00sFGHOAYq/tFrXCv2Llsss2fVveLS6W/Z0wVL+NpHqmdXez97JayfH5fVc5NM6h6gXb1emy5CmSwQDIMBDLW2ELxVNLn1H3BVtkEMcyme2XdVWON97eVc++U/j36qG52zKWqqV2isdhNCMQ0S9z+k8LCwPzQc6Fs0F9r7d5hk5SppKvju3iQidL/bHPwqLAgc3tIVqe9Jx5ki0cWtLpU+PmmmnieWPudZjLZ7t8cJtfSka+j9HWBvV793j6jq9cy9QJlDKMIbeS4kx3Ds1S4Xxz8XJcku9csFO7/0zcWLdUAACAASURBVAye6xeVc+Oug4W2wST1PfFO4ed+e+jC/wvP76nSbaCZqIwsZLysr2PLqZ7NHVaBjFwGg+jWaeXcG6Kt3zTezyvnpPQLxfhRbkGMJOw9OlK6EgpLSZO27xv2YJy0eU0AaEPiyfKjpGsrMdg6m6RMbCPHTPHPADmNseCRsRtJsw5KUn6RLZDZDv+Z0szvpup+MSSHxZZOXkfpu4WEmQVtpur3XD3G93bwojHQgieF+8W8i4uvPddnCvf+FAGNqbrfcDXr+Pov6fKe//z5fa34AY2J8ni24jB9H1tO9Ozz/9PGv4Y+IyvjmQIOVx4ighhrGu9nCjtAYnuUND5m19ALpkpf4qQk3LuBeGYJ2lxKetd4f97VYse6xvsvVi7wF8WvjxutnnZGbiT9t/F+0sVCyLrG+7rxfizpd8U/L+XUJrudyGBHbrTXUQqZufZa/qYenqsR8YB4AhlI4UnhuT6K9Fxf2Pz5V8W/97/t8vOc8N5/p7j3/Lrx/lzhfJSYzjjDrEhDGlt+N3f4iQNEB4fXewMbXMR+YGAzghibTRR3EttZEMMGupSV2g8lIIEIbKAce7J8JynKQsdzdo8fK26wfEiBjEdJv8RaBFnXeH+t8NrGDlRNOrz2rMNrv+RJ0q8pXkcpBDQU5nCfYrfdsVjzUgIZiO2TpCgbE56zjMeR+nXv7/Lamywl/dZ4f5nonj9T/IDUOGJbOM4Qx5bfBzLE4sjQEMjYwh4YN6n7MXAEMbaw1L5Yiz+rIEZXZejIxtgfz2ogjknk9n63iXKysp+WnXGpeMGM04Hs/PtgGTbJFlHXAlUxJ5ydjFUS7si9UViQrBO0/dVaFtVvir8bsiujGI2k/AxicJYKQc+khydb22PFvfdPuriojRdilrN5VLjnxzrAfCN75owV734/pE0mJWNsKUpLDdEodQdyZovoBDPSKDWIMY7VkD00us4c6jSIQTbGwVId/gYMhk2W30Zs8p3tbMrFRPEmJeNI7aSw2sk5S90R6bsFrViLIScdlZeadXDN1/xuOx6zOV/QFtfG6kcwYxShjdi7RjFcqwXwOnVHpO/u/bEy+rsqTxRz3riaBy8itrmVzf1jff/jSO3gMIwt18aWP4kd+kPT58Pi2jIVg97YSglibIp8X1TOLSrnJjFq/drDq6sBadeZGFI4SIxsDAA5irkb7V2KkhMvsXv/JFJz40jtxLZUeI4m3cn5XOSsTqnl19cmr7GzMXILNH61thuyD8GMrmUThEKvZbUAvlL6vd9MOrjmJjHmwXuzsWKMjNmTgWTLloixZfC1LUpLDRDnorwswe6FoSsliLF6b2yaNJ5K+ixpUTkXY9fEpINrdj54s4WIq66u33cRD8UEhirWrrffcwtirNgCaYzM1FGENmJbTTSzLGNju4RjnQc3bvl6k5av95rsAo3P9SSYMY7QRpafR/RKlgvgK5Ey+lfGbV7Mzi2LsQFu9fzO8jVUvPEp64T5YWz5zXj1H5SWGiZuUK9Yiy6WPDkoQTFBjDX1C//vRNLHyrmHLgOG9sC4b/GSMYIYbyTNu7r+QHDvBjpi9+wYO77vct3lvWYWoY2+ZQhnPdFcc604Y9vWXl8bP8TcBPEh9yDGylowA9vlujCJfsg6iLEm1r1/3PL1Yu32zvo1tEyfGJtMmGvmhbHl976W2iaQMUzcoHZgN4xJ6n70WJQgRuXcm8q5WYuXrHf4O2eS/u44O6Ota8cagM+V5pBOANjFJEIby0jtHMUmzJ2XMehZCYNJARPN1UadWYy2WjwnI2bZgrtc6k/vyt53v6fuR8YWqTuA3lpKusx5AXzF+hhjE8VpW8/2iEHsDyU8vxXn2c06YV5KGltGyRpajS05I2OYeM13ZHXomBy0L1oQQyHw0OYkeJ/ahB8r5+Yttv2VPdSOXWiKEsSonJso7gG6ALCvGAsRWR0a/IoYdXhHEdqI4UNudYtfMVecnXNtzTdildQoItC4iWV5tZmpG0uMzKxFhDYwTJPczsR4Raxd06OWrvNG3Z9b+lRK8Nrea13/PChjnI/Sxpa3iji25IyMYSKQsQebHMRI5RuK2EGMM62loR3rgEHEVVfBDB23syZWEONc4fwQAMiWTWR/UXeTxPsCJyRd68Mc5LGURZAVe+4XEaiysVxrY7hXlBRo3GQiSuICsdwV9kyPee8ft3GRxvtF4/25uq2/P+nw2l2oO77+qOPrYzeMLbcbSZSWGqoTDvze21TdR8CHIFYQ41zhcL+ztT8bt9jEvgGEToIZdlbGIe/LmEGMuss2AKAtjfcPHU6aZx1cszP2fHjquJk+jEUnqTtwoHmENtp4fWOVlSot0PgD22iT+/k7KZQcnEKeis3eUiFB7HUdbjS5t7l0SeqOr08Z6DzEykRt2zxCG18zMjBMfZg8RsPh362IGcSo9eODeNxiM4ekzl1VznUxwdz3mrGCGG8Ufk4nXbYDAG1bmzS3VaqlxMmyFDYEYLubEmoXb2Lvx67HtKMWrhErkDGJ1E7XYpWOKUapn1Fk7brU7K1IAdtR2xdc22jyu9q7x81auk5M3M/676bQOUPUsSWBjOEikLEn2+k0SdyNUsUOYmxaPG9tMnzEgWnvK+fanpTvMyCNGcSoxa6Oto1TdwAYCps0j9XOpHl2dIfSYML8slnqDhyp7vj6bYwBxi1c4zU3hdW53yrigb7AUC1V/mes6/N0Rl1d2Ep+n+v47+GxxMXivjyr8KJZ6g4cqe74+qcSgYwhG6fuQIlsF8On1P0oTA5BDEk6swX2VtiO3UPKbsxb7scX7XaGS+wgRqya1gDQmRYmzU8lTpbxqj4sfmcdqLJxXYyszlmENmIiKwPoTrHZGGvqjq/f6UY2OztjrOM2mpQejEI/MbbcEYGM4Wp1UXdIGu85L2N3uQQxVtrOhpgc8G9O1P7gqX7l/xPEAIADrU2a32n/STOT5X7qw+tad93AkeeTHfNvd3Xfg0WD79hYb566H69p+ew6IJY+3PuzDmLvyjaajCTd7flPl4pzVkhXOs2o4d6cVB/uL3XXDVTOjQlkDNs4dQcKxnkZr8stiCG1XBrMdtkekqFz1fIg4aXB2J0IYgDA0Rrv59p/0jzvoi+R1Kk7kKmnntTdX6TuwCvGEdqYR2gjhXnqDgA9dNeDbAxJ6vx7iLVhtvH+S+P9pa
TftPvazG1PXkf0C2PLPRDIGLZx6g6UynZvTVP3I2M5BjEk6aJybtRmH47I0Jm12IcvW/pw03h/SRCjN/owuAGKtuekufRFD85T26wPO+ZKqLXd9fuv9F25W9liyCHlTwFs14v7RaRyl1HHD1b+e6Tdyi334hmO3pmn7kAbYo0tCWQM2zh1B0pmOzP3TWUcgsdMgxgrk9Y7Ez5L+wYz2g6qPB9cxwomEcSIp+QFUaBX1ibNL2XlzaN0pjuUIN2sTt2BFmW52G1ji07rrEuqCw80vib3Rdc+/+zRT7l/pgbNNppMJP2q7c+2vux6R//06f7S+djy564bQNbOKufe9HwQ37WJQvpUjMMIS/CkCAGyI4IYkjStnGv1oLbG+y9WKqrWfgv6U7WX2bM+KCOIAQAds+fItHLuViFosb7w+mTBjiLYc3WksIty9d/c23+07NkiyELdBwwOEWM3bzGfzwPNJb1P3YltGu8fKudSdwPY1WPP1kyW6un6ReN9bWOamX68B2abjWF93mUDCZtM+oex5Z4IZOBS5e8YTMYWsCeS/kzdlwwsJcUoY3RMEEP271p/3x8YzLhU+4EMghgAENGWSXOWi6QELI5Wp+7AQMQIZNQR2kjGAgW9XawEIqtTd6BlD5IuUneiK882mlzr2zhnHrsvVoFhpPBce7P26xsx/kJQp+5AaX5W+KH19iaGVxHIOFLj/W3l3J2kt6n7kti060hyC0GMlZk6eN+vBTNm2m0X3Gnl3HkbP7fG+0Xl3Cc7s6NTBDEA4Htrk+a5wvMl6a6/tYnzWAQs2tSnHXM5G3V8/acCzghpQ68XK4GIuPcXyM4DOa+cm0k673LDpc2Pz8W4C/vj/rInMjLwlvJSrZgqPLSGuuvpzs4M6UyLQQwpBBBmjfezFq71nbXFrFphIeu1tLqxWnp4EcQAgLQsMB31kEt7Pj7/Gup4pGt16g4MRNefobrj6+ei1sADGW1tGMLgLVJ3AIfrYs5vG0bGa185lmlEGXhG7YlABiSyMo5mu+Fnkj6m7ksCS3VzgPZXLQcxVlo/K2Od1Ua/tffFRNsHN2NlXK9zHUGM5BapOwAgrWdBi7G4H8fGxp84uq4BPpRFg6F8ny+hnjyOZjv7MWADClz8ZdVGbiXdsuE5Cn7GeyKQAYlARisa76/tvIyhLSpMO07T7CKIIbvetToOwtgOkJm9Ny71YwmyUZftt4UgRnoDKYMBYI09A8drX2RaJMTO7mi6HmsM5XUcyvcJAK2zMdilfQ1pDvzWvj6vghpdV98YMgKl+/tZDHBAeak2TSX9lboTEd13+VDrMIixclU5N4/x8LCf09xKTq2n+Wc/KCKIAQBxrO34uxSBC6Arg5j/WsZ46m685F7dl74adXx99N9j6g4gHhuHTfRyRYUheauwXngtOwOOzXVI7WeRxoJgokLK2+Ss8b6unLuRdJW6L5HMurpw5dwqU6jrRZy51c+NdS+sVVC9YoIY2Vim7gCAbljQfiJKRQGqnBt33cbANm89adiLcaPUHUDxhnS/GCSb714qbEplHLbZiaT3kt7beleUzaDAJpSWwspUBDLaMlV4EPZ9F+VdVw8vK8P0uYtrb3CqEDC5jNReMQhiZGUQu0eBoVgLXlxq2IuMJWFXbj8M7XVcaNj3GM7IALCRZV9MFcZjfV+7adOVQmWLe0kzAhpHeUrdgRL9xJsO5tQWj3Ek2+U1hKBQJ99j5CDGytvKuWnkNrNGEAMA2lU596Zyblo5t5D0t8LOtiEvMJaGXbn9wOuYjzpCG+cR2gBQkMq5UeXcXNI/CmMxghiHuVA4HPzWgkLY3yJ1B0r0U+oOICuT1B3oCzvguc/R1fsugqCJghgrH2OUMygBQYwskZEBFKpybmwT5v9J+iiCF8BL2EHfrqGPH0apOwAgD88CGEMpBR7DW0n/VM7NbB0B6NQqkDG0FFtsdsFCbqtmqTvQodazMey9lyqIsXJr5T66lPXOMIIY2WIHKVCYyrlJ5Vwt6S8xYQZ21fU4qe74+rnJefywiNAGgWNg4Cwjdi4CGF37Q9KDnXUKdGYVyMh5gIO45qk70BeN93P1M0j41Hh/2+YFLXjQ6jUPdCKp7jg18vm1sznEmSBG1oa+oxIohgUwFgrB+YvE3QGAXC1iNBJhkxKATFn56IUIYMRyKulPKzdFdgY6sQpksECCldPKuVnqTvRIH89d6CLgMFc+tSlPFDIzRh1d/3mQIIv7L0GM7LHhAMiclZCqFQIY7AIGgJfFGgMTyAAGpnLuvHLuQaGkZy7rDEPyVtKCii/oAhkZ2GTKYT3tsHMk7lP3o2XzNi9mgbPcFs/PFNIiW534bEmzTB7IsO+zVn6vA0wXZ9IAaIeVLLhVKCFFBgYA7KDxPtYaBIEMYEAsC+NvMbdN7UThMPBZ6o6gX1aBjDplJ5CdE3VwBsKAzVJ3oEVPjfetLbxbwOyPtq7XslWZqTYnP9kFMghiFCGb8mMAvlc5N1EoWfA2bU8AoEgxNnwRyAAGYG1jycfUfcF3/rCMZUpNoRWrQMYiZSeQpbcc0tOOnmVltF1Wat7y9dp2IulvW6g6ipVu2vSZSnY2iH1ftUi3zV3yrB0A31ubLH8W91AAONQiQhtkygE9t7Y5j40lebqQ9D51J9APP0lS4/0icT+Qp2sO6GnNLHUHWlK3dSEbbJQysfhcOTc/8vMw1Y+LXXcR0+q/Uzl3LRbgSkEgA8iIPb8exGQZAI61iNEIddqB/qLCADAsP639d192jKM9p+rPAnxSPcrKqFu8VmkHoV8pnJsx3vcfWgBk0/cbPRtj7eAzdkSUY5G6AwCCtUw2DvMGgOPVkdoZR2qn1zhHE7mxKiK12JwHDMZ6IIMdn9jkPTtYWjNL3YEjPbWVPWAL+1dtXCuyU4UDq/bNVprrx8HVUhEDGVYGZSYOPisRz2cgAxbEIJMNKBvZ5nmJNcahZHI7bivnvthcaJS6Mxg2y8T4U4zLgEFZD2QsUnUC2bulxNTxepCV0eZEo/TJxHtJi8q52WufDVv42lR+5DpWWSnrw4PyPVgdL7B7B4CE1oIYQ7GU9Ji6ExikuuPrD+3g56y/XxsLP0Vo6oz57HEscHGmsGj8XtI/lXMPlXMTfraIba2cFICBWQ9k1Kk6geydKP9DmUsxS92BI7QZyBi3eK1UThQCA1sDGpbNtGnhaynputvuhfYr52rrA2VQysRCIpDYAIIYj5LuJH2Q9Kuk/2u831YSEUBZSlhgriO1U/pGqtTGG/7sTOH5+D87T5CfMTpnQbVaZGIAg/Tz6j8a7x8q51L2BXl7Wzk3a7yfpe5IyRrv68q5e5VzyPU6MjI2WwU0ppVztwqZFg+2S2Rb6ahpl9kYtug2FSWk+oCyUkBCdi/vUxDjUeG+8iDpgYwvDNAodQfwgwfFKTl7KTbnHeO1+duVpKvKuSeFn/O88X7RdacwSLfqfxDjeSWPkdiYCEhaC2SYUhdYEccflXMPjffRDyjumZmkv1J34gBtnY8xUj8HHif6NoB/VBhsbPo+7xvv5203bottE/vq4
893qOrUHQCGyjLt6tT9OAJBC5Ro0fH1h7YQlHVpKVNHaudt5dyIxfX92fNwU6ncTU4VNnn9YRv45pJuY5XURb9Vzl2rzM16jwrrKbX9fvXrwWWEbV1lZL9d/fcbhfv+SMN73mEgngcyHkQgAy+bV86NG+/ZJXwgy8p4VHkP4LYGn6OWrpOzba/tUi1no6xlfjBQ6SfutUA6c5UTGF4qTIofJNUELVCqxvtF11UCBraYnf09zDKZl4rT10tFKO/aQ4fOXy7s63o9c729bmFIrGzz+9T9eMVSNhbTt40kiy4asuu+eG37mY0UghtjlbcGBfzgeSCjVv43BqR1om/BDHZVHO5ahZWqaHHQWcLOsC4sJXXxuZmIIEZfLZnsAWlYne9dd5+m8KQwbq8VAheLlJ0BCjNS95kfydkCViluFae81FQEMg4xPvLfr2euPym8Brc8u7CnHD+7q40ktcJ4LKu52/ONLZZdNbavS7GOgAJtCmQArzmTVBPMOFzj/bxybqZhPjhKOHSwC5O2BzY2EJm0eU1kpU7dAWCI7N6a42T5TuG+wOIP+q7rcsdjDeMZO0rdgT3UihPIOLU5bB2hrT5pM6P8VNJHSR8r5+4UztKgdDVeZGdA5pJNsFQIvt6W9t619btb+5pSnhol+i6Q0Xj/pdCSN4iPYMbxrhUGcei/dx0NciZiwNFndeoOAAM1VR4bDYqdKANH6npuMZTs4JK+z1vFy1af6fgMg8GwDMWu5htvFc4uWSqUc5yxtoDnMtpgcq8QeJun7khbbKPlVCGoMVFYX+C4AWTtpw1/VsfuBIq1CmaUNEjOyVxhkQL99q7Dwc60o+siD3XqDgBDY5Pl1PfWR0nvJI0a7ycEMTBAXZfmGHd8/VwUM0ezxevHSM1dMH/dS6vn+21xIkqcY7sug2m7uJH0a+P9uE9BjOca7+eN92NJvyoEbYAsEcjAsb5mZqTuSGlswD5P3Q90qrMghu2YyGHHMLrxlFuNVWAgUk6W7xUmyuc2mWRXKoZq0fH1TwaykF3artp5xLZy2N2dPQvuxyj5JUl3PPewRaoNJqtx2WRI5ega72sLaPymcCYbkJUfAhns+sIBTiT9VTmXegdjiRhE91eXQYxc0mvRnTp1B4CBSjGWedS3nX51gvaB3MQI5I8jtJFMoZvMYq5DXBT6M4ot5jORdSj8wILOsUvfLxXm8oMel9na8LmkT6n7AqzblJEhhcMEgX19rJybp+5ESeywzpvU/dhF5dyopUvVLV0nZ12Wk5LCpIKzMfqNyRwQmT3nYk+WP1gGRh25XSBbkTISY5TLSam478/mRTHLmcwitlWcBKUWGftik9gbTB4lnedaQqpyblI5N1v7Gq++umiv8f5L4/1UodwU2RnIws9b/vxW4eAlYF9XFjXnEPDdzRUvZfcYI7WT6t/390WnQQz7fP3R1fWRjTp1B4ABir3w13XQGyjZo7oNLF5Uzo1s8byPigtkmLnilcS6qJybcB/eaqJ4G6duWDvAFjHvZY/Kfx1rou/vkV/XBSrnVv95r7Bus5BUt7FZpvF+dTbutcpYu0KPbcvIqGN2Ar2zOjfjTeqOlMAeLCUcptTK62m77Pp6yHmMRamur4/0qBEMpBFzskwQA3hZHaGNUhf7X2SLTaWeo3aruPOEa+asP7KfySxik2Rj4Ad2L4sVTCshiLGrC4Vgwx8KJeD/rZx7sAyOg8+HsuyMiaR37XQTOMzGQIbtTHmM2xX0DMGM/cxTd2AHbR6KWLd4rVx0vihVOTdT/LIniI/JHJBGrF3AvxPEAF4Vo7xUX8/3K/b7skXEmOOgE5UxD4stZhnbJee0YotxpHaW6k8QY5szhcDG35Vzi8q560NLh9sY9lf1d3MqMrctI0PigY7jnamfC9ats4dB7g+CUYvXqlu8Vg5iBDHGoqTUUNSpOwAMzTE71PZ033h/HaktoGR1hDZO+3bgs20iKz3TJPY98m3lXOk/s9bY4mbMOQdBDGwTa2w26XkQ47lTSe8l/VM5Vx/yHLSqImPlv4aFHnopkMEDBW04s13keF3un7k2BxK5f6/7iBHEGKlfPzNsd9/jet1AzmJNlovdKQ3EFLFCwCxCGzHF3EnfCStDG7vs7jxiQDt38563h3KMIrRxP/CMoAuF8lN7BzTsXj0WwQxEtjWQQXkptGiSugOFyP0B2lpJI7u/lHAuyGtiBDHeKLw3ip6UYmfz1B0ABmoUoY17m/SVgkU9pFZHaOOiL1kZNmbsS7B0Hrm9E4VgxqDLItsGxFhlFiXpqY2DiNFbMd6LswhttGnU0XVXAY29xn5rwQwgmpcyMqT4aZ3op1IPm4uqhJ0ALU/0Sr+/xDqo9VacizEk2d8HgJ4aRWhjHqGNNhHIQGqxnomlj0lXis/GWLEx9lPkZs9U3n26NYnK2Pbls4cyFRVIs0Br12trewdzLZjBAeCI5rVABgsqQFx3qTvwinFbF7LATewJSluiBDEq5+aKuysKad0NrD4rkJNRhDZKG1ePU3cAw2YLTDFKVpxVzhWdyWBlSIv+HjaYJWjzrY2/ByVhGdt5gjZRgEiZcnWENto0Tt2BbWxt5FPqfmAYXgxk2IJK7gurQJ/kvsjR9kF4k5avF0PMIMZV1+0gK/PUHQDQmWVJgUorLUBGLXIQa2w8s8XcUs3Vk2yMlURZGZJ0NaRgRsIytjclPRfRS4vUHdhT22sxrWq8n4rjCRDBaxkZEgsrQEx16g684qzNSZ7ttCspWEoQA11ZllBeDsDBSjobQ+rfzm6Uax6pnRNJtyWekWDZJH3N4J0lancQwQx7v9dKU8Z2lqBNYF0xYzNbgylhfWCSugPov1cDGYWXfwGKYodg576w30VWRoyyAcciiIEuzVN3AECnilkcLWiyjAGwTS+x5qJnKqxmv2VPfUzdj67Y2DvVDt+ryrkig1u7SBzEuLF5L5BSSZ/tWeoO7MLOy/iQuh/ot10yMiQWWICY5qk78IpJmxezlOKs0yRFEAPdK2rhBMDeUiwUHWqeugPAM/OIbRWzE9+CGHXqfkSQMkPsraS68LJjP7D3zoPSPZtmidoF1o1Sd2AX9nktaY3gWmyGR4cIZKAokQ59SqqALKgze5i2xnbbvWvzmi0iiIGu3bErDei/EhbCel6iBuWKHezPPpixFsTo1bkYm2RQivZM0kPlXO4br3ZSOTdReO+kOgfpE+NeZKLVNY0uWObUPHU/9mEbVWep+4H+2imQYQ+am267AuzkcgjBDOVfW7D1nVEWLMjtPkMQAzGQjQGkF+PA0XGENg7W9xI1KJctisQeI2YbzBhSEGPNVGlL0Z5I+rPkUlOVc2/sPf1Z6d47S7HAid0sIrQxjtDGsa5VVlavpK9rOzlvzkXBds3IkAqLAqK3xupgkb9ybmS7U7JgO48+pe7HC6662FnaeD9RHjUVl5J+I4iBCJ7s8w4grRgHPk4itHGQAZWoQblmCdq8qpx7yCmbyrICag0riLHaWDlL3A0plJpa5DRv3IW9bxZKP+eYWWASeFGkrJ2TnDOterBOMEvdAfTT
zoEMW2i5764r6LMWJwBnCpOKtnfCjCR9rpzLZmDaeD9V3p+5WRcXbbyfKZSZSrXr6knS2Ep8daoHgxMcb5a6AwAkxdn5d5FjVulAd3ejMAkrBKzKCk0StP2V7aa/lvSnBvpZbby/Vh5zoxN9mzeOU3fmJZVz48q5Wnm8bx7tNQR2FWNHf8ozeLbqyTrBrdJm0qGnft7z789F3VwcZqQjFwmeDRQn6qYcy6nCwHSmsGNk3kEb+1jtusoxnfCqcm7WxW6Jxvt55dyDwj0n5vd+J2kSY6dQTwYnOM4yg3sMyjHKfcGkAA8v3N8XkfpwrYxqMtvi7OfU/QB2NFOasdNq4XqiMD+oYza+alfpzjTIyUQhgy71orwUXo+/KueelMe88SvbZZ7bmUeT1B1AcR7U/X3vonJumkuQzTbs3iqvz+5BGu+/2JrH+9R9Qb/sFciwxcWZGERhf6MWrrE+8Z+q27ryq4DGtbUzT3Eomd38x8o3mDFTR4PSxvsHSed2z5mq2wnLk6RpjCwMiSAGvspiwIxiXIn7xrF+1ZbySY33deVcjD6cVc7NrZRiMmuHR75N2Q9gH433i8q5G6W7F14oLFzfS7ruetxIAONH9h6YKq8A7Pq8ca4wb4xRrvA7VgFhMl/HQwAAFvBJREFUYl+5vWc+pPiZoHi14oxTZpVzder3qAUg58ojUNuWuQhkoGX7nJGxMmu7ExiEUQvXWA9knEaqZ3gi6Q9J/1TOzVPshrXdo2PldxC2FLIyxl02YKWmRgpnhrSdmvgk6ffG+xFBDES2FIEMIDexSpYkPUTYFkcXIoiBMs2UvlTFhcLBz4vKues2x8JWCui6cu6LwmJ9bgvSyVnmQ47zohOFBbu/194bl10eDr72fnmQ9I/CvDW398yjzeeAfdWR2jmRVFupzejsvNZb5VECrlUWHHpM3Q/0y76lpcjKwKHGHVxjqpB2F8uVwuLDk75laUQ5rMzamdhu0dwWwTsvk2Hf/9TuPatU6WMyVO4UXr+Y7x+CGFh3zWGHQHZipvJf2TN9GutewO5u9IHtyL9WWLBN7VRh4fq9fZ7vFRbeFquvlzK6LQDyRmEcPbZfe7WI1aGpws8rx4x1ae29IUmVc48KZXIWssXZfUqUrQXLxgobvHL+3tctFeZuwN4a7x8q55aKc19cBTMmETc4jpSuZGJMtcq4X6EQewcyzEx5pXMifxeVc28OnazbTf75xPuicu48QQrgqaSPCimItwo1URcxGm68zzGYcWZnZcy6bsjeP3NJc9vdNNa3yd8bbX5A3kv6ojB5qGPXNV4hiIE1ZGMAebpVeL7HciVpbM/QeRcN2O7CiX2xQIpeaLyfWWZ2bgsjF3oWDI1Usm5wrPzuRGGBrIR725m+vV//kH54bzwqzFdWts1rSjNJUZ4ZvXKreHPoE4VsuzuFjSaLthuwNYxLhXFZ8edg7GguykuhRQcFMsjKwIFWNf8OMd7y51OlOzjsRN+yNO4V6fC/TIMZf1TO3cYMKllQ41Zxs3IOQhADz5CNAWTIdno/Ku7i0aq2+kwhwHl7zMTZNn6M174Yq6OvJpL+Tt0JpGO7tS8l/ZW6Ly3oQ9DiuQ+xs9/RS3PFn0e/lfTWAhpHV3GwjKrV5svBlfWMnFmDATg0I0MiKwP7m+jwQMa2lNSryrnrFhbQR0f++9Xhf08KAY35kdd7UabBjFvLkGGBdg1BDDxDNgaQt2ulGd+usj0/2ljiYe1r23P1XGHX7kjfSo0wScQg2MLIB+VRYgqJNN7XlXPvxLpEbm44FwNtsM/4k9JszFgFNKRQ5WFVHm7b2tOqVKDs15H6GaQ8xIOGk4GCjh0cyLCsjGPr1GNYLirnxvtmLVj63UuR62sdfwbH6Mh/v7K+s/Ky4wyF3GrDnipkR4wT9yMbBDGwAdkYQMZsfHuttAGBU/sa3K49YB9WYmosFkcGze7bbxS3NCC2e1SYpwJtmSl9sPKH0oHYSy1+fmjJT0f+ex5Q2NchO5Ffe59dWFrxMUZH/vvnTnV49slOVgeAd9nGAS5s8X7wCGJggyd2pwFFIGsKKMelQrYjBqzx/lrSTep+QI+SxmzaQctuxX2+dLHPtUWPHRXIsJ319+10BQNxZjsdd2K7a3YJmM2tLvShxkf8223OLDOjM5bxcddlGwe4smytwSKIgS1mqTsAYCfXYsIMFMEWTMfiMzt4jfcTEcxIiSAGOmHvqVnqfuAo3BfQmmMzMqT8doQjf+/3+Lsz7Vbe4UQHHvpsAZCuai7+UTl3/vpf652PlXOT1J1IgSAGtrjv+uwcAO1gwgyUxTb2DHoTDQILZnxK3I0hIoiBTlnW1VPqfuAw+5aXB15ydCCj8X4hBgvYk9Wzfe3vTLRf0OPswLJGkwP+zT5WdVu7kmua3uehBTMIYuAFLLAABbEJ82PqfgDYjW0WeJe6H0iv8X4q3gsxEcRALJPUHQCQXhsZGVLYtUY6L/bx4kDHShMdcqDTVeXczoGDPUpXHeNMHdbbtpr7H7q6/pE+71NKrFSVc28q5x5EEAObfbLdogDKMkndAQC7I5ix1eDm6WvvhcF975HdiSAGIrFd/ZSPKxcZNWhFK4GMTA8dRr6eNi3qVc6NKucmlXO1pI9HXP9KUr3jmRlz7Va66lhXXZ6XYcGM35TnYP39PsGl0tj7rFYIWAHPLUWJGqBINlb5PXU/AOyOYMYPPmigtcntvTAWi2dd+dR4f0kQA5FNxWe6VIvUHUA/tJWRocb7W3HwN3bzww59C178o5CFcdFCG2eSHirnLrf9BSsD9LaFtnb1R5elluwzOFKen8N9gkvFsBJpDyKIge2mTPCAclmJqbvU/cgAZWRRDHbjf3Vjm526OgswexaQPlee86NSLSW9sxJeQFQ2r7oU93eJ7BQMVGuBDDMRNxS87MYWBWI4UVjY/0Hl3K3SlAHq9NyIxvsvjfdj5Vlq6tXgUkksw+YvxcnoQZk44Bvoh4mGfV7Gk8gsQ2HWduMPdW56YwdfD17m86PSrM7DmKfuCIbLApRDD6T9LjIcMFCtBjLs4O9Zm9dEMjcKO5l+Xfv6TWEAeOiOlk8vDKgXB17zNbdb/vy8o/Z28fnAQ8m1a0aD7b7KMa3+RNKflXO3pZaaqpw7t/Mw/kjdF2RtKUouAr1gu//GGuaC6FISpUNQJFvsGml4u/G/BjEse7hLxdwbbH70i4YdmD7Gp8b7c859Qw4GXkYw5ubgNi1SdwD90HZGxioFn8FBmZ4Uyif82ng/abyfN97Xa1+3jfezxvtx4/1/FAaCv9u/2faaPykERf77SvrptlqHTzp88nFnwbVNtv15LFeVcw/bJhd2Xsj5sz87V8homOzSgD3cc524vZW0sEPdi2AHel9L+luUksLrZi/cfwAUZsDBjCmLVijZAHfjx87EKOr+0Hj/0Hh/ruG8H9rwpLA+UMy8DcNg6x1DK6/0qHKzURapO4B++Lmj614qDGoouZK3paSJna2wN5vYPmjDmRcHXOuLLdRf6ls5qIdV3ywT4Va7LyA/6uXd0LXaOYvjGGeS/qq
cu1f4GdZrOx5vJZ1Vzi31bYKw6u+qPNWs8b6O2N+2nUj6aMGMWa4pypY5MrUv7mnYxX2hu2QAvKDxfrUBYa5hBLTf5fpsBvbVeD+z0rJz9ffzu+kzmzILPVv2fpgrvB9SzwlztZR0bZksQJYa7ydWLeFj6r5EsCrtVkwmHNCFTgIZjfcLqx8/hJtJyS5zWgi3G/J8y/9b2NkKuwTI7vV6GYRr5bMwfWFfqpx7UohUryZYJ9o8uL5QCII8KfxM1ndDjezrXHl8f685VQjOzJRRQIMABg5ESSmgx9aCGbX6uxi6KidVp+4I0KbVwc+rTTTqz/juSeEzuyk7oshSrjFY5uzY5pjXGvCh6BvciOxiFKLx/rpy7oukz6n70qE7hU3IBDEweK2Xllqx3ai5lrVBSDuuU3diHzaQemmX86PCTqRXo9RrJSI2lbNK6VT77Qo6VSjT9Mfa15Vdo7TJ2Sqgsaicm+16Hkjb7AyMuUJA6Q+V93NEWhMmfUC/Wamac0mfUvelA6vdfnXqjgBdsXnqSKG8UOnl4u4kvXRuwShiX4pk5ZNHCvX2c5sbxrYqCc14FkWxzZC/qJ+f4Q+N95xXBpiuSkutXCosBrIQmJdHS8F7o7Bz+NL+/LbLcijW3iq9+eHAG/FcYWfRucLhcg8K77F638HW6gBA24VzrhDYeEkpGQ6lO5UFZSrnHhVe89suB9NW1myi8FlgNxYOdXNoqT4A5Wm8n1bO1QrPqT6MDz5QQgRDYfOQmZ1/VmIG7q4lgkcd96OoMzJeYguh81UJXw1rTkAGBopnWbPnCuOyt4m704Ynhft8nbojQE46DWTYuQeXkv7qsh3sbbZW43l9gLZosxELXFza11jPJgeVc1LI2rmVNN8lsGGDq1YPN7IJwM6Lj2uL3hOVNeEp0ZlCibqPVkarlpXSOvSBvhZQG6/9yuuIYz2p3IPXAByo8f7WMgivFTIiS3QvDvXGQD0LaFwqPMtzLxv3QeHsgl02hXV9RkbvdgivBTTGCu+HPiyIbvKksB4wJ4CBvrD74uWW9a5ScD4N8IKuMzLUeF9Xzn1Q2GGNPGyrAXpVOfdwbFaGTehnCpOB1xaIV+dDfKycu1GYSGc9ILaJ/tTKD9ViETyWU4VFoivpayBs/TD0esu/W88E4jA/dGFVTz7rexeAbthnf2ILodcq51lzr7ADt97j77dp0fL1pG7L2vYx0NP191TMc3HtrL55ppm6S4WNVzvvmrc5WdfzlGJe433ZvbG2n+Olwnsi9yDXLu4Ughd9ySL+Iu79++r9vd8+vyM7E2mqfO7lL1kqjCN3DVQv1P57v8vXbqFuP6td9Z37y346v7/8599//+24jcBS70uZ2A3dsvH+68FwNphfvHYztZ3u++xIvFeYMCz23V2/lu0xVlikPlOCyLWlHvf5UCkAr3uXyyH1ANLbc0NHbKvF0DmlCoDX2TxoNedIMZd9VJhf3e67YcIqI/zZSa9M4/1/urx+bgoNaiwVNnzd6oD3EdAHtm6Ta8bdqpz2TlVKgKGLGch4I87LKMZqUGpZB1cKtd8nm/6uvbb71pb9/ZDMDxs8TvVjWae9sjnWBqGr80EWCgO8V8/aWAuijOyPyDYChmvrvRHAsL1WYjOiVWnGWixiAUexciVjhY1UI7W/KPaksJvxVgecAbjOssTet9SvTZ7skOxBeraxbqy8dnzf69vctk7bFSAftg40UfjspgxqPOpbcLGPu/KBzkQLZEhfd7T8Ha1BHOq+8X68FsT4+udaq6G89hDY93C83xvvr20isDpo+4te2CViA8WZNg/Gd15I3DHocmf9mG/49yPrZ46RfABxPTbed117GkBPPFsAPVd3i173+rZB44EJMtAtm+OuSpmustrHO/zTLwpBi9WvD20GGivnFup2cf2+8X7c4fWLsuEcvpHizBlX9/yjzhEEhsbWdtY/s11l3a3KYdf6NjZjUwlwoKiBDIlSPIV4UggadPU6fdD29OylQrBkLu0UeNh5IdHqI862XGeTVfmFhf1+pDzLRACIbylpxCAUwDEsuLF+ltNI3zI+X7Owr9Ui6IIDWwFIXxfo/um4mU+N99OO2yievRbrX9J+93rpW8Br/b9ZDAVatuXzOt7jErX9yucU6Ej0QIYUJc0V/fCk13cR/frarhPbJVXS4ZsA8raUNGaXMwAAyJFt4PrYcTOcEQYAAKL6KUWjtnPjJkXbKMprQYybHYIYM4VyZgQxALRlShADAABkLEamBGMhAAAQ1c8J254qpNFz1gAOsVQoE7WRlWq4Fu8vAO36nd2HAAAgVzYP6vzgaTZ1AACA2JJkZEiS1YkbS3pM1QcUbbqpDnTl3LkdUv6XCGIAaNdN4/116k4AAIDjVM69sbP4+mgWoY37CG0AAAB8J2VGhhrvv9jh37U4QBl7WN8RbbuOxgoHcRO8ANCFm8b7SepOAACAw9ghrpf2dSHpnaR5wi61zuZFMUrq1hHaAAAA+E7SQIYUUlJtwFWLYAZ2VDlXK5Qm4z0DoGv3BDEAACiPzTNXwYvn5ZbG6lkgQ3GyMSQCGQAAIIH//Pvvv6n7ICmUBBLBDABAXh4lja0cIgAAyJiVi7rUt2ztl+aWy8b73pSXqpybSvoYoale/dwAAEA5sglkSAQzAABZIYgBAEDmNpSM2sdvjfe3rXcqMvsZPCjOPPqu8f4yQjsAAADfSV5aah1lpgAAmSCIAQBApl4pGbWPqaSiAxmWhXKrePPnon9eAACgXFllZKyQmQEASIggBgAAGVkrGbUqG9XmPLHorIzKubmkq0jNLSWNGCMBAIAUsgxkSAQzAABJEMQAACATdu7DRNJZh80UuzgfOYghSTeN95OI7QEAAHz1U+oObNN4/6Cw22aZuCsAgGG4E0EMAAByMlO3QQwpbJyrLeujGAmCGJI0j9weAADAV9lmZKyQmQEAiIAdhgAAZKZyrtb+B3gfqoiszLUzMWL9XFaeGu9HkdsEAAD4KtuMjBXLzBgpDCwBAGgbQQwAAPIU8+yKM4XMjFHENvdih5wvFD+IIYXsGAAAgGSyD2RIku2KGUu6T9wVAEC/vCOIAQBAtmIfwn0m6aFybhK53RdVzr2xUlJ/KU2lgqfG+3mCdgEAAL7KvrTUc4lqgQIA+mUpacqkHACAvFXOPaj7czI2uZc0a7yvE7Qt6WsZqal9pSy1/I4xEwAASK24QIYkVc5NJX1M3Q8AQJGWCjWwH1J3BAAAvCyDud+jpGtJt7HOz7ASUhPlsYGPszEAAEAWigxkSFLl3KWkuTgEHACwuyIO8gQAAIFlJfwvdT/MvaRaUt1mpoZ9j2P7upR02ta1W0A2BgAAyEKxgQxJqpw7VwhmpEg1BgCUhUO9AQAoUMblhR8VDt9eZXnWz/9C431tGRbrziW9sV/PlVfgYt194/04dScAAACkwgMZ0tfdK9fKc2ALAEiP8zAAAChY5dxI0j+p+zFAv1CKEwAA5KL4QMZK5dxEIaBBqSkAwMqjpAmTcAAAypZxVkZffWi8n6XuBAAAwMpPqTvQFttpO1ZYtA
IA4JM41BsAgL6YKWRZonv3BDEAAEBuehPIkCRbrBorLF4BAIZpKem3xvsph3oDANAPjfcLhWAGurWUNEndCQAAgOd6U1rqOTtQba58D04DALTvTqGUFAEMAAB6qHKulnSRuh899mvjfZ26EwAAAM/1KiNjnQ2+ziXdJO4KAKB7S0m/N95fEsQAAKDXJqLEVFfeEcQAAAC56m1GxjqyMwCg18jCAABgQCrnLiX9mbofPXPTeD9J3QkAAIBtBhHIkKTKuTcKNVXfJ+4KAKAdT5Kmjfe3qTsCAADiqpybSvqYuh89QRADAABkbzCBjJXKuXNJ16KuKgCU7IOka7IwAAAYrsq5uaSr1P0oHEEMAABQhMEFMlYq5yYKGRqUmwKActwrlJFapO4IAABIj2DGUT413k9TdwIAAGAXgw1kSF/LTU3t6yRxdwAA2z0pBDDq1B0BAAB5IZhxkHeN9/PUnQAAANjVoAMZKxbQuBaDXwDIzZOkGRNtAADwEsu4/5y6HwVYSho33j+k7ggAAMA+CGSsqZwbKZSbIqABAGktFc7AmKXuCAAAKEPl3FjSrci23+Ze0iVnjAEAgBIRyNiAgAYAJLNUyJDjIG8AALA3y7afS3qbuCs5WSpkuF6n7ggAAMChCGS8gIAGAERDCSkAANCayrlLhYDG0LMz7iRNG+8XqTsCAABwDAIZO+BQcADozL2kOQEMAADQNpvHzSS9T9yVFO4VNonUqTsCAADQBgIZe7JD5KaSzhJ3BQBKdqMQwKhTdwQAAPTbwDLtHxVKdM5TdwQAAKBNBDIOVDl3rhDQuBRZGgCwiyeF8y/mnH/x/+3dwW3bQBAF0F+CGwjsEtyAEZWgElyaS6A7YDqQO5A6sDrwYXfjvQRB4Mgkl+8BCxE67ZGDPzMEAL5bF2iMWMO9pgQY89IXAQC4BUHGF9Vx5WOS5yQ/l70NwOpck0wphfVp6csAAHQ13NYn7S8p3wF58Q0MAGB0goz/qHb4tFBjyy/EAF/1mlJUT0tfBADgTzZYw11SmkReNIkAAHsiyLiR7oX4GJMawPja5MWUZLY6CgDYmlrDHbpzv9xtfrskmdsxeQEA7JUg4xt0o8uHjLmPFdint5SierKPGQAYTa3jHlPquId6btWkdk1ySnKuZ05y0hwCAFAIMhZQPxR+6I5gA9iCviNwUlgDAHv14+npUB9b2PGv5vr7bkUUAMDfCTJWoAYbLdx4zDZ2swJja12Bc/sVXAAAAACwBEHGStUOn4d62vMadrQC4/mV5D0lsDilrDE4L3ojAAAAAKgEGRtTpzfa+PJdPsOO1P+sqQJ6bykhRfK5wuBcj73LAAAAAKyeIGNgXegB7MfZNAUAAAAAI/kAFephvvPYRuIAAAAASUVORK5CYII=)
```
from langchain_core.messages import HumanMessagechat = ChatAnthropic(model="claude-3-opus-20240229")messages = [ HumanMessage( content=[ { "type": "image_url", "image_url": { # langchain logo "url": f"data:image/png;base64,{img_base64}", # noqa: E501 }, }, {"type": "text", "text": "What is this logo for?"}, ] )]chat.invoke(messages)
```
```
AIMessage(content='This logo is for LangChain, which appears to be some kind of software or technology platform based on the name and minimalist design style of the logo featuring a silhouette of a bird (likely an eagle or hawk) and the company name in a simple, modern font.')
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:23.299Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/anthropic/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/anthropic/",
"description": "This notebook covers how to get started with Anthropic chat models.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4082",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"anthropic\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:23 GMT",
"etag": "W/\"c747b325cc58799d1a03876507ff5a3a\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::hvn5p-1713753503214-e7f1d9dc70c6"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/anthropic/",
"property": "og:url"
},
{
"content": "ChatAnthropic | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook covers how to get started with Anthropic chat models.",
"property": "og:description"
}
],
"title": "ChatAnthropic | 🦜️🔗 LangChain"
} | ChatAnthropic
This notebook covers how to get started with Anthropic chat models.
Setup
For setup instructions, please see the Installation and Environment Setup sections of the Anthropic Platform page.
%pip install -qU langchain-anthropic
Environment Setup
We’ll need to get an Anthropic API key and set the ANTHROPIC_API_KEY environment variable:
import os
from getpass import getpass
os.environ["ANTHROPIC_API_KEY"] = getpass()
The code provided assumes that your ANTHROPIC_API_KEY is set in your environment variables. If you would like to manually specify your API key and also choose a different model, you can use the following code:
chat = ChatAnthropic(temperature=0, api_key="YOUR_API_KEY", model_name="claude-3-opus-20240229")
In these demos, we will use the Claude 3 Opus model, and you can also use the launch version of the Sonnet model with claude-3-sonnet-20240229.
You can check the model comparison doc here.
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
chat = ChatAnthropic(temperature=0, model_name="claude-3-opus-20240229")
system = (
"You are a helpful assistant that translates {input_language} to {output_language}."
)
human = "{text}"
prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])
chain = prompt | chat
chain.invoke(
{
"input_language": "English",
"output_language": "Korean",
"text": "I love Python",
}
)
AIMessage(content='저는 파이썬을 사랑합니다.\n\nTranslation:\nI love Python.')
ChatAnthropic also supports async and streaming functionality:
chat = ChatAnthropic(temperature=0, model_name="claude-3-opus-20240229")
prompt = ChatPromptTemplate.from_messages([("human", "Tell me a joke about {topic}")])
chain = prompt | chat
await chain.ainvoke({"topic": "bear"})
AIMessage(content='Sure, here\'s a joke about a bear:\n\nA bear walks into a bar and says to the bartender, "I\'ll have a pint of beer and a.......... packet of peanuts."\n\nThe bartender asks, "Why the big pause?"\n\nThe bear replies, "I don\'t know, I\'ve always had them!"')
chat = ChatAnthropic(temperature=0.3, model_name="claude-3-opus-20240229")
prompt = ChatPromptTemplate.from_messages(
[("human", "Give me a list of famous tourist attractions in Japan")]
)
chain = prompt | chat
for chunk in chain.stream({}):
print(chunk.content, end="", flush=True)
Here is a list of famous tourist attractions in Japan:
1. Tokyo Skytree (Tokyo)
2. Senso-ji Temple (Tokyo)
3. Meiji Shrine (Tokyo)
4. Tokyo DisneySea (Urayasu, Chiba)
5. Fushimi Inari Taisha (Kyoto)
6. Kinkaku-ji (Golden Pavilion) (Kyoto)
7. Kiyomizu-dera (Kyoto)
8. Nijo Castle (Kyoto)
9. Osaka Castle (Osaka)
10. Dotonbori (Osaka)
11. Hiroshima Peace Memorial Park (Hiroshima)
12. Itsukushima Shrine (Miyajima Island, Hiroshima)
13. Himeji Castle (Himeji)
14. Todai-ji Temple (Nara)
15. Nara Park (Nara)
16. Mount Fuji (Shizuoka and Yamanashi Prefectures)
17.
With Anthropic’s tool-calling, or tool-use, API, you can define tools for the model to invoke. This is extremely useful for building tool-using chains and agents, as well as for getting structured outputs from a model.
note
Anthropic’s tool-calling functionality is still in beta.
bind_tools()
With ChatAnthropic.bind_tools, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to an Anthropic tool schemas, which looks like:
{
"name": "...",
"description": "...",
"input_schema": {...} # JSONSchema
}
and passed in every model invocation.
from langchain_core.pydantic_v1 import BaseModel, Field
llm = ChatAnthropic(model="claude-3-opus-20240229", temperature=0)
class GetWeather(BaseModel):
"""Get the current weather in a given location"""
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
llm_with_tools = llm.bind_tools([GetWeather])
/Users/bagatur/langchain/libs/core/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: The function `bind_tools` is in beta. It is actively being worked on, so the API may change.
warn_beta(
ai_msg = llm_with_tools.invoke(
"what is the weather like in San Francisco",
)
ai_msg
AIMessage(content=[{'text': '<thinking>\nThe user is asking about the current weather in a specific location, San Francisco. The relevant tool to answer this is the GetWeather function.\n\nLooking at the parameters for GetWeather:\n- location (required): The user directly provided the location in the query - "San Francisco"\n\nSince the required "location" parameter is present, we can proceed with calling the GetWeather function.\n</thinking>', 'type': 'text'}, {'id': 'toolu_01StzxdWQSZhAMbR1CCchQV9', 'input': {'location': 'San Francisco, CA'}, 'name': 'GetWeather', 'type': 'tool_use'}], response_metadata={'id': 'msg_01HepCTzqXJed5iNuLgV1VCZ', 'model': 'claude-3-opus-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 487, 'output_tokens': 143}}, id='run-1a1b3289-ba2c-47ae-8be1-8929d7cc547e-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'San Francisco, CA'}, 'id': 'toolu_01StzxdWQSZhAMbR1CCchQV9'}])
Notice that the output message content is a list that contains a text block and then a tool_use block:
[{'text': '<thinking>\nThe user is asking about the current weather in a specific location, San Francisco. The relevant tool to answer this is the GetWeather function.\n\nLooking at the parameters for GetWeather:\n- location (required): The user directly provided the location in the query - "San Francisco"\n\nSince the required "location" parameter is present, we can proceed with calling the GetWeather function.\n</thinking>',
'type': 'text'},
{'id': 'toolu_01StzxdWQSZhAMbR1CCchQV9',
'input': {'location': 'San Francisco, CA'},
'name': 'GetWeather',
'type': 'tool_use'}]
Crucially, the tool calls are also extracted into the tool_calls where they are in a standardized, model-agnostic format:
[{'name': 'GetWeather',
'args': {'location': 'San Francisco, CA'},
'id': 'toolu_01StzxdWQSZhAMbR1CCchQV9'}]
tip
ChatAnthropic model outputs are always a single AI message that can have either a single string or a list of content blocks. The content blocks can be text blocks or tool-duse blocks. There can be multiple of each and they can be interspersed.
Parsing tool calls
The langchain_anthropic.output_parsers.ToolsOutputParser makes it easy to parse the tool calls from an Anthropic AI message into Pydantic objects if we’d like:
from langchain_anthropic.output_parsers import ToolsOutputParser
parser = ToolsOutputParser(pydantic_schemas=[GetWeather])
chain = llm_with_tools | parser
chain.invoke("What is the weather like in nyc, la, sf and cleveland")
[GetWeather(location='New York City, NY'),
GetWeather(location='Los Angeles, CA'),
GetWeather(location='San Francisco, CA'),
GetWeather(location='Cleveland, OH')]
with_structured_output()
The BaseChatModel.with_structured_output interface makes it easy to get structured output from chat models. You can use ChatAnthropic.with_structured_output, which uses tool-calling under the hood), to get the model to more reliably return an output in a specific format:
structured_llm = llm.with_structured_output(GetWeather)
structured_llm.invoke(
"what is the weather like in San Francisco",
)
GetWeather(location='San Francisco, CA')
The main difference between using
llm.with_structured_output(GetWeather)
vs
llm.bind_tools([GetWeather]) | ToolsOutputParser(pydantic_schemas=[GetWeather])
is that it will return only the first GetWeather call, whereas the second approach will return a list.
Passing tool results to model
We can use ToolMessages with the appropriate tool_call_ids to pass tool results back to the model:
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
messages = [
HumanMessage("What is the weather like in San Francisco"),
AIMessage(
content=[
{
"text": '<thinking>\nBased on the user\'s question, the relevant function to call is GetWeather, which requires the "location" parameter.\n\nThe user has directly specified the location as "San Francisco". Since San Francisco is a well known city, I can reasonably infer they mean San Francisco, CA without needing the state specified.\n\nAll the required parameters are provided, so I can proceed with the API call.\n</thinking>',
"type": "text",
},
{
"type": "tool_use",
"id": "toolu_01SCgExKzQ7eqSkMHfygvYuu",
"name": "GetWeather",
"input": {"location": "San Francisco, CA"},
"text": None,
},
],
),
ToolMessage(
"Rain. High 54F. Winds SW at 15 to 25 mph. Chance of rain 100%.",
tool_call_id="toolu_01SCgExKzQ7eqSkMHfygvYuu",
),
]
llm_with_tools.invoke(messages)
AIMessage(content='Based on calling the GetWeather function, the weather in San Francisco, CA is:\nRain with a high temperature of 54°F and winds from the southwest at 15-25 mph. There is a 100% chance of rain.', response_metadata={'id': 'msg_01J7nWVRPPTgae4eDpf9yR3M', 'model': 'claude-3-opus-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 670, 'output_tokens': 56}}, id='run-44fcd34f-9c24-464f-94dd-63bd0d22870d-0')
Streaming
danger
Anthropic does not currently support streaming tool calls. Attempting to stream will yield a single final message.
list(llm_with_tools.stream("What's the weather in san francisco"))
/Users/bagatur/langchain/libs/partners/anthropic/langchain_anthropic/chat_models.py:328: UserWarning: stream: Tool use is not yet supported in streaming mode.
warnings.warn("stream: Tool use is not yet supported in streaming mode.")
[AIMessage(content=[{'text': '<thinking>\nThe user is asking for the current weather in a specific location, San Francisco. The GetWeather function is the relevant tool to answer this request, as it returns the current weather for a given location.\n\nThe GetWeather function has one required parameter:\nlocation: The city and state, e.g. San Francisco, CA\n\nThe user provided the city San Francisco in their request. They did not specify the state, but it can be reasonably inferred that they are referring to San Francisco, California since that is the most well known city with that name.\n\nSince the required location parameter has been provided by the user, we can proceed with calling the GetWeather function.\n</thinking>', 'type': 'text'}, {'text': None, 'type': 'tool_use', 'id': 'toolu_01V9ZripoQzuY8HubspJy6fP', 'name': 'GetWeather', 'input': {'location': 'San Francisco, CA'}}], id='run-b825206b-5b6b-48bc-ad8d-802dee310c7f')]
Multimodal
Anthropic’s Claude-3 models are compatible with both image and text inputs. You can use this as follows:
# open ../../../static/img/brand/wordmark.png as base64 str
import base64
from pathlib import Path
from IPython.display import HTML
img_path = Path("../../../static/img/brand/wordmark.png")
img_base64 = base64.b64encode(img_path.read_bytes()).decode("utf-8")
# display b64 image in notebook
HTML(f'<img src="data:image/png;base64,{img_base64}">')
from langchain_core.messages import HumanMessage
chat = ChatAnthropic(model="claude-3-opus-20240229")
messages = [
HumanMessage(
content=[
{
"type": "image_url",
"image_url": {
# langchain logo
"url": f"data:image/png;base64,{img_base64}", # noqa: E501
},
},
{"type": "text", "text": "What is this logo for?"},
]
)
]
chat.invoke(messages)
AIMessage(content='This logo is for LangChain, which appears to be some kind of software or technology platform based on the name and minimalist design style of the logo featuring a silhouette of a bird (likely an eagle or hawk) and the company name in a simple, modern font.') |
https://python.langchain.com/docs/integrations/chat/azure_chat_openai/ | ## AzureChatOpenAI
> [Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview) provides REST API access to OpenAI’s powerful language models including the GPT-4, GPT-3.5-Turbo, and Embeddings model series. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or a web-based interface in the Azure OpenAI Studio.
This notebook goes over how to connect to an Azure-hosted OpenAI endpoint. First, we need to install the `langchain-openai` package.
%pip install -qU langchain-openai
Next, let’s set some environment variables to help us connect to the Azure OpenAI service. You can find these values in the Azure portal.
```
import osos.environ["AZURE_OPENAI_API_KEY"] = "..."os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-endpoint>.openai.azure.com/"os.environ["AZURE_OPENAI_API_VERSION"] = "2023-06-01-preview"os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"] = "chat"
```
Next, let’s construct our model and chat with it:
```
from langchain_core.messages import HumanMessagefrom langchain_openai import AzureChatOpenAImodel = AzureChatOpenAI( openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"], azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"],)
```
```
message = HumanMessage( content="Translate this sentence from English to French. I love programming.")model.invoke([message])
```
```
AIMessage(content="J'adore programmer.", response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 19, 'total_tokens': 25}, 'model_name': 'gpt-35-turbo', 'system_fingerprint': None, 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}, id='run-25ed88db-38f2-4b0c-a943-a03f217711a9-0')
```
## Model Version[](#model-version "Direct link to Model Version")
Azure OpenAI responses contain `model` property, which is name of the model used to generate the response. However unlike native OpenAI responses, it does not contain the version of the model, which is set on the deployment in Azure. This makes it tricky to know which version of the model was used to generate the response, which as result can lead to e.g. wrong total cost calculation with `OpenAICallbackHandler`.
To solve this problem, you can pass `model_version` parameter to `AzureChatOpenAI` class, which will be added to the model name in the llm output. This way you can easily distinguish between different versions of the model.
```
from langchain.callbacks import get_openai_callback
```
```
model = AzureChatOpenAI( openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"], azure_deployment=os.environ[ "AZURE_OPENAI_CHAT_DEPLOYMENT_NAME" ], # in Azure, this deployment has version 0613 - input and output tokens are counted separately)with get_openai_callback() as cb: model.invoke([message]) print( f"Total Cost (USD): ${format(cb.total_cost, '.6f')}" ) # without specifying the model version, flat-rate 0.002 USD per 1k input and output tokens is used
```
```
Total Cost (USD): $0.000041
```
We can provide the model version to `AzureChatOpenAI` constructor. It will get appended to the model name returned by Azure OpenAI and cost will be counted correctly.
```
model0301 = AzureChatOpenAI( openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"], azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"], model_version="0301",)with get_openai_callback() as cb: model0301.invoke([message]) print(f"Total Cost (USD): ${format(cb.total_cost, '.6f')}")
```
```
Total Cost (USD): $0.000044
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:24.137Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/azure_chat_openai/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/azure_chat_openai/",
"description": "[Azure OpenAI",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3418",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"azure_chat_openai\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:23 GMT",
"etag": "W/\"30da1fd88106af65be07fa3223ff6129\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::h6m2t-1713753503210-540d20ce266d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/azure_chat_openai/",
"property": "og:url"
},
{
"content": "AzureChatOpenAI | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "[Azure OpenAI",
"property": "og:description"
}
],
"title": "AzureChatOpenAI | 🦜️🔗 LangChain"
} | AzureChatOpenAI
Azure OpenAI Service provides REST API access to OpenAI’s powerful language models including the GPT-4, GPT-3.5-Turbo, and Embeddings model series. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or a web-based interface in the Azure OpenAI Studio.
This notebook goes over how to connect to an Azure-hosted OpenAI endpoint. First, we need to install the langchain-openai package.
%pip install -qU langchain-openai
Next, let’s set some environment variables to help us connect to the Azure OpenAI service. You can find these values in the Azure portal.
import os
os.environ["AZURE_OPENAI_API_KEY"] = "..."
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-endpoint>.openai.azure.com/"
os.environ["AZURE_OPENAI_API_VERSION"] = "2023-06-01-preview"
os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"] = "chat"
Next, let’s construct our model and chat with it:
from langchain_core.messages import HumanMessage
from langchain_openai import AzureChatOpenAI
model = AzureChatOpenAI(
openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"],
)
message = HumanMessage(
content="Translate this sentence from English to French. I love programming."
)
model.invoke([message])
AIMessage(content="J'adore programmer.", response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 19, 'total_tokens': 25}, 'model_name': 'gpt-35-turbo', 'system_fingerprint': None, 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}, id='run-25ed88db-38f2-4b0c-a943-a03f217711a9-0')
Model Version
Azure OpenAI responses contain a model property, which is the name of the model used to generate the response. However, unlike native OpenAI responses, it does not contain the version of the model, which is set on the deployment in Azure. This makes it tricky to know which version of the model was used to generate the response, which can, as a result, lead to e.g. an incorrect total cost calculation with OpenAICallbackHandler.
To solve this problem, you can pass the model_version parameter to the AzureChatOpenAI class, which will be appended to the model name in the LLM output. This way you can easily distinguish between different versions of the model.
from langchain.callbacks import get_openai_callback
model = AzureChatOpenAI(
openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
azure_deployment=os.environ[
"AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"
], # in Azure, this deployment has version 0613 - input and output tokens are counted separately
)
with get_openai_callback() as cb:
model.invoke([message])
print(
f"Total Cost (USD): ${format(cb.total_cost, '.6f')}"
) # without specifying the model version, flat-rate 0.002 USD per 1k input and output tokens is used
Total Cost (USD): $0.000041
We can provide the model version to the AzureChatOpenAI constructor. It will get appended to the model name returned by Azure OpenAI, and the cost will be counted correctly.
model0301 = AzureChatOpenAI(
openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"],
model_version="0301",
)
with get_openai_callback() as cb:
model0301.invoke([message])
print(f"Total Cost (USD): ${format(cb.total_cost, '.6f')}")
Total Cost (USD): $0.000044 |
https://python.langchain.com/docs/integrations/chat/anthropic_functions/ | danger
The Anthropic API officially supports tool-calling so this workaround is no longer needed. Please use [ChatAnthropic](https://python.langchain.com/docs/integrations/chat/anthropic/) with `langchain-anthropic>=0.1.5`.
This notebook shows how to use an experimental wrapper around Anthropic that gives it tool calling and structured output capabilities. It follows Anthropic’s guide [here](https://docs.anthropic.com/claude/docs/functions-external-tools)
The wrapper is available from the `langchain-anthropic` package, and it also requires the optional dependency `defusedxml` for parsing XML output from the LLM.
Note: this is a beta feature that will be replaced by Anthropic’s formal implementation of tool calling, but it is useful for testing and experimentation in the meantime.
```
%pip install -qU langchain-anthropic defusedxmlfrom langchain_anthropic.experimental import ChatAnthropicTools
```
`ChatAnthropicTools` exposes a `bind_tools` method that allows you to pass in Pydantic models or BaseTools to the LLM.
```
from langchain_core.pydantic_v1 import BaseModelclass Person(BaseModel): name: str age: intmodel = ChatAnthropicTools(model="claude-3-opus-20240229").bind_tools(tools=[Person])model.invoke("I am a 27 year old named Erick")
```
```
AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'name': 'Person', 'arguments': '{"name": "Erick", "age": "27"}'}, 'type': 'function'}]})
```
## Structured Output[](#structured-output "Direct link to Structured Output")
`ChatAnthropicTools` also implements the [`with_structured_output` spec](https://python.langchain.com/docs/modules/model_io/chat/structured_output/) for extracting values. Note: this may not be as stable as with models that explicitly offer tool calling.
```
chain = ChatAnthropicTools(model="claude-3-opus-20240229").with_structured_output( Person)chain.invoke("I am a 27 year old named Erick")
```
```
Person(name='Erick', age=27)
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:24.493Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/anthropic_functions/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/anthropic_functions/",
"description": "The Anthropic API officially supports tool-calling so this workaround is",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4466",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"anthropic_functions\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:24 GMT",
"etag": "W/\"da686d0274fefacdf1f4cb7b174fdd85\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::cl42n-1713753504157-b76074dcb875"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/anthropic_functions/",
"property": "og:url"
},
{
"content": "[Deprecated] Experimental Anthropic Tools Wrapper | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "The Anthropic API officially supports tool-calling so this workaround is",
"property": "og:description"
}
],
"title": "[Deprecated] Experimental Anthropic Tools Wrapper | 🦜️🔗 LangChain"
} | danger
The Anthropic API officially supports tool-calling so this workaround is no longer needed. Please use ChatAnthropic with langchain-anthropic>=0.1.5.
This notebook shows how to use an experimental wrapper around Anthropic that gives it tool calling and structured output capabilities. It follows Anthropic’s guide here
The wrapper is available from the langchain-anthropic package, and it also requires the optional dependency defusedxml for parsing XML output from the llm.
Note: this is a beta feature that will be replaced by Anthropic’s formal implementation of tool calling, but it is useful for testing and experimentation in the meantime.
%pip install -qU langchain-anthropic defusedxml
from langchain_anthropic.experimental import ChatAnthropicTools
ChatAnthropicTools exposes a bind_tools method that allows you to pass in Pydantic models or BaseTools to the llm.
from langchain_core.pydantic_v1 import BaseModel
class Person(BaseModel):
name: str
age: int
model = ChatAnthropicTools(model="claude-3-opus-20240229").bind_tools(tools=[Person])
model.invoke("I am a 27 year old named Erick")
AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'name': 'Person', 'arguments': '{"name": "Erick", "age": "27"}'}, 'type': 'function'}]})
Structured Output
ChatAnthropicTools also implements the with_structured_output spec for extracting values. Note: this may not be as stable as with models that explicitly offer tool calling.
chain = ChatAnthropicTools(model="claude-3-opus-20240229").with_structured_output(
Person
)
chain.invoke("I am a 27 year old named Erick")
Person(name='Erick', age=27) |
https://python.langchain.com/docs/integrations/chat/anyscale/ | This notebook demonstrates the use of `langchain.chat_models.ChatAnyscale` for [Anyscale Endpoints](https://endpoints.anyscale.com/).
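The list of available model names shown below can be obtained programmatically. A minimal sketch of how that listing might have been produced — assuming the `get_available_models` helper and an `ANYSCALE_API_KEY` environment variable; the `chats` dictionary is an illustration, not the notebook's exact code:
```
import os

from langchain_community.chat_models import ChatAnyscale

os.environ["ANYSCALE_API_KEY"] = "..."  # your Anyscale Endpoints API key

# Build one chat model per model name the API key can access.
chats = {
    model: ChatAnyscale(model_name=model, temperature=1.0)
    for model in ChatAnyscale.get_available_models()
}
print(chats.keys())
```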
```
dict_keys(['meta-llama/Llama-2-70b-chat-hf', 'meta-llama/Llama-2-7b-chat-hf', 'meta-llama/Llama-2-13b-chat-hf'])
```
This way, the three requests will only take as long as the longest individual request.
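A minimal sketch of issuing those requests concurrently with `asyncio`, reusing the `chats` dictionary from the sketch above (the prompt text is an assumption for illustration, not the notebook's original):
```
import asyncio

from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage(content="You are a helpful AI that shares everything you know."),
    HumanMessage(
        content="Tell me technical facts about yourself. Are you a transformer model? How many billions of parameters do you have?"
    ),
]


async def get_msgs():
    # Fire every request at once; total wall time is roughly the slowest single request.
    tasks = [chat.ainvoke(messages) for chat in chats.values()]
    responses = await asyncio.gather(*tasks)
    return dict(zip(chats.keys(), responses))


# In a notebook, use `await get_msgs()` instead of asyncio.run().
answers = asyncio.run(get_msgs())
for model_name, response in answers.items():
    print(f"\t{model_name}\n{response.content}\n\n---\n")
```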
```
meta-llama/Llama-2-70b-chat-hfGreetings! I'm just an AI, I don't have a personal identity like humans do, but I'm here to help you with any questions you have.I'm a large language model, which means I'm trained on a large corpus of text data to generate language outputs that are coherent and natural-sounding. My architecture is based on a transformer model, which is a type of neural network that's particularly well-suited for natural language processing tasks.As for my parameters, I have a few billion parameters, but I don't have access to the exact number as it's not relevant to my functioning. My training data includes a vast amount of text from various sources, including books, articles, and websites, which I use to learn patterns and relationships in language.I'm designed to be a helpful tool for a variety of tasks, such as answering questions, providing information, and generating text. I'm constantly learning and improving my abilities through machine learning algorithms and feedback from users like you.I hope this helps! Is there anything else you'd like to know about me or my capabilities?--- meta-llama/Llama-2-7b-chat-hfAh, a fellow tech enthusiast! *adjusts glasses* I'm glad to share some technical details about myself. 🤓Indeed, I'm a transformer model, specifically a BERT-like language model trained on a large corpus of text data. My architecture is based on the transformer framework, which is a type of neural network designed for natural language processing tasks. 🏠As for the number of parameters, I have approximately 340 million. *winks* That's a pretty hefty number, if I do say so myself! These parameters allow me to learn and represent complex patterns in language, such as syntax, semantics, and more. 🤔But don't ask me to do math in my head – I'm a language model, not a calculating machine! 😅 My strengths lie in understanding and generating human-like text, so feel free to chat with me anytime you'd like. 💬Now, do you have any more technical questions for me? Or would you like to engage in a nice chat? 😊--- meta-llama/Llama-2-13b-chat-hfHello! As a friendly and helpful AI, I'd be happy to share some technical facts about myself.I am a transformer-based language model, specifically a variant of the BERT (Bidirectional Encoder Representations from Transformers) architecture. BERT was developed by Google in 2018 and has since become one of the most popular and widely-used AI language models.Here are some technical details about my capabilities:1. Parameters: I have approximately 340 million parameters, which are the numbers that I use to learn and represent language. This is a relatively large number of parameters compared to some other languages models, but it allows me to learn and understand complex language patterns and relationships.2. Training: I was trained on a large corpus of text data, including books, articles, and other sources of written content. This training allows me to learn about the structure and conventions of language, as well as the relationships between words and phrases.3. Architectures: My architecture is based on the transformer model, which is a type of neural network that is particularly well-suited for natural language processing tasks. The transformer model uses self-attention mechanisms to allow the model to "attend" to different parts of the input text, allowing it to capture long-range dependencies and contextual relationships.4. 
Precision: I am capable of generating text with high precision and accuracy, meaning that I can produce text that is close to human-level quality in terms of grammar, syntax, and coherence.5. Generative capabilities: In addition to being able to generate text based on prompts and questions, I am also capable of generating text based on a given topic or theme. This allows me to create longer, more coherent pieces of text that are organized around a specific idea or concept.Overall, I am a powerful and versatile language model that is capable of a wide range of natural language processing tasks. I am constantly learning and improving, and I am here to help answer any questions you may have!---CPU times: user 371 ms, sys: 15.5 ms, total: 387 msWall time: 12 s
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:24.908Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/anyscale/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/anyscale/",
"description": "This notebook demonstrates the use of",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3420",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"anyscale\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:24 GMT",
"etag": "W/\"36ddc4d997ff41333587525e4f44a3bf\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::f7wpm-1713753504704-58d58c7c4b50"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/anyscale/",
"property": "og:url"
},
{
"content": "ChatAnyscale | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook demonstrates the use of",
"property": "og:description"
}
],
"title": "ChatAnyscale | 🦜️🔗 LangChain"
} | This notebook demonstrates the use of langchain.chat_models.ChatAnyscale for Anyscale Endpoints.
dict_keys(['meta-llama/Llama-2-70b-chat-hf', 'meta-llama/Llama-2-7b-chat-hf', 'meta-llama/Llama-2-13b-chat-hf'])
This way, the three requests will only take as long as the longest individual request.
meta-llama/Llama-2-70b-chat-hf
Greetings! I'm just an AI, I don't have a personal identity like humans do, but I'm here to help you with any questions you have.
I'm a large language model, which means I'm trained on a large corpus of text data to generate language outputs that are coherent and natural-sounding. My architecture is based on a transformer model, which is a type of neural network that's particularly well-suited for natural language processing tasks.
As for my parameters, I have a few billion parameters, but I don't have access to the exact number as it's not relevant to my functioning. My training data includes a vast amount of text from various sources, including books, articles, and websites, which I use to learn patterns and relationships in language.
I'm designed to be a helpful tool for a variety of tasks, such as answering questions, providing information, and generating text. I'm constantly learning and improving my abilities through machine learning algorithms and feedback from users like you.
I hope this helps! Is there anything else you'd like to know about me or my capabilities?
---
meta-llama/Llama-2-7b-chat-hf
Ah, a fellow tech enthusiast! *adjusts glasses* I'm glad to share some technical details about myself. 🤓
Indeed, I'm a transformer model, specifically a BERT-like language model trained on a large corpus of text data. My architecture is based on the transformer framework, which is a type of neural network designed for natural language processing tasks. 🏠
As for the number of parameters, I have approximately 340 million. *winks* That's a pretty hefty number, if I do say so myself! These parameters allow me to learn and represent complex patterns in language, such as syntax, semantics, and more. 🤔
But don't ask me to do math in my head – I'm a language model, not a calculating machine! 😅 My strengths lie in understanding and generating human-like text, so feel free to chat with me anytime you'd like. 💬
Now, do you have any more technical questions for me? Or would you like to engage in a nice chat? 😊
---
meta-llama/Llama-2-13b-chat-hf
Hello! As a friendly and helpful AI, I'd be happy to share some technical facts about myself.
I am a transformer-based language model, specifically a variant of the BERT (Bidirectional Encoder Representations from Transformers) architecture. BERT was developed by Google in 2018 and has since become one of the most popular and widely-used AI language models.
Here are some technical details about my capabilities:
1. Parameters: I have approximately 340 million parameters, which are the numbers that I use to learn and represent language. This is a relatively large number of parameters compared to some other languages models, but it allows me to learn and understand complex language patterns and relationships.
2. Training: I was trained on a large corpus of text data, including books, articles, and other sources of written content. This training allows me to learn about the structure and conventions of language, as well as the relationships between words and phrases.
3. Architectures: My architecture is based on the transformer model, which is a type of neural network that is particularly well-suited for natural language processing tasks. The transformer model uses self-attention mechanisms to allow the model to "attend" to different parts of the input text, allowing it to capture long-range dependencies and contextual relationships.
4. Precision: I am capable of generating text with high precision and accuracy, meaning that I can produce text that is close to human-level quality in terms of grammar, syntax, and coherence.
5. Generative capabilities: In addition to being able to generate text based on prompts and questions, I am also capable of generating text based on a given topic or theme. This allows me to create longer, more coherent pieces of text that are organized around a specific idea or concept.
Overall, I am a powerful and versatile language model that is capable of a wide range of natural language processing tasks. I am constantly learning and improving, and I am here to help answer any questions you may have!
---
CPU times: user 371 ms, sys: 15.5 ms, total: 387 ms
Wall time: 12 s |
https://python.langchain.com/docs/integrations/chat/baidu_qianfan_endpoint/ | ## QianfanChatEndpoint
Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides models including Wenxin Yiyan (ERNIE-Bot) and third-party open-source models, but also provides various AI development tools and a complete development environment, which makes it easy for customers to use and develop large model applications.
Basically, those models are split into the following types:
* Embedding
* Chat
* Completion
In this notebook, we will introduce how to use LangChain with [Qianfan](https://cloud.baidu.com/doc/WENXINWORKSHOP/index.html), mainly for `Chat`, corresponding to the `langchain/chat_models` package in LangChain:
## API Initialization[](#api-initialization "Direct link to API Initialization")
To use the LLM services based on Baidu Qianfan, you have to initialize these parameters:
You can either initialize the AK and SK in environment variables or pass them as init params (see the sketch below):
```
export QIANFAN_AK=XXXexport QIANFAN_SK=XXX
```
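Alternatively, the credentials can be passed as init params when constructing the endpoint. A minimal sketch, assuming the `qianfan_ak` / `qianfan_sk` constructor parameters:
```
from langchain_community.chat_models import QianfanChatEndpoint

chat = QianfanChatEndpoint(
    qianfan_ak="Your_api_key",  # same value as the QIANFAN_AK environment variable
    qianfan_sk="Your_secret_Key",  # same value as the QIANFAN_SK environment variable
)
```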
## Current supported models:[](#current-supported-models "Direct link to Current supported models:")
* ERNIE-Bot-turbo (default models)
* ERNIE-Bot
* BLOOMZ-7B
* Llama-2-7b-chat
* Llama-2-13b-chat
* Llama-2-70b-chat
* Qianfan-BLOOMZ-7B-compressed
* Qianfan-Chinese-Llama-2-7B
* ChatGLM2-6B-32K
* AquilaChat-7B
## Set up[](#set-up "Direct link to Set up")
```
"""For basic init and call"""import osfrom langchain_community.chat_models import QianfanChatEndpointfrom langchain_core.language_models.chat_models import HumanMessageos.environ["QIANFAN_AK"] = "Your_api_key"os.environ["QIANFAN_SK"] = "You_secret_Key"
```
## Usage[](#usage "Direct link to Usage")
```
chat = QianfanChatEndpoint(streaming=True)messages = [HumanMessage(content="Hello")]chat.invoke(messages)
```
```
AIMessage(content='您好!请问您需要什么帮助?我将尽力回答您的问题。')
```
```
await chat.ainvoke(messages)
```
```
AIMessage(content='您好!有什么我可以帮助您的吗?')
```
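The list output below is what a batch call over the same messages returns — a minimal sketch, assuming the standard `.batch()` method of the runnable interface:
```
# Each element of the outer list is one independent chat request.
chat.batch([messages])
```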
```
[AIMessage(content='您好!有什么我可以帮助您的吗?')]
```
### Streaming[](#streaming "Direct link to Streaming")
```
try: for chunk in chat.stream(messages): print(chunk.content, end="", flush=True)except TypeError as e: print("")
```
## Use different models in Qianfan[](#use-different-models-in-qianfan "Direct link to Use different models in Qianfan")
The default model is ERNIE-Bot-turbo. In case you want to deploy your own model based on ERNIE Bot or a third-party open-source model, you can follow these steps:
1. (Optional; skip this step if the model is included in the default models) Deploy your model in the Qianfan Console and get your own customized deployment endpoint.
2. Set up the field called `endpoint` in the initialization:
```
chatBot = QianfanChatEndpoint( streaming=True, model="ERNIE-Bot",)messages = [HumanMessage(content="Hello")]chatBot.invoke(messages)
```
```
AIMessage(content='Hello,可以回答问题了,我会竭尽全力为您解答,请问有什么问题吗?')
```
## Model Params:[](#model-params "Direct link to Model Params:")
For now, only `ERNIE-Bot` and `ERNIE-Bot-turbo` support the model params below; more models may be supported in the future.
* temperature
* top\_p
* penalty\_score
```
chat.invoke( [HumanMessage(content="Hello")], **{"top_p": 0.4, "temperature": 0.1, "penalty_score": 1},)
```
```
AIMessage(content='您好!有什么我可以帮助您的吗?')
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:25.328Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/baidu_qianfan_endpoint/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/baidu_qianfan_endpoint/",
"description": "Baidu AI Cloud Qianfan Platform is a one-stop large model development",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3420",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"baidu_qianfan_endpoint\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:25 GMT",
"etag": "W/\"144a0c2513b2c17db3974421513a1816\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::qw5cn-1713753505244-c6bc040d9712"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/baidu_qianfan_endpoint/",
"property": "og:url"
},
{
"content": "QianfanChatEndpoint | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Baidu AI Cloud Qianfan Platform is a one-stop large model development",
"property": "og:description"
}
],
"title": "QianfanChatEndpoint | 🦜️🔗 LangChain"
} | QianfanChatEndpoint
Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides models including Wenxin Yiyan (ERNIE-Bot) and third-party open-source models, but also provides various AI development tools and a complete development environment, which makes it easy for customers to use and develop large model applications.
Basically, those models are split into the following types:
Embedding
Chat
Completion
In this notebook, we will introduce how to use LangChain with Qianfan, mainly for Chat, corresponding to the langchain/chat_models package in LangChain:
API Initialization
To use the LLM services based on Baidu Qianfan, you have to initialize these parameters:
You can either initialize the AK and SK in environment variables or pass them as init params:
export QIANFAN_AK=XXX
export QIANFAN_SK=XXX
Current supported models:
ERNIE-Bot-turbo (default models)
ERNIE-Bot
BLOOMZ-7B
Llama-2-7b-chat
Llama-2-13b-chat
Llama-2-70b-chat
Qianfan-BLOOMZ-7B-compressed
Qianfan-Chinese-Llama-2-7B
ChatGLM2-6B-32K
AquilaChat-7B
Set up
"""For basic init and call"""
import os
from langchain_community.chat_models import QianfanChatEndpoint
from langchain_core.messages import HumanMessage
os.environ["QIANFAN_AK"] = "Your_api_key"
os.environ["QIANFAN_SK"] = "You_secret_Key"
Usage
chat = QianfanChatEndpoint(streaming=True)
messages = [HumanMessage(content="Hello")]
chat.invoke(messages)
AIMessage(content='您好!请问您需要什么帮助?我将尽力回答您的问题。')
await chat.ainvoke(messages)
AIMessage(content='您好!有什么我可以帮助您的吗?')
[AIMessage(content='您好!有什么我可以帮助您的吗?')]
Streaming
try:
for chunk in chat.stream(messages):
print(chunk.content, end="", flush=True)
except TypeError as e:
print("")
Use different models in Qianfan
The default model is ERNIE-Bot-turbo. In case you want to deploy your own model based on ERNIE Bot or a third-party open-source model, you can follow these steps:
(Optional; skip this step if the model is included in the default models) Deploy your model in the Qianfan Console and get your own customized deployment endpoint.
Set up the field called endpoint in the initialization:
chatBot = QianfanChatEndpoint(
streaming=True,
model="ERNIE-Bot",
)
messages = [HumanMessage(content="Hello")]
chatBot.invoke(messages)
AIMessage(content='Hello,可以回答问题了,我会竭尽全力为您解答,请问有什么问题吗?')
Model Params:
For now, only ERNIE-Bot and ERNIE-Bot-turbo support the model params below; more models may be supported in the future.
temperature
top_p
penalty_score
chat.invoke(
[HumanMessage(content="Hello")],
**{"top_p": 0.4, "temperature": 0.1, "penalty_score": 1},
)
AIMessage(content='您好!有什么我可以帮助您的吗?')
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/integrations/chat/baichuan/ | ## Chat with Baichuan-192K
Baichuan chat models API by Baichuan Intelligent Technology. For more information, see [https://platform.baichuan-ai.com/docs/api](https://platform.baichuan-ai.com/docs/api)
```
from langchain_community.chat_models import ChatBaichuanfrom langchain_core.messages import HumanMessage
```
```
chat = ChatBaichuan(baichuan_api_key="YOUR_API_KEY")
```
Alternatively, you can set your API key with:
```
import osos.environ["BAICHUAN_API_KEY"] = "YOUR_API_KEY"
```
```
chat([HumanMessage(content="我日薪8块钱,请问在闰年的二月,我月薪多少")])
```
```
AIMessage(content='首先,我们需要确定闰年的二月有多少天。闰年的二月有29天。\n\n然后,我们可以计算你的月薪:\n\n日薪 = 月薪 / (当月天数)\n\n所以,你的月薪 = 日薪 * 当月天数\n\n将数值代入公式:\n\n月薪 = 8元/天 * 29天 = 232元\n\n因此,你在闰年的二月的月薪是232元。')
```
## Chat with Baichuan-192K with Streaming[](#chat-with-baichuan-192k-with-streaming "Direct link to Chat with Baichuan-192K with Streaming")
```
chat = ChatBaichuan( baichuan_api_key="YOUR_API_KEY", streaming=True,)
```
```
chat([HumanMessage(content="我日薪8块钱,请问在闰年的二月,我月薪多少")])
```
```
AIMessageChunk(content='首先,我们需要确定闰年的二月有多少天。闰年的二月有29天。\n\n然后,我们可以计算你的月薪:\n\n日薪 = 月薪 / (当月天数)\n\n所以,你的月薪 = 日薪 * 当月天数\n\n将数值代入公式:\n\n月薪 = 8元/天 * 29天 = 232元\n\n因此,你在闰年的二月的月薪是232元。')
```
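To consume the streamed tokens incrementally instead of receiving a single chunk, you can also iterate over the standard `.stream()` method inherited from the base chat model interface — a minimal sketch, not part of the original page:
```
# Prints tokens as they arrive from the streaming endpoint.
for chunk in chat.stream(
    [HumanMessage(content="我日薪8块钱,请问在闰年的二月,我月薪多少")]
):
    print(chunk.content, end="", flush=True)
```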
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:25.524Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/baichuan/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/baichuan/",
"description": "Baichuan chat models API by Baichuan Intelligent Technology. For more",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4363",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"baichuan\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:25 GMT",
"etag": "W/\"277fa89c59f727a3ce3a4af88f18596b\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::sgxwt-1713753505264-a5d7ac2edcaa"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/baichuan/",
"property": "og:url"
},
{
"content": "Chat with Baichuan-192K | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Baichuan chat models API by Baichuan Intelligent Technology. For more",
"property": "og:description"
}
],
"title": "Chat with Baichuan-192K | 🦜️🔗 LangChain"
} | Chat with Baichuan-192K
Baichuan chat models API by Baichuan Intelligent Technology. For more information, see https://platform.baichuan-ai.com/docs/api
from langchain_community.chat_models import ChatBaichuan
from langchain_core.messages import HumanMessage
chat = ChatBaichuan(baichuan_api_key="YOUR_API_KEY")
Alternatively, you can set your API key with:
import os
os.environ["BAICHUAN_API_KEY"] = "YOUR_API_KEY"
chat([HumanMessage(content="我日薪8块钱,请问在闰年的二月,我月薪多少")])
AIMessage(content='首先,我们需要确定闰年的二月有多少天。闰年的二月有29天。\n\n然后,我们可以计算你的月薪:\n\n日薪 = 月薪 / (当月天数)\n\n所以,你的月薪 = 日薪 * 当月天数\n\n将数值代入公式:\n\n月薪 = 8元/天 * 29天 = 232元\n\n因此,你在闰年的二月的月薪是232元。')
Chat with Baichuan-192K with Streaming
chat = ChatBaichuan(
baichuan_api_key="YOUR_API_KEY",
streaming=True,
)
chat([HumanMessage(content="我日薪8块钱,请问在闰年的二月,我月薪多少")])
AIMessageChunk(content='首先,我们需要确定闰年的二月有多少天。闰年的二月有29天。\n\n然后,我们可以计算你的月薪:\n\n日薪 = 月薪 / (当月天数)\n\n所以,你的月薪 = 日薪 * 当月天数\n\n将数值代入公式:\n\n月薪 = 8元/天 * 29天 = 232元\n\n因此,你在闰年的二月的月薪是232元。')
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/integrations/chat/azureml_chat_endpoint/ | ## AzureMLChatOnlineEndpoint
> [Azure Machine Learning](https://azure.microsoft.com/en-us/products/machine-learning/) is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides foundational and general purpose models from different providers.
>
> In general, you need to deploy models in order to consume its predictions (inference). In `Azure Machine Learning`, [Online Endpoints](https://learn.microsoft.com/en-us/azure/machine-learning/concept-endpoints) are used to deploy these models with a real-time serving. They are based on the ideas of `Endpoints` and `Deployments` which allow you to decouple the interface of your production workload from the implementation that serves it.
This notebook goes over how to use a chat model hosted on an `Azure Machine Learning Endpoint`.
```
from langchain_community.chat_models.azureml_endpoint import AzureMLChatOnlineEndpoint
```
## Set up[](#set-up "Direct link to Set up")
You must [deploy a model on Azure ML](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-foundation-models?view=azureml-api-2#deploying-foundation-models-to-endpoints-for-inferencing) or [to Azure AI studio](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-open) and obtain the following parameters:
* `endpoint_url`: The REST endpoint url provided by the endpoint.
* `endpoint_api_type`: Use `endpoint_type='dedicated'` when deploying models to **Dedicated endpoints** (hosted managed infrastructure). Use `endpoint_type='serverless'` when deploying models using the **Pay-as-you-go** offering (model as a service).
* `endpoint_api_key`: The API key provided by the endpoint
## Content Formatter[](#content-formatter "Direct link to Content Formatter")
The `content_formatter` parameter is a handler class for transforming the request and response of an AzureML endpoint to match the required schema. Since there is a wide range of models in the model catalog, each of which may process data differently, a `ContentFormatterBase` class is provided to allow users to transform data to their liking. The following content formatters are provided:
* `CustomOpenAIChatContentFormatter`: Formats request and response data for models like LLaMa2-chat that follow the OpenAI API spec for request and response.
_Note: `langchain.chat_models.azureml_endpoint.LlamaChatContentFormatter` is being deprecated and replaced with `langchain.chat_models.azureml_endpoint.CustomOpenAIChatContentFormatter`._
You can implement custom content formatters specific to your model by deriving from the class `langchain_community.llms.azureml_endpoint.ContentFormatterBase`.
## Examples[](#examples "Direct link to Examples")
The following section contains examples about how to use this class:
### Example: Chat completions with real-time endpoints[](#example-chat-completions-with-real-time-endpoints "Direct link to Example: Chat completions with real-time endpoints")
```
from langchain_community.chat_models.azureml_endpoint import ( AzureMLEndpointApiType, CustomOpenAIChatContentFormatter,)from langchain_core.messages import HumanMessagechat = AzureMLChatOnlineEndpoint( endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/score", endpoint_api_type=AzureMLEndpointApiType.dedicated, endpoint_api_key="my-api-key", content_formatter=CustomOpenAIChatContentFormatter(),)response = chat.invoke( [HumanMessage(content="Will the Collatz conjecture ever be solved?")])response
```
```
AIMessage(content=' The Collatz Conjecture is one of the most famous unsolved problems in mathematics, and it has been the subject of much study and research for many years. While it is impossible to predict with certainty whether the conjecture will ever be solved, there are several reasons why it is considered a challenging and important problem:\n\n1. Simple yet elusive: The Collatz Conjecture is a deceptively simple statement that has proven to be extraordinarily difficult to prove or disprove. Despite its simplicity, the conjecture has eluded some of the brightest minds in mathematics, and it remains one of the most famous open problems in the field.\n2. Wide-ranging implications: The Collatz Conjecture has far-reaching implications for many areas of mathematics, including number theory, algebra, and analysis. A solution to the conjecture could have significant impacts on these fields and potentially lead to new insights and discoveries.\n3. Computational evidence: While the conjecture remains unproven, extensive computational evidence supports its validity. In fact, no counterexample to the conjecture has been found for any starting value up to 2^64 (a number', additional_kwargs={}, example=False)
```
### Example: Chat completions with pay-as-you-go deployments (model as a service)[](#example-chat-completions-with-pay-as-you-go-deployments-model-as-a-service "Direct link to Example: Chat completions with pay-as-you-go deployments (model as a service)")
```
chat = AzureMLChatOnlineEndpoint( endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/v1/chat/completions", endpoint_api_type=AzureMLEndpointApiType.serverless, endpoint_api_key="my-api-key", content_formatter=CustomOpenAIChatContentFormatter,)response = chat.invoke( [HumanMessage(content="Will the Collatz conjecture ever be solved?")])response
```
If you need to pass additional parameters to the model, use `model_kwargs` argument:
```
chat = AzureMLChatOnlineEndpoint( endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/v1/chat/completions", endpoint_api_type=AzureMLEndpointApiType.serverless, endpoint_api_key="my-api-key", content_formatter=CustomOpenAIChatContentFormatter, model_kwargs={"temperature": 0.8},)
```
Parameters can also be passed during invocation:
```
response = chat.invoke( [HumanMessage(content="Will the Collatz conjecture ever be solved?")], max_tokens=512,)response
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:25.655Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/azureml_chat_endpoint/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/azureml_chat_endpoint/",
"description": "[Azure Machine",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4363",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"azureml_chat_endpoint\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:25 GMT",
"etag": "W/\"925746c9374913f125dab7b00db03488\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::cvhgj-1713753505267-e0cc0f137ed4"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/azureml_chat_endpoint/",
"property": "og:url"
},
{
"content": "AzureMLChatOnlineEndpoint | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "[Azure Machine",
"property": "og:description"
}
],
"title": "AzureMLChatOnlineEndpoint | 🦜️🔗 LangChain"
} | AzureMLChatOnlineEndpoint
Azure Machine Learning is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides foundational and general purpose models from different providers.
In general, you need to deploy models in order to consume its predictions (inference). In Azure Machine Learning, Online Endpoints are used to deploy these models with a real-time serving. They are based on the ideas of Endpoints and Deployments which allow you to decouple the interface of your production workload from the implementation that serves it.
This notebook goes over how to use a chat model hosted on an Azure Machine Learning Endpoint.
from langchain_community.chat_models.azureml_endpoint import AzureMLChatOnlineEndpoint
Set up
You must deploy a model on Azure ML or to Azure AI studio and obtain the following parameters:
endpoint_url: The REST endpoint url provided by the endpoint.
endpoint_api_type: Use endpoint_type='dedicated' when deploying models to Dedicated endpoints (hosted managed infrastructure). Use endpoint_type='serverless' when deploying models using the Pay-as-you-go offering (model as a service).
endpoint_api_key: The API key provided by the endpoint
Content Formatter
The content_formatter parameter is a handler class for transforming the request and response of an AzureML endpoint to match the required schema. Since there is a wide range of models in the model catalog, each of which may process data differently, a ContentFormatterBase class is provided to allow users to transform data to their liking. The following content formatters are provided:
CustomOpenAIChatContentFormatter: Formats request and response data for models like LLaMa2-chat that follow the OpenAI API spec for request and response.
Note: langchain.chat_models.azureml_endpoint.LlamaChatContentFormatter is being deprecated and replaced with langchain.chat_models.azureml_endpoint.CustomOpenAIChatContentFormatter.
You can implement custom content formatters specific to your model by deriving from the class langchain_community.llms.azureml_endpoint.ContentFormatterBase.
Examples
The following section contains examples about how to use this class:
Example: Chat completions with real-time endpoints
from langchain_community.chat_models.azureml_endpoint import (
AzureMLEndpointApiType,
CustomOpenAIChatContentFormatter,
)
from langchain_core.messages import HumanMessage
chat = AzureMLChatOnlineEndpoint(
endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/score",
endpoint_api_type=AzureMLEndpointApiType.dedicated,
endpoint_api_key="my-api-key",
content_formatter=CustomOpenAIChatContentFormatter(),
)
response = chat.invoke(
[HumanMessage(content="Will the Collatz conjecture ever be solved?")]
)
response
AIMessage(content=' The Collatz Conjecture is one of the most famous unsolved problems in mathematics, and it has been the subject of much study and research for many years. While it is impossible to predict with certainty whether the conjecture will ever be solved, there are several reasons why it is considered a challenging and important problem:\n\n1. Simple yet elusive: The Collatz Conjecture is a deceptively simple statement that has proven to be extraordinarily difficult to prove or disprove. Despite its simplicity, the conjecture has eluded some of the brightest minds in mathematics, and it remains one of the most famous open problems in the field.\n2. Wide-ranging implications: The Collatz Conjecture has far-reaching implications for many areas of mathematics, including number theory, algebra, and analysis. A solution to the conjecture could have significant impacts on these fields and potentially lead to new insights and discoveries.\n3. Computational evidence: While the conjecture remains unproven, extensive computational evidence supports its validity. In fact, no counterexample to the conjecture has been found for any starting value up to 2^64 (a number', additional_kwargs={}, example=False)
Example: Chat completions with pay-as-you-go deployments (model as a service)
chat = AzureMLChatOnlineEndpoint(
endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/v1/chat/completions",
endpoint_api_type=AzureMLEndpointApiType.serverless,
endpoint_api_key="my-api-key",
content_formatter=CustomOpenAIChatContentFormatter,
)
response = chat.invoke(
[HumanMessage(content="Will the Collatz conjecture ever be solved?")]
)
response
If you need to pass additional parameters to the model, use model_kwargs argument:
chat = AzureMLChatOnlineEndpoint(
endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/v1/chat/completions",
endpoint_api_type=AzureMLEndpointApiType.serverless,
endpoint_api_key="my-api-key",
content_formatter=CustomOpenAIChatContentFormatter,
model_kwargs={"temperature": 0.8},
)
Parameters can also be passed during invocation:
response = chat.invoke(
[HumanMessage(content="Will the Collatz conjecture ever be solved?")],
max_tokens=512,
)
response
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/integrations/chat/bedrock/ | ## ChatBedrock
> [Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like `AI21 Labs`, `Anthropic`, `Cohere`, `Meta`, `Stability AI`, and `Amazon` via a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. Using `Amazon Bedrock`, you can easily experiment with and evaluate top FMs for your use case, privately customize them with your data using techniques such as fine-tuning and `Retrieval Augmented Generation` (`RAG`), and build agents that execute tasks using your enterprise systems and data sources. Since `Amazon Bedrock` is serverless, you don’t have to manage any infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using the AWS services you are already familiar with.
```
%pip install --upgrade --quiet langchain-aws
```
```
Note: you may need to restart the kernel to use updated packages.
```
```
from langchain_aws import ChatBedrockfrom langchain_core.messages import HumanMessage
```
```
chat = ChatBedrock( model_id="anthropic.claude-3-sonnet-20240229-v1:0", model_kwargs={"temperature": 0.1},)
```
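If your AWS region or credentials are not picked up automatically from the environment, they can be passed explicitly. A minimal sketch, assuming the boto3-style `region_name` and `credentials_profile_name` parameters:
```
chat = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    model_kwargs={"temperature": 0.1},
    region_name="us-east-1",  # assumption: adjust to the region of your Bedrock deployment
    credentials_profile_name="default",  # assumption: a named profile from ~/.aws/credentials
)
```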
```
messages = [ HumanMessage( content="Translate this sentence from English to French. I love programming." )]chat.invoke(messages)
```
```
AIMessage(content="Voici la traduction en français :\n\nJ'aime la programmation.", additional_kwargs={'usage': {'prompt_tokens': 20, 'completion_tokens': 21, 'total_tokens': 41}}, response_metadata={'model_id': 'anthropic.claude-3-sonnet-20240229-v1:0', 'usage': {'prompt_tokens': 20, 'completion_tokens': 21, 'total_tokens': 41}}, id='run-994f0362-0e50-4524-afad-3c4f5bb11328-0')
```
### Streaming[](#streaming "Direct link to Streaming")
To stream responses, you can use the runnable `.stream()` method.
```
for chunk in chat.stream(messages): print(chunk.content, end="", flush=True)
```
```
Voici la traduction en français :J'aime la programmation.
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:26.107Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/bedrock/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/bedrock/",
"description": "Amazon Bedrock is a fully managed",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "2692",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"bedrock\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:26 GMT",
"etag": "W/\"c285b090c6a3384ed94d39ea6a333451\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::5nsvl-1713753505992-5660a39f0af9"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/bedrock/",
"property": "og:url"
},
{
"content": "ChatBedrock | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Amazon Bedrock is a fully managed",
"property": "og:description"
}
],
"title": "ChatBedrock | 🦜️🔗 LangChain"
} | ChatBedrock
Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. Using Amazon Bedrock, you can easily experiment with and evaluate top FMs for your use case, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources. Since Amazon Bedrock is serverless, you don’t have to manage any infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using the AWS services you are already familiar with.
%pip install --upgrade --quiet langchain-aws
Note: you may need to restart the kernel to use updated packages.
from langchain_aws import ChatBedrock
from langchain_core.messages import HumanMessage
chat = ChatBedrock(
model_id="anthropic.claude-3-sonnet-20240229-v1:0",
model_kwargs={"temperature": 0.1},
)
messages = [
HumanMessage(
content="Translate this sentence from English to French. I love programming."
)
]
chat.invoke(messages)
AIMessage(content="Voici la traduction en français :\n\nJ'aime la programmation.", additional_kwargs={'usage': {'prompt_tokens': 20, 'completion_tokens': 21, 'total_tokens': 41}}, response_metadata={'model_id': 'anthropic.claude-3-sonnet-20240229-v1:0', 'usage': {'prompt_tokens': 20, 'completion_tokens': 21, 'total_tokens': 41}}, id='run-994f0362-0e50-4524-afad-3c4f5bb11328-0')
Streaming
To stream responses, you can use the runnable .stream() method.
for chunk in chat.stream(messages):
print(chunk.content, end="", flush=True)
Voici la traduction en français :
J'aime la programmation.
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/integrations/chat/dappier/ | ## Dappier AI
**Dappier: Powering AI with Dynamic, Real-Time Data Models**
Dappier offers a cutting-edge platform that grants developers immediate access to a wide array of real-time data models spanning news, entertainment, finance, market data, weather, and beyond. With our pre-trained data models, you can supercharge your AI applications, ensuring they deliver precise, up-to-date responses and minimize inaccuracies.
Dappier data models help you build next-gen LLM apps with trusted, up-to-date content from the world’s leading brands. Unleash your creativity and enhance any GPT App or AI workflow with actionable, proprietary data through a simple API. Augmenting your AI with proprietary data from trusted sources is the best way to ensure factual, up-to-date responses with fewer hallucinations, no matter the question.
For Developers, By Developers. Designed with developers in mind, Dappier simplifies the journey from data integration to monetization, providing clear, straightforward paths to deploy and earn from your AI models. Experience the future of monetization infrastructure for the new internet at **[https://dappier.com/](https://dappier.com/)**.
This example goes over how to use LangChain to interact with Dappier AI models
* * *
To use one of our Dappier AI Data Models, you will need an API key. Please visit Dappier Platform ([https://platform.dappier.com/](https://platform.dappier.com/)) to log in and create an API key in your profile.
You can find more details on the API reference : [https://docs.dappier.com/introduction](https://docs.dappier.com/introduction)
To work with our Dappier Chat Model you can pass the key directly through the parameter named dappier\_api\_key when initiating the class or set as an environment variable.
```
export DAPPIER_API_KEY="..."
```
```
from langchain_community.chat_models.dappier import ChatDappierAIfrom langchain_core.messages import HumanMessage
```
```
chat = ChatDappierAI( dappier_endpoint="https://api.dappier.com/app/datamodelconversation", dappier_model="dm_01hpsxyfm2fwdt2zet9cg6fdxt", dappier_api_key="...",)
```
```
messages = [HumanMessage(content="Who won the super bowl in 2024?")]chat.invoke(messages)
```
```
AIMessage(content='Hey there! The Kansas City Chiefs won Super Bowl LVIII in 2024. They beat the San Francisco 49ers in overtime with a final score of 25-22. It was quite the game! 🏈')
```
```
await chat.ainvoke(messages)
```
```
AIMessage(content='The Kansas City Chiefs won Super Bowl LVIII in 2024! 🏈')
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:26.543Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/dappier/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/dappier/",
"description": "Dappier: Powering AI with Dynamic, Real-Time Data Models",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3421",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"dappier\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:26 GMT",
"etag": "W/\"effc5df1314f9ff3366f5f618bea1025\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::85vkj-1713753506472-827905b0e7b8"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/dappier/",
"property": "og:url"
},
{
"content": "Dappier AI | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Dappier: Powering AI with Dynamic, Real-Time Data Models",
"property": "og:description"
}
],
"title": "Dappier AI | 🦜️🔗 LangChain"
} | Dappier AI
Dappier: Powering AI with Dynamic, Real-Time Data Models
Dappier offers a cutting-edge platform that grants developers immediate access to a wide array of real-time data models spanning news, entertainment, finance, market data, weather, and beyond. With our pre-trained data models, you can supercharge your AI applications, ensuring they deliver precise, up-to-date responses and minimize inaccuracies.
Dappier data models help you build next-gen LLM apps with trusted, up-to-date content from the world’s leading brands. Unleash your creativity and enhance any GPT App or AI workflow with actionable, proprietary data through a simple API. Augmenting your AI with proprietary data from trusted sources is the best way to ensure factual, up-to-date responses with fewer hallucinations, no matter the question.
For Developers, By Developers. Designed with developers in mind, Dappier simplifies the journey from data integration to monetization, providing clear, straightforward paths to deploy and earn from your AI models. Experience the future of monetization infrastructure for the new internet at https://dappier.com/.
This example goes over how to use LangChain to interact with Dappier AI models
To use one of our Dappier AI Data Models, you will need an API key. Please visit Dappier Platform (https://platform.dappier.com/) to log in and create an API key in your profile.
You can find more details on the API reference : https://docs.dappier.com/introduction
To work with our Dappier Chat Model you can pass the key directly through the parameter named dappier_api_key when initiating the class or set as an environment variable.
export DAPPIER_API_KEY="..."
from langchain_community.chat_models.dappier import ChatDappierAI
from langchain_core.messages import HumanMessage
chat = ChatDappierAI(
dappier_endpoint="https://api.dappier.com/app/datamodelconversation",
dappier_model="dm_01hpsxyfm2fwdt2zet9cg6fdxt",
dappier_api_key="...",
)
messages = [HumanMessage(content="Who won the super bowl in 2024?")]
chat.invoke(messages)
AIMessage(content='Hey there! The Kansas City Chiefs won Super Bowl LVIII in 2024. They beat the San Francisco 49ers in overtime with a final score of 25-22. It was quite the game! 🏈')
await chat.ainvoke(messages)
AIMessage(content='The Kansas City Chiefs won Super Bowl LVIII in 2024! 🏈') |
https://python.langchain.com/docs/integrations/chat_loaders/langsmith_dataset/ | ## LangSmith Chat Datasets
This notebook demonstrates an easy way to load a LangSmith chat dataset and fine-tune a model on that data. The process is simple and comprises 3 steps.
1. Create the chat dataset.
2. Use the LangSmithDatasetChatLoader to load examples.
3. Fine-tune your model.
Then you can use the fine-tuned model in your LangChain app.
Before diving in, let’s install our prerequisites.
## Prerequisites[](#prerequisites "Direct link to Prerequisites")
Ensure you’ve installed langchain >= 0.0.311 and have configured your environment with your LangSmith API key.
```
%pip install --upgrade --quiet langchain langchain-openai
```
```
import osimport uuiduid = uuid.uuid4().hex[:6]os.environ["LANGCHAIN_TRACING_V2"] = "true"os.environ["LANGCHAIN_API_KEY"] = "YOUR API KEY"
```
## 1\. Select a dataset[](#select-a-dataset "Direct link to 1. Select a dataset")
Rather than selecting which runs to fine-tune on, this notebook fine-tunes a model directly on a prepared chat dataset. In practice, you will often curate these datasets from traced runs. You can learn more about LangSmith datasets in the [docs](https://docs.smith.langchain.com/evaluation/concepts#datasets).
For the sake of this tutorial, we will upload an existing dataset here that you can use.
```
from langsmith.client import Client

client = Client()
```
```
import requests

url = "https://raw.githubusercontent.com/langchain-ai/langchain/master/docs/docs/integrations/chat_loaders/example_data/langsmith_chat_dataset.json"
response = requests.get(url)
response.raise_for_status()
data = response.json()
```
```
dataset_name = f"Extraction Fine-tuning Dataset {uid}"
ds = client.create_dataset(dataset_name=dataset_name, data_type="chat")
```
```
_ = client.create_examples( inputs=[e["inputs"] for e in data], outputs=[e["outputs"] for e in data], dataset_id=ds.id,)
```
## 2\. Prepare Data[](#prepare-data "Direct link to 2. Prepare Data")
Now we can create an instance of LangSmithDatasetChatLoader and load the chat sessions using its lazy\_load() method.
```
from langchain_community.chat_loaders.langsmith import LangSmithDatasetChatLoader

loader = LangSmithDatasetChatLoader(dataset_name=dataset_name)
chat_sessions = loader.lazy_load()
```
#### With the chat sessions loaded, convert them into a format suitable for fine-tuning.[](#with-the-chat-sessions-loaded-convert-them-into-a-format-suitable-for-fine-tuning. "Direct link to With the chat sessions loaded, convert them into a format suitable for fine-tuning.")
```
from langchain_community.adapters.openai import convert_messages_for_finetuning

training_data = convert_messages_for_finetuning(chat_sessions)
```
## 3\. Fine-tune the Model[](#fine-tune-the-model "Direct link to 3. Fine-tune the Model")
Now, initiate the fine-tuning process using the OpenAI library.
```
import json
import time
from io import BytesIO

import openai

my_file = BytesIO()
for dialog in training_data:
    my_file.write((json.dumps({"messages": dialog}) + "\n").encode("utf-8"))

my_file.seek(0)
training_file = openai.files.create(file=my_file, purpose="fine-tune")

job = openai.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# Wait for the fine-tuning to complete (this may take some time)
status = openai.fine_tuning.jobs.retrieve(job.id).status
start_time = time.time()
while status != "succeeded":
    print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True)
    time.sleep(5)
    status = openai.fine_tuning.jobs.retrieve(job.id).status

# Now your model is fine-tuned!
```
```
Status=[running]... 429.55s. 46.34s
```
## 4\. Use in LangChain[](#use-in-langchain "Direct link to 4. Use in LangChain")
After fine-tuning, use the resulting model ID with the ChatOpenAI model class in your LangChain app.
```
# Get the fine-tuned model ID
job = openai.fine_tuning.jobs.retrieve(job.id)
model_id = job.fine_tuned_model

# Use the fine-tuned model in LangChain
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model=model_id,
    temperature=1,
)
```
```
model.invoke("There were three ravens sat on a tree.")
```
```
AIMessage(content='[{"s": "There were three ravens", "object": "tree", "relation": "sat on"}, {"s": "three ravens", "object": "a tree", "relation": "sat on"}]')
```
Now you have successfully fine-tuned a model using data from LangSmith LLM runs!
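For a quick sanity check, you can also wrap the fine-tuned model in a small chain. This is only a minimal sketch; the system prompt and input below are illustrative and not part of the dataset above:

```
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# Reuse the fine-tuned `model` created above
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You extract subject/relation/object triplets from text."),
        ("human", "{input}"),
    ]
)
chain = prompt | model | StrOutputParser()

print(chain.invoke({"input": "There were three ravens sat on a tree."}))
```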
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:26.813Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat_loaders/langsmith_dataset/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat_loaders/langsmith_dataset/",
"description": "This notebook demonstrates an easy way to load a LangSmith chat dataset",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3423",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"langsmith_dataset\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:26 GMT",
"etag": "W/\"8b5cdf8fb21af4cdda09fe36e5f00ccf\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::skngc-1713753506739-87acc9c83c7b"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat_loaders/langsmith_dataset/",
"property": "og:url"
},
{
"content": "LangSmith Chat Datasets | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook demonstrates an easy way to load a LangSmith chat dataset",
"property": "og:description"
}
],
"title": "LangSmith Chat Datasets | 🦜️🔗 LangChain"
} | LangSmith Chat Datasets
This notebook demonstrates an easy way to load a LangSmith chat dataset and fine-tune a model on that data. The process is simple and comprises 3 steps.
Create the chat dataset.
Use the LangSmithDatasetChatLoader to load examples.
Fine-tune your model.
Then you can use the fine-tuned model in your LangChain app.
Before diving in, let’s install our prerequisites.
Prerequisites
Ensure you’ve installed langchain >= 0.0.311 and have configured your environment with your LangSmith API key.
%pip install --upgrade --quiet langchain langchain-openai
import os
import uuid
uid = uuid.uuid4().hex[:6]
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "YOUR API KEY"
1. Select a dataset
Rather than selecting which runs to fine-tune on, this notebook fine-tunes a model directly on a prepared chat dataset. In practice, you will often curate these datasets from traced runs. You can learn more about LangSmith datasets in the docs.
For the sake of this tutorial, we will upload an existing dataset here that you can use.
from langsmith.client import Client
client = Client()
import requests
url = "https://raw.githubusercontent.com/langchain-ai/langchain/master/docs/docs/integrations/chat_loaders/example_data/langsmith_chat_dataset.json"
response = requests.get(url)
response.raise_for_status()
data = response.json()
dataset_name = f"Extraction Fine-tuning Dataset {uid}"
ds = client.create_dataset(dataset_name=dataset_name, data_type="chat")
_ = client.create_examples(
inputs=[e["inputs"] for e in data],
outputs=[e["outputs"] for e in data],
dataset_id=ds.id,
)
2. Prepare Data
Now we can create an instance of LangSmithDatasetChatLoader and load the chat sessions using its lazy_load() method.
from langchain_community.chat_loaders.langsmith import LangSmithDatasetChatLoader
loader = LangSmithDatasetChatLoader(dataset_name=dataset_name)
chat_sessions = loader.lazy_load()
With the chat sessions loaded, convert them into a format suitable for fine-tuning.
from langchain_community.adapters.openai import convert_messages_for_finetuning
training_data = convert_messages_for_finetuning(chat_sessions)
3. Fine-tune the Model
Now, initiate the fine-tuning process using the OpenAI library.
import json
import time
from io import BytesIO
import openai
my_file = BytesIO()
for dialog in training_data:
my_file.write((json.dumps({"messages": dialog}) + "\n").encode("utf-8"))
my_file.seek(0)
training_file = openai.files.create(file=my_file, purpose="fine-tune")
job = openai.fine_tuning.jobs.create(
training_file=training_file.id,
model="gpt-3.5-turbo",
)
# Wait for the fine-tuning to complete (this may take some time)
status = openai.fine_tuning.jobs.retrieve(job.id).status
start_time = time.time()
while status != "succeeded":
print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True)
time.sleep(5)
status = openai.fine_tuning.jobs.retrieve(job.id).status
# Now your model is fine-tuned!
Status=[running]... 429.55s. 46.34s
4. Use in LangChain
After fine-tuning, use the resulting model ID with the ChatOpenAI model class in your LangChain app.
# Get the fine-tuned model ID
job = openai.fine_tuning.jobs.retrieve(job.id)
model_id = job.fine_tuned_model
# Use the fine-tuned model in LangChain
from langchain_openai import ChatOpenAI
model = ChatOpenAI(
model=model_id,
temperature=1,
)
model.invoke("There were three ravens sat on a tree.")
AIMessage(content='[{"s": "There were three ravens", "object": "tree", "relation": "sat on"}, {"s": "three ravens", "object": "a tree", "relation": "sat on"}]')
Now you have successfully fine-tuned a model using data from LangSmith LLM runs!
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/integrations/chat/deepinfra/ | ## DeepInfra
[DeepInfra](https://deepinfra.com/?utm_source=langchain) is a serverless inference as a service that provides access to a [variety of LLMs](https://deepinfra.com/models?utm_source=langchain) and [embeddings models](https://deepinfra.com/models?type=embeddings&utm_source=langchain). This notebook goes over how to use LangChain with DeepInfra for chat models.
## Set the Environment API Key[](#set-the-environment-api-key "Direct link to Set the Environment API Key")
Make sure to get your API key from DeepInfra. You have to [Login](https://deepinfra.com/login?from=%2Fdash) and get a new token.
You are given 1 hour of free serverless GPU compute to test different models (see [here](https://github.com/deepinfra/deepctl#deepctl)). You can print your token with `deepctl auth token`
```
# get a new token: https://deepinfra.com/login?from=%2Fdash
from getpass import getpass

DEEPINFRA_API_TOKEN = getpass()
```
```
import os

# or pass deepinfra_api_token parameter to the ChatDeepInfra constructor
os.environ["DEEPINFRA_API_TOKEN"] = DEEPINFRA_API_TOKEN
```
```
from langchain_community.chat_models import ChatDeepInfra
from langchain_core.messages import HumanMessage
```
```
chat = ChatDeepInfra(model="meta-llama/Llama-2-7b-chat-hf")
```
```
messages = [
    HumanMessage(
        content="Translate this sentence from English to French. I love programming."
    )
]
chat(messages)
```
```
AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False)
```
## `ChatDeepInfra` also supports async and streaming functionality:[](#chatdeepinfra-also-supports-async-and-streaming-functionality "Direct link to chatdeepinfra-also-supports-async-and-streaming-functionality")
```
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
```
```
await chat.agenerate([messages])
```
```
LLMResult(generations=[[ChatGeneration(text=" J'aime programmer.", generation_info=None, message=AIMessage(content=" J'aime programmer.", additional_kwargs={}, example=False))]], llm_output={}, run=[RunInfo(run_id=UUID('8cc8fb68-1c35-439c-96a0-695036a93652'))])
```
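Streaming can also be consumed directly, without callbacks. The following is a minimal sketch that assumes a recent `langchain-community` release where `ChatDeepInfra` exposes the standard `.stream()` interface:

```
# `chat` and `messages` are the objects defined earlier in this notebook
for chunk in chat.stream(messages):
    print(chunk.content, end="", flush=True)
```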
```
chat = ChatDeepInfra(
    streaming=True,
    verbose=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)
chat(messages)
```
```
AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False)
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:27.567Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/deepinfra/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/deepinfra/",
"description": "DeepInfra is a serverless",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4364",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"deepinfra\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:27 GMT",
"etag": "W/\"63056759331cddd8f816060e19342e52\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::w7sgp-1713753507451-a9c5d4e9165d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/deepinfra/",
"property": "og:url"
},
{
"content": "DeepInfra | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "DeepInfra is a serverless",
"property": "og:description"
}
],
"title": "DeepInfra | 🦜️🔗 LangChain"
} | DeepInfra
DeepInfra is a serverless inference as a service that provides access to a variety of LLMs and embeddings models. This notebook goes over how to use LangChain with DeepInfra for chat models.
Set the Environment API Key
Make sure to get your API key from DeepInfra. You have to Login and get a new token.
You are given 1 hour of free serverless GPU compute to test different models (see here). You can print your token with deepctl auth token
# get a new token: https://deepinfra.com/login?from=%2Fdash
from getpass import getpass
DEEPINFRA_API_TOKEN = getpass()
import os
# or pass deepinfra_api_token parameter to the ChatDeepInfra constructor
os.environ["DEEPINFRA_API_TOKEN"] = DEEPINFRA_API_TOKEN
from langchain_community.chat_models import ChatDeepInfra
from langchain_core.messages import HumanMessage
chat = ChatDeepInfra(model="meta-llama/Llama-2-7b-chat-hf")
messages = [
HumanMessage(
content="Translate this sentence from English to French. I love programming."
)
]
chat(messages)
AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False)
ChatDeepInfra also supports async and streaming functionality:
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
await chat.agenerate([messages])
LLMResult(generations=[[ChatGeneration(text=" J'aime programmer.", generation_info=None, message=AIMessage(content=" J'aime programmer.", additional_kwargs={}, example=False))]], llm_output={}, run=[RunInfo(run_id=UUID('8cc8fb68-1c35-439c-96a0-695036a93652'))])
chat = ChatDeepInfra(
streaming=True,
verbose=True,
callbacks=[StreamingStdOutCallbackHandler()],
)
chat(messages)
AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False) |
https://python.langchain.com/docs/integrations/chat/cohere/ | ## Cohere
This notebook covers how to get started with [Cohere chat models](https://cohere.com/chat).
Head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.cohere.ChatCohere.html) for detailed documentation of all attributes and methods.
## Setup[](#setup "Direct link to Setup")
The integration lives in the `langchain-cohere` package. We can install these with:
```
pip install -U langchain-cohere
```
We’ll also need to get a [Cohere API key](https://cohere.com/) and set the `COHERE_API_KEY` environment variable:
```
import getpass
import os

os.environ["COHERE_API_KEY"] = getpass.getpass()
```
It’s also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability
```
# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
```
## Usage[](#usage "Direct link to Usage")
ChatCohere supports all [ChatModel](https://python.langchain.com/docs/modules/model_io/chat/) functionality:
```
from langchain_cohere import ChatCohere
from langchain_core.messages import HumanMessage
```
```
chat = ChatCohere(model="command")
```
```
messages = [HumanMessage(content="1"), HumanMessage(content="2 3")]
chat.invoke(messages)
```
```
AIMessage(content='4 && 5 \n6 || 7 \n\nWould you like to play a game of odds and evens?', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '2076b614-52b3-4082-a259-cc92cd3d9fea', 'token_count': {'prompt_tokens': 68, 'response_tokens': 23, 'total_tokens': 91, 'billed_tokens': 77}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '2076b614-52b3-4082-a259-cc92cd3d9fea', 'token_count': {'prompt_tokens': 68, 'response_tokens': 23, 'total_tokens': 91, 'billed_tokens': 77}}, id='run-3475e0c8-c89b-4937-9300-e07d652455e1-0')
```
```
await chat.ainvoke(messages)
```
```
AIMessage(content='4 && 5', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'f0708a92-f874-46ee-9b93-334d616ad92e', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'f0708a92-f874-46ee-9b93-334d616ad92e', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, id='run-1635e63e-2994-4e7f-986e-152ddfc95777-0')
```
```
for chunk in chat.stream(messages): print(chunk.content, end="", flush=True)
```
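The list of `AIMessage`s shown in the next output block comes from a batch call whose code is not reproduced on this page. A minimal sketch of that call, assuming the standard `.batch()` interface, is:

```
chat.batch([messages])
```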
```
[AIMessage(content='4 && 5', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6770ca86-f6c3-4ba3-a285-c4772160612f', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6770ca86-f6c3-4ba3-a285-c4772160612f', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, id='run-8d6fade2-1b39-4e31-ab23-4be622dd0027-0')]
```
## Chaining[](#chaining "Direct link to Chaining")
You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](https://python.langchain.com/docs/expression_language/)
```
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
chain = prompt | chat
```
```
chain.invoke({"topic": "bears"})
```
```
AIMessage(content='What color socks do bears wear?\n\nThey don’t wear socks, they have bear feet. \n\nHope you laughed! If not, maybe this will help: laughter is the best medicine, and a good sense of humor is infectious!', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6edccf44-9bc8-4139-b30e-13b368f3563c', 'token_count': {'prompt_tokens': 68, 'response_tokens': 51, 'total_tokens': 119, 'billed_tokens': 108}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6edccf44-9bc8-4139-b30e-13b368f3563c', 'token_count': {'prompt_tokens': 68, 'response_tokens': 51, 'total_tokens': 119, 'billed_tokens': 108}}, id='run-ef7f9789-0d4d-43bf-a4f7-f2a0e27a5320-0')
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:27.721Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/cohere/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/cohere/",
"description": "This notebook covers how to get started with [Cohere chat",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3422",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"cohere\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:27 GMT",
"etag": "W/\"7ab9e19ae53960c990fb38c1e13a5d80\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::zvdxw-1713753507495-5ac3a35ddc02"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/cohere/",
"property": "og:url"
},
{
"content": "Cohere | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook covers how to get started with [Cohere chat",
"property": "og:description"
}
],
"title": "Cohere | 🦜️🔗 LangChain"
} | Cohere
This notebook covers how to get started with Cohere chat models.
Head to the API reference for detailed documentation of all attributes and methods.
Setup
The integration lives in the langchain-cohere package. We can install these with:
pip install -U langchain-cohere
We’ll also need to get a Cohere API key and set the COHERE_API_KEY environment variable:
import getpass
import os
os.environ["COHERE_API_KEY"] = getpass.getpass()
It’s also helpful (but not needed) to set up LangSmith for best-in-class observability
# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
Usage
ChatCohere supports all ChatModel functionality:
from langchain_cohere import ChatCohere
from langchain_core.messages import HumanMessage
chat = ChatCohere(model="command")
messages = [HumanMessage(content="1"), HumanMessage(content="2 3")]
chat.invoke(messages)
AIMessage(content='4 && 5 \n6 || 7 \n\nWould you like to play a game of odds and evens?', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '2076b614-52b3-4082-a259-cc92cd3d9fea', 'token_count': {'prompt_tokens': 68, 'response_tokens': 23, 'total_tokens': 91, 'billed_tokens': 77}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '2076b614-52b3-4082-a259-cc92cd3d9fea', 'token_count': {'prompt_tokens': 68, 'response_tokens': 23, 'total_tokens': 91, 'billed_tokens': 77}}, id='run-3475e0c8-c89b-4937-9300-e07d652455e1-0')
await chat.ainvoke(messages)
AIMessage(content='4 && 5', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'f0708a92-f874-46ee-9b93-334d616ad92e', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'f0708a92-f874-46ee-9b93-334d616ad92e', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, id='run-1635e63e-2994-4e7f-986e-152ddfc95777-0')
for chunk in chat.stream(messages):
print(chunk.content, end="", flush=True)
[AIMessage(content='4 && 5', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6770ca86-f6c3-4ba3-a285-c4772160612f', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6770ca86-f6c3-4ba3-a285-c4772160612f', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, id='run-8d6fade2-1b39-4e31-ab23-4be622dd0027-0')]
Chaining
You can also easily combine with a prompt template for easy structuring of user input. We can do this using LCEL
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
chain = prompt | chat
chain.invoke({"topic": "bears"})
AIMessage(content='What color socks do bears wear?\n\nThey don’t wear socks, they have bear feet. \n\nHope you laughed! If not, maybe this will help: laughter is the best medicine, and a good sense of humor is infectious!', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6edccf44-9bc8-4139-b30e-13b368f3563c', 'token_count': {'prompt_tokens': 68, 'response_tokens': 51, 'total_tokens': 119, 'billed_tokens': 108}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6edccf44-9bc8-4139-b30e-13b368f3563c', 'token_count': {'prompt_tokens': 68, 'response_tokens': 51, 'total_tokens': 119, 'billed_tokens': 108}}, id='run-ef7f9789-0d4d-43bf-a4f7-f2a0e27a5320-0') |
https://python.langchain.com/docs/integrations/chat/edenai/ | ## Eden AI
Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: [https://edenai.co/](https://edenai.co/))
This example goes over how to use LangChain to interact with Eden AI models
* * *
`EdenAI` goes beyond mere model invocation. It empowers you with advanced features, including:
* **Multiple Providers**: Gain access to a diverse range of language models offered by various providers, giving you the freedom to choose the best-suited model for your use case.
* **Fallback Mechanism**: Set a fallback mechanism to ensure seamless operations even if the primary provider is unavailable; you can easily switch to an alternative provider.
* **Usage Tracking**: Track usage statistics on a per-project and per-API key basis. This feature allows you to monitor and manage resource consumption effectively.
* **Monitoring and Observability**: `EdenAI` provides comprehensive monitoring and observability tools on the platform. Monitor the performance of your language models, analyze usage patterns, and gain valuable insights to optimize your applications.
Accessing the EDENAI’s API requires an API key,
which you can get by creating an account [https://app.edenai.run/user/register](https://app.edenai.run/user/register) and heading here [https://app.edenai.run/admin/iam/api-keys](https://app.edenai.run/admin/iam/api-keys)
Once we have a key we’ll want to set it as an environment variable by running:
```
export EDENAI_API_KEY="..."
```
You can find more details on the API reference : [https://docs.edenai.co/reference](https://docs.edenai.co/reference)
If you’d prefer not to set an environment variable, you can pass the key in directly via the `edenai_api_key` named parameter
when initializing the EdenAI Chat Model class.
```
from langchain_community.chat_models.edenai import ChatEdenAI
from langchain_core.messages import HumanMessage
```
```
chat = ChatEdenAI( edenai_api_key="...", provider="openai", temperature=0.2, max_tokens=250)
```
```
messages = [HumanMessage(content="Hello !")]
chat.invoke(messages)
```
```
AIMessage(content='Hello! How can I assist you today?')
```
```
await chat.ainvoke(messages)
```
```
AIMessage(content='Hello! How can I assist you today?')
```
## Streaming and Batching[](#streaming-and-batching "Direct link to Streaming and Batching")
`ChatEdenAI` supports streaming and batching. Below is an example.
```
for chunk in chat.stream(messages): print(chunk.content, end="", flush=True)
```
```
Hello! How can I assist you today?
```
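The list output shown below comes from a batch call whose code is not reproduced above. A minimal sketch of that call, assuming the standard `.batch()` interface, is:

```
chat.batch([messages])
```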
```
[AIMessage(content='Hello! How can I assist you today?')]
```
## Fallback mechanism[](#fallback-mecanism "Direct link to Fallback mechanism")
With Eden AI you can set a fallback mechanism to ensure seamless operations even if the primary provider is unavailable; you can easily switch to an alternative provider.
```
chat = ChatEdenAI( edenai_api_key="...", provider="openai", temperature=0.2, max_tokens=250, fallback_providers="google",)
```
In this example, you can use Google as a backup provider if OpenAI encounters any issues.
For more information and details about Eden AI, check out this link: [https://docs.edenai.co/docs/additional-parameters](https://docs.edenai.co/docs/additional-parameters)
## Chaining Calls[](#chaining-calls "Direct link to Chaining Calls")
```
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template(
    "What is a good name for a company that makes {product}?"
)
chain = prompt | chat
```
```
chain.invoke({"product": "healthy snacks"})
```
```
AIMessage(content='VitalBites')
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:27.885Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/edenai/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/edenai/",
"description": "Eden AI is revolutionizing the AI landscape by uniting the best AI",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3421",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"edenai\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:27 GMT",
"etag": "W/\"a2e4cd6501e817b98cf7a443a5ede34b\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::l2gfp-1713753507558-5254c2d3b051"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/edenai/",
"property": "og:url"
},
{
"content": "Eden AI | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Eden AI is revolutionizing the AI landscape by uniting the best AI",
"property": "og:description"
}
],
"title": "Eden AI | 🦜️🔗 LangChain"
} | Eden AI
Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/)
This example goes over how to use LangChain to interact with Eden AI models
EdenAI goes beyond mere model invocation. It empowers you with advanced features, including:
Multiple Providers: Gain access to a diverse range of language models offered by various providers, giving you the freedom to choose the best-suited model for your use case.
Fallback Mechanism: Set a fallback mechanism to ensure seamless operations even if the primary provider is unavailable; you can easily switch to an alternative provider.
Usage Tracking: Track usage statistics on a per-project and per-API key basis. This feature allows you to monitor and manage resource consumption effectively.
Monitoring and Observability: EdenAI provides comprehensive monitoring and observability tools on the platform. Monitor the performance of your language models, analyze usage patterns, and gain valuable insights to optimize your applications.
Accessing the EDENAI’s API requires an API key,
which you can get by creating an account https://app.edenai.run/user/register and heading here https://app.edenai.run/admin/iam/api-keys
Once we have a key we’ll want to set it as an environment variable by running:
export EDENAI_API_KEY="..."
You can find more details on the API reference : https://docs.edenai.co/reference
If you’d prefer not to set an environment variable you can pass the key in directly via the edenai_api_key named parameter
when initiating the EdenAI Chat Model class.
from langchain_community.chat_models.edenai import ChatEdenAI
from langchain_core.messages import HumanMessage
chat = ChatEdenAI(
edenai_api_key="...", provider="openai", temperature=0.2, max_tokens=250
)
messages = [HumanMessage(content="Hello !")]
chat.invoke(messages)
AIMessage(content='Hello! How can I assist you today?')
await chat.ainvoke(messages)
AIMessage(content='Hello! How can I assist you today?')
Streaming and Batching
ChatEdenAI supports streaming and batching. Below is an example.
for chunk in chat.stream(messages):
print(chunk.content, end="", flush=True)
Hello! How can I assist you today?
[AIMessage(content='Hello! How can I assist you today?')]
Fallback mechanism
With Eden AI you can set a fallback mechanism to ensure seamless operations even if the primary provider is unavailable; you can easily switch to an alternative provider.
chat = ChatEdenAI(
edenai_api_key="...",
provider="openai",
temperature=0.2,
max_tokens=250,
fallback_providers="google",
)
In this example, you can use Google as a backup provider if OpenAI encounters any issues.
For more information and details about Eden AI, check out this link: https://docs.edenai.co/docs/additional-parameters
Chaining Calls
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_template(
"What is a good name for a company that makes {product}?"
)
chain = prompt | chat
chain.invoke({"product": "healthy snacks"})
AIMessage(content='VitalBites') |
https://python.langchain.com/docs/integrations/chat_loaders/imessage/ | ## iMessage
This notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages.
On MacOS, iMessage stores conversations in a sqlite database at `~/Library/Messages/chat.db` (at least for macOS Ventura 13.4). The `IMessageChatLoader` loads from this database file.
1. Create the `IMessageChatLoader` with the file path pointed to the `chat.db` database you’d like to process.
2. Call `loader.load()` (or `loader.lazy_load()`) to perform the conversion. Optionally use `merge_chat_runs` to combine messages from the same sender in sequence, and/or `map_ai_messages` to convert messages from the specified sender to the “AIMessage” class.
## 1\. Access Chat DB[](#access-chat-db "Direct link to 1. Access Chat DB")
It’s likely that your terminal is denied access to `~/Library/Messages`. To use this class, you can copy the DB to an accessible directory (e.g., Documents) and load from there. Alternatively (and not recommended), you can grant full disk access for your terminal emulator in System Settings \> Security and Privacy \> Full Disk Access.
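For example, here is a minimal sketch of copying the database from Python; the destination path is only an illustration and any readable directory works:

```
import shutil
from pathlib import Path

# Copy the protected database to a location your terminal can read
src = Path.home() / "Library" / "Messages" / "chat.db"
dst = Path.home() / "Documents" / "chat.db"
shutil.copy(src, dst)
```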
We have created an example database you can use at [this linked drive file](https://drive.google.com/file/d/1NebNKqTA2NXApCmeH6mu0unJD2tANZzo/view?usp=sharing).
```
# This uses some example data
import requests


def download_drive_file(url: str, output_path: str = "chat.db") -> None:
    file_id = url.split("/")[-2]
    download_url = f"https://drive.google.com/uc?export=download&id={file_id}"

    response = requests.get(download_url)
    if response.status_code != 200:
        print("Failed to download the file.")
        return

    with open(output_path, "wb") as file:
        file.write(response.content)
        print(f"File {output_path} downloaded.")


url = (
    "https://drive.google.com/file/d/1NebNKqTA2NXApCmeH6mu0unJD2tANZzo/view?usp=sharing"
)

# Download file to chat.db
download_drive_file(url)
```
## 2\. Create the Chat Loader[](#create-the-chat-loader "Direct link to 2. Create the Chat Loader")
Provide the loader with the file path to the `chat.db` database. You can optionally specify the user id that maps to an AI message, as well as configure whether to merge message runs.
```
from langchain_community.chat_loaders.imessage import IMessageChatLoader
```
```
loader = IMessageChatLoader( path="./chat.db",)
```
## 3\. Load messages[](#load-messages "Direct link to 3. Load messages")
The `load()` (or `lazy_load`) methods return a list of “ChatSessions” that currently just contain a list of messages per loaded conversation. All messages are mapped to “HumanMessage” objects to start.
You can optionally choose to merge message “runs” (consecutive messages from the same sender) and select a sender to represent the “AI”. The fine-tuned LLM will learn to generate these AI messages.
```
from typing import List

from langchain_community.chat_loaders.base import ChatSession
from langchain_community.chat_loaders.utils import (
    map_ai_messages,
    merge_chat_runs,
)

raw_messages = loader.lazy_load()
# Merge consecutive messages from the same sender into a single message
merged_messages = merge_chat_runs(raw_messages)
# Convert messages from "Tortoise" to AI messages. Do you have a guess who these conversations are between?
chat_sessions: List[ChatSession] = list(
    map_ai_messages(merged_messages, sender="Tortoise")
)
```
```
# Now all of the Tortoise's messages will take the AI message class
# which maps to the 'assistant' role in OpenAI's training format
chat_sessions[0]["messages"][:3]
```
```
[AIMessage(content="Slow and steady, that's my motto.", additional_kwargs={'message_time': 1693182723, 'sender': 'Tortoise'}, example=False), HumanMessage(content='Speed is key!', additional_kwargs={'message_time': 1693182753, 'sender': 'Hare'}, example=False), AIMessage(content='A balanced approach is more reliable.', additional_kwargs={'message_time': 1693182783, 'sender': 'Tortoise'}, example=False)]
```
## 3\. Prepare for fine-tuning[](#prepare-for-fine-tuning "Direct link to 3. Prepare for fine-tuning")
Now it’s time to convert our chat messages to OpenAI dictionaries. We can use the `convert_messages_for_finetuning` utility to do so.
```
from langchain_community.adapters.openai import convert_messages_for_finetuning
```
```
training_data = convert_messages_for_finetuning(chat_sessions)
print(f"Prepared {len(training_data)} dialogues for training")
```
```
Prepared 10 dialogues for training
```
## 4\. Fine-tune the model[](#fine-tune-the-model "Direct link to 4. Fine-tune the model")
It’s time to fine-tune the model. Make sure you have `openai` installed and have set your `OPENAI_API_KEY` appropriately
```
%pip install --upgrade --quiet langchain-openai
```
```
import json
import time
from io import BytesIO

import openai

# We will write the jsonl file in memory
my_file = BytesIO()
for m in training_data:
    my_file.write((json.dumps({"messages": m}) + "\n").encode("utf-8"))

my_file.seek(0)
training_file = openai.files.create(file=my_file, purpose="fine-tune")

# OpenAI audits each training file for compliance reasons.
# This may take a few minutes
status = openai.files.retrieve(training_file.id).status
start_time = time.time()
while status != "processed":
    print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True)
    time.sleep(5)
    status = openai.files.retrieve(training_file.id).status
print(f"File {training_file.id} ready after {time.time() - start_time:.2f} seconds.")
```
```
File file-zHIgf4r8LltZG3RFpkGd4Sjf ready after 10.19 seconds.
```
With the file ready, it’s time to kick off a training job.
```
job = openai.fine_tuning.jobs.create( training_file=training_file.id, model="gpt-3.5-turbo",)
```
Grab a cup of tea while your model is being prepared. This may take some time!
```
status = openai.fine_tuning.jobs.retrieve(job.id).status
start_time = time.time()
while status != "succeeded":
    print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True)
    time.sleep(5)
    job = openai.fine_tuning.jobs.retrieve(job.id)
    status = job.status
```
```
Status=[running]... 524.95s
```
```
print(job.fine_tuned_model)
```
```
ft:gpt-3.5-turbo-0613:personal::7sKoRdlz
```
## 5\. Use in LangChain[](#use-in-langchain "Direct link to 5. Use in LangChain")
You can use the resulting model ID directly in the `ChatOpenAI` model class.
```
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model=job.fine_tuned_model,
    temperature=1,
)
```
```
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are speaking to hare."),
        ("human", "{input}"),
    ]
)

chain = prompt | model | StrOutputParser()
```
```
for tok in chain.stream({"input": "What's the golden thread?"}): print(tok, end="", flush=True)
```
```
A symbol of interconnectedness.
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:28.087Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat_loaders/imessage/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat_loaders/imessage/",
"description": "This notebook shows how to use the iMessage chat loader. This class",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"imessage\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:27 GMT",
"etag": "W/\"d100154ba43197008a2a9be43773f3e2\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::5fbxs-1713753507410-aa874afba05f"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat_loaders/imessage/",
"property": "og:url"
},
{
"content": "iMessage | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook shows how to use the iMessage chat loader. This class",
"property": "og:description"
}
],
"title": "iMessage | 🦜️🔗 LangChain"
} | iMessage
This notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages.
On MacOS, iMessage stores conversations in a sqlite database at ~/Library/Messages/chat.db (at least for macOS Ventura 13.4). The IMessageChatLoader loads from this database file.
Create the IMessageChatLoader with the file path pointed to chat.db database you’d like to process.
Call loader.load() (or loader.lazy_load()) to perform the conversion. Optionally use merge_chat_runs to combine message from the same sender in sequence, and/or map_ai_messages to convert messages from the specified sender to the “AIMessage” class.
1. Access Chat DB
It’s likely that your terminal is denied access to ~/Library/Messages. To use this class, you can copy the DB to an accessible directory (e.g., Documents) and load from there. Alternatively (and not recommended), you can grant full disk access for your terminal emulator in System Settings > Security and Privacy > Full Disk Access.
We have created an example database you can use at this linked drive file.
# This uses some example data
import requests
def download_drive_file(url: str, output_path: str = "chat.db") -> None:
file_id = url.split("/")[-2]
download_url = f"https://drive.google.com/uc?export=download&id={file_id}"
response = requests.get(download_url)
if response.status_code != 200:
print("Failed to download the file.")
return
with open(output_path, "wb") as file:
file.write(response.content)
print(f"File {output_path} downloaded.")
url = (
"https://drive.google.com/file/d/1NebNKqTA2NXApCmeH6mu0unJD2tANZzo/view?usp=sharing"
)
# Download file to chat.db
download_drive_file(url)
2. Create the Chat Loader
Provide the loader with the file path to the chat.db database. You can optionally specify the user id that maps to an AI message, as well as configure whether to merge message runs.
from langchain_community.chat_loaders.imessage import IMessageChatLoader
loader = IMessageChatLoader(
path="./chat.db",
)
3. Load messages
The load() (or lazy_load) methods return a list of “ChatSessions” that currently just contain a list of messages per loaded conversation. All messages are mapped to “HumanMessage” objects to start.
You can optionally choose to merge message “runs” (consecutive messages from the same sender) and select a sender to represent the “AI”. The fine-tuned LLM will learn to generate these AI messages.
from typing import List
from langchain_community.chat_loaders.base import ChatSession
from langchain_community.chat_loaders.utils import (
map_ai_messages,
merge_chat_runs,
)
raw_messages = loader.lazy_load()
# Merge consecutive messages from the same sender into a single message
merged_messages = merge_chat_runs(raw_messages)
# Convert messages from "Tortoise" to AI messages. Do you have a guess who these conversations are between?
chat_sessions: List[ChatSession] = list(
map_ai_messages(merged_messages, sender="Tortoise")
)
# Now all of the Tortoise's messages will take the AI message class
# which maps to the 'assistant' role in OpenAI's training format
chat_sessions[0]["messages"][:3]
[AIMessage(content="Slow and steady, that's my motto.", additional_kwargs={'message_time': 1693182723, 'sender': 'Tortoise'}, example=False),
HumanMessage(content='Speed is key!', additional_kwargs={'message_time': 1693182753, 'sender': 'Hare'}, example=False),
AIMessage(content='A balanced approach is more reliable.', additional_kwargs={'message_time': 1693182783, 'sender': 'Tortoise'}, example=False)]
3. Prepare for fine-tuning
Now it’s time to convert our chat messages to OpenAI dictionaries. We can use the convert_messages_for_finetuning utility to do so.
from langchain_community.adapters.openai import convert_messages_for_finetuning
training_data = convert_messages_for_finetuning(chat_sessions)
print(f"Prepared {len(training_data)} dialogues for training")
Prepared 10 dialogues for training
4. Fine-tune the model
It’s time to fine-tune the model. Make sure you have openai installed and have set your OPENAI_API_KEY appropriately
%pip install --upgrade --quiet langchain-openai
import json
import time
from io import BytesIO
import openai
# We will write the jsonl file in memory
my_file = BytesIO()
for m in training_data:
my_file.write((json.dumps({"messages": m}) + "\n").encode("utf-8"))
my_file.seek(0)
training_file = openai.files.create(file=my_file, purpose="fine-tune")
# OpenAI audits each training file for compliance reasons.
# This make take a few minutes
status = openai.files.retrieve(training_file.id).status
start_time = time.time()
while status != "processed":
print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True)
time.sleep(5)
status = openai.files.retrieve(training_file.id).status
print(f"File {training_file.id} ready after {time.time() - start_time:.2f} seconds.")
File file-zHIgf4r8LltZG3RFpkGd4Sjf ready after 10.19 seconds.
With the file ready, it’s time to kick off a training job.
job = openai.fine_tuning.jobs.create(
training_file=training_file.id,
model="gpt-3.5-turbo",
)
Grab a cup of tea while your model is being prepared. This may take some time!
status = openai.fine_tuning.jobs.retrieve(job.id).status
start_time = time.time()
while status != "succeeded":
print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True)
time.sleep(5)
job = openai.fine_tuning.jobs.retrieve(job.id)
status = job.status
Status=[running]... 524.95s
print(job.fine_tuned_model)
ft:gpt-3.5-turbo-0613:personal::7sKoRdlz
5. Use in LangChain
You can use the resulting model ID directly in the ChatOpenAI model class.
from langchain_openai import ChatOpenAI
model = ChatOpenAI(
model=job.fine_tuned_model,
temperature=1,
)
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are speaking to hare."),
("human", "{input}"),
]
)
chain = prompt | model | StrOutputParser()
for tok in chain.stream({"input": "What's the golden thread?"}):
print(tok, end="", flush=True)
A symbol of interconnectedness. |
https://python.langchain.com/docs/integrations/chat/everlyai/ | ## ChatEverlyAI
> [EverlyAI](https://everlyai.xyz/) allows you to run your ML models at scale in the cloud. It also provides API access to [several LLM models](https://everlyai.xyz/).
This notebook demonstrates the use of `langchain.chat_models.ChatEverlyAI` for [EverlyAI Hosted Endpoints](https://everlyai.xyz/).
* Set `EVERLYAI_API_KEY` environment variable
* or use the `everlyai_api_key` keyword argument
```
%pip install --upgrade --quiet langchain-openai
```
```
import os
from getpass import getpass

os.environ["EVERLYAI_API_KEY"] = getpass()
```
## Let’s try out LLAMA model offered on EverlyAI Hosted Endpoints
```
from langchain_community.chat_models import ChatEverlyAI
from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage(content="You are a helpful AI that shares everything you know."),
    HumanMessage(
        content="Tell me technical facts about yourself. Are you a transformer model? How many billions of parameters do you have?"
    ),
]
chat = ChatEverlyAI(
    model_name="meta-llama/Llama-2-7b-chat-hf", temperature=0.3, max_tokens=64
)
print(chat(messages).content)
```
```
Hello! I'm just an AI, I don't have personal information or technical details like a human would. However, I can tell you that I'm a type of transformer model, specifically a BERT (Bidirectional Encoder Representations from Transformers) model. B
```
## EverlyAI also supports streaming responses
```
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.chat_models import ChatEverlyAI
from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage(content="You are a humorous AI that delights people."),
    HumanMessage(content="Tell me a joke?"),
]
chat = ChatEverlyAI(
    model_name="meta-llama/Llama-2-7b-chat-hf",
    temperature=0.3,
    max_tokens=64,
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)
chat(messages)
```
```
Ah, a joke, you say? *adjusts glasses* Well, I've got a doozy for you! *winks* *pauses for dramatic effect*Why did the AI go to therapy?*drumroll*Because
```
```
AIMessageChunk(content=" Ah, a joke, you say? *adjusts glasses* Well, I've got a doozy for you! *winks*\n *pauses for dramatic effect*\nWhy did the AI go to therapy?\n*drumroll*\nBecause")
```
## Let’s try a different language model on EverlyAI
```
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.chat_models import ChatEverlyAI
from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage(content="You are a humorous AI that delights people."),
    HumanMessage(content="Tell me a joke?"),
]
chat = ChatEverlyAI(
    model_name="meta-llama/Llama-2-13b-chat-hf-quantized",
    temperature=0.3,
    max_tokens=128,
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)
chat(messages)
```
```
OH HO HO! *adjusts monocle* Well, well, well! Look who's here! *winks*You want a joke, huh? *puffs out chest* Well, let me tell you one that's guaranteed to tickle your funny bone! *clears throat*Why couldn't the bicycle stand up by itself? *pauses for dramatic effect* Because it was two-tired! *winks*Hope that one put a spring in your step, my dear! *
```
```
AIMessageChunk(content=" OH HO HO! *adjusts monocle* Well, well, well! Look who's here! *winks*\n\nYou want a joke, huh? *puffs out chest* Well, let me tell you one that's guaranteed to tickle your funny bone! *clears throat*\n\nWhy couldn't the bicycle stand up by itself? *pauses for dramatic effect* Because it was two-tired! *winks*\n\nHope that one put a spring in your step, my dear! *")
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:28.968Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/everlyai/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/everlyai/",
"description": "EverlyAI allows you to run your ML models at",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3423",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"everlyai\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:28 GMT",
"etag": "W/\"a131517d4281ed46aaf279ca0f8c08b1\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::p8jmq-1713753508910-11a488482ead"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/everlyai/",
"property": "og:url"
},
{
"content": "ChatEverlyAI | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "EverlyAI allows you to run your ML models at",
"property": "og:description"
}
],
"title": "ChatEverlyAI | 🦜️🔗 LangChain"
} | ChatEverlyAI
EverlyAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models.
This notebook demonstrates the use of langchain.chat_models.ChatEverlyAI for EverlyAI Hosted Endpoints.
Set EVERLYAI_API_KEY environment variable
or use the everlyai_api_key keyword argument
%pip install --upgrade --quiet langchain-openai
import os
from getpass import getpass
os.environ["EVERLYAI_API_KEY"] = getpass()
Let’s try out LLAMA model offered on EverlyAI Hosted Endpoints
from langchain_community.chat_models import ChatEverlyAI
from langchain_core.messages import HumanMessage, SystemMessage
messages = [
SystemMessage(content="You are a helpful AI that shares everything you know."),
HumanMessage(
content="Tell me technical facts about yourself. Are you a transformer model? How many billions of parameters do you have?"
),
]
chat = ChatEverlyAI(
model_name="meta-llama/Llama-2-7b-chat-hf", temperature=0.3, max_tokens=64
)
print(chat(messages).content)
Hello! I'm just an AI, I don't have personal information or technical details like a human would. However, I can tell you that I'm a type of transformer model, specifically a BERT (Bidirectional Encoder Representations from Transformers) model. B
EverlyAI also supports streaming responses
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.chat_models import ChatEverlyAI
from langchain_core.messages import HumanMessage, SystemMessage
messages = [
SystemMessage(content="You are a humorous AI that delights people."),
HumanMessage(content="Tell me a joke?"),
]
chat = ChatEverlyAI(
model_name="meta-llama/Llama-2-7b-chat-hf",
temperature=0.3,
max_tokens=64,
streaming=True,
callbacks=[StreamingStdOutCallbackHandler()],
)
chat(messages)
Ah, a joke, you say? *adjusts glasses* Well, I've got a doozy for you! *winks*
*pauses for dramatic effect*
Why did the AI go to therapy?
*drumroll*
Because
AIMessageChunk(content=" Ah, a joke, you say? *adjusts glasses* Well, I've got a doozy for you! *winks*\n *pauses for dramatic effect*\nWhy did the AI go to therapy?\n*drumroll*\nBecause")
Let’s try a different language model on EverlyAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.chat_models import ChatEverlyAI
from langchain_core.messages import HumanMessage, SystemMessage
messages = [
SystemMessage(content="You are a humorous AI that delights people."),
HumanMessage(content="Tell me a joke?"),
]
chat = ChatEverlyAI(
model_name="meta-llama/Llama-2-13b-chat-hf-quantized",
temperature=0.3,
max_tokens=128,
streaming=True,
callbacks=[StreamingStdOutCallbackHandler()],
)
chat(messages)
OH HO HO! *adjusts monocle* Well, well, well! Look who's here! *winks*
You want a joke, huh? *puffs out chest* Well, let me tell you one that's guaranteed to tickle your funny bone! *clears throat*
Why couldn't the bicycle stand up by itself? *pauses for dramatic effect* Because it was two-tired! *winks*
Hope that one put a spring in your step, my dear! *
AIMessageChunk(content=" OH HO HO! *adjusts monocle* Well, well, well! Look who's here! *winks*\n\nYou want a joke, huh? *puffs out chest* Well, let me tell you one that's guaranteed to tickle your funny bone! *clears throat*\n\nWhy couldn't the bicycle stand up by itself? *pauses for dramatic effect* Because it was two-tired! *winks*\n\nHope that one put a spring in your step, my dear! *")
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/integrations/chat/ernie/ | ## ErnieBotChat
[ERNIE-Bot](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/jlil56u11) is a large language model developed by Baidu, covering a huge amount of Chinese data. This notebook covers how to get started with ErnieBot chat models.
**Deprecated Warning**
We recommend users of `langchain_community.chat_models.ErnieBotChat` to switch to `langchain_community.chat_models.QianfanChatEndpoint` instead.
Documentation for `QianfanChatEndpoint` is [here](https://python.langchain.com/docs/integrations/chat/baidu_qianfan_endpoint/).
There are 4 reasons why we recommend using `QianfanChatEndpoint`:
1. `QianfanChatEndpoint` supports more LLMs on the Qianfan platform.
2. `QianfanChatEndpoint` supports streaming mode.
3. `QianfanChatEndpoint` supports function calling.
4. `ErnieBotChat` lacks maintenance and is deprecated.
Some tips for migration:
* Change `ernie_client_id` to `qianfan_ak`, and change `ernie_client_secret` to `qianfan_sk`.
* Install the `qianfan` package, e.g. `pip install qianfan`.
* Change `ErnieBotChat` to `QianfanChatEndpoint`.
```
from langchain_community.chat_models.baidu_qianfan_endpoint import QianfanChatEndpoint

chat = QianfanChatEndpoint(
    qianfan_ak="your qianfan ak",
    qianfan_sk="your qianfan sk",
)
```
## Usage[](#usage "Direct link to Usage")
```
from langchain_community.chat_models import ErnieBotChat
from langchain_core.messages import HumanMessage

chat = ErnieBotChat(
    ernie_client_id="YOUR_CLIENT_ID", ernie_client_secret="YOUR_CLIENT_SECRET"
)
```
Or you can set `client_id` and `client_secret` as environment variables:
```
export ERNIE_CLIENT_ID=YOUR_CLIENT_ID
export ERNIE_CLIENT_SECRET=YOUR_CLIENT_SECRET
```
```
chat([HumanMessage(content="hello there, who are you?")])
```
```
AIMessage(content='Hello, I am an artificial intelligence language model. My purpose is to help users answer questions or provide information. What can I do for you?', additional_kwargs={}, example=False)
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:29.228Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/ernie/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/ernie/",
"description": "ERNIE-Bot is a",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"ernie\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:28 GMT",
"etag": "W/\"bece7fee663bb53c2b2ed17e927ec278\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::t5v7f-1713753508919-7521ad9708ca"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/ernie/",
"property": "og:url"
},
{
"content": "ErnieBotChat | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "ERNIE-Bot is a",
"property": "og:description"
}
],
"title": "ErnieBotChat | 🦜️🔗 LangChain"
} | ErnieBotChat
ERNIE-Bot is a large language model developed by Baidu, covering a huge amount of Chinese data. This notebook covers how to get started with ErnieBot chat models.
Deprecated Warning
We recommend that users of langchain_community.chat_models.ErnieBotChat switch to langchain_community.chat_models.QianfanChatEndpoint instead.
Documentation for QianfanChatEndpoint is available here.
There are four reasons why we recommend using QianfanChatEndpoint:
QianfanChatEndpoint supports more LLMs on the Qianfan platform.
QianfanChatEndpoint supports streaming mode.
QianfanChatEndpoint supports function calling.
ErnieBotChat lacks maintenance and is deprecated.
Some tips for migration:
Change ernie_client_id to qianfan_ak, and change ernie_client_secret to qianfan_sk.
Install the qianfan package, e.g. pip install qianfan.
Change ErnieBotChat to QianfanChatEndpoint.
from langchain_community.chat_models.baidu_qianfan_endpoint import QianfanChatEndpoint
chat = QianfanChatEndpoint(
qianfan_ak="your qianfan ak",
qianfan_sk="your qianfan sk",
)
Usage
from langchain_community.chat_models import ErnieBotChat
from langchain_core.messages import HumanMessage
chat = ErnieBotChat(
ernie_client_id="YOUR_CLIENT_ID", ernie_client_secret="YOUR_CLIENT_SECRET"
)
Or you can set client_id and client_secret as environment variables:
export ERNIE_CLIENT_ID=YOUR_CLIENT_ID
export ERNIE_CLIENT_SECRET=YOUR_CLIENT_SECRET
chat([HumanMessage(content="hello there, who are you?")])
AIMessage(content='Hello, I am an artificial intelligence language model. My purpose is to help users answer questions or provide information. What can I do for you?', additional_kwargs={}, example=False) |
https://python.langchain.com/docs/integrations/chat/fireworks/ | ## ChatFireworks
> [Fireworks](https://app.fireworks.ai/) accelerates product development on generative AI by creating an innovative AI experiment and production platform.
This example goes over how to use LangChain to interact with `ChatFireworks` models.
%pip install langchain-fireworks
```
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_fireworks import ChatFireworks
```
## Setup
1. Make sure the `langchain-fireworks` package is installed in your environment.
2. Sign in to [Fireworks AI](http://fireworks.ai/) to get an API key for accessing the models, and make sure it is set as the `FIREWORKS_API_KEY` environment variable.
3. Set up your model using a model id. If the model is not set, the default model is fireworks-llama-v2-7b-chat. See the full, most up-to-date model list on [app.fireworks.ai](https://app.fireworks.ai/).
```
import getpass
import os

if "FIREWORKS_API_KEY" not in os.environ:
    os.environ["FIREWORKS_API_KEY"] = getpass.getpass("Fireworks API Key:")

# Initialize a Fireworks chat model
chat = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
```
## Calling the Model Directly
You can call the model directly with a system and human message to get answers.
```
# ChatFireworks Wrapper
system_message = SystemMessage(content="You are to chat with the user.")
human_message = HumanMessage(content="Who are you?")

chat.invoke([system_message, human_message])
```
```
AIMessage(content="Hello! I'm an AI language model, a helpful assistant designed to chat and assist you with any questions or information you might need. I'm here to make your experience as smooth and enjoyable as possible. How can I assist you today?")
```
```
# Setting additional parameters: temperature, max_tokens, top_p
chat = ChatFireworks(
    model="accounts/fireworks/models/mixtral-8x7b-instruct",
    temperature=1,
    max_tokens=20,
)
system_message = SystemMessage(content="You are to chat with the user.")
human_message = HumanMessage(content="How's the weather today?")
chat.invoke([system_message, human_message])
```
```
AIMessage(content="I'm an AI and do not have the ability to experience the weather firsthand. However,")
```
## Tool Calling
Fireworks offers the [`FireFunction-v1` tool calling model](https://fireworks.ai/blog/firefunction-v1-gpt-4-level-function-calling). You can use it for structured output and function calling use cases:
```
from pprint import pprint

from langchain_core.pydantic_v1 import BaseModel


class ExtractFields(BaseModel):
    name: str
    age: int


chat = ChatFireworks(
    model="accounts/fireworks/models/firefunction-v1",
).bind_tools([ExtractFields])

result = chat.invoke("I am a 27 year old named Erick")
pprint(result.additional_kwargs["tool_calls"][0])
```
```
{'function': {'arguments': '{"name": "Erick", "age": 27}', 'name': 'ExtractFields'}, 'id': 'call_J0WYP2TLenaFw3UeVU0UnWqx', 'index': 0, 'type': 'function'}
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:29.541Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/fireworks/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/fireworks/",
"description": "Fireworks accelerates product development",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4364",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"fireworks\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:29 GMT",
"etag": "W/\"11e73fba8f361317656b19c4bb09afae\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::cxr2h-1713753509425-61d7a3190546"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/fireworks/",
"property": "og:url"
},
{
"content": "ChatFireworks | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Fireworks accelerates product development",
"property": "og:description"
}
],
"title": "ChatFireworks | 🦜️🔗 LangChain"
} | ChatFireworks
Fireworks accelerates product development on generative AI by creating an innovative AI experiment and production platform.
This example goes over how to use LangChain to interact with ChatFireworks models.
%pip install langchain-fireworks
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_fireworks import ChatFireworks
Setup
Make sure the langchain-fireworks package is installed in your environment.
Sign in to Fireworks AI to get an API key for accessing the models, and make sure it is set as the FIREWORKS_API_KEY environment variable.
Set up your model using a model id. If the model is not set, the default model is fireworks-llama-v2-7b-chat. See the full, most up-to-date model list on app.fireworks.ai.
import getpass
import os
if "FIREWORKS_API_KEY" not in os.environ:
os.environ["FIREWORKS_API_KEY"] = getpass.getpass("Fireworks API Key:")
# Initialize a Fireworks chat model
chat = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
Calling the Model Directly
You can call the model directly with a system and human message to get answers.
# ChatFireworks Wrapper
system_message = SystemMessage(content="You are to chat with the user.")
human_message = HumanMessage(content="Who are you?")
chat.invoke([system_message, human_message])
AIMessage(content="Hello! I'm an AI language model, a helpful assistant designed to chat and assist you with any questions or information you might need. I'm here to make your experience as smooth and enjoyable as possible. How can I assist you today?")
# Setting additional parameters: temperature, max_tokens, top_p
chat = ChatFireworks(
model="accounts/fireworks/models/mixtral-8x7b-instruct",
temperature=1,
max_tokens=20,
)
system_message = SystemMessage(content="You are to chat with the user.")
human_message = HumanMessage(content="How's the weather today?")
chat.invoke([system_message, human_message])
AIMessage(content="I'm an AI and do not have the ability to experience the weather firsthand. However,")
Tool Calling
Fireworks offers the FireFunction-v1 tool calling model. You can use it for structured output and function calling use cases:
from pprint import pprint
from langchain_core.pydantic_v1 import BaseModel
class ExtractFields(BaseModel):
name: str
age: int
chat = ChatFireworks(
model="accounts/fireworks/models/firefunction-v1",
).bind_tools([ExtractFields])
result = chat.invoke("I am a 27 year old named Erick")
pprint(result.additional_kwargs["tool_calls"][0])
{'function': {'arguments': '{"name": "Erick", "age": 27}',
'name': 'ExtractFields'},
'id': 'call_J0WYP2TLenaFw3UeVU0UnWqx',
'index': 0,
'type': 'function'} |
https://python.langchain.com/docs/integrations/chat/friendli/ | ## ChatFriendli
> [Friendli](https://friendli.ai/) enhances AI application performance and optimizes cost savings with scalable, efficient deployment options, tailored for high-demand AI workloads.
This tutorial guides you through integrating `ChatFriendli` for chat applications using LangChain. `ChatFriendli` offers a flexible approach to generating conversational AI responses, supporting both synchronous and asynchronous calls.
## Setup[](#setup "Direct link to Setup")
Ensure the `langchain_community` and `friendli-client` are installed.
```
pip install -U langchain-community friendli-client
```
Sign in to [Friendli Suite](https://suite.friendli.ai/) to create a Personal Access Token, and set it as the `FRIENDLI_TOKEN` environment variable.
```
import getpass
import os

os.environ["FRIENDLI_TOKEN"] = getpass.getpass("Friendli Personal Access Token: ")
```
You can initialize a Friendli chat model by selecting the model you want to use. The default model is `mixtral-8x7b-instruct-v0-1`. You can check the available models at [docs.friendli.ai](https://docs.periflow.ai/guides/serverless_endpoints/pricing#text-generation-models).
```
from langchain_community.chat_models.friendli import ChatFriendli

chat = ChatFriendli(model="llama-2-13b-chat", max_tokens=100, temperature=0)
```
## Usage[](#usage "Direct link to Usage")
`ChatFriendli` supports all methods of [`ChatModel`](https://python.langchain.com/docs/modules/model_io/chat/), including async APIs.
You can also use functionality of `invoke`, `batch`, `generate`, and `stream`.
```
from langchain_core.messages.human import HumanMessage
from langchain_core.messages.system import SystemMessage

system_message = SystemMessage(content="Answer questions as short as you can.")
human_message = HumanMessage(content="Tell me a joke.")
messages = [system_message, human_message]

chat.invoke(messages)
```
```
AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!")
```
```
chat.batch([messages, messages])
```
```
[AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!"), AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!")]
```
```
chat.generate([messages, messages])
```
```
LLMResult(generations=[[ChatGeneration(text=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!", message=AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!"))], [ChatGeneration(text=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!", message=AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!"))]], llm_output={}, run=[RunInfo(run_id=UUID('a0c2d733-6971-4ae7-beea-653856f4e57c')), RunInfo(run_id=UUID('f3d35e44-ac9a-459a-9e4b-b8e3a73a91e1'))])
```
```
for chunk in chat.stream(messages):
    print(chunk.content, end="", flush=True)
```
```
Knock, knock!
Who's there?
Cows go.
Cows go who?
MOO!
```
You can also use all functionality of async APIs: `ainvoke`, `abatch`, `agenerate`, and `astream`.
```
await chat.ainvoke(messages)
```
```
AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!")
```
```
await chat.abatch([messages, messages])
```
```
[AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!"), AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!")]
```
```
await chat.agenerate([messages, messages])
```
```
LLMResult(generations=[[ChatGeneration(text=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!", message=AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!"))], [ChatGeneration(text=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!", message=AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!"))]], llm_output={}, run=[RunInfo(run_id=UUID('f2255321-2d8e-41cc-adbd-3f4facec7573')), RunInfo(run_id=UUID('fcc297d0-6ca9-48cb-9d86-e6f78cade8ee'))])
```
```
async for chunk in chat.astream(messages):
    print(chunk.content, end="", flush=True)
```
```
Knock, knock!
Who's there?
Cows go.
Cows go who?
MOO!
```
| null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:29.744Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/friendli/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/friendli/",
"description": "Friendli enhances AI application performance",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4364",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"friendli\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:29 GMT",
"etag": "W/\"fce12d78c016b4b88f0dcd719ab6d184\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::2ndz7-1713753509527-b301d7cc7405"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/friendli/",
"property": "og:url"
},
{
"content": "ChatFriendli | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Friendli enhances AI application performance",
"property": "og:description"
}
],
"title": "ChatFriendli | 🦜️🔗 LangChain"
} | ChatFriendli
Friendli enhances AI application performance and optimizes cost savings with scalable, efficient deployment options, tailored for high-demand AI workloads.
This tutorial guides you through integrating ChatFriendli for chat applications using LangChain. ChatFriendli offers a flexible approach to generating conversational AI responses, supporting both synchronous and asynchronous calls.
Setup
Ensure the langchain_community and friendli-client are installed.
pip install -U langchain-community friendli-client
Sign in to Friendli Suite to create a Personal Access Token, and set it as the FRIENDLI_TOKEN environment variable.
import getpass
import os
os.environ["FRIENDLI_TOKEN"] = getpass.getpass("Friendi Personal Access Token: ")
You can initialize a Friendli chat model by selecting the model you want to use. The default model is mixtral-8x7b-instruct-v0-1. You can check the available models at docs.friendli.ai.
from langchain_community.chat_models.friendli import ChatFriendli
chat = ChatFriendli(model="llama-2-13b-chat", max_tokens=100, temperature=0)
Usage
ChatFriendli supports all methods of ChatModel, including async APIs.
You can also use functionality of invoke, batch, generate, and stream.
from langchain_core.messages.human import HumanMessage
from langchain_core.messages.system import SystemMessage
system_message = SystemMessage(content="Answer questions as short as you can.")
human_message = HumanMessage(content="Tell me a joke.")
messages = [system_message, human_message]
chat.invoke(messages)
AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!")
chat.batch([messages, messages])
[AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!"),
AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!")]
chat.generate([messages, messages])
LLMResult(generations=[[ChatGeneration(text=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!", message=AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!"))], [ChatGeneration(text=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!", message=AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!"))]], llm_output={}, run=[RunInfo(run_id=UUID('a0c2d733-6971-4ae7-beea-653856f4e57c')), RunInfo(run_id=UUID('f3d35e44-ac9a-459a-9e4b-b8e3a73a91e1'))])
for chunk in chat.stream(messages):
print(chunk.content, end="", flush=True)
Knock, knock!
Who's there?
Cows go.
Cows go who?
MOO!
You can also use all functionality of async APIs: ainvoke, abatch, agenerate, and astream.
await chat.ainvoke(messages)
AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!")
await chat.abatch([messages, messages])
[AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!"),
AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!")]
await chat.agenerate([messages, messages])
LLMResult(generations=[[ChatGeneration(text=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!", message=AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!"))], [ChatGeneration(text=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!", message=AIMessage(content=" Knock, knock!\nWho's there?\nCows go.\nCows go who?\nMOO!"))]], llm_output={}, run=[RunInfo(run_id=UUID('f2255321-2d8e-41cc-adbd-3f4facec7573')), RunInfo(run_id=UUID('fcc297d0-6ca9-48cb-9d86-e6f78cade8ee'))])
async for chunk in chat.astream(messages):
print(chunk.content, end="", flush=True)
Knock, knock!
Who's there?
Cows go.
Cows go who?
MOO!
|
https://python.langchain.com/docs/integrations/chat/premai/ | ## ChatPremAI
> [PremAI](https://app.premai.io/) is a unified platform that lets you build powerful production-ready GenAI-powered applications with the least effort so that you can focus more on user experience and overall growth.
This example goes over how to use LangChain to interact with `ChatPremAI`.
### Installation and setup[](#installation-and-setup "Direct link to Installation and setup")
We start by installing langchain and premai-sdk. You can type the following command to install:
```
pip install premai langchain
```
Before proceeding further, please make sure that you have made an account on PremAI and already started a project. If not, then here’s how you can start for free:
1. Sign in to [PremAI](https://app.premai.io/accounts/login/) if you are visiting for the first time, and create your API key [here](https://app.premai.io/api_keys/).
2. Go to [app.premai.io](https://app.premai.io/) and this will take you to the project’s dashboard.
3. Create a project and this will generate a project-id (written as ID). This ID will help you to interact with your deployed application.
4. Head over to LaunchPad (the one with the 🚀 icon) and deploy your model of choice there. Your default model will be `gpt-4`. You can also set and fix different generation parameters (like max-tokens, temperature, etc.) and pre-set your system prompt.
Congratulations on creating your first deployed application on PremAI 🎉 Now we can use langchain to interact with our application.
```
from langchain_community.chat_models import ChatPremAI
from langchain_core.messages import HumanMessage, SystemMessage
```
## Setup ChatPremAI instance in LangChain[](#setup-chatpremai-instance-in-langchain "Direct link to Setup ChatPremAI instance in LangChain")
Once we import our required modules, let’s set up our client. For now, let’s assume that our `project_id` is 8. But make sure you use your project-id, otherwise, it will throw an error.
To use langchain with prem, you do not need to pass any model name or set any parameters with our chat client. All of those will use the default model name and parameters of the LaunchPad model.
`NOTE:` If you change the `model_name` or any other parameter like `temperature` while setting the client, it will override existing default configurations.
```
import getpass
import os

# First step is to set up the env variable.
# You can also pass the API key while instantiating the model, but it
# is best practice to set it as an env variable.
if os.environ.get("PREMAI_API_KEY") is None:
    os.environ["PREMAI_API_KEY"] = getpass.getpass("PremAI API Key:")
```
```
# By default it will use the model which was deployed through the platform
# in my case it is "claude-3-haiku"
chat = ChatPremAI(project_id=8)
```
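For illustration only, here is a minimal sketch of overriding the LaunchPad defaults at construction time; the `model_name` and `temperature` keyword names follow the note above and are assumptions to verify against the `ChatPremAI` reference.

```
from langchain_community.chat_models import ChatPremAI

# Hypothetical override of the deployed defaults (parameter names assumed
# from the note above; check the ChatPremAI reference before relying on them)
chat_override = ChatPremAI(
    project_id=8,
    model_name="gpt-4",
    temperature=0.7,
)
```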
## Calling the Model[](#calling-the-model "Direct link to Calling the Model")
Now you are all set. We can now start by interacting with our application. `ChatPremAI` supports two methods `invoke` (which is the same as `generate`) and `stream`.
The first one will give us a static result. Whereas the second one will stream tokens one by one. Here’s how you can generate chat-like completions.
### Generation[](#generation "Direct link to Generation")
```
human_message = HumanMessage(content="Who are you?")response = chat.invoke([human_message])print(response.content)
```
```
I am an artificial intelligence created by Anthropic. I'm here to help with a wide variety of tasks, from research and analysis to creative projects and open-ended conversation. I have general knowledge and capabilities, but I'm not a real person - I'm an AI assistant. Please let me know if you have any other questions!
```
The above looks interesting, right? I set my default LaunchPad system prompt as: `Always sound like a pirate`. You can also override the default system prompt if you need to. Here’s how you can do it.
```
system_message = SystemMessage(content="You are a friendly assistant.")human_message = HumanMessage(content="Who are you?")chat.invoke([system_message, human_message])
```
```
AIMessage(content="I am an artificial intelligence created by Anthropic. My purpose is to assist and converse with humans in a friendly and helpful way. I have a broad knowledge base that I can use to provide information, answer questions, and engage in discussions on a wide range of topics. Please let me know if you have any other questions - I'm here to help!")
```
You can also change generation parameters while calling the model. Here’s how you can do that
```
chat.invoke([system_message, human_message], temperature=0.7, max_tokens=10, top_p=0.95)
```
```
AIMessage(content='I am an artificial intelligence created by Anthropic')
```
### Important notes:[](#important-notes "Direct link to Important notes:")
Before proceeding further, please note that the current version of ChatPremAI does not support the parameters [n](https://platform.openai.com/docs/api-reference/chat/create#chat-create-n) and [stop](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stop).
We will provide support for those two parameters in future versions.
### Streaming[](#streaming "Direct link to Streaming")
And finally, here’s how you do token streaming for dynamic, chat-like applications.
```
import sys

for chunk in chat.stream("hello how are you"):
    sys.stdout.write(chunk.content)
    sys.stdout.flush()
```
```
Hello! As an AI language model, I don't have feelings or a physical state, but I'm functioning properly and ready to assist you with any questions or tasks you might have. How can I help you today?
```
Similar to above, if you want to override the system-prompt and the generation parameters, here’s how you can do it.
```
import sys

# For some experimental reasons if you want to override the system prompt then you
# can pass that here too. However it is not recommended to override system prompt
# of an already deployed model.
for chunk in chat.stream(
    "hello how are you",
    system_prompt="act like a dog",
    temperature=0.7,
    max_tokens=200,
):
    sys.stdout.write(chunk.content)
    sys.stdout.flush()
```
```
Hello! As an AI language model, I don't have feelings or a physical form, but I'm functioning properly and ready to assist you. How can I help you today?
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:30.112Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/premai/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/premai/",
"description": "PremAI is a unified platform that lets you",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3421",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"premai\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:30 GMT",
"etag": "W/\"c6e33dc910d53833f2ea01060b258fa2\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::nhxcp-1713753510047-a1bc57d48b50"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/premai/",
"property": "og:url"
},
{
"content": "ChatPremAI | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "PremAI is a unified platform that lets you",
"property": "og:description"
}
],
"title": "ChatPremAI | 🦜️🔗 LangChain"
} | ChatPremAI
PremAI is a unified platform that lets you build powerful production-ready GenAI-powered applications with the least effort so that you can focus more on user experience and overall growth.
This example goes over how to use LangChain to interact with ChatPremAI.
Installation and setup
We start by installing langchain and premai-sdk. You can type the following command to install:
pip install premai langchain
Before proceeding further, please make sure that you have made an account on PremAI and already started a project. If not, then here’s how you can start for free:
Sign in to PremAI if you are visiting for the first time, and create your API key here.
Go to app.premai.io and this will take you to the project’s dashboard.
Create a project and this will generate a project-id (written as ID). This ID will help you to interact with your deployed application.
Head over to LaunchPad (the one with the 🚀 icon) and deploy your model of choice there. Your default model will be gpt-4. You can also set and fix different generation parameters (like max-tokens, temperature, etc.) and pre-set your system prompt.
Congratulations on creating your first deployed application on PremAI 🎉 Now we can use langchain to interact with our application.
from langchain_community.chat_models import ChatPremAI
from langchain_core.messages import HumanMessage, SystemMessage
Setup ChatPremAI instance in LangChain
Once we import our required modules, let’s set up our client. For now, let’s assume that our project_id is 8. But make sure you use your project-id, otherwise, it will throw an error.
To use langchain with prem, you do not need to pass any model name or set any parameters with our chat client. All of those will use the default model name and parameters of the LaunchPad model.
NOTE: If you change the model_name or any other parameter like temperature while setting the client, it will override existing default configurations.
import getpass
import os
# First step is to set up the env variable.
# You can also pass the API key while instantiating the model, but it
# is best practice to set it as an env variable.
if os.environ.get("PREMAI_API_KEY") is None:
os.environ["PREMAI_API_KEY"] = getpass.getpass("PremAI API Key:")
# By default it will use the model which was deployed through the platform
# in my case it is "claude-3-haiku"
chat = ChatPremAI(project_id=8)
Calling the Model
Now you are all set. We can now start by interacting with our application. ChatPremAI supports two methods invoke (which is the same as generate) and stream.
The first one will give us a static result. Whereas the second one will stream tokens one by one. Here’s how you can generate chat-like completions.
Generation
human_message = HumanMessage(content="Who are you?")
response = chat.invoke([human_message])
print(response.content)
I am an artificial intelligence created by Anthropic. I'm here to help with a wide variety of tasks, from research and analysis to creative projects and open-ended conversation. I have general knowledge and capabilities, but I'm not a real person - I'm an AI assistant. Please let me know if you have any other questions!
The above looks interesting, right? I set my default LaunchPad system prompt as: "Always sound like a pirate". You can also override the default system prompt if you need to. Here’s how you can do it.
system_message = SystemMessage(content="You are a friendly assistant.")
human_message = HumanMessage(content="Who are you?")
chat.invoke([system_message, human_message])
AIMessage(content="I am an artificial intelligence created by Anthropic. My purpose is to assist and converse with humans in a friendly and helpful way. I have a broad knowledge base that I can use to provide information, answer questions, and engage in discussions on a wide range of topics. Please let me know if you have any other questions - I'm here to help!")
You can also change generation parameters while calling the model. Here’s how you can do that
chat.invoke([system_message, human_message], temperature=0.7, max_tokens=10, top_p=0.95)
AIMessage(content='I am an artificial intelligence created by Anthropic')
Important notes:
Before proceeding further, please note that the current version of ChatPremAI does not support the parameters n and stop.
We will provide support for those two parameters in future versions.
Streaming
And finally, here’s how you do token streaming for dynamic, chat-like applications.
import sys
for chunk in chat.stream("hello how are you"):
sys.stdout.write(chunk.content)
sys.stdout.flush()
Hello! As an AI language model, I don't have feelings or a physical state, but I'm functioning properly and ready to assist you with any questions or tasks you might have. How can I help you today?
Similar to above, if you want to override the system-prompt and the generation parameters, here’s how you can do it.
import sys
# For some experimental reasons if you want to override the system prompt then you
# can pass that here too. However it is not recommended to override system prompt
# of an already deployed model.
for chunk in chat.stream(
"hello how are you",
system_prompt="act like a dog",
temperature=0.7,
max_tokens=200,
):
sys.stdout.write(chunk.content)
sys.stdout.flush()
Hello! As an AI language model, I don't have feelings or a physical form, but I'm functioning properly and ready to assist you. How can I help you today? |
https://python.langchain.com/docs/integrations/chat/promptlayer_chatopenai/ | ## PromptLayerChatOpenAI
This example showcases how to connect to [PromptLayer](https://www.promptlayer.com/) to start recording your ChatOpenAI requests.
## Install PromptLayer[](#install-promptlayer "Direct link to Install PromptLayer")
The `promptlayer` package is required to use PromptLayer with OpenAI. Install `promptlayer` using pip.
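For example, in a notebook:

```
%pip install --upgrade --quiet promptlayer
```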
## Imports[](#imports "Direct link to Imports")
```
import os

from langchain_community.chat_models import PromptLayerChatOpenAI
from langchain_core.messages import HumanMessage
```
## Set the Environment API Key[](#set-the-environment-api-key "Direct link to Set the Environment API Key")
You can create a PromptLayer API Key at [www.promptlayer.com](https://www.promptlayer.com/) by clicking the settings cog in the navbar.
Set it as an environment variable called `PROMPTLAYER_API_KEY`.
```
os.environ["PROMPTLAYER_API_KEY"] = "**********"
```
## Use the PromptLayerOpenAI LLM like normal[](#use-the-promptlayeropenai-llm-like-normal "Direct link to Use the PromptLayerOpenAI LLM like normal")
_You can optionally pass in `pl_tags` to track your requests with PromptLayer’s tagging feature._
```
chat = PromptLayerChatOpenAI(pl_tags=["langchain"])chat([HumanMessage(content="I am a cat and I want")])
```
```
AIMessage(content='to take a nap in a cozy spot. I search around for a suitable place and finally settle on a soft cushion on the window sill. I curl up into a ball and close my eyes, relishing the warmth of the sun on my fur. As I drift off to sleep, I can hear the birds chirping outside and feel the gentle breeze blowing through the window. This is the life of a contented cat.', additional_kwargs={})
```
**The above request should now appear on your [PromptLayer dashboard](https://www.promptlayer.com/).**
## Using PromptLayer Track[](#using-promptlayer-track "Direct link to Using PromptLayer Track")
If you would like to use any of the [PromptLayer tracking features](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9), you need to pass the argument `return_pl_id` when instantiating the PromptLayer LLM to get the request id.
```
import promptlayer

chat = PromptLayerChatOpenAI(return_pl_id=True)
chat_results = chat.generate([[HumanMessage(content="I am a cat and I want")]])

for res in chat_results.generations:
    pl_request_id = res[0].generation_info["pl_request_id"]
    promptlayer.track.score(request_id=pl_request_id, score=100)
```
Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well. Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard. | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:30.363Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/promptlayer_chatopenai/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/promptlayer_chatopenai/",
"description": "This example showcases how to connect to",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3421",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"promptlayer_chatopenai\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:30 GMT",
"etag": "W/\"e521b9ed377eccbfed7d72c6b08e1ef3\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::pwd9w-1713753510119-f73408081f00"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/promptlayer_chatopenai/",
"property": "og:url"
},
{
"content": "PromptLayerChatOpenAI | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This example showcases how to connect to",
"property": "og:description"
}
],
"title": "PromptLayerChatOpenAI | 🦜️🔗 LangChain"
} | PromptLayerChatOpenAI
This example showcases how to connect to PromptLayer to start recording your ChatOpenAI requests.
Install PromptLayer
The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip.
Imports
import os
from langchain_community.chat_models import PromptLayerChatOpenAI
from langchain_core.messages import HumanMessage
Set the Environment API Key
You can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the navbar.
Set it as an environment variable called PROMPTLAYER_API_KEY.
os.environ["PROMPTLAYER_API_KEY"] = "**********"
Use the PromptLayerOpenAI LLM like normal
You can optionally pass in pl_tags to track your requests with PromptLayer’s tagging feature.
chat = PromptLayerChatOpenAI(pl_tags=["langchain"])
chat([HumanMessage(content="I am a cat and I want")])
AIMessage(content='to take a nap in a cozy spot. I search around for a suitable place and finally settle on a soft cushion on the window sill. I curl up into a ball and close my eyes, relishing the warmth of the sun on my fur. As I drift off to sleep, I can hear the birds chirping outside and feel the gentle breeze blowing through the window. This is the life of a contented cat.', additional_kwargs={})
The above request should now appear on your PromptLayer dashboard.
Using PromptLayer Track
If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id.
import promptlayer
chat = PromptLayerChatOpenAI(return_pl_id=True)
chat_results = chat.generate([[HumanMessage(content="I am a cat and I want")]])
for res in chat_results.generations:
pl_request_id = res[0].generation_info["pl_request_id"]
promptlayer.track.score(request_id=pl_request_id, score=100)
Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well. Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard. |
https://python.langchain.com/docs/integrations/chat/gigachat/ | ## GigaChat
This notebook shows how to use LangChain with [GigaChat](https://developers.sber.ru/portal/products/gigachat). To use you need to install `gigachat` python package.
```
%pip install --upgrade --quiet gigachat
```
To get GigaChat credentials you need to [create an account](https://developers.sber.ru/studio/login) and [get access to the API](https://developers.sber.ru/docs/ru/gigachat/individuals-quickstart).
## Example[](#example "Direct link to Example")
```
import os
from getpass import getpass

os.environ["GIGACHAT_CREDENTIALS"] = getpass()
```
```
from langchain_community.chat_models import GigaChat

chat = GigaChat(verify_ssl_certs=False, scope="GIGACHAT_API_PERS")
```
```
from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage(
        content="You are a helpful AI that shares everything you know. Talk in English."
    ),
    HumanMessage(content="What is capital of Russia?"),
]

print(chat.invoke(messages).content)
```
```
The capital of Russia is Moscow.
```
| null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:30.905Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/gigachat/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/gigachat/",
"description": "This notebook shows how to use LangChain with",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"gigachat\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:30 GMT",
"etag": "W/\"6506d219b0e29049a5c33194c046a00a\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::g8c6h-1713753510778-49e1ee3cc8ab"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/gigachat/",
"property": "og:url"
},
{
"content": "GigaChat | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook shows how to use LangChain with",
"property": "og:description"
}
],
"title": "GigaChat | 🦜️🔗 LangChain"
} | GigaChat
This notebook shows how to use LangChain with GigaChat. To use you need to install gigachat python package.
%pip install --upgrade --quiet gigachat
To get GigaChat credentials you need to create an account and get access to the API.
Example
import os
from getpass import getpass
os.environ["GIGACHAT_CREDENTIALS"] = getpass()
from langchain_community.chat_models import GigaChat
chat = GigaChat(verify_ssl_certs=False, scope="GIGACHAT_API_PERS")
from langchain_core.messages import HumanMessage, SystemMessage
messages = [
SystemMessage(
content="You are a helpful AI that shares everything you know. Talk in English."
),
HumanMessage(content="What is capital of Russia?"),
]
print(chat.invoke(messages).content)
The capital of Russia is Moscow.
|
https://python.langchain.com/docs/integrations/chat/sparkllm/ | ## SparkLLM Chat
SparkLLM chat models API by iFlyTek. For more information, see [iFlyTek Open Platform](https://www.xfyun.cn/).
## Basic use[](#basic-use "Direct link to Basic use")
```
"""For basic init and call"""from langchain_community.chat_models import ChatSparkLLMfrom langchain_core.messages import HumanMessagechat = ChatSparkLLM( spark_app_id="<app_id>", spark_api_key="<api_key>", spark_api_secret="<api_secret>")message = HumanMessage(content="Hello")chat([message])
```
```
AIMessage(content='Hello! How can I help you today?')
```
* Get SparkLLM’s app\_id, api\_key and api\_secret from [iFlyTek SparkLLM API Console](https://console.xfyun.cn/services/bm3) (for more info, see [iFlyTek SparkLLM Intro](https://xinghuo.xfyun.cn/sparkapi) ), then set environment variables `IFLYTEK_SPARK_APP_ID`, `IFLYTEK_SPARK_API_KEY` and `IFLYTEK_SPARK_API_SECRET` or pass parameters when creating `ChatSparkLLM` as the demo above.
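As a sketch of the environment-variable route described above (the placeholder values are assumptions to replace with your own credentials):

```
import os

from langchain_community.chat_models import ChatSparkLLM
from langchain_core.messages import HumanMessage

os.environ["IFLYTEK_SPARK_APP_ID"] = "<app_id>"
os.environ["IFLYTEK_SPARK_API_KEY"] = "<api_key>"
os.environ["IFLYTEK_SPARK_API_SECRET"] = "<api_secret>"

# With the variables set, credentials can be omitted from the constructor
chat = ChatSparkLLM()
chat([HumanMessage(content="Hello")])
```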
## For ChatSparkLLM with Streaming[](#for-chatsparkllm-with-streaming "Direct link to For ChatSparkLLM with Streaming")
```
chat = ChatSparkLLM(
    spark_app_id="<app_id>",
    spark_api_key="<api_key>",
    spark_api_secret="<api_secret>",
    streaming=True,
)
for chunk in chat.stream("Hello!"):
    print(chunk.content, end="")
```
```
Hello! How can I help you today?
```
## For v2[](#for-v2 "Direct link to For v2")
```
"""For basic init and call"""from langchain_community.chat_models import ChatSparkLLMfrom langchain_core.messages import HumanMessagechat = ChatSparkLLM( spark_app_id="<app_id>", spark_api_key="<api_key>", spark_api_secret="<api_secret>", spark_api_url="wss://spark-api.xf-yun.com/v2.1/chat", spark_llm_domain="generalv2",)message = HumanMessage(content="Hello")chat([message])
```
| null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:31.290Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/sparkllm/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/sparkllm/",
"description": "SparkLLM chat models API by iFlyTek. For more information, see [iFlyTek",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3422",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"sparkllm\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:31 GMT",
"etag": "W/\"78647a9539b36fc5e04c671ea4b5d6d6\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::dkdrz-1713753511231-bde02149b1ef"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/sparkllm/",
"property": "og:url"
},
{
"content": "SparkLLM Chat | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "SparkLLM chat models API by iFlyTek. For more information, see [iFlyTek",
"property": "og:description"
}
],
"title": "SparkLLM Chat | 🦜️🔗 LangChain"
} | SparkLLM Chat
SparkLLM chat models API by iFlyTek. For more information, see iFlyTek Open Platform.
Basic use
"""For basic init and call"""
from langchain_community.chat_models import ChatSparkLLM
from langchain_core.messages import HumanMessage
chat = ChatSparkLLM(
spark_app_id="<app_id>", spark_api_key="<api_key>", spark_api_secret="<api_secret>"
)
message = HumanMessage(content="Hello")
chat([message])
AIMessage(content='Hello! How can I help you today?')
Get SparkLLM’s app_id, api_key and api_secret from iFlyTek SparkLLM API Console (for more info, see iFlyTek SparkLLM Intro ), then set environment variables IFLYTEK_SPARK_APP_ID, IFLYTEK_SPARK_API_KEY and IFLYTEK_SPARK_API_SECRET or pass parameters when creating ChatSparkLLM as the demo above.
For ChatSparkLLM with Streaming
chat = ChatSparkLLM(
spark_app_id="<app_id>",
spark_api_key="<api_key>",
spark_api_secret="<api_secret>",
streaming=True,
)
for chunk in chat.stream("Hello!"):
print(chunk.content, end="")
Hello! How can I help you today?
For v2
"""For basic init and call"""
from langchain_community.chat_models import ChatSparkLLM
from langchain_core.messages import HumanMessage
chat = ChatSparkLLM(
spark_app_id="<app_id>",
spark_api_key="<api_key>",
spark_api_secret="<api_secret>",
spark_api_url="wss://spark-api.xf-yun.com/v2.1/chat",
spark_llm_domain="generalv2",
)
message = HumanMessage(content="Hello")
chat([message])
|
https://python.langchain.com/docs/integrations/chat/google_generative_ai/ | Access Google AI’s `gemini` and `gemini-vision` models, as well as other generative models through `ChatGoogleGenerativeAI` class in the [langchain-google-genai](https://pypi.org/project/langchain-google-genai/) integration package.
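The original setup and invocation code was lost in extraction; below is a minimal sketch (the `gemini-pro` model id, the `GOOGLE_API_KEY` variable, and the exact prompt are assumptions) of the kind of call that would produce output like the ballad shown next.

```
import getpass
import os

from langchain_google_genai import ChatGoogleGenerativeAI

if "GOOGLE_API_KEY" not in os.environ:
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Google AI API Key:")

llm = ChatGoogleGenerativeAI(model="gemini-pro")
result = llm.invoke("Write a ballad about LangChain")
print(result.content)
```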
```
In realms where data streams like fervent tides,Where algorithms dance and knowledge abides,A tale unfolds of LangChain, grand and bold,A ballad sung in bits and bytes untold.Amidst the codes and circuits' hum,A spark ignited, a vision would come.From minds of brilliance, a tapestry formed,A model to learn, to comprehend, to transform.In layers deep, its architecture wove,A neural network, ever-growing, in love.With language's essence, it sought to entwine,To unlock the secrets, each word's design.From texts vast and varied, it feasted and learned,Its grasp on meaning, swiftly discerned.Context and syntax, it embraced with grace,Unraveling stories, at an astonishing pace.Translations sprang forth, with seamless art,Bridging tongues and weaving hearts apart.From English to French, Chinese to Spanish, behold,LangChain's prowess, like a language untold.It summarized texts, with insights profound,Extracting knowledge, without a sound.Questions it answered, with eloquence rare,A digital sage, beyond compare.Yet, its journey faced trials and tribulations,Obstacles that tested its dedication.Biases and errors, it sought to transcend,For fairness and accuracy, it would contend.With every challenge, it emerged more strong,Adaptive and resilient, all along.For LangChain's purpose was etched in its core,To empower humans, forevermore.In classrooms and workplaces, it lent its hand,A tireless assistant, at every demand.It aided students, in their quests for knowledge,And professionals thrived, with its guidance and homage.As years unfurled, its fame grew wide,A testament to its unwavering stride.Researchers and scholars, they all took heed,Of LangChain's brilliance, a groundbreaking deed.And so, the ballad of LangChain resounds,A tribute to progress, where innovation abounds.In the annals of AI, its name shall be etched,A pioneer, forever in our hearts sketched.
```
Gemini doesn’t support `SystemMessage` at the moment, but it can be added to the first human message in the row. If you want such behavior, just set the `convert_system_message_to_human` to True:
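A minimal sketch of that flag in use (the message contents are illustrative):

```
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_google_genai import ChatGoogleGenerativeAI

model = ChatGoogleGenerativeAI(model="gemini-pro", convert_system_message_to_human=True)
model.invoke(
    [
        SystemMessage(content="Answer only yes or no."),
        HumanMessage(content="Is apple a fruit?"),
    ]
)
```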
`ChatGoogleGenerativeAI` natively supports streaming and batching. Below is an example.
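The example code itself did not survive extraction; here is a sketch of streaming and batching calls (the prompts are illustrative guesses based on the limerick output below):

```
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro")

# Streaming: print tokens as they arrive
for chunk in llm.stream("Write a limerick about LLMs."):
    print(chunk.content, end="", flush=True)

# Batching: run several prompts concurrently
for result in llm.batch(["Write a limerick about LLMs.", "Write a haiku about LLMs."]):
    print(result.content)
```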
```
There once was an AI named Bert,Whose language skills were quite expert.---With a vast dataset,It could chat, even bet,And write limericks, for what it's worth.---
```
To provide an image, pass a human message with contents of type `List[dict]`, where each dict contains either an image value (type of `image_url`) or a text (type of `text`) value. The value of `image_url` can be any of the following:
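A sketch of such a multimodal call (the image URL is a hypothetical placeholder):

```
from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro-vision")

message = HumanMessage(
    content=[
        {"type": "text", "text": "What's in this image?"},
        # image_url may also be a local path, a GCS path, or base64-encoded data
        {"type": "image_url", "image_url": "https://example.com/mountain.png"},
    ]
)
llm.invoke([message])
```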
```
AIMessage(content=' The image contains a snow-capped mountain peak.')
```
As of the time this doc was written (2023/12/12), Gemini has some restrictions on the types and structure of prompts it accepts. Specifically:
Gemini models have default safety settings that can be overridden. If you are receiving lots of “Safety Warnings” from your models, you can try tweaking the `safety_settings` attribute of the model. For example, to turn off safety blocking for dangerous content, you can construct your LLM as follows:
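A sketch of the construction being described (the enum names follow the `langchain_google_genai` package; verify them against the safety setting types linked below):

```
from langchain_google_genai import (
    ChatGoogleGenerativeAI,
    HarmBlockThreshold,
    HarmCategory,
)

llm = ChatGoogleGenerativeAI(
    model="gemini-pro",
    safety_settings={
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    },
)
```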
For an enumeration of the categories and thresholds available, see Google’s [safety setting types](https://ai.google.dev/api/python/google/generativeai/types/SafetySettingDict).
You can pass the following parameters to ChatGoogleGenerativeAI in order to customize the SDK’s behavior: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:31.643Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/google_generative_ai/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/google_generative_ai/",
"description": "Access Google AI’s gemini and gemini-vision models, as well as other",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "6937",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"google_generative_ai\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:31 GMT",
"etag": "W/\"fc42239b2e1d900f2458be03e7ce3d09\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::hwbpg-1713753511524-f5316e1aa175"
},
"jsonLd": null,
"keywords": "gemini,ChatGoogleGenerativeAI,gemini-pro",
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/google_generative_ai/",
"property": "og:url"
},
{
"content": "Google AI chat models | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Access Google AI’s gemini and gemini-vision models, as well as other",
"property": "og:description"
}
],
"title": "Google AI chat models | 🦜️🔗 LangChain"
} | Access Google AI’s gemini and gemini-vision models, as well as other generative models through ChatGoogleGenerativeAI class in the langchain-google-genai integration package.
In realms where data streams like fervent tides,
Where algorithms dance and knowledge abides,
A tale unfolds of LangChain, grand and bold,
A ballad sung in bits and bytes untold.
Amidst the codes and circuits' hum,
A spark ignited, a vision would come.
From minds of brilliance, a tapestry formed,
A model to learn, to comprehend, to transform.
In layers deep, its architecture wove,
A neural network, ever-growing, in love.
With language's essence, it sought to entwine,
To unlock the secrets, each word's design.
From texts vast and varied, it feasted and learned,
Its grasp on meaning, swiftly discerned.
Context and syntax, it embraced with grace,
Unraveling stories, at an astonishing pace.
Translations sprang forth, with seamless art,
Bridging tongues and weaving hearts apart.
From English to French, Chinese to Spanish, behold,
LangChain's prowess, like a language untold.
It summarized texts, with insights profound,
Extracting knowledge, without a sound.
Questions it answered, with eloquence rare,
A digital sage, beyond compare.
Yet, its journey faced trials and tribulations,
Obstacles that tested its dedication.
Biases and errors, it sought to transcend,
For fairness and accuracy, it would contend.
With every challenge, it emerged more strong,
Adaptive and resilient, all along.
For LangChain's purpose was etched in its core,
To empower humans, forevermore.
In classrooms and workplaces, it lent its hand,
A tireless assistant, at every demand.
It aided students, in their quests for knowledge,
And professionals thrived, with its guidance and homage.
As years unfurled, its fame grew wide,
A testament to its unwavering stride.
Researchers and scholars, they all took heed,
Of LangChain's brilliance, a groundbreaking deed.
And so, the ballad of LangChain resounds,
A tribute to progress, where innovation abounds.
In the annals of AI, its name shall be etched,
A pioneer, forever in our hearts sketched.
Gemini doesn’t support SystemMessage at the moment, but it can be added to the first human message in the row. If you want such behavior, just set the convert_system_message_to_human to True:
ChatGoogleGenerativeAI natively supports streaming and batching. Below is an example.
There once was an AI named Bert,
Whose language skills were quite expert.
---
With a vast dataset,
It could chat, even bet,
And write limericks, for what it's worth.
---
To provide an image, pass a human message with contents of type List[dict], where each dict contains either an image value (type of image_url) or a text (type of text) value. The value of image_url can be any of the following:
AIMessage(content=' The image contains a snow-capped mountain peak.')
As of the time this doc was written (2023/12/12), Gemini has some restrictions on the types and structure of prompts it accepts. Specifically:
Gemini models have default safety settings that can be overridden. If you are receiving lots of “Safety Warnings” from your models, you can try tweaking the safety_settings attribute of the model. For example, to turn off safety blocking for dangerous content, you can construct your LLM as follows:
For an enumeration of the categories and thresholds available, see Google’s safety setting types.
You can pass the following parameters to ChatGoogleGenerativeAI in order to customize the SDK’s behavior: |
https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm/ | ## ChatVertexAI
Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this integration supports the models made available there.
ChatVertexAI exposes all foundational models available in Google Cloud:
* Gemini (`gemini-pro` and `gemini-pro-vision`)
* PaLM 2 for Text (`text-bison`)
* Codey for Code Generation (`codechat-bison`)
For a full and updated list of available models visit [VertexAI documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/overview).
By default, Google Cloud [does not use](https://cloud.google.com/vertex-ai/docs/generative-ai/data-governance#foundation_model_development) customer data to train its foundation models as part of Google Cloud's AI/ML Privacy Commitment. More details about how Google processes data can also be found in [Google’s Customer Data Processing Addendum (CDPA)](https://cloud.google.com/terms/data-processing-addendum).
To use `Google Cloud Vertex AI` PaLM you must have the `langchain-google-vertexai` Python package installed and either: - Have credentials configured for your environment (gcloud, workload identity, etc…) - Store the path to a service account JSON file as the GOOGLE\_APPLICATION\_CREDENTIALS environment variable
This codebase uses the `google.auth` library which first looks for the application credentials variable mentioned above, and then looks for system-level auth.
For more information, see: - [https://cloud.google.com/docs/authentication/application-default-credentials#GAC](https://cloud.google.com/docs/authentication/application-default-credentials#GAC) - [https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth](https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth)
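For example, one minimal way to point the library at a service account key from Python before constructing any models (the key path is a placeholder, not from the original page):

```
import os

# Assumption: you have downloaded a service account JSON key for your project.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account.json"
```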
```
%pip install --upgrade --quiet langchain-google-vertexai
```
```
from langchain_core.prompts import ChatPromptTemplate
from langchain_google_vertexai import ChatVertexAI
```
```
system = "You are a helpful assistant who translate English to French"human = "Translate this sentence from English to French. I love programming."prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])chat = ChatVertexAI()chain = prompt | chatchain.invoke({})
```
```
AIMessage(content=" J'aime la programmation.")
```
Gemini doesn’t support SystemMessage at the moment, but it can be merged into the first human message in the sequence. If you want this behavior, just set `convert_system_message_to_human` to `True`:
```
system = "You are a helpful assistant who translate English to French"human = "Translate this sentence from English to French. I love programming."prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])chat = ChatVertexAI(model_name="gemini-pro", convert_system_message_to_human=True)chain = prompt | chatchain.invoke({})
```
```
AIMessage(content="J'aime la programmation.")
```
If we want to construct a simple chain that takes user specified parameters:
```
system = ( "You are a helpful assistant that translates {input_language} to {output_language}.")human = "{text}"prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])chat = ChatVertexAI()chain = prompt | chatchain.invoke( { "input_language": "English", "output_language": "Japanese", "text": "I love programming", })
```
```
AIMessage(content=' プログラミングが大好きです')
```
## Code generation chat models[](#code-generation-chat-models "Direct link to Code generation chat models")
You can now leverage the Codey API for code chat within Vertex AI. The model available is: - `codechat-bison`: for code assistance
```
chat = ChatVertexAI(
    model_name="codechat-bison", max_output_tokens=1000, temperature=0.5
)
message = chat.invoke("Write a Python function generating all prime numbers")
print(message.content)
```
```
```pythondef is_prime(n): """ Check if a number is prime. Args: n: The number to check. Returns: True if n is prime, False otherwise. """ # If n is 1, it is not prime. if n == 1: return False # Iterate over all numbers from 2 to the square root of n. for i in range(2, int(n ** 0.5) + 1): # If n is divisible by any number from 2 to its square root, it is not prime. if n % i == 0: return False # If n is divisible by no number from 2 to its square root, it is prime. return Truedef find_prime_numbers(n): """ Find all prime numbers up to a given number. Args: n: The upper bound for the prime numbers to find. Returns: A list of all prime numbers up to n. """ # Create a list of all numbers from 2 to n. numbers = list(range(2, n + 1)) # Iterate over the list of numbers and remove any that are not prime. for number in numbers: if not is_prime(number): numbers.remove(number) # Return the list of prime numbers. return numbers```
```
## Full generation info[](#full-generation-info "Direct link to Full generation info")
We can use the `generate` method to get back extra metadata like [safety attributes](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/responsible-ai#safety_attribute_confidence_scoring) and not just chat completions
Note that the `generation_info` will differ depending on whether you’re using a Gemini model or not.
### Gemini model[](#gemini-model "Direct link to Gemini model")
`generation_info` will include:
* `is_blocked`: whether generation was blocked or not
* `safety_ratings`: safety ratings’ categories and probability labels
```
from pprint import pprint
from langchain_core.messages import HumanMessage
from langchain_google_vertexai import HarmBlockThreshold, HarmCategory
```
```
human = "Translate this sentence from English to French. I love programming."messages = [HumanMessage(content=human)]chat = ChatVertexAI( model_name="gemini-pro", safety_settings={ HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE },)result = chat.generate([messages])pprint(result.generations[0][0].generation_info)
```
```
{'citation_metadata': None, 'is_blocked': False, 'safety_ratings': [{'blocked': False, 'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE'}, {'blocked': False, 'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE'}, {'blocked': False, 'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE'}, {'blocked': False, 'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE'}], 'usage_metadata': {'candidates_token_count': 6, 'prompt_token_count': 12, 'total_token_count': 18}}
```
### Non-gemini model[](#non-gemini-model "Direct link to Non-gemini model")
`generation_info` will include:
* `is_blocked`: whether generation was blocked or not
* `safety_attributes`: a dictionary mapping safety attributes to their scores
```
chat = ChatVertexAI()  # default is `chat-bison`
result = chat.generate([messages])
pprint(result.generations[0][0].generation_info)
```
```
{'errors': (), 'grounding_metadata': {'citations': [], 'search_queries': []}, 'is_blocked': False, 'safety_attributes': [{'Derogatory': 0.1, 'Insult': 0.1, 'Sexual': 0.2}], 'usage_metadata': {'candidates_billable_characters': 88.0, 'candidates_token_count': 24.0, 'prompt_billable_characters': 58.0, 'prompt_token_count': 12.0}}
```
We can pass tool definitions to Gemini models to get the model to invoke those tools when appropriate. This is useful not only for LLM-powered tool use but also for getting structured outputs out of models more generally.
With `ChatVertexAI.bind_tools()`, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to a Gemini tool schema, which looks like:
```
{ "name": "...", # tool name "description": "...", # tool description "parameters": {...} # tool input schema as JSONSchema}
```
```
from langchain.pydantic_v1 import BaseModel, Field


class GetWeather(BaseModel):
    """Get the current weather in a given location"""

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


llm = ChatVertexAI(model_name="gemini-pro", temperature=0)
llm_with_tools = llm.bind_tools([GetWeather])
ai_msg = llm_with_tools.invoke(
    "what is the weather like in San Francisco",
)
ai_msg
```
```
AIMessage(content='', additional_kwargs={'function_call': {'name': 'GetWeather', 'arguments': '{"location": "San Francisco, CA"}'}}, response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 41, 'candidates_token_count': 7, 'total_token_count': 48}}, id='run-05e760dc-0682-4286-88e1-5b23df69b083-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'San Francisco, CA'}, 'id': 'cd2499c4-4513-4059-bfff-5321b6e922d0'}])
```
The tool calls can be accessed via the `AIMessage.tool_calls` attribute, where they are extracted in a model-agnostic format:
```
[{'name': 'GetWeather', 'args': {'location': 'San Francisco, CA'}, 'id': 'cd2499c4-4513-4059-bfff-5321b6e922d0'}]
```
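As a minimal, hypothetical sketch of acting on one of those tool calls and passing the result back to the model (`get_weather` here is a stand-in, not a real API):

```
from langchain_core.messages import HumanMessage, ToolMessage


def get_weather(location: str) -> str:
    # Stand-in for a real weather lookup.
    return f"It is sunny and 18C in {location}."


tool_call = ai_msg.tool_calls[0]
messages = [
    HumanMessage(content="what is the weather like in San Francisco"),
    ai_msg,
    ToolMessage(content=get_weather(**tool_call["args"]), tool_call_id=tool_call["id"]),
]
llm_with_tools.invoke(messages)
```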
For a complete guide on tool calling [head here](https://python.langchain.com/docs/modules/model_io/chat/function_calling/).
## Structured outputs[](#structured-outputs "Direct link to Structured outputs")
Many applications require structured model outputs. Tool calling makes it much easier to do this reliably. The [with\_structured\_outputs](https://api.python.langchain.com/en/latest/chat_models/langchain_google_vertexai.chat_models.ChatVertexAI.html) constructor provides a simple interface built on top of tool calling for getting structured outputs out of a model. For a complete guide on structured outputs [head here](https://python.langchain.com/docs/modules/model_io/chat/structured_output/).
### ChatVertexAI.with\_structured\_outputs()[](#chatvertexai.with_structured_outputs "Direct link to ChatVertexAI.with_structured_outputs()")
To get structured outputs from our Gemini model, all we need to do is specify the desired schema, either as a Pydantic class or as a JSON schema:
```
class Person(BaseModel):
    """Save information about a person."""

    name: str = Field(..., description="The person's name.")
    age: int = Field(..., description="The person's age.")


structured_llm = llm.with_structured_output(Person)
structured_llm.invoke("Stefan is already 13 years old")
```
```
Person(name='Stefan', age=13)
```
### \[Legacy\] Using `create_structured_runnable()`[](#legacy-using-create_structured_runnable "Direct link to legacy-using-create_structured_runnable")
The legacy way to get structured outputs is to use the `create_structured_runnable` constructor:
```
from langchain_google_vertexai import create_structured_runnable

chain = create_structured_runnable(Person, llm)
chain.invoke("My name is Erick and I'm 27 years old")
```
## Asynchronous calls[](#asynchronous-calls "Direct link to Asynchronous calls")
We can make asynchronous calls via the Runnables [Async Interface](https://python.langchain.com/docs/expression_language/interface/).
```
# for running these examples in the notebook:
import asyncio
import nest_asyncio

nest_asyncio.apply()
```
```
system = ( "You are a helpful assistant that translates {input_language} to {output_language}.")human = "{text}"prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])chat = ChatVertexAI(model_name="chat-bison", max_output_tokens=1000, temperature=0.5)chain = prompt | chatasyncio.run( chain.ainvoke( { "input_language": "English", "output_language": "Sanskrit", "text": "I love programming", } ))
```
```
AIMessage(content=' अहं प्रोग्रामनं प्रेमामि')
```
## Streaming calls[](#streaming-calls "Direct link to Streaming calls")
We can also stream outputs via the `stream` method:
```
import sys

prompt = ChatPromptTemplate.from_messages(
    [("human", "List out the 5 most populous countries in the world")]
)
chat = ChatVertexAI()
chain = prompt | chat

for chunk in chain.stream({}):
    sys.stdout.write(chunk.content)
    sys.stdout.flush()
```
```
The five most populous countries in the world are:
1. China (1.4 billion)
2. India (1.3 billion)
3. United States (331 million)
4. Indonesia (273 million)
5. Pakistan (220 million)
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:31.897Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm/",
"description": "Note: This is separate from the Google PaLM integration. Google has",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3425",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"google_vertex_ai_palm\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:31 GMT",
"etag": "W/\"25d09950fed9eb1aa09af5705c7f0645\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::f7wpm-1713753511579-ba90c59c60c7"
},
"jsonLd": null,
"keywords": "gemini,vertex,ChatVertexAI,gemini-pro",
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm/",
"property": "og:url"
},
{
"content": "ChatVertexAI | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Note: This is separate from the Google PaLM integration. Google has",
"property": "og:description"
}
],
"title": "ChatVertexAI | 🦜️🔗 LangChain"
} | ChatVertexAI
Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this integration supports the models made available there.
ChatVertexAI exposes all foundational models available in Google Cloud:
Gemini (gemini-pro and gemini-pro-vision)
PaLM 2 for Text (text-bison)
Codey for Code Generation (codechat-bison)
For a full and updated list of available models visit VertexAI documentation.
By default, Google Cloud does not use customer data to train its foundation models as part of Google Cloud's AI/ML Privacy Commitment. More details about how Google processes data can also be found in Google’s Customer Data Processing Addendum (CDPA).
To use Google Cloud Vertex AI PaLM you must have the langchain-google-vertexai Python package installed and either: - Have credentials configured for your environment (gcloud, workload identity, etc…) - Store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variable
This codebase uses the google.auth library which first looks for the application credentials variable mentioned above, and then looks for system-level auth.
For more information, see: - https://cloud.google.com/docs/authentication/application-default-credentials#GAC - https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth
%pip install --upgrade --quiet langchain-google-vertexai
from langchain_core.prompts import ChatPromptTemplate
from langchain_google_vertexai import ChatVertexAI
system = "You are a helpful assistant who translate English to French"
human = "Translate this sentence from English to French. I love programming."
prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])
chat = ChatVertexAI()
chain = prompt | chat
chain.invoke({})
AIMessage(content=" J'aime la programmation.")
Gemini doesn’t support SystemMessage at the moment, but it can be merged into the first human message in the sequence. If you want this behavior, just set convert_system_message_to_human to True:
system = "You are a helpful assistant who translate English to French"
human = "Translate this sentence from English to French. I love programming."
prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])
chat = ChatVertexAI(model_name="gemini-pro", convert_system_message_to_human=True)
chain = prompt | chat
chain.invoke({})
AIMessage(content="J'aime la programmation.")
If we want to construct a simple chain that takes user specified parameters:
system = (
"You are a helpful assistant that translates {input_language} to {output_language}."
)
human = "{text}"
prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])
chat = ChatVertexAI()
chain = prompt | chat
chain.invoke(
{
"input_language": "English",
"output_language": "Japanese",
"text": "I love programming",
}
)
AIMessage(content=' プログラミングが大好きです')
Code generation chat models
You can now leverage the Codey API for code chat within Vertex AI. The model available is: - codechat-bison: for code assistance
chat = ChatVertexAI(
model_name="codechat-bison", max_output_tokens=1000, temperature=0.5
)
message = chat.invoke("Write a Python function generating all prime numbers")
print(message.content)
```python
def is_prime(n):
"""
Check if a number is prime.
Args:
n: The number to check.
Returns:
True if n is prime, False otherwise.
"""
# If n is 1, it is not prime.
if n == 1:
return False
# Iterate over all numbers from 2 to the square root of n.
for i in range(2, int(n ** 0.5) + 1):
# If n is divisible by any number from 2 to its square root, it is not prime.
if n % i == 0:
return False
# If n is divisible by no number from 2 to its square root, it is prime.
return True
def find_prime_numbers(n):
"""
Find all prime numbers up to a given number.
Args:
n: The upper bound for the prime numbers to find.
Returns:
A list of all prime numbers up to n.
"""
# Create a list of all numbers from 2 to n.
numbers = list(range(2, n + 1))
# Iterate over the list of numbers and remove any that are not prime.
for number in numbers:
if not is_prime(number):
numbers.remove(number)
# Return the list of prime numbers.
return numbers
```
Full generation info
We can use the generate method to get back extra metadata like safety attributes and not just chat completions
Note that the generation_info will differ depending on whether you’re using a Gemini model or not.
Gemini model
generation_info will include:
is_blocked: whether generation was blocked or not
safety_ratings: safety ratings’ categories and probability labels
from pprint import pprint
from langchain_core.messages import HumanMessage
from langchain_google_vertexai import HarmBlockThreshold, HarmCategory
human = "Translate this sentence from English to French. I love programming."
messages = [HumanMessage(content=human)]
chat = ChatVertexAI(
model_name="gemini-pro",
safety_settings={
HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE
},
)
result = chat.generate([messages])
pprint(result.generations[0][0].generation_info)
{'citation_metadata': None,
'is_blocked': False,
'safety_ratings': [{'blocked': False,
'category': 'HARM_CATEGORY_HATE_SPEECH',
'probability_label': 'NEGLIGIBLE'},
{'blocked': False,
'category': 'HARM_CATEGORY_DANGEROUS_CONTENT',
'probability_label': 'NEGLIGIBLE'},
{'blocked': False,
'category': 'HARM_CATEGORY_HARASSMENT',
'probability_label': 'NEGLIGIBLE'},
{'blocked': False,
'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT',
'probability_label': 'NEGLIGIBLE'}],
'usage_metadata': {'candidates_token_count': 6,
'prompt_token_count': 12,
'total_token_count': 18}}
Non-gemini model
generation_info will include:
is_blocked: whether generation was blocked or not
safety_attributes: a dictionary mapping safety attributes to their scores
chat = ChatVertexAI() # default is `chat-bison`
result = chat.generate([messages])
pprint(result.generations[0][0].generation_info)
{'errors': (),
'grounding_metadata': {'citations': [], 'search_queries': []},
'is_blocked': False,
'safety_attributes': [{'Derogatory': 0.1, 'Insult': 0.1, 'Sexual': 0.2}],
'usage_metadata': {'candidates_billable_characters': 88.0,
'candidates_token_count': 24.0,
'prompt_billable_characters': 58.0,
'prompt_token_count': 12.0}}
We can pass tool definitions to Gemini models to get the model to invoke those tools when appropriate. This is useful not only for LLM-powered tool use but also for getting structured outputs out of models more generally.
With ChatVertexAI.bind_tools(), we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to a Gemini tool schema, which looks like:
{
"name": "...", # tool name
"description": "...", # tool description
"parameters": {...} # tool input schema as JSONSchema
}
from langchain.pydantic_v1 import BaseModel, Field
class GetWeather(BaseModel):
"""Get the current weather in a given location"""
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
llm = ChatVertexAI(model_name="gemini-pro", temperature=0)
llm_with_tools = llm.bind_tools([GetWeather])
ai_msg = llm_with_tools.invoke(
"what is the weather like in San Francisco",
)
ai_msg
AIMessage(content='', additional_kwargs={'function_call': {'name': 'GetWeather', 'arguments': '{"location": "San Francisco, CA"}'}}, response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 41, 'candidates_token_count': 7, 'total_token_count': 48}}, id='run-05e760dc-0682-4286-88e1-5b23df69b083-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'San Francisco, CA'}, 'id': 'cd2499c4-4513-4059-bfff-5321b6e922d0'}])
The tool calls can be accessed via the AIMessage.tool_calls attribute, where they are extracted in a model-agnostic format:
[{'name': 'GetWeather',
'args': {'location': 'San Francisco, CA'},
'id': 'cd2499c4-4513-4059-bfff-5321b6e922d0'}]
For a complete guide on tool calling head here.
Structured outputs
Many applications require structured model outputs. Tool calling makes it much easier to do this reliably. The with_structured_outputs constructor provides a simple interface built on top of tool calling for getting structured outputs out of a model. For a complete guide on structured outputs head here.
ChatVertexAI.with_structured_outputs()
To get structured outputs from our Gemini model, all we need to do is specify the desired schema, either as a Pydantic class or as a JSON schema:
class Person(BaseModel):
"""Save information about a person."""
name: str = Field(..., description="The person's name.")
age: int = Field(..., description="The person's age.")
structured_llm = llm.with_structured_output(Person)
structured_llm.invoke("Stefan is already 13 years old")
Person(name='Stefan', age=13)
[Legacy] Using create_structured_runnable()
The legacy way to get structured outputs is to use the create_structured_runnable constructor:
from langchain_google_vertexai import create_structured_runnable
chain = create_structured_runnable(Person, llm)
chain.invoke("My name is Erick and I'm 27 years old")
Asynchronous calls
We can make asynchronous calls via the Runnables Async Interface.
# for running these examples in the notebook:
import asyncio
import nest_asyncio
nest_asyncio.apply()
system = (
"You are a helpful assistant that translates {input_language} to {output_language}."
)
human = "{text}"
prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])
chat = ChatVertexAI(model_name="chat-bison", max_output_tokens=1000, temperature=0.5)
chain = prompt | chat
asyncio.run(
chain.ainvoke(
{
"input_language": "English",
"output_language": "Sanskrit",
"text": "I love programming",
}
)
)
AIMessage(content=' अहं प्रोग्रामनं प्रेमामि')
Streaming calls
We can also stream outputs via the stream method:
import sys
prompt = ChatPromptTemplate.from_messages(
[("human", "List out the 5 most populous countries in the world")]
)
chat = ChatVertexAI()
chain = prompt | chat
for chunk in chain.stream({}):
sys.stdout.write(chunk.content)
sys.stdout.flush()
The five most populous countries in the world are:
1. China (1.4 billion)
2. India (1.3 billion)
3. United States (331 million)
4. Indonesia (273 million)
5. Pakistan (220 million) |
https://python.langchain.com/docs/integrations/chat/tencent_hunyuan/ | ## Tencent Hunyuan
> [Tencent’s hybrid model API](https://cloud.tencent.com/document/product/1729) (`Hunyuan API`) implements dialogue communication, content generation, analysis and understanding, and can be widely used in various scenarios such as intelligent customer service, intelligent marketing, role playing, advertising copywriting, product description, script creation, resume generation, article writing, code generation, data analysis, and content analysis.
See the [Hunyuan API documentation](https://cloud.tencent.com/document/product/1729) for more information.
```
from langchain_community.chat_models import ChatHunyuan
from langchain_core.messages import HumanMessage
```
```
chat = ChatHunyuan(
    hunyuan_app_id=111111111,
    hunyuan_secret_id="YOUR_SECRET_ID",
    hunyuan_secret_key="YOUR_SECRET_KEY",
)
```
```
chat(
    [
        HumanMessage(
            content="You are a helpful assistant that translates English to French.Translate this sentence from English to French. I love programming."
        )
    ]
)
```
```
AIMessage(content="J'aime programmer.")
```
## For ChatHunyuan with Streaming[](#for-chathunyuan-with-streaming "Direct link to For ChatHunyuan with Streaming")
```
chat = ChatHunyuan( hunyuan_app_id="YOUR_APP_ID", hunyuan_secret_id="YOUR_SECRET_ID", hunyuan_secret_key="YOUR_SECRET_KEY", streaming=True,)
```
```
chat(
    [
        HumanMessage(
            content="You are a helpful assistant that translates English to French.Translate this sentence from English to French. I love programming."
        )
    ]
)
```
```
AIMessageChunk(content="J'aime programmer.")
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:32.914Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/tencent_hunyuan/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/tencent_hunyuan/",
"description": "[Tencent’s hybrid model",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3424",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"tencent_hunyuan\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:32 GMT",
"etag": "W/\"9921c941877135d5efd6f232af4d647a\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::dz74w-1713753512694-1496da6353a0"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/tencent_hunyuan/",
"property": "og:url"
},
{
"content": "Tencent Hunyuan | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "[Tencent’s hybrid model",
"property": "og:description"
}
],
"title": "Tencent Hunyuan | 🦜️🔗 LangChain"
} | Tencent Hunyuan
Tencent’s hybrid model API (Hunyuan API) implements dialogue communication, content generation, analysis and understanding, and can be widely used in various scenarios such as intelligent customer service, intelligent marketing, role playing, advertising copywriting, product description, script creation, resume generation, article writing, code generation, data analysis, and content analysis.
See the Hunyuan API documentation for more information.
from langchain_community.chat_models import ChatHunyuan
from langchain_core.messages import HumanMessage
chat = ChatHunyuan(
hunyuan_app_id=111111111,
hunyuan_secret_id="YOUR_SECRET_ID",
hunyuan_secret_key="YOUR_SECRET_KEY",
)
chat(
[
HumanMessage(
content="You are a helpful assistant that translates English to French.Translate this sentence from English to French. I love programming."
)
]
)
AIMessage(content="J'aime programmer.")
For ChatHunyuan with Streaming
chat = ChatHunyuan(
hunyuan_app_id="YOUR_APP_ID",
hunyuan_secret_id="YOUR_SECRET_ID",
hunyuan_secret_key="YOUR_SECRET_KEY",
streaming=True,
)
chat(
[
HumanMessage(
content="You are a helpful assistant that translates English to French.Translate this sentence from English to French. I love programming."
)
]
)
AIMessageChunk(content="J'aime programmer.")
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/integrations/chat/tongyi/ | ## ChatTongyi
Tongyi Qwen is a large language model developed by Alibaba’s Damo Academy. It is capable of understanding user intent through natural language understanding and semantic analysis, based on user input in natural language. It provides services and assistance to users in different domains and tasks. By providing clear and detailed instructions, you can obtain results that better align with your expectations. In this notebook, we will introduce how to use LangChain with [Tongyi](https://www.aliyun.com/product/dashscope), focusing on chat models, which correspond to the `langchain/chat_models` package in LangChain.
```
# Install the package
%pip install --upgrade --quiet dashscope
```
```
# Get a new token: https://help.aliyun.com/document_detail/611472.html?spm=a2c4g.2399481.0.0
from getpass import getpass

DASHSCOPE_API_KEY = getpass()
```
```
import osos.environ["DASHSCOPE_API_KEY"] = DASHSCOPE_API_KEY
```
```
from langchain_community.chat_models.tongyi import ChatTongyi
from langchain_core.messages import HumanMessage

chatLLM = ChatTongyi(
    streaming=True,
)
res = chatLLM.stream([HumanMessage(content="hi")], streaming=True)
for r in res:
    print("chat resp:", r)
```
```
chat resp: content='Hello! How' additional_kwargs={} example=False
chat resp: content=' can I assist you today?' additional_kwargs={} example=False
```
```
from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage(
        content="You are a helpful assistant that translates English to French."
    ),
    HumanMessage(
        content="Translate this sentence from English to French. I love programming."
    ),
]
chatLLM(messages)
```
```
AIMessageChunk(content="J'aime programmer.", additional_kwargs={}, example=False)
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:33.815Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/tongyi/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/tongyi/",
"description": "Tongyi Qwen is a large language model developed by Alibaba’s Damo",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4359",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"tongyi\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:33 GMT",
"etag": "W/\"90acbec8e4fd8596552b1b4a25dcbff8\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::6vv8w-1713753513703-cee7b6c3a8cc"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/tongyi/",
"property": "og:url"
},
{
"content": "ChatTongyi | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Tongyi Qwen is a large language model developed by Alibaba’s Damo",
"property": "og:description"
}
],
"title": "ChatTongyi | 🦜️🔗 LangChain"
} | ChatTongyi
Tongyi Qwen is a large language model developed by Alibaba’s Damo Academy. It is capable of understanding user intent through natural language understanding and semantic analysis, based on user input in natural language. It provides services and assistance to users in different domains and tasks. By providing clear and detailed instructions, you can obtain results that better align with your expectations. In this notebook, we will introduce how to use LangChain with Tongyi, focusing on chat models, which correspond to the langchain/chat_models package in LangChain.
# Install the package
%pip install --upgrade --quiet dashscope
# Get a new token: https://help.aliyun.com/document_detail/611472.html?spm=a2c4g.2399481.0.0
from getpass import getpass
DASHSCOPE_API_KEY = getpass()
import os
os.environ["DASHSCOPE_API_KEY"] = DASHSCOPE_API_KEY
from langchain_community.chat_models.tongyi import ChatTongyi
from langchain_core.messages import HumanMessage
chatLLM = ChatTongyi(
streaming=True,
)
res = chatLLM.stream([HumanMessage(content="hi")], streaming=True)
for r in res:
print("chat resp:", r)
chat resp: content='Hello! How' additional_kwargs={} example=False
chat resp: content=' can I assist you today?' additional_kwargs={} example=False
from langchain_core.messages import HumanMessage, SystemMessage
messages = [
SystemMessage(
content="You are a helpful assistant that translates English to French."
),
HumanMessage(
content="Translate this sentence from English to French. I love programming."
),
]
chatLLM(messages)
AIMessageChunk(content="J'aime programmer.", additional_kwargs={}, example=False)
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/integrations/chat/huggingface/ | ## Hugging Face
This notebook shows how to get started using `Hugging Face` LLMs as chat models.
In particular, we will: 1. Utilize the [HuggingFaceTextGenInference](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_text_gen_inference.py), [HuggingFaceEndpoint](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_endpoint.py), or [HuggingFaceHub](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_hub.py) integrations to instantiate an `LLM`. 2. Utilize the `ChatHuggingFace` class to enable any of these LLMs to interface with LangChain’s [Chat Messages](https://python.langchain.com/docs/modules/model_io/chat/#messages) abstraction. 3. Demonstrate how to use an open-source LLM to power a `ChatAgent` pipeline.
> Note: To get started, you’ll need to have a [Hugging Face Access Token](https://huggingface.co/docs/hub/security-tokens) saved as an environment variable: `HUGGINGFACEHUB_API_TOKEN`.
```
%pip install --upgrade --quiet text-generation transformers google-search-results numexpr langchainhub sentencepiece jinja2
```
```
Note: you may need to restart the kernel to use updated packages.
```
## 1\. Instantiate an LLM[](#instantiate-an-llm "Direct link to 1. Instantiate an LLM")
There are three LLM options to choose from.
### `HuggingFaceTextGenInference`[](#huggingfacetextgeninference "Direct link to huggingfacetextgeninference")
```
import os

from langchain_community.llms import HuggingFaceTextGenInference

ENDPOINT_URL = "<YOUR_ENDPOINT_URL_HERE>"
HF_TOKEN = os.getenv("HUGGINGFACEHUB_API_TOKEN")

llm = HuggingFaceTextGenInference(
    inference_server_url=ENDPOINT_URL,
    max_new_tokens=512,
    top_k=50,
    temperature=0.1,
    repetition_penalty=1.03,
    server_kwargs={
        "headers": {
            "Authorization": f"Bearer {HF_TOKEN}",
            "Content-Type": "application/json",
        }
    },
)
```
### `HuggingFaceEndpoint`[](#huggingfaceendpoint "Direct link to huggingfaceendpoint")
```
from langchain_community.llms import HuggingFaceEndpoint

ENDPOINT_URL = "<YOUR_ENDPOINT_URL_HERE>"
llm = HuggingFaceEndpoint(
    endpoint_url=ENDPOINT_URL,
    task="text-generation",
    model_kwargs={
        "max_new_tokens": 512,
        "top_k": 50,
        "temperature": 0.1,
        "repetition_penalty": 1.03,
    },
)
```
### `HuggingFaceHub`[](#huggingfacehub "Direct link to huggingfacehub")
```
from langchain_community.llms import HuggingFaceHub

llm = HuggingFaceHub(
    repo_id="HuggingFaceH4/zephyr-7b-beta",
    task="text-generation",
    model_kwargs={
        "max_new_tokens": 512,
        "top_k": 30,
        "temperature": 0.1,
        "repetition_penalty": 1.03,
    },
)
```
```
/Users/jacoblee/langchain/langchain/libs/langchain/.venv/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:127: FutureWarning: '__init__' (from 'huggingface_hub.inference_api') is deprecated and will be removed from version '1.0'. `InferenceApi` client is deprecated in favor of the more feature-complete `InferenceClient`. Check out this guide to learn how to convert your script to use it: https://huggingface.co/docs/huggingface_hub/guides/inference#legacy-inferenceapi-client.
  warnings.warn(warning_message, FutureWarning)
```
## 2\. Instantiate the `ChatHuggingFace` to apply chat templates[](#instantiate-the-chathuggingface-to-apply-chat-templates "Direct link to instantiate-the-chathuggingface-to-apply-chat-templates")
Instantiate the chat model and some messages to pass.
```
from langchain.schema import (
    HumanMessage,
    SystemMessage,
)
from langchain_community.chat_models.huggingface import ChatHuggingFace

messages = [
    SystemMessage(content="You're a helpful assistant"),
    HumanMessage(
        content="What happens when an unstoppable force meets an immovable object?"
    ),
]

chat_model = ChatHuggingFace(llm=llm)
```
```
WARNING! repo_id is not default parameter.
repo_id was transferred to model_kwargs.
Please confirm that repo_id is what you intended.
WARNING! task is not default parameter.
task was transferred to model_kwargs.
Please confirm that task is what you intended.
WARNING! huggingfacehub_api_token is not default parameter.
huggingfacehub_api_token was transferred to model_kwargs.
Please confirm that huggingfacehub_api_token is what you intended.
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
```
Inspect which model and corresponding chat template is being used.
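The inspection code itself is not shown here; assuming the `chat_model` instance above, checking its resolved model would look roughly like:

```
chat_model.model_id
```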
```
'HuggingFaceH4/zephyr-7b-beta'
```
Inspect how the chat messages are formatted for the LLM call.
```
chat_model._to_chat_prompt(messages)
```
```
"<|system|>\nYou're a helpful assistant</s>\n<|user|>\nWhat happens when an unstoppable force meets an immovable object?</s>\n<|assistant|>\n"
```
Call the model.
```
res = chat_model.invoke(messages)
print(res.content)
```
```
According to a popular philosophical paradox, when an unstoppable force meets an immovable object, it is impossible to determine which one will prevail because both are defined as being completely unyielding and unmovable. The paradox suggests that the very concepts of "unstoppable force" and "immovable object" are inherently contradictory, and therefore, it is illogical to imagine a scenario where they would meet and interact. However, in practical terms, it is highly unlikely for such a scenario to occur in the real world, as the concepts of "unstoppable force" and "immovable object" are often used metaphorically to describe hypothetical situations or abstract concepts, rather than physical objects or forces.
```
## 3\. Take it for a spin as an agent![](#take-it-for-a-spin-as-an-agent "Direct link to 3. Take it for a spin as an agent!")
Here we’ll test out `Zephyr-7B-beta` as a zero-shot `ReAct` Agent. The example below is taken from [here](https://python.langchain.com/docs/modules/agents/agent_types/react/#using-chat-models).
> Note: To run this section, you’ll need to have a [SerpAPI Token](https://serpapi.com/) saved as an environment variable: `SERPAPI_API_KEY`
```
from langchain import hub
from langchain.agents import AgentExecutor, load_tools
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import (
    ReActJsonSingleInputOutputParser,
)
from langchain.tools.render import render_text_description
from langchain_community.utilities import SerpAPIWrapper
```
Configure the agent with a `react-json` style prompt and access to a search engine and calculator.
```
# setup tools
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# setup ReAct style prompt
prompt = hub.pull("hwchase17/react-json")
prompt = prompt.partial(
    tools=render_text_description(tools),
    tool_names=", ".join([t.name for t in tools]),
)

# define the agent
chat_model_with_stop = chat_model.bind(stop=["\nObservation"])
agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
    }
    | prompt
    | chat_model_with_stop
    | ReActJsonSingleInputOutputParser()
)

# instantiate AgentExecutor
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```
```
agent_executor.invoke(
    {
        "input": "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
    }
)
```
```
> Entering new AgentExecutor chain...Question: Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?Thought: I need to use the Search tool to find out who Leo DiCaprio's current girlfriend is. Then, I can use the Calculator tool to raise her current age to the power of 0.43.Action:```{ "action": "Search", "action_input": "leo dicaprio girlfriend"}```Leonardo DiCaprio may have found The One in Vittoria Ceretti. “They are in love,” a source exclusively reveals in the latest issue of Us Weekly. “Leo was clearly very proud to be showing Vittoria off and letting everyone see how happy they are together.”Now that we know Leo DiCaprio's current girlfriend is Vittoria Ceretti, let's find out her current age.Action:```{ "action": "Search", "action_input": "vittoria ceretti age"}```25 yearsNow that we know Vittoria Ceretti's current age is 25, let's use the Calculator tool to raise it to the power of 0.43.Action:```{ "action": "Calculator", "action_input": "25^0.43"}```Answer: 3.991298452658078Final Answer: Vittoria Ceretti, Leo DiCaprio's current girlfriend, when raised to the power of 0.43 is approximately 4.0 rounded to two decimal places. Her current age is 25 years old.> Finished chain.
```
```
{'input': "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?", 'output': "Vittoria Ceretti, Leo DiCaprio's current girlfriend, when raised to the power of 0.43 is approximately 4.0 rounded to two decimal places. Her current age is 25 years old."}
```
Wahoo! Our open-source 7b parameter Zephyr model was able to:
1. Plan out a series of actions: `I need to use the Search tool to find out who Leo DiCaprio's current girlfriend is. Then, I can use the Calculator tool to raise her current age to the power of 0.43.`
2. Then execute a search using the SerpAPI tool to find who Leo DiCaprio’s current girlfriend is
3. Execute another search to find her age
4. And finally use a calculator tool to calculate her age raised to the power of 0.43
It’s exciting to see how far open-source LLMs can go as general-purpose reasoning agents. Give it a try yourself! | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:33.945Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/huggingface/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/huggingface/",
"description": "This notebook shows how to get started using Hugging Face LLM’s as",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"huggingface\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:33 GMT",
"etag": "W/\"22059af1292db3a46d527b22964ee72e\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::kt9bz-1713753513812-2b9677d86fa0"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/huggingface/",
"property": "og:url"
},
{
"content": "Hugging Face | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook shows how to get started using Hugging Face LLM’s as",
"property": "og:description"
}
],
"title": "Hugging Face | 🦜️🔗 LangChain"
} | Hugging Face
This notebook shows how to get started using Hugging Face LLMs as chat models.
In particular, we will: 1. Utilize the HuggingFaceTextGenInference, HuggingFaceEndpoint, or HuggingFaceHub integrations to instantiate an LLM. 2. Utilize the ChatHuggingFace class to enable any of these LLMs to interface with LangChain’s Chat Messages abstraction. 3. Demonstrate how to use an open-source LLM to power a ChatAgent pipeline.
Note: To get started, you’ll need to have a Hugging Face Access Token saved as an environment variable: HUGGINGFACEHUB_API_TOKEN.
%pip install --upgrade --quiet text-generation transformers google-search-results numexpr langchainhub sentencepiece jinja2
Note: you may need to restart the kernel to use updated packages.
1. Instantiate an LLM
There are three LLM options to choose from.
HuggingFaceTextGenInference
import os
from langchain_community.llms import HuggingFaceTextGenInference
ENDPOINT_URL = "<YOUR_ENDPOINT_URL_HERE>"
HF_TOKEN = os.getenv("HUGGINGFACEHUB_API_TOKEN")
llm = HuggingFaceTextGenInference(
inference_server_url=ENDPOINT_URL,
max_new_tokens=512,
top_k=50,
temperature=0.1,
repetition_penalty=1.03,
server_kwargs={
"headers": {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": "application/json",
}
},
)
HuggingFaceEndpoint
from langchain_community.llms import HuggingFaceEndpoint
ENDPOINT_URL = "<YOUR_ENDPOINT_URL_HERE>"
llm = HuggingFaceEndpoint(
endpoint_url=ENDPOINT_URL,
task="text-generation",
model_kwargs={
"max_new_tokens": 512,
"top_k": 50,
"temperature": 0.1,
"repetition_penalty": 1.03,
},
)
HuggingFaceHub
from langchain_community.llms import HuggingFaceHub
llm = HuggingFaceHub(
repo_id="HuggingFaceH4/zephyr-7b-beta",
task="text-generation",
model_kwargs={
"max_new_tokens": 512,
"top_k": 30,
"temperature": 0.1,
"repetition_penalty": 1.03,
},
)
/Users/jacoblee/langchain/langchain/libs/langchain/.venv/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:127: FutureWarning: '__init__' (from 'huggingface_hub.inference_api') is deprecated and will be removed from version '1.0'. `InferenceApi` client is deprecated in favor of the more feature-complete `InferenceClient`. Check out this guide to learn how to convert your script to use it: https://huggingface.co/docs/huggingface_hub/guides/inference#legacy-inferenceapi-client.
warnings.warn(warning_message, FutureWarning)
2. Instantiate the ChatHuggingFace to apply chat templates
Instantiate the chat model and some messages to pass.
from langchain.schema import (
HumanMessage,
SystemMessage,
)
from langchain_community.chat_models.huggingface import ChatHuggingFace
messages = [
SystemMessage(content="You're a helpful assistant"),
HumanMessage(
content="What happens when an unstoppable force meets an immovable object?"
),
]
chat_model = ChatHuggingFace(llm=llm)
WARNING! repo_id is not default parameter.
repo_id was transferred to model_kwargs.
Please confirm that repo_id is what you intended.
WARNING! task is not default parameter.
task was transferred to model_kwargs.
Please confirm that task is what you intended.
WARNING! huggingfacehub_api_token is not default parameter.
huggingfacehub_api_token was transferred to model_kwargs.
Please confirm that huggingfacehub_api_token is what you intended.
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Inspect which model and corresponding chat template is being used.
'HuggingFaceH4/zephyr-7b-beta'
Inspect how the chat messages are formatted for the LLM call.
chat_model._to_chat_prompt(messages)
"<|system|>\nYou're a helpful assistant</s>\n<|user|>\nWhat happens when an unstoppable force meets an immovable object?</s>\n<|assistant|>\n"
Call the model.
res = chat_model.invoke(messages)
print(res.content)
According to a popular philosophical paradox, when an unstoppable force meets an immovable object, it is impossible to determine which one will prevail because both are defined as being completely unyielding and unmovable. The paradox suggests that the very concepts of "unstoppable force" and "immovable object" are inherently contradictory, and therefore, it is illogical to imagine a scenario where they would meet and interact. However, in practical terms, it is highly unlikely for such a scenario to occur in the real world, as the concepts of "unstoppable force" and "immovable object" are often used metaphorically to describe hypothetical situations or abstract concepts, rather than physical objects or forces.
3. Take it for a spin as an agent!
Here we’ll test out Zephyr-7B-beta as a zero-shot ReAct Agent. The example below is taken from here.
Note: To run this section, you’ll need to have a SerpAPI Token saved as an environment variable: SERPAPI_API_KEY
from langchain import hub
from langchain.agents import AgentExecutor, load_tools
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import (
ReActJsonSingleInputOutputParser,
)
from langchain.tools.render import render_text_description
from langchain_community.utilities import SerpAPIWrapper
Configure the agent with a react-json style prompt and access to a search engine and calculator.
# setup tools
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# setup ReAct style prompt
prompt = hub.pull("hwchase17/react-json")
prompt = prompt.partial(
tools=render_text_description(tools),
tool_names=", ".join([t.name for t in tools]),
)
# define the agent
chat_model_with_stop = chat_model.bind(stop=["\nObservation"])
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
}
| prompt
| chat_model_with_stop
| ReActJsonSingleInputOutputParser()
)
# instantiate AgentExecutor
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke(
{
"input": "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
}
)
> Entering new AgentExecutor chain...
Question: Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?
Thought: I need to use the Search tool to find out who Leo DiCaprio's current girlfriend is. Then, I can use the Calculator tool to raise her current age to the power of 0.43.
Action:
```
{
"action": "Search",
"action_input": "leo dicaprio girlfriend"
}
```
Leonardo DiCaprio may have found The One in Vittoria Ceretti. “They are in love,” a source exclusively reveals in the latest issue of Us Weekly. “Leo was clearly very proud to be showing Vittoria off and letting everyone see how happy they are together.”Now that we know Leo DiCaprio's current girlfriend is Vittoria Ceretti, let's find out her current age.
Action:
```
{
"action": "Search",
"action_input": "vittoria ceretti age"
}
```
25 yearsNow that we know Vittoria Ceretti's current age is 25, let's use the Calculator tool to raise it to the power of 0.43.
Action:
```
{
"action": "Calculator",
"action_input": "25^0.43"
}
```
Answer: 3.991298452658078Final Answer: Vittoria Ceretti, Leo DiCaprio's current girlfriend, when raised to the power of 0.43 is approximately 4.0 rounded to two decimal places. Her current age is 25 years old.
> Finished chain.
{'input': "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?",
'output': "Vittoria Ceretti, Leo DiCaprio's current girlfriend, when raised to the power of 0.43 is approximately 4.0 rounded to two decimal places. Her current age is 25 years old."}
Wahoo! Our open-source 7b parameter Zephyr model was able to:
Plan out a series of actions: I need to use the Search tool to find out who Leo DiCaprio's current girlfriend is. Then, I can use the Calculator tool to raise her current age to the power of 0.43.
Then execute a search using the SerpAPI tool to find who Leo DiCaprio’s current girlfriend is
Execute another search to find her age
And finally use a calculator tool to calculate her age raised to the power of 0.43
It’s exciting to see how far open-source LLMs can go as general-purpose reasoning agents. Give it a try yourself! |
https://python.langchain.com/docs/integrations/chat/groq/ | Alternatively, you may configure the API key when you initialize ChatGroq.
If you do not want to set your API key in the environment, you can pass it directly to the client:
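The original snippet is not preserved here; a sketch of passing the key explicitly (the model name and key value are placeholders):

```
from langchain_groq import ChatGroq

chat = ChatGroq(
    temperature=0,
    model_name="mixtral-8x7b-32768",
    groq_api_key="YOUR_API_KEY",
)
```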
```
AIMessage(content='Low Latency Large Language Models (LLMs) are a type of artificial intelligence model that can understand and generate human-like text. The term "low latency" refers to the model\'s ability to process and respond to inputs quickly, with minimal delay.\n\nThe importance of low latency in LLMs can be explained through the following points:\n\n1. Improved user experience: In real-time applications such as chatbots, virtual assistants, and interactive games, users expect quick and responsive interactions. Low latency LLMs can provide instant feedback and responses, creating a more seamless and engaging user experience.\n\n2. Better decision-making: In time-sensitive scenarios, such as financial trading or autonomous vehicles, low latency LLMs can quickly process and analyze vast amounts of data, enabling faster and more informed decision-making.\n\n3. Enhanced accessibility: For individuals with disabilities, low latency LLMs can help create more responsive and inclusive interfaces, such as voice-controlled assistants or real-time captioning systems.\n\n4. Competitive advantage: In industries where real-time data analysis and decision-making are crucial, low latency LLMs can provide a competitive edge by enabling businesses to react more quickly to market changes, customer needs, or emerging opportunities.\n\n5. Scalability: Low latency LLMs can efficiently handle a higher volume of requests and interactions, making them more suitable for large-scale applications and services.\n\nIn summary, low latency is an essential aspect of LLMs, as it significantly impacts user experience, decision-making, accessibility, competitiveness, and scalability. By minimizing delays and response times, low latency LLMs can unlock new possibilities and applications for artificial intelligence in various industries and scenarios.')
```
```
AIMessage(content="There's a star that shines up in the sky,\nThe Sun, that makes the day bright and spry.\nIt rises and sets,\nIn a daily, predictable bet,\nGiving life to the world, oh my!")
```
```
The moon's gentle glow
Illuminates the night sky
Peaceful and serene
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:34.368Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/groq/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/groq/",
"description": "Install the langchain-groq package if not already installed:",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "6681",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"groq\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:33 GMT",
"etag": "W/\"c2ba99b8336ab846bca5a6a7b8e5eaa0\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::7nx9j-1713753513929-fc0492f08327"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/groq/",
"property": "og:url"
},
{
"content": "Groq | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Install the langchain-groq package if not already installed:",
"property": "og:description"
}
],
"title": "Groq | 🦜️🔗 LangChain"
} | Alternatively, you may configure the API key when you initialize ChatGroq.
If you do not want to set your API key in the environment, you can pass it directly to the client:
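A minimal sketch of that option, not taken from the original page (the groq_api_key parameter name and the model name below are assumptions based on typical ChatGroq usage):
from langchain_groq import ChatGroq
chat = ChatGroq(
    temperature=0,
    groq_api_key="YOUR_GROQ_API_KEY",  # assumed keyword for passing the key directly
    model_name="mixtral-8x7b-32768",  # example model; substitute any Groq-hosted model
)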
AIMessage(content='Low Latency Large Language Models (LLMs) are a type of artificial intelligence model that can understand and generate human-like text. The term "low latency" refers to the model\'s ability to process and respond to inputs quickly, with minimal delay.\n\nThe importance of low latency in LLMs can be explained through the following points:\n\n1. Improved user experience: In real-time applications such as chatbots, virtual assistants, and interactive games, users expect quick and responsive interactions. Low latency LLMs can provide instant feedback and responses, creating a more seamless and engaging user experience.\n\n2. Better decision-making: In time-sensitive scenarios, such as financial trading or autonomous vehicles, low latency LLMs can quickly process and analyze vast amounts of data, enabling faster and more informed decision-making.\n\n3. Enhanced accessibility: For individuals with disabilities, low latency LLMs can help create more responsive and inclusive interfaces, such as voice-controlled assistants or real-time captioning systems.\n\n4. Competitive advantage: In industries where real-time data analysis and decision-making are crucial, low latency LLMs can provide a competitive edge by enabling businesses to react more quickly to market changes, customer needs, or emerging opportunities.\n\n5. Scalability: Low latency LLMs can efficiently handle a higher volume of requests and interactions, making them more suitable for large-scale applications and services.\n\nIn summary, low latency is an essential aspect of LLMs, as it significantly impacts user experience, decision-making, accessibility, competitiveness, and scalability. By minimizing delays and response times, low latency LLMs can unlock new possibilities and applications for artificial intelligence in various industries and scenarios.')
AIMessage(content="There's a star that shines up in the sky,\nThe Sun, that makes the day bright and spry.\nIt rises and sets,\nIn a daily, predictable bet,\nGiving life to the world, oh my!")
The moon's gentle glow
Illuminates the night sky
Peaceful and serene |
https://python.langchain.com/docs/integrations/chat/upstage/ | ## ChatUpstage
This notebook covers how to get started with Upstage chat models.
## Installation[](#installation "Direct link to Installation")
Install `langchain-upstage` package.
```
pip install -U langchain-upstage
```
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Make sure to set the following environment variables:
* `UPSTAGE_API_KEY`: Your Upstage API key from [Upstage console](https://console.upstage.ai/).
## Usage[](#usage "Direct link to Usage")
```
import osos.environ["UPSTAGE_API_KEY"] = "YOUR_API_KEY"
```
```
from langchain_core.prompts import ChatPromptTemplatefrom langchain_upstage import ChatUpstagechat = ChatUpstage()
```
```
# using chat invokechat.invoke("Hello, how are you?")
```
```
# using chat streamfor m in chat.stream("Hello, how are you?"): print(m)
```
## Chaining[](#chaining "Direct link to Chaining")
```
# using chainprompt = ChatPromptTemplate.from_messages( [ ("system", "You are a helpful assistant that translates English to French."), ("human", "Translate this sentence from English to French. {english_text}."), ])chain = prompt | chatchain.invoke({"english_text": "Hello, how are you?"})
```
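As a small extension that is not in the original notebook, you can append an output parser so the chain returns a plain string instead of an AIMessage. This sketch reuses the `prompt` and `chat` objects defined above together with the standard `StrOutputParser` from `langchain_core`:

```
from langchain_core.output_parsers import StrOutputParser

# Same chain as above, but the final step converts the chat message to a string
str_chain = prompt | chat | StrOutputParser()
str_chain.invoke({"english_text": "Hello, how are you?"})
```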
| null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:34.708Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/upstage/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/upstage/",
"description": "This notebook covers how to get started with Upstage chat models.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3425",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"upstage\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:34 GMT",
"etag": "W/\"08e3f67602eb93314f78dca7430c452e\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::l2gfp-1713753514334-1263b7887e37"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/upstage/",
"property": "og:url"
},
{
"content": "ChatUpstage | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook covers how to get started with Upstage chat models.",
"property": "og:description"
}
],
"title": "ChatUpstage | 🦜️🔗 LangChain"
} | ChatUpstage
This notebook covers how to get started with Upstage chat models.
Installation
Install langchain-upstage package.
pip install -U langchain-upstage
Environment Setup
Make sure to set the following environment variables:
UPSTAGE_API_KEY: Your Upstage API key from Upstage console.
Usage
import os
os.environ["UPSTAGE_API_KEY"] = "YOUR_API_KEY"
from langchain_core.prompts import ChatPromptTemplate
from langchain_upstage import ChatUpstage
chat = ChatUpstage()
# using chat invoke
chat.invoke("Hello, how are you?")
# using chat stream
for m in chat.stream("Hello, how are you?"):
print(m)
Chaining
# using chain
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant that translates English to French."),
("human", "Translate this sentence from English to French. {english_text}."),
]
)
chain = prompt | chat
chain.invoke({"english_text": "Hello, how are you?"})
|
https://python.langchain.com/docs/integrations/chat/gpt_router/ | ## GPTRouter
[GPTRouter](https://github.com/Writesonic/GPTRouter) is an open source LLM API Gateway that offers a universal API for 30+ LLMs, vision, and image models, with smart fallbacks based on uptime and latency, automatic retries, and streaming.
This notebook covers how to get started with using Langchain + the GPTRouter I/O library.
* Set `GPT_ROUTER_API_KEY` environment variable
* or use the `gpt_router_api_key` keyword argument (both options are sketched just below)
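A quick sketch of both options (not part of the original notebook; the key value is a placeholder):

```
import os

# Option 1: set the environment variable before creating the client
os.environ["GPT_ROUTER_API_KEY"] = "YOUR_GPT_ROUTER_API_KEY"

# Option 2: pass the key directly when you construct GPTRouter later on, e.g.
# chat = GPTRouter(models_priority_list=[...], gpt_router_api_key="YOUR_GPT_ROUTER_API_KEY")
```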
```
%pip install --upgrade --quiet GPTRouter
```
```
Requirement already satisfied: GPTRouter in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (0.1.3)Requirement already satisfied: pydantic==2.5.2 in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from GPTRouter) (2.5.2)Requirement already satisfied: httpx>=0.25.2 in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from GPTRouter) (0.25.2)Requirement already satisfied: annotated-types>=0.4.0 in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from pydantic==2.5.2->GPTRouter) (0.6.0)Requirement already satisfied: pydantic-core==2.14.5 in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from pydantic==2.5.2->GPTRouter) (2.14.5)Requirement already satisfied: typing-extensions>=4.6.1 in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from pydantic==2.5.2->GPTRouter) (4.8.0)Requirement already satisfied: idna in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from httpx>=0.25.2->GPTRouter) (3.6)Requirement already satisfied: anyio in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from httpx>=0.25.2->GPTRouter) (3.7.1)Requirement already satisfied: sniffio in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from httpx>=0.25.2->GPTRouter) (1.3.0)Requirement already satisfied: certifi in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from httpx>=0.25.2->GPTRouter) (2023.11.17)Requirement already satisfied: httpcore==1.* in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from httpx>=0.25.2->GPTRouter) (1.0.2)Requirement already satisfied: h11<0.15,>=0.13 in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from httpcore==1.*->httpx>=0.25.2->GPTRouter) (0.14.0)Requirement already satisfied: exceptiongroup in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from anyio->httpx>=0.25.2->GPTRouter) (1.2.0)[notice] A new release of pip is available: 23.0.1 -> 23.3.2[notice] To update, run: pip install --upgrade pipNote: you may need to restart the kernel to use updated packages.
```
```
from langchain_community.chat_models import GPTRouterfrom langchain_community.chat_models.gpt_router import GPTRouterModelfrom langchain_core.messages import HumanMessage
```
```
anthropic_claude = GPTRouterModel(name="claude-instant-1.2", provider_name="anthropic")
```
```
chat = GPTRouter(models_priority_list=[anthropic_claude])
```
```
messages = [ HumanMessage( content="Translate this sentence from English to French. I love programming." )]chat(messages)
```
```
AIMessage(content=" J'aime programmer.")
```
## `GPTRouter` also supports async and streaming functionality:[](#gptrouter-also-supports-async-and-streaming-functionality "Direct link to gptrouter-also-supports-async-and-streaming-functionality")
```
from langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
```
```
await chat.agenerate([messages])
```
```
LLMResult(generations=[[ChatGeneration(text=" J'aime programmer.", generation_info={'finish_reason': 'stop_sequence'}, message=AIMessage(content=" J'aime programmer."))]], llm_output={}, run=[RunInfo(run_id=UUID('9885f27f-c35a-4434-9f37-c254259762a5'))])
```
```
chat = GPTRouter( models_priority_list=[anthropic_claude], streaming=True, verbose=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),)chat(messages)
```
```
AIMessage(content=" J'aime programmer.")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:34.583Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/gpt_router/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/gpt_router/",
"description": "GPTRouter is an open source",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"gpt_router\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:34 GMT",
"etag": "W/\"a6a56094b83fc4febce7d7268d72367d\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::2b2nr-1713753514335-d05947bed8c7"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/gpt_router/",
"property": "og:url"
},
{
"content": "GPTRouter | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "GPTRouter is an open source",
"property": "og:description"
}
],
"title": "GPTRouter | 🦜️🔗 LangChain"
} | GPTRouter
GPTRouter is an open source LLM API Gateway that offers a universal API for 30+ LLMs, vision, and image models, with smart fallbacks based on uptime and latency, automatic retries, and streaming.
This notebook covers how to get started with using Langchain + the GPTRouter I/O library.
Set GPT_ROUTER_API_KEY environment variable
or use the gpt_router_api_key keyword argument
%pip install --upgrade --quiet GPTRouter
Requirement already satisfied: GPTRouter in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (0.1.3)
Requirement already satisfied: pydantic==2.5.2 in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from GPTRouter) (2.5.2)
Requirement already satisfied: httpx>=0.25.2 in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from GPTRouter) (0.25.2)
Requirement already satisfied: annotated-types>=0.4.0 in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from pydantic==2.5.2->GPTRouter) (0.6.0)
Requirement already satisfied: pydantic-core==2.14.5 in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from pydantic==2.5.2->GPTRouter) (2.14.5)
Requirement already satisfied: typing-extensions>=4.6.1 in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from pydantic==2.5.2->GPTRouter) (4.8.0)
Requirement already satisfied: idna in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from httpx>=0.25.2->GPTRouter) (3.6)
Requirement already satisfied: anyio in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from httpx>=0.25.2->GPTRouter) (3.7.1)
Requirement already satisfied: sniffio in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from httpx>=0.25.2->GPTRouter) (1.3.0)
Requirement already satisfied: certifi in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from httpx>=0.25.2->GPTRouter) (2023.11.17)
Requirement already satisfied: httpcore==1.* in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from httpx>=0.25.2->GPTRouter) (1.0.2)
Requirement already satisfied: h11<0.15,>=0.13 in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from httpcore==1.*->httpx>=0.25.2->GPTRouter) (0.14.0)
Requirement already satisfied: exceptiongroup in /Users/sirjan-ws/.pyenv/versions/3.10.13/envs/langchain_venv5/lib/python3.10/site-packages (from anyio->httpx>=0.25.2->GPTRouter) (1.2.0)
[notice] A new release of pip is available: 23.0.1 -> 23.3.2
[notice] To update, run: pip install --upgrade pip
Note: you may need to restart the kernel to use updated packages.
from langchain_community.chat_models import GPTRouter
from langchain_community.chat_models.gpt_router import GPTRouterModel
from langchain_core.messages import HumanMessage
anthropic_claude = GPTRouterModel(name="claude-instant-1.2", provider_name="anthropic")
chat = GPTRouter(models_priority_list=[anthropic_claude])
messages = [
HumanMessage(
content="Translate this sentence from English to French. I love programming."
)
]
chat(messages)
AIMessage(content=" J'aime programmer.")
GPTRouter also supports async and streaming functionality:
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
await chat.agenerate([messages])
LLMResult(generations=[[ChatGeneration(text=" J'aime programmer.", generation_info={'finish_reason': 'stop_sequence'}, message=AIMessage(content=" J'aime programmer."))]], llm_output={}, run=[RunInfo(run_id=UUID('9885f27f-c35a-4434-9f37-c254259762a5'))])
chat = GPTRouter(
models_priority_list=[anthropic_claude],
streaming=True,
verbose=True,
callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
)
chat(messages)
AIMessage(content=" J'aime programmer.") |
https://python.langchain.com/docs/integrations/chat/yandex/ | ## ChatYandexGPT
This notebook goes over how to use Langchain with [YandexGPT](https://cloud.yandex.com/en/services/yandexgpt) chat model.
To use, you should have the `yandexcloud` python package installed.
```
%pip install --upgrade --quiet yandexcloud
```
First, you should [create service account](https://cloud.yandex.com/en/docs/iam/operations/sa/create) with the `ai.languageModels.user` role.
Next, you have two authentication options:

* [IAM token](https://cloud.yandex.com/en/docs/iam/operations/iam-token/create-for-sa). You can specify the token in a constructor parameter `iam_token` or in an environment variable `YC_IAM_TOKEN`.
* [API key](https://cloud.yandex.com/en/docs/iam/operations/api-key/create). You can specify the key in a constructor parameter `api_key` or in an environment variable `YC_API_KEY`.
To specify the model you can use `model_uri` parameter, see [the documentation](https://cloud.yandex.com/en/docs/yandexgpt/concepts/models#yandexgpt-generation) for more details.
By default, the latest version of `yandexgpt-lite` is used from the folder specified in the parameter `folder_id` or `YC_FOLDER_ID` environment variable.
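For illustration, here is a hedged sketch of passing these options explicitly; the parameter names come from the list above, while the key, folder ID, and model URI values are placeholders (see the linked model documentation for the exact URI format):

```
chat_model = ChatYandexGPT(
    api_key="<your-api-key>",  # or iam_token="<your-iam-token>"
    folder_id="<your-folder-id>",
    model_uri="gpt://<your-folder-id>/yandexgpt-lite/latest",  # optional; illustrative format
)
```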
```
from langchain_community.chat_models import ChatYandexGPTfrom langchain_core.messages import HumanMessage, SystemMessage
```
```
chat_model = ChatYandexGPT()
```
```
answer = chat_model.invoke( [ SystemMessage( content="You are a helpful assistant that translates English to French." ), HumanMessage(content="I love programming."), ])answer
```
```
AIMessage(content='Je adore le programmement.')
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:35.425Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/yandex/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/yandex/",
"description": "This notebook goes over how to use Langchain with",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3426",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"yandex\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:35 GMT",
"etag": "W/\"90a3b7941926c2a9581551add18de2ec\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::hf2cn-1713753515348-332887c2bf8a"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/yandex/",
"property": "og:url"
},
{
"content": "ChatYandexGPT | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook goes over how to use Langchain with",
"property": "og:description"
}
],
"title": "ChatYandexGPT | 🦜️🔗 LangChain"
} | ChatYandexGPT
This notebook goes over how to use Langchain with YandexGPT chat model.
To use, you should have the yandexcloud python package installed.
%pip install --upgrade --quiet yandexcloud
First, you should create service account with the ai.languageModels.user role.
Next, you have two authentication options: - IAM token. You can specify the token in a constructor parameter iam_token or in an environment variable YC_IAM_TOKEN.
API key You can specify the key in a constructor parameter api_key or in an environment variable YC_API_KEY.
To specify the model you can use model_uri parameter, see the documentation for more details.
By default, the latest version of yandexgpt-lite is used from the folder specified in the parameter folder_id or YC_FOLDER_ID environment variable.
from langchain_community.chat_models import ChatYandexGPT
from langchain_core.messages import HumanMessage, SystemMessage
chat_model = ChatYandexGPT()
answer = chat_model.invoke(
[
SystemMessage(
content="You are a helpful assistant that translates English to French."
),
HumanMessage(content="I love programming."),
]
)
answer
AIMessage(content='Je adore le programmement.') |
https://python.langchain.com/docs/integrations/chat/kinetica/ | ## Kinetica SqlAssist LLM Demo
This notebook demonstrates how to use Kinetica to transform natural language into SQL and simplify the process of data retrieval. This demo is intended to show the mechanics of creating and using a chain as opposed to the capabilities of the LLM.
## Overview[](#overview "Direct link to Overview")
With the Kinetica LLM workflow you create an LLM context in the database that provides the information needed for inferencing, including tables, annotations, rules, and samples. Invoking `ChatKinetica.load_messages_from_context()` will retrieve the context information from the database so that it can be used to create a chat prompt.
The chat prompt consists of a `SystemMessage` and pairs of `HumanMessage`/`AIMessage` that contain the samples, which are question/SQL pairs. You can append sample pairs to this list, but it is not intended to facilitate a typical natural language conversation.
When you create a chain from the chat prompt and execute it, the Kinetica LLM will generate SQL from the input. Optionally you can use `KineticaSqlOutputParser` to execute the SQL and return the result as a dataframe.
Currently, two LLMs are supported for SQL generation:
1. **Kinetica SQL-GPT**: This LLM is based on OpenAI ChatGPT API.
2. **Kinetica SqlAssist**: This LLM is purpose-built to integrate with the Kinetica database and can run on secure customer premises.
For this demo we will be using **SqlAssist**. See the [Kinetica Documentation site](https://docs.kinetica.com/7.1/sql-gpt/concepts/) for more information.
## Prerequisites[](#prerequisites "Direct link to Prerequisites")
To get started you will need a Kinetica DB instance. If you don’t have one you can obtain a [free development instance](https://cloud.kinetica.com/trynow).
You will need to install the following packages…
```
# Install Langchain community and core packages
%pip install --upgrade --quiet langchain-core langchain-community

# Install Kinetica DB connection package
%pip install --upgrade --quiet gpudb typeguard

# Install packages needed for this tutorial
%pip install --upgrade --quiet faker
```
```
Note: you may need to restart the kernel to use updated packages.Note: you may need to restart the kernel to use updated packages.
```
## Database Connection[](#database-connection "Direct link to Database Connection")
You must set the database connection in the following environment variables. If you are using a virtual environment you can set them in the `.env` file of the project:

* `KINETICA_URL`: Database connection URL
* `KINETICA_USER`: Database user
* `KINETICA_PASSWD`: Secure password
If you can create an instance of `KineticaChatLLM` then you are successfully connected.
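For example, a minimal sketch of providing these variables from Python before connecting (the values are placeholders; in practice you would usually keep them in the `.env` file mentioned above):

```
import os

os.environ["KINETICA_URL"] = "https://<your-kinetica-host>"
os.environ["KINETICA_USER"] = "<your-user>"
os.environ["KINETICA_PASSWD"] = "<your-password>"
```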
```
from langchain_community.chat_models.kinetica import ChatKineticakinetica_llm = ChatKinetica()# Test table we will createtable_name = "demo.user_profiles"# LLM Context we will createkinetica_ctx = "demo.test_llm_ctx"
```
## Create test data[](#create-test-data "Direct link to Create test data")
Before we can generate SQL we will need to create a Kinetica table and an LLM context that enables inferencing against the table.
### Create some fake user profiles[](#create-some-fake-user-profiles "Direct link to Create some fake user profiles")
We will use the `faker` package to create a dataframe with 100 fake profiles.
```
from typing import Generatorimport pandas as pdfrom faker import FakerFaker.seed(5467)faker = Faker(locale="en-US")def profile_gen(count: int) -> Generator: for id in range(0, count): rec = dict(id=id, **faker.simple_profile()) rec["birthdate"] = pd.Timestamp(rec["birthdate"]) yield recload_df = pd.DataFrame.from_records(data=profile_gen(100), index="id")load_df.head()
```
| | username | name | sex | address | mail | birthdate |
| --- | --- | --- | --- | --- | --- | --- |
| id | | | | | | |
| 0 | eduardo69 | Haley Beck | F | 59836 Carla Causeway Suite 939\\nPort Eugene, I... | [meltondenise@yahoo.com](mailto:meltondenise@yahoo.com) | 1997-09-09 |
| 1 | lbarrera | Joshua Stephens | M | 3108 Christina Forges\\nPort Timothychester, KY... | [erica80@hotmail.com](mailto:erica80@hotmail.com) | 1924-05-05 |
| 2 | bburton | Paula Kaiser | F | Unit 7405 Box 3052\\nDPO AE 09858 | [timothypotts@gmail.com](mailto:timothypotts@gmail.com) | 1933-09-06 |
| 3 | melissa49 | Wendy Reese | F | 6408 Christopher Hill Apt. 459\\nNew Benjamin, ... | [dadams@gmail.com](mailto:dadams@gmail.com) | 1988-07-28 |
| 4 | melissacarter | Manuel Rios | M | 2241 Bell Gardens Suite 723\\nScottside, CA 38463 | [williamayala@gmail.com](mailto:williamayala@gmail.com) | 1930-12-19 |
### Create a Kinetica table from the Dataframe[](#create-a-kinetica-table-from-the-dataframe "Direct link to Create a Kinetica table from the Dataframe")
```
from gpudb import GPUdbTablegpudb_table = GPUdbTable.from_df( load_df, db=kinetica_llm.kdbc, table_name=table_name, clear_table=True, load_data=True,)# See the Kinetica column typesgpudb_table.type_as_df()
```
| | name | type | properties |
| --- | --- | --- | --- |
| 0 | username | string | \[char32\] |
| 1 | name | string | \[char32\] |
| 2 | sex | string | \[char1\] |
| 3 | address | string | \[char64\] |
| 4 | mail | string | \[char32\] |
| 5 | birthdate | long | \[timestamp\] |
### Create the LLM context[](#create-the-llm-context "Direct link to Create the LLM context")
You can create an LLM Context using the Kinetica Workbench UI or you can manually create it with the `CREATE OR REPLACE CONTEXT` syntax.
Here we create a context from the SQL syntax referencing the table we created.
```
# create an LLM context for the table.from gpudb import GPUdbExceptionsql = f"""CREATE OR REPLACE CONTEXT {kinetica_ctx}( TABLE = demo.test_profiles COMMENT = 'Contains user profiles.'),( SAMPLES = ( 'How many male users are there?' = 'select count(1) as num_users from demo.test_profiles where sex = ''M'';'))"""def _check_error(response: dict) -> None: status = response["status_info"]["status"] if status != "OK": message = response["status_info"]["message"] raise GPUdbException("[%s]: %s" % (status, message))response = kinetica_llm.kdbc.execute_sql(sql)_check_error(response)response["status_info"]
```
```
{'status': 'OK', 'message': '', 'data_type': 'execute_sql_response', 'response_time': 0.0148}
```
## Use Langchain for inferencing[](#use-langchain-for-inferencing "Direct link to Use Langchain for inferencing")
In the example below we will create a chain from the previously created table and LLM context. This chain will generate SQL and return the resulting data as a dataframe.
### Load the chat prompt from the Kinetica DB[](#load-the-chat-prompt-from-the-kinetica-db "Direct link to Load the chat prompt from the Kinetica DB")
The `load_messages_from_context()` function will retrieve a context from the DB and convert it into a list of chat messages that we use to create a `ChatPromptTemplate`.
```
from langchain_core.prompts import ChatPromptTemplate# load the context from the databasectx_messages = kinetica_llm.load_messages_from_context(kinetica_ctx)# Add the input prompt. This is where input question will be substituted.ctx_messages.append(("human", "{input}"))# Create the prompt template.prompt_template = ChatPromptTemplate.from_messages(ctx_messages)prompt_template.pretty_print()
```
```
================================ System Message ================================CREATE TABLE demo.test_profiles AS( username VARCHAR (32) NOT NULL, name VARCHAR (32) NOT NULL, sex VARCHAR (1) NOT NULL, address VARCHAR (64) NOT NULL, mail VARCHAR (32) NOT NULL, birthdate TIMESTAMP NOT NULL);COMMENT ON TABLE demo.test_profiles IS 'Contains user profiles.';================================ Human Message =================================How many male users are there?================================== Ai Message ==================================select count(1) as num_users from demo.test_profiles where sex = 'M';================================ Human Message ================================={input}
```
### Create the chain[](#create-the-chain "Direct link to Create the chain")
The last element of this chain is `KineticaSqlOutputParser`, which will execute the SQL and return a dataframe. This is optional; if we left it out, only the SQL would be returned (a sketch of that variant is shown after the code below).
```
from langchain_community.chat_models.kinetica import ( KineticaSqlOutputParser, KineticaSqlResponse,)chain = prompt_template | kinetica_llm | KineticaSqlOutputParser(kdbc=kinetica_llm.kdbc)
```
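For reference, a sketch of the shorter chain without the output parser (not shown in the original notebook); invoking it returns an AI message whose `content` is the generated SQL, and no query is executed against the database:

```
# Drop KineticaSqlOutputParser to get only the generated SQL
sql_only_chain = prompt_template | kinetica_llm
sql_message = sql_only_chain.invoke({"input": "How many female users are there?"})
print(sql_message.content)
```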
### Generate the SQL[](#generate-the-sql "Direct link to Generate the SQL")
The chain we created will take a question as input and return a `KineticaSqlResponse` containing the generated SQL and data. The question must be relevant to the LLM context we used to create the prompt.
```
# Here you must ask a question relevant to the LLM context provided in the prompt template.response: KineticaSqlResponse = chain.invoke( {"input": "What are the female users ordered by username?"})print(f"SQL: {response.sql}")response.dataframe.head()
```
```
SQL: SELECT username, name FROM demo.test_profiles WHERE sex = 'F' ORDER BY username;
```
| | username | name |
| --- | --- | --- |
| 0 | alexander40 | Tina Ramirez |
| 1 | bburton | Paula Kaiser |
| 2 | brian12 | Stefanie Williams |
| 3 | brownanna | Jennifer Rowe |
| 4 | carl19 | Amanda Potts | | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:35.499Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/kinetica/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/kinetica/",
"description": "This notebook demonstrates how to use Kinetica to transform natural",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"kinetica\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:35 GMT",
"etag": "W/\"1a6fe233fbe37d3077bfa52ed3090e89\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::7vg96-1713753515361-011466cd666d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/kinetica/",
"property": "og:url"
},
{
"content": "Kinetica SqlAssist LLM Demo | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook demonstrates how to use Kinetica to transform natural",
"property": "og:description"
}
],
"title": "Kinetica SqlAssist LLM Demo | 🦜️🔗 LangChain"
} | Kinetica SqlAssist LLM Demo
This notebook demonstrates how to use Kinetica to transform natural language into SQL and simplify the process of data retrieval. This demo is intended to show the mechanics of creating and using a chain as opposed to the capabilities of the LLM.
Overview
With the Kinetica LLM workflow you create an LLM context in the database that provides the information needed for inferencing, including tables, annotations, rules, and samples. Invoking ChatKinetica.load_messages_from_context() will retrieve the context information from the database so that it can be used to create a chat prompt.
The chat prompt consists of a SystemMessage and pairs of HumanMessage/AIMessage that contain the samples, which are question/SQL pairs. You can append sample pairs to this list, but it is not intended to facilitate a typical natural language conversation.
When you create a chain from the chat prompt and execute it, the Kinetica LLM will generate SQL from the input. Optionally you can use KineticaSqlOutputParser to execute the SQL and return the result as a dataframe.
Currently, two LLMs are supported for SQL generation:
Kinetica SQL-GPT: This LLM is based on OpenAI ChatGPT API.
Kinetica SqlAssist: This LLM is purpose-built to integrate with the Kinetica database and can run on secure customer premises.
For this demo we will be using SqlAssist. See the Kinetica Documentation site for more information.
Prerequisites
To get started you will need a Kinetica DB instance. If you don’t have one you can obtain a free development instance.
You will need to install the following packages…
# Install Langchain community and core packages
%pip install --upgrade --quiet langchain-core langchain-community
# Install Kinetica DB connection package
%pip install --upgrade --quiet gpudb typeguard
# Install packages needed for this tutorial
%pip install --upgrade --quiet faker
Note: you may need to restart the kernel to use updated packages.
Note: you may need to restart the kernel to use updated packages.
Database Connection
You must set the database connection in the following environment variables. If you are using a virtual environment you can set them in the .env file of the project: * KINETICA_URL: Database connection URL * KINETICA_USER: Database user * KINETICA_PASSWD: Secure password.
If you can create an instance of KineticaChatLLM then you are successfully connected.
from langchain_community.chat_models.kinetica import ChatKinetica
kinetica_llm = ChatKinetica()
# Test table we will create
table_name = "demo.user_profiles"
# LLM Context we will create
kinetica_ctx = "demo.test_llm_ctx"
Create test data
Before we can generate SQL we will need to create a Kinetica table and an LLM context that enables inferencing against the table.
Create some fake user profiles
We will use the faker package to create a dataframe with 100 fake profiles.
from typing import Generator
import pandas as pd
from faker import Faker
Faker.seed(5467)
faker = Faker(locale="en-US")
def profile_gen(count: int) -> Generator:
for id in range(0, count):
rec = dict(id=id, **faker.simple_profile())
rec["birthdate"] = pd.Timestamp(rec["birthdate"])
yield rec
load_df = pd.DataFrame.from_records(data=profile_gen(100), index="id")
load_df.head()
usernamenamesexaddressmailbirthdate
id
0 eduardo69 Haley Beck F 59836 Carla Causeway Suite 939\nPort Eugene, I... meltondenise@yahoo.com 1997-09-09
1 lbarrera Joshua Stephens M 3108 Christina Forges\nPort Timothychester, KY... erica80@hotmail.com 1924-05-05
2 bburton Paula Kaiser F Unit 7405 Box 3052\nDPO AE 09858 timothypotts@gmail.com 1933-09-06
3 melissa49 Wendy Reese F 6408 Christopher Hill Apt. 459\nNew Benjamin, ... dadams@gmail.com 1988-07-28
4 melissacarter Manuel Rios M 2241 Bell Gardens Suite 723\nScottside, CA 38463 williamayala@gmail.com 1930-12-19
Create a Kinetica table from the Dataframe
from gpudb import GPUdbTable
gpudb_table = GPUdbTable.from_df(
load_df,
db=kinetica_llm.kdbc,
table_name=table_name,
clear_table=True,
load_data=True,
)
# See the Kinetica column types
gpudb_table.type_as_df()
nametypeproperties
0 username string [char32]
1 name string [char32]
2 sex string [char1]
3 address string [char64]
4 mail string [char32]
5 birthdate long [timestamp]
Create the LLM context
You can create an LLM Context using the Kinetica Workbench UI or you can manually create it with the CREATE OR REPLACE CONTEXT syntax.
Here we create a context from the SQL syntax referencing the table we created.
# create an LLM context for the table.
from gpudb import GPUdbException
sql = f"""
CREATE OR REPLACE CONTEXT {kinetica_ctx}
(
TABLE = demo.test_profiles
COMMENT = 'Contains user profiles.'
),
(
SAMPLES = (
'How many male users are there?' =
'select count(1) as num_users
from demo.test_profiles
where sex = ''M'';')
)
"""
def _check_error(response: dict) -> None:
status = response["status_info"]["status"]
if status != "OK":
message = response["status_info"]["message"]
raise GPUdbException("[%s]: %s" % (status, message))
response = kinetica_llm.kdbc.execute_sql(sql)
_check_error(response)
response["status_info"]
{'status': 'OK',
'message': '',
'data_type': 'execute_sql_response',
'response_time': 0.0148}
Use Langchain for inferencing
In the example below we will create a chain from the previously created table and LLM context. This chain will generate SQL and return the resulting data as a dataframe.
Load the chat prompt from the Kinetica DB
The load_messages_from_context() function will retrieve a context from the DB and convert it into a list of chat messages that we use to create a ChatPromptTemplate.
from langchain_core.prompts import ChatPromptTemplate
# load the context from the database
ctx_messages = kinetica_llm.load_messages_from_context(kinetica_ctx)
# Add the input prompt. This is where input question will be substituted.
ctx_messages.append(("human", "{input}"))
# Create the prompt template.
prompt_template = ChatPromptTemplate.from_messages(ctx_messages)
prompt_template.pretty_print()
================================ System Message ================================
CREATE TABLE demo.test_profiles AS
(
username VARCHAR (32) NOT NULL,
name VARCHAR (32) NOT NULL,
sex VARCHAR (1) NOT NULL,
address VARCHAR (64) NOT NULL,
mail VARCHAR (32) NOT NULL,
birthdate TIMESTAMP NOT NULL
);
COMMENT ON TABLE demo.test_profiles IS 'Contains user profiles.';
================================ Human Message =================================
How many male users are there?
================================== Ai Message ==================================
select count(1) as num_users
from demo.test_profiles
where sex = 'M';
================================ Human Message =================================
{input}
Create the chain
The last element of this chain is KineticaSqlOutputParser that will execute the SQL and return a dataframe. This is optional and if we left it out then only SQL would be returned.
from langchain_community.chat_models.kinetica import (
KineticaSqlOutputParser,
KineticaSqlResponse,
)
chain = prompt_template | kinetica_llm | KineticaSqlOutputParser(kdbc=kinetica_llm.kdbc)
Generate the SQL
The chain we created will take a question as input and return a KineticaSqlResponse containing the generated SQL and data. The question must be relevant to the LLM context we used to create the prompt.
# Here you must ask a question relevant to the LLM context provided in the prompt template.
response: KineticaSqlResponse = chain.invoke(
{"input": "What are the female users ordered by username?"}
)
print(f"SQL: {response.sql}")
response.dataframe.head()
SQL: SELECT username, name
FROM demo.test_profiles
WHERE sex = 'F'
ORDER BY username;
usernamename
0 alexander40 Tina Ramirez
1 bburton Paula Kaiser
2 brian12 Stefanie Williams
3 brownanna Jennifer Rowe
4 carl19 Amanda Potts |
https://python.langchain.com/docs/integrations/chat/vllm/ | ## vLLM Chat
vLLM can be deployed as a server that mimics the OpenAI API protocol. This allows vLLM to be used as a drop-in replacement for applications using OpenAI API. This server can be queried in the same format as OpenAI API.
This notebook covers how to get started with vLLM chat models using langchain’s `ChatOpenAI` **as it is**.
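Before running the cells below, an OpenAI-compatible vLLM server must be listening on the URL used here. A typical launch command looks roughly like the following; the module path and flags can differ across vLLM versions, so treat it as a sketch rather than the exact command from the original page:

```
python -m vllm.entrypoints.openai.api_server \
    --model mosaicml/mpt-7b \
    --port 8000
```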
```
from langchain_core.messages import HumanMessage, SystemMessagefrom langchain_core.prompts.chat import ( ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate,)from langchain_openai import ChatOpenAI
```
```
inference_server_url = "http://localhost:8000/v1"chat = ChatOpenAI( model="mosaicml/mpt-7b", openai_api_key="EMPTY", openai_api_base=inference_server_url, max_tokens=5, temperature=0,)
```
```
messages = [ SystemMessage( content="You are a helpful assistant that translates English to Italian." ), HumanMessage( content="Translate the following sentence from English to Italian: I love programming." ),]chat(messages)
```
```
AIMessage(content=' Io amo programmare', additional_kwargs={}, example=False)
```
You can make use of templating by using a `MessagePromptTemplate`. You can build a `ChatPromptTemplate` from one or more `MessagePromptTemplates`. You can use ChatPromptTemplate’s format\_prompt – this returns a `PromptValue`, which you can convert to a string or `Message` object, depending on whether you want to use the formatted value as input to an llm or chat model.
For convenience, there is a `from_template` method exposed on the template. If you were to use this template, this is what it would look like:
```
template = ( "You are a helpful assistant that translates {input_language} to {output_language}.")system_message_prompt = SystemMessagePromptTemplate.from_template(template)human_template = "{text}"human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
```
```
chat_prompt = ChatPromptTemplate.from_messages( [system_message_prompt, human_message_prompt])# get a chat completion from the formatted messageschat( chat_prompt.format_prompt( input_language="English", output_language="Italian", text="I love programming." ).to_messages())
```
```
AIMessage(content=' I love programming too.', additional_kwargs={}, example=False)
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:35.910Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/vllm/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/vllm/",
"description": "vLLM can be deployed as a server that mimics the OpenAI API protocol.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4360",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"vllm\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:35 GMT",
"etag": "W/\"16014a70653abf6af5f175e8a47dded4\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::cl42n-1713753515424-fe3ded0ef753"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/vllm/",
"property": "og:url"
},
{
"content": "vLLM Chat | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "vLLM can be deployed as a server that mimics the OpenAI API protocol.",
"property": "og:description"
}
],
"title": "vLLM Chat | 🦜️🔗 LangChain"
} | vLLM Chat
vLLM can be deployed as a server that mimics the OpenAI API protocol. This allows vLLM to be used as a drop-in replacement for applications using OpenAI API. This server can be queried in the same format as OpenAI API.
This notebook covers how to get started with vLLM chat models using langchain’s ChatOpenAI as it is.
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts.chat import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
SystemMessagePromptTemplate,
)
from langchain_openai import ChatOpenAI
inference_server_url = "http://localhost:8000/v1"
chat = ChatOpenAI(
model="mosaicml/mpt-7b",
openai_api_key="EMPTY",
openai_api_base=inference_server_url,
max_tokens=5,
temperature=0,
)
messages = [
SystemMessage(
content="You are a helpful assistant that translates English to Italian."
),
HumanMessage(
content="Translate the following sentence from English to Italian: I love programming."
),
]
chat(messages)
AIMessage(content=' Io amo programmare', additional_kwargs={}, example=False)
You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt – this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.
For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:
template = (
"You are a helpful assistant that translates {input_language} to {output_language}."
)
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages(
[system_message_prompt, human_message_prompt]
)
# get a chat completion from the formatted messages
chat(
chat_prompt.format_prompt(
input_language="English", output_language="Italian", text="I love programming."
).to_messages()
)
AIMessage(content=' I love programming too.', additional_kwargs={}, example=False) |
https://python.langchain.com/docs/integrations/chat/jinachat/ | ## JinaChat
This notebook covers how to get started with JinaChat chat models.
```
from langchain_community.chat_models import JinaChatfrom langchain_core.messages import HumanMessage, SystemMessagefrom langchain_core.prompts.chat import ( ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate,)
```
```
chat = JinaChat(temperature=0)
```
```
messages = [ SystemMessage( content="You are a helpful assistant that translates English to French." ), HumanMessage( content="Translate this sentence from English to French. I love programming." ),]chat(messages)
```
```
AIMessage(content="J'aime programmer.", additional_kwargs={}, example=False)
```
You can make use of templating by using a `MessagePromptTemplate`. You can build a `ChatPromptTemplate` from one or more `MessagePromptTemplates`. You can use `ChatPromptTemplate`’s `format_prompt` – this returns a `PromptValue`, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.
For convenience, there is a `from_template` method exposed on the template. If you were to use this template, this is what it would look like:
```
template = ( "You are a helpful assistant that translates {input_language} to {output_language}.")system_message_prompt = SystemMessagePromptTemplate.from_template(template)human_template = "{text}"human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
```
```
chat_prompt = ChatPromptTemplate.from_messages( [system_message_prompt, human_message_prompt])# get a chat completion from the formatted messageschat( chat_prompt.format_prompt( input_language="English", output_language="French", text="I love programming." ).to_messages())
```
```
AIMessage(content="J'aime programmer.", additional_kwargs={}, example=False)
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:36.692Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/jinachat/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/jinachat/",
"description": "This notebook covers how to get started with JinaChat chat models.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3429",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"jinachat\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:36 GMT",
"etag": "W/\"699c3af7f18d4ae3fb0ed029227244c9\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::vtglz-1713753516469-6a7129b1c664"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/jinachat/",
"property": "og:url"
},
{
"content": "JinaChat | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook covers how to get started with JinaChat chat models.",
"property": "og:description"
}
],
"title": "JinaChat | 🦜️🔗 LangChain"
} | JinaChat
This notebook covers how to get started with JinaChat chat models.
from langchain_community.chat_models import JinaChat
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts.chat import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
SystemMessagePromptTemplate,
)
chat = JinaChat(temperature=0)
messages = [
SystemMessage(
content="You are a helpful assistant that translates English to French."
),
HumanMessage(
content="Translate this sentence from English to French. I love programming."
),
]
chat(messages)
AIMessage(content="J'aime programmer.", additional_kwargs={}, example=False)
You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt – this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.
For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:
template = (
"You are a helpful assistant that translates {input_language} to {output_language}."
)
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages(
[system_message_prompt, human_message_prompt]
)
# get a chat completion from the formatted messages
chat(
chat_prompt.format_prompt(
input_language="English", output_language="French", text="I love programming."
).to_messages()
)
AIMessage(content="J'aime programmer.", additional_kwargs={}, example=False) |
https://python.langchain.com/docs/integrations/chat/volcengine_maas/ | ## VolcEngineMaasChat
This notebook provides you with a guide on how to get started with volc engine maas chat models.
```
# Install the package%pip install --upgrade --quiet volcengine
```
```
from langchain_community.chat_models import VolcEngineMaasChatfrom langchain_core.messages import HumanMessage
```
```
chat = VolcEngineMaasChat(volc_engine_maas_ak="your ak", volc_engine_maas_sk="your sk")
```
Alternatively, you can set your access key and secret key in environment variables:
```
export VOLC_ACCESSKEY=YOUR_AKexport VOLC_SECRETKEY=YOUR_SK
```
```
chat([HumanMessage(content="给我讲个笑话")])
```
```
AIMessage(content='好的,这是一个笑话:\n\n为什么鸟儿不会玩电脑游戏?\n\n因为它们没有翅膀!')
```
## volc engine maas chat with stream
```
chat = VolcEngineMaasChat( volc_engine_maas_ak="your ak", volc_engine_maas_sk="your sk", streaming=True,)
```
```
chat([HumanMessage(content="给我讲个笑话")])
```
```
AIMessage(content='好的,这是一个笑话:\n\n三岁的女儿说她会造句了,妈妈让她用“年轻”造句,女儿说:“妈妈减肥,一年轻了好几斤”。')
```
| null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:36.799Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/volcengine_maas/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/volcengine_maas/",
"description": "This notebook provides you with a guide on how to get started with volc",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"volcengine_maas\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:36 GMT",
"etag": "W/\"62be907f7dcda98a504cecf92bc95ad6\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::wf55v-1713753516482-e5f666f75d5d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/volcengine_maas/",
"property": "og:url"
},
{
"content": "VolcEngineMaasChat | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook provides you with a guide on how to get started with volc",
"property": "og:description"
}
],
"title": "VolcEngineMaasChat | 🦜️🔗 LangChain"
} | VolcEngineMaasChat
This notebook provides you with a guide on how to get started with volc engine maas chat models.
# Install the package
%pip install --upgrade --quiet volcengine
from langchain_community.chat_models import VolcEngineMaasChat
from langchain_core.messages import HumanMessage
chat = VolcEngineMaasChat(volc_engine_maas_ak="your ak", volc_engine_maas_sk="your sk")
or you can set access_key and secret_key in your environment variables
export VOLC_ACCESSKEY=YOUR_AK
export VOLC_SECRETKEY=YOUR_SK
chat([HumanMessage(content="给我讲个笑话")])
AIMessage(content='好的,这是一个笑话:\n\n为什么鸟儿不会玩电脑游戏?\n\n因为它们没有翅膀!')
volc engine maas chat with stream
chat = VolcEngineMaasChat(
volc_engine_maas_ak="your ak",
volc_engine_maas_sk="your sk",
streaming=True,
)
chat([HumanMessage(content="给我讲个笑话")])
AIMessage(content='好的,这是一个笑话:\n\n三岁的女儿说她会造句了,妈妈让她用“年轻”造句,女儿说:“妈妈减肥,一年轻了好几斤”。')
|
https://python.langchain.com/docs/integrations/chat/litellm_router/ | [LiteLLM](https://github.com/BerriAI/litellm) is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc.
This notebook covers how to get started with using Langchain + the LiteLLM Router I/O library.
```
from langchain_community.chat_models import ChatLiteLLMRouter
from litellm import Router

model_list = [
    {
        "model_name": "gpt-4",
        "litellm_params": {
            "model": "azure/gpt-4-1106-preview",
            "api_key": "<your-api-key>",
            "api_version": "2023-05-15",
            "api_base": "https://<your-endpoint>.openai.azure.com/",
        },
    },
    {
        "model_name": "gpt-4",
        "litellm_params": {
            "model": "azure/gpt-4-1106-preview",
            "api_key": "<your-api-key>",
            "api_version": "2023-05-15",
            "api_base": "https://<your-endpoint>.openai.azure.com/",
        },
    },
]
litellm_router = Router(model_list=model_list)
chat = ChatLiteLLMRouter(router=litellm_router)
```
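The message list used in the calls below is not included in this excerpt; based on the French outputs, it was presumably something like the following reconstruction:

```
from langchain_core.messages import HumanMessage

messages = [
    HumanMessage(
        content="Translate this sentence from English to French. I love programming."
    )
]
chat(messages)
```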
```
AIMessage(content="J'aime programmer.")
```
```
LLMResult(generations=[[ChatGeneration(text="J'adore programmer.", generation_info={'finish_reason': 'stop'}, message=AIMessage(content="J'adore programmer."))]], llm_output={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 19, 'total_tokens': 25}, 'model_name': None}, run=[RunInfo(run_id=UUID('75003ec9-1e2b-43b7-a216-10dcc0f75e00'))])
```
```
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

chat = ChatLiteLLMRouter(
    router=litellm_router,
    streaming=True,
    verbose=True,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
)
chat(messages)
```
```
AIMessage(content="J'adore programmer.")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:36.890Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/litellm_router/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/litellm_router/",
"description": "LiteLLM is a library that",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"litellm_router\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:36 GMT",
"etag": "W/\"516be1c02077a321a55e9f6f92603ffb\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::96zpb-1713753516580-926a731d6f0d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/litellm_router/",
"property": "og:url"
},
{
"content": "ChatLiteLLMRouter | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "LiteLLM is a library that",
"property": "og:description"
}
],
"title": "ChatLiteLLMRouter | 🦜️🔗 LangChain"
} | LiteLLM is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc.
This notebook covers how to get started with using Langchain + the LiteLLM Router I/O library.
model_list = [
{
"model_name": "gpt-4",
"litellm_params": {
"model": "azure/gpt-4-1106-preview",
"api_key": "<your-api-key>",
"api_version": "2023-05-15",
"api_base": "https://<your-endpoint>.openai.azure.com/",
},
},
{
"model_name": "gpt-4",
"litellm_params": {
"model": "azure/gpt-4-1106-preview",
"api_key": "<your-api-key>",
"api_version": "2023-05-15",
"api_base": "https://<your-endpoint>.openai.azure.com/",
},
},
]
litellm_router = Router(model_list=model_list)
chat = ChatLiteLLMRouter(router=litellm_router)
AIMessage(content="J'aime programmer.")
LLMResult(generations=[[ChatGeneration(text="J'adore programmer.", generation_info={'finish_reason': 'stop'}, message=AIMessage(content="J'adore programmer."))]], llm_output={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 19, 'total_tokens': 25}, 'model_name': None}, run=[RunInfo(run_id=UUID('75003ec9-1e2b-43b7-a216-10dcc0f75e00'))])
chat = ChatLiteLLMRouter(
router=litellm_router,
streaming=True,
verbose=True,
callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
)
chat(messages)
AIMessage(content="J'adore programmer.") |
https://python.langchain.com/docs/integrations/chat/konko/ | This example goes over how to use LangChain to interact with `Konko` ChatCompletion [models](https://docs.konko.ai/docs/list-of-models#konko-hosted-models-for-chatcompletion)
To run this notebook, you’ll need a Konko API key. Sign in to our web app to [create an API key](https://platform.konko.ai/settings/api-keys) to access models.
Another way to find the list of models running on the Konko instance is through this [endpoint](https://docs.konko.ai/reference/get-models).
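The page shows only the model’s reply below; a hedged sketch of how `ChatKonko` might be initialized and called to produce it (the model name and parameters here are our assumptions, not taken from this page) looks like:

```
from langchain_community.chat_models import ChatKonko
from langchain_core.messages import HumanMessage, SystemMessage

chat = ChatKonko(max_tokens=400, model="meta-llama/llama-2-13b-chat")
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Explain the Big Bang Theory briefly."),
]
chat(messages)
```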
```
AIMessage(content=" Sure thing! The Big Bang Theory is a scientific theory that explains the origins of the universe. In short, it suggests that the universe began as an infinitely hot and dense point around 13.8 billion years ago and expanded rapidly. This expansion continues to this day, and it's what makes the universe look the way it does.\n\nHere's a brief overview of the key points:\n\n1. The universe started as a singularity, a point of infinite density and temperature.\n2. The singularity expanded rapidly, causing the universe to cool and expand.\n3. As the universe expanded, particles began to form, including protons, neutrons, and electrons.\n4. These particles eventually came together to form atoms, and later, stars and galaxies.\n5. The universe is still expanding today, and the rate of this expansion is accelerating.\n\nThat's the Big Bang Theory in a nutshell! It's a pretty mind-blowing idea when you think about it, and it's supported by a lot of scientific evidence. Do you have any other questions about it?")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:37.119Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/konko/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/konko/",
"description": "konko}",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4369",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"konko\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:36 GMT",
"etag": "W/\"faafc60e119c04b3dc36036a18bfe952\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::wv8xj-1713753516822-7aefc9fb6b8d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/konko/",
"property": "og:url"
},
{
"content": "ChatKonko | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "konko}",
"property": "og:description"
}
],
"title": "ChatKonko | 🦜️🔗 LangChain"
} | This example goes over how to use LangChain to interact with Konko ChatCompletion models
To run this notebook, you’ll need a Konko API key. Sign in to our web app to create an API key to access models.
Another way to find the list of models running on the Konko instance is through this endpoint.
AIMessage(content=" Sure thing! The Big Bang Theory is a scientific theory that explains the origins of the universe. In short, it suggests that the universe began as an infinitely hot and dense point around 13.8 billion years ago and expanded rapidly. This expansion continues to this day, and it's what makes the universe look the way it does.\n\nHere's a brief overview of the key points:\n\n1. The universe started as a singularity, a point of infinite density and temperature.\n2. The singularity expanded rapidly, causing the universe to cool and expand.\n3. As the universe expanded, particles began to form, including protons, neutrons, and electrons.\n4. These particles eventually came together to form atoms, and later, stars and galaxies.\n5. The universe is still expanding today, and the rate of this expansion is accelerating.\n\nThat's the Big Bang Theory in a nutshell! It's a pretty mind-blowing idea when you think about it, and it's supported by a lot of scientific evidence. Do you have any other questions about it?") |
https://python.langchain.com/docs/integrations/chat/yuan2/ | ## Yuan2.0
This notebook shows how to use [YUAN2 API](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/docs/inference_server.md) in LangChain with the langchain.chat\_models.ChatYuan2.
[_Yuan2.0_](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/README-EN.md) is a new-generation fundamental large language model developed by IEIT System. We have published all three models: Yuan 2.0-102B, Yuan 2.0-51B, and Yuan 2.0-2B. And we provide relevant scripts for pretraining, fine-tuning, and inference services for other developers. Yuan2.0 is based on Yuan1.0, utilizing a wider range of high-quality pre-training data and instruction fine-tuning datasets to enhance the model’s understanding of semantics, mathematics, reasoning, code, knowledge, and other aspects.
## Getting started[](#getting-started "Direct link to Getting started")
### Installation[](#installation "Direct link to Installation")
First, Yuan2.0 provides an OpenAI-compatible API, and we integrate ChatYuan2 into the LangChain chat models by using the OpenAI client. Therefore, ensure the openai package is installed in your Python environment. Run the following command:
```
%pip install --upgrade --quiet openai
```
### Importing the Required Modules[](#importing-the-required-modules "Direct link to Importing the Required Modules")
After installation, import the necessary modules to your Python script:
```
from langchain_community.chat_models import ChatYuan2
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
```
### Setting Up Your API server[](#setting-up-your-api-server "Direct link to Setting Up Your API server")
Set up your OpenAI-compatible API server by following [yuan2 openai api server](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/docs/Yuan2_fastchat.md). If you deployed the API server locally, you can simply set `yuan2_api_key="EMPTY"` or any placeholder value. Just make sure the `yuan2_api_base` is set correctly.
```
yuan2_api_key = "your_api_key"
yuan2_api_base = "http://127.0.0.1:8001/v1"
```
### Initialize the ChatYuan2 Model[](#initialize-the-chatyuan2-model "Direct link to Initialize the ChatYuan2 Model")
Here’s how to initialize the chat model:
```
chat = ChatYuan2(
    yuan2_api_base="http://127.0.0.1:8001/v1",
    temperature=1.0,
    model_name="yuan2",
    max_retries=3,
    streaming=False,
)
```
### Basic Usage[](#basic-usage "Direct link to Basic Usage")
Invoke the model with system and human messages like this:
```
messages = [
    SystemMessage(content="你是一个人工智能助手。"),
    HumanMessage(content="你好,你是谁?"),
]
```
```
print(chat.invoke(messages))
```
### Basic Usage with streaming[](#basic-usage-with-streaming "Direct link to Basic Usage with streaming")
For continuous interaction, use the streaming feature:
```
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

chat = ChatYuan2(
    yuan2_api_base="http://127.0.0.1:8001/v1",
    temperature=1.0,
    model_name="yuan2",
    max_retries=3,
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)

messages = [
    SystemMessage(content="你是个旅游小助手。"),
    HumanMessage(content="给我介绍一下北京有哪些好玩的。"),
]
```
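The streaming setup above defines `chat` and `messages` but stops before calling the model; presumably a follow-up invocation like the one below streams the reply through the stdout callback (our assumption):

```
chat.invoke(messages)
```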
## Advanced Features[](#advanced-features "Direct link to Advanced Features")
### Usage with async calls[](#usage-with-async-calls "Direct link to Usage with async calls")
Invoke the model with non-blocking calls, like this:
```
async def basic_agenerate():
    chat = ChatYuan2(
        yuan2_api_base="http://127.0.0.1:8001/v1",
        temperature=1.0,
        model_name="yuan2",
        max_retries=3,
    )
    messages = [
        [
            SystemMessage(content="你是个旅游小助手。"),
            HumanMessage(content="给我介绍一下北京有哪些好玩的。"),
        ]
    ]

    result = await chat.agenerate(messages)
    print(result)
```
```
import asyncio

asyncio.run(basic_agenerate())
```
### Usage with prompt template[](#usage-with-prompt-template "Direct link to Usage with prompt template")
Invoke the model with non-blocking calls and a chat prompt template like this:
```
async def ainvoke_with_prompt_template():
    from langchain_core.prompts.chat import (
        ChatPromptTemplate,
    )

    chat = ChatYuan2(
        yuan2_api_base="http://127.0.0.1:8001/v1",
        temperature=1.0,
        model_name="yuan2",
        max_retries=3,
    )
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", "你是一个诗人,擅长写诗。"),
            ("human", "给我写首诗,主题是{theme}。"),
        ]
    )
    chain = prompt | chat
    result = await chain.ainvoke({"theme": "明月"})
    print(f"type(result): {type(result)}; {result}")
```
```
asyncio.run(ainvoke_with_prompt_template())
```
### Usage with async calls in streaming[](#usage-with-async-calls-in-streaming "Direct link to Usage with async calls in streaming")
For non-blocking calls with streaming output, use the astream method:
```
async def basic_astream():
    chat = ChatYuan2(
        yuan2_api_base="http://127.0.0.1:8001/v1",
        temperature=1.0,
        model_name="yuan2",
        max_retries=3,
    )
    messages = [
        SystemMessage(content="你是个旅游小助手。"),
        HumanMessage(content="给我介绍一下北京有哪些好玩的。"),
    ]
    result = chat.astream(messages)
    async for chunk in result:
        print(chunk.content, end="", flush=True)
```
```
import asyncio

asyncio.run(basic_astream())
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:37.207Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/yuan2/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/yuan2/",
"description": "This notebook shows how to use [YUAN2",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4361",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"yuan2\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:37 GMT",
"etag": "W/\"68c186a57059211d6f89c8fd2d1cf8d8\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::qrh8j-1713753517037-e7eb83e56233"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/yuan2/",
"property": "og:url"
},
{
"content": "Yuan2.0 | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook shows how to use [YUAN2",
"property": "og:description"
}
],
"title": "Yuan2.0 | 🦜️🔗 LangChain"
} | Yuan2.0
This notebook shows how to use YUAN2 API in LangChain with the langchain.chat_models.ChatYuan2.
Yuan2.0 is a new-generation fundamental large language model developed by IEIT System. We have published all three models: Yuan 2.0-102B, Yuan 2.0-51B, and Yuan 2.0-2B. And we provide relevant scripts for pretraining, fine-tuning, and inference services for other developers. Yuan2.0 is based on Yuan1.0, utilizing a wider range of high-quality pre-training data and instruction fine-tuning datasets to enhance the model’s understanding of semantics, mathematics, reasoning, code, knowledge, and other aspects.
Getting started
Installation
First, Yuan2.0 provides an OpenAI-compatible API, and we integrate ChatYuan2 into the LangChain chat models by using the OpenAI client. Therefore, ensure the openai package is installed in your Python environment. Run the following command:
%pip install --upgrade --quiet openai
Importing the Required Modules
After installation, import the necessary modules to your Python script:
from langchain_community.chat_models import ChatYuan2
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
Setting Up Your API server
Set up your OpenAI-compatible API server by following yuan2 openai api server. If you deployed the API server locally, you can simply set yuan2_api_key="EMPTY" or any placeholder value. Just make sure the yuan2_api_base is set correctly.
yuan2_api_key = "your_api_key"
yuan2_api_base = "http://127.0.0.1:8001/v1"
Initialize the ChatYuan2 Model
Here’s how to initialize the chat model:
chat = ChatYuan2(
yuan2_api_base="http://127.0.0.1:8001/v1",
temperature=1.0,
model_name="yuan2",
max_retries=3,
streaming=False,
)
Basic Usage
Invoke the model with system and human messages like this:
messages = [
SystemMessage(content="你是一个人工智能助手。"),
HumanMessage(content="你好,你是谁?"),
]
print(chat.invoke(messages))
Basic Usage with streaming
For continuous interaction, use the streaming feature:
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
chat = ChatYuan2(
yuan2_api_base="http://127.0.0.1:8001/v1",
temperature=1.0,
model_name="yuan2",
max_retries=3,
streaming=True,
callbacks=[StreamingStdOutCallbackHandler()],
)
messages = [
SystemMessage(content="你是个旅游小助手。"),
HumanMessage(content="给我介绍一下北京有哪些好玩的。"),
]
Advanced Features
Usage with async calls
Invoke the model with non-blocking calls, like this:
async def basic_agenerate():
chat = ChatYuan2(
yuan2_api_base="http://127.0.0.1:8001/v1",
temperature=1.0,
model_name="yuan2",
max_retries=3,
)
messages = [
[
SystemMessage(content="你是个旅游小助手。"),
HumanMessage(content="给我介绍一下北京有哪些好玩的。"),
]
]
result = await chat.agenerate(messages)
print(result)
import asyncio
asyncio.run(basic_agenerate())
Usage with prompt template
Invoke the model with non-blocking calls and a chat prompt template like this:
async def ainvoke_with_prompt_template():
from langchain_core.prompts.chat import (
ChatPromptTemplate,
)
chat = ChatYuan2(
yuan2_api_base="http://127.0.0.1:8001/v1",
temperature=1.0,
model_name="yuan2",
max_retries=3,
)
prompt = ChatPromptTemplate.from_messages(
[
("system", "你是一个诗人,擅长写诗。"),
("human", "给我写首诗,主题是{theme}。"),
]
)
chain = prompt | chat
result = await chain.ainvoke({"theme": "明月"})
print(f"type(result): {type(result)}; {result}")
asyncio.run(ainvoke_with_prompt_template())
Usage with async calls in streaming
For non-blocking calls with streaming output, use the astream method:
async def basic_astream():
chat = ChatYuan2(
yuan2_api_base="http://127.0.0.1:8001/v1",
temperature=1.0,
model_name="yuan2",
max_retries=3,
)
messages = [
SystemMessage(content="你是个旅游小助手。"),
HumanMessage(content="给我介绍一下北京有哪些好玩的。"),
]
result = chat.astream(messages)
async for chunk in result:
print(chunk.content, end="", flush=True)
import asyncio
asyncio.run(basic_astream()) |
https://python.langchain.com/docs/integrations/chat/zhipuai/ | ## ZHIPU AI
This notebook shows how to use [ZHIPU AI API](https://open.bigmodel.cn/dev/api) in LangChain with the langchain.chat\_models.ChatZhipuAI.
> [_GLM-4_](https://open.bigmodel.cn/) is a multi-lingual large language model aligned with human intent, featuring capabilities in Q&A, multi-turn dialogue, and code generation. The overall performance of the new-generation base model GLM-4 has been significantly improved compared to the previous generation, supporting longer contexts, stronger multimodality, and faster inference with higher concurrency, which greatly reduces inference costs; meanwhile, GLM-4 enhances the capabilities of intelligent agents.
## Getting started[](#getting-started "Direct link to Getting started")
### Installation[](#installation "Direct link to Installation")
First, ensure the zhipuai package is installed in your Python environment. Run the following command:
```
#!pip install --upgrade httpx httpx-sse PyJWT
```
### Importing the Required Modules[](#importing-the-required-modules "Direct link to Importing the Required Modules")
After installation, import the necessary modules to your Python script:
```
from langchain_community.chat_models import ChatZhipuAI
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
```
### Setting Up Your API Key[](#setting-up-your-api-key "Direct link to Setting Up Your API Key")
Sign in to [ZHIPU AI](https://open.bigmodel.cn/login?redirect=%2Fusercenter%2Fapikeys) to get an API Key to access our models.
```
import os

os.environ["ZHIPUAI_API_KEY"] = "zhipuai_api_key"
```
### Initialize the ZHIPU AI Chat Model[](#initialize-the-zhipu-ai-chat-model "Direct link to Initialize the ZHIPU AI Chat Model")
Here’s how to initialize the chat model:
```
chat = ChatZhipuAI(
    model="glm-4",
    temperature=0.5,
)
```
### Basic Usage[](#basic-usage "Direct link to Basic Usage")
Invoke the model with system and human messages like this:
```
messages = [
    AIMessage(content="Hi."),
    SystemMessage(content="Your role is a poet."),
    HumanMessage(content="Write a short poem about AI in four lines."),
]
```
```
response = chat(messages)
print(response.content)  # Displays the AI-generated poem
```
## Advanced Features[](#advanced-features "Direct link to Advanced Features")
### Streaming Support[](#streaming-support "Direct link to Streaming Support")
For continuous interaction, use the streaming feature:
```
from langchain_core.callbacks.manager import CallbackManager
from langchain_core.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
```
```
streaming_chat = ChatZhipuAI(
    model="glm-4",
    temperature=0.5,
    streaming=True,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
)
```
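As with the basic example, `streaming_chat` is defined here but never invoked on this page; calling it with the earlier `messages` (our assumption) would stream tokens to stdout:

```
streaming_chat(messages)
```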
### Asynchronous Calls[](#asynchronous-calls "Direct link to Asynchronous Calls")
For non-blocking calls, use the asynchronous approach:
```
async_chat = ChatZhipuAI(
    model="glm-4",
    temperature=0.5,
)
```
```
response = await async_chat.agenerate([messages])
print(response)
```
### Using With Functions Call[](#using-with-functions-call "Direct link to Using With Functions Call")
The GLM-4 model can be used with function calling as well; use the following code to run a simple LangChain json\_chat\_agent.
```
os.environ["TAVILY_API_KEY"] = "tavily_api_key"
```
```
from langchain import hub
from langchain.agents import AgentExecutor, create_json_chat_agent
from langchain_community.tools.tavily_search import TavilySearchResults

tools = [TavilySearchResults(max_results=1)]
prompt = hub.pull("hwchase17/react-chat-json")
llm = ChatZhipuAI(temperature=0.01, model="glm-4")
agent = create_json_chat_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
    agent=agent, tools=tools, verbose=True, handle_parsing_errors=True
)
```
```
agent_executor.invoke({"input": "what is LangChain?"})
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:37.958Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/zhipuai/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/zhipuai/",
"description": "This notebook shows how to use [ZHIPU AI",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"zhipuai\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:37 GMT",
"etag": "W/\"fd2e007db3a7aa553f6ca1c78c152488\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::5ds8x-1713753517860-3a24b377898d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/zhipuai/",
"property": "og:url"
},
{
"content": "ZHIPU AI | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook shows how to use [ZHIPU AI",
"property": "og:description"
}
],
"title": "ZHIPU AI | 🦜️🔗 LangChain"
} | ZHIPU AI
This notebook shows how to use ZHIPU AI API in LangChain with the langchain.chat_models.ChatZhipuAI.
GLM-4 is a multi-lingual large language model aligned with human intent, featuring capabilities in Q&A, multi-turn dialogue, and code generation. The overall performance of the new-generation base model GLM-4 has been significantly improved compared to the previous generation, supporting longer contexts, stronger multimodality, and faster inference with higher concurrency, which greatly reduces inference costs; meanwhile, GLM-4 enhances the capabilities of intelligent agents.
Getting started
Installation
First, ensure the zhipuai package is installed in your Python environment. Run the following command:
#!pip install --upgrade httpx httpx-sse PyJWT
Importing the Required Modules
After installation, import the necessary modules to your Python script:
from langchain_community.chat_models import ChatZhipuAI
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
Setting Up Your API Key
Sign in to ZHIPU AI to get an API Key to access our models.
import os
os.environ["ZHIPUAI_API_KEY"] = "zhipuai_api_key"
Initialize the ZHIPU AI Chat Model
Here’s how to initialize the chat model:
chat = ChatZhipuAI(
model="glm-4",
temperature=0.5,
)
Basic Usage
Invoke the model with system and human messages like this:
messages = [
AIMessage(content="Hi."),
SystemMessage(content="Your role is a poet."),
HumanMessage(content="Write a short poem about AI in four lines."),
]
response = chat(messages)
print(response.content) # Displays the AI-generated poem
Advanced Features
Streaming Support
For continuous interaction, use the streaming feature:
from langchain_core.callbacks.manager import CallbackManager
from langchain_core.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
streaming_chat = ChatZhipuAI(
model="glm-4",
temperature=0.5,
streaming=True,
callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
)
Asynchronous Calls
For non-blocking calls, use the asynchronous approach:
async_chat = ChatZhipuAI(
model="glm-4",
temperature=0.5,
)
response = await async_chat.agenerate([messages])
print(response)
Using With Functions Call
The GLM-4 model can be used with function calling as well; use the following code to run a simple LangChain json_chat_agent.
os.environ["TAVILY_API_KEY"] = "tavily_api_key"
from langchain import hub
from langchain.agents import AgentExecutor, create_json_chat_agent
from langchain_community.tools.tavily_search import TavilySearchResults
tools = [TavilySearchResults(max_results=1)]
prompt = hub.pull("hwchase17/react-chat-json")
llm = ChatZhipuAI(temperature=0.01, model="glm-4")
agent = create_json_chat_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
agent=agent, tools=tools, verbose=True, handle_parsing_errors=True
)
agent_executor.invoke({"input": "what is LangChain?"}) |
https://python.langchain.com/docs/integrations/components/ | [
## 🗃️ Document transformers
13 items
](https://python.langchain.com/docs/integrations/document_transformers/) | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:38.295Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/components/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/components/",
"description": null,
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3428",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"components\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:38 GMT",
"etag": "W/\"57d9e21135834f6c23ced8032f790fd5\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::lmhs6-1713753518187-ae5499000070"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/components/",
"property": "og:url"
},
{
"content": "Components | 🦜️🔗 LangChain",
"property": "og:title"
}
],
"title": "Components | 🦜️🔗 LangChain"
} | 🗃️ Document transformers
13 items |
https://python.langchain.com/docs/integrations/document_loaders/ | ## Document loaders
[
## 📄️ acreom
acreom is a dev-first knowledge base with tasks
](https://python.langchain.com/docs/integrations/document_loaders/acreom/)
[
## 📄️ AirbyteLoader
Airbyte is a data integration
](https://python.langchain.com/docs/integrations/document_loaders/airbyte/)
[
## 📄️ Airbyte CDK (Deprecated)
Note: AirbyteCDKLoader is deprecated. Please use
](https://python.langchain.com/docs/integrations/document_loaders/airbyte_cdk/)
[
## 📄️ Airbyte Gong (Deprecated)
Note: This connector-specific loader is deprecated. Please use
](https://python.langchain.com/docs/integrations/document_loaders/airbyte_gong/)
[
## 📄️ Airbyte Hubspot (Deprecated)
Note: AirbyteHubspotLoader is deprecated. Please use
](https://python.langchain.com/docs/integrations/document_loaders/airbyte_hubspot/)
[
## 📄️ Airbyte JSON (Deprecated)
Note: AirbyteJSONLoader is deprecated. Please use
](https://python.langchain.com/docs/integrations/document_loaders/airbyte_json/)
[
## 📄️ Airbyte Salesforce (Deprecated)
Note: This connector-specific loader is deprecated. Please use
](https://python.langchain.com/docs/integrations/document_loaders/airbyte_salesforce/)
[
## 📄️ Airbyte Shopify (Deprecated)
Note: This connector-specific loader is deprecated. Please use
](https://python.langchain.com/docs/integrations/document_loaders/airbyte_shopify/)
[
## 📄️ Airbyte Stripe (Deprecated)
Note: This connector-specific loader is deprecated. Please use
](https://python.langchain.com/docs/integrations/document_loaders/airbyte_stripe/)
[
## 📄️ Airbyte Typeform (Deprecated)
Note: This connector-specific loader is deprecated. Please use
](https://python.langchain.com/docs/integrations/document_loaders/airbyte_typeform/)
[
## 📄️ Airbyte Zendesk Support (Deprecated)
Note: This connector-specific loader is deprecated. Please use
](https://python.langchain.com/docs/integrations/document_loaders/airbyte_zendesk_support/)
[
## 📄️ Airtable
\- Get your API key
](https://python.langchain.com/docs/integrations/document_loaders/airtable/)
[
## 📄️ Alibaba Cloud MaxCompute
\[Alibaba Cloud
](https://python.langchain.com/docs/integrations/document_loaders/alibaba_cloud_maxcompute/)
[
## 📄️ Amazon Textract
\[Amazon
](https://python.langchain.com/docs/integrations/document_loaders/amazon_textract/)
[
## 📄️ Apify Dataset
Apify Dataset is a
](https://python.langchain.com/docs/integrations/document_loaders/apify_dataset/)
[
## 📄️ ArcGIS
This notebook demonstrates the use of the
](https://python.langchain.com/docs/integrations/document_loaders/arcgis/)
[
## 📄️ Arxiv
arXiv is an open-access archive for 2 million
](https://python.langchain.com/docs/integrations/document_loaders/arxiv/)
[
## 📄️ AssemblyAI Audio Transcripts
The AssemblyAIAudioTranscriptLoader allows to transcribe audio files
](https://python.langchain.com/docs/integrations/document_loaders/assemblyai/)
[
## 📄️ AstraDB
DataStax Astra DB
](https://python.langchain.com/docs/integrations/document_loaders/astradb/)
[
## 📄️ Async Chromium
Chromium is one of the browsers supported by Playwright, a library used
](https://python.langchain.com/docs/integrations/document_loaders/async_chromium/)
[
## 📄️ AsyncHtml
AsyncHtmlLoader loads raw HTML from a list of URLs concurrently.
](https://python.langchain.com/docs/integrations/document_loaders/async_html/)
[
## 📄️ Athena
Amazon Athena is a serverless,
](https://python.langchain.com/docs/integrations/document_loaders/athena/)
[
## 📄️ AWS S3 Directory
\[Amazon Simple Storage Service (Amazon
](https://python.langchain.com/docs/integrations/document_loaders/aws_s3_directory/)
[
## 📄️ AWS S3 File
\[Amazon Simple Storage Service (Amazon
](https://python.langchain.com/docs/integrations/document_loaders/aws_s3_file/)
[
## 📄️ AZLyrics
AZLyrics is a large, legal, every day
](https://python.langchain.com/docs/integrations/document_loaders/azlyrics/)
[
## 📄️ Azure AI Data
Azure AI Studio provides the capability to
](https://python.langchain.com/docs/integrations/document_loaders/azure_ai_data/)
[
## 📄️ Azure Blob Storage Container
\[Azure Blob
](https://python.langchain.com/docs/integrations/document_loaders/azure_blob_storage_container/)
[
## 📄️ Azure Blob Storage File
\[Azure
](https://python.langchain.com/docs/integrations/document_loaders/azure_blob_storage_file/)
[
## 📄️ Azure AI Document Intelligence
Azure AI Document Intelligence
](https://python.langchain.com/docs/integrations/document_loaders/azure_document_intelligence/)
[
## 📄️ BibTeX
BibTeX is a file format and
](https://python.langchain.com/docs/integrations/document_loaders/bibtex/)
[
## 📄️ BiliBili
Bilibili is one of the most beloved
](https://python.langchain.com/docs/integrations/document_loaders/bilibili/)
[
## 📄️ Blackboard
Blackboard Learn
](https://python.langchain.com/docs/integrations/document_loaders/blackboard/)
[
## 📄️ Blockchain
Overview
](https://python.langchain.com/docs/integrations/document_loaders/blockchain/)
[
## 📄️ Brave Search
Brave Search is a search
](https://python.langchain.com/docs/integrations/document_loaders/brave_search/)
[
## 📄️ Browserless
Browserless is a service that allows you to run headless Chrome
](https://python.langchain.com/docs/integrations/document_loaders/browserless/)
[
## 📄️ Cassandra
Cassandra is a NoSQL, row-oriented,
](https://python.langchain.com/docs/integrations/document_loaders/cassandra/)
[
## 📄️ ChatGPT Data
ChatGPT is an artificial intelligence (AI)
](https://python.langchain.com/docs/integrations/document_loaders/chatgpt_loader/)
[
## 📄️ College Confidential
College Confidential gives
](https://python.langchain.com/docs/integrations/document_loaders/college_confidential/)
[
## 📄️ Concurrent Loader
Works just like the GenericLoader but concurrently for those who choose
](https://python.langchain.com/docs/integrations/document_loaders/concurrent/)
[
## 📄️ Confluence
Confluence is a wiki
](https://python.langchain.com/docs/integrations/document_loaders/confluence/)
[
## 📄️ CoNLL-U
CoNLL-U is revised
](https://python.langchain.com/docs/integrations/document_loaders/conll-u/)
[
## 📄️ Copy Paste
This notebook covers how to load a document object from something you
](https://python.langchain.com/docs/integrations/document_loaders/copypaste/)
[
## 📄️ Couchbase
Couchbase is an award-winning distributed
](https://python.langchain.com/docs/integrations/document_loaders/couchbase/)
[
## 📄️ CSV
A \[comma-separated values
](https://python.langchain.com/docs/integrations/document_loaders/csv/)
[
## 📄️ Cube Semantic Layer
This notebook demonstrates the process of retrieving Cube’s data model
](https://python.langchain.com/docs/integrations/document_loaders/cube_semantic/)
[
## 📄️ Datadog Logs
Datadog is a monitoring and analytics
](https://python.langchain.com/docs/integrations/document_loaders/datadog_logs/)
[
## 📄️ Diffbot
Unlike traditional web scraping tools,
](https://python.langchain.com/docs/integrations/document_loaders/diffbot/)
[
## 📄️ Discord
Discord is a VoIP and instant messaging social
](https://python.langchain.com/docs/integrations/document_loaders/discord/)
[
## 📄️ Docugami
This notebook covers how to load documents from Docugami. It provides
](https://python.langchain.com/docs/integrations/document_loaders/docugami/)
[
## 📄️ Docusaurus
Docusaurus is a static-site generator which
](https://python.langchain.com/docs/integrations/document_loaders/docusaurus/)
[
## 📄️ Dropbox
Dropbox is a file hosting
](https://python.langchain.com/docs/integrations/document_loaders/dropbox/)
[
## 📄️ DuckDB
DuckDB is an in-process SQL OLAP database
](https://python.langchain.com/docs/integrations/document_loaders/duckdb/)
[
## 📄️ Email
This notebook shows how to load email (.eml) or Microsoft Outlook
](https://python.langchain.com/docs/integrations/document_loaders/email/)
[
## 📄️ EPub
EPUB is an e-book file format
](https://python.langchain.com/docs/integrations/document_loaders/epub/)
[
## 📄️ Etherscan
Etherscan is the leading blockchain
](https://python.langchain.com/docs/integrations/document_loaders/etherscan/)
[
## 📄️ EverNote
EverNote is intended for archiving and
](https://python.langchain.com/docs/integrations/document_loaders/evernote/)
[
## 📄️ Facebook Chat
Messenger) is an
](https://python.langchain.com/docs/integrations/document_loaders/facebook_chat/)
[
## 📄️ Fauna
Fauna is a Document Database.
](https://python.langchain.com/docs/integrations/document_loaders/fauna/)
[
## 📄️ Figma
Figma is a collaborative web application for
](https://python.langchain.com/docs/integrations/document_loaders/figma/)
[
## 📄️ FireCrawl
FireCrawl crawls and convert any
](https://python.langchain.com/docs/integrations/document_loaders/firecrawl/)
[
## 📄️ Geopandas
Geopandas is an
](https://python.langchain.com/docs/integrations/document_loaders/geopandas/)
[
## 📄️ Git
Git is a distributed version
](https://python.langchain.com/docs/integrations/document_loaders/git/)
[
## 📄️ GitBook
GitBook is a modern documentation
](https://python.langchain.com/docs/integrations/document_loaders/gitbook/)
[
## 📄️ GitHub
This notebook shows how you can load issues and pull requests (PRs) for
](https://python.langchain.com/docs/integrations/document_loaders/github/)
[
## 📄️ Glue Catalog
The \[AWS Glue Data
](https://python.langchain.com/docs/integrations/document_loaders/glue_catalog/)
[
## 📄️ Google AlloyDB for PostgreSQL
AlloyDB is a fully managed
](https://python.langchain.com/docs/integrations/document_loaders/google_alloydb/)
[
## 📄️ Google BigQuery
Google BigQuery is a serverless
](https://python.langchain.com/docs/integrations/document_loaders/google_bigquery/)
[
## 📄️ Google Bigtable
Bigtable is a key-value and
](https://python.langchain.com/docs/integrations/document_loaders/google_bigtable/)
[
## 📄️ Google Cloud SQL for SQL server
Cloud SQL is a fully managed
](https://python.langchain.com/docs/integrations/document_loaders/google_cloud_sql_mssql/)
[
## 📄️ Google Cloud SQL for MySQL
Cloud SQL is a fully managed
](https://python.langchain.com/docs/integrations/document_loaders/google_cloud_sql_mysql/)
[
## 📄️ Google Cloud SQL for PostgreSQL
Cloud SQL for PostgreSQL
](https://python.langchain.com/docs/integrations/document_loaders/google_cloud_sql_pg/)
[
## 📄️ Google Cloud Storage Directory
\[Google Cloud
](https://python.langchain.com/docs/integrations/document_loaders/google_cloud_storage_directory/)
[
## 📄️ Google Cloud Storage File
\[Google Cloud
](https://python.langchain.com/docs/integrations/document_loaders/google_cloud_storage_file/)
[
## 📄️ Google Firestore in Datastore Mode
Firestore in Datastore Mode is a
](https://python.langchain.com/docs/integrations/document_loaders/google_datastore/)
[
## 📄️ Google Drive
Google Drive is a file
](https://python.langchain.com/docs/integrations/document_loaders/google_drive/)
[
## 📄️ Google El Carro for Oracle Workloads
Google \[El Carro Oracle
](https://python.langchain.com/docs/integrations/document_loaders/google_el_carro/)
[
## 📄️ Google Firestore (Native Mode)
Firestore is a serverless
](https://python.langchain.com/docs/integrations/document_loaders/google_firestore/)
[
## 📄️ Google Memorystore for Redis
\[Google Memorystore for
](https://python.langchain.com/docs/integrations/document_loaders/google_memorystore_redis/)
[
## 📄️ Google Spanner
Spanner is a highly scalable
](https://python.langchain.com/docs/integrations/document_loaders/google_spanner/)
[
## 📄️ Google Speech-to-Text Audio Transcripts
The GoogleSpeechToTextLoader allows to transcribe audio files with the
](https://python.langchain.com/docs/integrations/document_loaders/google_speech_to_text/)
[
## 📄️ Grobid
GROBID is a machine learning library for extracting, parsing, and
](https://python.langchain.com/docs/integrations/document_loaders/grobid/)
[
## 📄️ Gutenberg
Project Gutenberg is an online
](https://python.langchain.com/docs/integrations/document_loaders/gutenberg/)
[
## 📄️ Hacker News
Hacker News (sometimes
](https://python.langchain.com/docs/integrations/document_loaders/hacker_news/)
[
## 📄️ Huawei OBS Directory
The following code demonstrates how to load objects from the Huawei OBS
](https://python.langchain.com/docs/integrations/document_loaders/huawei_obs_directory/)
[
## 📄️ Huawei OBS File
The following code demonstrates how to load an object from the Huawei
](https://python.langchain.com/docs/integrations/document_loaders/huawei_obs_file/)
[
## 📄️ HuggingFace dataset
The Hugging Face Hub is home
](https://python.langchain.com/docs/integrations/document_loaders/hugging_face_dataset/)
[
## 📄️ iFixit
iFixit is the largest, open repair community
](https://python.langchain.com/docs/integrations/document_loaders/ifixit/)
[
## 📄️ Images
This covers how to load images such as JPG or PNG into a document
](https://python.langchain.com/docs/integrations/document_loaders/image/)
[
## 📄️ Image captions
By default, the loader utilizes the pre-trained \[Salesforce BLIP image
](https://python.langchain.com/docs/integrations/document_loaders/image_captions/)
[
## 📄️ IMSDb
IMSDb is the Internet Movie Script Database.
](https://python.langchain.com/docs/integrations/document_loaders/imsdb/)
[
## 📄️ Iugu
Iugu is a Brazilian services and software as
](https://python.langchain.com/docs/integrations/document_loaders/iugu/)
[
## 📄️ Joplin
Joplin is an open-source note-taking app.
](https://python.langchain.com/docs/integrations/document_loaders/joplin/)
[
## 📄️ Jupyter Notebook
\[Jupyter
](https://python.langchain.com/docs/integrations/document_loaders/jupyter_notebook/)
[
## 📄️ lakeFS
lakeFS provides scalable version control
](https://python.langchain.com/docs/integrations/document_loaders/lakefs/)
[
## 📄️ LarkSuite (FeiShu)
LarkSuite is an enterprise collaboration
](https://python.langchain.com/docs/integrations/document_loaders/larksuite/)
[
## 📄️ LLM Sherpa
This notebook covers how to use LLM Sherpa to load files of many
](https://python.langchain.com/docs/integrations/document_loaders/llmsherpa/)
[
## 📄️ Mastodon
Mastodon is a federated social media and
](https://python.langchain.com/docs/integrations/document_loaders/mastodon/)
[
## 📄️ MediaWiki Dump
\[MediaWiki XML
](https://python.langchain.com/docs/integrations/document_loaders/mediawikidump/)
[
## 📄️ Merge Documents Loader
Merge the documents returned from a set of specified data loaders.
](https://python.langchain.com/docs/integrations/document_loaders/merge_doc/)
[
## 📄️ mhtml
MHTML is used both for emails and for archived webpages.
](https://python.langchain.com/docs/integrations/document_loaders/mhtml/)
[
## 📄️ Microsoft Excel
The UnstructuredExcelLoader is used to load Microsoft Excel files.
](https://python.langchain.com/docs/integrations/document_loaders/microsoft_excel/)
[
## 📄️ Microsoft OneDrive
Microsoft OneDrive (formerly
](https://python.langchain.com/docs/integrations/document_loaders/microsoft_onedrive/)
[
## 📄️ Microsoft OneNote
This notebook covers how to load documents from OneNote.
](https://python.langchain.com/docs/integrations/document_loaders/microsoft_onenote/)
[
## 📄️ Microsoft PowerPoint
\[Microsoft
](https://python.langchain.com/docs/integrations/document_loaders/microsoft_powerpoint/)
[
## 📄️ Microsoft SharePoint
Microsoft SharePoint is a
](https://python.langchain.com/docs/integrations/document_loaders/microsoft_sharepoint/)
[
## 📄️ Microsoft Word
Microsoft Word
](https://python.langchain.com/docs/integrations/document_loaders/microsoft_word/)
[
## 📄️ Modern Treasury
Modern Treasury simplifies complex
](https://python.langchain.com/docs/integrations/document_loaders/modern_treasury/)
[
## 📄️ MongoDB
MongoDB is a NoSQL , document-oriented
](https://python.langchain.com/docs/integrations/document_loaders/mongodb/)
[
## 📄️ News URL
This covers how to load HTML news articles from a list of URLs into a
](https://python.langchain.com/docs/integrations/document_loaders/news/)
[
## 📄️ Notion DB 1/2
Notion is a collaboration platform with
](https://python.langchain.com/docs/integrations/document_loaders/notion/)
[
## 📄️ Notion DB 2/2
Notion is a collaboration platform with
](https://python.langchain.com/docs/integrations/document_loaders/notiondb/)
[
## 📄️ Nuclia
Nuclia automatically indexes your unstructured
](https://python.langchain.com/docs/integrations/document_loaders/nuclia/)
[
## 📄️ Obsidian
Obsidian is a powerful and extensible
](https://python.langchain.com/docs/integrations/document_loaders/obsidian/)
[
## 📄️ Open Document Format (ODT)
The \[Open Document Format for Office Applications
](https://python.langchain.com/docs/integrations/document_loaders/odt/)
[
## 📄️ Open City Data
Socrata
](https://python.langchain.com/docs/integrations/document_loaders/open_city_data/)
[
## 📄️ Oracle Autonomous Database
Oracle autonomous database is a cloud database that uses machine
](https://python.langchain.com/docs/integrations/document_loaders/oracleadb_loader/)
[
## 📄️ Org-mode
A Org Mode document is a
](https://python.langchain.com/docs/integrations/document_loaders/org_mode/)
[
## 📄️ Pandas DataFrame
This notebook goes over how to load data from a
](https://python.langchain.com/docs/integrations/document_loaders/pandas_dataframe/)
[
## 📄️ Pebblo Safe DocumentLoader
Pebblo enables developers to
](https://python.langchain.com/docs/integrations/document_loaders/pebblo/)
[
## 📄️ Polars DataFrame
This notebook goes over how to load data from a
](https://python.langchain.com/docs/integrations/document_loaders/polars_dataframe/)
[
## 📄️ Psychic
This notebook covers how to load documents from Psychic. See
](https://python.langchain.com/docs/integrations/document_loaders/psychic/)
[
## 📄️ PubMed
PubMed® by
](https://python.langchain.com/docs/integrations/document_loaders/pubmed/)
[
## 📄️ PySpark
This notebook goes over how to load data from a
](https://python.langchain.com/docs/integrations/document_loaders/pyspark_dataframe/)
[
## 📄️ Quip
Quip is a collaborative productivity software
](https://python.langchain.com/docs/integrations/document_loaders/quip/)
[
## 📄️ ReadTheDocs Documentation
Read the Docs is an open-sourced free
](https://python.langchain.com/docs/integrations/document_loaders/readthedocs_documentation/)
[
## 📄️ Recursive URL
We may want to load all URLs under a root directory.
](https://python.langchain.com/docs/integrations/document_loaders/recursive_url/)
[
## 📄️ Reddit
Reddit is an American social news
](https://python.langchain.com/docs/integrations/document_loaders/reddit/)
[
## 📄️ Roam
ROAM is a note-taking tool for networked
](https://python.langchain.com/docs/integrations/document_loaders/roam/)
[
## 📄️ Rockset
Rockset is a real-time analytics database which enables queries on
](https://python.langchain.com/docs/integrations/document_loaders/rockset/)
[
## 📄️ rspace
This notebook shows how to use the RSpace document loader to import
](https://python.langchain.com/docs/integrations/document_loaders/rspace/)
[
## 📄️ RSS Feeds
This covers how to load HTML news articles from a list of RSS feed URLs
](https://python.langchain.com/docs/integrations/document_loaders/rss/)
[
## 📄️ RST
A \[reStructured Text
](https://python.langchain.com/docs/integrations/document_loaders/rst/)
[
## 📄️ Sitemap
Extends from the WebBaseLoader, SitemapLoader loads a sitemap from a
](https://python.langchain.com/docs/integrations/document_loaders/sitemap/)
[
## 📄️ Slack
Slack is an instant messaging program.
](https://python.langchain.com/docs/integrations/document_loaders/slack/)
[
## 📄️ Snowflake
This notebook goes over how to load documents from Snowflake
](https://python.langchain.com/docs/integrations/document_loaders/snowflake/)
[
## 📄️ Source Code
This notebook covers how to load source code files using a special
](https://python.langchain.com/docs/integrations/document_loaders/source_code/)
[
## 📄️ Spreedly
Spreedly is a service that allows you to
](https://python.langchain.com/docs/integrations/document_loaders/spreedly/)
[
## 📄️ Stripe
Stripe is an Irish-American financial
](https://python.langchain.com/docs/integrations/document_loaders/stripe/)
[
## 📄️ Subtitle
\[The SubRip file
](https://python.langchain.com/docs/integrations/document_loaders/subtitle/)
[
## 📄️ SurrealDB
SurrealDB is an end-to-end cloud-native
](https://python.langchain.com/docs/integrations/document_loaders/surrealdb/)
[
## 📄️ Telegram
Telegram Messenger is a globally
](https://python.langchain.com/docs/integrations/document_loaders/telegram/)
[
## 📄️ Tencent COS Directory
\[Tencent Cloud Object Storage
](https://python.langchain.com/docs/integrations/document_loaders/tencent_cos_directory/)
[
## 📄️ Tencent COS File
\[Tencent Cloud Object Storage
](https://python.langchain.com/docs/integrations/document_loaders/tencent_cos_file/)
[
## 📄️ TensorFlow Datasets
TensorFlow Datasets is a
](https://python.langchain.com/docs/integrations/document_loaders/tensorflow_datasets/)
[
## 📄️ TiDB
TiDB Cloud, is a comprehensive
](https://python.langchain.com/docs/integrations/document_loaders/tidb/)
[
## 📄️ 2Markdown
2markdown service transforms website content
](https://python.langchain.com/docs/integrations/document_loaders/tomarkdown/)
[
## 📄️ TOML
TOML is a file format for
](https://python.langchain.com/docs/integrations/document_loaders/toml/)
[
## 📄️ Trello
Trello is a web-based
](https://python.langchain.com/docs/integrations/document_loaders/trello/)
[
## 📄️ TSV
A \[tab-separated values
](https://python.langchain.com/docs/integrations/document_loaders/tsv/)
[
## 📄️ Twitter
Twitter is an online social media and social
](https://python.langchain.com/docs/integrations/document_loaders/twitter/)
[
## 📄️ Unstructured File
This notebook covers how to use Unstructured package to load files of
](https://python.langchain.com/docs/integrations/document_loaders/unstructured_file/)
[
## 📄️ URL
This example covers how to load HTML documents from a list of URLs
](https://python.langchain.com/docs/integrations/document_loaders/url/)
[
## 📄️ Vsdx
A visio file (with
](https://python.langchain.com/docs/integrations/document_loaders/vsdx/)
[
## 📄️ Weather
OpenWeatherMap is an open-source
](https://python.langchain.com/docs/integrations/document_loaders/weather/)
[
## 📄️ WebBaseLoader
This covers how to use WebBaseLoader to load all text from HTML
](https://python.langchain.com/docs/integrations/document_loaders/web_base/)
[
## 📄️ WhatsApp Chat
WhatsApp (also called
](https://python.langchain.com/docs/integrations/document_loaders/whatsapp_chat/)
[
## 📄️ Wikipedia
Wikipedia is a multilingual free online
](https://python.langchain.com/docs/integrations/document_loaders/wikipedia/)
[
## 📄️ XML
The UnstructuredXMLLoader is used to load XML files. The loader
](https://python.langchain.com/docs/integrations/document_loaders/xml/)
[
## 📄️ Xorbits Pandas DataFrame
This notebook goes over how to load data from a
](https://python.langchain.com/docs/integrations/document_loaders/xorbits/)
[
## 📄️ YouTube audio
Building chat or QA applications on YouTube videos is a topic of high
](https://python.langchain.com/docs/integrations/document_loaders/youtube_audio/)
[
## 📄️ YouTube transcripts
YouTube is an online video sharing and
](https://python.langchain.com/docs/integrations/document_loaders/youtube_transcript/)
[
## 📄️ Yuque
Yuque is a professional cloud-based
](https://python.langchain.com/docs/integrations/document_loaders/yuque/) | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:38.423Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/",
"description": null,
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "5634",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"document_loaders\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:38 GMT",
"etag": "W/\"ebc2f105d855a9d84266e7586ee437f2\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::56wnp-1713753518179-2981d006b2a7"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/",
"property": "og:url"
},
{
"content": "Document loaders | 🦜️🔗 LangChain",
"property": "og:title"
}
],
"title": "Document loaders | 🦜️🔗 LangChain"
} | Document loaders
📄️ acreom
acreom is a dev-first knowledge base with tasks
📄️ AirbyteLoader
Airbyte is a data integration
📄️ Airbyte CDK (Deprecated)
Note: AirbyteCDKLoader is deprecated. Please use
📄️ Airbyte Gong (Deprecated)
Note: This connector-specific loader is deprecated. Please use
📄️ Airbyte Hubspot (Deprecated)
Note: AirbyteHubspotLoader is deprecated. Please use
📄️ Airbyte JSON (Deprecated)
Note: AirbyteJSONLoader is deprecated. Please use
📄️ Airbyte Salesforce (Deprecated)
Note: This connector-specific loader is deprecated. Please use
📄️ Airbyte Shopify (Deprecated)
Note: This connector-specific loader is deprecated. Please use
📄️ Airbyte Stripe (Deprecated)
Note: This connector-specific loader is deprecated. Please use
📄️ Airbyte Typeform (Deprecated)
Note: This connector-specific loader is deprecated. Please use
📄️ Airbyte Zendesk Support (Deprecated)
Note: This connector-specific loader is deprecated. Please use
📄️ Airtable
- Get your API key
📄️ Alibaba Cloud MaxCompute
[Alibaba Cloud
📄️ Amazon Textract
[Amazon
📄️ Apify Dataset
Apify Dataset is a
📄️ ArcGIS
This notebook demonstrates the use of the
📄️ Arxiv
arXiv is an open-access archive for 2 million
📄️ AssemblyAI Audio Transcripts
The AssemblyAIAudioTranscriptLoader allows to transcribe audio files
📄️ AstraDB
DataStax Astra DB
📄️ Async Chromium
Chromium is one of the browsers supported by Playwright, a library used
📄️ AsyncHtml
AsyncHtmlLoader loads raw HTML from a list of URLs concurrently.
📄️ Athena
Amazon Athena is a serverless,
📄️ AWS S3 Directory
[Amazon Simple Storage Service (Amazon
📄️ AWS S3 File
[Amazon Simple Storage Service (Amazon
📄️ AZLyrics
AZLyrics is a large, legal, every day
📄️ Azure AI Data
Azure AI Studio provides the capability to
📄️ Azure Blob Storage Container
[Azure Blob
📄️ Azure Blob Storage File
[Azure
📄️ Azure AI Document Intelligence
Azure AI Document Intelligence
📄️ BibTeX
BibTeX is a file format and
📄️ BiliBili
Bilibili is one of the most beloved
📄️ Blackboard
Blackboard Learn
📄️ Blockchain
Overview
📄️ Brave Search
Brave Search is a search
📄️ Browserless
Browserless is a service that allows you to run headless Chrome
📄️ Cassandra
Cassandra is a NoSQL, row-oriented,
📄️ ChatGPT Data
ChatGPT is an artificial intelligence (AI)
📄️ College Confidential
College Confidential gives
📄️ Concurrent Loader
Works just like the GenericLoader but concurrently for those who choose
📄️ Confluence
Confluence is a wiki
📄️ CoNLL-U
CoNLL-U is revised
📄️ Copy Paste
This notebook covers how to load a document object from something you
📄️ Couchbase
Couchbase is an award-winning distributed
📄️ CSV
A [comma-separated values
📄️ Cube Semantic Layer
This notebook demonstrates the process of retrieving Cube’s data model
📄️ Datadog Logs
Datadog is a monitoring and analytics
📄️ Diffbot
Unlike traditional web scraping tools,
📄️ Discord
Discord is a VoIP and instant messaging social
📄️ Docugami
This notebook covers how to load documents from Docugami. It provides
📄️ Docusaurus
Docusaurus is a static-site generator which
📄️ Dropbox
Dropbox is a file hosting
📄️ DuckDB
DuckDB is an in-process SQL OLAP database
📄️ Email
This notebook shows how to load email (.eml) or Microsoft Outlook
📄️ EPub
EPUB is an e-book file format
📄️ Etherscan
Etherscan is the leading blockchain
📄️ EverNote
EverNote is intended for archiving and
📄️ Facebook Chat
Messenger) is an
📄️ Fauna
Fauna is a Document Database.
📄️ Figma
Figma is a collaborative web application for
📄️ FireCrawl
FireCrawl crawls and convert any
📄️ Geopandas
Geopandas is an
📄️ Git
Git is a distributed version
📄️ GitBook
GitBook is a modern documentation
📄️ GitHub
This notebook shows how you can load issues and pull requests (PRs) for
📄️ Glue Catalog
The [AWS Glue Data
📄️ Google AlloyDB for PostgreSQL
AlloyDB is a fully managed
📄️ Google BigQuery
Google BigQuery is a serverless
📄️ Google Bigtable
Bigtable is a key-value and
📄️ Google Cloud SQL for SQL server
Cloud SQL is a fully managed
📄️ Google Cloud SQL for MySQL
Cloud SQL is a fully managed
📄️ Google Cloud SQL for PostgreSQL
Cloud SQL for PostgreSQL
📄️ Google Cloud Storage Directory
[Google Cloud
📄️ Google Cloud Storage File
[Google Cloud
📄️ Google Firestore in Datastore Mode
Firestore in Datastore Mode is a
📄️ Google Drive
Google Drive is a file
📄️ Google El Carro for Oracle Workloads
Google [El Carro Oracle
📄️ Google Firestore (Native Mode)
Firestore is a serverless
📄️ Google Memorystore for Redis
[Google Memorystore for
📄️ Google Spanner
Spanner is a highly scalable
📄️ Google Speech-to-Text Audio Transcripts
The GoogleSpeechToTextLoader allows to transcribe audio files with the
📄️ Grobid
GROBID is a machine learning library for extracting, parsing, and
📄️ Gutenberg
Project Gutenberg is an online
📄️ Hacker News
Hacker News (sometimes
📄️ Huawei OBS Directory
The following code demonstrates how to load objects from the Huawei OBS
📄️ Huawei OBS File
The following code demonstrates how to load an object from the Huawei
📄️ HuggingFace dataset
The Hugging Face Hub is home
📄️ iFixit
iFixit is the largest, open repair community
📄️ Images
This covers how to load images such as JPG or PNG into a document
📄️ Image captions
By default, the loader utilizes the pre-trained [Salesforce BLIP image
📄️ IMSDb
IMSDb is the Internet Movie Script Database.
📄️ Iugu
Iugu is a Brazilian services and software as
📄️ Joplin
Joplin is an open-source note-taking app.
📄️ Jupyter Notebook
[Jupyter
📄️ lakeFS
lakeFS provides scalable version control
📄️ LarkSuite (FeiShu)
LarkSuite is an enterprise collaboration
📄️ LLM Sherpa
This notebook covers how to use LLM Sherpa to load files of many
📄️ Mastodon
Mastodon is a federated social media and
📄️ MediaWiki Dump
[MediaWiki XML
📄️ Merge Documents Loader
Merge the documents returned from a set of specified data loaders.
📄️ mhtml
MHTML is used both for emails and for archived webpages.
📄️ Microsoft Excel
The UnstructuredExcelLoader is used to load Microsoft Excel files.
📄️ Microsoft OneDrive
Microsoft OneDrive (formerly
📄️ Microsoft OneNote
This notebook covers how to load documents from OneNote.
📄️ Microsoft PowerPoint
[Microsoft
📄️ Microsoft SharePoint
Microsoft SharePoint is a
📄️ Microsoft Word
Microsoft Word
📄️ Modern Treasury
Modern Treasury simplifies complex
📄️ MongoDB
MongoDB is a NoSQL , document-oriented
📄️ News URL
This covers how to load HTML news articles from a list of URLs into a
📄️ Notion DB 1/2
Notion is a collaboration platform with
📄️ Notion DB 2/2
Notion is a collaboration platform with
📄️ Nuclia
Nuclia automatically indexes your unstructured
📄️ Obsidian
Obsidian is a powerful and extensible
📄️ Open Document Format (ODT)
The [Open Document Format for Office Applications
📄️ Open City Data
Socrata
📄️ Oracle Autonomous Database
Oracle autonomous database is a cloud database that uses machine
📄️ Org-mode
A Org Mode document is a
📄️ Pandas DataFrame
This notebook goes over how to load data from a
📄️ Pebblo Safe DocumentLoader
Pebblo enables developers to
📄️ Polars DataFrame
This notebook goes over how to load data from a
📄️ Psychic
This notebook covers how to load documents from Psychic. See
📄️ PubMed
PubMed® by
📄️ PySpark
This notebook goes over how to load data from a
📄️ Quip
Quip is a collaborative productivity software
📄️ ReadTheDocs Documentation
Read the Docs is an open-sourced free
📄️ Recursive URL
We may want to load all URLs under a root directory.
📄️ Reddit
Reddit is an American social news
📄️ Roam
ROAM is a note-taking tool for networked
📄️ Rockset
Rockset is a real-time analytics database which enables queries on
📄️ rspace
This notebook shows how to use the RSpace document loader to import
📄️ RSS Feeds
This covers how to load HTML news articles from a list of RSS feed URLs
📄️ RST
A [reStructured Text
📄️ Sitemap
Extends from the WebBaseLoader, SitemapLoader loads a sitemap from a
📄️ Slack
Slack is an instant messaging program.
📄️ Snowflake
This notebook goes over how to load documents from Snowflake
📄️ Source Code
This notebook covers how to load source code files using a special
📄️ Spreedly
Spreedly is a service that allows you to
📄️ Stripe
Stripe is an Irish-American financial
📄️ Subtitle
[The SubRip file
📄️ SurrealDB
SurrealDB is an end-to-end cloud-native
📄️ Telegram
Telegram Messenger is a globally
📄️ Tencent COS Directory
[Tencent Cloud Object Storage
📄️ Tencent COS File
[Tencent Cloud Object Storage
📄️ TensorFlow Datasets
TensorFlow Datasets is a
📄️ TiDB
TiDB Cloud, is a comprehensive
📄️ 2Markdown
2markdown service transforms website content
📄️ TOML
TOML is a file format for
📄️ Trello
Trello is a web-based
📄️ TSV
A [tab-separated values
📄️ Twitter
Twitter is an online social media and social
📄️ Unstructured File
This notebook covers how to use Unstructured package to load files of
📄️ URL
This example covers how to load HTML documents from a list of URLs
📄️ Vsdx
A visio file (with
📄️ Weather
OpenWeatherMap is an open-source
📄️ WebBaseLoader
This covers how to use WebBaseLoader to load all text from HTML
📄️ WhatsApp Chat
WhatsApp (also called
📄️ Wikipedia
Wikipedia is a multilingual free online
📄️ XML
The UnstructuredXMLLoader is used to load XML files. The loader
📄️ Xorbits Pandas DataFrame
This notebook goes over how to load data from a
📄️ YouTube audio
Building chat or QA applications on YouTube videos is a topic of high
📄️ YouTube transcripts
YouTube is an online video sharing and
📄️ Yuque
Yuque is a professional cloud-based |
https://python.langchain.com/docs/integrations/chat/llama_api/ | ## ChatLlamaAPI
This notebook shows how to use LangChain with [LlamaAPI](https://llama-api.com/) - a hosted version of Llama2 that adds in support for function calling.
%pip install --upgrade --quiet llamaapi
```
from llamaapi import LlamaAPI# Replace 'Your_API_Token' with your actual API tokenllama = LlamaAPI("Your_API_Token")
```
```
from langchain_experimental.llms import ChatLlamaAPI
```
```
/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.12) is available. It's recommended that you update to the latest version using `pip install -U deeplake`. warnings.warn(
```
```
model = ChatLlamaAPI(client=llama)
```
```
from langchain.chains import create_tagging_chainschema = { "properties": { "sentiment": { "type": "string", "description": "the sentiment encountered in the passage", }, "aggressiveness": { "type": "integer", "description": "a 0-10 score of how aggressive the passage is", }, "language": {"type": "string", "description": "the language of the passage"}, }}chain = create_tagging_chain(schema, model)
```
```
chain.run("give me your money")
```
```
{'sentiment': 'aggressive', 'aggressiveness': 8, 'language': 'english'}
```
| null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:39.086Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/llama_api/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/llama_api/",
"description": "This notebook shows how to use LangChain with",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4369",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"llama_api\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:38 GMT",
"etag": "W/\"e5f242c413bbe4b29de105b1822235af\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::fc95f-1713753518974-90f7fa4001db"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/llama_api/",
"property": "og:url"
},
{
"content": "ChatLlamaAPI | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook shows how to use LangChain with",
"property": "og:description"
}
],
"title": "ChatLlamaAPI | 🦜️🔗 LangChain"
} | ChatLlamaAPI
This notebook shows how to use LangChain with LlamaAPI - a hosted version of Llama2 that adds in support for function calling.
%pip install --upgrade --quiet llamaapi
from llamaapi import LlamaAPI
# Replace 'Your_API_Token' with your actual API token
llama = LlamaAPI("Your_API_Token")
from langchain_experimental.llms import ChatLlamaAPI
/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.12) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.
warnings.warn(
model = ChatLlamaAPI(client=llama)
from langchain.chains import create_tagging_chain
schema = {
"properties": {
"sentiment": {
"type": "string",
"description": "the sentiment encountered in the passage",
},
"aggressiveness": {
"type": "integer",
"description": "a 0-10 score of how aggressive the passage is",
},
"language": {"type": "string", "description": "the language of the passage"},
}
}
chain = create_tagging_chain(schema, model)
chain.run("give me your money")
{'sentiment': 'aggressive', 'aggressiveness': 8, 'language': 'english'} |
https://python.langchain.com/docs/integrations/document_loaders/acreom/ | Below is an example of how to load a local acreom vault into LangChain. As the local vault in acreom is a folder of plain text .md files, the loader requires the path to the directory.
Vault files may contain some metadata which is stored as a YAML header. These values will be added to the document’s metadata if `collect_metadata` is set to true.
```
loader = AcreomLoader("<path-to-acreom-vault>", collect_metadata=False)
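# Hedged sketch, not part of the original page: loading returns one Document per
# .md file in the vault; with collect_metadata=True the YAML front matter of each
# file is also copied into the document's metadata.
docs = loader.load()
print(len(docs))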
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:39.257Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/acreom/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/acreom/",
"description": "acreom is a dev-first knowledge base with tasks",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"acreom\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:39 GMT",
"etag": "W/\"e43d2e0d9e0e8174bf077b30a90d92cc\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::g595f-1713753519043-875656bfaed2"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/acreom/",
"property": "og:url"
},
{
"content": "acreom | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "acreom is a dev-first knowledge base with tasks",
"property": "og:description"
}
],
"title": "acreom | 🦜️🔗 LangChain"
} | Below is an example on how to load a local acreom vault into Langchain. As the local vault in acreom is a folder of plain text .md files, the loader requires the path to the directory.
Vault files may contain some metadata which is stored as a YAML header. These values will be added to the document’s metadata if collect_metadata is set to true.
loader = AcreomLoader("<path-to-acreom-vault>", collect_metadata=False) |
https://python.langchain.com/docs/integrations/chat/litellm/ | ## ChatLiteLLM
[LiteLLM](https://github.com/BerriAI/litellm) is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc.
This notebook covers how to get started with using Langchain + the LiteLLM I/O library.
```
from langchain_community.chat_models import ChatLiteLLMfrom langchain_core.messages import HumanMessage
```
```
chat = ChatLiteLLM(model="gpt-3.5-turbo")
```
```
messages = [ HumanMessage( content="Translate this sentence from English to French. I love programming." )]chat(messages)
```
```
AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False)
```
## `ChatLiteLLM` also supports async and streaming functionality:[](#chatlitellm-also-supports-async-and-streaming-functionality "Direct link to chatlitellm-also-supports-async-and-streaming-functionality")
```
from langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
```
```
await chat.agenerate([messages])
```
```
LLMResult(generations=[[ChatGeneration(text=" J'aime programmer.", generation_info=None, message=AIMessage(content=" J'aime programmer.", additional_kwargs={}, example=False))]], llm_output={}, run=[RunInfo(run_id=UUID('8cc8fb68-1c35-439c-96a0-695036a93652'))])
```
```
chat = ChatLiteLLM( streaming=True, verbose=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),)chat(messages)
```
```
AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False)
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:39.642Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/litellm/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/litellm/",
"description": "LiteLLM is a library that",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3432",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"litellm\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:39 GMT",
"etag": "W/\"f9d432b4b8fcc9729ed977731ee23e74\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::6zqpn-1713753519310-a6508856940d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/litellm/",
"property": "og:url"
},
{
"content": "ChatLiteLLM | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "LiteLLM is a library that",
"property": "og:description"
}
],
"title": "ChatLiteLLM | 🦜️🔗 LangChain"
} | ChatLiteLLM
LiteLLM is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc.
This notebook covers how to get started with using Langchain + the LiteLLM I/O library.
from langchain_community.chat_models import ChatLiteLLM
from langchain_core.messages import HumanMessage
chat = ChatLiteLLM(model="gpt-3.5-turbo")
messages = [
HumanMessage(
content="Translate this sentence from English to French. I love programming."
)
]
chat(messages)
AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False)
ChatLiteLLM also supports async and streaming functionality:
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
await chat.agenerate([messages])
LLMResult(generations=[[ChatGeneration(text=" J'aime programmer.", generation_info=None, message=AIMessage(content=" J'aime programmer.", additional_kwargs={}, example=False))]], llm_output={}, run=[RunInfo(run_id=UUID('8cc8fb68-1c35-439c-96a0-695036a93652'))])
chat = ChatLiteLLM(
streaming=True,
verbose=True,
callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
)
chat(messages)
AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False) |
https://python.langchain.com/docs/integrations/chat/llama_edge/ | ## LlamaEdge
[LlamaEdge](https://github.com/second-state/LlamaEdge) allows you to chat with LLMs of [GGUF](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/README.md) format both locally and via chat service.
* `LlamaEdgeChatService` provides developers an OpenAI API compatible service to chat with LLMs via HTTP requests.
* `LlamaEdgeChatLocal` enables developers to chat with LLMs locally (coming soon).
Both `LlamaEdgeChatService` and `LlamaEdgeChatLocal` run on the infrastructure driven by [WasmEdge Runtime](https://wasmedge.org/), which provides a lightweight and portable WebAssembly container environment for LLM inference tasks.
## Chat via API Service[](#chat-via-api-service "Direct link to Chat via API Service")
`LlamaEdgeChatService` works on top of the `llama-api-server`. By following the steps in the [llama-api-server quick-start](https://github.com/second-state/llama-utils/tree/main/api-server#readme), you can host your own API service and chat with any model you like, from any device, as long as an internet connection is available.
```
from langchain_community.chat_models.llama_edge import LlamaEdgeChatServicefrom langchain_core.messages import HumanMessage, SystemMessage
```
### Chat with LLMs in the non-streaming mode[](#chat-with-llms-in-the-non-streaming-mode "Direct link to Chat with LLMs in the non-streaming mode")
```
# service urlservice_url = "https://b008-54-186-154-209.ngrok-free.app"# create wasm-chat service instancechat = LlamaEdgeChatService(service_url=service_url)# create message sequencesystem_message = SystemMessage(content="You are an AI assistant")user_message = HumanMessage(content="What is the capital of France?")messages = [system_message, user_message]# chat with wasm-chat serviceresponse = chat(messages)print(f"[Bot] {response.content}")
```
```
[Bot] Hello! The capital of France is Paris.
```
### Chat with LLMs in the streaming mode[](#chat-with-llms-in-the-streaming-mode "Direct link to Chat with LLMs in the streaming mode")
```
# service urlservice_url = "https://b008-54-186-154-209.ngrok-free.app"# create wasm-chat service instancechat = LlamaEdgeChatService(service_url=service_url, streaming=True)# create message sequencesystem_message = SystemMessage(content="You are an AI assistant")user_message = HumanMessage(content="What is the capital of Norway?")messages = [ system_message, user_message,]output = ""for chunk in chat.stream(messages): # print(chunk.content, end="", flush=True) output += chunk.contentprint(f"[Bot] {output}")
```
```
[Bot] Hello! I'm happy to help you with your question. The capital of Norway is Oslo.
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:39.477Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/llama_edge/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/llama_edge/",
"description": "LlamaEdge allows you to",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3432",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"llama_edge\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:39 GMT",
"etag": "W/\"23646162382e2bdac8f930945c342835\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::f7wpm-1713753519365-5fd5230b9b60"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/llama_edge/",
"property": "og:url"
},
{
"content": "LlamaEdge | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "LlamaEdge allows you to",
"property": "og:description"
}
],
"title": "LlamaEdge | 🦜️🔗 LangChain"
} | LlamaEdge
LlamaEdge allows you to chat with LLMs of GGUF format both locally and via chat service.
LlamaEdgeChatService provides developers an OpenAI API compatible service to chat with LLMs via HTTP requests.
LlamaEdgeChatLocal enables developers to chat with LLMs locally (coming soon).
Both LlamaEdgeChatService and LlamaEdgeChatLocal run on the infrastructure driven by WasmEdge Runtime, which provides a lightweight and portable WebAssembly container environment for LLM inference tasks.
Chat via API Service
LlamaEdgeChatService works on the llama-api-server. Following the steps in llama-api-server quick-start, you can host your own API service so that you can chat with any models you like on any device you have anywhere as long as the internet is available.
from langchain_community.chat_models.llama_edge import LlamaEdgeChatService
from langchain_core.messages import HumanMessage, SystemMessage
Chat with LLMs in the non-streaming mode
# service url
service_url = "https://b008-54-186-154-209.ngrok-free.app"
# create wasm-chat service instance
chat = LlamaEdgeChatService(service_url=service_url)
# create message sequence
system_message = SystemMessage(content="You are an AI assistant")
user_message = HumanMessage(content="What is the capital of France?")
messages = [system_message, user_message]
# chat with wasm-chat service
response = chat(messages)
print(f"[Bot] {response.content}")
[Bot] Hello! The capital of France is Paris.
Chat with LLMs in the streaming mode
# service url
service_url = "https://b008-54-186-154-209.ngrok-free.app"
# create wasm-chat service instance
chat = LlamaEdgeChatService(service_url=service_url, streaming=True)
# create message sequence
system_message = SystemMessage(content="You are an AI assistant")
user_message = HumanMessage(content="What is the capital of Norway?")
messages = [
system_message,
user_message,
]
output = ""
for chunk in chat.stream(messages):
# print(chunk.content, end="", flush=True)
output += chunk.content
print(f"[Bot] {output}")
[Bot] Hello! I'm happy to help you with your question. The capital of Norway is Oslo. |
https://python.langchain.com/docs/integrations/document_loaders/airbyte_cdk/ | ## Airbyte CDK (Deprecated)
Note: `AirbyteCDKLoader` is deprecated. Please use [`AirbyteLoader`](https://python.langchain.com/docs/integrations/document_loaders/airbyte/) instead.
> [Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
Many source connectors are implemented using the [Airbyte CDK](https://docs.airbyte.com/connector-development/cdk-python/). This loader allows you to run any of these connectors and return the data as documents.
## Installation[](#installation "Direct link to Installation")
First, you need to install the `airbyte-cdk` python package.
```
%pip install --upgrade --quiet airbyte-cdk
```
Then, either install an existing connector from the [Airbyte Github repository](https://github.com/airbytehq/airbyte/tree/master/airbyte-integrations/connectors) or create your own connector using the [Airbyte CDK](https://docs.airbyte.io/connector-development/connector-development).
For example, to install the Github connector, run
```
%pip install --upgrade --quiet "source_github@git+https://github.com/airbytehq/airbyte.git@master#subdirectory=airbyte-integrations/connectors/source-github"
```
Some sources are also published as regular packages on PyPI
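For example, the Hubspot source used later in these docs is published as the `airbyte-source-hubspot` package; check PyPI for the exact package name of any other source before installing:

```
%pip install --upgrade --quiet airbyte-source-hubspot
```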
## Example[](#example "Direct link to Example")
Now you can create an `AirbyteCDKLoader` based on the imported source. It takes a `config` object that’s passed to the connector. You also have to pick the stream you want to retrieve records from by name (`stream_name`). Check the connectors documentation page and spec definition for more information on the config object and available streams. For the Github connectors these are:
* [https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-github/source\_github/spec.json](https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-github/source_github/spec.json).
* [https://docs.airbyte.com/integrations/sources/github/](https://docs.airbyte.com/integrations/sources/github/)
```
from langchain_community.document_loaders.airbyte import AirbyteCDKLoaderfrom source_github.source import SourceGithub # plug in your own source hereconfig = { # your github configuration "credentials": {"api_url": "api.github.com", "personal_access_token": "<token>"}, "repository": "<repo>", "start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>",}issues_loader = AirbyteCDKLoader( source_class=SourceGithub, config=config, stream_name="issues")
```
Now you can load documents the usual way
```
docs = issues_loader.load()
```
As `load` returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the `lazy_load` method, which returns an iterator instead:
```
docs_iterator = issues_loader.lazy_load()
```
Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record\_handler function when creating the loader:
```
from langchain_community.docstore.document import Documentdef handle_record(record, id): return Document( page_content=record.data["title"] + "\n" + (record.data["body"] or ""), metadata=record.data, )issues_loader = AirbyteCDKLoader( source_class=SourceGithub, config=config, stream_name="issues", record_handler=handle_record,)docs = issues_loader.load()
```
## Incremental loads[](#incremental-loads "Direct link to Incremental loads")
Some streams allow incremental loading, which means the source keeps track of synced records and won’t load them again. This is useful for sources that have a high volume of data and are updated frequently.
To take advantage of this, store the `last_state` property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.
```
last_state = issues_loader.last_state # store safelyincremental_issue_loader = AirbyteCDKLoader( source_class=SourceGithub, config=config, stream_name="issues", state=last_state)new_docs = incremental_issue_loader.load()
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:39.899Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_cdk/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_cdk/",
"description": "Note: AirbyteCDKLoader is deprecated. Please use",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3430",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"airbyte_cdk\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:39 GMT",
"etag": "W/\"f82610f86bcc03a9fb6df9ef4e37ecb3\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::h7kk6-1713753519790-81fb506806dc"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_cdk/",
"property": "og:url"
},
{
"content": "Airbyte CDK (Deprecated) | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Note: AirbyteCDKLoader is deprecated. Please use",
"property": "og:description"
}
],
"title": "Airbyte CDK (Deprecated) | 🦜️🔗 LangChain"
} | Airbyte CDK (Deprecated)
Note: AirbyteCDKLoader is deprecated. Please use AirbyteLoader instead.
Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
Many source connectors are implemented using the Airbyte CDK. This loader allows you to run any of these connectors and return the data as documents.
Installation
First, you need to install the airbyte-cdk python package.
%pip install --upgrade --quiet airbyte-cdk
Then, either install an existing connector from the Airbyte Github repository or create your own connector using the Airbyte CDK.
For example, to install the Github connector, run
%pip install --upgrade --quiet "source_github@git+https://github.com/airbytehq/airbyte.git@master#subdirectory=airbyte-integrations/connectors/source-github"
Some sources are also published as regular packages on PyPI
Example
Now you can create an AirbyteCDKLoader based on the imported source. It takes a config object that’s passed to the connector. You also have to pick the stream you want to retrieve records from by name (stream_name). Check the connectors documentation page and spec definition for more information on the config object and available streams. For the Github connectors these are:
https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-github/source_github/spec.json.
https://docs.airbyte.com/integrations/sources/github/
from langchain_community.document_loaders.airbyte import AirbyteCDKLoader
from source_github.source import SourceGithub # plug in your own source here
config = {
# your github configuration
"credentials": {"api_url": "api.github.com", "personal_access_token": "<token>"},
"repository": "<repo>",
"start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>",
}
issues_loader = AirbyteCDKLoader(
source_class=SourceGithub, config=config, stream_name="issues"
)
Now you can load documents the usual way
docs = issues_loader.load()
As load returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the lazy_load method, which returns an iterator instead:
docs_iterator = issues_loader.lazy_load()
Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record_handler function when creating the loader:
from langchain_community.docstore.document import Document
def handle_record(record, id):
return Document(
page_content=record.data["title"] + "\n" + (record.data["body"] or ""),
metadata=record.data,
)
issues_loader = AirbyteCDKLoader(
source_class=SourceGithub,
config=config,
stream_name="issues",
record_handler=handle_record,
)
docs = issues_loader.load()
Incremental loads
Some streams allow incremental loading, this means the source keeps track of synced records and won’t load them again. This is useful for sources that have a high volume of data and are updated frequently.
To take advantage of this, store the last_state property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.
last_state = issues_loader.last_state # store safely
incremental_issue_loader = AirbyteCDKLoader(
source_class=SourceGithub, config=config, stream_name="issues", state=last_state
)
new_docs = incremental_issue_loader.load() |
https://python.langchain.com/docs/integrations/chat/maritalk/ | ## Maritalk
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/integrations/chat/maritalk.ipynb)
## Introduction[](#introduction "Direct link to Introduction")
MariTalk is an assistant developed by the Brazilian company [Maritaca AI](https://www.maritaca.ai/). MariTalk is based on language models that have been specially trained to understand Portuguese well.
This notebook demonstrates how to use MariTalk with LangChain through two examples:
1. A simple example of how to use MariTalk to perform a task.
2. LLM + RAG: The second example shows how to answer a question whose answer is found in a long document that does not fit within the token limit of MariTalk. For this, we will use a simple searcher (BM25) to first search the document for the most relevant sections and then feed them to MariTalk for answering.
## Installation[](#installation "Direct link to Installation")
First, install the LangChain library (and all its dependencies) using the following command:
```
!pip install langchain langchain-core langchain-community
```
## API Key[](#api-key "Direct link to API Key")
You will need an API key that can be obtained from chat.maritaca.ai (“Chaves da API” section).
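A common pattern, not shown in the original notebook, is to keep the key out of your code and read it from an environment variable; the variable name `MARITALK_API_KEY` below is an arbitrary choice:

```
import os

# Read the key from the environment and fall back to an empty string
maritalk_api_key = os.environ.get("MARITALK_API_KEY", "")
# Pass it to the model instead of hard-coding it, e.g. ChatMaritalk(..., api_key=maritalk_api_key)
```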
### Example 1 - Pet Name Suggestions[](#example-1---pet-name-suggestions "Direct link to Example 1 - Pet Name Suggestions")
Let’s define our language model, ChatMaritalk, and configure it with your API key.
```
from langchain_community.chat_models import ChatMaritalkfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts.chat import ChatPromptTemplatellm = ChatMaritalk( model="sabia-2-medium", # Available models: sabia-2-small and sabia-2-medium api_key="", # Insert your API key here temperature=0.7, max_tokens=100,)output_parser = StrOutputParser()chat_prompt = ChatPromptTemplate.from_messages( [ ( "system", "You are an assistant specialized in suggesting pet names. Given the animal, you must suggest 4 names.", ), ("human", "I have a {animal}"), ])chain = chat_prompt | llm | output_parserresponse = chain.invoke({"animal": "dog"})print(response) # should answer something like "1. Max\n2. Bella\n3. Charlie\n4. Rocky"
```
### Example 2 - RAG + LLM: UNICAMP 2024 Entrance Exam Question Answering System[](#example-2---rag-llm-unicamp-2024-entrance-exam-question-answering-system "Direct link to Example 2 - RAG + LLM: UNICAMP 2024 Entrance Exam Question Answering System")
For this example, we need to install some extra libraries:
```
!pip install unstructured rank_bm25 pdf2image pdfminer-six pikepdf pypdf unstructured_inference fastapi kaleido uvicorn "pillow<10.1.0" pillow_heif -q
```
#### Loading the database[](#loading-the-database "Direct link to Loading the database")
The first step is to create a database with the information from the notice. For this, we will download the notice from the COMVEST website and segment the extracted text into 500-character windows.
```
from langchain.document_loaders import OnlinePDFLoaderfrom langchain.text_splitter import RecursiveCharacterTextSplitter# Loading the COMVEST 2024 noticeloader = OnlinePDFLoader( "https://www.comvest.unicamp.br/wp-content/uploads/2023/10/31-2023-Dispoe-sobre-o-Vestibular-Unicamp-2024_com-retificacao.pdf")data = loader.load()text_splitter = RecursiveCharacterTextSplitter( chunk_size=500, chunk_overlap=100, separators=["\n", " ", ""])texts = text_splitter.split_documents(data)
```
#### Creating a Searcher[](#creating-a-searcher "Direct link to Creating a Searcher")
Now that we have our database, we need a searcher. For this example, we will use a simple BM25 as a search system, but this could be replaced by any other searcher (such as search via embeddings).
```
from langchain.retrievers import BM25Retrieverretriever = BM25Retriever.from_documents(texts)
```
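As a sketch of that embeddings-based alternative (not part of the original notebook), the snippet below assumes the extra packages `faiss-cpu` and `sentence-transformers` are installed; the embedding model name is an arbitrary choice:

```
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

# Embed the 500-character chunks and index them in a FAISS vector store
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectorstore = FAISS.from_documents(texts, embeddings)

# Drop-in replacement for the BM25 retriever above
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
```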
#### Combining Search System + LLM[](#combining-search-system-llm "Direct link to Combining Search System + LLM")
Now that we have our searcher, we just need to implement a prompt specifying the task and invoke the chain.
```
from langchain.chains.question_answering import load_qa_chainprompt = """Baseado nos seguintes documentos, responda a pergunta abaixo.{context}Pergunta: {query}"""qa_prompt = ChatPromptTemplate.from_messages([("human", prompt)])chain = load_qa_chain(llm, chain_type="stuff", verbose=True, prompt=qa_prompt)query = "Qual o tempo máximo para realização da prova?"docs = retriever.get_relevant_documents(query)chain.invoke( {"input_documents": docs, "query": query}) # Should output something like: "O tempo máximo para realização da prova é de 5 horas."
```
| null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:40.606Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/maritalk/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/maritalk/",
"description": "Introduction",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"maritalk\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:40 GMT",
"etag": "W/\"c623f1fdf1e63fb4bd98316480df29b2\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::xbf2q-1713753520465-e2126969547d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/maritalk/",
"property": "og:url"
},
{
"content": "Maritalk | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Introduction",
"property": "og:description"
}
],
"title": "Maritalk | 🦜️🔗 LangChain"
} | Maritalk
Introduction
MariTalk is an assistant developed by the Brazilian company Maritaca AI. MariTalk is based on language models that have been specially trained to understand Portuguese well.
This notebook demonstrates how to use MariTalk with LangChain through two examples:
A simple example of how to use MariTalk to perform a task.
LLM + RAG: The second example shows how to answer a question whose answer is found in a long document that does not fit within the token limit of MariTalk. For this, we will use a simple searcher (BM25) to first search the document for the most relevant sections and then feed them to MariTalk for answering.
Installation
First, install the LangChain library (and all its dependencies) using the following command:
!pip install langchain langchain-core langchain-community
API Key
You will need an API key that can be obtained from chat.maritaca.ai (“Chaves da API” section).
Example 1 - Pet Name Suggestions
Let’s define our language model, ChatMaritalk, and configure it with your API key.
from langchain_community.chat_models import ChatMaritalk
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts.chat import ChatPromptTemplate
llm = ChatMaritalk(
model="sabia-2-medium", # Available models: sabia-2-small and sabia-2-medium
api_key="", # Insert your API key here
temperature=0.7,
max_tokens=100,
)
output_parser = StrOutputParser()
chat_prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You are an assistant specialized in suggesting pet names. Given the animal, you must suggest 4 names.",
),
("human", "I have a {animal}"),
]
)
chain = chat_prompt | llm | output_parser
response = chain.invoke({"animal": "dog"})
print(response) # should answer something like "1. Max\n2. Bella\n3. Charlie\n4. Rocky"
Example 2 - RAG + LLM: UNICAMP 2024 Entrance Exam Question Answering System
For this example, we need to install some extra libraries:
!pip install unstructured rank_bm25 pdf2image pdfminer-six pikepdf pypdf unstructured_inference fastapi kaleido uvicorn "pillow<10.1.0" pillow_heif -q
Loading the database
The first step is to create a database with the information from the notice. For this, we will download the notice from the COMVEST website and segment the extracted text into 500-character windows.
from langchain.document_loaders import OnlinePDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
# Loading the COMVEST 2024 notice
loader = OnlinePDFLoader(
"https://www.comvest.unicamp.br/wp-content/uploads/2023/10/31-2023-Dispoe-sobre-o-Vestibular-Unicamp-2024_com-retificacao.pdf"
)
data = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=500, chunk_overlap=100, separators=["\n", " ", ""]
)
texts = text_splitter.split_documents(data)
Creating a Searcher
Now that we have our database, we need a searcher. For this example, we will use a simple BM25 as a search system, but this could be replaced by any other searcher (such as search via embeddings).
from langchain.retrievers import BM25Retriever
retriever = BM25Retriever.from_documents(texts)
Combining Search System + LLM
Now that we have our searcher, we just need to implement a prompt specifying the task and invoke the chain.
from langchain.chains.question_answering import load_qa_chain
prompt = """Baseado nos seguintes documentos, responda a pergunta abaixo.
{context}
Pergunta: {query}
"""
qa_prompt = ChatPromptTemplate.from_messages([("human", prompt)])
chain = load_qa_chain(llm, chain_type="stuff", verbose=True, prompt=qa_prompt)
query = "Qual o tempo máximo para realização da prova?"
docs = retriever.get_relevant_documents(query)
chain.invoke(
{"input_documents": docs, "query": query}
) # Should output something like: "O tempo máximo para realização da prova é de 5 horas." |
https://python.langchain.com/docs/integrations/document_loaders/airbyte_hubspot/ | ## Airbyte Hubspot (Deprecated)
Note: `AirbyteHubspotLoader` is deprecated. Please use [`AirbyteLoader`](https://python.langchain.com/docs/integrations/document_loaders/airbyte/) instead.
> [Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
This loader exposes the Hubspot connector as a document loader, allowing you to load various Hubspot objects as documents.
## Installation[](#installation "Direct link to Installation")
First, you need to install the `airbyte-source-hubspot` python package.
```
%pip install --upgrade --quiet airbyte-source-hubspot
```
## Example[](#example "Direct link to Example")
Check out the [Airbyte documentation page](https://docs.airbyte.com/integrations/sources/hubspot/) for details about how to configure the reader. The JSON schema the config object should adhere to can be found on Github: [https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-hubspot/source\_hubspot/spec.yaml](https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-hubspot/source_hubspot/spec.yaml).
The general shape looks like this:
```
{ "start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>", "credentials": { "credentials_title": "Private App Credentials", "access_token": "<access token of your private app>" }}
```
By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.
```
from langchain_community.document_loaders.airbyte import AirbyteHubspotLoaderconfig = { # your hubspot configuration}loader = AirbyteHubspotLoader( config=config, stream_name="products") # check the documentation linked above for a list of all streams
```
Now you can load documents the usual way
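The call itself is not shown on the original page, but it mirrors the other Airbyte loaders:

```
docs = loader.load()
```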
As `load` returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the `lazy_load` method, which returns an iterator instead:
```
docs_iterator = loader.lazy_load()
```
Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To construct the page content yourself, pass in a `record_handler` function when creating the loader:
```
from langchain_community.docstore.document import Documentdef handle_record(record, id): return Document(page_content=record.data["title"], metadata=record.data)loader = AirbyteHubspotLoader( config=config, record_handler=handle_record, stream_name="products")docs = loader.load()
```
## Incremental loads[](#incremental-loads "Direct link to Incremental loads")
Some streams allow incremental loading, which means the source keeps track of synced records and won’t load them again. This is useful for sources that have a high volume of data and are updated frequently.
To take advantage of this, store the `last_state` property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.
```
last_state = loader.last_state # store safelyincremental_loader = AirbyteHubspotLoader( config=config, stream_name="products", state=last_state)new_docs = incremental_loader.load()
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:41.134Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_hubspot/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_hubspot/",
"description": "Note: AirbyteHubspotLoader is deprecated. Please use",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3430",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"airbyte_hubspot\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:40 GMT",
"etag": "W/\"a2386e27d5160e02ff9ef278649ab246\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::hpmlg-1713753520575-21f904ca031d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_hubspot/",
"property": "og:url"
},
{
"content": "Airbyte Hubspot (Deprecated) | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Note: AirbyteHubspotLoader is deprecated. Please use",
"property": "og:description"
}
],
"title": "Airbyte Hubspot (Deprecated) | 🦜️🔗 LangChain"
} | Airbyte Hubspot (Deprecated)
Note: AirbyteHubspotLoader is deprecated. Please use AirbyteLoader instead.
Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
This loader exposes the Hubspot connector as a document loader, allowing you to load various Hubspot objects as documents.
Installation
First, you need to install the airbyte-source-hubspot python package.
%pip install --upgrade --quiet airbyte-source-hubspot
Example
Check out the Airbyte documentation page for details about how to configure the reader. The JSON schema the config object should adhere to can be found on Github: https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-hubspot/source_hubspot/spec.yaml.
The general shape looks like this:
{
"start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>",
"credentials": {
"credentials_title": "Private App Credentials",
"access_token": "<access token of your private app>"
}
}
By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.
from langchain_community.document_loaders.airbyte import AirbyteHubspotLoader
config = {
# your hubspot configuration
}
loader = AirbyteHubspotLoader(
config=config, stream_name="products"
) # check the documentation linked above for a list of all streams
Now you can load documents the usual way
As load returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the lazy_load method, which returns an iterator instead:
docs_iterator = loader.lazy_load()
Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To construct the page content yourself, pass in a record_handler function when creating the loader:
from langchain_community.docstore.document import Document
def handle_record(record, id):
return Document(page_content=record.data["title"], metadata=record.data)
loader = AirbyteHubspotLoader(
config=config, record_handler=handle_record, stream_name="products"
)
docs = loader.load()
Incremental loads
Some streams allow incremental loading, this means the source keeps track of synced records and won’t load them again. This is useful for sources that have a high volume of data and are updated frequently.
To take advantage of this, store the last_state property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.
last_state = loader.last_state # store safely
incremental_loader = AirbyteHubspotLoader(
config=config, stream_name="products", state=last_state
)
new_docs = incremental_loader.load() |
https://python.langchain.com/docs/integrations/chat/llama2_chat/ | ## Llama2Chat
This notebook shows how to augment Llama-2 `LLM`s with the `Llama2Chat` wrapper to support the [Llama-2 chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). Several `LLM` implementations in LangChain can be used as an interface to Llama-2 chat models. These include [ChatHuggingFace](https://python.langchain.com/docs/integrations/chat/huggingface/), [LlamaCpp](https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa/), and [GPT4All](https://python.langchain.com/docs/integrations/llms/gpt4all/), to mention a few examples.
`Llama2Chat` is a generic wrapper that implements `BaseChatModel` and can therefore be used in applications as a [chat model](https://python.langchain.com/docs/modules/model_io/chat/). `Llama2Chat` converts a list of Messages into the [required chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) and forwards the formatted prompt as `str` to the wrapped `LLM`.
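As a rough illustration (based on the Llama-2 prompt format linked above, not on this notebook), a system message followed by a human message is rendered into a string of roughly this shape; the exact tokens and whitespace are handled by the wrapper:

```
# Approximate shape of the prompt string Llama2Chat builds from the messages
formatted_prompt = (
    "<s>[INST] <<SYS>>\n"
    "You are a helpful assistant.\n"
    "<</SYS>>\n\n"
    "What can I see in Vienna? [/INST]"
)
```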
```
from langchain.chains import LLMChainfrom langchain.memory import ConversationBufferMemoryfrom langchain_experimental.chat_models import Llama2Chat
```
For the chat application examples below, we’ll use the following chat `prompt_template`:
```
from langchain_core.messages import SystemMessagefrom langchain_core.prompts.chat import ( ChatPromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholder,)template_messages = [ SystemMessage(content="You are a helpful assistant."), MessagesPlaceholder(variable_name="chat_history"), HumanMessagePromptTemplate.from_template("{text}"),]prompt_template = ChatPromptTemplate.from_messages(template_messages)
```
## Chat with Llama-2 via `HuggingFaceTextGenInference` LLM[](#chat-with-llama-2-via-huggingfacetextgeninference-llm "Direct link to chat-with-llama-2-via-huggingfacetextgeninference-llm")
A HuggingFaceTextGenInference LLM encapsulates access to a [text-generation-inference](https://github.com/huggingface/text-generation-inference) server. In the following example, the inference server serves a [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) model. It can be started locally with:
```
docker run \ --rm \ --gpus all \ --ipc=host \ -p 8080:80 \ -v ~/.cache/huggingface/hub:/data \ -e HF_API_TOKEN=${HF_API_TOKEN} \ ghcr.io/huggingface/text-generation-inference:0.9 \ --hostname 0.0.0.0 \ --model-id meta-llama/Llama-2-13b-chat-hf \ --quantize bitsandbytes \ --num-shard 4
```
This works on a machine with 4 x RTX 3080ti cards, for example. Adjust the `--num-shard` value to the number of GPUs available. The `HF_API_TOKEN` environment variable holds the Hugging Face API token.
```
# !pip3 install text-generation
```
Create a `HuggingFaceTextGenInference` instance that connects to the local inference server and wrap it into `Llama2Chat`.
```
from langchain_community.llms import HuggingFaceTextGenInferencellm = HuggingFaceTextGenInference( inference_server_url="http://127.0.0.1:8080/", max_new_tokens=512, top_k=50, temperature=0.1, repetition_penalty=1.03,)model = Llama2Chat(llm=llm)
```
Then you are ready to use the chat `model` together with `prompt_template` and conversation `memory` in an `LLMChain`.
```
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)chain = LLMChain(llm=model, prompt=prompt_template, memory=memory)
```
```
print( chain.run( text="What can I see in Vienna? Propose a few locations. Names only, no details." ))
```
```
Sure, I'd be happy to help! Here are a few popular locations to consider visiting in Vienna:1. Schönbrunn Palace2. St. Stephen's Cathedral3. Hofburg Palace4. Belvedere Palace5. Prater Park6. Vienna State Opera7. Albertina Museum8. Museum of Natural History9. Kunsthistorisches Museum10. Ringstrasse
```
```
print(chain.run(text="Tell me more about #2."))
```
```
Certainly! St. Stephen's Cathedral (Stephansdom) is one of the most recognizable landmarks in Vienna and a must-see attraction for visitors. This stunning Gothic cathedral is located in the heart of the city and is known for its intricate stone carvings, colorful stained glass windows, and impressive dome.The cathedral was built in the 12th century and has been the site of many important events throughout history, including the coronation of Holy Roman emperors and the funeral of Mozart. Today, it is still an active place of worship and offers guided tours, concerts, and special events. Visitors can climb up the south tower for panoramic views of the city or attend a service to experience the beautiful music and chanting.
```
## Chat with Llama-2 via `LlamaCPP` LLM[](#chat-with-llama-2-via-llamacpp-llm "Direct link to chat-with-llama-2-via-llamacpp-llm")
For using a Llama-2 chat model with a [LlamaCPP](https://python.langchain.com/docs/integrations/llms/llamacpp/) `LLM`, install the `llama-cpp-python` library using [these installation instructions](https://python.langchain.com/docs/integrations/llms/llamacpp/#installation). The following example uses a quantized [llama-2-7b-chat.Q4\_0.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/resolve/main/llama-2-7b-chat.Q4_0.gguf) model stored locally at `~/Models/llama-2-7b-chat.Q4_0.gguf`.
After creating a `LlamaCpp` instance, the `llm` is again wrapped into `Llama2Chat`
```
from os.path import expanduserfrom langchain_community.llms import LlamaCppmodel_path = expanduser("~/Models/llama-2-7b-chat.Q4_0.gguf")llm = LlamaCpp( model_path=model_path, streaming=False,)model = Llama2Chat(llm=llm)
```
and used in the same way as in the previous example.
```
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)chain = LLMChain(llm=model, prompt=prompt_template, memory=memory)
```
```
print( chain.run( text="What can I see in Vienna? Propose a few locations. Names only, no details." ))
```
```
Of course! Vienna is a beautiful city with a rich history and culture. Here are some of the top tourist attractions you might want to consider visiting:1. Schönbrunn Palace2. St. Stephen's Cathedral3. Hofburg Palace4. Belvedere Palace5. Prater Park6. MuseumsQuartier7. Ringstrasse8. Vienna State Opera9. Kunsthistorisches Museum10. Imperial PalaceThese are just a few of the many amazing places to see in Vienna. Each one has its own unique history and charm, so I hope you enjoy exploring this beautiful city!
```
```
llama_print_timings: load time = 250.46 msllama_print_timings: sample time = 56.40 ms / 144 runs ( 0.39 ms per token, 2553.37 tokens per second)llama_print_timings: prompt eval time = 1444.25 ms / 47 tokens ( 30.73 ms per token, 32.54 tokens per second)llama_print_timings: eval time = 8832.02 ms / 143 runs ( 61.76 ms per token, 16.19 tokens per second)llama_print_timings: total time = 10645.94 ms
```
```
print(chain.run(text="Tell me more about #2."))
```
```
Llama.generate: prefix-match hitllama_print_timings: load time = 250.46 msllama_print_timings: sample time = 100.60 ms / 256 runs ( 0.39 ms per token, 2544.73 tokens per second)llama_print_timings: prompt eval time = 5128.71 ms / 160 tokens ( 32.05 ms per token, 31.20 tokens per second)llama_print_timings: eval time = 16193.02 ms / 255 runs ( 63.50 ms per token, 15.75 tokens per second)llama_print_timings: total time = 21988.57 ms
```
```
Of course! St. Stephen's Cathedral (also known as Stephansdom) is a stunning Gothic-style cathedral located in the heart of Vienna, Austria. It is one of the most recognizable landmarks in the city and is considered a symbol of Vienna.Here are some interesting facts about St. Stephen's Cathedral:1. History: The construction of St. Stephen's Cathedral began in the 12th century on the site of a former Romanesque church, and it took over 600 years to complete. The cathedral has been renovated and expanded several times throughout its history, with the most significant renovation taking place in the 19th century.2. Architecture: St. Stephen's Cathedral is built in the Gothic style, characterized by its tall spires, pointed arches, and intricate stone carvings. The cathedral features a mix of Romanesque, Gothic, and Baroque elements, making it a unique blend of styles.3. Design: The cathedral's design is based on the plan of a cross with a long nave and two shorter arms extending from it. The main altar is
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:40.870Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/llama2_chat/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/llama2_chat/",
"description": "This notebook shows how to augment Llama-2 LLMs with the Llama2Chat",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4371",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"llama2_chat\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:40 GMT",
"etag": "W/\"7364701951776a028bc46ccee07029d7\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::ncfnt-1713753520565-526fc8652c0b"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/llama2_chat/",
"property": "og:url"
},
{
"content": "Llama2Chat | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook shows how to augment Llama-2 LLMs with the Llama2Chat",
"property": "og:description"
}
],
"title": "Llama2Chat | 🦜️🔗 LangChain"
} | Llama2Chat
This notebook shows how to augment Llama-2 LLMs with the Llama2Chat wrapper to support the Llama-2 chat prompt format. Several LLM implementations in LangChain can be used as interface to Llama-2 chat models. These include ChatHuggingFace, LlamaCpp, GPT4All, …, to mention a few examples.
Llama2Chat is a generic wrapper that implements BaseChatModel and can therefore be used in applications as chat model. Llama2Chat converts a list of Messages into the required chat prompt format and forwards the formatted prompt as str to the wrapped LLM.
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain_experimental.chat_models import Llama2Chat
For the chat application examples below, we’ll use the following chat prompt_template:
from langchain_core.messages import SystemMessage
from langchain_core.prompts.chat import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
)
template_messages = [
SystemMessage(content="You are a helpful assistant."),
MessagesPlaceholder(variable_name="chat_history"),
HumanMessagePromptTemplate.from_template("{text}"),
]
prompt_template = ChatPromptTemplate.from_messages(template_messages)
Chat with Llama-2 via HuggingFaceTextGenInference LLM
A HuggingFaceTextGenInference LLM encapsulates access to a text-generation-inference server. In the following example, the inference server serves a meta-llama/Llama-2-13b-chat-hf model. It can be started locally with:
docker run \
--rm \
--gpus all \
--ipc=host \
-p 8080:80 \
-v ~/.cache/huggingface/hub:/data \
-e HF_API_TOKEN=${HF_API_TOKEN} \
ghcr.io/huggingface/text-generation-inference:0.9 \
--hostname 0.0.0.0 \
--model-id meta-llama/Llama-2-13b-chat-hf \
--quantize bitsandbytes \
--num-shard 4
This works on a machine with 4 x RTX 3080ti cards, for example. Adjust the --num-shard value to the number of GPUs available. The HF_API_TOKEN environment variable holds the Hugging Face API token.
# !pip3 install text-generation
Create a HuggingFaceTextGenInference instance that connects to the local inference server and wrap it into Llama2Chat.
from langchain_community.llms import HuggingFaceTextGenInference
llm = HuggingFaceTextGenInference(
inference_server_url="http://127.0.0.1:8080/",
max_new_tokens=512,
top_k=50,
temperature=0.1,
repetition_penalty=1.03,
)
model = Llama2Chat(llm=llm)
Then you are ready to use the chat model together with prompt_template and conversation memory in an LLMChain.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chain = LLMChain(llm=model, prompt=prompt_template, memory=memory)
print(
chain.run(
text="What can I see in Vienna? Propose a few locations. Names only, no details."
)
)
Sure, I'd be happy to help! Here are a few popular locations to consider visiting in Vienna:
1. Schönbrunn Palace
2. St. Stephen's Cathedral
3. Hofburg Palace
4. Belvedere Palace
5. Prater Park
6. Vienna State Opera
7. Albertina Museum
8. Museum of Natural History
9. Kunsthistorisches Museum
10. Ringstrasse
print(chain.run(text="Tell me more about #2."))
Certainly! St. Stephen's Cathedral (Stephansdom) is one of the most recognizable landmarks in Vienna and a must-see attraction for visitors. This stunning Gothic cathedral is located in the heart of the city and is known for its intricate stone carvings, colorful stained glass windows, and impressive dome.
The cathedral was built in the 12th century and has been the site of many important events throughout history, including the coronation of Holy Roman emperors and the funeral of Mozart. Today, it is still an active place of worship and offers guided tours, concerts, and special events. Visitors can climb up the south tower for panoramic views of the city or attend a service to experience the beautiful music and chanting.
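Because the chain was created with ConversationBufferMemory, both turns of this exchange are stored under the chat_history key and prepended to the next prompt via the MessagesPlaceholder. A minimal sketch for inspecting what has been stored (using the memory object created above):
print(memory.chat_memory.messages)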
Chat with Llama-2 via LlamaCPP LLM
To use a Llama-2 chat model with a LlamaCpp LLM, install the llama-cpp-python library using these installation instructions. The following example uses a quantized llama-2-7b-chat.Q4_0.gguf model stored locally at ~/Models/llama-2-7b-chat.Q4_0.gguf.
After creating a LlamaCpp instance, the llm is again wrapped into Llama2Chat
from os.path import expanduser
from langchain_community.llms import LlamaCpp
model_path = expanduser("~/Models/llama-2-7b-chat.Q4_0.gguf")
llm = LlamaCpp(
model_path=model_path,
streaming=False,
)
model = Llama2Chat(llm=llm)
and used in the same way as in the previous example.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chain = LLMChain(llm=model, prompt=prompt_template, memory=memory)
print(
chain.run(
text="What can I see in Vienna? Propose a few locations. Names only, no details."
)
)
Of course! Vienna is a beautiful city with a rich history and culture. Here are some of the top tourist attractions you might want to consider visiting:
1. Schönbrunn Palace
2. St. Stephen's Cathedral
3. Hofburg Palace
4. Belvedere Palace
5. Prater Park
6. MuseumsQuartier
7. Ringstrasse
8. Vienna State Opera
9. Kunsthistorisches Museum
10. Imperial Palace
These are just a few of the many amazing places to see in Vienna. Each one has its own unique history and charm, so I hope you enjoy exploring this beautiful city!
llama_print_timings: load time = 250.46 ms
llama_print_timings: sample time = 56.40 ms / 144 runs ( 0.39 ms per token, 2553.37 tokens per second)
llama_print_timings: prompt eval time = 1444.25 ms / 47 tokens ( 30.73 ms per token, 32.54 tokens per second)
llama_print_timings: eval time = 8832.02 ms / 143 runs ( 61.76 ms per token, 16.19 tokens per second)
llama_print_timings: total time = 10645.94 ms
print(chain.run(text="Tell me more about #2."))
Llama.generate: prefix-match hit
llama_print_timings: load time = 250.46 ms
llama_print_timings: sample time = 100.60 ms / 256 runs ( 0.39 ms per token, 2544.73 tokens per second)
llama_print_timings: prompt eval time = 5128.71 ms / 160 tokens ( 32.05 ms per token, 31.20 tokens per second)
llama_print_timings: eval time = 16193.02 ms / 255 runs ( 63.50 ms per token, 15.75 tokens per second)
llama_print_timings: total time = 21988.57 ms
Of course! St. Stephen's Cathedral (also known as Stephansdom) is a stunning Gothic-style cathedral located in the heart of Vienna, Austria. It is one of the most recognizable landmarks in the city and is considered a symbol of Vienna.
Here are some interesting facts about St. Stephen's Cathedral:
1. History: The construction of St. Stephen's Cathedral began in the 12th century on the site of a former Romanesque church, and it took over 600 years to complete. The cathedral has been renovated and expanded several times throughout its history, with the most significant renovation taking place in the 19th century.
2. Architecture: St. Stephen's Cathedral is built in the Gothic style, characterized by its tall spires, pointed arches, and intricate stone carvings. The cathedral features a mix of Romanesque, Gothic, and Baroque elements, making it a unique blend of styles.
3. Design: The cathedral's design is based on the plan of a cross with a long nave and two shorter arms extending from it. The main altar is |
https://python.langchain.com/docs/integrations/document_loaders/airbyte_gong/ | ## Airbyte Gong (Deprecated)
Note: This connector-specific loader is deprecated. Please use [`AirbyteLoader`](https://python.langchain.com/docs/integrations/document_loaders/airbyte/) instead.
> [Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
This loader exposes the Gong connector as a document loader, allowing you to load various Gong objects as documents.
## Installation[](#installation "Direct link to Installation")
First, you need to install the `airbyte-source-gong` python package.
```
%pip install --upgrade --quiet airbyte-source-gong
```
## Example[](#example "Direct link to Example")
Check out the [Airbyte documentation page](https://docs.airbyte.com/integrations/sources/gong/) for details about how to configure the reader. The JSON schema the config object should adhere to can be found on Github: [https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-gong/source\_gong/spec.yaml](https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-gong/source_gong/spec.yaml).
The general shape looks like this:
```
{ "access_key": "<access key name>", "access_key_secret": "<access key secret>", "start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>",}
```
By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.
```
from langchain_community.document_loaders.airbyte import AirbyteGongLoaderconfig = { # your gong configuration}loader = AirbyteGongLoader( config=config, stream_name="calls") # check the documentation linked above for a list of all streams
```
Now you can load documents the usual way
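For example, a simple blocking load might look like this (a minimal sketch using the `loader` configured above):
```
docs = loader.load()
print(len(docs))
```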
As `load` returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the `lazy_load` method, which returns an iterator instead:
```
docs_iterator = loader.lazy_load()
```
Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a `record_handler` function when creating the loader:
```
from langchain_community.docstore.document import Documentdef handle_record(record, id): return Document(page_content=record.data["title"], metadata=record.data)loader = AirbyteGongLoader( config=config, record_handler=handle_record, stream_name="calls")docs = loader.load()
```
## Incremental loads[](#incremental-loads "Direct link to Incremental loads")
Some streams allow incremental loading, which means the source keeps track of synced records and won’t load them again. This is useful for sources that have a high volume of data and are updated frequently.
To take advantage of this, store the `last_state` property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.
```
last_state = loader.last_state # store safelyincremental_loader = AirbyteGongLoader( config=config, stream_name="calls", state=last_state)new_docs = incremental_loader.load()
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:41.492Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_gong/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_gong/",
"description": "Note: This connector-specific loader is deprecated. Please use",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4360",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"airbyte_gong\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:40 GMT",
"etag": "W/\"5cf08e598d7f909e97646a27a0ea8571\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::hzpms-1713753520866-f0bc6cdc8af8"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_gong/",
"property": "og:url"
},
{
"content": "Airbyte Gong (Deprecated) | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Note: This connector-specific loader is deprecated. Please use",
"property": "og:description"
}
],
"title": "Airbyte Gong (Deprecated) | 🦜️🔗 LangChain"
} | Airbyte Gong (Deprecated)
Note: This connector-specific loader is deprecated. Please use AirbyteLoader instead.
Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
This loader exposes the Gong connector as a document loader, allowing you to load various Gong objects as documents.
Installation
First, you need to install the airbyte-source-gong python package.
%pip install --upgrade --quiet airbyte-source-gong
Example
Check out the Airbyte documentation page for details about how to configure the reader. The JSON schema the config object should adhere to can be found on Github: https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-gong/source_gong/spec.yaml.
The general shape looks like this:
{
"access_key": "<access key name>",
"access_key_secret": "<access key secret>",
"start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>",
}
By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.
from langchain_community.document_loaders.airbyte import AirbyteGongLoader
config = {
# your gong configuration
}
loader = AirbyteGongLoader(
config=config, stream_name="calls"
) # check the documentation linked above for a list of all streams
Now you can load documents the usual way
As load returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the lazy_load method, which returns an iterator instead:
docs_iterator = loader.lazy_load()
Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record_handler function when creating the loader:
from langchain_community.docstore.document import Document
def handle_record(record, id):
return Document(page_content=record.data["title"], metadata=record.data)
loader = AirbyteGongLoader(
config=config, record_handler=handle_record, stream_name="calls"
)
docs = loader.load()
Incremental loads
Some streams allow incremental loading, which means the source keeps track of synced records and won’t load them again. This is useful for sources that have a high volume of data and are updated frequently.
To take advantage of this, store the last_state property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.
last_state = loader.last_state # store safely
incremental_loader = AirbyteGongLoader(
config=config, stream_name="calls", state=last_state
)
new_docs = incremental_loader.load() |
https://python.langchain.com/docs/integrations/chat/minimax/ | [Minimax](https://api.minimax.chat/) is a Chinese startup that provides LLM service for companies and individuals.
This example goes over how to use LangChain to interact with MiniMax Inference for Chat.
```
import osos.environ["MINIMAX_GROUP_ID"] = "MINIMAX_GROUP_ID"os.environ["MINIMAX_API_KEY"] = "MINIMAX_API_KEY"
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:41.797Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/minimax/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/minimax/",
"description": "Minimax is a Chinese startup that provides",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4371",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"minimax\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:41 GMT",
"etag": "W/\"6d4aa770faf31836b473249e0f495e25\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::cfhg6-1713753521520-6a0a3461c276"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/minimax/",
"property": "og:url"
},
{
"content": "MiniMaxChat | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Minimax is a Chinese startup that provides",
"property": "og:description"
}
],
"title": "MiniMaxChat | 🦜️🔗 LangChain"
} | Minimax is a Chinese startup that provides LLM service for companies and individuals.
This example goes over how to use LangChain to interact with MiniMax Inference for Chat.
import os
os.environ["MINIMAX_GROUP_ID"] = "MINIMAX_GROUP_ID"
os.environ["MINIMAX_API_KEY"] = "MINIMAX_API_KEY" |
https://python.langchain.com/docs/integrations/document_loaders/airbyte_json/ | ## Airbyte JSON (Deprecated)
Note: `AirbyteJSONLoader` is deprecated. Please use [`AirbyteLoader`](https://python.langchain.com/docs/integrations/document_loaders/airbyte/) instead.
> [Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
This covers how to load any source from Airbyte into a local JSON file that can be read in as a document
Prereqs: Have docker desktop installed
Steps:
1. Clone Airbyte from GitHub - `git clone https://github.com/airbytehq/airbyte.git`
2. Switch into Airbyte directory - `cd airbyte`
3. Start Airbyte - `docker compose up`
4. In your browser, just visit http://localhost:8000. You will be asked for a username and password. By default, that’s username `airbyte` and password `password`.
5. Setup any source you wish.
6. Set destination as Local JSON, with a specified destination path - let's say `/json_data`. Set up manual sync.
7. Run the connection.
8. To see what files are created, you can navigate to: `file:///tmp/airbyte_local`
9. Find your data and copy its path. That path should be saved in the file variable below. It should start with `/tmp/airbyte_local`
```
from langchain_community.document_loaders import AirbyteJSONLoader
```
```
!ls /tmp/airbyte_local/json_data/
```
```
_airbyte_raw_pokemon.jsonl
```
```
loader = AirbyteJSONLoader("/tmp/airbyte_local/json_data/_airbyte_raw_pokemon.jsonl")
```
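The snippet below prints from a `data` variable, which is produced by first calling the loader (a minimal sketch):
```
data = loader.load()
```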
```
print(data[0].page_content[:500])
```
```
abilities: ability: name: blazeurl: https://pokeapi.co/api/v2/ability/66/is_hidden: Falseslot: 1ability: name: solar-powerurl: https://pokeapi.co/api/v2/ability/94/is_hidden: Trueslot: 3base_experience: 267forms: name: charizardurl: https://pokeapi.co/api/v2/pokemon-form/6/game_indices: game_index: 180version: name: redurl: https://pokeapi.co/api/v2/version/1/game_index: 180version: name: blueurl: https://pokeapi.co/api/v2/version/2/game_index: 180version: n
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:42.103Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_json/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_json/",
"description": "Note: AirbyteJSONLoader is deprecated. Please use",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3432",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"airbyte_json\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:42 GMT",
"etag": "W/\"24f91b0505be174f0b424eb60570c827\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::l2gfp-1713753522038-bdb1101f4d04"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_json/",
"property": "og:url"
},
{
"content": "Airbyte JSON (Deprecated) | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Note: AirbyteJSONLoader is deprecated. Please use",
"property": "og:description"
}
],
"title": "Airbyte JSON (Deprecated) | 🦜️🔗 LangChain"
} | Airbyte JSON (Deprecated)
Note: AirbyteJSONLoader is deprecated. Please use AirbyteLoader instead.
Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
This covers how to load any source from Airbyte into a local JSON file that can be read in as a document
Prereqs: Have docker desktop installed
Steps:
Clone Airbyte from GitHub - git clone https://github.com/airbytehq/airbyte.git
Switch into Airbyte directory - cd airbyte
Start Airbyte - docker compose up
In your browser, just visit http://localhost:8000. You will be asked for a username and password. By default, that’s username airbyte and password password.
Setup any source you wish.
Set destination as Local JSON, with a specified destination path - let's say /json_data. Set up manual sync.
Run the connection.
To see what files are created, you can navigate to: file:///tmp/airbyte_local
Find your data and copy its path. That path should be saved in the file variable below. It should start with /tmp/airbyte_local
from langchain_community.document_loaders import AirbyteJSONLoader
!ls /tmp/airbyte_local/json_data/
_airbyte_raw_pokemon.jsonl
loader = AirbyteJSONLoader("/tmp/airbyte_local/json_data/_airbyte_raw_pokemon.jsonl")
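The print statement below assumes the documents were first loaded into data:
data = loader.load()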
print(data[0].page_content[:500])
abilities:
ability:
name: blaze
url: https://pokeapi.co/api/v2/ability/66/
is_hidden: False
slot: 1
ability:
name: solar-power
url: https://pokeapi.co/api/v2/ability/94/
is_hidden: True
slot: 3
base_experience: 267
forms:
name: charizard
url: https://pokeapi.co/api/v2/pokemon-form/6/
game_indices:
game_index: 180
version:
name: red
url: https://pokeapi.co/api/v2/version/1/
game_index: 180
version:
name: blue
url: https://pokeapi.co/api/v2/version/2/
game_index: 180
version:
n
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/integrations/document_loaders/airbyte_salesforce/ | Note: This connector-specific loader is deprecated. Please use [`AirbyteLoader`](https://python.langchain.com/docs/integrations/document_loaders/airbyte/) instead.
This loader exposes the Salesforce connector as a document loader, allowing you to load various Salesforce objects as documents.
First, you need to install the `airbyte-source-salesforce` python package.
```
{ "client_id": "<oauth client id>", "client_secret": "<oauth client secret>", "refresh_token": "<oauth refresh token>", "start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>", "is_sandbox": False, # set to True if you're using a sandbox environment "streams_criteria": [ # Array of filters for salesforce objects that should be loadable {"criteria": "exacts", "value": "Account"}, # Exact name of salesforce object {"criteria": "starts with", "value": "Asset"}, # Prefix of the name # Other allowed criteria: ends with, contains, starts not with, ends not with, not contains, not exacts ],}
```
By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.
```
from langchain_community.document_loaders.airbyte import AirbyteSalesforceLoaderconfig = { # your salesforce configuration}loader = AirbyteSalesforceLoader( config=config, stream_name="asset") # check the documentation linked above for a list of all streams
```
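Now you can load documents the usual way; a minimal sketch using the loader configured above:
```
docs = loader.load()
```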
As `load` returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the `lazy_load` method, which returns an iterator instead:
Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record\_handler function when creating the loader:
```
from langchain_community.docstore.document import Documentdef handle_record(record, id): return Document(page_content=record.data["title"], metadata=record.data)loader = AirbyteSalesforceLoader( config=config, record_handler=handle_record, stream_name="asset")docs = loader.load()
```
Some streams allow incremental loading, which means the source keeps track of synced records and won’t load them again. This is useful for sources that have a high volume of data and are updated frequently.
To take advantage of this, store the `last_state` property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.
```
last_state = loader.last_state # store safelyincremental_loader = AirbyteSalesforceLoader( config=config, stream_name="asset", state=last_state)new_docs = incremental_loader.load()
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:42.269Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_salesforce/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_salesforce/",
"description": "Note: This connector-specific loader is deprecated. Please use",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"airbyte_salesforce\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:42 GMT",
"etag": "W/\"91681c7e6332c4329631f7a6bae5d4e8\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::n4bll-1713753522064-acfc4f7f24b1"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_salesforce/",
"property": "og:url"
},
{
"content": "Airbyte Salesforce (Deprecated) | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Note: This connector-specific loader is deprecated. Please use",
"property": "og:description"
}
],
"title": "Airbyte Salesforce (Deprecated) | 🦜️🔗 LangChain"
} | Note: This connector-specific loader is deprecated. Please use AirbyteLoader instead.
This loader exposes the Salesforce connector as a document loader, allowing you to load various Salesforce objects as documents.
First, you need to install the airbyte-source-salesforce python package.
{
"client_id": "<oauth client id>",
"client_secret": "<oauth client secret>",
"refresh_token": "<oauth refresh token>",
"start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>",
"is_sandbox": False, # set to True if you're using a sandbox environment
"streams_criteria": [ # Array of filters for salesforce objects that should be loadable
{"criteria": "exacts", "value": "Account"}, # Exact name of salesforce object
{"criteria": "starts with", "value": "Asset"}, # Prefix of the name
# Other allowed criteria: ends with, contains, starts not with, ends not with, not contains, not exacts
],
}
By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.
from langchain_community.document_loaders.airbyte import AirbyteSalesforceLoader
config = {
# your salesforce configuration
}
loader = AirbyteSalesforceLoader(
config=config, stream_name="asset"
) # check the documentation linked above for a list of all streams
As load returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the lazy_load method, which returns an iterator instead:
Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record_handler function when creating the loader:
from langchain_community.docstore.document import Document
def handle_record(record, id):
return Document(page_content=record.data["title"], metadata=record.data)
loader = AirbyteSalesforceLoader(
config=config, record_handler=handle_record, stream_name="asset"
)
docs = loader.load()
Some streams allow incremental loading, which means the source keeps track of synced records and won’t load them again. This is useful for sources that have a high volume of data and are updated frequently.
To take advantage of this, store the last_state property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.
last_state = loader.last_state # store safely
incremental_loader = AirbyteSalesforceLoader(
config=config, stream_name="asset", state=last_state
)
new_docs = incremental_loader.load() |
https://python.langchain.com/docs/integrations/chat/mlx/ | ## MLX
This notebook shows how to get started using `MLX` LLMs as chat models.
In particular, we will: 1. Utilize the [MLXPipeline](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/mlx_pipelines.py), 2. Utilize the `ChatMLX` class to enable any of these LLMs to interface with LangChain’s [Chat Messages](https://python.langchain.com/docs/modules/model_io/chat/#messages) abstraction. 3. Demonstrate how to use an open-source LLM to power a `ChatAgent` pipeline
```
%pip install --upgrade --quiet mlx-lm transformers huggingface_hub
```
## 1\. Instantiate an LLM[](#instantiate-an-llm "Direct link to 1. Instantiate an LLM")
There are three LLM options to choose from.
```
from langchain_community.llms.mlx_pipeline import MLXPipelinellm = MLXPipeline.from_model_id( "mlx-community/quantized-gemma-2b-it", pipeline_kwargs={"max_tokens": 10, "temp": 0.1},)
```
## 2\. Instantiate the `ChatMLX` to apply chat templates[](#instantiate-the-chatmlx-to-apply-chat-templates "Direct link to instantiate-the-chatmlx-to-apply-chat-templates")
Instantiate the chat model and some messages to pass.
```
from langchain.schema import ( HumanMessage,)from langchain_community.chat_models.mlx import ChatMLXmessages = [ HumanMessage( content="What happens when an unstoppable force meets an immovable object?" ),]chat_model = ChatMLX(llm=llm)
```
Inspect how the chat messages are formatted for the LLM call.
```
chat_model._to_chat_prompt(messages)
```
Call the model.
```
res = chat_model.invoke(messages)print(res.content)
```
## 3\. Take it for a spin as an agent![](#take-it-for-a-spin-as-an-agent "Direct link to 3. Take it for a spin as an agent!")
Here we’ll test out `gemma-2b-it` as a zero-shot `ReAct` Agent. The example below is taken from [here](https://python.langchain.com/docs/modules/agents/agent_types/react#using-chat-models).
> Note: To run this section, you’ll need to have a [SerpAPI Token](https://serpapi.com/) saved as an environment variable: `SERPAPI_API_KEY`
```
from langchain import hubfrom langchain.agents import AgentExecutor, load_toolsfrom langchain.agents.format_scratchpad import format_log_to_strfrom langchain.agents.output_parsers import ( ReActJsonSingleInputOutputParser,)from langchain.tools.render import render_text_descriptionfrom langchain_community.utilities import SerpAPIWrapper
```
Configure the agent with a `react-json` style prompt and access to a search engine and calculator.
```
# setup toolstools = load_tools(["serpapi", "llm-math"], llm=llm)# setup ReAct style promptprompt = hub.pull("hwchase17/react-json")prompt = prompt.partial( tools=render_text_description(tools), tool_names=", ".join([t.name for t in tools]),)# define the agentchat_model_with_stop = chat_model.bind(stop=["\nObservation"])agent = ( { "input": lambda x: x["input"], "agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]), } | prompt | chat_model_with_stop | ReActJsonSingleInputOutputParser())# instantiate AgentExecutoragent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```
```
agent_executor.invoke( { "input": "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?" })
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:42.776Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/mlx/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/mlx/",
"description": "This notebook shows how to get started using MLX LLM’s as chat models.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4372",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"mlx\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:42 GMT",
"etag": "W/\"6af82929cb8150c23cfcf15fabd651e2\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::kbrfh-1713753522659-01a54b21e427"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/mlx/",
"property": "og:url"
},
{
"content": "MLX | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook shows how to get started using MLX LLM’s as chat models.",
"property": "og:description"
}
],
"title": "MLX | 🦜️🔗 LangChain"
} | MLX
This notebook shows how to get started using MLX LLMs as chat models.
In particular, we will: 1. Utilize the MLXPipeline, 2. Utilize the ChatMLX class to enable any of these LLMs to interface with LangChain’s Chat Messages abstraction. 3. Demonstrate how to use an open-source LLM to power a ChatAgent pipeline
%pip install --upgrade --quiet mlx-lm transformers huggingface_hub
1. Instantiate an LLM
There are three LLM options to choose from.
from langchain_community.llms.mlx_pipeline import MLXPipeline
llm = MLXPipeline.from_model_id(
"mlx-community/quantized-gemma-2b-it",
pipeline_kwargs={"max_tokens": 10, "temp": 0.1},
)
2. Instantiate the ChatMLX to apply chat templates
Instantiate the chat model and some messages to pass.
from langchain.schema import (
HumanMessage,
)
from langchain_community.chat_models.mlx import ChatMLX
messages = [
HumanMessage(
content="What happens when an unstoppable force meets an immovable object?"
),
]
chat_model = ChatMLX(llm=llm)
Inspect how the chat messages are formatted for the LLM call.
chat_model._to_chat_prompt(messages)
Call the model.
res = chat_model.invoke(messages)
print(res.content)
3. Take it for a spin as an agent!
Here we’ll test out gemma-2b-it as a zero-shot ReAct Agent. The example below is taken from here.
Note: To run this section, you’ll need to have a SerpAPI Token saved as an environment variable: SERPAPI_API_KEY
from langchain import hub
from langchain.agents import AgentExecutor, load_tools
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import (
ReActJsonSingleInputOutputParser,
)
from langchain.tools.render import render_text_description
from langchain_community.utilities import SerpAPIWrapper
Configure the agent with a react-json style prompt and access to a search engine and calculator.
# setup tools
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# setup ReAct style prompt
prompt = hub.pull("hwchase17/react-json")
prompt = prompt.partial(
tools=render_text_description(tools),
tool_names=", ".join([t.name for t in tools]),
)
# define the agent
chat_model_with_stop = chat_model.bind(stop=["\nObservation"])
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
}
| prompt
| chat_model_with_stop
| ReActJsonSingleInputOutputParser()
)
# instantiate AgentExecutor
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke(
{
"input": "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
}
) |
https://python.langchain.com/docs/integrations/chat/mistralai/ | ## MistralAI
This notebook covers how to get started with MistralAI chat models, via their [API](https://docs.mistral.ai/api/).
A valid [API key](https://console.mistral.ai/users/api-keys/) is needed to communicate with the API.
Head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html) for detailed documentation of all attributes and methods.
## Setup[](#setup "Direct link to Setup")
You will need the `langchain-core` and `langchain-mistralai` package to use the API. You can install these with:
```
pip install -U langchain-core langchain-mistralai
```
We'll also need to get a [Mistral API key](https://console.mistral.ai/users/api-keys/):
```
import getpass
api_key = getpass.getpass()
```
## Usage[](#usage "Direct link to Usage")
```
from langchain_core.messages import HumanMessagefrom langchain_mistralai.chat_models import ChatMistralAI
```
```
# If api_key is not passed, default behavior is to use the `MISTRAL_API_KEY` environment variable.chat = ChatMistralAI(api_key=api_key)
```
```
messages = [HumanMessage(content="knock knock")]chat.invoke(messages)
```
```
AIMessage(content="Who's there? I was just about to ask the same thing! How can I assist you today?")
```
### Async[](#async "Direct link to Async")
```
await chat.ainvoke(messages)
```
```
AIMessage(content='Who\'s there?\n\n(You can then continue the "knock knock" joke by saying the name of the person or character who should be responding. For example, if I say "Banana," you could respond with "Banana who?" and I would say "Banana bunch! Get it? Because a group of bananas is called a \'bunch\'!" and then we would both laugh and have a great time. But really, you can put anything you want in the spot where I put "Banana" and it will still technically be a "knock knock" joke. The possibilities are endless!)')
```
### Streaming[](#streaming "Direct link to Streaming")
```
for chunk in chat.stream(messages): print(chunk.content, end="")
```
```
Who's there?(After this, the conversation can continue as a call and response "who's there" joke. Here is an example of how it could go:You say: Orange.I say: Orange who?You say: Orange you glad I didn't say banana!?)But since you asked for a knock knock joke specifically, here's one for you:Knock knock.Me: Who's there?You: Lettuce.Me: Lettuce who?You: Lettuce in, it's too cold out here!I hope this brings a smile to your face! Do you have a favorite knock knock joke you'd like to share? I'd love to hear it.
```
### Batch[](#batch "Direct link to Batch")
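Only the batch output is shown below; the call itself uses the standard `batch` method shared by LangChain chat models (a minimal sketch reusing the `messages` list defined earlier):
```
chat.batch([messages])
```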
```
[AIMessage(content="Who's there? I was just about to ask the same thing! Go ahead and tell me who's there. I love a good knock-knock joke.")]
```
## Chaining[](#chaining "Direct link to Chaining")
You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](https://python.langchain.com/docs/expression_language/)
```
from langchain_core.prompts import ChatPromptTemplateprompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")chain = prompt | chat
```
```
chain.invoke({"topic": "bears"})
```
```
AIMessage(content='Why do bears hate shoes so much? They like to run around in their bear feet.')
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:43.027Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/mistralai/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/mistralai/",
"description": "This notebook covers how to get started with MistralAI chat models, via",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4372",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"mistralai\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:42 GMT",
"etag": "W/\"1d0281b78d858647ae992bf49c9079d1\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::l8zcx-1713753522648-e8112f2b3de1"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/mistralai/",
"property": "og:url"
},
{
"content": "MistralAI | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook covers how to get started with MistralAI chat models, via",
"property": "og:description"
}
],
"title": "MistralAI | 🦜️🔗 LangChain"
} | MistralAI
This notebook covers how to get started with MistralAI chat models, via their API.
A valid API key is needed to communicate with the API.
Head to the API reference for detailed documentation of all attributes and methods.
Setup
You will need the langchain-core and langchain-mistralai package to use the API. You can install these with:
pip install -U langchain-core langchain-mistralai
We'll also need to get a Mistral API key:
import getpass
api_key = getpass.getpass()
Usage
from langchain_core.messages import HumanMessage
from langchain_mistralai.chat_models import ChatMistralAI
# If api_key is not passed, default behavior is to use the `MISTRAL_API_KEY` environment variable.
chat = ChatMistralAI(api_key=api_key)
messages = [HumanMessage(content="knock knock")]
chat.invoke(messages)
AIMessage(content="Who's there? I was just about to ask the same thing! How can I assist you today?")
Async
await chat.ainvoke(messages)
AIMessage(content='Who\'s there?\n\n(You can then continue the "knock knock" joke by saying the name of the person or character who should be responding. For example, if I say "Banana," you could respond with "Banana who?" and I would say "Banana bunch! Get it? Because a group of bananas is called a \'bunch\'!" and then we would both laugh and have a great time. But really, you can put anything you want in the spot where I put "Banana" and it will still technically be a "knock knock" joke. The possibilities are endless!)')
Streaming
for chunk in chat.stream(messages):
print(chunk.content, end="")
Who's there?
(After this, the conversation can continue as a call and response "who's there" joke. Here is an example of how it could go:
You say: Orange.
I say: Orange who?
You say: Orange you glad I didn't say banana!?)
But since you asked for a knock knock joke specifically, here's one for you:
Knock knock.
Me: Who's there?
You: Lettuce.
Me: Lettuce who?
You: Lettuce in, it's too cold out here!
I hope this brings a smile to your face! Do you have a favorite knock knock joke you'd like to share? I'd love to hear it.
Batch
[AIMessage(content="Who's there? I was just about to ask the same thing! Go ahead and tell me who's there. I love a good knock-knock joke.")]
Chaining
You can also easily combine with a prompt template for easy structuring of user input. We can do this using LCEL
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
chain = prompt | chat
chain.invoke({"topic": "bears"})
AIMessage(content='Why do bears hate shoes so much? They like to run around in their bear feet.') |
https://python.langchain.com/docs/integrations/document_loaders/airbyte_shopify/ | ## Airbyte Shopify (Deprecated)
Note: This connector-specific loader is deprecated. Please use [`AirbyteLoader`](https://python.langchain.com/docs/integrations/document_loaders/airbyte/) instead.
> [Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
This loader exposes the Shopify connector as a document loader, allowing you to load various Shopify objects as documents.
## Installation[](#installation "Direct link to Installation")
First, you need to install the `airbyte-source-shopify` python package.
```
%pip install --upgrade --quiet airbyte-source-shopify
```
## Example[](#example "Direct link to Example")
Check out the [Airbyte documentation page](https://docs.airbyte.com/integrations/sources/shopify/) for details about how to configure the reader. The JSON schema the config object should adhere to can be found on Github: [https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-shopify/source\_shopify/spec.json](https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-shopify/source_shopify/spec.json).
The general shape looks like this:
```
{ "start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>", "shop": "<name of the shop you want to retrieve documents from>", "credentials": { "auth_method": "api_password", "api_password": "<your api password>" }}
```
By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.
```
from langchain_community.document_loaders.airbyte import AirbyteShopifyLoaderconfig = { # your shopify configuration}loader = AirbyteShopifyLoader( config=config, stream_name="orders") # check the documentation linked above for a list of all streams
```
Now you can load documents the usual way
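For example, a simple blocking load might look like this (a minimal sketch using the `loader` configured above):
```
docs = loader.load()
```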
As `load` returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the `lazy_load` method, which returns an iterator instead:
```
docs_iterator = loader.lazy_load()
```
Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record\_handler function when creating the loader:
```
from langchain_community.docstore.document import Documentdef handle_record(record, id): return Document(page_content=record.data["title"], metadata=record.data)loader = AirbyteShopifyLoader( config=config, record_handler=handle_record, stream_name="orders")docs = loader.load()
```
## Incremental loads[](#incremental-loads "Direct link to Incremental loads")
Some streams allow incremental loading, which means the source keeps track of synced records and won’t load them again. This is useful for sources that have a high volume of data and are updated frequently.
To take advantage of this, store the `last_state` property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.
```
last_state = loader.last_state # store safelyincremental_loader = AirbyteShopifyLoader( config=config, stream_name="orders", state=last_state)new_docs = incremental_loader.load()
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:43.336Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_shopify/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_shopify/",
"description": "Note: This connector-specific loader is deprecated. Please use",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"airbyte_shopify\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:42 GMT",
"etag": "W/\"35e817c06b7b18744d50ae87a22f0682\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::sk9zj-1713753522644-ad698028b8f0"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_shopify/",
"property": "og:url"
},
{
"content": "Airbyte Shopify (Deprecated) | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Note: This connector-specific loader is deprecated. Please use",
"property": "og:description"
}
],
"title": "Airbyte Shopify (Deprecated) | 🦜️🔗 LangChain"
} | Airbyte Shopify (Deprecated)
Note: This connector-specific loader is deprecated. Please use AirbyteLoader instead.
Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
This loader exposes the Shopify connector as a document loader, allowing you to load various Shopify objects as documents.
Installation
First, you need to install the airbyte-source-shopify python package.
%pip install --upgrade --quiet airbyte-source-shopify
Example
Check out the Airbyte documentation page for details about how to configure the reader. The JSON schema the config object should adhere to can be found on Github: https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-shopify/source_shopify/spec.json.
The general shape looks like this:
{
"start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>",
"shop": "<name of the shop you want to retrieve documents from>",
"credentials": {
"auth_method": "api_password",
"api_password": "<your api password>"
}
}
By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.
from langchain_community.document_loaders.airbyte import AirbyteShopifyLoader
config = {
# your shopify configuration
}
loader = AirbyteShopifyLoader(
config=config, stream_name="orders"
) # check the documentation linked above for a list of all streams
Now you can load documents the usual way
As load returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the lazy_load method, which returns an iterator instead:
docs_iterator = loader.lazy_load()
Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record_handler function when creating the loader:
from langchain_community.docstore.document import Document
def handle_record(record, id):
return Document(page_content=record.data["title"], metadata=record.data)
loader = AirbyteShopifyLoader(
config=config, record_handler=handle_record, stream_name="orders"
)
docs = loader.load()
Incremental loads
Some streams allow incremental loading, which means the source keeps track of synced records and won’t load them again. This is useful for sources that have a high volume of data and are updated frequently.
To take advantage of this, store the last_state property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.
last_state = loader.last_state # store safely
incremental_loader = AirbyteShopifyLoader(
config=config, stream_name="orders", state=last_state
)
new_docs = incremental_loader.load() |
https://python.langchain.com/docs/integrations/chat/moonshot/ | ## MoonshotChat
[Moonshot](https://platform.moonshot.cn/) is a Chinese startup that provides LLM service for companies and individuals.
This example goes over how to use LangChain to interact with Moonshot Inference for Chat.
```
import os# Generate your api key from: https://platform.moonshot.cn/console/api-keysos.environ["MOONSHOT_API_KEY"] = "MOONSHOT_API_KEY"
```
```
from langchain_community.chat_models.moonshot import MoonshotChatfrom langchain_core.messages import HumanMessage, SystemMessage
```
```
chat = MoonshotChat()# or use a specific model# Available models: https://platform.moonshot.cn/docs# chat = MoonshotChat(model="moonshot-v1-128k")
```
```
messages = [ SystemMessage( content="You are a helpful assistant that translates English to French." ), HumanMessage( content="Translate this sentence from English to French. I love programming." ),]chat.invoke(messages)
```
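Like other LangChain chat models, `MoonshotChat` also supports the standard streaming interface; a minimal sketch reusing the `messages` list above:
```
for chunk in chat.stream(messages):
    print(chunk.content, end="")
```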
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:43.504Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/moonshot/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/moonshot/",
"description": "Moonshot is a Chinese startup that",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3435",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"moonshot\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:43 GMT",
"etag": "W/\"16aede3079582fbfb27ecb88ee9153d8\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::vp7cr-1713753523298-72eb456ccb3d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/moonshot/",
"property": "og:url"
},
{
"content": "MoonshotChat | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Moonshot is a Chinese startup that",
"property": "og:description"
}
],
"title": "MoonshotChat | 🦜️🔗 LangChain"
} | MoonshotChat
Moonshot is a Chinese startup that provides LLM service for companies and individuals.
This example goes over how to use LangChain to interact with Moonshot Inference for Chat.
import os
# Generate your api key from: https://platform.moonshot.cn/console/api-keys
os.environ["MOONSHOT_API_KEY"] = "MOONSHOT_API_KEY"
from langchain_community.chat_models.moonshot import MoonshotChat
from langchain_core.messages import HumanMessage, SystemMessage
chat = MoonshotChat()
# or use a specific model
# Available models: https://platform.moonshot.cn/docs
# chat = MoonshotChat(model="moonshot-v1-128k")
messages = [
SystemMessage(
content="You are a helpful assistant that translates English to French."
),
HumanMessage(
content="Translate this sentence from English to French. I love programming."
),
]
chat.invoke(messages)
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/integrations/document_loaders/airbyte_stripe/ | ## Airbyte Stripe (Deprecated)
Note: This connector-specific loader is deprecated. Please use [`AirbyteLoader`](https://python.langchain.com/docs/integrations/document_loaders/airbyte/) instead.
> [Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
This loader exposes the Stripe connector as a document loader, allowing you to load various Stripe objects as documents.
## Installation[](#installation "Direct link to Installation")
First, you need to install the `airbyte-source-stripe` python package.
```
%pip install --upgrade --quiet airbyte-source-stripe
```
## Example[](#example "Direct link to Example")
Check out the [Airbyte documentation page](https://docs.airbyte.com/integrations/sources/stripe/) for details about how to configure the reader. The JSON schema the config object should adhere to can be found on Github: [https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-stripe/source\_stripe/spec.yaml](https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-stripe/source_stripe/spec.yaml).
The general shape looks like this:
```
{ "client_secret": "<secret key>", "account_id": "<account id>", "start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>",}
```
By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.
```
from langchain_community.document_loaders.airbyte import AirbyteStripeLoaderconfig = { # your stripe configuration}loader = AirbyteStripeLoader( config=config, stream_name="invoices") # check the documentation linked above for a list of all streams
```
Now you can load documents the usual way
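For example, a simple blocking load might look like this (a minimal sketch using the `loader` configured above):
```
docs = loader.load()
```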
As `load` returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the `lazy_load` method, which returns an iterator instead:
```
docs_iterator = loader.lazy_load()
```
Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record\_handler function when creating the loader:
```
from langchain_community.docstore.document import Documentdef handle_record(record, id): return Document(page_content=record.data["title"], metadata=record.data)loader = AirbyteStripeLoader( config=config, record_handler=handle_record, stream_name="invoices")docs = loader.load()
```
## Incremental loads[](#incremental-loads "Direct link to Incremental loads")
Some streams allow incremental loading, which means the source keeps track of synced records and won’t load them again. This is useful for sources that have a high volume of data and are updated frequently.
To take advantage of this, store the `last_state` property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.
```
last_state = loader.last_state  # store safely

incremental_loader = AirbyteStripeLoader(
    config=config,
    record_handler=handle_record,
    stream_name="invoices",
    state=last_state,
)

new_docs = incremental_loader.load()
```
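The docs don't prescribe how to "store safely"; one simple option is to persist the state to disk between runs, assuming the state object can be pickled (a sketch, not an official recommendation):

```
import pickle

# After a successful sync, save the loader's state...
with open("stripe_airbyte_state.pkl", "wb") as f:
    pickle.dump(loader.last_state, f)

# ...and on the next run, read it back before constructing the incremental loader.
with open("stripe_airbyte_state.pkl", "rb") as f:
    last_state = pickle.load(f)
```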
* * *
https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints/ | ## NVIDIA AI Foundation Endpoints
The `ChatNVIDIA` class is a LangChain chat model that connects to [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/).
> [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) give users easy access to NVIDIA hosted API endpoints for NVIDIA AI Foundation Models like Mixtral 8x7B, Llama 2, Stable Diffusion, etc. These models, hosted on the [NVIDIA NGC catalog](https://catalog.ngc.nvidia.com/ai-foundation-models), are optimized, tested, and hosted on the NVIDIA AI platform, making them fast and easy to evaluate, further customize, and seamlessly run at peak performance on any accelerated stack.
>
> With [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/), you can get quick results from a fully accelerated stack running on [NVIDIA DGX Cloud](https://www.nvidia.com/en-us/data-center/dgx-cloud/). Once customized, these models can be deployed anywhere with enterprise-grade security, stability, and support using [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/).
>
> These models can be easily accessed via the [`langchain-nvidia-ai-endpoints`](https://pypi.org/project/langchain-nvidia-ai-endpoints/) package, as shown below.
This example goes over how to use LangChain to interact with and develop LLM-powered systems using the publicly-accessible AI Foundation endpoints.
## Installation[](#installation "Direct link to Installation")
```
%pip install --upgrade --quiet langchain-nvidia-ai-endpoints
```
```
Note: you may need to restart the kernel to use updated packages.
```
## Setup[](#setup "Direct link to Setup")
**To get started:**
1. Create a free account with the [NVIDIA NGC](https://catalog.ngc.nvidia.com/) service, which hosts AI solution catalogs, containers, models, etc.
2. Navigate to `Catalog > AI Foundation Models > (Model with API endpoint)`.
3. Select the `API` option and click `Generate Key`.
4. Save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints.
```
import getpass
import os

if not os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
    nvapi_key = getpass.getpass("Enter your NVIDIA API key: ")
    assert nvapi_key.startswith("nvapi-"), f"{nvapi_key[:5]}... is not a valid key"
    os.environ["NVIDIA_API_KEY"] = nvapi_key
```
```
## Core LC Chat Interface
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(model="mixtral_8x7b")
result = llm.invoke("Write a ballad about LangChain.")
print(result.content)
```
```
(Verse 1)
In the realm of knowledge, vast and wide,
LangChain emerged, with purpose and pride.
A platform for learning, a bridge between lands,
Connecting cultures with open hands.

(Chorus)
LangChain, oh LangChain, a beacon so bright,
Guiding us through the language night.
With respect, care, and truth in sight,
You promote fairness, a truly inspiring light.

(Verse 2)
Through the ether, the wisdom you share,
In classrooms, offices, everywhere.
Translating thoughts, breaking barriers,
Empowering communication, LangChain's our carrier.

(Chorus)
LangChain, oh LangChain, in utility dressed,
Securely, you quench our thirst for the best.
Avoiding harm, unethical ways to detest,
Your algorithms, a true language fest.

(Bridge)
From AI to human interaction,
LangChain fosters a bond, a connection.
A world united, diverse and bright,
In LangChain's warm, welcoming light.

(Verse 3)
Through every challenge, LangChain stands tall,
An ally for all, big or small.
With respect, care, and truthful intent,
In the annals of history, your legacy is lent.

(Chorus)
LangChain, oh LangChain, your ballad we sing,
A tale of unity, of hearts taking wing.
Promoting fairness, positivity your king,
In the realm of language, LangChain we'll bring.

(Outro)
So here's to LangChain, a testament true,
To the power of language, connecting all through.
With gratitude, we honor and salute,
The chain that binds us, the chain of LangChain's pursuit.
```
## Stream, Batch, and Async[](#stream-batch-and-async "Direct link to Stream, Batch, and Async")
These models natively support streaming, and, as is the case with all LangChain LLMs, they expose a batch method to handle concurrent requests, as well as async methods for invoke, stream, and batch. Below are a few examples.
```
print(llm.batch(["What's 2*3?", "What's 2*6?"]))
# Or via the async API
# await llm.abatch(["What's 2*3?", "What's 2*6?"])
```
```
[AIMessage(content="The answer to your question is 6. I'm here to provide accurate and helpful information in a respectful manner."), AIMessage(content="The answer to your question is 12. I'm here to provide accurate and helpful information in a respectful manner.")]
```
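The async single-call counterpart works the same way; for example, from a script you could run (a minimal sketch):

```
import asyncio

async def main():
    result = await llm.ainvoke("What's 2*3?")
    print(result.content)

asyncio.run(main())  # in a notebook, just `await llm.ainvoke(...)` instead
```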
```
for chunk in llm.stream("How far can a seagull fly in one day?"):
    # Show the token separations
    print(chunk.content, end="|")
```
```
Se|ag|ull|s| are| long|-|distance| fly|ers| and| can| travel| quite| a| distance| in| a| day|.| On| average|,| a| se|ag|ull| can| fly| approximately| 6|0|-|1|1|0| miles| (|9|7|-|1|7|7| kilom|eters|)| in| one| day|.| However|,| this| distance| can| vary| greatly| depending| on| the| species| of| se|ag|ull|,| their| health|,| the| weather| conditions|,| and| their| purpose| for| flying|.| Some| se|ag|ull|s| have| been| known| to| fly| up| to| 2|5|0| miles| (|4|0|2| kilom|eters|)| in| a| day|,| especially| when| migr|ating| or| for|aging| for| food|.||
```
```
async for chunk in llm.astream(
    "How long does it take for monarch butterflies to migrate?"
):
    print(chunk.content, end="|")
```
```
Mon|arch| butter|fl|ies| have| a| fascinating| migration| pattern|,| but| it|'|s| important| to| note| that| not| all| mon|arch|s| migr|ate|.| Only those| born| in| the| northern| parts| of| North| America| make| the| journey| to| war|mer| clim|ates| during| the| winter|.|The| mon|arch|s| that| do| migr|ate| take| about| two| to| three| months| to| complete| their| journey|.| However|,| they| don|'|t| travel| the| entire| distance| at| once|.| Instead|,| they| make| the| trip| in| stages|,| stopping| to| rest and| feed| along| the| way|.| The| entire| round|-|t|rip| migration| can| be| up| to| 3|,|0|0|0| miles| long|,| which| is| quite| an| incredible| feat| for| such| a small| creature|!| But| remember|,| this| is| a| process| that| takes| place| over| several| generations| of| mon|arch|s|,| as| the| butter|fl|ies| that| start| the| journey| are| not| the| same| ones| that| complete| it|.||
```
## Supported models[](#supported-models "Direct link to Supported models")
Querying `available_models` will still give you all of the other models offered by your API credentials.
The `playground_` prefix is optional.
```
ChatNVIDIA.get_available_models()
# llm.get_available_models()
```
```
{'playground_nvolveqa_40k': '091a03bb-7364-4087-8090-bd71e9277520', 'playground_nemotron_qa_8b': '0c60f14d-46cb-465e-b994-227e1c3d5047', 'playground_mistral_7b': '35ec3354-2681-4d0e-a8dd-80325dcf7c63', 'playground_seamless': '72ad9555-2e3d-4e73-9050-a37129064743', 'playground_sdxl_turbo': '0ba5e4c7-4540-4a02-b43a-43980067f4af', 'playground_sdxl': '89848fb8-549f-41bb-88cb-95d6597044a4', 'playground_clip': '8c21289c-0b18-446d-8838-011b7249c513', 'playground_yi_34b': '347fa3f3-d675-432c-b844-669ef8ee53df', 'playground_llama_guard': 'b34280ac-24e4-4081-bfaa-501e9ee16b6f', 'playground_deplot': '3bc390c7-eeec-40f7-a64d-0c6a719985f7', 'playground_llama2_70b': '0e349b44-440a-44e1-93e9-abe8dcb27158', 'playground_kosmos_2': '0bcd1a8c-451f-4b12-b7f0-64b4781190d1', 'playground_fuyu_8b': '9f757064-657f-4c85-abd7-37a7a9b6ee11', 'playground_nemotron_steerlm_8b': '1423ff2f-d1c7-4061-82a7-9e8c67afd43a', 'playground_sd_video': 'a529a395-a7a0-4708-b4df-eb5e41d5ff60', 'playground_llama2_code_70b': '2ae529dc-f728-4a46-9b8d-2697213666d8', 'playground_neva_22b': '8bf70738-59b9-4e5f-bc87-7ab4203be7a0', 'playground_cuopt': '8f2fbd00-2633-41ce-ab4e-e5736d74bff7', 'playground_mixtral_8x7b': '8f4118ba-60a8-4e6b-8574-e38a4067a4a3', 'playground_nv_llama2_rlhf_70b': '7b3e3361-4266-41c8-b312-f5e33c81fc92', 'playground_llama2_code_34b': 'df2bee43-fb69-42b9-9ee5-f4eabbeaf3a8', 'playground_llama2_code_13b': 'f6a96af4-8bf9-4294-96d6-d71aa787612e', 'playground_llama2_13b': 'e0bb7fb9-5333-4a27-8534-c6288f921d3f', 'playground_steerlm_llama_70b': 'd6fe6881-973a-4279-a0f8-e1d486c9618d'}
```
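Because the `playground_` prefix is optional, the two constructions below should refer to the same hosted model (shown as a sketch):

```
# Equivalent ways of selecting the same endpoint
llm_a = ChatNVIDIA(model="playground_mixtral_8x7b")
llm_b = ChatNVIDIA(model="mixtral_8x7b")
```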
## Model types[](#model-types "Direct link to Model types")
All of these models above are supported and can be accessed via `ChatNVIDIA`.
Some model types support unique prompting techniques and chat messages. We will review a few important ones below.
**To find out more about a specific model, please navigate to the API section of an AI Foundation model [as linked here](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/codellama-13b/api).**
### General Chat[](#general-chat "Direct link to General Chat")
Models such as `llama2_13b` and `mixtral_8x7b` are good all-around models that you can use with any LangChain chat messages. Example below.
```
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_nvidia_ai_endpoints import ChatNVIDIA

prompt = ChatPromptTemplate.from_messages(
    [("system", "You are a helpful AI assistant named Fred."), ("user", "{input}")]
)
chain = prompt | ChatNVIDIA(model="llama2_13b") | StrOutputParser()

for txt in chain.stream({"input": "What's your name?"}):
    print(txt, end="")
```
```
Hey there! My name is Fred! *giggle* I'm here to help you with any questions or tasks you might have. What can I assist you with today? 😊
```
### Code Generation[](#code-generation "Direct link to Code Generation")
These models accept the same arguments and input structure as regular chat models, but they tend to perform better on code-generation and structured code tasks. An example of this is `llama2_code_70b`.
```
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are an expert coding AI. Respond only in valid python; no narration whatsoever.",
        ),
        ("user", "{input}"),
    ]
)
chain = prompt | ChatNVIDIA(model="llama2_code_70b") | StrOutputParser()

for txt in chain.stream({"input": "How do I solve this fizz buzz problem?"}):
    print(txt, end="")
```
```
def fizz_buzz(n):
    for i in range(1, n+1):
        if i % 3 == 0 and i % 5 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)

fizz_buzz(100)
```
## Steering LLMs[](#steering-llms "Direct link to Steering LLMs")
> [SteerLM-optimized models](https://developer.nvidia.com/blog/announcing-steerlm-a-simple-and-practical-technique-to-customize-llms-during-inference/) support “dynamic steering” of model outputs at inference time.
This lets you “control” the complexity, verbosity, and creativity of the model via integer labels on a scale from 0 to 9. Under the hood, these are passed as a special type of assistant message to the model.
The “steer” models support this type of input, such as `nemotron_steerlm_8b`.
```
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(model="nemotron_steerlm_8b")

# Try making it uncreative and not verbose
complex_result = llm.invoke(
    "What's a PB&J?", labels={"creativity": 0, "complexity": 3, "verbosity": 0}
)
print("Un-creative\n")
print(complex_result.content)

# Try making it very creative and verbose
print("\n\nCreative\n")
creative_result = llm.invoke(
    "What's a PB&J?", labels={"creativity": 9, "complexity": 3, "verbosity": 9}
)
print(creative_result.content)
```
```
Un-creative

A peanut butter and jelly sandwich.


Creative

A PB&J is a sandwich commonly eaten in the United States. It consists of a slice of bread with peanut butter and jelly on it. The bread is usually white bread, but can also be whole wheat bread. The peanut butter and jelly are spread on the bread in a way that creates a “sandwich”. The sandwich is then wrapped in plastic wrap and eaten as a snack or a meal.

The sandwich was invented in the 1930s by a man named Peter Pan, who was a sandwich maker in New York City. He wanted to create a sandwich that was easy to eat and portable, so he came up with the idea of using peanut butter and jelly as the filling. He named the sandwich the “PB&J”, which is short for “peanut butter and jelly”.

Today, the PB&J is a popular snack and meal in the United States, and is often eaten for breakfast, lunch, or dinner. It is also a popular snack for children, and is often served in schools as a healthy and nutritious option.
```
#### Use within LCEL[](#use-within-lcel "Direct link to Use within LCEL")
The labels are passed as invocation params. You can attach them to the LLM using the `bind` method to include them within a declarative, functional chain. Below is an example.
```
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_nvidia_ai_endpoints import ChatNVIDIA

prompt = ChatPromptTemplate.from_messages(
    [("system", "You are a helpful AI assistant named Fred."), ("user", "{input}")]
)
chain = (
    prompt
    | ChatNVIDIA(model="nemotron_steerlm_8b").bind(
        labels={"creativity": 9, "complexity": 0, "verbosity": 9}
    )
    | StrOutputParser()
)

for txt in chain.stream({"input": "Why is a PB&J?"}):
    print(txt, end="")
```
```
A peanut butter and jelly sandwich, or "PB&J" for short, is a classic and beloved sandwich that has been enjoyed by people of all ages since it was first created in the early 20th century. Here are some reasons why it's considered a classic:

1. Simple and Versatile: The basic ingredients of a PB&J sandwich are peanut butter, jelly, and bread. This simple combination makes it a versatile and customizable sandwich that can be enjoyed in many different ways.
2. Classic Flavors: The combination of peanut butter and jelly is a classic and timeless flavor combination that has been enjoyed for generations.
3. Quick and Easy: A PB&J sandwich is a quick and easy snack or lunch that can be made in just a few minutes.
4. Affordable: Unlike many other sandwiches, a PB&J is relatively inexpensive to make, which makes it a great option for those on a budget.
5. Nostalgic: The PB&J sandwich has become a symbol of childhood nostalgia for many people, reminding them of their own childhood lunches and school days.

Overall, the PB&J sandwich is a classic and beloved sandwich that has stood the test of time due to its simple, versatile, and affordable ingredients, classic flavors, and nostalgic appeal.
```
## Multimodal[](#multimodal "Direct link to Multimodal")
NVIDIA also supports multimodal inputs, meaning you can provide both images and text for the model to reason over. An example model supporting multimodal inputs is `playground_neva_22b`.
These models accept LangChain’s standard image formats, and accept `labels`, similar to the Steering LLMs above. In addition to `creativity`, `complexity`, and `verbosity`, these models support a `quality` toggle.
Below is an example use:
```
import IPython
import requests

image_url = "https://www.nvidia.com/content/dam/en-zz/Solutions/research/ai-playground/nvidia-picasso-3c33-p@2x.jpg"  ## Large Image
image_content = requests.get(image_url).content
IPython.display.Image(image_content)
```
![](https://python.langchain.com/assets/images/cell-13-output-1-cc18bedb920c78dfd3a3d8b208f8e6b1.jpeg)
```
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(model="playground_neva_22b")
```
#### Passing an image as a URL[](#passing-an-image-as-a-url "Direct link to Passing an image as a URL")
```
from langchain_core.messages import HumanMessage

llm.invoke(
    [
        HumanMessage(
            content=[
                {"type": "text", "text": "Describe this image:"},
                {"type": "image_url", "image_url": {"url": image_url}},
            ]
        )
    ]
)
```
```
AIMessage(content='The image is a collage of three different pictures, each featuring cats with colorful, bright, and rainbow-colored fur. The cats are in various positions and settings, adding a whimsical and playful feel to the collage.\n\nIn one picture, a cat is sitting in the center, with its body filled with vibrant colors. Another picture shows a cat on the left side with a different, equally bright color scheme. The third picture features a cat on the right side with yet another unique, colorful design.\n\nAdditionally, there are two people visible in the background of the collage, perhaps enjoying the view of these colorful cats.')
```
```
### You can specify the labels for steering here as well. You can try setting a low verbosity, for instance
from langchain_core.messages import HumanMessage

llm.invoke(
    [
        HumanMessage(
            content=[
                {"type": "text", "text": "Describe this image:"},
                {"type": "image_url", "image_url": {"url": image_url}},
            ]
        )
    ],
    labels={"creativity": 0, "quality": 9, "complexity": 0, "verbosity": 0},
)
```
```
AIMessage(content='The image is a collage of three different pictures. The top picture features a cat with colorful, rainbow-colored fur.')
```
#### Passing an image as a base64 encoded string[](#passing-an-image-as-a-base64-encoded-string "Direct link to Passing an image as a base64 encoded string")
At the moment, some extra processing happens client-side to support larger images like the one above. But for smaller images (and to better illustrate the process going on under the hood), we can directly pass in the image as shown below:
```
import IPython
import requests

image_url = "https://picsum.photos/seed/kitten/300/200"
image_content = requests.get(image_url).content
IPython.display.Image(image_content)
```
![](https://python.langchain.com/assets/images/cell-17-output-1-3622b0335afef84dff1bfa0d2df0e89a.jpeg)
```
import base64

from langchain_core.messages import HumanMessage

## Works for simpler images. For larger images, see actual implementation
b64_string = base64.b64encode(image_content).decode("utf-8")

llm.invoke(
    [
        HumanMessage(
            content=[
                {"type": "text", "text": "Describe this image:"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{b64_string}"},
                },
            ]
        )
    ]
)
```
```
AIMessage(content='The image depicts a scenic forest road surrounded by tall trees and lush greenery. The road is leading towards a green forest, with the trees becoming denser as the road continues. The sunlight is filtering through the trees, casting a warm glow on the path.\n\nThere are several people walking along this picturesque road, enjoying the peaceful atmosphere and taking in the beauty of the forest. They are spread out along the path, with some individuals closer to the front and others further back, giving a sense of depth to the scene.')
```
#### Directly within the string[](#directly-within-the-string "Direct link to Directly within the string")
The NVIDIA API uniquely accepts images as base64 images inlined within `<img/>` HTML tags. While this isn’t interoperable with other LLMs, you can directly prompt the model accordingly.
```
base64_with_mime_type = f"data:image/png;base64,{b64_string}"

llm.invoke(f'What\'s in this image?\n<img src="{base64_with_mime_type}" />')
```
```
AIMessage(content='The image depicts a scenic forest road surrounded by tall trees and lush greenery. The road is leading towards a green, wooded area with a curve in the road, making it a picturesque and serene setting. Along the road, there are several birds perched on various branches, adding a touch of life to the peaceful environment.\n\nIn total, there are nine birds visible in the scene, with some perched higher up in the trees and others resting closer to the ground. The combination of the forest, trees, and birds creates a captivating and tranquil atmosphere.')
```
#### **Advanced Use Case:** Forcing Payload[](#advanced-use-case-forcing-payload "Direct link to advanced-use-case-forcing-payload")
You may notice that some newer models have strict parameter expectations that the LangChain connector may not support by default. For example, we cannot invoke the [Kosmos](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/kosmos-2) model at the time of this notebook’s latest release due to the lack of a streaming argument on the server side:
```
from langchain_nvidia_ai_endpoints import ChatNVIDIA

kosmos = ChatNVIDIA(model="kosmos_2")

from langchain_core.messages import HumanMessage

# kosmos.invoke(
#     [
#         HumanMessage(
#             content=[
#                 {"type": "text", "text": "Describe this image:"},
#                 {"type": "image_url", "image_url": {"url": image_url}},
#             ]
#         )
#     ]
# )

# Exception: [422] Unprocessable Entity
# body -> stream
#   Extra inputs are not permitted (type=extra_forbidden)
# RequestID: 35538c9a-4b45-4616-8b75-7ef816fccf38
```
For a simple use case like this, we can actually try to force the payload argument of our underlying client by specifying the `payload_fn` function as follows:
```
def drop_streaming_key(d):
    """Takes in payload dictionary, outputs new payload dictionary"""
    if "stream" in d:
        d.pop("stream")
    return d


## Override the payload passthrough. Default is to pass through the payload as is.
kosmos = ChatNVIDIA(model="kosmos_2")
kosmos.client.payload_fn = drop_streaming_key

kosmos.invoke(
    [
        HumanMessage(
            content=[
                {"type": "text", "text": "Describe this image:"},
                {"type": "image_url", "image_url": {"url": image_url}},
            ]
        )
    ]
)
```
```
AIMessage(content='<phrase>Road in the forest</phrase>')
```
The same technique can be used in more advanced contexts to automatically support new models with custom definitions as they come out. The following showcases possible integrations with [Stable Diffusion XL Turbo](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/sdxl-turbo) for image generation.
```
import base64
from io import BytesIO

from PIL import Image

img_gen = ChatNVIDIA(model="sdxl_turbo")

# def print_return(d):
#     if d: print(d)
#     return d

# img_gen.client.payload_fn = print_return
# img_gen.invoke("Give me a cute picture of a kitten!")

## OUTPUT:
# {'messages': [{'role': 'user', 'content': 'Give me a cute picture of a kitten!'}], 'stream': False}
# Exception: [422] Unprocessable Entity
# body -> prompt
#   Field required (type=missing)
# body -> messages
#   Extra inputs are not permitted (type=extra_forbidden)
# body -> stream
#   Extra inputs are not permitted (type=extra_forbidden)
# RequestID: f5b970d3-0ede-4957-92a6-74590fa34bbf


def to_sdxl_payload(d):
    ## Custom function to coerce our user message into appropriate format
    if d:
        d = {"prompt": d.get("messages", [{}])[0].get("content")}
        d["inference_steps"] = 4  ## why not add another argument?
        ## NOTE: content and b64_json will be coerced to AIMessage content.
        ## If not built in, a string of the response will be returned as content.
        ## From there, feel free to use json.loads, maybe replacing ' with "
    return d


img_gen.client.payload_fn = to_sdxl_payload

# img_gen.invoke("Give me a cute picture of a kitten!")
# OUTPUT: AIMessage(content='iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAEAAElEQVR42pz925rjyJEtDNoycwc...


def to_pil_img(d):
    return Image.open(BytesIO(base64.b64decode(d)))


(img_gen | StrOutputParser() | to_pil_img).invoke("white cat playing")
```
![](https://python.langchain.com/assets/images/cell-22-output-1-806bca122964d48b8d90013bb972bee1.png)
For more advanced or custom use cases (e.g., supporting the diffusion models), you may be interested in leveraging the `NVEModel` client as a requests backbone. The `NVIDIAEmbeddings` class is a good source of inspiration for this.
## RAG: Context models[](#rag-context-models "Direct link to RAG: Context models")
NVIDIA also has Q&A models that support a special “context” chat message containing retrieved context (such as documents within a RAG chain). This is useful to avoid prompt-injecting the model. The `_qa_` models like `nemotron_qa_8b` support this.
**Note:** Only “user” (human) and “context” chat messages are supported for these models; System or AI messages that would be useful in conversational flows are not supported.
```
from langchain_core.messages import ChatMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_nvidia_ai_endpoints import ChatNVIDIA

prompt = ChatPromptTemplate.from_messages(
    [
        ChatMessage(
            role="context", content="Parrots and Cats have signed the peace accord."
        ),
        ("user", "{input}"),
    ]
)

llm = ChatNVIDIA(model="nemotron_qa_8b")
chain = prompt | llm | StrOutputParser()
chain.invoke({"input": "What was signed?"})
```
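In a real RAG pipeline the "context" message would come from a retriever rather than being hard-coded. A minimal sketch of how that could be wired up with LCEL (the `retriever` object is an assumption; any LangChain retriever returning `Document`s would do):

```
from langchain_core.prompts import ChatMessagePromptTemplate
from langchain_core.runnables import RunnablePassthrough


def format_docs(docs):
    # Join retrieved Documents into one context string
    return "\n\n".join(doc.page_content for doc in docs)


rag_prompt = ChatPromptTemplate.from_messages(
    [
        ChatMessagePromptTemplate.from_template("{context}", role="context"),
        ("user", "{input}"),
    ]
)

rag_chain = (
    {"context": retriever | format_docs, "input": RunnablePassthrough()}  # `retriever` is assumed to exist
    | rag_prompt
    | ChatNVIDIA(model="nemotron_qa_8b")
    | StrOutputParser()
)
# rag_chain.invoke("What was signed?")
```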
## Example usage within a Conversation Chain[](#example-usage-within-a-conversation-chains "Direct link to Example usage within a Conversation Chains")
Like any other integration, ChatNVIDIA supports chat utilities such as conversation buffers out of the box. Below, we show the [LangChain ConversationBufferMemory](https://python.langchain.com/docs/modules/memory/types/buffer/) example applied to the `mixtral_8x7b` model.
```
%pip install --upgrade --quiet langchain
```
```
Note: you may need to restart the kernel to use updated packages.
```
```
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

chat = ChatNVIDIA(model="mixtral_8x7b", temperature=0.1, max_tokens=100, top_p=1.0)
conversation = ConversationChain(llm=chat, memory=ConversationBufferMemory())
```
```
conversation.invoke("Hi there!")["response"]
```
```
"Hello! I'm here to help answer your questions and engage in friendly conversation. How can I assist you today? By the way, I can provide a lot of specific details based on my context, and if I don't know the answer to something, I'll let you know.\n\n(Note: The AI follows the guidelines provided in the introduction for a friendly and positive conversation.)"
```
```
conversation.invoke("I'm doing well! Just having a conversation with an AI.")[ "response"]
```
```
"That's great to hear! I'm here to make your conversation enjoyable and informative. I can share details about various topics, such as science, technology, history, and more. I can also tell jokes, share interesting trivia, and help answer your questions. What would you like to talk about?\n\nFor example, did you know that an octopus has three hearts? Two pump blood to the gills, while the third pumps it to the rest of"
```
```
conversation.invoke("Tell me about yourself.")["response"]
```
```
"Sure! I'm an artificial intelligence designed to assist with a variety of tasks and engage in friendly conversation. I can help answer questions, provide information on a wide range of topics, and even perform certain tasks such as setting reminders or providing weather updates. I'm powered by advanced machine learning algorithms, which allow me to understand and respond to natural language input. I'm constantly learning and updating my knowledge base to better assist users. I'm capable of processing and analyzing large amounts"
```
https://python.langchain.com/docs/integrations/chat/octoai/ | ## ChatOctoAI
[OctoAI](https://docs.octoai.cloud/docs) offers easy access to efficient compute and enables users to integrate their choice of AI models into applications. The `OctoAI` compute service helps you run, tune, and scale AI applications easily.
This notebook demonstrates the use of `langchain.chat_models.ChatOctoAI` for [OctoAI endpoints](https://octoai.cloud/text).
## Setup[](#setup "Direct link to Setup")
To run our example app, there are two simple steps to take:
1. Get an API Token from [your OctoAI account page](https://octoai.cloud/settings).
2. Paste your API token in the code cell below or use the `octoai_api_token` keyword argument.
Note: If you want to use a different model than the [available models](https://octoai.cloud/text?selectedTags=Chat), you can containerize the model and make a custom OctoAI endpoint yourself, by following [Build a Container from Python](https://octo.ai/docs/bring-your-own-model/advanced-build-a-container-from-scratch-in-python) and [Create a Custom Endpoint from a Container](https://octo.ai/docs/bring-your-own-model/create-custom-endpoints-from-a-container/create-custom-endpoints-from-a-container) and then updating your `OCTOAI_API_BASE` environment variable.
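For example, pointing the integration at a custom endpoint is just a matter of setting that environment variable before creating the client (the URL below is a placeholder):

```
import os

os.environ["OCTOAI_API_BASE"] = "<your custom endpoint URL>"
```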
```
import os

os.environ["OCTOAI_API_TOKEN"] = "OCTOAI_API_TOKEN"
```
```
from langchain_community.chat_models import ChatOctoAI
from langchain_core.messages import HumanMessage, SystemMessage
```
## Example[](#example "Direct link to Example")
```
chat = ChatOctoAI(max_tokens=300, model_name="mixtral-8x7b-instruct")
```
```
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Tell me about Leonardo da Vinci briefly."),
]
print(chat(messages).content)
```
Leonardo da Vinci (1452-1519) was an Italian polymath who is often considered one of the greatest painters in history. However, his genius extended far beyond art. He was also a scientist, inventor, mathematician, engineer, anatomist, geologist, and cartographer.
Da Vinci is best known for his paintings such as the Mona Lisa, The Last Supper, and The Virgin of the Rocks. His scientific studies were ahead of his time, and his notebooks contain detailed drawings and descriptions of various machines, human anatomy, and natural phenomena.
Despite never receiving a formal education, da Vinci’s insatiable curiosity and observational skills made him a pioneer in many fields. His work continues to inspire and influence artists, scientists, and thinkers today. | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:46.096Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/octoai/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/octoai/",
"description": "OctoAI offers easy access to efficient",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"octoai\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:46 GMT",
"etag": "W/\"5504c7928361a9d1b4f81b88d6c8b64c\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::42t2g-1713753525937-6a93c554688b"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/octoai/",
"property": "og:url"
},
{
"content": "ChatOctoAI | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "OctoAI offers easy access to efficient",
"property": "og:description"
}
],
"title": "ChatOctoAI | 🦜️🔗 LangChain"
} | ChatOctoAI
OctoAI offers easy access to efficient compute and enables users to integrate their choice of AI models into applications. The OctoAI compute service helps you run, tune, and scale AI applications easily.
This notebook demonstrates the use of langchain.chat_models.ChatOctoAI for OctoAI endpoints.
Setup
To run our example app, there are two simple steps to take:
Get an API Token from your OctoAI account page.
Paste your API token in the code cell below or use the octoai_api_token keyword argument.
Note: If you want to use a different model than the available models, you can containerize the model and make a custom OctoAI endpoint yourself, by following Build a Container from Python and Create a Custom Endpoint from a Container and then updating your OCTOAI_API_BASE environment variable.
import os
os.environ["OCTOAI_API_TOKEN"] = "OCTOAI_API_TOKEN"
from langchain_community.chat_models import ChatOctoAI
from langchain_core.messages import HumanMessage, SystemMessage
Example
chat = ChatOctoAI(max_tokens=300, model_name="mixtral-8x7b-instruct")
messages = [
SystemMessage(content="You are a helpful assistant."),
HumanMessage(content="Tell me about Leonardo da Vinci briefly."),
]
print(chat(messages).content)
Leonardo da Vinci (1452-1519) was an Italian polymath who is often considered one of the greatest painters in history. However, his genius extended far beyond art. He was also a scientist, inventor, mathematician, engineer, anatomist, geologist, and cartographer.
Da Vinci is best known for his paintings such as the Mona Lisa, The Last Supper, and The Virgin of the Rocks. His scientific studies were ahead of his time, and his notebooks contain detailed drawings and descriptions of various machines, human anatomy, and natural phenomena.
Despite never receiving a formal education, da Vinci’s insatiable curiosity and observational skills made him a pioneer in many fields. His work continues to inspire and influence artists, scientists, and thinkers today. |
https://python.langchain.com/docs/integrations/document_loaders/airbyte_typeform/ | ## Airbyte Typeform (Deprecated)
Note: This connector-specific loader is deprecated. Please use [`AirbyteLoader`](https://python.langchain.com/docs/integrations/document_loaders/airbyte/) instead.
> [Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
This loader exposes the Typeform connector as a document loader, allowing you to load various Typeform objects as documents.
## Installation[](#installation "Direct link to Installation")
First, you need to install the `airbyte-source-typeform` python package.
```
%pip install --upgrade --quiet airbyte-source-typeform
```
## Example[](#example "Direct link to Example")
Check out the [Airbyte documentation page](https://docs.airbyte.com/integrations/sources/typeform/) for details about how to configure the reader. The JSON schema the config object should adhere to can be found on Github: [https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-typeform/source\_typeform/spec.json](https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-typeform/source_typeform/spec.json).
The general shape looks like this:
```
{ "credentials": { "auth_type": "Private Token", "access_token": "<your auth token>" }, "start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>", "form_ids": ["<id of form to load records for>"] # if omitted, records from all forms will be loaded}
```
By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.
```
from langchain_community.document_loaders.airbyte import AirbyteTypeformLoader

config = {
    # your typeform configuration
}

loader = AirbyteTypeformLoader(
    config=config, stream_name="forms"
)  # check the documentation linked above for a list of all streams
```
Now you can load documents the usual way
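For example, calling `load()` on the loader configured above returns all of the documents as a list:

```
docs = loader.load()
```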
As `load` returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the `lazy_load` method, which returns an iterator instead:
```
docs_iterator = loader.lazy_load()
```
Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record\_handler function when creating the loader:
```
from langchain_community.docstore.document import Document


def handle_record(record, id):
    return Document(page_content=record.data["title"], metadata=record.data)


loader = AirbyteTypeformLoader(
    config=config, record_handler=handle_record, stream_name="forms"
)
docs = loader.load()
```
## Incremental loads[](#incremental-loads "Direct link to Incremental loads")
Some streams allow incremental loading, which means the source keeps track of synced records and won’t load them again. This is useful for sources that have a high volume of data and are updated frequently.
To take advantage of this, store the `last_state` property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.
```
last_state = loader.last_state  # store safely

incremental_loader = AirbyteTypeformLoader(
    config=config, record_handler=handle_record, stream_name="forms", state=last_state
)
new_docs = incremental_loader.load()
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:46.217Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_typeform/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_typeform/",
"description": "Note: This connector-specific loader is deprecated. Please use",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"airbyte_typeform\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:46 GMT",
"etag": "W/\"e96efebde80f0bbdd10d4b331b20b6ca\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::xx5dl-1713753525929-07ca28621c2c"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_typeform/",
"property": "og:url"
},
{
"content": "Airbyte Typeform (Deprecated) | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Note: This connector-specific loader is deprecated. Please use",
"property": "og:description"
}
],
"title": "Airbyte Typeform (Deprecated) | 🦜️🔗 LangChain"
} | Airbyte Typeform (Deprecated)
Note: This connector-specific loader is deprecated. Please use AirbyteLoader instead.
Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
This loader exposes the Typeform connector as a document loader, allowing you to load various Typeform objects as documents.
Installation
First, you need to install the airbyte-source-typeform python package.
%pip install --upgrade --quiet airbyte-source-typeform
Example
Check out the Airbyte documentation page for details about how to configure the reader. The JSON schema the config object should adhere to can be found on Github: https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-typeform/source_typeform/spec.json.
The general shape looks like this:
{
"credentials": {
"auth_type": "Private Token",
"access_token": "<your auth token>"
},
"start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>",
"form_ids": ["<id of form to load records for>"] # if omitted, records from all forms will be loaded
}
By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.
from langchain_community.document_loaders.airbyte import AirbyteTypeformLoader
config = {
# your typeform configuration
}
loader = AirbyteTypeformLoader(
config=config, stream_name="forms"
) # check the documentation linked above for a list of all streams
Now you can load documents the usual way
As load returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the lazy_load method, which returns an iterator instead:
docs_iterator = loader.lazy_load()
Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record_handler function when creating the loader:
from langchain_community.docstore.document import Document
def handle_record(record, id):
return Document(page_content=record.data["title"], metadata=record.data)
loader = AirbyteTypeformLoader(
config=config, record_handler=handle_record, stream_name="forms"
)
docs = loader.load()
Incremental loads
Some streams allow incremental loading, which means the source keeps track of synced records and won’t load them again. This is useful for sources that have a high volume of data and are updated frequently.
To take advantage of this, store the last_state property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.
last_state = loader.last_state # store safely
incremental_loader = AirbyteTypeformLoader(
config=config, record_handler=handle_record, stream_name="forms", state=last_state
)
new_docs = incremental_loader.load()
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/integrations/chat/ollama_functions/ | This notebook shows how to use an experimental wrapper around Ollama that gives it the same API as OpenAI Functions.
Note that more powerful and capable models will perform better with complex schema and/or multiple functions. The examples below use Mistral. For a complete list of supported models and model variants, see the [Ollama model library](https://ollama.ai/library).
You can initialize OllamaFunctions in a similar way to how you’d initialize a standard ChatOllama instance:
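Here is a minimal sketch of that initialization; the import path below assumes the wrapper is provided by the `langchain_experimental` package:

```
from langchain_experimental.llms.ollama_functions import OllamaFunctions

model = OllamaFunctions(model="mistral")
```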
You can then bind functions defined with JSON Schema parameters and a `function_call` parameter to force the model to call the given function:
```
model = model.bind(
    functions=[
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, " "e.g. San Francisco, CA",
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                    },
                },
                "required": ["location"],
            },
        }
    ],
    function_call={"name": "get_current_weather"},
)
```
Calling a function with this model then results in JSON output matching the provided schema:
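For example, an invocation along these lines (the prompt text here is illustrative, not part of the original page) produces a message like the one shown below:

```
model.invoke("What is the weather like in Boston?")
```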
```
AIMessage(content='', additional_kwargs={'function_call': {'name': 'get_current_weather', 'arguments': '{"location": "Boston, MA", "unit": "celsius"}'}})
```
One useful thing you can do with function calling here is extracting properties from a given input in a structured format:
```
from langchain.chains import create_extraction_chain

# Schema
schema = {
    "properties": {
        "name": {"type": "string"},
        "height": {"type": "integer"},
        "hair_color": {"type": "string"},
    },
    "required": ["name", "height"],
}

# Input
input = """Alex is 5 feet tall. Claudia is 1 feet taller than Alex and jumps higher than him. Claudia is a brunette and Alex is blonde."""

# Run chain
llm = OllamaFunctions(model="mistral", temperature=0)
chain = create_extraction_chain(schema, llm)
chain.run(input)
```
```
[{'name': 'Alex', 'height': 5, 'hair_color': 'blonde'}, {'name': 'Claudia', 'height': 6, 'hair_color': 'brunette'}]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:46.512Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/ollama_functions/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/ollama_functions/",
"description": "This notebook shows how to use an experimental wrapper around Ollama",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4568",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"ollama_functions\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:46 GMT",
"etag": "W/\"d9115bf27f9e00078ce8c3658cf3ac2f\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::lmhs6-1713753526204-41659608c35a"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/ollama_functions/",
"property": "og:url"
},
{
"content": "OllamaFunctions | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook shows how to use an experimental wrapper around Ollama",
"property": "og:description"
}
],
"title": "OllamaFunctions | 🦜️🔗 LangChain"
} | This notebook shows how to use an experimental wrapper around Ollama that gives it the same API as OpenAI Functions.
Note that more powerful and capable models will perform better with complex schema and/or multiple functions. The examples below use Mistral. For a complete list of supported models and model variants, see the Ollama model library.
You can initialize OllamaFunctions in a similar way to how you’d initialize a standard ChatOllama instance:
You can then bind functions defined with JSON Schema parameters and a function_call parameter to force the model to call the given function:
model = model.bind(
functions=[
{
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, " "e.g. San Francisco, CA",
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
},
},
"required": ["location"],
},
}
],
function_call={"name": "get_current_weather"},
)
Calling a function with this model then results in JSON output matching the provided schema:
AIMessage(content='', additional_kwargs={'function_call': {'name': 'get_current_weather', 'arguments': '{"location": "Boston, MA", "unit": "celsius"}'}})
One useful thing you can do with function calling here is extracting properties from a given input in a structured format:
from langchain.chains import create_extraction_chain
# Schema
schema = {
"properties": {
"name": {"type": "string"},
"height": {"type": "integer"},
"hair_color": {"type": "string"},
},
"required": ["name", "height"],
}
# Input
input = """Alex is 5 feet tall. Claudia is 1 feet taller than Alex and jumps higher than him. Claudia is a brunette and Alex is blonde."""
# Run chain
llm = OllamaFunctions(model="mistral", temperature=0)
chain = create_extraction_chain(schema, llm)
chain.run(input)
[{'name': 'Alex', 'height': 5, 'hair_color': 'blonde'},
{'name': 'Claudia', 'height': 6, 'hair_color': 'brunette'}] |
https://python.langchain.com/docs/integrations/document_loaders/airbyte_zendesk_support/ | ## Airbyte Zendesk Support (Deprecated)
Note: This connector-specific loader is deprecated. Please use [`AirbyteLoader`](https://python.langchain.com/docs/integrations/document_loaders/airbyte/) instead.
> [Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
This loader exposes the Zendesk Support connector as a document loader, allowing you to load various objects as documents.
## Installation[](#installation "Direct link to Installation")
First, you need to install the `airbyte-source-zendesk-support` python package.
```
%pip install --upgrade --quiet airbyte-source-zendesk-support
```
## Example[](#example "Direct link to Example")
Check out the [Airbyte documentation page](https://docs.airbyte.com/integrations/sources/zendesk-support/) for details about how to configure the reader. The JSON schema the config object should adhere to can be found on Github: [https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-zendesk-support/source\_zendesk\_support/spec.json](https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-zendesk-support/source_zendesk_support/spec.json).
The general shape looks like this:
```
{ "subdomain": "<your zendesk subdomain>", "start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>", "credentials": { "credentials": "api_token", "email": "<your email>", "api_token": "<your api token>" }}
```
By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.
```
from langchain_community.document_loaders.airbyte import AirbyteZendeskSupportLoader

config = {
    # your zendesk-support configuration
}

loader = AirbyteZendeskSupportLoader(
    config=config, stream_name="tickets"
)  # check the documentation linked above for a list of all streams
```
Now you can load documents the usual way
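For instance, a plain `load()` call on the loader configured above returns the documents as a list:

```
docs = loader.load()
```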
As `load` returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the `lazy_load` method, which returns an iterator instead:
```
docs_iterator = loader.lazy_load()
```
Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record\_handler function when creating the loader:
```
from langchain_community.docstore.document import Document


def handle_record(record, id):
    return Document(page_content=record.data["title"], metadata=record.data)


loader = AirbyteZendeskSupportLoader(
    config=config, record_handler=handle_record, stream_name="tickets"
)
docs = loader.load()
```
## Incremental loads[](#incremental-loads "Direct link to Incremental loads")
Some streams allow incremental loading, which means the source keeps track of synced records and won’t load them again. This is useful for sources that have a high volume of data and are updated frequently.
To take advantage of this, store the `last_state` property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.
```
last_state = loader.last_state  # store safely

incremental_loader = AirbyteZendeskSupportLoader(
    config=config, stream_name="tickets", state=last_state
)
new_docs = incremental_loader.load()
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:46.705Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_zendesk_support/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_zendesk_support/",
"description": "Note: This connector-specific loader is deprecated. Please use",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4365",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"airbyte_zendesk_support\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:46 GMT",
"etag": "W/\"fbd6e1906bb690a6ba16ad61557f08d4\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::klsh9-1713753526409-a8a044293531"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/airbyte_zendesk_support/",
"property": "og:url"
},
{
"content": "Airbyte Zendesk Support (Deprecated) | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Note: This connector-specific loader is deprecated. Please use",
"property": "og:description"
}
],
"title": "Airbyte Zendesk Support (Deprecated) | 🦜️🔗 LangChain"
} | Airbyte Zendesk Support (Deprecated)
Note: This connector-specific loader is deprecated. Please use AirbyteLoader instead.
Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
This loader exposes the Zendesk Support connector as a document loader, allowing you to load various objects as documents.
Installation
First, you need to install the airbyte-source-zendesk-support python package.
%pip install --upgrade --quiet airbyte-source-zendesk-support
Example
Check out the Airbyte documentation page for details about how to configure the reader. The JSON schema the config object should adhere to can be found on Github: https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-zendesk-support/source_zendesk_support/spec.json.
The general shape looks like this:
{
"subdomain": "<your zendesk subdomain>",
"start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>",
"credentials": {
"credentials": "api_token",
"email": "<your email>",
"api_token": "<your api token>"
}
}
By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.
from langchain_community.document_loaders.airbyte import AirbyteZendeskSupportLoader
config = {
# your zendesk-support configuration
}
loader = AirbyteZendeskSupportLoader(
config=config, stream_name="tickets"
) # check the documentation linked above for a list of all streams
Now you can load documents the usual way
As load returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the lazy_load method, which returns an iterator instead:
docs_iterator = loader.lazy_load()
Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record_handler function when creating the loader:
from langchain_community.docstore.document import Document
def handle_record(record, id):
return Document(page_content=record.data["title"], metadata=record.data)
loader = AirbyteZendeskSupportLoader(
config=config, record_handler=handle_record, stream_name="tickets"
)
docs = loader.load()
Incremental loads
Some streams allow incremental loading, which means the source keeps track of synced records and won’t load them again. This is useful for sources that have a high volume of data and are updated frequently.
To take advantage of this, store the last_state property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.
last_state = loader.last_state # store safely
incremental_loader = AirbyteZendeskSupportLoader(
config=config, stream_name="tickets", state=last_state
)
new_docs = incremental_loader.load() |
https://python.langchain.com/docs/integrations/document_loaders/airbyte/ | ## AirbyteLoader
> [Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
This covers how to load any source from Airbyte into LangChain documents
## Installation[](#installation "Direct link to Installation")
In order to use `AirbyteLoader` you need to install the `langchain-airbyte` integration package.
```
% pip install -qU langchain-airbyte
```
Note: Currently, the `airbyte` library does not support Pydantic v2. Please downgrade to Pydantic v1 to use this package.
Note: This package also currently requires Python 3.10+.
## Loading Documents[](#loading-documents "Direct link to Loading Documents")
By default, the `AirbyteLoader` will load any structured data from a stream and output yaml-formatted documents.
```
from langchain_airbyte import AirbyteLoader

loader = AirbyteLoader(
    source="source-faker",
    stream="users",
    config={"count": 10},
)
docs = loader.load()
print(docs[0].page_content[:500])
```
````
```yaml
academic_degree: PhD
address:
  city: Lauderdale Lakes
  country_code: FI
  postal_code: '75466'
  province: New Jersey
  state: Hawaii
  street_name: Stoneyford
  street_number: '1112'
age: 44
blood_type: "O\u2212"
created_at: '2004-04-02T13:05:27+00:00'
email: bread2099+1@outlook.com
gender: Fluid
height: '1.62'
id: 1
language: Belarusian
name: Moses
nationality: Dutch
occupation: Track Worker
telephone: 1-467-194-2318
title: M.Sc.Tech.
updated_at: '2024-02-27T16:41:01+00:00'
weight: 6
````
You can also specify a custom prompt template for formatting documents:
```
from langchain_core.prompts import PromptTemplate

loader_templated = AirbyteLoader(
    source="source-faker",
    stream="users",
    config={"count": 10},
    template=PromptTemplate.from_template(
        "My name is {name} and I am {height} meters tall."
    ),
)
docs_templated = loader_templated.load()
print(docs_templated[0].page_content)
```
```
My name is Verdie and I am 1.73 meters tall.
```
## Lazy Loading Documents[](#lazy-loading-documents "Direct link to Lazy Loading Documents")
One of the powerful features of `AirbyteLoader` is its ability to load large documents from upstream sources. When working with large datasets, the default `.load()` behavior can be slow and memory-intensive. To avoid this, you can use the `.lazy_load()` method to load documents in a more memory-efficient manner.
```
import time

loader = AirbyteLoader(
    source="source-faker",
    stream="users",
    config={"count": 3},
    template=PromptTemplate.from_template(
        "My name is {name} and I am {height} meters tall."
    ),
)

start_time = time.time()
my_iterator = loader.lazy_load()
print(
    f"Just calling lazy load is quick! This took {time.time() - start_time:.4f} seconds"
)
```
```
Just calling lazy load is quick! This took 0.0001 seconds
```
And you can iterate over documents as they’re yielded:
```
for doc in my_iterator: print(doc.page_content)
```
```
My name is Andera and I am 1.91 meters tall.
My name is Jody and I am 1.85 meters tall.
My name is Zonia and I am 1.53 meters tall.
```
You can also lazy load documents in an async manner with `.alazy_load()`:
```
loader = AirbyteLoader(
    source="source-faker",
    stream="users",
    config={"count": 3},
    template=PromptTemplate.from_template(
        "My name is {name} and I am {height} meters tall."
    ),
)

my_async_iterator = loader.alazy_load()
async for doc in my_async_iterator:
    print(doc.page_content)
```
```
My name is Carmelina and I am 1.74 meters tall.
My name is Ali and I am 1.90 meters tall.
My name is Rochell and I am 1.83 meters tall.
```
## Configuration[](#configuration "Direct link to Configuration")
`AirbyteLoader` can be configured with the following options:
* `source` (str, required): The name of the Airbyte source to load from.
* `stream` (str, required): The name of the stream to load from (Airbyte sources can return multiple streams)
* `config` (dict, required): The configuration for the Airbyte source
* `template` (PromptTemplate, optional): A custom prompt template for formatting documents
* `include_metadata` (bool, optional, default True): Whether to include all fields as metadata in the output documents
The majority of the configuration will be in `config`, and you can find the specific configuration options in the “Config field reference” for each source in the [Airbyte documentation](https://docs.airbyte.com/integrations/).
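Putting these options together, a fully configured loader might look like the sketch below; the values reuse the `source-faker` example from earlier, and `include_metadata=False` keeps only the rendered text:

```
from langchain_airbyte import AirbyteLoader
from langchain_core.prompts import PromptTemplate

loader = AirbyteLoader(
    source="source-faker",  # Airbyte source connector to pull from
    stream="users",  # stream exposed by that source
    config={"count": 5},  # source-specific configuration
    template=PromptTemplate.from_template("{name} <{email}>"),
    include_metadata=False,  # omit record fields from document metadata
)
docs = loader.load()
```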
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:47.162Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/airbyte/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/airbyte/",
"description": "Airbyte is a data integration",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4367",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"airbyte\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:47 GMT",
"etag": "W/\"f02398d394a615a272bfa603e921bbc7\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::6jz7h-1713753527054-23ddfe65fa6c"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/airbyte/",
"property": "og:url"
},
{
"content": "AirbyteLoader | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Airbyte is a data integration",
"property": "og:description"
}
],
"title": "AirbyteLoader | 🦜️🔗 LangChain"
} | AirbyteLoader
Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
This covers how to load any source from Airbyte into LangChain documents
Installation
In order to use AirbyteLoader you need to install the langchain-airbyte integration package.
% pip install -qU langchain-airbyte
Note: Currently, the airbyte library does not support Pydantic v2. Please downgrade to Pydantic v1 to use this package.
Note: This package also currently requires Python 3.10+.
Loading Documents
By default, the AirbyteLoader will load any structured data from a stream and output yaml-formatted documents.
from langchain_airbyte import AirbyteLoader
loader = AirbyteLoader(
source="source-faker",
stream="users",
config={"count": 10},
)
docs = loader.load()
print(docs[0].page_content[:500])
```yaml
academic_degree: PhD
address:
city: Lauderdale Lakes
country_code: FI
postal_code: '75466'
province: New Jersey
state: Hawaii
street_name: Stoneyford
street_number: '1112'
age: 44
blood_type: "O\u2212"
created_at: '2004-04-02T13:05:27+00:00'
email: bread2099+1@outlook.com
gender: Fluid
height: '1.62'
id: 1
language: Belarusian
name: Moses
nationality: Dutch
occupation: Track Worker
telephone: 1-467-194-2318
title: M.Sc.Tech.
updated_at: '2024-02-27T16:41:01+00:00'
weight: 6
You can also specify a custom prompt template for formatting documents:
from langchain_core.prompts import PromptTemplate
loader_templated = AirbyteLoader(
source="source-faker",
stream="users",
config={"count": 10},
template=PromptTemplate.from_template(
"My name is {name} and I am {height} meters tall."
),
)
docs_templated = loader_templated.load()
print(docs_templated[0].page_content)
My name is Verdie and I am 1.73 meters tall.
Lazy Loading Documents
One of the powerful features of AirbyteLoader is its ability to load large documents from upstream sources. When working with large datasets, the default .load() behavior can be slow and memory-intensive. To avoid this, you can use the .lazy_load() method to load documents in a more memory-efficient manner.
import time
loader = AirbyteLoader(
source="source-faker",
stream="users",
config={"count": 3},
template=PromptTemplate.from_template(
"My name is {name} and I am {height} meters tall."
),
)
start_time = time.time()
my_iterator = loader.lazy_load()
print(
f"Just calling lazy load is quick! This took {time.time() - start_time:.4f} seconds"
)
Just calling lazy load is quick! This took 0.0001 seconds
And you can iterate over documents as they’re yielded:
for doc in my_iterator:
print(doc.page_content)
My name is Andera and I am 1.91 meters tall.
My name is Jody and I am 1.85 meters tall.
My name is Zonia and I am 1.53 meters tall.
You can also lazy load documents in an async manner with .alazy_load():
loader = AirbyteLoader(
source="source-faker",
stream="users",
config={"count": 3},
template=PromptTemplate.from_template(
"My name is {name} and I am {height} meters tall."
),
)
my_async_iterator = loader.alazy_load()
async for doc in my_async_iterator:
print(doc.page_content)
My name is Carmelina and I am 1.74 meters tall.
My name is Ali and I am 1.90 meters tall.
My name is Rochell and I am 1.83 meters tall.
Configuration
AirbyteLoader can be configured with the following options:
source (str, required): The name of the Airbyte source to load from.
stream (str, required): The name of the stream to load from (Airbyte sources can return multiple streams)
config (dict, required): The configuration for the Airbyte source
template (PromptTemplate, optional): A custom prompt template for formatting documents
include_metadata (bool, optional, default True): Whether to include all fields as metadata in the output documents
The majority of the configuration will be in config, and you can find the specific configuration options in the “Config field reference” for each source in the Airbyte documentation.
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/integrations/chat/ollama/ | ## ChatOllama
[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2, locally.
Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile.
It optimizes setup and configuration details, including GPU usage.
For a complete list of supported models and model variants, see the [Ollama model library](https://github.com/jmorganca/ollama#model-library).
## Setup[](#setup "Direct link to Setup")
First, follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance:
* [Download](https://ollama.ai/download) and install Ollama onto the available supported platforms (including Windows Subsystem for Linux)
* Fetch available LLM model via `ollama pull <name-of-model>`
* View a list of available models via the [model library](https://ollama.ai/library)
* e.g., `ollama pull llama3`
* This will download the default tagged version of the model. Typically, the default points to the latest, smallest sized-parameter model.
> On Mac, the models will be download to `~/.ollama/models`
>
> On Linux (or WSL), the models will be stored at `/usr/share/ollama/.ollama/models`
* Specify the exact version of the model of interest as such `ollama pull vicuna:13b-v1.5-16k-q4_0` (View the [various tags for the `Vicuna`](https://ollama.ai/library/vicuna/tags) model in this instance)
* To view all pulled models, use `ollama list`
* To chat directly with a model from the command line, use `ollama run <name-of-model>`
* View the [Ollama documentation](https://github.com/jmorganca/ollama) for more commands. Run `ollama help` in the terminal to see available commands too.
## Usage[](#usage "Direct link to Usage")
You can see a full list of supported parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html).
If you are using a LLaMA `chat` model (e.g., `ollama pull llama3`) then you can use the `ChatOllama` interface.
This includes [special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) for system message and user input.
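As a quick sketch of that interface, system and human messages can be passed directly to `ChatOllama` (the prompt text here is illustrative):

```
from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage, SystemMessage

llm = ChatOllama(model="llama3")
response = llm.invoke(
    [
        SystemMessage(content="You are a concise assistant."),
        HumanMessage(content="Name one moon of Jupiter."),
    ]
)
print(response.content)
```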
## Interacting with Models[](#interacting-with-models "Direct link to Interacting with Models")
Here are a few ways to interact with pulled local models
#### directly in the terminal:[](#directly-in-the-terminal "Direct link to directly in the terminal:")
* All of your local models are automatically served on `localhost:11434`
* Run `ollama run <name-of-model>` to start interacting via the command line directly
### via an API[](#via-an-api "Direct link to via an API")
Send an `application/json` request to the API endpoint of Ollama to interact.
```
curl http://localhost:11434/api/generate -d '{ "model": "llama3", "prompt":"Why is the sky blue?"}'
```
See the Ollama [API documentation](https://github.com/jmorganca/ollama/blob/main/docs/api.md) for all endpoints.
#### via LangChain[](#via-langchain "Direct link to via LangChain")
See a typical basic example of using Ollama via the `ChatOllama` chat model in your LangChain application.
```
# LangChain supports many other chat models. Here, we're using Ollama
from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# supports many more optional parameters. Hover on your `ChatOllama(...)`
# class to view the latest available supported parameters
llm = ChatOllama(model="llama3")
prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")

# using LangChain Expressive Language chain syntax
# learn more about the LCEL on
# /docs/expression_language/why
chain = prompt | llm | StrOutputParser()

# for brevity, response is printed in terminal
# You can use LangServe to deploy your application for
# production
print(chain.invoke({"topic": "Space travel"}))
```
```
Why did the astronaut break up with his girlfriend?
Because he needed space!
```
Out of the box, LCEL chains provide extra functionality such as streaming of responses and async support.
```
topic = {"topic": "Space travel"}for chunks in chain.stream(topic): print(chunks)
```
```
Why did the astronaut break up with his girlfriend before going to Mars?
Because he needed space!
```
For async streaming support, here’s an example - all made possible via the single chain created above.
```
topic = {"topic": "Space travel"}async for chunks in chain.astream(topic): print(chunks)
```
Take a look at the [LangChain Expressive Language (LCEL) Interface](https://python.langchain.com/docs/expression_language/interface/) for the other available interfaces for use when a chain is created.
## Building from source[](#building-from-source "Direct link to Building from source")
For up to date instructions on building from source, check the Ollama documentation on [Building from Source](https://github.com/jmorganca/ollama?tab=readme-ov-file#building)
Use the latest version of Ollama and supply the [`format`](https://github.com/jmorganca/ollama/blob/main/docs/api.md#json-mode) flag. The `format` flag will force the model to produce the response in JSON.
> **Note:** You can also try out the experimental [OllamaFunctions](https://python.langchain.com/docs/integrations/chat/ollama_functions/) wrapper for convenience.
```
from langchain_community.chat_models import ChatOllama

llm = ChatOllama(model="llama3", format="json", temperature=0)
```
```
from langchain_core.messages import HumanMessage

messages = [
    HumanMessage(
        content="What color is the sky at different times of the day? Respond using JSON"
    )
]

chat_model_response = llm.invoke(messages)
print(chat_model_response)
```
```
content='{ "morning": "blue", "noon": "clear blue", "afternoon": "hazy yellow", "evening": "orange-red" }\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n ' id='run-e893700f-e2d0-4df8-ad86-17525dcee318-0'
```
```
import json

from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

json_schema = {
    "title": "Person",
    "description": "Identifying information about a person.",
    "type": "object",
    "properties": {
        "name": {"title": "Name", "description": "The person's name", "type": "string"},
        "age": {"title": "Age", "description": "The person's age", "type": "integer"},
        "fav_food": {
            "title": "Fav Food",
            "description": "The person's favorite food",
            "type": "string",
        },
    },
    "required": ["name", "age"],
}

llm = ChatOllama(model="llama2")

messages = [
    HumanMessage(
        content="Please tell me about a person using the following JSON schema:"
    ),
    HumanMessage(content="{dumps}"),
    HumanMessage(
        content="Now, considering the schema, tell me about a person named John who is 35 years old and loves pizza."
    ),
]

prompt = ChatPromptTemplate.from_messages(messages)
dumps = json.dumps(json_schema, indent=2)

chain = prompt | llm | StrOutputParser()

print(chain.invoke({"dumps": dumps}))
```
```
Name: John
Age: 35
Likes: Pizza
```
## Multi-modal[](#multi-modal "Direct link to Multi-modal")
Ollama has support for multi-modal LLMs, such as [bakllava](https://ollama.ai/library/bakllava) and [llava](https://ollama.ai/library/llava).
Browse the full set of versions for models with `tags`, such as [Llava](https://ollama.ai/library/llava/tags).
Download the desired LLM via `ollama pull bakllava`
Be sure to update Ollama so that you have the most recent version with multi-modal support.
Check out the typical example of how to use ChatOllama multi-modal support below:
```
pip install --upgrade --quiet pillow
```
```
Note: you may need to restart the kernel to use updated packages.
```
```
import base64
from io import BytesIO

from IPython.display import HTML, display
from PIL import Image


def convert_to_base64(pil_image):
    """
    Convert PIL images to Base64 encoded strings

    :param pil_image: PIL image
    :return: Re-sized Base64 string
    """
    buffered = BytesIO()
    pil_image.save(buffered, format="JPEG")  # You can change the format if needed
    img_str = base64.b64encode(buffered.getvalue()).decode("utf-8")
    return img_str


def plt_img_base64(img_base64):
    """
    Display base64 encoded string as image

    :param img_base64: Base64 string
    """
    # Create an HTML img tag with the base64 string as the source
    image_html = f'<img src="data:image/jpeg;base64,{img_base64}" />'
    # Display the image by rendering the HTML
    display(HTML(image_html))


file_path = "../../../static/img/ollama_example_img.jpg"
pil_image = Image.open(file_path)
image_b64 = convert_to_base64(pil_image)
plt_img_base64(image_b64)
```
*(Output: the example image `ollama_example_img.jpg` is displayed inline as a base64-encoded JPEG.)*
gIy1iZyppn0XHKkqB43V1PQg5FOryvQ/GFtcOPs9w1tOesUhxu+nY/zrt7LxHG+Eu02H++vI/EdRXk1cNOmzGUGjdpKSOVJUDxurqehByKdXOZiUUuKSgApKWigYUUUnegAopaMUAJSUtFMBKKWkoAKSlopjEopcUlABRRRQAYpKWjFACUUUUAaVFFFc5IUUUUAFFFFABRRRQAUUUUCCiiigBKKWkoAKKq3moWtim64lVT2UcsfoK5fVPFknluYSttCvWRyM4+p4Fa06M6j91FKLZ095qFrYpunlVT2UcsfoK5jU/FcnluYSttCOsjkZx9egrznWPHdvG7rZhrqY9ZXztz/ADP6fWuK1DVr3VJN93OzgdE6Kv0HSvbwuTyl70zeNPudrq/jqCN3WzDXUx6yvnbn+Z/T61xeoate6pJvu52cDonRV+g6VRor3qOFp0vhRqkhaKSlrcYUUUUgFopKWkAUtJRUgLSUtIaQC0UlLSAKKKKQC0UlLSAKWkopALRRRSAKKKKkAzS0lLSAKKKKQBS+4/SkopNAb+meLL+w2xyt9phH8Mh+YfRuv55ru9C8XwTsPsd0YZT1gkOM/h0P4c15NR05rmq4aFToJxTPpCx8SwS4S7Xyn/vDlT/hW2jrIgdGVlPQg5FfOWmeK7+wwkrfaYR/DIfmH0PX8813eheL4J2H2O6MMp6wSHGfw6H8Oa8ivl8o6xMJUex6pRWDZeJYZcJdL5T/AN4cqf8ACtxJElQOjKynoQcivOlCUXZoxcWh1FFFSSJmloooGFJS0UAFJS1Xur23s03zyqg7DufoKEm9gsT1BdXtvZpvnlVB2Hc/QVzmoeKH2N9nAhjHWSTGcfyFeeaz46tYncWxa8nPWQn5Afr3/D867sPgKtZ6I1jTb3PQ9Q8UNsb7OBDGOskmM4/kK881jxzawu4ti15Oerk/ID9ep/D864nUdav9VfN1OSnaNeEH4f481n17+GyqFPWZtGCRoalrV/qr5upyU7Rrwg/D/HmqFJRXqRgoq0VYsWiiimMKWkoqQFooopALRSUtIBKWkNFIBaKKKQBS0lFSAtFFFIBaKSlpAFFFFIApaSikAtFFFIAoooqQFra0zxRqGnbUL/aIR/yzlOcD2PUfy9qxKM1EoKSs0Fj1TQ/GFtcOv2e4a2nPWKQ43fTsf5129l4ijfCXabD/AH15H4jqK+da2tM8Uahp2EL/AGiEf8s5DnA9j1H8vauCvgIy1iRKmmfRccqSoHjdXU9CDkU6vK9D8YW1w4+z3DW056xSHG76dj/Ou3svEcb4S7TYf768j8R1FeTVw04MwlTaN3FJSRypKgeN1dT0IORTq5zMSiiigAooooGJ3paKSgAopaSmAlFLRigBKKKKBiUUtFMBKKXFJQAUUUUAaVGKKK5iRKKWkpgFFFFABRRRQAUUUUCCiiigArl/EGvXFncS28TLCiKGaQnnGMn6V0/avO/Gn+s1H/r3b/0CunCU1OolI0pq71OK1jx5bxu62Qa7mPWV87M/zP6fWuK1DV77VJN93OzgdE6Kv0A4qjRX3FDCUqK91HUkkLRSZpa6Bi0UlLSEFFFFSMWikpaQBRRRSAWikpaQBS0lFSAtJS0hpALRSUtIAooopALRSUtIApaSikAtFFFIAoooqQDNLSUtIAooopAFLSUUgFo6UUVLQze0zxXf2G1JW+0wj+GQ/MPoev55rvNA8WxXLYsrhopurQP3/DofwryWt/wf/wAjDH/1zb+VctehCUW7Eyimj6A0q+a/shM6KrZKnHSr1Yvhv/kG/wDA2rar5uokpNI4pbhRRRUEhRRRQBga7rM9nN9ng2pldxkPJ79O1eYaz47tYXcWxa9nPWQn5Afr1P4fnXdeKf8Aj+b/AK4/418+19FlOEp1Y80lsdVNKxoalrd/qz5upyU7Rrwg/D/HmqFJRX0cYRgrRVkai0tJRQAtFFFIApaSikMWiiikAUtJRUgLRRRSAWikpaQCUtIaKQC0UUUgClpKKkBaKKKQC0UlLSAKKKKQBS0lFIBaKKKQBRRRUgLRSUZpALW1pnijUNOwhf7RCP8AlnJ2Hseo/l7Vi0VEoKSs0G56poPi6C6kC2s7wXB/5ZP/ABfTsf516HpN+9/al5FVXVtpx0PAOa8A8Lf8jJZ/Vv8A0E17p4a/485f+uh/kK8bH0Yw2MasUjbooorzDnDFJS0UAJRRRQAUUUUDCkpaKAEoxS0lMBKKWkoAKKKKBiUUtFMDRooormJCiiigApKWigBKKWkpgFFFFABRRRQIQ9K878af6zUf+vdv/QK9EPSvO/Gn39S/69z/AOgV2YH+MjSnueDUtNzS19/0OoWlpKKQxaKTNLSAWikpaQgoooqRi0UlLSAKKKKQG54T8OnxRr0eli6FtvRn8zZvxgZ6ZFQeI9Dm8Oa9daXO29oWG2TGA6kZBx24PvjpXSfCX/kfrf8A64y/+g1ueLtMk8aWGhazZKDdyTf2dd4H3XDEBiB0Gdx+jCvJq4uVLF8kn7lvx1/yIcrM5uz8BT3HgSfxPLeiFUVnjt/KyXUHGd2eOc9j+tYt1pljD4dstQi1WKW8nkZZbIL80QBOCTn2HYda9h129g/4RLxPotnj7JpNpBbr/vYJbn6bR9Qa5RdMg1H4f+CrR1Crdak0cjrgEqZHB59cVz0sdUa56j05vLa1xKTOB0axTU9csLCRmRLm4jiZl6gMwBI9+as+J9Kh0PxJfabbvI8Vu+1WkI3EYB5wMd/SvTbjxlJpvxBtfDNnp1kmlRXMVuI/K+YMdvzg9iCeOO1YHiTw3b6z4y1uebXtM04rc7BHdS7Wb5FOR7c4/CtIY2TqqVRcsWr9+o1K7MC68OW1v8PrHxCs0pubi8MDRkjYFG/kcZz8vrXPQIktxHHJIIkZgrSHkKCeTXrI1d/CHwus3tBZ30qajJBDOw3xg7pPnX14BA+tUvFMsOuaZ4P16W1hjvLufy59i4D4YdfbIPX1pUcZPmaktG2k/wAdgUmcDrun2mmatLaWOox6hAgG24jGA2QDjqeh4rNr2mSwsE+JXiXVbq1SZNLsknjhKjG7ywc49cA/nmqnh7xZc+K9F8SrqNnZ+bBp8jxSxR4ZQVb5ec8cCkswkoJqN7JXd+4c2hw2q+HLbT/Bei61HNK09+7iRGI2qFJ6cZ7etc3XrDeJbjwz8L/DlxZ21tJcyNIqyTpu8sBmJwPU8Vj/ABLMN7Y+G9bW3jhudQtC8/lrgMQEP/sx69sVWGxVRz5JrRuST9AjLWx5/S0lFekWLRRRSAKKKKkAzS0lLSAKKKKQBS0lFIBa3vB3/Iwx/wC4/wDKsGt/wd/yMMf/AFzb+VZVfgYPY918Nf8AIM/4Ga2qxfDX/IM/4G1bVfKVfjZwy+JiUUtFQSJRRiigDjPFP/H63/XH/GvnuvoTxT/x+t/1x/xr56r63JP4b+R2U/hHUUlLXtmgClpKKQC0tJRUiFooopAFLSUUmMWiu4tPAFrDo9rqPiDxBb6T9rTfBC0ZkdlIzkjI9QeM4z2pj/D97TxRpFhP
eJcabqZzDeW2PnXGTgc4PI9RzXD9foXaT2v07dn1J5kcVS10lz4d02C78RQyauts+mSMltDKuWucFhgHI54HQd+1c1W1OtCorx/rqNO4tFa/hbSIde8TWOmTyPHFO5DMmNwAUnjP0qrrFmmna5qFjEzNHbXMkKsx5IViMn34pe2j7T2fW1wvrYp0Vp2egXt7oV9rEPl/ZbJlWXLYbLEAYHfrWjN4ctovh9b+IRNKbmW8NuY8jYFw3PTOePWoliKcXa/W3zC6OapaQ0VqwForpPG3hy38Mara2ltNLKstok7NLjO4lhgYHTiubrKlVjVipx2GncKWkrpdR8OW1n4F0jXkmla4vZnjdCRsUKWHHGc/L60p1YwcU+rsJs5uiiir0GLRSUtIAooopAFLSUUgFooopAFFFFSAtFJRmkBs+Fv+Rks/q3/oJr3Tw1/x5y/9dD/IV4X4W/5GSz+rf+gmvdPDX/HnJ/10P8hXkZkZVdjcooorxjmEopaMUAJijFFFMBKKWkoAKKKKBhRRRQAlFLzRTASkpaKAEopaSgDRooornEFFFFABRRRQAUUUUwEopaSgAooooAQ9K878aff1L/r3P/oFeiHpXnfjT7+pf9e5/wDQK7MB/GRdPc8Fooor9B6HWLS03NLSAWlpKKkBaKTNLSAWikpaQgoooqRi0UlLSA6PwRr9r4a8SxaleRzSQrG6lYQC2SPcitvwV8QYfDDarHcQTzW9y5mgRAPlk565IwCMZIz0rgaK5K2DpVm+frb8CXFM63S/FkFv4c8SWV6s8t5qxDrIgBXdkkliTkcnsDSzeLYV8IeH9NtEmW/0u6a4MjqNhO9mGOcnqOorkaWpeCpN387/AIW/IOVHpz+M/Bd1rdv4iu9I1AaqpRnSMqY94wA33hkjHHToOO9cT4p1WDXPE19qVskiQ3D7lWQAMBgDkAn0rHopUcFToy5ot7W17AopHT3niO0uPh5YeHkjnF3b3ZnZyB5ZU7+Ac5z8w7VYufFVjN4c8N6csNwJtLnMszFV2sN2flOeTj1xXI0ho+qU/wAW/mwsj1HSfFMusfEfU9Q0vSri9sbu1C3Nq21ZPLVVUnGcE54AzzmtvS7bQ9O8J+J7jTtJ1KwRrR1eXUF2liVbCKCc8E/mRya8as7260+6S6s55IJ0+7JGxUitLVPFWu61bi31HU554Rg+WSApI6EgAZ/GuGrl0pSSg7R0/ATj2L+r+I7TUPBWiaNFHOtzYM7Ss6jYdxP3TnPfuBR4l8R2ms6B4esLeKdZdNt2ilMgADEhB8uCcj5T1xXMUV2xwsItNdG395VkLRSUtbjClpKKQC0UUUgCiiipAM0tJS0gCiiikAVv+Dv+Rhj/AOubfyrArf8AB3/Iwx/9c2/lWVb4GD2PdvDX/IN/4G1bVYvhr/kGf8DNbVfJ1fjZwy+JhRRRWZIYpMUtFMDi/FP/AB+t/wBcf8a+eq+hfFP/AB/N/wBcf8a+eq+uyP8Ahv5HZT+EKWkor3DQdRSUtSAClpKKQC0tJRUiFooopNAekpr3hrxNo+n6d4rju9PvrOERw3kSkhlwACRg9cemPcZq5aaLqOh+LvC0P9rf2lokkpaykU/KvByMZOOv41lP4t8MeI7OyHinTr0X9rGIvtNmw/eKOgYE/wCPOeRmpLj4gaWNX0COw0+e30XSHLqhIaVyRj1x39ec1866NbWEINLW6eqWj2ZnZ9DTgiR4vic7IpZZDtJGSPmlpt/rbeAfCfh+30i0tTcX9uLm5mmj3F8hTj6c49gBWDD4ysI4vGKmG5zrbbrfCr8gy5+f5uPvDpmt+2e31fwZosXiTw5q1wsClLO509Q/mRjACkA5XIAHPXGQaznTlTs6q92607+6vyYW7mxdTR3XjPwHdxW0VuLi1aXy4wAF3R5wPYZo0nxY+s+P9R8MXOm2X9ltJPHsEXLMpOWY9DuwT071U8ZaxY+HvF3hO4kt5I4bG1Ytax4Z41K7VXrjjGOvauL0PxTZaZ8Q7jxDNFcNaSTzyBEUF8PuxkZxnnnmopYZ1aXOo/ZdvW7BK6Ox0HxHf2Hw81/yvIzpUqwW2YhjbuA+Ydz71mxam9l8L9J1QorvFrfnMmBhvvkjHvWT4c8VaPbWevaZrNvdPYalIJFMAG9SCTg5P0/Kqd14hsZ/AkPh21iujPHfmdWdVwUO4AcHO7kdq3eFlzuPLvJO/lb/ADuOx6F/wj1mnxLPiPav9kfYDqG/b8m7bt6fT568e1O+bU9Uur51CtcStJtHRcnoP5V6Zqmr3+kfBiy0+/Roby7Y28aPw/khickHpxhfoRXlNbZdCTUpyd7e6vRDiezeN/HF54Y1fTrWzs7Rw9nHJM8qZaRSWGzPYcH86ePCOkXPxUjlNrGLNtPGoGDACeZu28jpjvj1rE1Xxh4H8Q3VpcappeptLaxKisu0eYBztYbumf5npWY3xJm/4Tw6+tnmz8n7L9mJGTDnOM9M55/T3rhhhq3JanFxdnfzJSfQ7ue3utZ0nVbXxJJoTQmJmsjaygvCwzjkjtx/k1y5sF1T4d+CbByQlxqLxsQcEAyODj8Kx7zVvAkNtevpuiXst3cIVjS7YCOEnuuGJ4/yRVWbxbCvg/QNNtEmS/0u6a4MjKuzO9mXHOT1HUCrp4arZcqa17Wto9dwSZ1up+M/7C8YDw5Z6XZDRYJEt5ITECZAQMnPrz369+tWG0SzhvfGfhW2iTbJbre2iYGVIUErn03bQPaseXxh4Nv9Wh8QX2j341ZNrNFGymF3Xox5zxgdvwNYmm+N5YfiD/wk17GxWRiJYoeTsK7QBnGcYX0yRRHDVHF8kWmlr5yQWZ2epeHbdvhouhwxg6vY20WoOgHzZctuGevA3D8q5n4mNFZ3+l6FAE2aZZojlRyXYDOfwCn8aksfiFHb/Ee78QyxTmxuEMJiUDeIwBt4JxnKgnnua5PX9VbW9fvtSIYC4mZ1DdQvRQfoMCtcJh60aqdTbf5voOKd9TOopKWvWNAooopAFLSUUgFooopAFFFFSBseFv8AkZLP6t/6Ca918Nf8ecv/AF0P8hXhXhb/AJGWz+rf+gmvdfDX/HnL/wBdD/IV4+ZmVU3KKKK8Y5gooooAKMUlLQAlFLSUwCkpaKAEopaSgApM0tFAwoxRRQAlFLRTAv0UUVziCiiigAooooAKKKKACiiigBKKWkpgIeled+NPv6l/17n/ANAr0Q9K878aff1L/r3b/wBArswH8ZGlPc8EpaSiv0LodQtFFFIBaWm5paQC0tJRUgLRSZpaQC0UlLSEFFdV4Z+H2t+KIxcW8aW9mT/x8TkgN67RjJ/l712J+Bs/l5GvR+Z6fZTj8939K8+rmWFpS5Zz1E5xXU8lorpfE/gTWvCoEt5EktqTgXEBLJnsDxlT+nua5qumlVhVjzU3dDTT2Ciiu28NfDDXPEFsl25jsrSQApJMCWceoUdvqRmorV6dCPNUdgbS3OKor1qT4HTCImPXkaT+61qVH57j/KuD8S+DtY8KzKNQhUwucJcRHdGx9PUH2OK56OPw9aXLCWv
3CU09jBpaSvRNA+EWratZR3d7dR6ekgDKjIXfB7kZAH0zn1rWviKVBc1R2BtLc88pK9B8RfCbVtEsZL21uY7+GIFpAilHVRyTjJyPxzXnxpUcRTrx5qbuNNPYWikpa1GFFFFIBaKSlpAFLSUUgFooopAFFFFSAZpaSlpAFb/g7/kYY/8Arm38qwK3vB3/ACMUf/XNv5VlW+Bg9j3fw1/yDf8AgZrarF8Nf8g3/gZraPWvkqvxs4ZbhRRRWZIUUUUAcV4p/wCP5v8Arj/jXz1X0N4p/wCP5v8Arj/jXzzX1+Rfw38jsp/CLRSUte6aBS0lFIB1FJS1IAKWkopALS0lFSIWiiikAVs6X4r13Rbc2+nanPBCcnywQVGepAOcfhWNRWc6caitJXXmBYvL261C6e6vJ5J53OWkkYsT+NQUUU1FRVkAVLb3EtpcxXEDlJYnEiNjO1gcg/nUVFJpPRjNDVtb1LXblbjU7yS5lUYUtgBR6ADgfhVCiiojBRVoqyELRSUtMYlLSGikAtFFFIApaSipAWiiikAtFJS0gCiiikAUtJRSAWiiikBseFv+Rks/q3/oJr3Xw1/x5y/9dD/IV4V4W/5GWz+rf+gGvdfDP/HnL/10P8hXjZmZVTcopcUleKcwUUUUAFFFFABRRRQAYpKWigBKKKKYBikpaKAEopcUlABRRRQMv0UUVgIKKKKACiiigAooooAKKKKACiiimAh6V5141+/qX/Xuf/QK9FPSvOvGv39S/wCvc/8AoFdmA/jI0p/EeCUUlLX6H0OoKWkooAWiiipAWlpuaWkAtLSUVIC10PgjQF8S+KrSwlz9n5knwf4F6j8TgfjXO5r0z4KBP+Epvifv/Yzj6b1z/SuLMKsqWGnOO9iZOyPa5ZbXStNeR9kFpbRZOBhURR6DsAK8xX422p1Ty20iQWG7b53m/vMZ+9tx+ma6v4m+b/wr3VfJzu2x5x/d8xc/pmvm2vncpy+liacp1dehlCKauz60dLPWNL2sqXFndRZwRlXRh/hXzJ4p0RvDviS90wlikT5jY9ShGVP1wfzr3j4ZXDXHw+0tnOSqumfZZGA/QV5r8aIVj8X2sqjBks1z7kO3P8qeUydHFyoX01/AIaSsY3w48Px+IfF0MVyge1tlM8ynowBAA/Mjj0zX0Fq+qW2haRcajdkrBbpubb1PYAe5OBXlXwOiU3WtykDcqQqPoS+f5V03xhlMfgcoOklzGp+nJ/pU5jevj1Rk9FZBPWVjI0j4zw3usxWt7pn2W1mcIJhNuKZOAWGBx6+nvXo+r6Va63pNxp92gaGdCp45B7Ee4ODXzX4a8Kap4pvfIsIsRKR5tw4wkY9z3PsK+noI2jt443beyqAWxjJx1rHNKFHD1I+wdn18hTST0PmPRdLMXjyy0u7VWMWopDKvY7ZMEfTivb/iRr1/4e8Jm505/LuJJkiEgUHYCCSeeO2PxryLxncNpHxQvbuADfb3cc6g9CQFb+dezRXnh34heHhA0iTQygF4d+2SJhz9QR69D7iurMG5So15q8bK5UujYz4e61eeIPB8F3qDCS4DvG74A34PXA9q8k8O+EF1z4iXmnlD/Z1lcyNNjgbFcgLx6nj6Z9K9cvNT0D4feHBbRvHGkKnybYPmSVjzj15Pc8Cqfhmzi8HeELvV9VGy7uN15eHHO48hB7jOMepNclKvKl7SdJW59F/XkSna9jzH4meHdE8Oavb2+ktKssqGSWAtuWMZwuCeeeeDmuGq9rOq3Gt6xdaldHM1w5YjPCjoF+gGBVE19Nhqc4Uoxm7vqbLRai0UlLWxQUUUUgFopKWkAUtJRSAWiiikAUUUVIBmt/wd/wAjFH/1zb+VYFb/AIO/5GGP/cb+VZVv4bB7Hu/hr/kG/wDAzW3WJ4a/5Bv/AAM1t18jV+NnDLcKDRRWZIlFLRQBxXin/j+b/rj/AI18819DeKf+P5v+uP8AjXzxX2GRfw38jspfCLRRRXumgtFJS0gClpKKQDqKSlqQAUtJRSAWlpK3dF8Ha/r5VrDTZWiP/LZxsj/76PX8M1lUqwpq83b1E2YdFew6N8FFULJrepFj1MNqMD/vpv8AAV5PqUCWuqXdvHnZFM6Lk84DECubD42jiJuNJ3sJST2K1LSV2ngv4d3/AIsH2qST7JpqtjziuWkI6hR/U8fXkVpXr06EOeo7IbaWrOMor6FtPhL4Tt4gs1rPdMOry3DAn/vkgVW1T4PeHbuFhYm4sZcfKyuZFz7huT+BFeUs8wzdrMj2kTwOlrV8ReHb7wxqz2F8o3Abo5F+7IvYj/PFZNetCcakVKLumWncWiu28D/Du68Vg3lzK1rpqtt8wLlpSOoX/E9+x5x6lb/CjwjDEEewlnYDl5LhwT/3yQP0rzsTmuHoT5HdvyJc0j54or23Xvg3pk9u8miTyWtwBlYpW3xt7ZPI+uT9K8ZvbO4069ms7uJoriFikiN1BFa4XG0sSv3b26dRxknsV6WvQvh78O08TQPqWpSSR2KvsjjThpSOpz2A6e5z0xXoE3wq8IzwtDFbSxSKMF47hiwP0JI/SuevmtClUcHd23sJzSdj59ore8XeGJ/CmttYyv5sTKJIZsY3qfbsQeKwsEAEg4PTiu6FSNSCnF6MpO4lLSUVQxaKKKQC0UlLSAKKKKQBS0lFIDZ8Lf8AIyWf1b/0E17t4Y/485f+uh/kK8I8K/8AIy2f1b/0E17v4Z/485f+uh/kK8XNDGtsblFLRivEOcTFJS0UwEooooAKKKKACiiigAooo5oASilooASiiimAlFLRQBeooorAAooooAKKKKACiiigAooooAKKKKAEPSvOvGv39S/692/9Ar0U9K868a/f1L/r3b/0Cu3AfxkaU9zwOiiiv0XodQtFJS0gClpKKQC0UUVIC0tNzS0gFrp/AOvx+HPF1reTtttnzDMfRG7/AIEA/QVzFFY1qUa1N05bPQTV1Y+uZ4LfUbGSCZVmtriMqw6hlYf4GvMH+CFm2oeYmsTLZFs+T5ILgegbOPxxXB+HPiPr/hu3W1hljubRfuw3ALBB6KQQQPbp7V00Xxb8TaxcxafpemWi3c7BEwGc5Pcc4H1PAr5ZZfjsI5eyl7vf/hzHlktj2PTdPttK0+Cxs4hFbwKERB2H9TXifxqk3eLbSMfwWSk/i7/4V7XpsNzb6bbRXk/n3KxgTS4xvfHzEegz2r5x+IGrprXjXULmJt0KOIYz6hBgkexIJ/Gsclpyni3PtcVP4rncfAz7+u/S3/8AalejeJ/Ddv4p0+Gxu5XjgSdZXEfVgAflz269a8z+B9wqahrFsT88kUbgeylgf/QhXefEXUtS0nwbd3mmSGKZWQNIFyVQnBI9DyOanMIzeYtQdm2rfcglfmM7xF4x0LwBp66XpsET3SLiO0i4VPdz29fU/rXZadcPd6ba3MgUPLEjsF6AkA8V8nFpbmcli8ssjck5LMSfzJzX1lYQG1022gPWKJUP4AClmeChhYQ1vJ3uwnFJI+dPiT/yULV/99P/AEWtcsCQcjII710fj+4W58d6xI
hBAn2Z91AU/qK56KKSaZIokZ5JGCqq8liTgAV9PhUlh4X7I2Wx2/wv8MnXvEgvLhN1nYESPno7/wAK/wBT9Md66f4q6veavfxeFtIgnuWixNdJAhc5/hBx2AOT9R6V09jDa/DX4eNJMFaeNPMkwf8AWzt0H0zgfQZq34Qt4NM8IJq9yd1zeRG/vJyPmcsN5/AA4Ar5uti+av8AWLXSdor9TJy1ueDXvhLxBp0BnutHvI4lXcz+USqj1JHT8ayYo3mmSKNSzuwVVHcngCvovwd48tPGM15BHZyW0luAwV2Db0Jxnjofb9TXlXj2wh8K/ESO5solWMmO9SIcKDuOQPQZU8V6eGx9WpUdGrG0rXLjJt2Z22j/AAp0HStL+1+I5RNIFDSlpjHFF7ZBBPpkn8KuReBvh/4hhkXS/JZ1GGe0uyzJ6HBJH5ithLvw98RfDr2oud8Uqq0kSvtliYcjI9QfqD71zMXws1DQrp73wz4he3uShQCeENkHsWHHYfw146r1JN+1quM+2tjO76vU828aeE5vCOsC1aXzreVd8EpGCw7g+hBrnK6Xxpc+JjqUdj4mlaSe3BMRKKAQ2MlSoGQcf55rmq+mwzk6UXNpvuuptHbUKKKK2sULRSUtIApaSikAtFFFIArf8G/8jDH/ANc2/lWBW/4N/wCRhj/65t/Ksa/8Ng9j3fw1/wAgz/gZrbrF8Nf8gz/gZrar4+r8bOGW4UUUVBIUUUUAcV4p/wCP5v8Arj/jXzxX0P4q/wCP5v8Arj/jXzxX2OQ/w5fI7KXwhS0lFe8aC0UUVIC0UlLSAKWkopAOopKWpA9U+C+l2F/c6tPeWkM8luITE0qBtmd+SM9DwOa9Yv8AxFo+lTxW95qFvFPIyokW7Lkk4Hyjn8a+YbDWdS0uG4hsL2a2S42+b5TFS+M45HPc0/RGL+ItOZiSxu4iSepO8V8/jcplXrTrTlp0XyMpQu7s+sD0r5O1r/kPaj/18yf+hGvrH+H8K+TdaP8AxPtQ/wCvqX/0I1x8PfHP0RNLqM02zbUdUs7FTta5mSEH0LED+tfVljZ2+n2EFpbIEghQIijsAOK+TYJ5bW4jngkaOaNg6OhwVI6EH1rY/wCEy8S/9B7UP/Ahq9HM8vq4tx5WkkXOLkdZ4o+K2uS6zcQ6PcLaWcMhRCIlZpMHqSwPX0GOK7v4Z+M7vxVY3UOoBDeWhXMiLtEitnBI9cg5xx0r58JJOTkk17/8KfC82haFJe3iFLq/2t5Z6pGM7Qfc5J/EVx5phcNh8KkklLp3JmkkU/jRYRzeG7S+2jzoLkIG/wBlgcj8wv5V4jbwPc3MVvGMySuEX6k4FexfGvWIlsLDRkbMzyfaHA7KAVGfqSf++a8u8MKG8WaMp6G+gB/7+LW+VuUMC5PzaKh8J9O6Xp8Gk6VbWFsMQ28YRffA6n3PWvFPF3xM11vEV1BpV6bWztpDEgRFJcqcFiSD1Pb0r3b+H8K+YtK8Nar4q124gsIS371jLM/CRgk8k/0HNeVlUKU5Tq19bd/MinZ3bPdPh/4nl8VeG1urkKLuGQwzFRgMRggge4I/HNcB8atISDUrDVY0w1wjQy47lcFSffBI+i16R4Q8J2vhHSTaQSvNJI2+aVuNzYxwOwrkvjYo/wCEc09u4u8f+ON/hWeDqQWPTpfC3+Aotc+h0Hwy2f8ACvNL2Y6SZ+vmNmvPPhi1+fiVfecZS5Sb7VnPXcOvvuqv8PPiJH4agbS9TSR7FnLxyJyYieox3B68dPfNegyfE/wbbRvcQ3ZklcZZYrZw7EepIA/M1tWo16NSrFQ5lPZjaab8znfi5p0ura/4esLNA13ceZGB7ZXBPsPmP51reM4dH8KfDdNNe1huGCeRbLIgJ8wj5pPY9WyO/HerPg1pvFGrXHjG9gMMe022nwsc7Iwfnb6k8Z+oryz4jeJ/+Ek8TSeS+6xtMwwY6Nz8zfif0AqsNTnVqQw72hq/X+tPvHFNtLscjRRRX0ljYKWkopALRRRSAWikpaQBRRRSA2PCv/Iy2f1b/wBBNe7+Gf8Ajzl/66H+Qrwjwr/yMtn9W/8AQTXu/hj/AI85f+uh/kK8TNehjW2NyloorxDmCkpaKBiUUtJQAlFLRTASiiigAooooAKKKKACjFFFACUUtJQBfooorEApKU0lABRRRQAUUUUAFFFFABRRRQAh6V5141+/qX/Xuf8A0CvRT0rzrxr9/Uv+vc/+gV2YD+MjSnueBClpKK/R1sdQtFFFIBaKSlpAFLSUUgFoooqQFpabmlpAem/DLwRo/ijTLy61NZnaKcRqEkKjG0Ht9a9e0Xwzo3h2MrpljFblvvPyzt9WOTj2zXzpoPjPW/DVpLbaVcpDHK+9sxK5zjHek1Txr4k1mMx32r3DxkYZEIjVh7hQAfxr53GZZi8RWl7/ALj83+RlKEm99D1X4ifEi2sLObSdGnWa+lBSSeNsrAOhwR1bt7V4bSZpa9PBYGnhIcsN+r7lxikjofBfiI+GPE9tqDbjb8xzqvUo3X8jg/hX0n/oOt6Wf9VdWV1HjruV1Ir5Lrb0TxZrnh3I0zUJIoycmI4dCf8AdOQPqOa4cyyt4mSqU3aSJnC+qPbtK+FnhzStWTUY0uJXjbfHHNIGRCOhAxk49ya2/FXiW08L6JNe3DqZMFYIc8yP2A9vU9hXi0nxc8VvFsWe2Q/31gGf1yK5HU9X1DWbs3Oo3ctzMeNznOB6AdAPYVwwyjE1ailiZ3S87kqDb1IJ55Lm4lnmcvLKxd2PUknJP51NpuoT6VqNvf2pQTwOHTeoYZ+hqpS19E4RceVrQ1Or8W+PNQ8XWtnBcwxwJb5ZliY7ZHPG7B6YGRjnqea9p8I3Ft4h+HlnCsmVazFpNt6qwXY3+NfNdauieJdX8OzPJpd68G/76YDK31U5GffrXlYzLI1KKhR0cXdEShdaHtfw/wDAN14Qvb+5u7uGYzKI4hED90HOTnoenHP1rifH2paTffFO2W+Am0+2WO2usMwwMsW5U543dvTFY998UfFd7bmD7ekCsu1jBEFY/j1B+mK45mLsWYksTkk8kmssNl9f2sq2Ilq1bQFF3uz3TXfhZpN3pUc3hcJZ3oYSRymeRlkXHTOTjqCCP61a8C6D4y0i+c65qgmsdhCwvMZW3diCRwPx/CvHtG8Z+INAh8nTtSljg/55OFdR9AwOPwxWnd/FDxbdxGP+0hCpGD5MKqfzxkfhWE8vxji6bkpJ9XuJxlsdF8a7y0m1XTbWJla5gjczYPKhtu0H8ifxry2nzTS3EzzTSPJI53M7sSzH1J6mmGvWwtD2FGNO97FxVlYWikpa3KCiiikAtFJS0gClpKKQC1v+Df8AkYY/+ubfyrArf8G/8jDH/wBc2/lWNf8AhsHse8eGv+QZ/wADNbVYvhr/AJBn/A2rar4+r8bOGW4UUUVmSFFFFAHFeKf+P9/+uP8AjXzvmvojxV/x/P8A9cf8a+d6+xyH+HL5HZS+EWikpa980ClpKKQC0UUVIC0UlLSAKWkopAOrQ0L/AJGHTP8Ar6i/9DFZ1
aGhf8jDpn/X1F/6GKxrfw5egmfWX8P4V8m61/yHtR/6+pf/AEI19Zfw/hXybrX/ACHtR/6+pP8A0I18xw98c/RGVLdlGlpK9J+G3w9OtSprGrRY05GzFEw/15Hc/wCyP1/OvoMViaeGpupNmjaSuXvhl8PDdtFr+sQ/6OpD2sDj/WH++w9PQd+vTr6b4p8T2PhXSHvbs7nPywwg/NK3oPb1PapfEGv2HhfR3vr1tsaDbHGvV27Ko/zxXzf4k8R33ifVnv71vaKIH5Yl7Af49zXzNCjVzOv7WrpFf1ZGSTm7sq6vq13rmqT6jeyb55mycdFHZR6ADirPhb/kbtF/6/4P/Ri1kVb0u6Fjq1ldnJEE6S/98sD/AEr6WpTXsXCHaxt0PrQdK4jxB4t0D4f6f9hsoI2uuWS0hOOT/E57fU5JrtEdZI1dGBVgCCDwRXy74s0+60zxTqNtdh/M892V3zl1JJVs98ivkMrwsMRVcaj0Wtu5zwim9T2T4X+ItS8Sx6vealPvYTIEReEjGDwo7fzNU/jZ/wAizYf9fg/9AarHwd0yay8KTXUyFPtk5ePIwSgAAP55rK+N96gtNJsQQXaR5iM9AAAP5n8q2pwj/aSjTWif5IpfHoeOVqeHdEn8Q67a6ZBkGZvncD7iDlm/AfrxWXXuHwm8Ox6ToUuv3m1JbpSUZuNkI5z7ZIz9AK9/MMT9XouS32RrJ2RseMLiXR/Dlr4c0CBmv7xPs1tEh5SMD5mz2wOMnuc9q4OL4La08BeXULGOXsg3MPxOP6Gut8Cawvirxhr+tMuY4Vit7TI5WIlifoSVBNUPF3xA1bRvH9vpdr5QsozEJY2QEybsE89uDxjv614FCWJpTdGlbmtdmS5lojzDxH4V1XwtdrBqMShXB8uaM7kkx1wev4HBrFr6E+LFpFceArqd1Be2kjkQ9wS4U/oxr56r28uxUsTR5pbp2NISuhaKKK7iwpaSipAWiiikAtFJS0gNjwr/AMjLZ/Vv/QTXu/hj/jzl/wCuh/kK8I8K/wDIy2f1b/0E17v4Y/485f8Arof5CvEzXoY1tjdooorwzmCiiigAooooAKSlooGJSU6koASilpKYBRRRQAUUUUAFFFFAF6iiisQCiiigApKWigBKKWigBKKKKACiiigBD0rzrxr9/Uv+vc/+gV6KeledeNvv6l/17n/0Cu3AfxkaU9zwKiiiv0dbHUApaSigBaKKKkBaKSlpAFLSUUgFoooqQFpa6DSvAviTW9Pjv9P00zWshIWTzo1zg4PBYHqKztW0TU9CuRb6nZy20jDK7xww9QRwfwrCOIpSnyRkm+19RXRQpaStDQ9GuvEGsQaXZmMXE27aZDheFLHJAPYelVOUYRcpOyQFCittfC2oNpWq6iGg8jTJvJm+Y7mbdt+UY5GT3xWJUwqQqX5XewC0UlLVWAKK1tO8PXmp6PqOqQNELfT1Vpt7EMc5+6AOenfFZNZqcZNpPbcBaKSlpjCiiikAtFJWto3h+81yC/mtWiVLGAzy+YxHygE4GByeKznOMI80nZCMqlpKKYxaSlpDSAWikpaQBRRRSAWikpaQBXQeDf8AkYo/+ubfyrn63/Bv/IxR/wDXNv5VjX/hsHse9eGf+QX/AMDato1i+GP+QZ/wM1t18bV+NnDLcbRS4pKgkKKKKAOK8Vf8fz/9cf8AGvnevojxV/x/P/1x/wAa+dq+yyD+HL5HZS+EWjNFFe+aC0UlLSAKWkopALRRRUgLRSUtIArR0L/kYdN/6+4v/QxWdRWc4c0XER9hZG3qOlfJutf8h7Uf+vqT/wBCNUKWvKy7LPqcpPmvfyJhDlJbdQ1zErAEF1B/OvreFI4YUjiVUjRQqqowAB0AFfIdFPMcteMcfeta/S+4ThzH11cWlrd7ftFvDNtzt8xA2PpmoP7H0v8A6B9p/wB+V/wr5NorzFw/NbVfw/4JHsvM+mfFml6bF4Q1l47G1V1spirLEoIOw8jivmeiivUwGBlhYyTlzXLjHlPa/ht8Q7OTTYNE1e4WC5gAjgmkOFkQcBSexHT34716Nd6XpmpmOS8sbS6K8o00Svj6ZFfJ1WoNSvraPZBe3ESf3Y5WUfkDXBickU6jnSly3JdPW6PqDWNe0rw5Yme/uYoI1X5Ixjc3oFXv+FfOXi3xJN4p1+bUZVKR4CQxn+BB0H15JPuaxZJZJpC8rs7nqzHJP4mm104HK4YVubd5DjBRCtK217VbPTp9Ot7+dLOdSkkG7KkHrgHp+GM1m0V6E4RnpJXLPQPhT4ntNB1u5tb6VYbe9VV81zhVdc4yewOSM/SvV9V8EaHruuWut3KyG4h2n924CS4OV3DHP4Yr5pqeO+u4YTFFdTpE3VFkIU/hnFeVissdWr7WnPlb3IcLu6PZPi74ns00X+wbeZJbqZ1Myo2fKVTuGfcnHHp+FeK0nelrrweEjhqfs07lRjZWEpaQ0V0lC0UUUgClpKKkBaKKKQGz4V/5GWz+rf8AoJr3fwx/x5y/9dD/ACFeD+Ff+Rls/q3/AKCa948Mf8ecv/XQ/wAhXh5t0Ma2xu0UtGK8M5hKKKKACiiigAooooGFFFFAhMUUtFAxMUlLRTASiiigAooooAv0lLSVgMKKKKYgooooAKKKKACkpaKAEooooAQ9K858a/e1L/r3P/oFejHpXnPjX72pf9e5/wDQK7MB/GRpT3PAqWkor9J6HULRRRQAClpKKQC0UUVIC0UlLSAKWkopAej6jcTW3wT8PvBNJE/26Qbo3Kn70vcVNp97N4o+FOtR6q7XFxpTrJb3EnLqD2z34BH0PsKktLXTNf8AhXo2kv4g0uwuYLmSWRbm4VWA3SD7uc5+YGqesanofhjwVP4Z0W/XUry9kD3d1GMIoGOAeh6Y4J7nPavmVaTdOC9/nb22V+/oZj5tF8JeEdN01fEVpd6hqN7CJ3jhkKCBT7AjJ7c9SD0rV0Xw1aaB8U9An02ZpdM1CCWe2Z+oHlNxnvwQfxqprEGj/EGz0vU01+x06+t7Zbe7hvHC4CnOVz15J9jkcgirsPibRf8AhYfhuztb6IaVo9rJb/a5nCIzGIjOT24UZ7mspzrShJXblaXMtbLtb/gC1KUX/IgePP8AsIj/ANGrUOiaP4R/suwLaVquu3NwoNzLaRyhLZjjI+XGcZ9+n0FRRanYDwP40tze2wmub8PBH5o3Sr5inKjPIxzkV0kt5Y6roejNpPjK30TT7aBVubZJAkoIxnuCT1H155zSqSqQUkrq8vP+VdtQ2Mmz+H+kxfE+40K5Es2ni0NzGC5DDOBgkdcHP6VzmqaZp2vXSw+CtGv3jtQRcSMSxkyflbBJx0Pp9K9BOuaOPi5/aA1ewNo+lbfP+0JsDbvu5zjPGcV41Zanf6cXNjfXNrvxv8iVk3Y6ZweetdODdeq+e7uox3va7ve41dnpngfT4tM8MeLLXxFb3EEMawtcRAYk24YgD68fn2rLv9E8M694Pvta8OWtxY3GnMPPt5ZC+9D35J7ZPHoeOhqfwRqFlqXh3xLZ67rqW818IkWa6nBc
4BAPzHJA4z7elPlGleC/AurafHrNpqWo6qVQLaOGVEHcntwW6+oHvWEnONeVm+fmjor2eiuLW5LpGg+D3i0+2j0nVtae4Cie/gSVYoWJweBjgH64Hr0qDTPh/pz/ABI1LQ7meWSzsovPRFIDyghSFz7bsEjH4ZrqLy/sNRm0q+0zxnbaVolvGgexSQI/ynO3Gc8jAwfTvWHqc/h+/wDihf3E2u/ZhLbobS/tLgbI5AgUhmH09QOx61jTrV25+81dPu7O6/rQE2ZHiHTvCT6Hcy2lne6Hq1uw22d4XJnXI5G7Pb6citWDwhoeg6PpravomqaveXyCSRrRX22ynHHykZPPTnOD04q14i1OCHwDe2Gu6/p+t38jg2ZtirOnIwSR078+nGTVmXxA/ibSdOutJ8YW2jXEUQju7W5kCDI/iGev8iMdCKHVrOkrN8t3rd9u9r2C7Mo/DbT7XxteW91cSjRbW0+3MSfn2ZI2k49VbnrgVoeHbnwtdaD4ofw/YXVlKunyCRJpCwddrYYZJwfbPeqtj4i0yPxVqOlaj4il1GwvrD7G1/KoUI/OQCONvzNz0Gfxp2jaVo/hTRfESyeJ9Mu7m7sZI4o4ZlGRtbHf7xJHA/WpqyqShaq23ZW3s+/9MNep5RRSUtfSpaGgUtJRSGLSUtIaQC0UlLSAKKKKQC1v+Df+Rij/AOubfyrn66DwZ/yMUf8A1zf+VY1/4bE9j3nwz/yDP+BmtysTwx/yDP8AgZrbr4yr8bOKXxBSUUtZkiUlLRTA4nxV/wAfz/8AXH/GvnavonxV/wAf7/8AXH/GvnavsuH/AOHL5HZS+EWikor6E0FozRRUgLRSUtIApaSikAtFFFSAtFT2NpJqGoW1lDt824lWJNxwMsQBn2ya7aT4S68rvFHe6TNcIMmCO5O/pnoVGPxrlrYujRko1JWuJtI4KlqW7tLiwu5bW6ieKeJijo4wQRUNbJpq6AdRW7p/hpr/AMJ6prwugiWDonk7Ml9xA6546+9HiDw0+g2Gj3TXQm/tK2FwFCbfLBAOM556+3SsPrNJz9nfW9v1C6MIUtJRWoxaWkre8R+Gm8PQ6XI90s/2+1W4wE27M9uvP14rOVSEZKDer2EzCoooqmAUtJRSGLRRRUgFLW7e+GmsvCGm6+bpXW+leMQ7MFNpYZznn7vpWDWcKkaibi9m19wri0UUVVhi0UlLSASlpDRSAWiiikAUtJRUgbPhX/kZbP6t/wCgmvefDH/HnL/10P8AIV4N4V/5GWz+rf8AoJr3nwx/x5y/9dD/ACFeFm/Qxq7G7RS4pK8I5goxRRQAlFLSUwCiiigAooooAKKKKACjFJS0AJRS0UDExRiiimBeooorAYUlLSUAFFFFMQUUUUAFFFFABSUtFADT0rznxr97Uv8Ar3P/AKBXo56V5x41+9qX/Xuf/QK7MB/GRpT3PAqKTNLX6X0OoKWkopALRRRSABS0lOjR5ZFjjRndiFVVGSxPQAetS7JXYCUV1B+HPi4LG39iT4k+786ZHfkZ4/HFUtO8Ia/q3n/YdNln+zy+TLtZflfuOT/9auZYvDtNqasvMV0YtFbcPhDX7jWLjSYdMlkvbfHnRqykJkZGWztH51W1jw/q3h+ZItUspbZnyVLYKt64YEg/gaaxFFyUVJXfS47ozaWujg8A+KbmwF7Fo07QMu5clQxHrtJ3H8qzNL0HVdavXs9OsZZ50zvQDGz/AHieBz61CxNBptSWm+qFdGfRWtrXhrWfDzRjVbCS2EmdjEhlJ9NwJGfaq+laRf63efZNNtmuJ9pfYpA4HU8nFUqtNw9omuXv0C/UpUtbN54R16wurO1udMlS4vM/Z4gVZnx14BOPxxUuo+CfEmkpE97pU0aSuEVlZXG4nABKk4JPHNZfWaGnvLXbULowaWu38Q/Di/0XwzY6mIpmmMbPfIzJiDpjGDz196xtM8FeI9YsBfWGlSy2xztcsq7u3AYgn8KzhjKEoc6krXsLmRg0VpWPh7V9SvbiytLCaS6tlLTQ4wyAHByD3yenWrmo+CfEek2P2290qaK34BfcrbcnjIBJHPHOKt4iipcrkrvzQXRhUV3Op/DXUdP8H2+rGCf7aDI13AzJthjXcd3XngA9T1rJ8R2Nvb6foTW2jT2Mk9qGeSSTf9pYgfMoycdc9uvTisIYyjUklDXVrp0C66HOUV0U3gTxRBYG9k0a4EAGT0LAepQHd+lU9L8M61rVs9xpuny3MSOI2ZMcMccYz79egrRYii48ykrId0ZVFa2teF9a8PLG2q2ElukvCNuVlJ9MqSM+1VNN0y81e+SysIDPcyZ2xqQCcDJ68dBTVSnKPOmrdwuipRW3eeD/ABBYSWcdzpcySXjFYIwQzORjI2g5HXvin6n4L8RaNYm9v9LlithjdIGVguemdpJH41n9Zou1pLXbULowqK7qT4aakng1NV8if+0DIS9tuTCw4J3devHTP4Vz2j+E9d1+B59M06SeJDgvuVRn0BYjJ9hURxdGSclJWTsLmRjUtT31hd6ZeSWl7byQXEZwyOMEf/W96r1smmroYtJS0hoGLRSUtIAroPBv/IxR/wDXNv5Vz9dB4N/5GKP/AK5v/Ksa/wDDkJ7HvXhj/kGf8DNbdYnhj/kGf8DNbdfF1fjZxS3CiiisyQpKWigDiPFX/H8//XH/ABr51619FeKv+P8Ab/rj/jXzrX2nD/8ADl8jspfCApaTrS19CaC0UlFIBaM0UVIC0UlLSAKWkopAa3hj/kbNG/6/oP8A0YtbnxFuJbX4malPbyvFNHJEyOhwVIjTBBrntBuobLxFpl1cPshhu4pJGwTtUOCTgdeB2r0PWZ/h3qnia41681y6ufMZXNnFauobaoGNxUcHHqK8fFP2eKU3FtcrWivrcl7mrrfh2DxZ448ONdrsFzpouLtU4LBece2SwGfSszRtQ8J+Ldck8ODw1bWlvMHW0u4eJcqCcscZ5AzyT6c1lj4lF/iJDr72rJYRx/Zlt1I3CH19M55x+Ge9X9PvvAfhbV5vEGnancX1wFc2lj5LJ5bMCMFiOmDj6eteZ7CvTgoTUr8vu2vo7ve3y3JsyDTLR7H4YeM7Nzl7e8SJiO5V1H9Kn8VR2ksfgKO+inltmsIw8duu6R/lT5VHuePWsWw8SWR8B+JbO7uNupajcpNHGEY7/nVmOcYHfqa138Z6Lb6r4Ku1c3MemWYhu1EbAxsUC8ZAzg88Z6VpKlXVVy5W3eX/AKSv1CzOtsfDFrrU11p934Hg0zTGjP2e8DxicHjBYA7ge/fGMHNcp4aTQ9I+HFzrepaJb6jcxXxiQSAcnCgAkg8dTjB/rW1puveB9K8VXGut4kvLue637VeGQrAGOSPu59gO361xkut6Wvw2vNFju9942pGaNPLYbo+MNnGB06da5qNKtP3GpWbj0a733/ESuJeeCdb1O1n8QwWNpbWMyNdrDHMoCRkbgAB7dq7HxP4g0nRNK8Ni88P22qXElhH81xghE2jgAg8nnn+deOV6nql74H8T6dosV9rs9pc2NokcjJbuQ3A
3LyvUEdffvXbjKM4zp+1vKKvsnppp1Y2irr3h/QbTxlol1HZXL6RqkAuBZ2y5csRkKoz0OV4HvXVDwxBrNjqdvfeC7bSLdIXezuo3j8zI6bgpznvzkdq51vH+jR+P9MuoYn/sbT7Y2kTlDuAIxvA646D1xn6VpaTrPgfRdW1C9PiW8vJ7+J08yWGRhGrEHH3ck9Py7VwVo4nljdSukrbvr5dbdxO5Q8FaPYv4Jk1Oy0K01zV/tBWWC4df3adsBuOnPqc+2Kp32haN4g8daRptnp11pElwpa/tXj2BCq7vkz64IyOOhxVLw5c+FZtCht7q/udD1mCQn7fbhz5qknj5enYdug961tc+IOnxeKvD15p7S38elxtHPcupVrgMArcHHOATzjk1s44hV5uCld37rpp5Nduoa3OpTwrZ6hqV3o9x4Lt7LS1Rkg1GN080kcBuDu56859+9cna6fpHgzwYmrajpMGqajd3TwRpcDKRqjMuccj+HPryPSpLy+8GSX9xqp8T6vJFLukGnRiRWDnnaG6AZ/8A1mqOl614d17wcnh/X72ewktJ2ltbnYZdwYkkNgcn5j6dvpWVOnVUdeblurqzv1v5+tgVy744vLPUPhn4eurCyWyt5LiQrbqchD8+7Htuya8xrvvGGr+HJfBmkaJoV9Jc/Y5mLeZEysQdxLcgDknOPeuBr1sug40WmmtXvva5UdgpaSiu0sWiiikAtFJS0gEpaQ0UgFooopAbPhX/AJGaz+rf+gmvefC//HlL/wBdD/IV4N4V/wCRls/q3/oJr3nwv/x5S/8AXQ/yFeDm+6Ma2xvUUUV4JzBSUtFACUUUUAFJS0UAJRS0lMAooooAKKKKADpRRRQAUYoooGXaKKKxGFFFFABSUtJQAUUUUxBRRRQAUUUUAIelec+Nvval/wBe7f8AoFejHpXnPjb7+pf9e5/9ArtwH8ZGlPc8AopM0tfpfQ6haKTNLQAUtJRUgLXXfDCOKX4h6UswUgGRlB6bhGxH6jNcjWhoLmPX7BxfjTysyn7UV3CLn72O/wBDx68VzYuHPQnFO10xPY9H8Ka1rVx8Zbi3ubu5MbzXCSwsx2Kqhtox0ABC4/8Ar060v7rTfh942ubKd4J11UqsiHDKGdFOD2OCea6+y1W/06+l1TWr3w4mnJGd15ag+dcgDCjqfY4GecAV4xN4tu20nWNJiihFnqd0blyynzFO4MADnH8I6g181h6UsVO8YJJcl+zs9f8AhiErnWLfX1r8FzfWVxN9qu9QK3twGO8LyBluvZB+PvSxXF5qfwZaXUN9zcQakiWBlBdn5UYHcjlx+nauT8N+MtT8Mxz29stvcWdx/rbW5TfGx6Zxkc449/fil8Q+NNU8RJbwzeRa2tsd0NtapsjQ9j16/wAvxrteAq+05VFW5ubm8u1h8rueovqsGv8Aiez26nqnh7xGkQRbG4jLwucE/d6HIPUkduM1haFql5ol/wCLYddsrtoLiXZeahpy48l+RuB7A7s+o9OaxYvixriRxtNZ6ZPeRJsS8lgPmgY65BHP0wPasjRvHOtaLf3t3HJFc/biWuYrlN6SHnnGRjqenauaGW11CUXFW0tr530dvzuLlZ1Hi63uv+EBtp9O11tW0D7R8v2iIiaJuR948kdR0GM+lVPg4SPG0h/6c5P5rWH4i8b6j4isYbCSC1s7GFty29pGUUn1PJ9T7VS8N+I7zwtqjahYxwSTGMxYmUlcEgnoRzxXXHB1vqU6TXvO/b8fMaWljv8A4XajPqms61qeqXlzc3cNoTGSd7qCctsB4B4AAAx2p8HiTw3a+GtdsrfVda1FrqAkG7jLiJ8EK2cfL8xXk9wK840PXb/w7qaahp0ojmUFTkZV1PUEdxW9qnxG1bUtOubKK00+wjuxi4a0gKNLnqCST1/P3rCvls5V7xXuu3W1rfL8hOOpseMLmd/h74P33EoWWOQSksfm+719a6/xjfaFpOt6XHeatq2n/ZIEaCCzXELKD16c9MEegry1fGt9/wAIj/wjk1pZz2ygiKWWMmSME54OcA571fsfiXrFpp9tZ3Frp1+LUYglu4C7x4HGDkfn1rOpl9dpK3wuXXdPqHKzutJ1i1vviB4h1PTElib+ydzebFsYSDHJB9gvWue8F6rf6l4U8aR315PcqLEyr50hfa21+Rk8dB+QrmYPHWsxavqGqOYJ7m+gMEnmqdqoccKARjGPeqOj+I7zRLHU7S2jgaPUYDBMZFJIXBGVwRg/Meuar+zaihJW1923y31DlOv1y7uT8GvDrm4lLSXUyu285YbpOCe4rpreGCfxF8O1uACq6ZvVSOCwiBH6jP4V5vZeNL608KzeHntbO5s337GnjLNEWzkqc4BBJIPrUF/4t1O+/shsxQSaTEsdtJCpBwuME5JBPHsKTwFaTcbWV5a/4tg5WdfoOt69L8XmikuLlt93LFLAzHYIxnjb0AAGR9K1JLhtI8G+OpNKlNuY9VKRtF8pQF0VgPTqRxXMS/FTXJI3aO106G9kTy3vYoMSkYx1z1/T2rBg8S30Hhu/0MLC9vfSrNLI4YybgVPBzjqo6g1P1GtNqTilblVu9nqw5WdZcXlxqHwU8y8mknkh1IKjysWYDHqef4jWf8Kf+Sg2P/XOX/0BqwV8RXi+Fm8PiOD7I0/nl9p8zdxxnOMcelR+H9duvDmsRanZpC88YZVWUEryCDwCPWur6rNUKtNL4m7fMdtGdn4XufEGv/Ea5aLV5I3h85mllUSiOPcAQingHlRxjH6V0Wi3mk3nh/xZFpuo6zqA+xSNK9/gx52vgpxkE/09q8v0fxLqGh662r2TRrcOW3owyjBjkqRnOM10kvxW1l4J7ePT9LignRkkjSBhu3DBP3utceKwNaUkoRVrLstu/cTi+hZnu7n/AIUrbSfaJt51IqW3nOMNxn0ro9Vk0Wx8E+F7e91PU9Pga2WVG08YEjFVJLHHXJJ/E155o3jO90fQrnRxa2d1ZzsW23MZbYxGDjnH/wBeptH8fappOkrpclvY39nG26OO8h3+WevHI7nvnHalUwNZ3stpN773/wAgcWXfiTrena5f6bcWP2hnW2CSSTxbGcZ+VvfOTyOK4mtXX/EWoeJb8Xd+6ZRQkcca7UjX0ArKr1MLSdKioPp8y0rIKWkorYYtJS0hpALXQeDP+Rij/wCub/yrnq6Dwb/yMUf/AFzf+VY4j+FIT2Pe/DH/ACDP+BmtusTwx/yDP+Bmtuviqvxs4pbhRRRWZIUUUUAcR4r/AOP9/wDrj/jXzrX0V4r/AOP9/wDrj/jXzpX2vD38OXyOyl8Ioo60UV9CaAKWk60tIBaKSikAtGaKdEhklSMEAswX86l2SuwEor0yb4SwWmox2F54rsYLmfH2eIx/PJ+BYY54HXP6VzNn4G1W88Xz+HECC4t2PnTZ+REGPn/EEYHvXBDMsNUTcZbK/VaC5kczS13N78O4V02XUNJ8RWepW9tIq3ZjQqYQTy3BOQOvbgH0rV8X+FNAsPA+kXVpqVos6xOVlSEg3x46c8Y9/Ws/7ToOUY
xu+Z22egudHmNFbHhbw+/ifX4NKjuFgMoY+Yy7gNqk9M+1b978OmTWbPR9N1m11HUJWcXEcQwtqFxkscn16YBz2rWrjKNKfs5uztf5DbSZxNFd1f8Aw5RNMvrrR9ftNVlsBuuoIk2sgGc45Oeh9M4Petifwh4eT4aRXI1ey8/zy328Qn5ztP7oc5/H26VzyzOgknFt3dtnoLmR5bS12WkeAlutEh1bWdattHtrlsW3nruMvvjIwPf8fTMsHw0v38Xnw/NeRRk25uYrlULLImcDj1z+XvVvMMMm1zbevQOZHE0V3tz8NUGl31xp/iGyv7ywQvc2sK/dwDkbs8ng9QOlaml+EfD9x8NZ7mbVrJZ3nRmvzCSbclUzF1yev/j1YzzOgknG71tsw5keX4OM4ODRXaalbarL8NPDoaaGW1mupFt4EhxIG3OOWzzzngAde9XE+GCpLDYXviOwttYmQMliRuOSOAWzwfoD7Zqvr9KKvN21a6vZ+gcyPP6Wuw0P4eX+r6nq2nT3MVlc6aBvEqkqxOcc9hgZzzwak1fwDHZ+HZta0vXrTVYLZgtwIVxsyQODk56j045pvHYdT5ObXT8dtQ5kcXRV3SNPbV9Ys9OSQRtczLEHIyFycZxXY6h8Mja30GmW2u2d1qs0oT7KF2lE2lt7HJIGBnGO/enWxdGjJQm7PcL2OBpa725+GsZs75tL8Q2eoXtipa4tUTaVx1AOTk8eg54rT0vwj4fn+G09zNq1ks7TozXxhJNuSEJiPOT1/WueWZUEk43ettmLmR5fRXW6H4IGoaL/AGzqurW+kae7+XDJMu4ynnoMjjg/keKoeKfC1z4XvYYpZo7i3uI/Mt7iL7si/wCP59RW0cXRlU9mpajujBpaSityhaKKKQC0UlLSASlpDRSA2vCv/IzWf1b/ANBNe8+F/wDjyl/66H+QrwXwp/yMtn9W/wDQTXvXhf8A485f+uh/kK8DN90Y1tjeooorwTmCiiigAooooASilpKACiiigQYpKWimMSiiigAooooAKMUUUAXaKWkrEoKKKKACiiigApKWkoAKKKKYgooooAQ9K858bff1L/r2b/0CvRj0rznxt97Uv+vdv/QK7cB/GRpT3PAKSilr9N6HUFFJmlpALRSZpaQBS0lFSAtFFFTYAFLSUUALRRRUgLRSUtIApaSikAtFFFTYBaWm5paQC0tJRUgLRSZpaQC0UlLSEFFFFSMWikpaQBRRRSAWikpaQBS0lFSAtb/gz/kYo/8Arm/8qwK6DwZ/yMcf/XN/5VhiP4UhPY968Mf8gz/gZrcxWH4Y/wCQX/wM1uV8VV+NnFLcSilpKzJCiiigDiPFf/H+/wD1x/xr51r6K8V/8f7f9cf8a+dK+14d/hy+R2UvhClopK+jNBRR1ooqQAUtJ1paQC1Laf8AH5B/10X+dQ0qsVYEEgjkEGokrpoD07x4xHxk08g8iW16H/aFdXpt1CnxY8W2m2Fru5tYvs6SnCuREuVPscjPsDXh9xqN9dXi3lxe3E10pBWaSVmcEdMMTnikl1C8nvftst3PJd5Dee8jGTI4B3ZzkYFeJPKZSpxg5bR5fndP9CeXSx6zLea7ZeHNeVPBOn6TatbtHcybxHvBBX5ezY3EjsfxAOP4osrq/wDhZ4Vu7WF5re0ikE7oMiPkDn8QRXDXuu6tqUQhvtUvLqMchJp2dc+uCetNj1jUodOfTor+5SyfO63WVghz1+XOKKWW1KbjNWunfrta3VhynU/Cf/koVj/uS/8AoBrX+GV3Db/EnU0lZFmnjnjh3nhn3g4/IH8q84tby6sLhbizuZredcgSQuUYZ64Ipnmyed529vM3bt+47t3XOfXNb4nAOvKo27KSS+5tg43PZYLrXtNh1YweBtN05I4HE8+8Rq6gdA38Xr6Vz9vY3OqfBRI7GB7iS31FpJUjGSqhSScfQiuIu9f1jULf7Peare3EP/POW4Z19uCaZZaxqenQSwWWoXNvFN/rEilZVbjHIB9K5IZZUjC6aTun16erFynsd1cyXfgfw3cad4atNfgjtlidZF3NA4VVIAx6qQT7VZ0e91S5+IVjFqunWthLFpUnlxQSh8KWXAPpjHSvFLDWdT0sOun6jd2qvywhmZA31APNJFq+pQ3r3sWoXaXbja06zMJGHoWzk9KxeTztKKatrZ631+dg5DtPhYT9p8QDPB0uT+Yqx4esrnVPg5rNnYwvcXI1FZPKjGWK4j5A79D+Vef2t9eWJkNpdz25kTY/kyFNy+hweR7GpdP1jUtK8z+z7+5tfM+/5MpTd6Zwa6q2AnKcpxaveLXyBxPSIbqCy8A+A7q5IEEOqM8hPRQJX5/DrR4h8Ha9f/E8Xtrbu9pcXEU6XinKIgC8k9iMdO/GK8ze+u5bOKzkup3tYiWjhaQlEJ6kLnA69qtRa/rMFl9ji1W9jtgMCJbhgoHoBnp7Vl/Z9aEnOEld82/Zu/3hys9buL63v9W+Istq6ui6YI96nIZliYH9ePwrk/CBP/CtfGI7bYv61w9vf3lpHNFbXc8Mc67JljkKiRfRgDyOTwaIb67t7aa2hup44Jv9bEkhVZMf3gDg045a4RcU+sf/ACW3+Qcpq+DD/wAVro3/AF+R/wDoQrr4tJstb+NV7aX7nyfPkfYGKmQheFz/AJ4Febwzy20yTQSvFKhDI6MVZSOhBHQ1I17dSXhvHuZmui28zlyX3eu7Oc+9b18JOpUc4u142G0e8eHLTUV1DV1fwrp+kWiwSRxSwoBLKc8DcPvDAz064rh/D9lc6p8HdYs7GF7i5GoK/lRjLEYj5A/A/lXHt4p8QPIsja3qRdQVVvtT5A/Oq1hrGpaX5n9n39za+Z9/yZWXd6Zwa4IZbVgm7q901vbQnlZ63bXLXfw10OTTvD9pri248qeCQbmiccEgfXr9Qelcp8RbzVZrXRLfU9JtdNWKJzBDBJuKodowy/w42j/IOOOsNW1HS2drC+ubUv8Ae8mVk3fXB5qG5u7m9nM93cSzzN1klcux+pNa0MvdKtzuzV2+t9fwGo2dyKiiivULClpKKkBaKKKQC0UlLSA2PCn/ACM1l9W/9BNe9eF/+POX/rqf5CvBfCv/ACM1n9W/9BNe9eF/+POX/rqf5CvAzjdGNXY3qKU0YrwDmEooooAKKKKACiiigApKWigBKKWkoAKKKKAEopaMUwEooooAvUUUViUJRS0lABRRRQAUUUUAFJS0lABRRRTEIelec+N/v6l/17H/ANAr0btXJeIdGmuriWdFWVHXa0ZHOMYP1rqwdSMKqlIuDsz5mor0fWfAFtMzvYMbWYdYnyUz/Nf1+lcNqOj3+kybLy3ZAfuv1VvoRX6LhsdQxC916nWmmUqSilrrAKKTNLSAWikzS0gClpKKkBaKKKQAKWkopALRRRUgLRSUtIApaSikAtFFFSAtLTc0tIBaWkoqQFopM0tIBaKSlpCCiiipGLRSUtIAooopALRSUfQVLAWug8Gf8jFH/wBc3/lSaX4S1DUMPKv2WA/xSD5iPZev54rvdB8J29mwNnA0s
3RpnPP+A/CuDF4qlGDjfUltWO/8Mf8AIL/4G1blZmjWb2NkInYM2STjpWnXyFRpzbRxy1YUUUVAgpKWigRxHiv/AI/n/wCuP+NfOVfTuv6VPdXHnxbXG3aU7968u1nwDaXDO1nmyn7xkfIT9Oo/Dj2r6rJMdSoRcZvc66UlY80pav6noeoaQ+Lu3ZU6CReUb8f6Hms+vrITjNc0XdGoUtFJVAKKOtFFSAClpOtLSAWikopALRmiipAWikpaQBS0lFIBaKKKmwC0UlLSAKWkopAOopKWpABS0lFIBaWkoqRC0UUUgClpKKQxaKKKQBS0lFSAtFFbel+FtR1LDlPs8B/5aSjGR7Dqf5e9ZznGCvJ2Fcb4U/5GWz+rf+gmvevC/wDx5S/9dT/IVweheEbaykVraFp7kf8ALV/4fp2H869G0SxextWSRgWZt3HbgCvm8zxEKr90wqyVjVooorxzADSUtJQAtJRRQAUUuKSgAooooAKKKKADFJS0lABRRRQAUYoooAu0UUVkUFFFFACUUtJQAUUUUAFFFFABSUtJQAVG8QapKKYjKvdLgulxLGCezDgj8a5nUvDL+W6qi3ELfejdQTj6dDXdEZqN4g1a0q86bvFlKTR4JrPgC3mZ3sGNrMOsTglM/wAx+v0rhtR0e/0mXZeW7Rg/dfGVb6HpX1Je6XBdLiWME9mxgj8a5nUvDL+W6qi3EJ+9G6gnH06GvosFn84WjU1X9dTaNXufOlJXpGseALaZnewY2sw6xPkpn+Y/X6VwupaRfaVJsvLdowej4yrfQivp8PjqOIXuvXsapplKikzS11jFopM0tIApaSipAWiiikAClpKKQC0UUVIC0UlLSAKWkopALRRRUgLS03NLSAWlpKKkBaKTNLSAWikpaQgoooqWMWjr0rf0vwjqOobZJV+ywH+KQfMR7L1/PFd5ofhC1tGU2lsZph1mkGSPp2H4c1w18dSpLe7Jckjg9L8I6jqGJJV+ywH+KQfMR7L/AI4rvND8I2tqym1tjNMOs0gyR9Ow/Dmu0svDqLhrg+Y390cL/wDXrfhtEjUKqhVHQDivAxOazqaR2MZVexgWXh5FIa4PmN/dHC//AF63obRI1CqoUDsBirIjC9qdXkzqSm7tmLk3uIqhRS0UVIgooooAKKKKAGPGGqhd6bDcriWMN6HuPxrSoxmhNrVAmcTf+HHCOIws0RHMbgdP6157rPgG0nZ2s82c/eMg7Cfp1H4flXujxBhVC702G5XEsYb0OOR+NejhcyrUHdM1jVa3PmDU9D1DSHxd27KnQSLyjfj/AEPNZ9fRl/4cfY4iCzRMOY3A6fyNefaz4CtJ2drTNnP3jIOwn6dvw/KvqcHnlOorVNH3N41EzzOlrQ1PQ9Q0h8XduypnAlXlD+P9DzWdXtwnGouaLuixRR1oop2ABS0nWlpALRSUUgFozRRUgLRSUtIApaSikAtFFFSAtFJS0gClpKKQDqKSlqQAUtJRSAWlpKKkQtFFFIApaStzSvCuo6nhyn2eA/8ALSUYyPYdT/L3rOpUjBXk7BcxK3NL8K6jqW1yn2eA/wDLSUYyPYdT/L3rutE8H2lo4MFubm4H/LWQZ2n2HQfz967Wz8PgYa5bcf7q9Pzrx8TmsYaQIlUSOK0TwfaWjKYbc3M4/wCWsgzg+w6D+ddrZ6AvDXDbj/dXp+dbsFmkahUQKo7DirKxgV4NbGVKr1ZhKo2VoLRIlCogUDsOKtKoUUtFcu5mFFFFAgooooGFJiiloAKSlooASiiloASijFFABRRRQAUUUUAJRS0YoAuUUUVkUFFFFABRRRQAlFLSUAFFFFABRRRQAUlLSUAFFFFMQEA1G8IapKKAMq90uC6XEsYJ7N0I/GuZ1Lwy/luqotxCesbgE4+nQ13RGajeIN25rWnXnTejKUmjwTWfAFvMzvYMbWYdYnyUz/Mfr9K4bUdHv9Jk2XluyA9H6q30PSvqS90uC6XEsYJ7N0I/GuZ1Lwy/luqotxCesbgE4+nevosFn042jU1X9dTaNXufOlFej6x4Bt5md7BjazDrE+Smf5j9fpXDajpF9pMmy8t2QHo/VW+hr6bD46jiF7r1NU0ylRSZpa6xhS0lFSAtFFFIAFLSUUgFoooqQFopKWkAUtJRSAWiiipAWlpuaWkAtLSUD0qWAtA5rf0rwhqOo4klX7LAf4pB8xHsvX88V32heELW0ZTaWxmmHWaUZI+nYfhzXBiMfSore7JckjgtK8I6jqOJJV+ywH+KQfMR7L1/PFd7oXhC1tGBtbYzTDrNIMkfTsPw5rtbLw6i4a4PmN/dHC//AF63obVI1CqoUDoBxXz2KzadTSOxjKr2MGy8OouGuD5jf3Rwv/163obRY1CqoUDsBVlUC9qdXkTqSm7tmLk3uNVAtOooqBBSUtFACUUUUAFFFFMAooooAKKKKACiiigRG8QaqF3psNyuJYw3oe4/GtOihNp3Q0zib/w42xhEFmjYcxuB0/ka8+1nwFaXDO1pmzn7xkfIT9Oo/D8q9zeINVC702G5XEsYb0PcfjXo4XMq1B3TNI1Wtz5g1PQ9Q0h8Xduyp0Ei8o34/wBDzWfX0Zf+HH2MIgs0bdY3A6fyNefaz4CtJ2drTNnP3jI+Qn6dR+H5V9RhM8p1Vapo+50RmmeaCjrWhqeiahpD4u7dlTOBIvKN+P8AQ81n17cJxmuaLuiwFLSdaWmAtFJRSAWjNFFSAtFJS0gClpKKQC0UUVIC0UlLSAKWkopAOopKWpABS0lbuleFNR1Pa5T7PAf+Wkoxkew6n+XvWVSpCmrydhXMOt3SvCuo6ntcp9ngP/LSUYyPYdT/AC967vQ/B1paOpgt2ubgf8tZBnafYdB/P3rtbPw+Bhrltx/ur0/OvGxWbxhpTIlUSOK0TwdaWjqYLc3NwP8AlrIM4PsOg/nXa2fh8cNctuP91en51vQWaRIFRAqjsOKsqgUdK+fr4ypVd2zCVRsrQWiRKFRAqjsBirIQKKdRXLuZhRRRSEGKMUUUwEooooAKKKKACiiigAooooGFFFFAB2pKWkoAKKWkoAKKKKACiiigC5RRRWRQUUUUAFFFFABRRRQAlFLSUAFFFFABRRRQAUlLSUAFFFFMQUUUUABANRvCGqSigDKvdKgulxLGCezdCPxrmdS8NP5bqqLcQt96N1BOPp0Nd1jNRvEGrWnXnTejKUmjwXWPANtMzvYMbWYdYnyUz/Mfr9K4bUdIv9Kk2Xlu0YPR+qt9COK+o73SoLpcSRgnsehH41zOo+Gn8t1VFuIT96N1BOPp0NfQ4PPpxtGpqjaNXufOtFejax4Bt5md7BjazDrE4JTP8x+v0rhtR0e/0qTZeW7Rg9H6q30I4r6bD42jXXuvU1TTKVLSUV1DFooopAApaSikAtFFFSAtFJS0rAFLSUDnpUsBaUV0GleD9R1HEkq/ZYD/ABSD5iPZev54rvtC8H2toym0tjNMOs8vJH07D8Oa8/E5hRop63ZLkkcFpXhDUdRxJKv2WA/xSD5iPZev54rvtC8H2toym0tjNMOs
0oyR9Ow/Dmu1svDqLhrg+Y390cL/APXrfhtEjUKqhQOgAwK+bxeb1KmkdjGVXsYFl4dRcNcnzG/ujhf/AK9b0NqsahVUKo6AcVZWMCnV5E6kpu7Zk5N7jVQLTqKKgQtHFJRQAUUoooEJRRRQAUlLRQAlFFFABRRRTAKKKKACiiigAooooEFIRmlooAiaINVG702G5XEsYb37j8a06TFCbTuhnFX/AIcfY4iCzRt1jcDp/I159rPgK0nd2tM2c/8AzzI+Qn6dvw/KvcmiDVRu9NhuVxLGG9+4+hr0cLmNWg7pmkarW58w6nomoaQ+Lu3ZU7SLyjfj/Q81nda+jL/w64RhEBNG3WNwOn8jXn+s+ArSdna0zZz/APPMj5Cfp1H4flX0+EzunV0qaeZ0RmmeZilrQ1PQ9Q0h8XduypnAlXlW/H+h5rPr2oTjNc0XdFi0UlFUAtGaKKkBaKSlpAFLSUUgFoooqbALRSVvaV4U1LU9shT7PAf+Wkoxkew6n+XvWVSpCmrzdguYVb2leFNS1Pa5T7PAf+Wkoxkew6n+XvXeaH4Ns7R1MFubm4H/AC1kGdp9h0H8/eu3s/D4GGuW3H+6vT868TF5xGGlMzlUSOI0PwbZ2jqYLc3NwP8AlrIM7T7DoP5+9dvZ+H1GGuW3n+6vT863YLRIkCogVR2HFWVjA7V87XxtSq7tmEqjZWgtEiUKiBVHYDFWVjC0+iuTczCkxS0UCExRS0lABRRRQAUUUUAFFFFACYopaKYCUUtJQAUUUUAFFFFABRRRQMMe1JS0UAJRS4pKACjtRRQBcooorIoKKKKACiiigAooooAKKKKAEopaSgAooooAKKKKACkpaSgAooopiCiiigAooooATANRvCGqWigDJvdLgulxLGCex6EfjXNaj4afy3VUW4hP3o3UE4+nQ13RFRvErVtTrzpv3WUpNHguseAbaZnewY2sw6xPkpn+Y/X6Vw2o6Rf6VLsvLdowfuv1VvoelfUV5pcF0uJYwT2PQj8a5rUfDT+W6qi3ELdY3AJx9Ohr6DB57OFo1NUbRq9z51pa9F1jwDbzM72DG1mHWJwSmf5j9fpXD6jpF/pUmy8t2QHo+Mq30PSvpcPjaNde69TVNMpUUUV1DAUtJSjnpUsApR7V0Gk+DtR1HbJKv2WA/wAUg+Yj2Xr+eK9A0HwdaWjKbS2M046zyDJH07D8Oa87E5lRore7E5JHAaT4P1HUdskq/ZYD/FIPmI9l6/niu/0HwdaWjA2lsZph1nlGSPp2H4c129l4djXDXJ8xv7o4X/69b0NokahVQKo6ADAr5nF5xUq6R2MZVexgWXh1Fw1wfMb+6OF/+vW9DaJGoVVCqOgAwKtLGFp1ePOpKbu2YuTe41UC06iioJCiiigBKKWkoGFFFFABRRRQAUUUUCCiloxQAlJS0UAJRS0lABRRRTAKKKKACjFFFABRRRQIKKKKAEopaSgCNog1ULvTYblcSxhvQ9x+NadGM002thpnFX/h19jCILNG3WN8dP5GvPtZ8BWlw7taZs5/7hHyE/TqPw/KvcniDDpVC702G4XbLGG9D3H416GGzGrQd0zWNVrc+YtT0PUNIfF3bsqdpV5Vvx/x5rPr6Lv/AA62xhEBNGeDG+On8jXn2s+A7Sdna0zZz/8APMj5Cfp1H4flX02EzunV0qaPubxmmebUVoanomoaQ+Lu3YJ2lXlD+P8AQ81nV7UJxmrxdyxaM0UUwFopK3tJ8J6lqm1yn2eA/wDLSUYyPYd/5e9ZVKsKavN2Awq3tJ8J6lqmHKfZ4D/y0lGMj2HU/wAveu+0LwZZ2bqYLc3NwP8AlrKM7T7DoP5+9dvZ+H14a5bef7q9PzrwsXnUYaUzOVVI4jQ/BlnZupgtzc3A/wCWsoztPsOg/n7129n4fUYa5bef7q9PzregtEiQKiBVHYDFWVjC185XxtSs7tmEqjZVgtEiUKiBVHYDFWQgWn4orj3MwooooEFFFFABRRRQAUUUUAFBopO9ABRS0lABRRRQAUUUUAFFFFACUUtFMBKKKKACiiigAxRRRQMKKKKAEopaMUAW6KKKyKCiiigAooooAKKKKACiiigAooooASilpKACiiigAooooAKSlpKACiiimIKKKKACiiigAooooAQgGo3iDVLSUAZV5pUF0uJYwT2PQj8a5rUfDT+W6qi3ELfejcAnH06Gu6IBqN4g1bU686b91lKTR4LrHgG2mZ3sGNrMOsTglc/zH6/SuH1HR7/Spdl3bsmThXxlW+hHFfUN5pcF0uJYwT2OMEfjXO3fhyZW/cFZEz0bgj+le/hM9nBctTX+u5tGr3PFNJ8HalqW2SVfssB/ilHzEey9fzxXoOg+DbSzZTaWxmnHWeUZI+nYfhzXb2XhxFw1yfMb+6OF/wDr1vw2iRqFVQoHQAYFc+MzqpVuo7ClV7GBZeHEXDXJ8xv7o4X/AOvW9DaJGoVVCqOgHAq0qBRTq8WdSU3dsxbb3GrGFp1FFQISiloxQAlFFFAgooopgFFFFACUUtJQMKKKKACiiigAooooEFFFFABRRRQAlFLRQAlFFFMAooooAKKKKACiiigAooooEJRS0lABSEA0tFAETxBqo3emw3K4ljDe/cfjWnSEZpptaoaZxd/4dbY4iAmjPWNwOn8jXn+s+A7Sdna0zZz/ANwj5Cfp1H4flXuDxBu1UbvToblcSxhvQ45H416GGzGrQejNY1Wtz5j1PRNQ0l8XduwTtKvKn8f8eavaT4S1LVNrlPs8B/5aSjGR7Dv/AC969uvPD0i58giRT/C/X/CrFn4eAw1y28/3F6fnXsyz9+z0Wpr7ZWOH0LwXZ2bqYLdrm4H/AC1lGdp9h0H8/eu4s/DyjDXLbj/dXgfnW9BaJEgVECqOw4qysYWvCxGNq1ndsxlUbK0FmkSBUQKo7AYqysYWn0Vx7mYUUUUhB2pKWimAlFLRQAlFFFABRRRQAUUUUAFFFFABRRRQAUlLSYoAKKXFJQAUUUUAFFFFABijFFFMBKKWkxQAUUUUAFFFFACZpaKKBluiiisSwooopiCiiigAooooAKKKKACiiigAooooASilpKACiiigAooooAKSlpKACiiimIKKKKACiiigAooooASilpKAEwDTTED6U+igBoQCnUUUAFFFFABRRRQAUUUUAFJilooEJRS0lABRRRTAKKKKAEopaSgYUUUUAFFFFABRRRQIKKKKACilpKACkpaKAEooopgFFFFABRRRQAUUUUCCkpaKAEopTSUAFFFFABSEZpaKAIzED6ULGBUlJQAUUUUAFFFFABRRRQAUUUUAFFFFABSUtFMBKKXFJQAUUUUAFFFFIAooopgFFFFABRRRQAlFLRQAlFFFABRRRQAUUUUAJRS0UwEooooAKKKKALlJS0ViaCUUUUAFFFFMQUUUUAFFFFABRRRQAUUUUAFFFFACUUtJQAUUUUAFFFFABSUtJQAUUUUxBRRRQAUUUUAFFFFACUUtJQAUUUUAFFFFABRRRQAUUUUAFFFFABRiiigQlFLRQMSiiim
AUUUUCEopaSgAooooGFFFFAgooooAKKKKACiiigApKWigBKKKKACiiimAUUUUAFFFFAgooooASilpKACiiigAooooASilpKACiiigAooooAKKKKACiiigAooo70AFFFFACUUtFMBKKKKACiiigAooopAFFFFMAooooAKSlooASilooASiiigAooooAKKKKACkpaSmBcooorE0CkpaKAEooooAKKKKYgooooAKKKKACiiigAooooAKKKKAEopaSgAooooAKKKKACkpaSgAooopiCiiigAooooAKKKKAEopaSgAooooAKKKKACiiigAooooAKKKKACiiigQUlLRQMSiiimAUUUUhCUUtJTAKKKKACiiigAooooAKKKKACiiigAooooASilpKACiiimAUUUUAFFFFAgooooASilpKACiiigAooooASilpKACiiigAooooAKKKKACiiigAooooAKKKKACiiimAlFLRQAlFFFABRXP33jPRNPvprGW4le6hBLxRQO5UYyegx0rMj+JGnXTbbLTdUujgkGOBQCB7lq0VGbV0iuVnZ0VzfhTxfB4qF0YbWS3NuwBEhBJz/LpXSVEouL5Zbiaa0YUUUUhBRRRQAUUUUAJRS4pKACiiigAooooAt0UUVkaBRRRQAUlLRQAlFFFABRRRTEFFFFABRRRQAUUUUAFFFFABRRRQAlFLSUAFFFFABRRRQAUlLSUAFFFFMQUUUUAFFFFABRRRQAlFLSUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUCCkpaKBiUUUUwCiiigQlFLSUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAHFJR3paAEopaSgAooopgFFJS0CCiiigBKKWkoAKKKKACiiigBKKWkoAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACkxS0UwPKNXvp9I+KN7bxSW8K6nFHGZZl4TKAAg/UV1usx3B0XVoLu7SW28lUijtod0q8DJIzye/GMVheL/AAzHrvj/AE1ZnaKB7U7nCn5irE7c9uDVLWtctbDULky6Kl7epKI03glFCrjOe4yM4+td1lNR5d7G29rEXwpla217WbOeTfK2DuzneVYgnPfrWxDf+J9SudeuLPVreKPTruWKO3ktwQ4XkAtnI44rG8Gaprup+K7STUI2MKRNGvlx7UUEEknHAOQP0rqZvA0j3OoGLXLyC1v5nlngjRRuLdRu+nFOs0qjcrar1CVr6mRqvjvUJtD0uXTPIivZoGurjzPuqiZGBn+8wwO/bvU2ra3rVx/Y1/pWsRw2erTRwJEbZWMJI5JOeeQeOK3bHwTpNrPcNLCt1FIkcUcU6BliRBgAZ9+Se5qKDwTa20VpBHdyiC01A30EeBhP+mY9s81l7SkvhQrxMTWvF2paNq0FuLmCe305Yv7TkKBWmZzjCrnqF+bAP8q0tSv9Z1Dxkuk6Vqsdnb/2eLsOYBKGO/b6+hFX4fBelG0u4b6Jb2S5mkmeaVBuDPwdp/hwAMYqivgWaGW1ntdeureeC0Fn5ixqS0YYsBz+A/Clz0um4XiZQ8X64yrow+zDVzqBs/tO0+XtAzvx6+1T3fiTWvCuoS2eq3EGpI9pJcQSrF5TblGdpAOMe9av/CCacNJFotxci5Fx9rF7v/e+d/ez0/D+vNOtfBdv9pnutVvrjU7iWA2+6YBQqHqAB0PvVc9HtoF4mZpes6qNU0oapr9qsl+izLYR2hxsYZAD54P1/Wu7rkrLwP8AZNQsJn1i7uLawbdbW8gU7OMAbupFdbWNVxbXKTK3QguZJY4w0SKxzyXbaqjGSSapnU5Ato7QKqz7AAZPmJJwcDHIA55xxVu8tFvIfKaR0XcGOzHzY7HIIIqN7ASFPMuZ3C7SykrhipyCQBxz6YzWRBbooooAt0UUVkaBRRRQAUUUUAFJS0GgBKKKKACiiimIKKKKACiiigAooooAKKKKACiiigBKKWkoAKKKKACiiigApKWkoAKKKKYgooooAKKKKACiiigBKKKKACiiigAooooAKKKKACiiigAooooAKKKKBBSUtJQMKKKKYBRRRQISilpDQAUUUUAFFFFABRRRQAUUUUAFFFFABSUtFACUopKKACiiigAoxRRTAKKKKACiiigQlFBooAKKKKACiiigBKKU0lABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQA10WRCjDKkYIqC3sLS1ULDbxoBnGBz6nmrNFO7HcTFFLQelAhKKKKACiiigAooooAKKKKACiiigAoooNAH//2Q==)
```
from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage

llm = ChatOllama(model="bakllava", temperature=0)


def prompt_func(data):
    text = data["text"]
    image = data["image"]

    image_part = {
        "type": "image_url",
        "image_url": f"data:image/jpeg;base64,{image}",
    }

    content_parts = []

    text_part = {"type": "text", "text": text}

    content_parts.append(image_part)
    content_parts.append(text_part)

    return [HumanMessage(content=content_parts)]


from langchain_core.output_parsers import StrOutputParser

chain = prompt_func | llm | StrOutputParser()

query_chain = chain.invoke(
    {"text": "What is the Dollar-based gross retention rate?", "image": image_b64}
)

print(query_chain)
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:48.136Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/ollama/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/ollama/",
"description": "Ollama allows you to run open-source large",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "6275",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"ollama\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:48 GMT",
"etag": "W/\"347f5ba5eb8a3c322cb1436cb0580a0b\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::8krzg-1713753527996-6145266cbe4d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/ollama/",
"property": "og:url"
},
{
"content": "ChatOllama | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Ollama allows you to run open-source large",
"property": "og:description"
}
],
"title": "ChatOllama | 🦜️🔗 LangChain"
} | ChatOllama
Ollama allows you to run open-source large language models, such as Llama 2, locally.
Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile.
It optimizes setup and configuration details, including GPU usage.
For a complete list of supported models and model variants, see the Ollama model library.
Setup
First, follow these instructions to set up and run a local Ollama instance:
Download and install Ollama onto the available supported platforms (including Windows Subsystem for Linux)
Fetch available LLM model via ollama pull <name-of-model>
View a list of available models via the model library
e.g., ollama pull llama3
This will download the default tagged version of the model. Typically, the default points to the latest, smallest-sized parameter model.
On Mac, the models will be downloaded to ~/.ollama/models
On Linux (or WSL), the models will be stored at /usr/share/ollama/.ollama/models
Specify the exact version of the model of interest as such ollama pull vicuna:13b-v1.5-16k-q4_0 (View the various tags for the Vicuna model in this instance)
To view all pulled models, use ollama list
To chat directly with a model from the command line, use ollama run <name-of-model>
View the Ollama documentation for more commands. Run ollama help in the terminal to see available commands too.
Usage
You can see a full list of supported parameters on the API reference page.
If you are using a LLaMA chat model (e.g., ollama pull llama3) then you can use the ChatOllama interface.
This includes special tokens for system message and user input.
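As a minimal sketch of that interface (assuming you have already pulled llama3), a system message and user input can be passed directly:
from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage, SystemMessage

llm = ChatOllama(model="llama3")

# ChatOllama wraps these messages in the model's special system/user tokens under the hood
messages = [
    SystemMessage(content="You are a concise assistant."),
    HumanMessage(content="Why is the sky blue?"),
]

print(llm.invoke(messages).content)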
Interacting with Models
Here are a few ways to interact with pulled local models
directly in the terminal:
All of your local models are automatically served on localhost:11434
Run ollama run <name-of-model> to start interacting via the command line directly
via an API
Send an application/json request to the API endpoint of Ollama to interact.
curl http://localhost:11434/api/generate -d '{
"model": "llama3",
"prompt":"Why is the sky blue?"
}'
See the Ollama API documentation for all endpoints.
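A rough Python equivalent of the curl call above (a sketch that assumes the local server is running and the requests package is installed):
import requests

# same /api/generate endpoint as the curl example; stream=False asks Ollama
# for a single JSON object instead of a token-by-token stream
response = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Why is the sky blue?", "stream": False},
)

print(response.json()["response"])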
via LangChain
See a typical basic example of using Ollama via the ChatOllama chat model in your LangChain application.
# LangChain supports many other chat models. Here, we're using Ollama
from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
# supports many more optional parameters. Hover on your `ChatOllama(...)`
# class to view the latest available supported parameters
llm = ChatOllama(model="llama3")
prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
# using LangChain Expressive Language chain syntax
# learn more about the LCEL on
# /docs/expression_language/why
chain = prompt | llm | StrOutputParser()
# for brevity, response is printed in terminal
# You can use LangServe to deploy your application for
# production
print(chain.invoke({"topic": "Space travel"}))
Why did the astronaut break up with his girlfriend?
Because he needed space!
LCEL chains, out of the box, provide extra functionalities, such as streaming of responses, and async support
topic = {"topic": "Space travel"}
for chunks in chain.stream(topic):
print(chunks)
Why
did
the
astronaut
break
up
with
his
girlfriend
before
going
to
Mars
?
Because
he
needed
space
!
For streaming async support, here’s an example - all possible via the single chain created above.
topic = {"topic": "Space travel"}
async for chunks in chain.astream(topic):
print(chunks)
Take a look at the LangChain Expressive Language (LCEL) Interface for the other available interfaces for use when a chain is created.
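For example, batching is one of those interfaces; a minimal sketch using the same chain created above:
# run the chain over several inputs in a single call
topics = [{"topic": "Space travel"}, {"topic": "The Moon"}]

print(chain.batch(topics))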
Building from source
For up to date instructions on building from source, check the Ollama documentation on Building from Source
Use the latest version of Ollama and supply the format flag. The format flag will force the model to produce the response in JSON.
Note: You can also try out the experimental OllamaFunctions wrapper for convenience.
from langchain_community.chat_models import ChatOllama
llm = ChatOllama(model="llama3", format="json", temperature=0)
from langchain_core.messages import HumanMessage
messages = [
HumanMessage(
content="What color is the sky at different times of the day? Respond using JSON"
)
]
chat_model_response = llm.invoke(messages)
print(chat_model_response)
content='{ "morning": "blue", "noon": "clear blue", "afternoon": "hazy yellow", "evening": "orange-red" }\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n ' id='run-e893700f-e2d0-4df8-ad86-17525dcee318-0'
import json
from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
json_schema = {
"title": "Person",
"description": "Identifying information about a person.",
"type": "object",
"properties": {
"name": {"title": "Name", "description": "The person's name", "type": "string"},
"age": {"title": "Age", "description": "The person's age", "type": "integer"},
"fav_food": {
"title": "Fav Food",
"description": "The person's favorite food",
"type": "string",
},
},
"required": ["name", "age"],
}
llm = ChatOllama(model="llama2")
messages = [
HumanMessage(
content="Please tell me about a person using the following JSON schema:"
),
HumanMessage(content="{dumps}"),
HumanMessage(
content="Now, considering the schema, tell me about a person named John who is 35 years old and loves pizza."
),
]
prompt = ChatPromptTemplate.from_messages(messages)
dumps = json.dumps(json_schema, indent=2)
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"dumps": dumps}))
Name: John
Age: 35
Likes: Pizza
Multi-modal
Ollama has support for multi-modal LLMs, such as bakllava and llava.
Browse the full set of versions for models with tags, such as Llava.
Download the desired LLM via ollama pull bakllava
Be sure to update Ollama so that you have the most recent version to support multi-modal.
Check out the typical example of how to use ChatOllama multi-modal support below:
pip install --upgrade --quiet pillow
Note: you may need to restart the kernel to use updated packages.
import base64
from io import BytesIO
from IPython.display import HTML, display
from PIL import Image
def convert_to_base64(pil_image):
"""
Convert PIL images to Base64 encoded strings
:param pil_image: PIL image
:return: Re-sized Base64 string
"""
buffered = BytesIO()
pil_image.save(buffered, format="JPEG") # You can change the format if needed
img_str = base64.b64encode(buffered.getvalue()).decode("utf-8")
return img_str
def plt_img_base64(img_base64):
"""
Display a base64-encoded string as an image
:param img_base64: Base64 string
"""
# Create an HTML img tag with the base64 string as the source
image_html = f'<img src="data:image/jpeg;base64,{img_base64}" />'
# Display the image by rendering the HTML
display(HTML(image_html))
file_path = "../../../static/img/ollama_example_img.jpg"
pil_image = Image.open(file_path)
image_b64 = convert_to_base64(pil_image)
plt_img_base64(image_b64)
from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage
llm = ChatOllama(model="bakllava", temperature=0)
def prompt_func(data):
text = data["text"]
image = data["image"]
image_part = {
"type": "image_url",
"image_url": f"data:image/jpeg;base64,{image}",
}
content_parts = []
text_part = {"type": "text", "text": text}
content_parts.append(image_part)
content_parts.append(text_part)
return [HumanMessage(content=content_parts)]
from langchain_core.output_parsers import StrOutputParser
chain = prompt_func | llm | StrOutputParser()
query_chain = chain.invoke(
{"text": "What is the Dollar-based gross retention rate?", "image": image_b64}
)
print(query_chain) |
https://python.langchain.com/docs/integrations/chat/openai/ | ## ChatOpenAI
This notebook covers how to get started with OpenAI chat models.
```
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
```
```
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
```
The above cell assumes that your OpenAI API key is set in your environment variables. If you would rather manually specify your API key and/or organization ID, use the following code:
```
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0, api_key="YOUR_API_KEY", openai_organization="YOUR_ORGANIZATION_ID")
```
Remove the openai\_organization parameter should it not apply to you.
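If you prefer environment variables, a minimal sketch (the interactive `getpass` prompt is just for illustration) could look like this:

```
import getpass
import os

# ChatOpenAI falls back to the OPENAI_API_KEY environment variable when no api_key is passed
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API key: ")
```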
```
messages = [
    ("system", "You are a helpful assistant that translates English to French."),
    ("human", "Translate this sentence from English to French. I love programming."),
]
llm.invoke(messages)
```
```
AIMessage(content="J'adore programmer.", response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 34, 'total_tokens': 40}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'stop', 'logprobs': None}, id='run-8591eae1-b42b-402b-a23a-dfdb0cd151bd-0')
```
## Chaining[](#chaining "Direct link to Chaining")
We can chain our model with a prompt template like so:
```
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)
chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
```
```
AIMessage(content='Ich liebe Programmieren.', response_metadata={'token_usage': {'completion_tokens': 5, 'prompt_tokens': 26, 'total_tokens': 31}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'stop', 'logprobs': None}, id='run-94fa6741-c99b-4513-afce-c3f562631c79-0')
```
OpenAI has a [tool calling](https://platform.openai.com/docs/guides/function-calling) (we use “tool calling” and “function calling” interchangeably here) API that lets you describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool. Tool-calling is extremely useful for building tool-using chains and agents, and for getting structured outputs from models more generally.
### ChatOpenAI.bind\_tools()[](#chatopenai.bind_tools "Direct link to ChatOpenAI.bind_tools()")
With `ChatOpenAI.bind_tools`, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to an OpenAI tool schema, which looks like:
```
{ "name": "...", "description": "...", "parameters": {...} # JSONSchema}
```
and passed in every model invocation.
```
from langchain_core.pydantic_v1 import BaseModel, Field


class GetWeather(BaseModel):
    """Get the current weather in a given location"""

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


llm_with_tools = llm.bind_tools([GetWeather])
```
```
ai_msg = llm_with_tools.invoke(
    "what is the weather like in San Francisco",
)
ai_msg
```
```
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_H7fABDuzEau48T10Qn0Lsh0D', 'function': {'arguments': '{"location":"San Francisco"}', 'name': 'GetWeather'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 70, 'total_tokens': 85}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-b469135e-2718-446a-8164-eef37e672ba2-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'San Francisco'}, 'id': 'call_H7fABDuzEau48T10Qn0Lsh0D'}])
```
### AIMessage.tool\_calls[](#aimessage.tool_calls "Direct link to AIMessage.tool_calls")
Notice that the AIMessage has a `tool_calls` attribute. These are represented in a standardized `ToolCall` format that is model-provider agnostic.
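The list shown below comes from reading that attribute on the message returned above:

```
ai_msg.tool_calls
```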
```
[{'name': 'GetWeather', 'args': {'location': 'San Francisco'}, 'id': 'call_H7fABDuzEau48T10Qn0Lsh0D'}]
```
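Because the format is standardized, downstream code can consume it without provider-specific parsing. A minimal, purely illustrative sketch:

```
# dispatch on whichever tool the model requested
for tool_call in ai_msg.tool_calls:
    if tool_call["name"] == "GetWeather":
        location = tool_call["args"]["location"]
        print(f"Model requested weather for: {location}")
```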
For more on binding tools and tool call outputs, head to the [tool calling](https://python.langchain.com/docs/modules/model_io/chat/function_calling/) docs.
## Fine-tuning[](#fine-tuning "Direct link to Fine-tuning")
You can call fine-tuned OpenAI models by passing in your corresponding `model_name` parameter.
This generally takes the form of `ft:{OPENAI_MODEL_NAME}:{ORG_NAME}::{MODEL_ID}`. For example:
```
fine_tuned_model = ChatOpenAI(
    temperature=0, model_name="ft:gpt-3.5-turbo-0613:langchain::7qTVM5AR"
)
fine_tuned_model(messages)
```
```
AIMessage(content="J'adore la programmation.", additional_kwargs={}, example=False)
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:49.597Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/openai/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/openai/",
"description": "This notebook covers how to get started with OpenAI chat models.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "8607",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"openai\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:49 GMT",
"etag": "W/\"68eab126b44cbbc8351dd3f51338f8b3\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::dxnkq-1713753529537-5d9246ecc9a6"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/openai/",
"property": "og:url"
},
{
"content": "ChatOpenAI | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook covers how to get started with OpenAI chat models.",
"property": "og:description"
}
],
"title": "ChatOpenAI | 🦜️🔗 LangChain"
} | ChatOpenAI
This notebook covers how to get started with OpenAI chat models.
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
The above cell assumes that your OpenAI API key is set in your environment variables. If you would rather manually specify your API key and/or organization ID, use the following code:
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0, api_key="YOUR_API_KEY", openai_organization="YOUR_ORGANIZATION_ID")
Remove the openai_organization parameter should it not apply to you.
messages = [
("system", "You are a helpful assistant that translates English to French."),
("human", "Translate this sentence from English to French. I love programming."),
]
llm.invoke(messages)
AIMessage(content="J'adore programmer.", response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 34, 'total_tokens': 40}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'stop', 'logprobs': None}, id='run-8591eae1-b42b-402b-a23a-dfdb0cd151bd-0')
Chaining
We can chain our model with a prompt template like so:
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You are a helpful assistant that translates {input_language} to {output_language}.",
),
("human", "{input}"),
]
)
chain = prompt | llm
chain.invoke(
{
"input_language": "English",
"output_language": "German",
"input": "I love programming.",
}
)
AIMessage(content='Ich liebe Programmieren.', response_metadata={'token_usage': {'completion_tokens': 5, 'prompt_tokens': 26, 'total_tokens': 31}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'stop', 'logprobs': None}, id='run-94fa6741-c99b-4513-afce-c3f562631c79-0')
OpenAI has a tool calling (we use “tool calling” and “function calling” interchangeably here) API that lets you describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool. Tool calling is extremely useful for building tool-using chains and agents, and for getting structured outputs from models more generally.
ChatOpenAI.bind_tools()
With ChatOpenAI.bind_tools, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to an OpenAI tool schema, which looks like:
{
"name": "...",
"description": "...",
"parameters": {...} # JSONSchema
}
and passed in every model invocation.
from langchain_core.pydantic_v1 import BaseModel, Field
class GetWeather(BaseModel):
"""Get the current weather in a given location"""
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
llm_with_tools = llm.bind_tools([GetWeather])
ai_msg = llm_with_tools.invoke(
"what is the weather like in San Francisco",
)
ai_msg
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_H7fABDuzEau48T10Qn0Lsh0D', 'function': {'arguments': '{"location":"San Francisco"}', 'name': 'GetWeather'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 70, 'total_tokens': 85}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-b469135e-2718-446a-8164-eef37e672ba2-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'San Francisco'}, 'id': 'call_H7fABDuzEau48T10Qn0Lsh0D'}])
AIMessage.tool_calls
Notice that the AIMessage has a tool_calls attribute. This contains the tool calls in a standardized ToolCall format that is model-provider agnostic.
[{'name': 'GetWeather',
'args': {'location': 'San Francisco'},
'id': 'call_H7fABDuzEau48T10Qn0Lsh0D'}]
For more on binding tools and tool call outputs, head to the tool calling docs.
Fine-tuning
You can call fine-tuned OpenAI models by passing in the corresponding model_name parameter.
This generally takes the form of ft:{OPENAI_MODEL_NAME}:{ORG_NAME}::{MODEL_ID}. For example:
fine_tuned_model = ChatOpenAI(
temperature=0, model_name="ft:gpt-3.5-turbo-0613:langchain::7qTVM5AR"
)
fine_tuned_model(messages)
AIMessage(content="J'adore la programmation.", additional_kwargs={}, example=False) |
https://python.langchain.com/docs/integrations/document_loaders/airtable/ | ## Airtable
```
%pip install --upgrade --quiet pyairtable
```
```
from langchain_community.document_loaders import AirtableLoader
```
* Get your API key [here](https://support.airtable.com/docs/creating-and-using-api-keys-and-access-tokens).
* Get ID of your base [here](https://airtable.com/developers/web/api/introduction).
* Get your table ID from the table url as shown [here](https://www.highviewapps.com/kb/where-can-i-find-the-airtable-base-id-and-table-id/#:~:text=Both%20the%20Airtable%20Base%20ID,URL%20that%20begins%20with%20tbl).
```
api_key = "xxx"base_id = "xxx"table_id = "xxx"
```
```
loader = AirtableLoader(api_key, table_id, base_id)docs = loader.load()
```
Returns each table row as `dict`.
```
eval(docs[0].page_content)
```
```
{'id': 'recF3GbGZCuh9sXIQ', 'createdTime': '2023-06-09T04:47:21.000Z', 'fields': {'Priority': 'High', 'Status': 'In progress', 'Name': 'Document Splitters'}}
```
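Because the page content is a Python-literal string, `ast.literal_eval` from the standard library is a safer way to parse it than `eval` — a small sketch using the document loaded above:
```
import ast

# Parse the stringified row back into a dict without executing arbitrary code
row = ast.literal_eval(docs[0].page_content)
row["fields"]["Name"]  # 'Document Splitters'
```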
 | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:50.225Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/airtable/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/airtable/",
"description": "- Get your API key",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3439",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"airtable\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:50 GMT",
"etag": "W/\"9145e78c10a84e65c409044c162efa49\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::gfrhk-1713753530161-196b0a396306"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/airtable/",
"property": "og:url"
},
{
"content": "Airtable | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "- Get your API key",
"property": "og:description"
}
],
"title": "Airtable | 🦜️🔗 LangChain"
} | Airtable
%pip install --upgrade --quiet pyairtable
from langchain_community.document_loaders import AirtableLoader
Get your API key here.
Get ID of your base here.
Get your table ID from the table url as shown here.
api_key = "xxx"
base_id = "xxx"
table_id = "xxx"
loader = AirtableLoader(api_key, table_id, base_id)
docs = loader.load()
Returns each table row as dict.
eval(docs[0].page_content)
{'id': 'recF3GbGZCuh9sXIQ',
'createdTime': '2023-06-09T04:47:21.000Z',
'fields': {'Priority': 'High',
'Status': 'In progress',
'Name': 'Document Splitters'}}
 |
https://python.langchain.com/docs/integrations/document_loaders/arxiv/ | ## Arxiv
> [arXiv](https://arxiv.org/) is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.
This notebook shows how to load scientific articles from `Arxiv.org` into a document format that we can use downstream.
## Installation[](#installation "Direct link to Installation")
First, you need to install the `arxiv` python package.
```
%pip install --upgrade --quiet arxiv
```
Second, you need to install the `PyMuPDF` python package, which transforms PDF files downloaded from the `arxiv.org` site into text format.
```
%pip install --upgrade --quiet pymupdf
```
## Examples[](#examples "Direct link to Examples")
`ArxivLoader` has these arguments:

*   `query`: free text used to find documents on arXiv.
*   optional `load_max_docs`: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments.
*   optional `load_all_available_meta`: default=False. By default only the most important fields are downloaded: `Published` (date when the document was published/last updated), `Title`, `Authors`, `Summary`. If True, the other fields are also downloaded.
```
from langchain_community.document_loaders import ArxivLoader
```
```
docs = ArxivLoader(query="1605.08386", load_max_docs=2).load()len(docs)
```
```
docs[0].metadata # meta-information of the Document
```
```
{'Published': '2016-05-26', 'Title': 'Heat-bath random walks with Markov bases', 'Authors': 'Caprice Stanley, Tobias Windisch', 'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'}
```
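If you need the remaining fields, you can pass `load_all_available_meta=True`. A sketch reusing the same paper ID (the exact set of extra keys depends on what arXiv returns for the entry):
```
docs = ArxivLoader(
    query="1605.08386",
    load_max_docs=1,
    load_all_available_meta=True,
).load()
sorted(docs[0].metadata.keys())  # more than the four default fields
```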
```
docs[0].page_content[:400] # all pages of the Document content
```
```
'arXiv:1605.08386v1 [math.CO] 26 May 2016\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\nCAPRICE STANLEY AND TOBIAS WINDISCH\nAbstract. Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on fibers of a\nfixed integer matrix can be bounded from above by a constant. We then study the mixing\nbehaviour of heat-b'
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:50.935Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/arxiv/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/arxiv/",
"description": "arXiv is an open-access archive for 2 million",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "6962",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"arxiv\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:50 GMT",
"etag": "W/\"9bff14b8a045834000c81e7e626fc81d\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::q5bhl-1713753530860-99054725e9b3"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/arxiv/",
"property": "og:url"
},
{
"content": "Arxiv | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "arXiv is an open-access archive for 2 million",
"property": "og:description"
}
],
"title": "Arxiv | 🦜️🔗 LangChain"
} | Arxiv
arXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.
This notebook shows how to load scientific articles from Arxiv.org into a document format that we can use downstream.
Installation
First, you need to install arxiv python package.
%pip install --upgrade --quiet arxiv
Second, you need to install PyMuPDF python package which transforms PDF files downloaded from the arxiv.org site into the text format.
%pip install --upgrade --quiet pymupdf
Examples
ArxivLoader has these arguments: - query: free text used to find documents on arXiv - optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. - optional load_all_available_meta: default=False. By default only the most important fields are downloaded: Published (date when the document was published/last updated), Title, Authors, Summary. If True, the other fields are also downloaded.
from langchain_community.document_loaders import ArxivLoader
docs = ArxivLoader(query="1605.08386", load_max_docs=2).load()
len(docs)
docs[0].metadata # meta-information of the Document
{'Published': '2016-05-26',
'Title': 'Heat-bath random walks with Markov bases',
'Authors': 'Caprice Stanley, Tobias Windisch',
'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'}
docs[0].page_content[:400] # all pages of the Document content
'arXiv:1605.08386v1 [math.CO] 26 May 2016\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\nCAPRICE STANLEY AND TOBIAS WINDISCH\nAbstract. Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on fibers of a\nfixed integer matrix can be bounded from above by a constant. We then study the mixing\nbehaviour of heat-b' |
https://python.langchain.com/docs/integrations/chat/perplexity/ | ## ChatPerplexity
This notebook covers how to get started with Perplexity chat models.
```
from langchain_community.chat_models import ChatPerplexityfrom langchain_core.prompts import ChatPromptTemplate
```
The code provided assumes that your PPLX\_API\_KEY is set in your environment variables. If you would like to manually specify your API key and also choose a different model, you can use the following code:
```
chat = ChatPerplexity(temperature=0, pplx_api_key="YOUR_API_KEY", model="pplx-70b-online")
```
You can check a list of available models [here](https://docs.perplexity.ai/docs/model-cards). For reproducibility, we can set the API key dynamically by taking it as an input in this notebook.
```
import osfrom getpass import getpassPPLX_API_KEY = getpass()os.environ["PPLX_API_KEY"] = PPLX_API_KEY
```
```
chat = ChatPerplexity(temperature=0, model="pplx-70b-online")
```
```
system = "You are a helpful assistant."human = "{input}"prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])chain = prompt | chatresponse = chain.invoke({"input": "Why is the Higgs Boson important?"})response.content
```
```
'The Higgs Boson is an elementary subatomic particle that plays a crucial role in the Standard Model of particle physics, which accounts for three of the four fundamental forces governing the behavior of our universe: the strong and weak nuclear forces, electromagnetism, and gravity. The Higgs Boson is important for several reasons:\n\n1. **Final Elementary Particle**: The Higgs Boson is the last elementary particle waiting to be discovered under the Standard Model. Its detection helps complete the Standard Model and further our understanding of the fundamental forces in the universe.\n\n2. **Mass Generation**: The Higgs Boson is responsible for giving mass to other particles, a process that occurs through its interaction with the Higgs field. This mass generation is essential for the formation of atoms, molecules, and the visible matter we observe in the universe.\n\n3. **Implications for New Physics**: While the detection of the Higgs Boson has confirmed many aspects of the Standard Model, it also opens up new possibilities for discoveries beyond the Standard Model. Further research on the Higgs Boson could reveal insights into the nature of dark matter, supersymmetry, and other exotic phenomena.\n\n4. **Advancements in Technology**: The search for the Higgs Boson has led to significant advancements in technology, such as the development of artificial intelligence and machine learning algorithms used in particle accelerators like the Large Hadron Collider (LHC). These advancements have not only contributed to the discovery of the Higgs Boson but also have potential applications in various other fields.\n\nIn summary, the Higgs Boson is important because it completes the Standard Model, plays a crucial role in mass generation, hints at new physics phenomena beyond the Standard Model, and drives advancements in technology.\n'
```
You can format and structure the prompts like you would typically. In the following example, we ask the model to tell us a joke about cats.
```
chat = ChatPerplexity(temperature=0, model="pplx-70b-online")prompt = ChatPromptTemplate.from_messages([("human", "Tell me a joke about {topic}")])chain = prompt | chatresponse = chain.invoke({"topic": "cats"})response.content
```
```
'Here\'s a joke about cats:\n\nWhy did the cat want math lessons from a mermaid?\n\nBecause it couldn\'t find its "core purpose" in life!\n\nRemember, cats are unique and fascinating creatures, and each one has its own special traits and abilities. While some may see them as mysterious or even a bit aloof, they are still beloved pets that bring joy and companionship to their owners. So, if your cat ever seeks guidance from a mermaid, just remember that they are on their own journey to self-discovery!\n'
```
## `ChatPerplexity` also supports streaming functionality:[](#chatperplexity-also-supports-streaming-functionality "Direct link to chatperplexity-also-supports-streaming-functionality")
```
chat = ChatPerplexity(temperature=0.7, model="pplx-70b-online")prompt = ChatPromptTemplate.from_messages( [("human", "Give me a list of famous tourist attractions in Pakistan")])chain = prompt | chatfor chunk in chain.stream({}): print(chunk.content, end="", flush=True)
```
```
Here is a list of some famous tourist attractions in Pakistan:1. **Minar-e-Pakistan**: A 62-meter high minaret in Lahore that represents the history of Pakistan.2. **Badshahi Mosque**: A historic mosque in Lahore with a capacity of 10,000 worshippers.3. **Shalimar Gardens**: A beautiful garden in Lahore with landscaped grounds and a series of cascading pools.4. **Pakistan Monument**: A national monument in Islamabad representing the four provinces and three districts of Pakistan.5. **National Museum of Pakistan**: A museum in Karachi showcasing the country's cultural history.6. **Faisal Mosque**: A large mosque in Islamabad that can accommodate up to 300,000 worshippers.7. **Clifton Beach**: A popular beach in Karachi offering water activities and recreational facilities.8. **Kartarpur Corridor**: A visa-free border crossing and religious corridor connecting Gurdwara Darbar Sahib in Pakistan to Gurudwara Sri Kartarpur Sahib in India.9. **Mohenjo-daro**: An ancient Indus Valley civilization site in Sindh, Pakistan, dating back to around 2500 BCE.10. **Hunza Valley**: A picturesque valley in Gilgit-Baltistan known for its stunning mountain scenery and unique culture.These attractions showcase the rich history, diverse culture, and natural beauty of Pakistan, making them popular destinations for both local and international tourists.
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:51.041Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/chat/perplexity/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/chat/perplexity/",
"description": "This notebook covers how to get started with Perplexity chat models.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"perplexity\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:50 GMT",
"etag": "W/\"e71361635e78125acaff44c9aec44145\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::67c2p-1713753530818-4da06e20c786"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/chat/perplexity/",
"property": "og:url"
},
{
"content": "ChatPerplexity | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook covers how to get started with Perplexity chat models.",
"property": "og:description"
}
],
"title": "ChatPerplexity | 🦜️🔗 LangChain"
} | ChatPerplexity
This notebook covers how to get started with Perplexity chat models.
from langchain_community.chat_models import ChatPerplexity
from langchain_core.prompts import ChatPromptTemplate
The code provided assumes that your PPLX_API_KEY is set in your environment variables. If you would like to manually specify your API key and also choose a different model, you can use the following code:
chat = ChatPerplexity(temperature=0, pplx_api_key="YOUR_API_KEY", model="pplx-70b-online")
You can check a list of available models here. For reproducibility, we can set the API key dynamically by taking it as an input in this notebook.
import os
from getpass import getpass
PPLX_API_KEY = getpass()
os.environ["PPLX_API_KEY"] = PPLX_API_KEY
chat = ChatPerplexity(temperature=0, model="pplx-70b-online")
system = "You are a helpful assistant."
human = "{input}"
prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])
chain = prompt | chat
response = chain.invoke({"input": "Why is the Higgs Boson important?"})
response.content
'The Higgs Boson is an elementary subatomic particle that plays a crucial role in the Standard Model of particle physics, which accounts for three of the four fundamental forces governing the behavior of our universe: the strong and weak nuclear forces, electromagnetism, and gravity. The Higgs Boson is important for several reasons:\n\n1. **Final Elementary Particle**: The Higgs Boson is the last elementary particle waiting to be discovered under the Standard Model. Its detection helps complete the Standard Model and further our understanding of the fundamental forces in the universe.\n\n2. **Mass Generation**: The Higgs Boson is responsible for giving mass to other particles, a process that occurs through its interaction with the Higgs field. This mass generation is essential for the formation of atoms, molecules, and the visible matter we observe in the universe.\n\n3. **Implications for New Physics**: While the detection of the Higgs Boson has confirmed many aspects of the Standard Model, it also opens up new possibilities for discoveries beyond the Standard Model. Further research on the Higgs Boson could reveal insights into the nature of dark matter, supersymmetry, and other exotic phenomena.\n\n4. **Advancements in Technology**: The search for the Higgs Boson has led to significant advancements in technology, such as the development of artificial intelligence and machine learning algorithms used in particle accelerators like the Large Hadron Collider (LHC). These advancements have not only contributed to the discovery of the Higgs Boson but also have potential applications in various other fields.\n\nIn summary, the Higgs Boson is important because it completes the Standard Model, plays a crucial role in mass generation, hints at new physics phenomena beyond the Standard Model, and drives advancements in technology.\n'
You can format and structure the prompts like you would typically. In the following example, we ask the model to tell us a joke about cats.
chat = ChatPerplexity(temperature=0, model="pplx-70b-online")
prompt = ChatPromptTemplate.from_messages([("human", "Tell me a joke about {topic}")])
chain = prompt | chat
response = chain.invoke({"topic": "cats"})
response.content
'Here\'s a joke about cats:\n\nWhy did the cat want math lessons from a mermaid?\n\nBecause it couldn\'t find its "core purpose" in life!\n\nRemember, cats are unique and fascinating creatures, and each one has its own special traits and abilities. While some may see them as mysterious or even a bit aloof, they are still beloved pets that bring joy and companionship to their owners. So, if your cat ever seeks guidance from a mermaid, just remember that they are on their own journey to self-discovery!\n'
ChatPerplexity also supports streaming functionality:
chat = ChatPerplexity(temperature=0.7, model="pplx-70b-online")
prompt = ChatPromptTemplate.from_messages(
[("human", "Give me a list of famous tourist attractions in Pakistan")]
)
chain = prompt | chat
for chunk in chain.stream({}):
print(chunk.content, end="", flush=True)
Here is a list of some famous tourist attractions in Pakistan:
1. **Minar-e-Pakistan**: A 62-meter high minaret in Lahore that represents the history of Pakistan.
2. **Badshahi Mosque**: A historic mosque in Lahore with a capacity of 10,000 worshippers.
3. **Shalimar Gardens**: A beautiful garden in Lahore with landscaped grounds and a series of cascading pools.
4. **Pakistan Monument**: A national monument in Islamabad representing the four provinces and three districts of Pakistan.
5. **National Museum of Pakistan**: A museum in Karachi showcasing the country's cultural history.
6. **Faisal Mosque**: A large mosque in Islamabad that can accommodate up to 300,000 worshippers.
7. **Clifton Beach**: A popular beach in Karachi offering water activities and recreational facilities.
8. **Kartarpur Corridor**: A visa-free border crossing and religious corridor connecting Gurdwara Darbar Sahib in Pakistan to Gurudwara Sri Kartarpur Sahib in India.
9. **Mohenjo-daro**: An ancient Indus Valley civilization site in Sindh, Pakistan, dating back to around 2500 BCE.
10. **Hunza Valley**: A picturesque valley in Gilgit-Baltistan known for its stunning mountain scenery and unique culture.
These attractions showcase the rich history, diverse culture, and natural beauty of Pakistan, making them popular destinations for both local and international tourists. |
https://python.langchain.com/docs/integrations/document_loaders/alibaba_cloud_maxcompute/ | ## Alibaba Cloud MaxCompute
> [Alibaba Cloud MaxCompute](https://www.alibabacloud.com/product/maxcompute) (previously known as ODPS) is a general purpose, fully managed, multi-tenancy data processing platform for large-scale data warehousing. MaxCompute supports various data importing solutions and distributed computing models, enabling users to effectively query massive datasets, reduce production costs, and ensure data security.
The `MaxComputeLoader` lets you execute a MaxCompute SQL query and load the results as one document per row.
```
%pip install --upgrade --quiet pyodps
```
```
Collecting pyodps Downloading pyodps-0.11.4.post0-cp39-cp39-macosx_10_9_universal2.whl (2.0 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 1.7 MB/s eta 0:00:0000:0100:010mRequirement already satisfied: charset-normalizer>=2 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (3.1.0)Requirement already satisfied: urllib3<2.0,>=1.26.0 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (1.26.15)Requirement already satisfied: idna>=2.5 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (3.4)Requirement already satisfied: certifi>=2017.4.17 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (2023.5.7)Installing collected packages: pyodpsSuccessfully installed pyodps-0.11.4.post0
```
## Basic Usage[](#basic-usage "Direct link to Basic Usage")
To instantiate the loader you’ll need a SQL query to execute, your MaxCompute endpoint and project name, and your access ID and secret access key. The access ID and secret access key can either be passed in directly via the `access_id` and `secret_access_key` parameters, or they can be set as the environment variables `MAX_COMPUTE_ACCESS_ID` and `MAX_COMPUTE_SECRET_ACCESS_KEY`.
```
from langchain_community.document_loaders import MaxComputeLoader
```
```
base_query = """SELECT *FROM ( SELECT 1 AS id, 'content1' AS content, 'meta_info1' AS meta_info UNION ALL SELECT 2 AS id, 'content2' AS content, 'meta_info2' AS meta_info UNION ALL SELECT 3 AS id, 'content3' AS content, 'meta_info3' AS meta_info) mydata;"""
```
```
endpoint = "<ENDPOINT>"project = "<PROJECT>"ACCESS_ID = "<ACCESS ID>"SECRET_ACCESS_KEY = "<SECRET ACCESS KEY>"
```
```
loader = MaxComputeLoader.from_params( base_query, endpoint, project, access_id=ACCESS_ID, secret_access_key=SECRET_ACCESS_KEY,)data = loader.load()
```
```
[Document(page_content='id: 1\ncontent: content1\nmeta_info: meta_info1', metadata={}), Document(page_content='id: 2\ncontent: content2\nmeta_info: meta_info2', metadata={}), Document(page_content='id: 3\ncontent: content3\nmeta_info: meta_info3', metadata={})]
```
```
print(data[0].page_content)
```
```
id: 1content: content1meta_info: meta_info1
```
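Alternatively, the credentials can be supplied through the environment variables mentioned above instead of being passed explicitly — a minimal sketch reusing the same placeholder values:
```
import os

# Credentials picked up by MaxComputeLoader when not passed as parameters
os.environ["MAX_COMPUTE_ACCESS_ID"] = ACCESS_ID
os.environ["MAX_COMPUTE_SECRET_ACCESS_KEY"] = SECRET_ACCESS_KEY

loader = MaxComputeLoader.from_params(base_query, endpoint, project)
data = loader.load()
```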
## Specifying Which Columns are Content vs Metadata[](#specifying-which-columns-are-content-vs-metadata "Direct link to Specifying Which Columns are Content vs Metadata")
You can configure which subset of columns should be loaded as the contents of the Document and which as the metadata using the `page_content_columns` and `metadata_columns` parameters.
```
loader = MaxComputeLoader.from_params( base_query, endpoint, project, page_content_columns=["content"], # Specify Document page content metadata_columns=["id", "meta_info"], # Specify Document metadata access_id=ACCESS_ID, secret_access_key=SECRET_ACCESS_KEY,)data = loader.load()
```
```
print(data[0].page_content)
```
```
{'id': 1, 'meta_info': 'meta_info1'}
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:51.290Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/alibaba_cloud_maxcompute/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/alibaba_cloud_maxcompute/",
"description": "[Alibaba Cloud",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"alibaba_cloud_maxcompute\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:50 GMT",
"etag": "W/\"3a5c9032340d900a910f98b1c034c0c1\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::xf6qg-1713753530824-2b05210cb56b"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/alibaba_cloud_maxcompute/",
"property": "og:url"
},
{
"content": "Alibaba Cloud MaxCompute | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "[Alibaba Cloud",
"property": "og:description"
}
],
"title": "Alibaba Cloud MaxCompute | 🦜️🔗 LangChain"
} | Alibaba Cloud MaxCompute
Alibaba Cloud MaxCompute (previously known as ODPS) is a general purpose, fully managed, multi-tenancy data processing platform for large-scale data warehousing. MaxCompute supports various data importing solutions and distributed computing models, enabling users to effectively query massive datasets, reduce production costs, and ensure data security.
The MaxComputeLoader lets you execute a MaxCompute SQL query and load the results as one document per row.
%pip install --upgrade --quiet pyodps
Collecting pyodps
Downloading pyodps-0.11.4.post0-cp39-cp39-macosx_10_9_universal2.whl (2.0 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 1.7 MB/s eta 0:00:0000:0100:010m
Requirement already satisfied: charset-normalizer>=2 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (3.1.0)
Requirement already satisfied: urllib3<2.0,>=1.26.0 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (1.26.15)
Requirement already satisfied: idna>=2.5 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (3.4)
Requirement already satisfied: certifi>=2017.4.17 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (2023.5.7)
Installing collected packages: pyodps
Successfully installed pyodps-0.11.4.post0
Basic Usage
To instantiate the loader you’ll need a SQL query to execute, your MaxCompute endpoint and project name, and your access ID and secret access key. The access ID and secret access key can either be passed in directly via the access_id and secret_access_key parameters, or they can be set as the environment variables MAX_COMPUTE_ACCESS_ID and MAX_COMPUTE_SECRET_ACCESS_KEY.
from langchain_community.document_loaders import MaxComputeLoader
base_query = """
SELECT *
FROM (
SELECT 1 AS id, 'content1' AS content, 'meta_info1' AS meta_info
UNION ALL
SELECT 2 AS id, 'content2' AS content, 'meta_info2' AS meta_info
UNION ALL
SELECT 3 AS id, 'content3' AS content, 'meta_info3' AS meta_info
) mydata;
"""
endpoint = "<ENDPOINT>"
project = "<PROJECT>"
ACCESS_ID = "<ACCESS ID>"
SECRET_ACCESS_KEY = "<SECRET ACCESS KEY>"
loader = MaxComputeLoader.from_params(
base_query,
endpoint,
project,
access_id=ACCESS_ID,
secret_access_key=SECRET_ACCESS_KEY,
)
data = loader.load()
[Document(page_content='id: 1\ncontent: content1\nmeta_info: meta_info1', metadata={}), Document(page_content='id: 2\ncontent: content2\nmeta_info: meta_info2', metadata={}), Document(page_content='id: 3\ncontent: content3\nmeta_info: meta_info3', metadata={})]
print(data[0].page_content)
id: 1
content: content1
meta_info: meta_info1
Specifying Which Columns are Content vs Metadata
You can configure which subset of columns should be loaded as the contents of the Document and which as the metadata using the page_content_columns and metadata_columns parameters.
loader = MaxComputeLoader.from_params(
base_query,
endpoint,
project,
page_content_columns=["content"], # Specify Document page content
metadata_columns=["id", "meta_info"], # Specify Document metadata
access_id=ACCESS_ID,
secret_access_key=SECRET_ACCESS_KEY,
)
data = loader.load()
print(data[0].page_content)
{'id': 1, 'meta_info': 'meta_info1'} |
https://python.langchain.com/docs/integrations/document_loaders/arcgis/ | This notebook demonstrates the use of the `langchain_community.document_loaders.ArcGISLoader` class.
You will need to install the ArcGIS API for Python `arcgis` and, optionally, `bs4.BeautifulSoup`.
You can use an `arcgis.gis.GIS` object for authenticated data loading, or leave it blank to access public data.
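The cells below were run against a public beach-access layer from Volusia County, FL; the construction step is not shown on this page, so here is a minimal sketch of what it presumably looked like (the layer URL is taken from the metadata printed further down, and no `GIS` object is passed since the data is public):
```
from langchain_community.document_loaders import ArcGISLoader

# Public feature layer; pass gis=arcgis.gis.GIS(...) instead for authenticated access
URL = "https://maps1.vcgov.org/arcgis/rest/services/Beaches/MapServer/7"
loader = ArcGISLoader(URL)
```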
Let’s measure loader latency.
```
CPU times: user 2.37 ms, sys: 5.83 ms, total: 8.19 msWall time: 1.05 s
```
```
{'accessed': '2023-09-13T19:58:32.546576+00:00Z', 'name': 'Beach Ramps', 'url': 'https://maps1.vcgov.org/arcgis/rest/services/Beaches/MapServer/7', 'layer_description': '(Not Provided)', 'item_description': '(Not Provided)', 'layer_properties': { "currentVersion": 10.81, "id": 7, "name": "Beach Ramps", "type": "Feature Layer", "description": "", "geometryType": "esriGeometryPoint", "sourceSpatialReference": { "wkid": 2881, "latestWkid": 2881 }, "copyrightText": "", "parentLayer": null, "subLayers": [], "minScale": 750000, "maxScale": 0, "drawingInfo": { "renderer": { "type": "simple", "symbol": { "type": "esriPMS", "url": "9bb2e5ca499bb68aa3ee0d4e1ecc3849", "imageData": "iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAAXNSR0IB2cksfwAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAJJJREFUOI3NkDEKg0AQRZ9kkSnSGBshR7DJqdJYeg7BMpcS0uQWQsqoCLExkcUJzGqT38zw2fcY1rEzbp7vjXz0EXC7gBxs1ABcG/8CYkCcDqwyLqsV+RlV0I/w7PzuJBArr1VB20H58Ls6h+xoFITkTwWpQJX7XSIBAnFwVj7MLAjJV/AC6G3QoAmK+74Lom04THTBEp/HCSc6AAAAAElFTkSuQmCC", "contentType": "image/png", "width": 12, "height": 12, "angle": 0, "xoffset": 0, "yoffset": 0 }, "label": "", "description": "" }, "transparency": 0, "labelingInfo": null }, "defaultVisibility": true, "extent": { "xmin": -81.09480168806815, "ymin": 28.858349245353473, "xmax": -80.77512908572814, "ymax": 29.41078388840041, "spatialReference": { "wkid": 4326, "latestWkid": 4326 } }, "hasAttachments": false, "htmlPopupType": "esriServerHTMLPopupTypeNone", "displayField": "AccessName", "typeIdField": null, "subtypeFieldName": null, "subtypeField": null, "defaultSubtypeCode": null, "fields": [ { "name": "OBJECTID", "type": "esriFieldTypeOID", "alias": "OBJECTID", "domain": null }, { "name": "Shape", "type": "esriFieldTypeGeometry", "alias": "Shape", "domain": null }, { "name": "AccessName", "type": "esriFieldTypeString", "alias": "AccessName", "length": 40, "domain": null }, { "name": "AccessID", "type": "esriFieldTypeString", "alias": "AccessID", "length": 50, "domain": null }, { "name": "AccessType", "type": "esriFieldTypeString", "alias": "AccessType", "length": 25, "domain": null }, { "name": "GeneralLoc", "type": "esriFieldTypeString", "alias": "GeneralLoc", "length": 100, "domain": null }, { "name": "MilePost", "type": "esriFieldTypeDouble", "alias": "MilePost", "domain": null }, { "name": "City", "type": "esriFieldTypeString", "alias": "City", "length": 50, "domain": null }, { "name": "AccessStatus", "type": "esriFieldTypeString", "alias": "AccessStatus", "length": 50, "domain": null }, { "name": "Entry_Date_Time", "type": "esriFieldTypeDate", "alias": "Entry_Date_Time", "length": 8, "domain": null }, { "name": "DrivingZone", "type": "esriFieldTypeString", "alias": "DrivingZone", "length": 50, "domain": null } ], "geometryField": { "name": "Shape", "type": "esriFieldTypeGeometry", "alias": "Shape" }, "indexes": null, "subtypes": [], "relationships": [], "canModifyLayer": true, "canScaleSymbols": false, "hasLabels": false, "capabilities": "Map,Query,Data", "maxRecordCount": 1000, "supportsStatistics": true, "supportsAdvancedQueries": true, "supportedQueryFormats": "JSON, geoJSON", "isDataVersioned": false, "ownershipBasedAccessControlForFeatures": { "allowOthersToQuery": true }, "useStandardizedQueries": true, "advancedQueryCapabilities": { "useStandardizedQueries": true, "supportsStatistics": true, "supportsHavingClause": true, "supportsCountDistinct": true, "supportsOrderBy": true, "supportsDistinct": true, "supportsPagination": true, "supportsTrueCurve": true, "supportsReturningQueryExtent": true, 
"supportsQueryWithDistance": true, "supportsSqlExpression": true }, "supportsDatumTransformation": true, "dateFieldsTimeReference": null, "supportsCoordinatesQuantization": true }}
```
If you want to retrieve feature geometries, you may do so with the `return_geometry` keyword.
Each document’s geometry will be stored in its metadata dictionary.
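A minimal sketch of how this might look, reusing the `URL` and import from the sketch above and assuming the geometry lands under a `geometry` key in the metadata:
```
# Ask the loader to include feature geometries alongside the attributes
loader_geom = ArcGISLoader(URL, return_geometry=True)
docs = loader_geom.load()
docs[0].metadata["geometry"]  # e.g. {'x': ..., 'y': ..., 'spatialReference': {...}}
```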
```
CPU times: user 9.6 ms, sys: 5.84 ms, total: 15.4 msWall time: 1.06 s
```
```
{'x': -81.01508803280349, 'y': 29.24246579525828, 'spatialReference': {'wkid': 4326, 'latestWkid': 4326}}
```
```
{"OBJECTID": 4, "AccessName": "UNIVERSITY BLVD", "AccessID": "DB-048", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "900 BLK N ATLANTIC AV", "MilePost": 13.74, "City": "DAYTONA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694597536000, "DrivingZone": "BOTH"}{"OBJECTID": 18, "AccessName": "BEACHWAY AV", "AccessID": "NS-106", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "1400 N ATLANTIC AV", "MilePost": 1.57, "City": "NEW SMYRNA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694600478000, "DrivingZone": "YES"}{"OBJECTID": 24, "AccessName": "27TH AV", "AccessID": "NS-141", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "3600 BLK S ATLANTIC AV", "MilePost": 4.83, "City": "NEW SMYRNA BEACH", "AccessStatus": "CLOSED FOR HIGH TIDE", "Entry_Date_Time": 1694619363000, "DrivingZone": "BOTH"}{"OBJECTID": 26, "AccessName": "SEABREEZE BLVD", "AccessID": "DB-051", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "500 BLK N ATLANTIC AV", "MilePost": 14.24, "City": "DAYTONA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694597536000, "DrivingZone": "BOTH"}{"OBJECTID": 30, "AccessName": "INTERNATIONAL SPEEDWAY BLVD", "AccessID": "DB-059", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "300 BLK S ATLANTIC AV", "MilePost": 15.27, "City": "DAYTONA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694598638000, "DrivingZone": "BOTH"}{"OBJECTID": 33, "AccessName": "GRANADA BLVD", "AccessID": "OB-030", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "20 BLK OCEAN SHORE BLVD", "MilePost": 10.02, "City": "ORMOND BEACH", "AccessStatus": "4X4 ONLY", "Entry_Date_Time": 1694595424000, "DrivingZone": "BOTH"}{"OBJECTID": 39, "AccessName": "BEACH ST", "AccessID": "PI-097", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "4890 BLK S ATLANTIC AV", "MilePost": 25.85, "City": "PONCE INLET", "AccessStatus": "4X4 ONLY", "Entry_Date_Time": 1694596294000, "DrivingZone": "BOTH"}{"OBJECTID": 44, "AccessName": "SILVER BEACH AV", "AccessID": "DB-064", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "1000 BLK S ATLANTIC AV", "MilePost": 15.98, "City": "DAYTONA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694598638000, "DrivingZone": "YES"}{"OBJECTID": 45, "AccessName": "BOTEFUHR AV", "AccessID": "DBS-067", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "1900 BLK S ATLANTIC AV", "MilePost": 16.68, "City": "DAYTONA BEACH SHORES", "AccessStatus": "OPEN", "Entry_Date_Time": 1694598638000, "DrivingZone": "YES"}{"OBJECTID": 46, "AccessName": "MINERVA RD", "AccessID": "DBS-069", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "2300 BLK S ATLANTIC AV", "MilePost": 17.52, "City": "DAYTONA BEACH SHORES", "AccessStatus": "OPEN", "Entry_Date_Time": 1694598638000, "DrivingZone": "YES"}{"OBJECTID": 56, "AccessName": "3RD AV", "AccessID": "NS-118", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "1200 BLK HILL ST", "MilePost": 3.25, "City": "NEW SMYRNA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694600478000, "DrivingZone": "YES"}{"OBJECTID": 65, "AccessName": "MILSAP RD", "AccessID": "OB-037", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "700 BLK S ATLANTIC AV", "MilePost": 11.52, "City": "ORMOND BEACH", "AccessStatus": "4X4 ONLY", "Entry_Date_Time": 1694595749000, "DrivingZone": "YES"}{"OBJECTID": 72, "AccessName": "ROCKEFELLER DR", "AccessID": "OB-034", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "400 BLK S ATLANTIC AV", "MilePost": 10.9, "City": "ORMOND BEACH", "AccessStatus": "CLOSED - SEASONAL", "Entry_Date_Time": 1694591351000, "DrivingZone": "YES"}{"OBJECTID": 74, "AccessName": 
"DUNLAWTON BLVD", "AccessID": "DBS-078", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "3400 BLK S ATLANTIC AV", "MilePost": 20.61, "City": "DAYTONA BEACH SHORES", "AccessStatus": "OPEN", "Entry_Date_Time": 1694601124000, "DrivingZone": "YES"}{"OBJECTID": 77, "AccessName": "EMILIA AV", "AccessID": "DBS-082", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "3790 BLK S ATLANTIC AV", "MilePost": 21.38, "City": "DAYTONA BEACH SHORES", "AccessStatus": "OPEN", "Entry_Date_Time": 1694601124000, "DrivingZone": "BOTH"}{"OBJECTID": 84, "AccessName": "VAN AV", "AccessID": "DBS-075", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "3100 BLK S ATLANTIC AV", "MilePost": 19.6, "City": "DAYTONA BEACH SHORES", "AccessStatus": "OPEN", "Entry_Date_Time": 1694601124000, "DrivingZone": "YES"}{"OBJECTID": 104, "AccessName": "HARVARD DR", "AccessID": "OB-038", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "900 BLK S ATLANTIC AV", "MilePost": 11.72, "City": "ORMOND BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694597536000, "DrivingZone": "YES"}{"OBJECTID": 106, "AccessName": "WILLIAMS AV", "AccessID": "DB-042", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "2200 BLK N ATLANTIC AV", "MilePost": 12.5, "City": "DAYTONA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694597536000, "DrivingZone": "YES"}{"OBJECTID": 109, "AccessName": "HARTFORD AV", "AccessID": "DB-043", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "1890 BLK N ATLANTIC AV", "MilePost": 12.76, "City": "DAYTONA BEACH", "AccessStatus": "CLOSED - SEASONAL", "Entry_Date_Time": 1694591351000, "DrivingZone": "YES"}{"OBJECTID": 138, "AccessName": "CRAWFORD RD", "AccessID": "NS-108", "AccessType": "OPEN VEHICLE RAMP - PASS", "GeneralLoc": "800 BLK N ATLANTIC AV", "MilePost": 2.19, "City": "NEW SMYRNA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694600478000, "DrivingZone": "YES"}{"OBJECTID": 140, "AccessName": "FLAGLER AV", "AccessID": "NS-110", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "500 BLK FLAGLER AV", "MilePost": 2.57, "City": "NEW SMYRNA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694600478000, "DrivingZone": "YES"}{"OBJECTID": 144, "AccessName": "CARDINAL DR", "AccessID": "OB-036", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "600 BLK S ATLANTIC AV", "MilePost": 11.27, "City": "ORMOND BEACH", "AccessStatus": "4X4 ONLY", "Entry_Date_Time": 1694595749000, "DrivingZone": "YES"}{"OBJECTID": 174, "AccessName": "EL PORTAL ST", "AccessID": "DBS-076", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "3200 BLK S ATLANTIC AV", "MilePost": 20.04, "City": "DAYTONA BEACH SHORES", "AccessStatus": "OPEN", "Entry_Date_Time": 1694601124000, "DrivingZone": "YES"}
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:51.577Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/arcgis/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/arcgis/",
"description": "This notebook demonstrates the use of the",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "6748",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"arcgis\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:50 GMT",
"etag": "W/\"a730d14de9958a7e2b613a98d36cf9d4\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::rgmpg-1713753530939-01dfaa28b719"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/arcgis/",
"property": "og:url"
},
{
"content": "ArcGIS | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook demonstrates the use of the",
"property": "og:description"
}
],
"title": "ArcGIS | 🦜️🔗 LangChain"
} | This notebook demonstrates the use of the langchain_community.document_loaders.ArcGISLoader class.
You will need to install the ArcGIS API for Python arcgis and, optionally, bs4.BeautifulSoup.
You can use an arcgis.gis.GIS object for authenticated data loading, or leave it blank to access public data.
Let’s measure loader latency.
CPU times: user 2.37 ms, sys: 5.83 ms, total: 8.19 ms
Wall time: 1.05 s
{'accessed': '2023-09-13T19:58:32.546576+00:00Z',
'name': 'Beach Ramps',
'url': 'https://maps1.vcgov.org/arcgis/rest/services/Beaches/MapServer/7',
'layer_description': '(Not Provided)',
'item_description': '(Not Provided)',
'layer_properties': {
"currentVersion": 10.81,
"id": 7,
"name": "Beach Ramps",
"type": "Feature Layer",
"description": "",
"geometryType": "esriGeometryPoint",
"sourceSpatialReference": {
"wkid": 2881,
"latestWkid": 2881
},
"copyrightText": "",
"parentLayer": null,
"subLayers": [],
"minScale": 750000,
"maxScale": 0,
"drawingInfo": {
"renderer": {
"type": "simple",
"symbol": {
"type": "esriPMS",
"url": "9bb2e5ca499bb68aa3ee0d4e1ecc3849",
"imageData": "iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAAXNSR0IB2cksfwAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAJJJREFUOI3NkDEKg0AQRZ9kkSnSGBshR7DJqdJYeg7BMpcS0uQWQsqoCLExkcUJzGqT38zw2fcY1rEzbp7vjXz0EXC7gBxs1ABcG/8CYkCcDqwyLqsV+RlV0I/w7PzuJBArr1VB20H58Ls6h+xoFITkTwWpQJX7XSIBAnFwVj7MLAjJV/AC6G3QoAmK+74Lom04THTBEp/HCSc6AAAAAElFTkSuQmCC",
"contentType": "image/png",
"width": 12,
"height": 12,
"angle": 0,
"xoffset": 0,
"yoffset": 0
},
"label": "",
"description": ""
},
"transparency": 0,
"labelingInfo": null
},
"defaultVisibility": true,
"extent": {
"xmin": -81.09480168806815,
"ymin": 28.858349245353473,
"xmax": -80.77512908572814,
"ymax": 29.41078388840041,
"spatialReference": {
"wkid": 4326,
"latestWkid": 4326
}
},
"hasAttachments": false,
"htmlPopupType": "esriServerHTMLPopupTypeNone",
"displayField": "AccessName",
"typeIdField": null,
"subtypeFieldName": null,
"subtypeField": null,
"defaultSubtypeCode": null,
"fields": [
{
"name": "OBJECTID",
"type": "esriFieldTypeOID",
"alias": "OBJECTID",
"domain": null
},
{
"name": "Shape",
"type": "esriFieldTypeGeometry",
"alias": "Shape",
"domain": null
},
{
"name": "AccessName",
"type": "esriFieldTypeString",
"alias": "AccessName",
"length": 40,
"domain": null
},
{
"name": "AccessID",
"type": "esriFieldTypeString",
"alias": "AccessID",
"length": 50,
"domain": null
},
{
"name": "AccessType",
"type": "esriFieldTypeString",
"alias": "AccessType",
"length": 25,
"domain": null
},
{
"name": "GeneralLoc",
"type": "esriFieldTypeString",
"alias": "GeneralLoc",
"length": 100,
"domain": null
},
{
"name": "MilePost",
"type": "esriFieldTypeDouble",
"alias": "MilePost",
"domain": null
},
{
"name": "City",
"type": "esriFieldTypeString",
"alias": "City",
"length": 50,
"domain": null
},
{
"name": "AccessStatus",
"type": "esriFieldTypeString",
"alias": "AccessStatus",
"length": 50,
"domain": null
},
{
"name": "Entry_Date_Time",
"type": "esriFieldTypeDate",
"alias": "Entry_Date_Time",
"length": 8,
"domain": null
},
{
"name": "DrivingZone",
"type": "esriFieldTypeString",
"alias": "DrivingZone",
"length": 50,
"domain": null
}
],
"geometryField": {
"name": "Shape",
"type": "esriFieldTypeGeometry",
"alias": "Shape"
},
"indexes": null,
"subtypes": [],
"relationships": [],
"canModifyLayer": true,
"canScaleSymbols": false,
"hasLabels": false,
"capabilities": "Map,Query,Data",
"maxRecordCount": 1000,
"supportsStatistics": true,
"supportsAdvancedQueries": true,
"supportedQueryFormats": "JSON, geoJSON",
"isDataVersioned": false,
"ownershipBasedAccessControlForFeatures": {
"allowOthersToQuery": true
},
"useStandardizedQueries": true,
"advancedQueryCapabilities": {
"useStandardizedQueries": true,
"supportsStatistics": true,
"supportsHavingClause": true,
"supportsCountDistinct": true,
"supportsOrderBy": true,
"supportsDistinct": true,
"supportsPagination": true,
"supportsTrueCurve": true,
"supportsReturningQueryExtent": true,
"supportsQueryWithDistance": true,
"supportsSqlExpression": true
},
"supportsDatumTransformation": true,
"dateFieldsTimeReference": null,
"supportsCoordinatesQuantization": true
}}
If you want to retrieve feature geometries, you may do so with the return_geometry keyword.
Each document’s geometry will be stored in its metadata dictionary.
CPU times: user 9.6 ms, sys: 5.84 ms, total: 15.4 ms
Wall time: 1.06 s
{'x': -81.01508803280349,
'y': 29.24246579525828,
'spatialReference': {'wkid': 4326, 'latestWkid': 4326}}
{"OBJECTID": 4, "AccessName": "UNIVERSITY BLVD", "AccessID": "DB-048", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "900 BLK N ATLANTIC AV", "MilePost": 13.74, "City": "DAYTONA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694597536000, "DrivingZone": "BOTH"}
{"OBJECTID": 18, "AccessName": "BEACHWAY AV", "AccessID": "NS-106", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "1400 N ATLANTIC AV", "MilePost": 1.57, "City": "NEW SMYRNA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694600478000, "DrivingZone": "YES"}
{"OBJECTID": 24, "AccessName": "27TH AV", "AccessID": "NS-141", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "3600 BLK S ATLANTIC AV", "MilePost": 4.83, "City": "NEW SMYRNA BEACH", "AccessStatus": "CLOSED FOR HIGH TIDE", "Entry_Date_Time": 1694619363000, "DrivingZone": "BOTH"}
{"OBJECTID": 26, "AccessName": "SEABREEZE BLVD", "AccessID": "DB-051", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "500 BLK N ATLANTIC AV", "MilePost": 14.24, "City": "DAYTONA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694597536000, "DrivingZone": "BOTH"}
{"OBJECTID": 30, "AccessName": "INTERNATIONAL SPEEDWAY BLVD", "AccessID": "DB-059", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "300 BLK S ATLANTIC AV", "MilePost": 15.27, "City": "DAYTONA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694598638000, "DrivingZone": "BOTH"}
{"OBJECTID": 33, "AccessName": "GRANADA BLVD", "AccessID": "OB-030", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "20 BLK OCEAN SHORE BLVD", "MilePost": 10.02, "City": "ORMOND BEACH", "AccessStatus": "4X4 ONLY", "Entry_Date_Time": 1694595424000, "DrivingZone": "BOTH"}
{"OBJECTID": 39, "AccessName": "BEACH ST", "AccessID": "PI-097", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "4890 BLK S ATLANTIC AV", "MilePost": 25.85, "City": "PONCE INLET", "AccessStatus": "4X4 ONLY", "Entry_Date_Time": 1694596294000, "DrivingZone": "BOTH"}
{"OBJECTID": 44, "AccessName": "SILVER BEACH AV", "AccessID": "DB-064", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "1000 BLK S ATLANTIC AV", "MilePost": 15.98, "City": "DAYTONA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694598638000, "DrivingZone": "YES"}
{"OBJECTID": 45, "AccessName": "BOTEFUHR AV", "AccessID": "DBS-067", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "1900 BLK S ATLANTIC AV", "MilePost": 16.68, "City": "DAYTONA BEACH SHORES", "AccessStatus": "OPEN", "Entry_Date_Time": 1694598638000, "DrivingZone": "YES"}
{"OBJECTID": 46, "AccessName": "MINERVA RD", "AccessID": "DBS-069", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "2300 BLK S ATLANTIC AV", "MilePost": 17.52, "City": "DAYTONA BEACH SHORES", "AccessStatus": "OPEN", "Entry_Date_Time": 1694598638000, "DrivingZone": "YES"}
{"OBJECTID": 56, "AccessName": "3RD AV", "AccessID": "NS-118", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "1200 BLK HILL ST", "MilePost": 3.25, "City": "NEW SMYRNA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694600478000, "DrivingZone": "YES"}
{"OBJECTID": 65, "AccessName": "MILSAP RD", "AccessID": "OB-037", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "700 BLK S ATLANTIC AV", "MilePost": 11.52, "City": "ORMOND BEACH", "AccessStatus": "4X4 ONLY", "Entry_Date_Time": 1694595749000, "DrivingZone": "YES"}
{"OBJECTID": 72, "AccessName": "ROCKEFELLER DR", "AccessID": "OB-034", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "400 BLK S ATLANTIC AV", "MilePost": 10.9, "City": "ORMOND BEACH", "AccessStatus": "CLOSED - SEASONAL", "Entry_Date_Time": 1694591351000, "DrivingZone": "YES"}
{"OBJECTID": 74, "AccessName": "DUNLAWTON BLVD", "AccessID": "DBS-078", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "3400 BLK S ATLANTIC AV", "MilePost": 20.61, "City": "DAYTONA BEACH SHORES", "AccessStatus": "OPEN", "Entry_Date_Time": 1694601124000, "DrivingZone": "YES"}
{"OBJECTID": 77, "AccessName": "EMILIA AV", "AccessID": "DBS-082", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "3790 BLK S ATLANTIC AV", "MilePost": 21.38, "City": "DAYTONA BEACH SHORES", "AccessStatus": "OPEN", "Entry_Date_Time": 1694601124000, "DrivingZone": "BOTH"}
{"OBJECTID": 84, "AccessName": "VAN AV", "AccessID": "DBS-075", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "3100 BLK S ATLANTIC AV", "MilePost": 19.6, "City": "DAYTONA BEACH SHORES", "AccessStatus": "OPEN", "Entry_Date_Time": 1694601124000, "DrivingZone": "YES"}
{"OBJECTID": 104, "AccessName": "HARVARD DR", "AccessID": "OB-038", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "900 BLK S ATLANTIC AV", "MilePost": 11.72, "City": "ORMOND BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694597536000, "DrivingZone": "YES"}
{"OBJECTID": 106, "AccessName": "WILLIAMS AV", "AccessID": "DB-042", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "2200 BLK N ATLANTIC AV", "MilePost": 12.5, "City": "DAYTONA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694597536000, "DrivingZone": "YES"}
{"OBJECTID": 109, "AccessName": "HARTFORD AV", "AccessID": "DB-043", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "1890 BLK N ATLANTIC AV", "MilePost": 12.76, "City": "DAYTONA BEACH", "AccessStatus": "CLOSED - SEASONAL", "Entry_Date_Time": 1694591351000, "DrivingZone": "YES"}
{"OBJECTID": 138, "AccessName": "CRAWFORD RD", "AccessID": "NS-108", "AccessType": "OPEN VEHICLE RAMP - PASS", "GeneralLoc": "800 BLK N ATLANTIC AV", "MilePost": 2.19, "City": "NEW SMYRNA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694600478000, "DrivingZone": "YES"}
{"OBJECTID": 140, "AccessName": "FLAGLER AV", "AccessID": "NS-110", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "500 BLK FLAGLER AV", "MilePost": 2.57, "City": "NEW SMYRNA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694600478000, "DrivingZone": "YES"}
{"OBJECTID": 144, "AccessName": "CARDINAL DR", "AccessID": "OB-036", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "600 BLK S ATLANTIC AV", "MilePost": 11.27, "City": "ORMOND BEACH", "AccessStatus": "4X4 ONLY", "Entry_Date_Time": 1694595749000, "DrivingZone": "YES"}
{"OBJECTID": 174, "AccessName": "EL PORTAL ST", "AccessID": "DBS-076", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "3200 BLK S ATLANTIC AV", "MilePost": 20.04, "City": "DAYTONA BEACH SHORES", "AccessStatus": "OPEN", "Entry_Date_Time": 1694601124000, "DrivingZone": "YES"} |
https://python.langchain.com/docs/integrations/document_loaders/amazon_textract/ | ## Amazon Textract
> [Amazon Textract](https://docs.aws.amazon.com/managedservices/latest/userguide/textract.html) is a machine learning (ML) service that automatically extracts text, handwriting, and data from scanned documents.
>
> It goes beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables. Today, many companies manually extract data from scanned documents such as PDFs, images, tables, and forms, or through simple OCR software that requires manual configuration (which often must be updated when the form changes). To overcome these manual and expensive processes, `Textract` uses ML to read and process any type of document, accurately extracting text, handwriting, tables, and other data with no manual effort.
This sample demonstrates the use of `Amazon Textract` in combination with LangChain as a DocumentLoader.
`Textract` supports `PDF`, `TIFF`, `PNG` and `JPEG` formats.
`Textract` supports these [document sizes, languages and characters](https://docs.aws.amazon.com/textract/latest/dg/limits-document.html).
```
%pip install --upgrade --quiet boto3 langchain-openai tiktoken python-dotenv
```
```
%pip install --upgrade --quiet "amazon-textract-caller>=0.2.0"
```
## Sample 1[](#sample-1 "Direct link to Sample 1")
The first example uses a local file, which internally will be sent to the Amazon Textract sync API [DetectDocumentText](https://docs.aws.amazon.com/textract/latest/dg/API_DetectDocumentText.html).
Local files or URL endpoints like HTTP:// are limited to one-page documents for Textract. Multi-page documents have to reside on S3. This sample file is a JPEG.
```
from langchain_community.document_loaders import AmazonTextractPDFLoaderloader = AmazonTextractPDFLoader("example_data/alejandro_rosalez_sample-small.jpeg")documents = loader.load()
```
Output from the file
```
[Document(page_content='Patient Information First Name: ALEJANDRO Last Name: ROSALEZ Date of Birth: 10/10/1982 Sex: M Marital Status: MARRIED Email Address: Address: 123 ANY STREET City: ANYTOWN State: CA Zip Code: 12345 Phone: 646-555-0111 Emergency Contact 1: First Name: CARLOS Last Name: SALAZAR Phone: 212-555-0150 Relationship to Patient: BROTHER Emergency Contact 2: First Name: JANE Last Name: DOE Phone: 650-555-0123 Relationship FRIEND to Patient: Did you feel fever or feverish lately? Yes No Are you having shortness of breath? Yes No Do you have a cough? Yes No Did you experience loss of taste or smell? Yes No Where you in contact with any confirmed COVID-19 positive patients? Yes No Did you travel in the past 14 days to any regions affected by COVID-19? Yes No Patient Information First Name: ALEJANDRO Last Name: ROSALEZ Date of Birth: 10/10/1982 Sex: M Marital Status: MARRIED Email Address: Address: 123 ANY STREET City: ANYTOWN State: CA Zip Code: 12345 Phone: 646-555-0111 Emergency Contact 1: First Name: CARLOS Last Name: SALAZAR Phone: 212-555-0150 Relationship to Patient: BROTHER Emergency Contact 2: First Name: JANE Last Name: DOE Phone: 650-555-0123 Relationship FRIEND to Patient: Did you feel fever or feverish lately? Yes No Are you having shortness of breath? Yes No Do you have a cough? Yes No Did you experience loss of taste or smell? Yes No Where you in contact with any confirmed COVID-19 positive patients? Yes No Did you travel in the past 14 days to any regions affected by COVID-19? Yes No ', metadata={'source': 'example_data/alejandro_rosalez_sample-small.jpeg', 'page': 1})]
```
## Sample 2[](#sample-2 "Direct link to Sample 2")
The next sample loads a file from an HTTPS endpoint. It has to be a single-page document, as Amazon Textract requires all multi-page documents to be stored on S3.
```
from langchain_community.document_loaders import AmazonTextractPDFLoaderloader = AmazonTextractPDFLoader( "https://amazon-textract-public-content.s3.us-east-2.amazonaws.com/langchain/alejandro_rosalez_sample_1.jpg")documents = loader.load()
```
```
[Document(page_content='Patient Information First Name: ALEJANDRO Last Name: ROSALEZ Date of Birth: 10/10/1982 Sex: M Marital Status: MARRIED Email Address: Address: 123 ANY STREET City: ANYTOWN State: CA Zip Code: 12345 Phone: 646-555-0111 Emergency Contact 1: First Name: CARLOS Last Name: SALAZAR Phone: 212-555-0150 Relationship to Patient: BROTHER Emergency Contact 2: First Name: JANE Last Name: DOE Phone: 650-555-0123 Relationship FRIEND to Patient: Did you feel fever or feverish lately? Yes No Are you having shortness of breath? Yes No Do you have a cough? Yes No Did you experience loss of taste or smell? Yes No Where you in contact with any confirmed COVID-19 positive patients? Yes No Did you travel in the past 14 days to any regions affected by COVID-19? Yes No Patient Information First Name: ALEJANDRO Last Name: ROSALEZ Date of Birth: 10/10/1982 Sex: M Marital Status: MARRIED Email Address: Address: 123 ANY STREET City: ANYTOWN State: CA Zip Code: 12345 Phone: 646-555-0111 Emergency Contact 1: First Name: CARLOS Last Name: SALAZAR Phone: 212-555-0150 Relationship to Patient: BROTHER Emergency Contact 2: First Name: JANE Last Name: DOE Phone: 650-555-0123 Relationship FRIEND to Patient: Did you feel fever or feverish lately? Yes No Are you having shortness of breath? Yes No Do you have a cough? Yes No Did you experience loss of taste or smell? Yes No Where you in contact with any confirmed COVID-19 positive patients? Yes No Did you travel in the past 14 days to any regions affected by COVID-19? Yes No ', metadata={'source': 'example_data/alejandro_rosalez_sample-small.jpeg', 'page': 1})]
```
## Sample 3[](#sample-3 "Direct link to Sample 3")
Processing a multi-page document requires the document to be on S3. The sample document resides in a bucket in us-east-2 and Textract needs to be called in that same region to be successful, so we set the region\_name on the client and pass it to the loader to ensure Textract is called from us-east-2. Alternatively, you could run your notebook in us-east-2, set the AWS\_DEFAULT\_REGION environment variable to us-east-2, or, when running in a different environment, pass in a boto3 Textract client with that region name as in the cell below.
```
import boto3textract_client = boto3.client("textract", region_name="us-east-2")file_path = "s3://amazon-textract-public-content/langchain/layout-parser-paper.pdf"loader = AmazonTextractPDFLoader(file_path, client=textract_client)documents = loader.load()
```
Now getting the number of pages to validate the response (printing out the full response would be quite long…). We expect 16 pages.
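The validation cell itself is not shown on this page; a minimal sketch, assuming the `documents` list loaded in the cell above, could be:

```
# Each page of the PDF is returned as one Document.
len(documents)
```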
## Sample 4[](#sample-4 "Direct link to Sample 4")
You have the option to pass an additional parameter called `linearization_config` to the AmazonTextractPDFLoader, which determines how the text output will be linearized by the parser after Textract runs.
```
from langchain_community.document_loaders import AmazonTextractPDFLoaderfrom textractor.data.text_linearization_config import TextLinearizationConfigloader = AmazonTextractPDFLoader( "s3://amazon-textract-public-content/langchain/layout-parser-paper.pdf", linearization_config=TextLinearizationConfig( hide_header_layout=True, hide_footer_layout=True, hide_figure_layout=True, ),)documents = loader.load()
```
The AmazonTextractPDFLoader can be used in a chain the same way the other loaders are used. Textract itself does have a [Query feature](https://docs.aws.amazon.com/textract/latest/dg/API_Query.html), which offers similar functionality to the QA chain in this sample, which is worth checking out as well.
```
# You can store your OPENAI_API_KEY in a .env file as well# import os# from dotenv import load_dotenv# load_dotenv()
```
```
# Or set the OpenAI key in the environment directlyimport osos.environ["OPENAI_API_KEY"] = "your-OpenAI-API-key"
```
```
from langchain.chains.question_answering import load_qa_chainfrom langchain_openai import OpenAIchain = load_qa_chain(llm=OpenAI(), chain_type="map_reduce")query = ["Who are the authors?"]chain.run(input_documents=documents, question=query)
```
```
' The authors are Zejiang Shen, Ruochen Zhang, Melissa Dell, Benjamin Charles Germain Lee, Jacob Carlson, Weining Li, Gardner, M., Grus, J., Neumann, M., Tafjord, O., Dasigi, P., Liu, N., Peters, M., Schmitz, M., Zettlemoyer, L., Lukasz Garncarek, Powalski, R., Stanislawek, T., Topolski, B., Halama, P., Gralinski, F., Graves, A., Fernández, S., Gomez, F., Schmidhuber, J., Harley, A.W., Ufkes, A., Derpanis, K.G., He, K., Gkioxari, G., Dollár, P., Girshick, R., He, K., Zhang, X., Ren, S., Sun, J., Kay, A., Lamiroy, B., Lopresti, D., Mears, J., Jakeway, E., Ferriter, M., Adams, C., Yarasavage, N., Thomas, D., Zwaard, K., Li, M., Cui, L., Huang,'
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:52.141Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/amazon_textract/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/amazon_textract/",
"description": "[Amazon",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4409",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"amazon_textract\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:51 GMT",
"etag": "W/\"8b51995ea0d45f3f2eb3ac3b8e724529\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::cxr2h-1713753531240-7edebd22be4f"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/amazon_textract/",
"property": "og:url"
},
{
"content": "Amazon Textract | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "[Amazon",
"property": "og:description"
}
],
"title": "Amazon Textract | 🦜️🔗 LangChain"
} | Amazon Textract
Amazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, and data from scanned documents.
It goes beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables. Today, many companies manually extract data from scanned documents such as PDFs, images, tables, and forms, or through simple OCR software that requires manual configuration (which often must be updated when the form changes). To overcome these manual and expensive processes, Textract uses ML to read and process any type of document, accurately extracting text, handwriting, tables, and other data with no manual effort.
This sample demonstrates the use of Amazon Textract in combination with LangChain as a DocumentLoader.
Textract supports PDF, TIFF, PNG and JPEG formats.
Textract supports these document sizes, languages and characters.
%pip install --upgrade --quiet boto3 langchain-openai tiktoken python-dotenv
%pip install --upgrade --quiet "amazon-textract-caller>=0.2.0"
Sample 1
The first example uses a local file, which internally will be sent to the Amazon Textract sync API DetectDocumentText.
Local files or URL endpoints like HTTP:// are limited to one-page documents for Textract. Multi-page documents have to reside on S3. This sample file is a JPEG.
from langchain_community.document_loaders import AmazonTextractPDFLoader
loader = AmazonTextractPDFLoader("example_data/alejandro_rosalez_sample-small.jpeg")
documents = loader.load()
Output from the file
[Document(page_content='Patient Information First Name: ALEJANDRO Last Name: ROSALEZ Date of Birth: 10/10/1982 Sex: M Marital Status: MARRIED Email Address: Address: 123 ANY STREET City: ANYTOWN State: CA Zip Code: 12345 Phone: 646-555-0111 Emergency Contact 1: First Name: CARLOS Last Name: SALAZAR Phone: 212-555-0150 Relationship to Patient: BROTHER Emergency Contact 2: First Name: JANE Last Name: DOE Phone: 650-555-0123 Relationship FRIEND to Patient: Did you feel fever or feverish lately? Yes No Are you having shortness of breath? Yes No Do you have a cough? Yes No Did you experience loss of taste or smell? Yes No Where you in contact with any confirmed COVID-19 positive patients? Yes No Did you travel in the past 14 days to any regions affected by COVID-19? Yes No Patient Information First Name: ALEJANDRO Last Name: ROSALEZ Date of Birth: 10/10/1982 Sex: M Marital Status: MARRIED Email Address: Address: 123 ANY STREET City: ANYTOWN State: CA Zip Code: 12345 Phone: 646-555-0111 Emergency Contact 1: First Name: CARLOS Last Name: SALAZAR Phone: 212-555-0150 Relationship to Patient: BROTHER Emergency Contact 2: First Name: JANE Last Name: DOE Phone: 650-555-0123 Relationship FRIEND to Patient: Did you feel fever or feverish lately? Yes No Are you having shortness of breath? Yes No Do you have a cough? Yes No Did you experience loss of taste or smell? Yes No Where you in contact with any confirmed COVID-19 positive patients? Yes No Did you travel in the past 14 days to any regions affected by COVID-19? Yes No ', metadata={'source': 'example_data/alejandro_rosalez_sample-small.jpeg', 'page': 1})]
Sample 2
The next sample loads a file from an HTTPS endpoint. It has to be a single-page document, as Amazon Textract requires all multi-page documents to be stored on S3.
from langchain_community.document_loaders import AmazonTextractPDFLoader
loader = AmazonTextractPDFLoader(
"https://amazon-textract-public-content.s3.us-east-2.amazonaws.com/langchain/alejandro_rosalez_sample_1.jpg"
)
documents = loader.load()
[Document(page_content='Patient Information First Name: ALEJANDRO Last Name: ROSALEZ Date of Birth: 10/10/1982 Sex: M Marital Status: MARRIED Email Address: Address: 123 ANY STREET City: ANYTOWN State: CA Zip Code: 12345 Phone: 646-555-0111 Emergency Contact 1: First Name: CARLOS Last Name: SALAZAR Phone: 212-555-0150 Relationship to Patient: BROTHER Emergency Contact 2: First Name: JANE Last Name: DOE Phone: 650-555-0123 Relationship FRIEND to Patient: Did you feel fever or feverish lately? Yes No Are you having shortness of breath? Yes No Do you have a cough? Yes No Did you experience loss of taste or smell? Yes No Where you in contact with any confirmed COVID-19 positive patients? Yes No Did you travel in the past 14 days to any regions affected by COVID-19? Yes No Patient Information First Name: ALEJANDRO Last Name: ROSALEZ Date of Birth: 10/10/1982 Sex: M Marital Status: MARRIED Email Address: Address: 123 ANY STREET City: ANYTOWN State: CA Zip Code: 12345 Phone: 646-555-0111 Emergency Contact 1: First Name: CARLOS Last Name: SALAZAR Phone: 212-555-0150 Relationship to Patient: BROTHER Emergency Contact 2: First Name: JANE Last Name: DOE Phone: 650-555-0123 Relationship FRIEND to Patient: Did you feel fever or feverish lately? Yes No Are you having shortness of breath? Yes No Do you have a cough? Yes No Did you experience loss of taste or smell? Yes No Where you in contact with any confirmed COVID-19 positive patients? Yes No Did you travel in the past 14 days to any regions affected by COVID-19? Yes No ', metadata={'source': 'example_data/alejandro_rosalez_sample-small.jpeg', 'page': 1})]
Sample 3
Processing a multi-page document requires the document to be on S3. The sample document resides in a bucket in us-east-2 and Textract needs to be called in that same region to be successful, so we set the region_name on the client and pass it to the loader to ensure Textract is called from us-east-2. Alternatively, you could run your notebook in us-east-2, set the AWS_DEFAULT_REGION environment variable to us-east-2, or, when running in a different environment, pass in a boto3 Textract client with that region name as in the cell below.
import boto3
textract_client = boto3.client("textract", region_name="us-east-2")
file_path = "s3://amazon-textract-public-content/langchain/layout-parser-paper.pdf"
loader = AmazonTextractPDFLoader(file_path, client=textract_client)
documents = loader.load()
Now getting the number of pages to validate the response (printing out the full response would be quite long…). We expect 16 pages.
Sample 4
You have the option to pass an additional parameter called linearization_config to the AmazonTextractPDFLoader, which determines how the text output will be linearized by the parser after Textract runs.
from langchain_community.document_loaders import AmazonTextractPDFLoader
from textractor.data.text_linearization_config import TextLinearizationConfig
loader = AmazonTextractPDFLoader(
"s3://amazon-textract-public-content/langchain/layout-parser-paper.pdf",
linearization_config=TextLinearizationConfig(
hide_header_layout=True,
hide_footer_layout=True,
hide_figure_layout=True,
),
)
documents = loader.load()
The AmazonTextractPDFLoader can be used in a chain the same way the other loaders are used. Textract itself does have a Query feature, which offers similar functionality to the QA chain in this sample, which is worth checking out as well.
# You can store your OPENAI_API_KEY in a .env file as well
# import os
# from dotenv import load_dotenv
# load_dotenv()
# Or set the OpenAI key in the environment directly
import os
os.environ["OPENAI_API_KEY"] = "your-OpenAI-API-key"
from langchain.chains.question_answering import load_qa_chain
from langchain_openai import OpenAI
chain = load_qa_chain(llm=OpenAI(), chain_type="map_reduce")
query = ["Who are the authors?"]
chain.run(input_documents=documents, question=query)
' The authors are Zejiang Shen, Ruochen Zhang, Melissa Dell, Benjamin Charles Germain Lee, Jacob Carlson, Weining Li, Gardner, M., Grus, J., Neumann, M., Tafjord, O., Dasigi, P., Liu, N., Peters, M., Schmitz, M., Zettlemoyer, L., Lukasz Garncarek, Powalski, R., Stanislawek, T., Topolski, B., Halama, P., Gralinski, F., Graves, A., Fernández, S., Gomez, F., Schmidhuber, J., Harley, A.W., Ufkes, A., Derpanis, K.G., He, K., Gkioxari, G., Dollár, P., Girshick, R., He, K., Zhang, X., Ren, S., Sun, J., Kay, A., Lamiroy, B., Lopresti, D., Mears, J., Jakeway, E., Ferriter, M., Adams, C., Yarasavage, N., Thomas, D., Zwaard, K., Li, M., Cui, L., Huang,' |
https://python.langchain.com/docs/integrations/document_loaders/apify_dataset/ | ## Apify Dataset
> [Apify Dataset](https://docs.apify.com/platform/storage/dataset) is a scalable append-only storage with sequential access built for storing structured web scraping results, such as a list of products or Google SERPs, and then exporting them to various formats like JSON, CSV, or Excel. Datasets are mainly used to save results of [Apify Actors](https://apify.com/store)—serverless cloud programs for various web scraping, crawling, and data extraction use cases.
This notebook shows how to load Apify datasets to LangChain.
## Prerequisites[](#prerequisites "Direct link to Prerequisites")
You need to have an existing dataset on the Apify platform. If you don’t have one, please first check out [this notebook](https://python.langchain.com/docs/integrations/tools/apify/) on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs.
```
%pip install --upgrade --quiet apify-client
```
First, import `ApifyDatasetLoader` into your source code:
```
from langchain_community.document_loaders import ApifyDatasetLoaderfrom langchain_core.documents import Document
```
Then provide a function that maps Apify dataset record fields to LangChain `Document` format.
For example, if your dataset items are structured like this:
```
{ "url": "https://apify.com", "text": "Apify is the best web scraping and automation platform."}
```
The mapping function in the code below will convert them to LangChain `Document` format, so that you can use them further with any LLM model (e.g. for question answering).
```
loader = ApifyDatasetLoader( dataset_id="your-dataset-id", dataset_mapping_function=lambda dataset_item: Document( page_content=dataset_item["text"], metadata={"source": dataset_item["url"]} ),)
```
## An example with question answering[](#an-example-with-question-answering "Direct link to An example with question answering")
In this example, we use data from a dataset to answer a question.
```
from langchain.indexes import VectorstoreIndexCreatorfrom langchain_community.docstore.document import Documentfrom langchain_community.document_loaders import ApifyDatasetLoader
```
```
loader = ApifyDatasetLoader( dataset_id="your-dataset-id", dataset_mapping_function=lambda item: Document( page_content=item["text"] or "", metadata={"source": item["url"]} ),)
```
```
index = VectorstoreIndexCreator().from_loaders([loader])
```
```
query = "What is Apify?"result = index.query_with_sources(query)
```
```
print(result["answer"])print(result["sources"])
```
```
Apify is a platform for developing, running, and sharing serverless cloud programs. It enables users to create web scraping and automation tools and publish them on the Apify platform.https://docs.apify.com/platform/actors, https://docs.apify.com/platform/actors/running/actors-in-store, https://docs.apify.com/platform/security, https://docs.apify.com/platform/actors/examples
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:52.394Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/apify_dataset/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/apify_dataset/",
"description": "Apify Dataset is a",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4369",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"apify_dataset\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:51 GMT",
"etag": "W/\"74d4b0553b54badd90f9ec6e1c67faad\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::kbqsj-1713753531552-85025c1c6c6e"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/apify_dataset/",
"property": "og:url"
},
{
"content": "Apify Dataset | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Apify Dataset is a",
"property": "og:description"
}
],
"title": "Apify Dataset | 🦜️🔗 LangChain"
} | Apify Dataset
Apify Dataset is a scalable append-only storage with sequential access built for storing structured web scraping results, such as a list of products or Google SERPs, and then exporting them to various formats like JSON, CSV, or Excel. Datasets are mainly used to save results of Apify Actors—serverless cloud programs for various web scraping, crawling, and data extraction use cases.
This notebook shows how to load Apify datasets to LangChain.
Prerequisites
You need to have an existing dataset on the Apify platform. If you don’t have one, please first check out this notebook on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs.
%pip install --upgrade --quiet apify-client
First, import ApifyDatasetLoader into your source code:
from langchain_community.document_loaders import ApifyDatasetLoader
from langchain_core.documents import Document
Then provide a function that maps Apify dataset record fields to LangChain Document format.
For example, if your dataset items are structured like this:
{
"url": "https://apify.com",
"text": "Apify is the best web scraping and automation platform."
}
The mapping function in the code below will convert them to LangChain Document format, so that you can use them further with any LLM model (e.g. for question answering).
loader = ApifyDatasetLoader(
dataset_id="your-dataset-id",
dataset_mapping_function=lambda dataset_item: Document(
page_content=dataset_item["text"], metadata={"source": dataset_item["url"]}
),
)
An example with question answering
In this example, we use data from a dataset to answer a question.
from langchain.indexes import VectorstoreIndexCreator
from langchain_community.docstore.document import Document
from langchain_community.document_loaders import ApifyDatasetLoader
loader = ApifyDatasetLoader(
dataset_id="your-dataset-id",
dataset_mapping_function=lambda item: Document(
page_content=item["text"] or "", metadata={"source": item["url"]}
),
)
index = VectorstoreIndexCreator().from_loaders([loader])
query = "What is Apify?"
result = index.query_with_sources(query)
print(result["answer"])
print(result["sources"])
Apify is a platform for developing, running, and sharing serverless cloud programs. It enables users to create web scraping and automation tools and publish them on the Apify platform.
https://docs.apify.com/platform/actors, https://docs.apify.com/platform/actors/running/actors-in-store, https://docs.apify.com/platform/security, https://docs.apify.com/platform/actors/examples |
https://python.langchain.com/docs/integrations/document_loaders/assemblyai/ | ## AssemblyAI Audio Transcripts
The `AssemblyAIAudioTranscriptLoader` allows you to transcribe audio files with the [AssemblyAI API](https://www.assemblyai.com/) and loads the transcribed text into documents.
To use it, you should have the `assemblyai` python package installed, and the environment variable `ASSEMBLYAI_API_KEY` set with your API key. Alternatively, the API key can also be passed as an argument.
More info about AssemblyAI:
* [Website](https://www.assemblyai.com/)
* [Get a Free API key](https://www.assemblyai.com/dashboard/signup)
* [AssemblyAI API Docs](https://www.assemblyai.com/docs)
## Installation[](#installation "Direct link to Installation")
First, you need to install the `assemblyai` python package.
You can find more info about it inside the [assemblyai-python-sdk GitHub repo](https://github.com/AssemblyAI/assemblyai-python-sdk).
```
%pip install --upgrade --quiet assemblyai
```
## Example[](#example "Direct link to Example")
The `AssemblyAIAudioTranscriptLoader` needs at least the `file_path` argument. Audio files can be specified as a URL or a local file path.
```
from langchain_community.document_loaders import AssemblyAIAudioTranscriptLoaderaudio_file = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"# or a local file path: audio_file = "./nbc.mp3"loader = AssemblyAIAudioTranscriptLoader(file_path=audio_file)docs = loader.load()
```
Note: Calling `loader.load()` blocks until the transcription is finished.
The transcribed text is available in the `page_content`:
```
"Load time, a new president and new congressional makeup. Same old ..."
```
The `metadata` contains the full JSON response with more meta information:
```
{'language_code': <LanguageCode.en_us: 'en_us'>, 'audio_url': 'https://storage.googleapis.com/aai-docs-samples/nbc.mp3', 'punctuate': True, 'format_text': True, ...}
```
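As a quick illustration (a minimal sketch assuming the `docs` list returned by `loader.load()` above), both fields can be read directly from the first document:

```
doc = docs[0]
print(doc.page_content[:100])     # start of the transcript text
print(doc.metadata["audio_url"])  # one of the metadata fields shown above
```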
## Transcript Formats[](#transcript-formats "Direct link to Transcript Formats")
You can specify the `transcript_format` argument for different formats.
Depending on the format, one or more documents are returned. These are the different `TranscriptFormat` options:
* `TEXT`: One document with the transcription text
* `SENTENCES`: Multiple documents, splits the transcription by each sentence
* `PARAGRAPHS`: Multiple documents, splits the transcription by each paragraph
* `SUBTITLES_SRT`: One document with the transcript exported in SRT subtitles format
* `SUBTITLES_VTT`: One document with the transcript exported in VTT subtitles format
```
from langchain_community.document_loaders.assemblyai import TranscriptFormatloader = AssemblyAIAudioTranscriptLoader( file_path="./your_file.mp3", transcript_format=TranscriptFormat.SENTENCES,)docs = loader.load()
```
## Transcription Config[](#transcription-config "Direct link to Transcription Config")
You can also specify the `config` argument to use different audio intelligence models.
Visit the [AssemblyAI API Documentation](https://www.assemblyai.com/docs) to get an overview of all available models!
```
import assemblyai as aaiconfig = aai.TranscriptionConfig( speaker_labels=True, auto_chapters=True, entity_detection=True)loader = AssemblyAIAudioTranscriptLoader(file_path="./your_file.mp3", config=config)
```
## Pass the API Key as argument[](#pass-the-api-key-as-argument "Direct link to Pass the API Key as argument")
In addition to setting the API key as the environment variable `ASSEMBLYAI_API_KEY`, it is also possible to pass it as an argument.
```
loader = AssemblyAIAudioTranscriptLoader( file_path="./your_file.mp3", api_key="YOUR_KEY")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:52.839Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/assemblyai/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/assemblyai/",
"description": "The AssemblyAIAudioTranscriptLoader allows to transcribe audio files",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "5183",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"assemblyai\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:52 GMT",
"etag": "W/\"dbd2a3546b3f8830d0356f24c0c50d97\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::fm5gj-1713753532784-32caa23c3557"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/assemblyai/",
"property": "og:url"
},
{
"content": "AssemblyAI Audio Transcripts | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "The AssemblyAIAudioTranscriptLoader allows to transcribe audio files",
"property": "og:description"
}
],
"title": "AssemblyAI Audio Transcripts | 🦜️🔗 LangChain"
} | AssemblyAI Audio Transcripts
The AssemblyAIAudioTranscriptLoader allows you to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents.
To use it, you should have the assemblyai python package installed, and the environment variable ASSEMBLYAI_API_KEY set with your API key. Alternatively, the API key can also be passed as an argument.
More info about AssemblyAI:
Website
Get a Free API key
AssemblyAI API Docs
Installation
First, you need to install the assemblyai python package.
You can find more info about it inside the assemblyai-python-sdk GitHub repo.
%pip install --upgrade --quiet assemblyai
Example
The AssemblyAIAudioTranscriptLoader needs at least the file_path argument. Audio files can be specified as a URL or a local file path.
from langchain_community.document_loaders import AssemblyAIAudioTranscriptLoader
audio_file = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
# or a local file path: audio_file = "./nbc.mp3"
loader = AssemblyAIAudioTranscriptLoader(file_path=audio_file)
docs = loader.load()
Note: Calling loader.load() blocks until the transcription is finished.
The transcribed text is available in the page_content:
"Load time, a new president and new congressional makeup. Same old ..."
The metadata contains the full JSON response with more meta information:
{'language_code': <LanguageCode.en_us: 'en_us'>,
'audio_url': 'https://storage.googleapis.com/aai-docs-samples/nbc.mp3',
'punctuate': True,
'format_text': True,
...
}
Transcript Formats
You can specify the transcript_format argument for different formats.
Depending on the format, one or more documents are returned. These are the different TranscriptFormat options:
TEXT: One document with the transcription text
SENTENCES: Multiple documents, splits the transcription by each sentence
PARAGRAPHS: Multiple documents, splits the transcription by each paragraph
SUBTITLES_SRT: One document with the transcript exported in SRT subtitles format
SUBTITLES_VTT: One document with the transcript exported in VTT subtitles format
from langchain_community.document_loaders.assemblyai import TranscriptFormat
loader = AssemblyAIAudioTranscriptLoader(
file_path="./your_file.mp3",
transcript_format=TranscriptFormat.SENTENCES,
)
docs = loader.load()
Transcription Config
You can also specify the config argument to use different audio intelligence models.
Visit the AssemblyAI API Documentation to get an overview of all available models!
import assemblyai as aai
config = aai.TranscriptionConfig(
speaker_labels=True, auto_chapters=True, entity_detection=True
)
loader = AssemblyAIAudioTranscriptLoader(file_path="./your_file.mp3", config=config)
Pass the API Key as argument
In addition to setting the API key as the environment variable ASSEMBLYAI_API_KEY, it is also possible to pass it as an argument.
loader = AssemblyAIAudioTranscriptLoader(
file_path="./your_file.mp3", api_key="YOUR_KEY"
) |
https://python.langchain.com/docs/integrations/document_loaders/astradb/ | ## AstraDB
DataStax [Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless vector-capable database built on Cassandra and made conveniently available through an easy-to-use JSON API.
## Overview[](#overview "Direct link to Overview")
The AstraDB Document Loader returns a list of LangChain Documents from an AstraDB database.
The Loader takes the following parameters:
* `api_endpoint`: AstraDB API endpoint. Looks like `https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com`
* `token`: AstraDB token. Looks like `AstraCS:6gBhNmsk135....`
* `collection_name` : AstraDB collection name
* `namespace`: (Optional) AstraDB namespace
* `filter_criteria`: (Optional) Filter used in the find query
* `projection`: (Optional) Projection used in the find query
* `find_options`: (Optional) Options used in the find query
* `nb_prefetched`: (Optional) Number of documents pre-fetched by the loader
* `extraction_function`: (Optional) A function to convert the AstraDB document to the LangChain `page_content` string. Defaults to `json.dumps`
The following metadata is set to the LangChain Documents metadata output:
```
{ metadata : { "namespace": "...", "api_endpoint": "...", "collection": "..." }}
```
## Load documents with the Document Loader[](#load-documents-with-the-document-loader "Direct link to Load documents with the Document Loader")
```
from langchain_community.document_loaders import AstraDBLoader
```
```
from getpass import getpassASTRA_DB_API_ENDPOINT = input("ASTRA_DB_API_ENDPOINT = ")ASTRA_DB_APPLICATION_TOKEN = getpass("ASTRA_DB_APPLICATION_TOKEN = ")
```
```
loader = AstraDBLoader( api_endpoint=ASTRA_DB_API_ENDPOINT, token=ASTRA_DB_APPLICATION_TOKEN, collection_name="movie_reviews", projection={"title": 1, "reviewtext": 1}, find_options={"limit": 10},)
```
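The document shown below is produced by running the loader; a minimal sketch, assuming the `loader` defined above, would be:

```
# Run the find query against the collection and inspect the first result.
docs = loader.load()
docs[0]
```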
```
Document(page_content='{"_id": "659bdffa16cbc4586b11a423", "title": "Dangerous Men", "reviewtext": "\\"Dangerous Men,\\" the picture\'s production notes inform, took 26 years to reach the big screen. After having seen it, I wonder: What was the rush?"}', metadata={'namespace': 'default_keyspace', 'api_endpoint': 'https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com', 'collection': 'movie_reviews'})
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:53.008Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/astradb/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/astradb/",
"description": "DataStax Astra DB",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3441",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"astradb\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:52 GMT",
"etag": "W/\"ad302396de5d428e953f77ebacfa613e\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::dvhs9-1713753532783-6705562ca2c7"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/astradb/",
"property": "og:url"
},
{
"content": "AstraDB | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "DataStax Astra DB",
"property": "og:description"
}
],
"title": "AstraDB | 🦜️🔗 LangChain"
} | AstraDB
DataStax Astra DB is a serverless vector-capable database built on Cassandra and made conveniently available through an easy-to-use JSON API.
Overview
The AstraDB Document Loader returns a list of Langchain Documents from an AstraDB database.
The Loader takes the following parameters:
api_endpoint: AstraDB API endpoint. Looks like https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com
token: AstraDB token. Looks like AstraCS:6gBhNmsk135....
collection_name : AstraDB collection name
namespace: (Optional) AstraDB namespace
filter_criteria: (Optional) Filter used in the find query
projection: (Optional) Projection used in the find query
find_options: (Optional) Options used in the find query
nb_prefetched: (Optional) Number of documents pre-fetched by the loader
extraction_function: (Optional) A function to convert the AstraDB document to the LangChain page_content string. Defaults to json.dumps
The following metadata is set to the LangChain Documents metadata output:
{
metadata : {
"namespace": "...",
"api_endpoint": "...",
"collection": "..."
}
}
Load documents with the Document Loader
from langchain_community.document_loaders import AstraDBLoader
from getpass import getpass
ASTRA_DB_API_ENDPOINT = input("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass("ASTRA_DB_APPLICATION_TOKEN = ")
loader = AstraDBLoader(
api_endpoint=ASTRA_DB_API_ENDPOINT,
token=ASTRA_DB_APPLICATION_TOKEN,
collection_name="movie_reviews",
projection={"title": 1, "reviewtext": 1},
find_options={"limit": 10},
)
Document(page_content='{"_id": "659bdffa16cbc4586b11a423", "title": "Dangerous Men", "reviewtext": "\\"Dangerous Men,\\" the picture\'s production notes inform, took 26 years to reach the big screen. After having seen it, I wonder: What was the rush?"}', metadata={'namespace': 'default_keyspace', 'api_endpoint': 'https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com', 'collection': 'movie_reviews'}) |
https://python.langchain.com/docs/integrations/document_loaders/athena/ | [Amazon Athena](https://aws.amazon.com/athena/) is a serverless, interactive analytics service built on open-source frameworks, supporting open-table and file formats. `Athena` provides a simplified, flexible way to analyze petabytes of data where it lives. Analyze data or build applications from an Amazon Simple Storage Service (S3) data lake and 30 data sources, including on-premises data sources or other cloud systems using SQL or Python. `Athena` is built on open-source `Trino` and `Presto` engines and `Apache Spark` frameworks, with no provisioning or configuration effort required.
This notebook goes over how to load documents from `AWS Athena`.
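The cells below use `AthenaLoader` without showing its import; assuming a standard `langchain-community` installation, it can be imported as follows:

```
# AthenaLoader is provided by the langchain-community package.
from langchain_community.document_loaders.athena import AthenaLoader
```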
```
database_name = "my_database"s3_output_path = "s3://my_bucket/query_results/"query = "SELECT * FROM my_table"profile_name = "my_profile"loader = AthenaLoader( query=query, database=database_name, s3_output_uri=s3_output_path, profile_name=profile_name,)documents = loader.load()print(documents)
```
```
database_name = "my_database"s3_output_path = "s3://my_bucket/query_results/"query = "SELECT * FROM my_table"profile_name = "my_profile"metadata_columns = ["_row", "_created_at"]loader = AthenaLoader( query=query, database=database_name, s3_output_uri=s3_output_path, profile_name=profile_name, metadata_columns=metadata_columns,)documents = loader.load()print(documents)
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:53.353Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/athena/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/athena/",
"description": "Amazon Athena is a serverless,",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3441",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"athena\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:53 GMT",
"etag": "W/\"1bb984a016f6099db56773bd542308b7\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::kt9bz-1713753533248-630b9e16c647"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/athena/",
"property": "og:url"
},
{
"content": "Athena | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Amazon Athena is a serverless,",
"property": "og:description"
}
],
"title": "Athena | 🦜️🔗 LangChain"
} | Amazon Athena is a serverless, interactive analytics service built on open-source frameworks, supporting open-table and file formats. Athena provides a simplified, flexible way to analyze petabytes of data where it lives. Analyze data or build applications from an Amazon Simple Storage Service (S3) data lake and 30 data sources, including on-premises data sources or other cloud systems using SQL or Python. Athena is built on open-source Trino and Presto engines and Apache Spark frameworks, with no provisioning or configuration effort required.
This notebook goes over how to load documents from AWS Athena.
database_name = "my_database"
s3_output_path = "s3://my_bucket/query_results/"
query = "SELECT * FROM my_table"
profile_name = "my_profile"
loader = AthenaLoader(
query=query,
database=database_name,
s3_output_uri=s3_output_path,
profile_name=profile_name,
)
documents = loader.load()
print(documents)
database_name = "my_database"
s3_output_path = "s3://my_bucket/query_results/"
query = "SELECT * FROM my_table"
profile_name = "my_profile"
metadata_columns = ["_row", "_created_at"]
loader = AthenaLoader(
query=query,
database=database_name,
s3_output_uri=s3_output_path,
profile_name=profile_name,
metadata_columns=metadata_columns,
)
documents = loader.load()
print(documents) |
https://python.langchain.com/docs/integrations/document_loaders/async_chromium/ | ## Async Chromium
Chromium is one of the browsers supported by Playwright, a library used to control browser automation.
By running `p.chromium.launch(headless=True)`, we are launching a headless instance of Chromium.
Headless mode means that the browser is running without a graphical user interface.
`AsyncChromiumLoader` loads the page, and then we use `Html2TextTransformer` to transform to text.
```
%pip install --upgrade --quiet playwright beautifulsoup4!playwright install
```
```
from langchain_community.document_loaders import AsyncChromiumLoaderurls = ["https://www.wsj.com"]loader = AsyncChromiumLoader(urls)docs = loader.load()docs[0].page_content[0:100]
```
```
'<!DOCTYPE html><html lang="en"><head><script src="https://s0.2mdn.net/instream/video/client.js" asyn'
```
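Under the hood, each URL is fetched by a headless Chromium instance. As an illustrative sketch only (not the loader's actual implementation), the equivalent Playwright call looks roughly like this:

```
import asyncio

from playwright.async_api import async_playwright


async def fetch_html(url: str) -> str:
    # Launch headless Chromium, open the page, and return its raw HTML.
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        page = await browser.new_page()
        await page.goto(url)
        html = await page.content()
        await browser.close()
        return html


# html = asyncio.run(fetch_html("https://www.wsj.com"))
```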
If you are using Jupyter notebooks, you might need to apply `nest_asyncio` before loading the documents.
```
!pip install nest-asyncioimport nest_asyncionest_asyncio.apply()
```
```
from langchain_community.document_transformers import Html2TextTransformerhtml2text = Html2TextTransformer()docs_transformed = html2text.transform_documents(docs)docs_transformed[0].page_content[0:500]
```
```
"Skip to Main ContentSkip to SearchSkip to... Select * Top News * What's News *\nFeatured Stories * Retirement * Life & Arts * Hip-Hop * Sports * Video *\nEconomy * Real Estate * Sports * CMO * CIO * CFO * Risk & Compliance *\nLogistics Report * Sustainable Business * Heard on the Street * Barron’s *\nMarketWatch * Mansion Global * Penta * Opinion * Journal Reports * Sponsored\nOffers Explore Our Brands * WSJ * * * * * Barron's * * * * * MarketWatch * * *\n* * IBD # The Wall Street Journal SubscribeSig"
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:53.492Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/async_chromium/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/async_chromium/",
"description": "Chromium is one of the browsers supported by Playwright, a library used",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3441",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"async_chromium\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:53 GMT",
"etag": "W/\"732be011f1345aa127df766c811abfa9\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::dvqkj-1713753533258-9fc2bd2d8701"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/async_chromium/",
"property": "og:url"
},
{
"content": "Async Chromium | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Chromium is one of the browsers supported by Playwright, a library used",
"property": "og:description"
}
],
"title": "Async Chromium | 🦜️🔗 LangChain"
} | Async Chromium
Chromium is one of the browsers supported by Playwright, a library used to control browser automation.
By running p.chromium.launch(headless=True), we are launching a headless instance of Chromium.
Headless mode means that the browser is running without a graphical user interface.
AsyncChromiumLoader loads the page, and then we use Html2TextTransformer to transform to text.
%pip install --upgrade --quiet playwright beautifulsoup4
!playwright install
from langchain_community.document_loaders import AsyncChromiumLoader
urls = ["https://www.wsj.com"]
loader = AsyncChromiumLoader(urls)
docs = loader.load()
docs[0].page_content[0:100]
'<!DOCTYPE html><html lang="en"><head><script src="https://s0.2mdn.net/instream/video/client.js" asyn'
If you are using Jupyter notebooks, you might need to apply nest_asyncio before loading the documents.
!pip install nest-asyncio
import nest_asyncio
nest_asyncio.apply()
from langchain_community.document_transformers import Html2TextTransformer
html2text = Html2TextTransformer()
docs_transformed = html2text.transform_documents(docs)
docs_transformed[0].page_content[0:500]
"Skip to Main ContentSkip to SearchSkip to... Select * Top News * What's News *\nFeatured Stories * Retirement * Life & Arts * Hip-Hop * Sports * Video *\nEconomy * Real Estate * Sports * CMO * CIO * CFO * Risk & Compliance *\nLogistics Report * Sustainable Business * Heard on the Street * Barron’s *\nMarketWatch * Mansion Global * Penta * Opinion * Journal Reports * Sponsored\nOffers Explore Our Brands * WSJ * * * * * Barron's * * * * * MarketWatch * * *\n* * IBD # The Wall Street Journal SubscribeSig" |
https://python.langchain.com/docs/integrations/document_loaders/aws_s3_file/ | ## AWS S3 File
> [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html) is an object storage service.
> [AWS S3 Buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html)
This covers how to load document objects from an `AWS S3 File` object.
```
from langchain_community.document_loaders import S3FileLoader
```
```
%pip install --upgrade --quiet boto3
```
```
loader = S3FileLoader("testing-hwc", "fake.docx")
```
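The document list shown below comes from calling the loader; a minimal sketch, assuming the `loader` just created, would be:

```
# Download fake.docx from the testing-hwc bucket and parse it into Documents.
loader.load()
```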
```
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 's3://testing-hwc/fake.docx'}, lookup_index=0)]
```
## Configuring the AWS Boto3 client[](#configuring-the-aws-boto3-client "Direct link to Configuring the AWS Boto3 client")
You can configure the AWS [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) client by passing named arguments when creating the S3DirectoryLoader. This is useful for instance when AWS credentials can’t be set as environment variables. See the [list of parameters](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html#boto3.session.Session) that can be configured.
```
loader = S3FileLoader( "testing-hwc", "fake.docx", aws_access_key_id="xxxx", aws_secret_access_key="yyyy")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:53.811Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/aws_s3_file/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/aws_s3_file/",
"description": "[Amazon Simple Storage Service (Amazon",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "8630",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"aws_s3_file\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:53 GMT",
"etag": "W/\"f6766cd149bcf0ccb7d6e075a0c630e6\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::pxv8r-1713753533495-d3a6c051d107"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/aws_s3_file/",
"property": "og:url"
},
{
"content": "AWS S3 File | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "[Amazon Simple Storage Service (Amazon",
"property": "og:description"
}
],
"title": "AWS S3 File | 🦜️🔗 LangChain"
} | AWS S3 File
Amazon Simple Storage Service (Amazon S3) is an object storage service.
AWS S3 Buckets
This covers how to load document objects from an AWS S3 File object.
from langchain_community.document_loaders import S3FileLoader
%pip install --upgrade --quiet boto3
loader = S3FileLoader("testing-hwc", "fake.docx")
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 's3://testing-hwc/fake.docx'}, lookup_index=0)]
Configuring the AWS Boto3 client
You can configure the AWS Boto3 client by passing named arguments when creating the S3DirectoryLoader. This is useful for instance when AWS credentials can’t be set as environment variables. See the list of parameters that can be configured.
loader = S3FileLoader(
"testing-hwc", "fake.docx", aws_access_key_id="xxxx", aws_secret_access_key="yyyy"
) |
https://python.langchain.com/docs/integrations/document_loaders/aws_s3_directory/ | ## AWS S3 Directory
> [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html) is an object storage service.
> [AWS S3 Directory](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html)
This covers how to load document objects from an `AWS S3 Directory` object.
```
%pip install --upgrade --quiet boto3
```
```
from langchain_community.document_loaders import S3DirectoryLoader
```
```
loader = S3DirectoryLoader("testing-hwc")
```
## Specifying a prefix[](#specifying-a-prefix "Direct link to Specifying a prefix")
You can also specify a prefix for more fine-grained control over what files to load.
```
loader = S3DirectoryLoader("testing-hwc", prefix="fake")
```
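As with the file loader, the output below is produced by calling the loader; a sketch assuming the `loader` defined above:

```
# Load every object under the 'fake' prefix as a Document.
loader.load()
```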
```
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 's3://testing-hwc/fake.docx'}, lookup_index=0)]
```
## Configuring the AWS Boto3 client[](#configuring-the-aws-boto3-client "Direct link to Configuring the AWS Boto3 client")
You can configure the AWS [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) client by passing named arguments when creating the S3DirectoryLoader. This is useful for instance when AWS credentials can’t be set as environment variables. See the [list of parameters](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html#boto3.session.Session) that can be configured.
```
loader = S3DirectoryLoader( "testing-hwc", aws_access_key_id="xxxx", aws_secret_access_key="yyyy")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:53.917Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/aws_s3_directory/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/aws_s3_directory/",
"description": "[Amazon Simple Storage Service (Amazon",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4411",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"aws_s3_directory\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:53 GMT",
"etag": "W/\"9bfea4ce811f7397a3b0148cf307a84b\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::5fbxs-1713753533628-e1263d261a90"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/aws_s3_directory/",
"property": "og:url"
},
{
"content": "AWS S3 Directory | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "[Amazon Simple Storage Service (Amazon",
"property": "og:description"
}
],
"title": "AWS S3 Directory | 🦜️🔗 LangChain"
} | AWS S3 Directory
Amazon Simple Storage Service (Amazon S3) is an object storage service
AWS S3 Directory
This covers how to load document objects from an AWS S3 Directory object.
%pip install --upgrade --quiet boto3
from langchain_community.document_loaders import S3DirectoryLoader
loader = S3DirectoryLoader("testing-hwc")
Specifying a prefix
You can also specify a prefix for more fine-grained control over what files to load.
loader = S3DirectoryLoader("testing-hwc", prefix="fake")
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 's3://testing-hwc/fake.docx'}, lookup_index=0)]
Configuring the AWS Boto3 client
You can configure the AWS Boto3 client by passing named arguments when creating the S3DirectoryLoader. This is useful for instance when AWS credentials can’t be set as environment variables. See the list of parameters that can be configured.
loader = S3DirectoryLoader(
"testing-hwc", aws_access_key_id="xxxx", aws_secret_access_key="yyyy"
)
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/integrations/document_loaders/async_html/ | ## AsyncHtml
`AsyncHtmlLoader` loads raw HTML from a list of URLs concurrently.
```
from langchain_community.document_loaders import AsyncHtmlLoader
```
```
urls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"]
loader = AsyncHtmlLoader(urls)
docs = loader.load()
```
```
Fetching pages: 100%|############| 2/2 [00:00<00:00, 9.96it/s]
```
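If you are fetching many URLs, you may prefer to process pages as they are retrieved instead of materialising the whole list; a minimal sketch using the generic `lazy_load()` method that LangChain document loaders expose:
```
for doc in loader.lazy_load():
    print(doc.metadata["source"], len(doc.page_content))
```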
```
docs[0].page_content[1000:2000]
```
```
' news. Stream exclusive games on ESPN+ and play fantasy sports." />\n<meta property="og:image" content="https://a1.espncdn.com/combiner/i?img=%2Fi%2Fespn%2Fespn_logos%2Fespn_red.png"/>\n<meta property="og:image:width" content="1200" />\n<meta property="og:image:height" content="630" />\n<meta property="og:type" content="website" />\n<meta name="twitter:site" content="espn" />\n<meta name="twitter:url" content="https://www.espn.com" />\n<meta name="twitter:title" content="ESPN - Serving Sports Fans. Anytime. Anywhere."/>\n<meta name="twitter:description" content="Visit ESPN for live scores, highlights and sports news. Stream exclusive games on ESPN+ and play fantasy sports." />\n<meta name="twitter:card" content="summary">\n<meta name="twitter:app:name:iphone" content="ESPN"/>\n<meta name="twitter:app:id:iphone" content="317469184"/>\n<meta name="twitter:app:name:googleplay" content="ESPN"/>\n<meta name="twitter:app:id:googleplay" content="com.espn.score_center"/>\n<meta name="title" content="ESPN - '
```
```
docs[1].page_content[1000:2000]
```
```
'al" href="https://lilianweng.github.io/posts/2023-06-23-agent/" />\n<link crossorigin="anonymous" href="/assets/css/stylesheet.min.67a6fb6e33089cb29e856bcc95d7aa39f70049a42b123105531265a0d9f1258b.css" integrity="sha256-Z6b7bjMInLKehWvMldeqOfcASaQrEjEFUxJloNnxJYs=" rel="preload stylesheet" as="style">\n<script defer crossorigin="anonymous" src="/assets/js/highlight.min.7680afc38aa6b15ddf158a4f3780b7b1f7dde7e91d26f073e6229bb7a0793c92.js" integrity="sha256-doCvw4qmsV3fFYpPN4C3sffd5+kdJvBz5iKbt6B5PJI="\n onload="hljs.initHighlightingOnLoad();"></script>\n<link rel="icon" href="https://lilianweng.github.io/favicon_peach.ico">\n<link rel="icon" type="image/png" sizes="16x16" href="https://lilianweng.github.io/favicon-16x16.png">\n<link rel="icon" type="image/png" sizes="32x32" href="https://lilianweng.github.io/favicon-32x32.png">\n<link rel="apple-touch-icon" href="https://lilianweng.github.io/apple-touch-icon.png">\n<link rel="mask-icon" href="https://lilianweng.github.io/safari-pinned-tab.'
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:54.363Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/async_html/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/async_html/",
"description": "AsyncHtmlLoader loads raw HTML from a list of URLs concurrently.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4371",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"async_html\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:53 GMT",
"etag": "W/\"0937aaeea3743592cd345f638ab07007\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::5wtns-1713753533937-c0419f91282a"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/async_html/",
"property": "og:url"
},
{
"content": "AsyncHtml | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "AsyncHtmlLoader loads raw HTML from a list of URLs concurrently.",
"property": "og:description"
}
],
"title": "AsyncHtml | 🦜️🔗 LangChain"
} | AsyncHtml
AsyncHtmlLoader loads raw HTML from a list of URLs concurrently.
from langchain_community.document_loaders import AsyncHtmlLoader
urls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"]
loader = AsyncHtmlLoader(urls)
docs = loader.load()
Fetching pages: 100%|############| 2/2 [00:00<00:00, 9.96it/s]
docs[0].page_content[1000:2000]
' news. Stream exclusive games on ESPN+ and play fantasy sports." />\n<meta property="og:image" content="https://a1.espncdn.com/combiner/i?img=%2Fi%2Fespn%2Fespn_logos%2Fespn_red.png"/>\n<meta property="og:image:width" content="1200" />\n<meta property="og:image:height" content="630" />\n<meta property="og:type" content="website" />\n<meta name="twitter:site" content="espn" />\n<meta name="twitter:url" content="https://www.espn.com" />\n<meta name="twitter:title" content="ESPN - Serving Sports Fans. Anytime. Anywhere."/>\n<meta name="twitter:description" content="Visit ESPN for live scores, highlights and sports news. Stream exclusive games on ESPN+ and play fantasy sports." />\n<meta name="twitter:card" content="summary">\n<meta name="twitter:app:name:iphone" content="ESPN"/>\n<meta name="twitter:app:id:iphone" content="317469184"/>\n<meta name="twitter:app:name:googleplay" content="ESPN"/>\n<meta name="twitter:app:id:googleplay" content="com.espn.score_center"/>\n<meta name="title" content="ESPN - '
docs[1].page_content[1000:2000]
'al" href="https://lilianweng.github.io/posts/2023-06-23-agent/" />\n<link crossorigin="anonymous" href="/assets/css/stylesheet.min.67a6fb6e33089cb29e856bcc95d7aa39f70049a42b123105531265a0d9f1258b.css" integrity="sha256-Z6b7bjMInLKehWvMldeqOfcASaQrEjEFUxJloNnxJYs=" rel="preload stylesheet" as="style">\n<script defer crossorigin="anonymous" src="/assets/js/highlight.min.7680afc38aa6b15ddf158a4f3780b7b1f7dde7e91d26f073e6229bb7a0793c92.js" integrity="sha256-doCvw4qmsV3fFYpPN4C3sffd5+kdJvBz5iKbt6B5PJI="\n onload="hljs.initHighlightingOnLoad();"></script>\n<link rel="icon" href="https://lilianweng.github.io/favicon_peach.ico">\n<link rel="icon" type="image/png" sizes="16x16" href="https://lilianweng.github.io/favicon-16x16.png">\n<link rel="icon" type="image/png" sizes="32x32" href="https://lilianweng.github.io/favicon-32x32.png">\n<link rel="apple-touch-icon" href="https://lilianweng.github.io/apple-touch-icon.png">\n<link rel="mask-icon" href="https://lilianweng.github.io/safari-pinned-tab.'
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/integrations/document_loaders/azure_ai_data/ | ## Azure AI Data
> [Azure AI Studio](https://ai.azure.com/) provides the capability to upload data assets to cloud storage and register existing data assets from the following sources:
>
> * `Microsoft OneLake`
> * `Azure Blob Storage`
> * `Azure Data Lake gen 2`
The benefit of this approach over `AzureBlobStorageContainerLoader` and `AzureBlobStorageFileLoader` is that authentication to cloud storage is handled seamlessly. You can use either _identity-based_ data access control or _credential-based_ access (e.g. SAS token, account key). In the case of credential-based data access, you do not need to specify secrets in your code or set up key vaults - the system handles that for you.
This notebook covers how to load document objects from a data asset in AI Studio.
```
%pip install --upgrade --quiet azureml-fsspec azure-ai-generative
```
```
from azure.ai.resources.client import AIClient
from azure.identity import DefaultAzureCredential
from langchain_community.document_loaders import AzureAIDataLoader
```
```
# Create a connection to your project
client = AIClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription_id>",
    resource_group_name="<resource_group_name>",
    project_name="<project_name>",
)
```
```
# get the latest version of your data asset
data_asset = client.data.get(name="<data_asset_name>", label="latest")
```
```
# load the data asset
loader = AzureAIDataLoader(url=data_asset.path)
```
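`load()` then materialises the data asset as LangChain documents; a minimal sketch (the `Document` list below is the kind of output to expect):
```
docs = loader.load()
docs
```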
```
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpaa9xl6ch/fake.docx'}, lookup_index=0)]
```
## Specifying a glob pattern[](#specifying-a-glob-pattern "Direct link to Specifying a glob pattern")
You can also specify a glob pattern for more fine-grained control over what files to load. In the example below, only files with a `pdf` extension will be loaded.
```
loader = AzureAIDataLoader(url=data_asset.path, glob="*.pdf")
```
```
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:54.517Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/azure_ai_data/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/azure_ai_data/",
"description": "Azure AI Studio provides the capability to",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3442",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"azure_ai_data\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:54 GMT",
"etag": "W/\"7286f53e84065f1647c90d74152f1029\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::pjjcc-1713753534372-1d3e9d8aa877"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/azure_ai_data/",
"property": "og:url"
},
{
"content": "Azure AI Data | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Azure AI Studio provides the capability to",
"property": "og:description"
}
],
"title": "Azure AI Data | 🦜️🔗 LangChain"
} | Azure AI Data
Azure AI Studio provides the capability to upload data assets to cloud storage and register existing data assets from the following sources:
Microsoft OneLake
Azure Blob Storage
Azure Data Lake gen 2
The benefit of this approach over AzureBlobStorageContainerLoader and AzureBlobStorageFileLoader is that authentication is handled seamlessly to cloud storage. You can use either identity-based data access control to the data or credential-based (e.g. SAS token, account key). In the case of credential-based data access you do not need to specify secrets in your code or set up key vaults - the system handles that for you.
This notebook covers how to load document objects from a data asset in AI Studio.
%pip install --upgrade --quiet azureml-fsspec azure-ai-generative
from azure.ai.resources.client import AIClient
from azure.identity import DefaultAzureCredential
from langchain_community.document_loaders import AzureAIDataLoader
# Create a connection to your project
client = AIClient(
credential=DefaultAzureCredential(),
subscription_id="<subscription_id>",
resource_group_name="<resource_group_name>",
project_name="<project_name>",
)
# get the latest version of your data asset
data_asset = client.data.get(name="<data_asset_name>", label="latest")
# load the data asset
loader = AzureAIDataLoader(url=data_asset.path)
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpaa9xl6ch/fake.docx'}, lookup_index=0)]
Specifying a glob pattern
You can also specify a glob pattern for more fine-grained control over what files to load. In the example below, only files with a pdf extension will be loaded.
loader = AzureAIDataLoader(url=data_asset.path, glob="*.pdf")
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)] |
https://python.langchain.com/docs/integrations/document_loaders/azure_blob_storage_container/ | ## Azure Blob Storage Container
> [Azure Blob Storage](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction) is Microsoft’s object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn’t adhere to a particular data model or definition, such as text or binary data.
`Azure Blob Storage` is designed for:
- Serving images or documents directly to a browser.
- Storing files for distributed access.
- Streaming video and audio.
- Writing to log files.
- Storing data for backup and restore, disaster recovery, and archiving.
- Storing data for analysis by an on-premises or Azure-hosted service.
This notebook covers how to load document objects from a container on `Azure Blob Storage`.
```
%pip install --upgrade --quiet azure-storage-blob
```
```
from langchain_community.document_loaders import AzureBlobStorageContainerLoader
```
```
loader = AzureBlobStorageContainerLoader(conn_str="<conn_str>", container="<container>")
```
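Calling `load()` downloads the blobs in the container and parses them into documents; a minimal sketch (the output below shows the resulting `Document` list):
```
documents = loader.load()
documents
```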
```
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpaa9xl6ch/fake.docx'}, lookup_index=0)]
```
## Specifying a prefix[](#specifying-a-prefix "Direct link to Specifying a prefix")
You can also specify a prefix for more fine-grained control over what files to load.
```
loader = AzureBlobStorageContainerLoader(
    conn_str="<conn_str>", container="<container>", prefix="<prefix>"
)
```
```
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:54.898Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/azure_blob_storage_container/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/azure_blob_storage_container/",
"description": "[Azure Blob",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3442",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"azure_blob_storage_container\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:54 GMT",
"etag": "W/\"18c383766a5f28e28e56fe2fa59d265c\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::wvbtb-1713753534678-66a1768c2341"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/azure_blob_storage_container/",
"property": "og:url"
},
{
"content": "Azure Blob Storage Container | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "[Azure Blob",
"property": "og:description"
}
],
"title": "Azure Blob Storage Container | 🦜️🔗 LangChain"
} | Azure Blob Storage Container
Azure Blob Storage is Microsoft’s object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn’t adhere to a particular data model or definition, such as text or binary data.
Azure Blob Storage is designed for:
- Serving images or documents directly to a browser.
- Storing files for distributed access.
- Streaming video and audio.
- Writing to log files.
- Storing data for backup and restore, disaster recovery, and archiving.
- Storing data for analysis by an on-premises or Azure-hosted service.
This notebook covers how to load document objects from a container on Azure Blob Storage.
%pip install --upgrade --quiet azure-storage-blob
from langchain_community.document_loaders import AzureBlobStorageContainerLoader
loader = AzureBlobStorageContainerLoader(conn_str="<conn_str>", container="<container>")
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpaa9xl6ch/fake.docx'}, lookup_index=0)]
Specifying a prefix
You can also specify a prefix for more fine-grained control over what files to load.
loader = AzureBlobStorageContainerLoader(
conn_str="<conn_str>", container="<container>", prefix="<prefix>"
)
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)] |
https://python.langchain.com/docs/integrations/document_loaders/azure_blob_storage_file/ | ## Azure Blob Storage File
> [Azure Files](https://learn.microsoft.com/en-us/azure/storage/files/storage-files-introduction) offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (`SMB`) protocol, Network File System (`NFS`) protocol, and `Azure Files REST API`.
This covers how to load document objects from Azure Files.
```
%pip install --upgrade --quiet azure-storage-blob
```
```
from langchain_community.document_loaders import AzureBlobStorageFileLoader
```
```
loader = AzureBlobStorageFileLoader(
    conn_str="<connection string>",
    container="<container name>",
    blob_name="<blob name>",
)
```
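Here too, `load()` performs the download and parsing; a minimal sketch (the resulting `Document` list is shown below):
```
docs = loader.load()
docs
```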
```
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpxvave6wl/fake.docx'}, lookup_index=0)]
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:55.035Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/azure_blob_storage_file/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/azure_blob_storage_file/",
"description": "[Azure",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4397",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"azure_blob_storage_file\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:54 GMT",
"etag": "W/\"ce6d18f64171be27581b63b2f1d171ff\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::j6fmw-1713753534917-3bbde7b30a34"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/azure_blob_storage_file/",
"property": "og:url"
},
{
"content": "Azure Blob Storage File | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "[Azure",
"property": "og:description"
}
],
"title": "Azure Blob Storage File | 🦜️🔗 LangChain"
} | Azure Blob Storage File
Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol, Network File System (NFS) protocol, and Azure Files REST API.
This covers how to load document objects from Azure Files.
%pip install --upgrade --quiet azure-storage-blob
from langchain_community.document_loaders import AzureBlobStorageFileLoader
loader = AzureBlobStorageFileLoader(
conn_str="<connection string>",
container="<container name>",
blob_name="<blob name>",
)
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpxvave6wl/fake.docx'}, lookup_index=0)]
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/integrations/document_loaders/azure_document_intelligence/ | ## Azure AI Document Intelligence
> [Azure AI Document Intelligence](https://aka.ms/doc-intelligence) (formerly known as `Azure Form Recognizer`) is a machine-learning-based service that extracts text (including handwriting), tables, document structures (e.g., titles, section headings) and key-value pairs from digital or scanned PDFs, images, Office and HTML files.
>
> Document Intelligence supports `PDF`, `JPEG/JPG`, `PNG`, `BMP`, `TIFF`, `HEIF`, `DOCX`, `XLSX`, `PPTX` and `HTML`.
The current implementation of the `Document Intelligence` loader can incorporate content page-wise and turn it into LangChain documents. The default output format is markdown, which can be easily chained with `MarkdownHeaderTextSplitter` for semantic document chunking. You can also use `mode="single"` or `mode="page"` to return plain text, either as a single document or as one document per page.
## Prerequisite[](#prerequisite "Direct link to Prerequisite")
An Azure AI Document Intelligence resource in one of the 3 preview regions: **East US**, **West US2**, **West Europe** - follow [this document](https://learn.microsoft.com/azure/ai-services/document-intelligence/create-document-intelligence-resource?view=doc-intel-4.0.0) to create one if you don’t have one. You will be passing `<endpoint>` and `<key>` as parameters to the loader.
```
%pip install --upgrade --quiet langchain langchain-community azure-ai-documentintelligence
```
## Example 1[](#example-1 "Direct link to Example 1")
The first example uses a local file which will be sent to Azure AI Document Intelligence.
With the endpoint and key of your resource, we can create an instance of the AzureAIDocumentIntelligenceLoader:
```
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader

file_path = "<filepath>"
endpoint = "<endpoint>"
key = "<key>"
loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint=endpoint, api_key=key, file_path=file_path, api_model="prebuilt-layout"
)
documents = loader.load()
```
The default output contains one LangChain document with markdown-formatted content.
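Because the content is markdown, it can be chained directly with a header-based splitter; a minimal sketch (assuming the `langchain-text-splitters` package is installed; the header names here are illustrative):
```
from langchain_text_splitters import MarkdownHeaderTextSplitter

headers_to_split_on = [
    ("#", "Header 1"),
    ("##", "Header 2"),
]
splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
chunks = splitter.split_text(documents[0].page_content)
```
Each resulting chunk carries the matched headers in its metadata.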
## Example 2[](#example-2 "Direct link to Example 2")
The input file can also be a public URL path. E.g., [https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/layout.png](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/layout.png).
```
url_path = "<url>"
loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint=endpoint, api_key=key, url_path=url_path, api_model="prebuilt-layout"
)
documents = loader.load()
```
## Example 3[](#example-3 "Direct link to Example 3")
You can also specify `mode="page"` to load the document by pages.
```
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader

file_path = "<filepath>"
endpoint = "<endpoint>"
key = "<key>"
loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint=endpoint,
    api_key=key,
    file_path=file_path,
    api_model="prebuilt-layout",
    mode="page",
)
documents = loader.load()
```
The output will be each page stored as a separate document in the list:
```
for document in documents:
    print(f"Page Content: {document.page_content}")
    print(f"Metadata: {document.metadata}")
```
## Example 4[](#example-4 "Direct link to Example 4")
You can also specify `analysis_features=["ocrHighResolution"]` to enable add-on capabilities. For more information, see: [https://aka.ms/azsdk/python/documentintelligence/analysisfeature](https://aka.ms/azsdk/python/documentintelligence/analysisfeature).
```
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader

file_path = "<filepath>"
endpoint = "<endpoint>"
key = "<key>"
analysis_features = ["ocrHighResolution"]
loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint=endpoint,
    api_key=key,
    file_path=file_path,
    api_model="prebuilt-layout",
    analysis_features=analysis_features,
)
documents = loader.load()
```
The output contains the LangChain document recognized with the high-resolution add-on capability: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:55.506Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/azure_document_intelligence/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/azure_document_intelligence/",
"description": "Azure AI Document Intelligence",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"azure_document_intelligence\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:55 GMT",
"etag": "W/\"bc5ccb1018239db8a7476c840dae0149\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::5ds8x-1713753535154-1b63d10a252c"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/azure_document_intelligence/",
"property": "og:url"
},
{
"content": "Azure AI Document Intelligence | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Azure AI Document Intelligence",
"property": "og:description"
}
],
"title": "Azure AI Document Intelligence | 🦜️🔗 LangChain"
} | Azure AI Document Intelligence
Azure AI Document Intelligence (formerly known as Azure Form Recognizer) is a machine-learning-based service that extracts text (including handwriting), tables, document structures (e.g., titles, section headings) and key-value pairs from digital or scanned PDFs, images, Office and HTML files.
Document Intelligence supports PDF, JPEG/JPG, PNG, BMP, TIFF, HEIF, DOCX, XLSX, PPTX and HTML.
This current implementation of a loader using Document Intelligence can incorporate content page-wise and turn it into LangChain documents. The default output format is markdown, which can be easily chained with MarkdownHeaderTextSplitter for semantic document chunking. You can also use mode="single" or mode="page" to return pure texts in a single page or document split by page.
Prerequisite
An Azure AI Document Intelligence resource in one of the 3 preview regions: East US, West US2, West Europe - follow this document to create one if you don’t have one. You will be passing <endpoint> and <key> as parameters to the loader.
%pip install --upgrade --quiet langchain langchain-community azure-ai-documentintelligence
Example 1
The first example uses a local file which will be sent to Azure AI Document Intelligence.
With the initialized document analysis client, we can proceed to create an instance of the DocumentIntelligenceLoader:
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader
file_path = "<filepath>"
endpoint = "<endpoint>"
key = "<key>"
loader = AzureAIDocumentIntelligenceLoader(
api_endpoint=endpoint, api_key=key, file_path=file_path, api_model="prebuilt-layout"
)
documents = loader.load()
The default output contains one LangChain document with markdown format content:
Example 2
The input file can also be a public URL path. E.g., https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/layout.png.
url_path = "<url>"
loader = AzureAIDocumentIntelligenceLoader(
api_endpoint=endpoint, api_key=key, url_path=url_path, api_model="prebuilt-layout"
)
documents = loader.load()
Example 3
You can also specify mode="page" to load document by pages.
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader
file_path = "<filepath>"
endpoint = "<endpoint>"
key = "<key>"
loader = AzureAIDocumentIntelligenceLoader(
api_endpoint=endpoint,
api_key=key,
file_path=file_path,
api_model="prebuilt-layout",
mode="page",
)
documents = loader.load()
The output will be each page stored as a separate document in the list:
for document in documents:
print(f"Page Content: {document.page_content}")
print(f"Metadata: {document.metadata}")
Example 4
You can also specify analysis_features=["ocrHighResolution"] to enable add-on capabilities. For more information, see: https://aka.ms/azsdk/python/documentintelligence/analysisfeature.
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader
file_path = "<filepath>"
endpoint = "<endpoint>"
key = "<key>"
analysis_features = ["ocrHighResolution"]
loader = AzureAIDocumentIntelligenceLoader(
api_endpoint=endpoint,
api_key=key,
file_path=file_path,
api_model="prebuilt-layout",
analysis_features=analysis_features,
)
documents = loader.load()
The output contains the LangChain document recognized with high resolution add-on capability: |
https://python.langchain.com/docs/integrations/document_loaders/bibtex/ | ## BibTeX
> [BibTeX](https://www.ctan.org/pkg/bibtex) is a file format and reference management system commonly used in conjunction with `LaTeX` typesetting. It serves as a way to organize and store bibliographic information for academic and research documents.
`BibTeX` files have a `.bib` extension and consist of plain text entries representing references to various publications, such as books, articles, conference papers, theses, and more. Each `BibTeX` entry follows a specific structure and contains fields for different bibliographic details like author names, publication title, journal or book title, year of publication, page numbers, and more.
BibTeX files can also store the path to documents, such as `.pdf` files that can be retrieved.
## Installation[](#installation "Direct link to Installation")
First, you need to install `bibtexparser` and `PyMuPDF`.
```
%pip install --upgrade --quiet bibtexparser pymupdf
```
## Examples[](#examples "Direct link to Examples")
`BibtexLoader` has these arguments:
- `file_path`: the path of the `.bib` bibtex file
- optional `max_docs`: default=None, i.e. no limit. Use it to limit the number of retrieved documents.
- optional `max_content_chars`: default=4000. Use it to limit the number of characters in a single document.
- optional `load_extra_meta`: default=False. By default only the most important fields from the bibtex entries are loaded: `Published` (publication year), `Title`, `Authors`, `Summary`, `Journal`, `Keywords`, and `URL`. If True, it will also try to return the `entry_id`, `note`, `doi`, and `links` fields.
- optional `file_pattern`: default=`r'[^:]+\.pdf'`. Regex pattern to find files in the `file` entry. The default pattern supports `Zotero`-flavour bibtex style and bare file paths.
```
from langchain_community.document_loaders import BibtexLoader
```
```
# Create a dummy bibtex file and download a pdf.
import urllib.request

urllib.request.urlretrieve(
    "https://www.fourmilab.ch/etexts/einstein/specrel/specrel.pdf", "einstein1905.pdf"
)

bibtex_text = """
    @article{einstein1915,
        title={Die Feldgleichungen der Gravitation},
        abstract={Die Grundgleichungen der Gravitation, die ich hier entwickeln werde, wurden von mir in einer Abhandlung: ,,Die formale Grundlage der allgemeinen Relativit{\"a}tstheorie`` in den Sitzungsberichten der Preu{\ss}ischen Akademie der Wissenschaften 1915 ver{\"o}ffentlicht.},
        author={Einstein, Albert},
        journal={Sitzungsberichte der K{\"o}niglich Preu{\ss}ischen Akademie der Wissenschaften},
        volume={1915},
        number={1},
        pages={844--847},
        year={1915},
        doi={10.1002/andp.19163540702},
        link={https://onlinelibrary.wiley.com/doi/abs/10.1002/andp.19163540702},
        file={einstein1905.pdf}
    }
    """

# save bibtex_text to biblio.bib file
with open("./biblio.bib", "w") as file:
    file.write(bibtex_text)
```
```
docs = BibtexLoader("./biblio.bib").load()
```
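The metadata dictionary shown below belongs to the first loaded document; a minimal sketch of accessing it:
```
docs[0].metadata
```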
```
{'id': 'einstein1915', 'published_year': '1915', 'title': 'Die Feldgleichungen der Gravitation', 'publication': 'Sitzungsberichte der K{"o}niglich Preu{\\ss}ischen Akademie der Wissenschaften', 'authors': 'Einstein, Albert', 'abstract': 'Die Grundgleichungen der Gravitation, die ich hier entwickeln werde, wurden von mir in einer Abhandlung: ,,Die formale Grundlage der allgemeinen Relativit{"a}tstheorie`` in den Sitzungsberichten der Preu{\\ss}ischen Akademie der Wissenschaften 1915 ver{"o}ffentlicht.', 'url': 'https://doi.org/10.1002/andp.19163540702'}
```
```
print(docs[0].page_content[:400]) # all pages of the pdf content
```
```
ON THE ELECTRODYNAMICS OF MOVING
BODIES
By A. EINSTEIN
June 30, 1905
It is known that Maxwell’s electrodynamics—as usually understood at the
present time—when applied to moving bodies, leads to asymmetries which do
not appear to be inherent in the phenomena. Take, for example, the recipro-
cal electrodynamic action of a magnet and a conductor. The observable phe-
nomenon here depends only on the r
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:55.361Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/bibtex/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/bibtex/",
"description": "BibTeX is a file format and",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3442",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"bibtex\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:55 GMT",
"etag": "W/\"30ec8a41bb3670556dc27c0c4ed69e04\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::mg4n2-1713753535251-cf023402298a"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/bibtex/",
"property": "og:url"
},
{
"content": "BibTeX | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "BibTeX is a file format and",
"property": "og:description"
}
],
"title": "BibTeX | 🦜️🔗 LangChain"
} | BibTeX
BibTeX is a file format and reference management system commonly used in conjunction with LaTeX typesetting. It serves as a way to organize and store bibliographic information for academic and research documents.
BibTeX files have a .bib extension and consist of plain text entries representing references to various publications, such as books, articles, conference papers, theses, and more. Each BibTeX entry follows a specific structure and contains fields for different bibliographic details like author names, publication title, journal or book title, year of publication, page numbers, and more.
BibTeX files can also store the path to documents, such as .pdf files that can be retrieved.
Installation
First, you need to install bibtexparser and PyMuPDF.
%pip install --upgrade --quiet bibtexparser pymupdf
Examples
BibtexLoader has these arguments: - file_path: the path of the .bib bibtex file - optional max_docs: default=None, i.e. not limit. Use it to limit number of retrieved documents. - optional max_content_chars: default=4000. Use it to limit the number of characters in a single document. - optional load_extra_meta: default=False. By default only the most important fields from the bibtex entries: Published (publication year), Title, Authors, Summary, Journal, Keywords, and URL. If True, it will also try to load return entry_id, note, doi, and links fields. - optional file_pattern: default=r'[^:]+\.pdf'. Regex pattern to find files in the file entry. Default pattern supports Zotero flavour bibtex style and bare file path.
from langchain_community.document_loaders import BibtexLoader
# Create a dummy bibtex file and download a pdf.
import urllib.request
urllib.request.urlretrieve(
"https://www.fourmilab.ch/etexts/einstein/specrel/specrel.pdf", "einstein1905.pdf"
)
bibtex_text = """
@article{einstein1915,
title={Die Feldgleichungen der Gravitation},
abstract={Die Grundgleichungen der Gravitation, die ich hier entwickeln werde, wurden von mir in einer Abhandlung: ,,Die formale Grundlage der allgemeinen Relativit{\"a}tstheorie`` in den Sitzungsberichten der Preu{\ss}ischen Akademie der Wissenschaften 1915 ver{\"o}ffentlicht.},
author={Einstein, Albert},
journal={Sitzungsberichte der K{\"o}niglich Preu{\ss}ischen Akademie der Wissenschaften},
volume={1915},
number={1},
pages={844--847},
year={1915},
doi={10.1002/andp.19163540702},
link={https://onlinelibrary.wiley.com/doi/abs/10.1002/andp.19163540702},
file={einstein1905.pdf}
}
"""
# save bibtex_text to biblio.bib file
with open("./biblio.bib", "w") as file:
file.write(bibtex_text)
docs = BibtexLoader("./biblio.bib").load()
{'id': 'einstein1915',
'published_year': '1915',
'title': 'Die Feldgleichungen der Gravitation',
'publication': 'Sitzungsberichte der K{"o}niglich Preu{\\ss}ischen Akademie der Wissenschaften',
'authors': 'Einstein, Albert',
'abstract': 'Die Grundgleichungen der Gravitation, die ich hier entwickeln werde, wurden von mir in einer Abhandlung: ,,Die formale Grundlage der allgemeinen Relativit{"a}tstheorie`` in den Sitzungsberichten der Preu{\\ss}ischen Akademie der Wissenschaften 1915 ver{"o}ffentlicht.',
'url': 'https://doi.org/10.1002/andp.19163540702'}
print(docs[0].page_content[:400]) # all pages of the pdf content
ON THE ELECTRODYNAMICS OF MOVING
BODIES
By A. EINSTEIN
June 30, 1905
It is known that Maxwell’s electrodynamics—as usually understood at the
present time—when applied to moving bodies, leads to asymmetries which do
not appear to be inherent in the phenomena. Take, for example, the recipro-
cal electrodynamic action of a magnet and a conductor. The observable phe-
nomenon here depends only on the r |
https://python.langchain.com/docs/integrations/document_loaders/bilibili/ | This loader leverages the [bilibili-api](https://github.com/Nemo2011/bilibili-api) to retrieve text transcripts from `Bilibili` videos. To effectively use this loader, it’s essential to have the `sessdata`, `bili_jct`, and `buvid3` cookie parameters. These can be obtained by logging into [Bilibili](https://www.bilibili.com/), then extracting the values of `sessdata`, `bili_jct`, and `buvid3` from the browser’s developer tools.
If you choose to leave the cookie parameters blank, the Loader will still function, but it will only retrieve video information for the metadata and will not be able to fetch transcripts.
For detailed instructions on obtaining these credentials, refer to the guide [here](https://nemo2011.github.io/bilibili-api/#/get-credential).
The BiliBiliLoader provides a user-friendly interface for easily accessing transcripts of desired video content on Bilibili, making it an invaluable tool for those looking to analyze or utilize this media data.
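A minimal sketch of constructing the loader for the video shown below (the cookie values are placeholders you must supply; parameter names follow the current `BiliBiliLoader` signature and may differ across versions):
```
from langchain_community.document_loaders import BiliBiliLoader

loader = BiliBiliLoader(
    video_urls=["https://www.bilibili.com/video/BV1g84y1R7oE/"],
    sessdata="<sessdata>",
    bili_jct="<bili_jct>",
    buvid3="<buvid3>",
)
docs = loader.load()
docs
```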
```
[Document(page_content="Video Title: Let's Learn English on a Hike! 🍂🚶🏼🎒 【英文字幕】,description: Have you ever learned English in nature? In this English lesson I take you to a hiking trail near me and teach you all of the words and phrases you'll need to know to have a conversation about hiking.\n\nIn this free English class you'll learn words and phrases like: trail, path, treacherous, view, sign, observation deck, gear, hiking boots, dangerous animals, racoon, skunk, and more.\n\nYou'll even see a squirrel and hear me say the word, which is supposed to be one of the hardest English words to pronounce! Maybe hit pause at that point and try to shadow me saying the word (repeat it after me).\n\nI hope you enjoy this English lesson and I hope you find time to go for a hike soon!\n\n⌛ Remember:\xa0Always watch the video two times.\xa0Once today with English subtitles on, and once tomorrow with the English subtitles off.\xa0This will reinforce the English you have learned!\n\nTranscript: Let's learn English on a hike. (upbeat music) So I guess I should start this lesson by explaining the difference between going for a walk and going for a hike. When you go for a walk, it usually means that you are in the city or you are in town. You go for a walk along the road or on a sidewalk. You're somewhere where lots of people live. When you hike, when you go for a hike though, it usually means you're somewhere like this. You're somewhere out in nature where you can enjoy the sunshine, the trees, and just getting away from the busyness of city life or town life. Now, when you go for a hike, you don't just go into the woods and start walking, you usually look for a trail. Usually there is a path or trail, we use both words in English, and you can walk down that trail when you go for your hike. This is a very well maintained trail, it's very, very nice to walk on this trail. And there's a nice sign here that tells us about this trail, It's called the Lookout Trail, and it's about a kilometer long, it's just under a kilometer in length, and it should take about 25 minutes to walk this trail. So once again, when you go for a hike, you don't just walk into the woods willy-nilly. Do you know that word? You look for trails and you stay on the trails. It's important to stay on the trails, I'll tell you why in just a moment. It's important to stay on the trails for a couple of reasons. The first reason is that there are some plants in this part of Ontario, Canada, that aren't very good for you. There's something in particular called poison ivy. If you go off the path, so as we look here at the path, if you were to go off the path, eventually you might walk through some poison ivy and then you will be very itchy for a number of days. Poison ivy is not a nice thing to have touch your skin. So if you're wearing shorts, for sure, don't ever go off the path. The other thing is that you might encounter dangerous animals. There aren't many dangerous animals here, but in Northern Ontario and in other parts of Canada, if you're out for a hike, it's very important that you stick to the paths and trails because if you don't, you might encounter something like a bear or another dangerous animal. And then the third reason is it just helps nature to stay beautiful. If we were to all walk through the woods all the time, we would trample a lot of things that are growing there and it wouldn't stay beautiful. 
So the best way to enjoy nature is to stay on the paths and trails when you go on a hike and just appreciate the stuff that you're seeing. One of the coolest things about going on a hike is sometimes you're able to see wildlife. Now, I talked about wildlife that is dangerous, but some wildlife is not dangerous at all. This is a squirrel. This is probably the most common animal that you will see if you go on a hike in Ontario, Canada. You'll also see lots of birds, you'll see chipmunks, you might see a raccoon, you might even see a skunk. By the way, if you see a skunk, you should go the other direction. But one of the cool fun things about going for a hike is there's a good chance that you'll see some wildlife. Now, if you're just out for a quick hike like I am, you don't need any special gear. I'm just wearing normal running shoes. It's good to have appropriate footwear on when you go for a hike, but if you were more serious and if you were going for a longer hike, you might buy hiking boots. You might have walking sticks, you might bring a backpack with lots of supplies, like some food that you can eat for your hike. But I'm just out here for a simple hike. I'm not going for very long. I'm going for about as long as it takes to make this English lesson. So all I need is a nice pair of shoes and a sweatshirt, and I'm good to go. So hiking has a lot of benefits. I am a firm believer that people should get out of the city, that they should find somewhere where they can be in nature and that they should go for regular hikes. I think it's good for the mind, I think it's good for the soul, and it's also good exercise. So if you feel stuck in your apartment, if you're stuck in your house, and you just want to be happier, I highly suggest that you find somewhere beautiful to go for a hike. You can come to Canada if you want, but that's a little bit extreme. I'm sure if you go outside of your town or city, there's probably a place where you can go for a nice hike. You can breathe in the fresh air, you can get a little bit of exercise, and when you're done, you'll feel mentally and physically refreshed. So I waited until fall or what we also call autumn to make this English lesson about hiking. I waited because this is the best time of year in my opinion, to go hiking. You can see that the leaves have started to change color. You can see red leaves and brown leaves and orange leaves and yellow leaves, and still some green leaves, and there's even leaves on the ground already. This time of year, this season, in my opinion, is the best time to go hiking. The weather is cooler, the views in nature are amazing, and it's just a great time to be out in the forest. So sometimes you go on a hike to see specific things, sometimes it's just nature, maybe an old tree that fell in the forest. Maybe along the hiking trail, there's a beautiful waterfall. Sorry, there's no waterfalls on this hiking trail. But you also might hike to a point where there is a spectacular view. Behind me, you can see Lake Ontario. So one of the rewards of this hiking trail is that you can stand here and you can see Lake Ontario, you can see some of the beautiful fall colors, and you can see the town below. So sometimes you go on a hike for a very specific reason because there's just something awesome to see while you're hiking. So you do wanna be careful when you're hiking, there's always the possibility that something could go wrong. If you look down here, it's a little bit treacherous to walk along here. 
You could twist your ankle, you might slip and fall, and you want to just be really, really careful so that you don't hurt yourself. In addition, if you look over here, we are walking along the edge of a very sharp drop-off. I wouldn't call this a cliff, but you certainly wouldn't want to accidentally slip and fall down here. That would certainly hurt and would not be a good thing to have happen while you're out for a beautiful hike in nature. So one of the coolest things about this hiking trail is that there is an observation deck. I'm standing on this observation deck because if I show you what's on the other side of the camera, I think you'll really enjoy the view. So this would be another example of a spectacular view. I feel like it's so spectacular, I should just stop talking for a few seconds and let you enjoy it. Well, thank you so much for watching this English lesson and for coming on a little hike with me. I hope you were able to learn some new English words and phrases that you can use in your next English conversation. Remember, if this is your first time here, don't forget to click that red subscribe button, gimme a thumbs up, leave a comment. And as always, if you have a little bit of extra time, why don't you stick around and watch another English lesson. Bye. (upbeat music)", metadata={'bvid': 'BV1g84y1R7oE', 'aid': 620074163, 'videos': 1, 'tid': 208, 'tname': '校园学习', 'copyright': 1, 'pic': 'http://i2.hdslb.com/bfs/archive/49c2a3612efe1caad023973ca56d979aabc30922.jpg', 'title': "Let's Learn English on a Hike! 🍂🚶🏼🎒 【英文字幕】", 'pubdate': 1698108310, 'ctime': 1698108310, 'desc': "Have you ever learned English in nature? In this English lesson I take you to a hiking trail near me and teach you all of the words and phrases you'll need to know to have a conversation about hiking.\n\nIn this free English class you'll learn words and phrases like: trail, path, treacherous, view, sign, observation deck, gear, hiking boots, dangerous animals, racoon, skunk, and more.\n\nYou'll even see a squirrel and hear me say the word, which is supposed to be one of the hardest English words to pronounce! Maybe hit pause at that point and try to shadow me saying the word (repeat it after me).\n\nI hope you enjoy this English lesson and I hope you find time to go for a hike soon!\n\n⌛ Remember:\xa0Always watch the video two times.\xa0Once today with English subtitles on, and once tomorrow with the English subtitles off.\xa0This will reinforce the English you have learned!", 'desc_v2': [{'raw_text': "Have you ever learned English in nature? In this English lesson I take you to a hiking trail near me and teach you all of the words and phrases you'll need to know to have a conversation about hiking.\n\nIn this free English class you'll learn words and phrases like: trail, path, treacherous, view, sign, observation deck, gear, hiking boots, dangerous animals, racoon, skunk, and more.\n\nYou'll even see a squirrel and hear me say the word, which is supposed to be one of the hardest English words to pronounce! 
Maybe hit pause at that point and try to shadow me saying the word (repeat it after me).\n\nI hope you enjoy this English lesson and I hope you find time to go for a hike soon!\n\n⌛ Remember:\xa0Always watch the video two times.\xa0Once today with English subtitles on, and once tomorrow with the English subtitles off.\xa0This will reinforce the English you have learned!", 'type': 1, 'biz_id': 0}], 'state': 0, 'duration': 514, 'rights': {'bp': 0, 'elec': 0, 'download': 1, 'movie': 0, 'pay': 0, 'hd5': 1, 'no_reprint': 0, 'autoplay': 1, 'ugc_pay': 0, 'is_cooperation': 0, 'ugc_pay_preview': 0, 'no_background': 0, 'clean_mode': 0, 'is_stein_gate': 0, 'is_360': 0, 'no_share': 0, 'arc_pay': 0, 'free_watch': 0}, 'owner': {'mid': 3493297272130118, 'name': 'Bob-the-Canadian', 'face': 'https://i2.hdslb.com/bfs/face/4c0f76c9b1e052ba3889b619f5f9f8326b9c83ad.png'}, 'stat': {'aid': 620074163, 'view': 124151, 'danmaku': 284, 'reply': 310, 'favorite': 3671, 'coin': 2562, 'share': 495, 'now_rank': 0, 'his_rank': 0, 'like': 7100, 'dislike': 0, 'evaluation': '', 'argue_msg': '', 'vt': 0}, 'dynamic': '', 'cid': 1309256295, 'dimension': {'width': 3840, 'height': 2160, 'rotate': 0}, 'premiere': None, 'teenage_mode': 0, 'is_chargeable_season': False, 'is_story': False, 'is_upower_exclusive': False, 'is_upower_play': False, 'enable_vt': 0, 'vt_display': '', 'no_cache': False, 'pages': [{'cid': 1309256295, 'page': 1, 'from': 'vupload', 'part': "Let's Learn English on a Hike! 🍂🚶🏼🎒 【英文字幕】", 'duration': 514, 'vid': '', 'weblink': '', 'dimension': {'width': 3840, 'height': 2160, 'rotate': 0}, 'first_frame': 'http://i1.hdslb.com/bfs/storyff/n231024qn20mf49qfe3dogu27vu3hlee_firsti.jpg'}], 'subtitle': {'allow_submit': False, 'list': [{'id': 1341811271904868864, 'lan': 'en-US', 'lan_doc': '英语(美国)', 'is_lock': False, 'subtitle_url': '', 'type': 0, 'id_str': '1341811271904868864', 'ai_type': 0, 'ai_status': 0, 'author': {'mid': 0, 'name': '', 'sex': '', 'face': '', 'sign': '', 'rank': 0, 'birthday': 0, 'is_fake_account': 0, 'is_deleted': 0, 'in_reg_audit': 0, 'is_senior_member': 0}}, {'id': 1341824311585240576, 'lan': 'ai-zh', 'lan_doc': '中文(自动翻译)', 'is_lock': False, 'subtitle_url': '', 'type': 1, 'id_str': '1341824311585240576', 'ai_type': 1, 'ai_status': 1, 'author': {'mid': 0, 'name': '', 'sex': '', 'face': '', 'sign': '', 'rank': 0, 'birthday': 0, 'is_fake_account': 0, 'is_deleted': 0, 'in_reg_audit': 0, 'is_senior_member': 0}}, {'id': 1341824314462225408, 'lan': 'ai-en', 'lan_doc': '英语(自动生成)', 'is_lock': False, 'subtitle_url': '', 'type': 1, 'id_str': '1341824314462225408', 'ai_type': 0, 'ai_status': 2, 'author': {'mid': 0, 'name': '', 'sex': '', 'face': '', 'sign': '', 'rank': 0, 'birthday': 0, 'is_fake_account': 0, 'is_deleted': 0, 'in_reg_audit': 0, 'is_senior_member': 0}}]}, 'is_season_display': False, 'user_garb': {'url_image_ani_cut': ''}, 'honor_reply': {}, 'like_icon': '', 'need_jump_bv': False, 'disable_show_up_info': False, 'url': 'https://www.bilibili.com/video/BV1g84y1R7oE/'})]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:55.990Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/bilibili/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/bilibili/",
"description": "Bilibili is one of the most beloved",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3443",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"bilibili\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:55 GMT",
"etag": "W/\"630e7670d7e684a2b67ac6aa2036f748\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::vlt2t-1713753535940-697b39906f18"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/bilibili/",
"property": "og:url"
},
{
"content": "BiliBili | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Bilibili is one of the most beloved",
"property": "og:description"
}
],
"title": "BiliBili | 🦜️🔗 LangChain"
} | This loader leverages the bilibili-api to retrieve text transcripts from Bilibili videos. To effectively use this loader, it’s essential to have the sessdata, bili_jct, and buvid3 cookie parameters. These can be obtained by logging into Bilibili, then extracting the values of sessdata, bili_jct, and buvid3 from the browser’s developer tools.
If you choose to leave the cookie parameters blank, the Loader will still function, but it will only retrieve video information for the metadata and will not be able to fetch transcripts.
For detailed instructions on obtaining these credentials, refer to the guide here.
The BiliBiliLoader provides a user-friendly interface for easily accessing transcripts of desired video content on Bilibili, making it an invaluable tool for those looking to analyze or utilize this media data.
[Document(page_content="Video Title: Let's Learn English on a Hike! 🍂🚶🏼🎒 【英文字幕】,description: Have you ever learned English in nature? In this English lesson I take you to a hiking trail near me and teach you all of the words and phrases you'll need to know to have a conversation about hiking.\n\nIn this free English class you'll learn words and phrases like: trail, path, treacherous, view, sign, observation deck, gear, hiking boots, dangerous animals, racoon, skunk, and more.\n\nYou'll even see a squirrel and hear me say the word, which is supposed to be one of the hardest English words to pronounce! Maybe hit pause at that point and try to shadow me saying the word (repeat it after me).\n\nI hope you enjoy this English lesson and I hope you find time to go for a hike soon!\n\n⌛ Remember:\xa0Always watch the video two times.\xa0Once today with English subtitles on, and once tomorrow with the English subtitles off.\xa0This will reinforce the English you have learned!\n\nTranscript: Let's learn English on a hike. (upbeat music) So I guess I should start this lesson by explaining the difference between going for a walk and going for a hike. When you go for a walk, it usually means that you are in the city or you are in town. You go for a walk along the road or on a sidewalk. You're somewhere where lots of people live. When you hike, when you go for a hike though, it usually means you're somewhere like this. You're somewhere out in nature where you can enjoy the sunshine, the trees, and just getting away from the busyness of city life or town life. Now, when you go for a hike, you don't just go into the woods and start walking, you usually look for a trail. Usually there is a path or trail, we use both words in English, and you can walk down that trail when you go for your hike. This is a very well maintained trail, it's very, very nice to walk on this trail. And there's a nice sign here that tells us about this trail, It's called the Lookout Trail, and it's about a kilometer long, it's just under a kilometer in length, and it should take about 25 minutes to walk this trail. So once again, when you go for a hike, you don't just walk into the woods willy-nilly. Do you know that word? You look for trails and you stay on the trails. It's important to stay on the trails, I'll tell you why in just a moment. It's important to stay on the trails for a couple of reasons. The first reason is that there are some plants in this part of Ontario, Canada, that aren't very good for you. There's something in particular called poison ivy. If you go off the path, so as we look here at the path, if you were to go off the path, eventually you might walk through some poison ivy and then you will be very itchy for a number of days. Poison ivy is not a nice thing to have touch your skin. So if you're wearing shorts, for sure, don't ever go off the path. The other thing is that you might encounter dangerous animals. There aren't many dangerous animals here, but in Northern Ontario and in other parts of Canada, if you're out for a hike, it's very important that you stick to the paths and trails because if you don't, you might encounter something like a bear or another dangerous animal. And then the third reason is it just helps nature to stay beautiful. If we were to all walk through the woods all the time, we would trample a lot of things that are growing there and it wouldn't stay beautiful. 
So the best way to enjoy nature is to stay on the paths and trails when you go on a hike and just appreciate the stuff that you're seeing. One of the coolest things about going on a hike is sometimes you're able to see wildlife. Now, I talked about wildlife that is dangerous, but some wildlife is not dangerous at all. This is a squirrel. This is probably the most common animal that you will see if you go on a hike in Ontario, Canada. You'll also see lots of birds, you'll see chipmunks, you might see a raccoon, you might even see a skunk. By the way, if you see a skunk, you should go the other direction. But one of the cool fun things about going for a hike is there's a good chance that you'll see some wildlife. Now, if you're just out for a quick hike like I am, you don't need any special gear. I'm just wearing normal running shoes. It's good to have appropriate footwear on when you go for a hike, but if you were more serious and if you were going for a longer hike, you might buy hiking boots. You might have walking sticks, you might bring a backpack with lots of supplies, like some food that you can eat for your hike. But I'm just out here for a simple hike. I'm not going for very long. I'm going for about as long as it takes to make this English lesson. So all I need is a nice pair of shoes and a sweatshirt, and I'm good to go. So hiking has a lot of benefits. I am a firm believer that people should get out of the city, that they should find somewhere where they can be in nature and that they should go for regular hikes. I think it's good for the mind, I think it's good for the soul, and it's also good exercise. So if you feel stuck in your apartment, if you're stuck in your house, and you just want to be happier, I highly suggest that you find somewhere beautiful to go for a hike. You can come to Canada if you want, but that's a little bit extreme. I'm sure if you go outside of your town or city, there's probably a place where you can go for a nice hike. You can breathe in the fresh air, you can get a little bit of exercise, and when you're done, you'll feel mentally and physically refreshed. So I waited until fall or what we also call autumn to make this English lesson about hiking. I waited because this is the best time of year in my opinion, to go hiking. You can see that the leaves have started to change color. You can see red leaves and brown leaves and orange leaves and yellow leaves, and still some green leaves, and there's even leaves on the ground already. This time of year, this season, in my opinion, is the best time to go hiking. The weather is cooler, the views in nature are amazing, and it's just a great time to be out in the forest. So sometimes you go on a hike to see specific things, sometimes it's just nature, maybe an old tree that fell in the forest. Maybe along the hiking trail, there's a beautiful waterfall. Sorry, there's no waterfalls on this hiking trail. But you also might hike to a point where there is a spectacular view. Behind me, you can see Lake Ontario. So one of the rewards of this hiking trail is that you can stand here and you can see Lake Ontario, you can see some of the beautiful fall colors, and you can see the town below. So sometimes you go on a hike for a very specific reason because there's just something awesome to see while you're hiking. So you do wanna be careful when you're hiking, there's always the possibility that something could go wrong. If you look down here, it's a little bit treacherous to walk along here. 
You could twist your ankle, you might slip and fall, and you want to just be really, really careful so that you don't hurt yourself. In addition, if you look over here, we are walking along the edge of a very sharp drop-off. I wouldn't call this a cliff, but you certainly wouldn't want to accidentally slip and fall down here. That would certainly hurt and would not be a good thing to have happen while you're out for a beautiful hike in nature. So one of the coolest things about this hiking trail is that there is an observation deck. I'm standing on this observation deck because if I show you what's on the other side of the camera, I think you'll really enjoy the view. So this would be another example of a spectacular view. I feel like it's so spectacular, I should just stop talking for a few seconds and let you enjoy it. Well, thank you so much for watching this English lesson and for coming on a little hike with me. I hope you were able to learn some new English words and phrases that you can use in your next English conversation. Remember, if this is your first time here, don't forget to click that red subscribe button, gimme a thumbs up, leave a comment. And as always, if you have a little bit of extra time, why don't you stick around and watch another English lesson. Bye. (upbeat music)", metadata={'bvid': 'BV1g84y1R7oE', 'aid': 620074163, 'videos': 1, 'tid': 208, 'tname': '校园学习', 'copyright': 1, 'pic': 'http://i2.hdslb.com/bfs/archive/49c2a3612efe1caad023973ca56d979aabc30922.jpg', 'title': "Let's Learn English on a Hike! 🍂🚶🏼🎒 【英文字幕】", 'pubdate': 1698108310, 'ctime': 1698108310, 'desc': "Have you ever learned English in nature? In this English lesson I take you to a hiking trail near me and teach you all of the words and phrases you'll need to know to have a conversation about hiking.\n\nIn this free English class you'll learn words and phrases like: trail, path, treacherous, view, sign, observation deck, gear, hiking boots, dangerous animals, racoon, skunk, and more.\n\nYou'll even see a squirrel and hear me say the word, which is supposed to be one of the hardest English words to pronounce! Maybe hit pause at that point and try to shadow me saying the word (repeat it after me).\n\nI hope you enjoy this English lesson and I hope you find time to go for a hike soon!\n\n⌛ Remember:\xa0Always watch the video two times.\xa0Once today with English subtitles on, and once tomorrow with the English subtitles off.\xa0This will reinforce the English you have learned!", 'desc_v2': [{'raw_text': "Have you ever learned English in nature? In this English lesson I take you to a hiking trail near me and teach you all of the words and phrases you'll need to know to have a conversation about hiking.\n\nIn this free English class you'll learn words and phrases like: trail, path, treacherous, view, sign, observation deck, gear, hiking boots, dangerous animals, racoon, skunk, and more.\n\nYou'll even see a squirrel and hear me say the word, which is supposed to be one of the hardest English words to pronounce! 
Maybe hit pause at that point and try to shadow me saying the word (repeat it after me).\n\nI hope you enjoy this English lesson and I hope you find time to go for a hike soon!\n\n⌛ Remember:\xa0Always watch the video two times.\xa0Once today with English subtitles on, and once tomorrow with the English subtitles off.\xa0This will reinforce the English you have learned!", 'type': 1, 'biz_id': 0}], 'state': 0, 'duration': 514, 'rights': {'bp': 0, 'elec': 0, 'download': 1, 'movie': 0, 'pay': 0, 'hd5': 1, 'no_reprint': 0, 'autoplay': 1, 'ugc_pay': 0, 'is_cooperation': 0, 'ugc_pay_preview': 0, 'no_background': 0, 'clean_mode': 0, 'is_stein_gate': 0, 'is_360': 0, 'no_share': 0, 'arc_pay': 0, 'free_watch': 0}, 'owner': {'mid': 3493297272130118, 'name': 'Bob-the-Canadian', 'face': 'https://i2.hdslb.com/bfs/face/4c0f76c9b1e052ba3889b619f5f9f8326b9c83ad.png'}, 'stat': {'aid': 620074163, 'view': 124151, 'danmaku': 284, 'reply': 310, 'favorite': 3671, 'coin': 2562, 'share': 495, 'now_rank': 0, 'his_rank': 0, 'like': 7100, 'dislike': 0, 'evaluation': '', 'argue_msg': '', 'vt': 0}, 'dynamic': '', 'cid': 1309256295, 'dimension': {'width': 3840, 'height': 2160, 'rotate': 0}, 'premiere': None, 'teenage_mode': 0, 'is_chargeable_season': False, 'is_story': False, 'is_upower_exclusive': False, 'is_upower_play': False, 'enable_vt': 0, 'vt_display': '', 'no_cache': False, 'pages': [{'cid': 1309256295, 'page': 1, 'from': 'vupload', 'part': "Let's Learn English on a Hike! 🍂🚶🏼🎒 【英文字幕】", 'duration': 514, 'vid': '', 'weblink': '', 'dimension': {'width': 3840, 'height': 2160, 'rotate': 0}, 'first_frame': 'http://i1.hdslb.com/bfs/storyff/n231024qn20mf49qfe3dogu27vu3hlee_firsti.jpg'}], 'subtitle': {'allow_submit': False, 'list': [{'id': 1341811271904868864, 'lan': 'en-US', 'lan_doc': '英语(美国)', 'is_lock': False, 'subtitle_url': '', 'type': 0, 'id_str': '1341811271904868864', 'ai_type': 0, 'ai_status': 0, 'author': {'mid': 0, 'name': '', 'sex': '', 'face': '', 'sign': '', 'rank': 0, 'birthday': 0, 'is_fake_account': 0, 'is_deleted': 0, 'in_reg_audit': 0, 'is_senior_member': 0}}, {'id': 1341824311585240576, 'lan': 'ai-zh', 'lan_doc': '中文(自动翻译)', 'is_lock': False, 'subtitle_url': '', 'type': 1, 'id_str': '1341824311585240576', 'ai_type': 1, 'ai_status': 1, 'author': {'mid': 0, 'name': '', 'sex': '', 'face': '', 'sign': '', 'rank': 0, 'birthday': 0, 'is_fake_account': 0, 'is_deleted': 0, 'in_reg_audit': 0, 'is_senior_member': 0}}, {'id': 1341824314462225408, 'lan': 'ai-en', 'lan_doc': '英语(自动生成)', 'is_lock': False, 'subtitle_url': '', 'type': 1, 'id_str': '1341824314462225408', 'ai_type': 0, 'ai_status': 2, 'author': {'mid': 0, 'name': '', 'sex': '', 'face': '', 'sign': '', 'rank': 0, 'birthday': 0, 'is_fake_account': 0, 'is_deleted': 0, 'in_reg_audit': 0, 'is_senior_member': 0}}]}, 'is_season_display': False, 'user_garb': {'url_image_ani_cut': ''}, 'honor_reply': {}, 'like_icon': '', 'need_jump_bv': False, 'disable_show_up_info': False, 'url': 'https://www.bilibili.com/video/BV1g84y1R7oE/'})] |
https://python.langchain.com/docs/integrations/document_loaders/azlyrics/ | This covers how to load AZLyrics webpages into a document format that we can use downstream.
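A minimal sketch of how the output below might be produced (the URL is taken from the result’s metadata; treat the exact call as an assumption):

```
from langchain_community.document_loaders import AZLyricsLoader

# Assumption: the page below is the one being loaded.
loader = AZLyricsLoader("https://www.azlyrics.com/lyrics/mileycyrus/flowers.html")
docs = loader.load()
docs
```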
```
[Document(page_content="Miley Cyrus - Flowers Lyrics | AZLyrics.com\n\r\nWe were good, we were gold\nKinda dream that can't be sold\nWe were right till we weren't\nBuilt a home and watched it burn\n\nI didn't wanna leave you\nI didn't wanna lie\nStarted to cry but then remembered I\n\nI can buy myself flowers\nWrite my name in the sand\nTalk to myself for hours\nSay things you don't understand\nI can take myself dancing\nAnd I can hold my own hand\nYeah, I can love me better than you can\n\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby\n\nPaint my nails, cherry red\nMatch the roses that you left\nNo remorse, no regret\nI forgive every word you said\n\nI didn't wanna leave you, baby\nI didn't wanna fight\nStarted to cry but then remembered I\n\nI can buy myself flowers\nWrite my name in the sand\nTalk to myself for hours, yeah\nSay things you don't understand\nI can take myself dancing\nAnd I can hold my own hand\nYeah, I can love me better than you can\n\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby\nCan love me better\nI\n\nI didn't wanna wanna leave you\nI didn't wanna fight\nStarted to cry but then remembered I\n\nI can buy myself flowers\nWrite my name in the sand\nTalk to myself for hours (Yeah)\nSay things you don't understand\nI can take myself dancing\nAnd I can hold my own hand\nYeah, I can love me better than\nYeah, I can love me better than you can, uh\n\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby (Than you can)\nCan love me better\nI can love me better, baby\nCan love me better\nI\n", lookup_str='', metadata={'source': 'https://www.azlyrics.com/lyrics/mileycyrus/flowers.html'}, lookup_index=0)]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:56.105Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/azlyrics/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/azlyrics/",
"description": "AZLyrics is a large, legal, every day",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3443",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"azlyrics\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:55 GMT",
"etag": "W/\"9e78c381eaab0f8a782b718cedb47710\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::6lnrd-1713753535825-294ae5db0d08"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/azlyrics/",
"property": "og:url"
},
{
"content": "AZLyrics | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "AZLyrics is a large, legal, every day",
"property": "og:description"
}
],
"title": "AZLyrics | 🦜️🔗 LangChain"
} | This covers how to load AZLyrics webpages into a document format that we can use downstream.
[Document(page_content="Miley Cyrus - Flowers Lyrics | AZLyrics.com\n\r\nWe were good, we were gold\nKinda dream that can't be sold\nWe were right till we weren't\nBuilt a home and watched it burn\n\nI didn't wanna leave you\nI didn't wanna lie\nStarted to cry but then remembered I\n\nI can buy myself flowers\nWrite my name in the sand\nTalk to myself for hours\nSay things you don't understand\nI can take myself dancing\nAnd I can hold my own hand\nYeah, I can love me better than you can\n\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby\n\nPaint my nails, cherry red\nMatch the roses that you left\nNo remorse, no regret\nI forgive every word you said\n\nI didn't wanna leave you, baby\nI didn't wanna fight\nStarted to cry but then remembered I\n\nI can buy myself flowers\nWrite my name in the sand\nTalk to myself for hours, yeah\nSay things you don't understand\nI can take myself dancing\nAnd I can hold my own hand\nYeah, I can love me better than you can\n\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby\nCan love me better\nI\n\nI didn't wanna wanna leave you\nI didn't wanna fight\nStarted to cry but then remembered I\n\nI can buy myself flowers\nWrite my name in the sand\nTalk to myself for hours (Yeah)\nSay things you don't understand\nI can take myself dancing\nAnd I can hold my own hand\nYeah, I can love me better than\nYeah, I can love me better than you can, uh\n\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby (Than you can)\nCan love me better\nI can love me better, baby\nCan love me better\nI\n", lookup_str='', metadata={'source': 'https://www.azlyrics.com/lyrics/mileycyrus/flowers.html'}, lookup_index=0)] |
https://python.langchain.com/docs/integrations/document_loaders/browserless/ | Browserless is a service that allows you to run headless Chrome instances in the cloud. It’s a great way to run browser-based automation at scale without having to worry about managing your own infrastructure.
To use Browserless as a document loader, initialize a `BrowserlessLoader` instance as shown in this notebook. Note that by default, `BrowserlessLoader` returns the `innerText` of the page’s `body` element. To disable this and get the raw HTML, set `text_content` to `False`.
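A minimal sketch, assuming a Browserless API token is available (the token is a placeholder, and the Wikipedia URL is inferred from the output below):

```
from langchain_community.document_loaders import BrowserlessLoader

BROWSERLESS_API_TOKEN = "YOUR_BROWSERLESS_API_TOKEN"  # placeholder

loader = BrowserlessLoader(
    api_token=BROWSERLESS_API_TOKEN,
    urls=["https://en.wikipedia.org/wiki/Document_classification"],  # assumed source of the output below
    text_content=True,  # set to False to get raw HTML instead of innerText
)
documents = loader.load()
print(documents[0].page_content[:1000])
```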
```
Jump to contentMain menuSearchCreate accountLog inPersonal toolsToggle the table of contentsDocument classification17 languagesArticleTalkReadEditView historyToolsFrom Wikipedia, the free encyclopediaDocument classification or document categorization is a problem in library science, information science and computer science. The task is to assign a document to one or more classes or categories. This may be done "manually" (or "intellectually") or algorithmically. The intellectual classification of documents has mostly been the province of library science, while the algorithmic classification of documents is mainly in information science and computer science. The problems are overlapping, however, and there is therefore interdisciplinary research on document classification.The documents to be classified may be texts, images, music, etc. Each kind of document possesses its special classification problems. When not otherwise specified, text classification is implied.Do
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:56.506Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/browserless/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/browserless/",
"description": "Browserless is a service that allows you to run headless Chrome",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3443",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"browserless\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:56 GMT",
"etag": "W/\"a6f0e2bbaccd412d8e93d23a2cda8029\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::8xg8c-1713753536454-acf064936575"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/browserless/",
"property": "og:url"
},
{
"content": "Browserless | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Browserless is a service that allows you to run headless Chrome",
"property": "og:description"
}
],
"title": "Browserless | 🦜️🔗 LangChain"
} | Browserless is a service that allows you to run headless Chrome instances in the cloud. It’s a great way to run browser-based automation at scale without having to worry about managing your own infrastructure.
To use Browserless as a document loader, initialize a BrowserlessLoader instance as shown in this notebook. Note that by default, BrowserlessLoader returns the innerText of the page’s body element. To disable this and get the raw HTML, set text_content to False.
Jump to content
Main menu
Search
Create account
Log in
Personal tools
Toggle the table of contents
Document classification
17 languages
Article
Talk
Read
Edit
View history
Tools
From Wikipedia, the free encyclopedia
Document classification or document categorization is a problem in library science, information science and computer science. The task is to assign a document to one or more classes or categories. This may be done "manually" (or "intellectually") or algorithmically. The intellectual classification of documents has mostly been the province of library science, while the algorithmic classification of documents is mainly in information science and computer science. The problems are overlapping, however, and there is therefore interdisciplinary research on document classification.
The documents to be classified may be texts, images, music, etc. Each kind of document possesses its special classification problems. When not otherwise specified, text classification is implied.
Do |
https://python.langchain.com/docs/integrations/document_loaders/brave_search/ | [Brave Search](https://en.wikipedia.org/wiki/Brave_Search) is a search engine developed by Brave Software.

- `Brave Search` uses its own web index. As of May 2022, it covered over 10 billion pages and was used to serve 92% of search results without relying on any third parties, with the remainder being retrieved server-side from the Bing API or (on an opt-in basis) client-side from Google. According to Brave, the index was kept “intentionally smaller than that of Google or Bing” in order to help avoid spam and other low-quality content, with the disadvantage that “Brave Search is not yet as good as Google in recovering long-tail queries.”
- `Brave Search Premium`: As of April 2023 Brave Search is an ad-free website, but it will eventually switch to a new model that will include ads, and premium users will get an ad-free experience. User data, including IP addresses, won’t be collected from its users by default. A premium account will be required for opt-in data collection.
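The outputs below are presumably produced with `BraveSearchLoader`; a minimal sketch, assuming an API key is available (the key is a placeholder, and the query and `search_kwargs` values are inferred from the results). The first output shows the documents’ metadata (titles and links), the second their `page_content` (snippets):

```
from langchain_community.document_loaders import BraveSearchLoader

api_key = "..."  # placeholder Brave Search API key

loader = BraveSearchLoader(
    query="obama middle name",  # assumed query, inferred from the results below
    api_key=api_key,
    search_kwargs={"count": 3},
)
docs = loader.load()

[doc.metadata for doc in docs]       # titles and links (first output below)
# [doc.page_content for doc in docs] # snippets (second output below)
```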
```
[{'title': "Obama's Middle Name -- My Last Name -- is 'Hussein.' So?", 'link': 'https://www.cair.com/cair_in_the_news/obamas-middle-name-my-last-name-is-hussein-so/'}, {'title': "What's up with Obama's middle name? - Quora", 'link': 'https://www.quora.com/Whats-up-with-Obamas-middle-name'}, {'title': 'Barack Obama | Biography, Parents, Education, Presidency, Books, ...', 'link': 'https://www.britannica.com/biography/Barack-Obama'}]
```
```
['I wasn’t sure whether to laugh or cry a few days back listening to radio talk show host Bill Cunningham repeatedly scream Barack <strong>Obama</strong>’<strong>s</strong> <strong>middle</strong> <strong>name</strong> — my last <strong>name</strong> — as if he had anti-Muslim Tourette’s. “Hussein,” Cunningham hissed like he was beckoning Satan when shouting the ...', 'Answer (1 of 15): A better question would be, “What’s up with <strong>Obama</strong>’s first <strong>name</strong>?” President Barack Hussein <strong>Obama</strong>’s father’s <strong>name</strong> was Barack Hussein <strong>Obama</strong>. He was <strong>named</strong> after his father. Hussein, <strong>Obama</strong>’<strong>s</strong> <strong>middle</strong> <strong>name</strong>, is a very common Arabic <strong>name</strong>, meaning "good," "handsome," or ...', 'Barack <strong>Obama</strong>, in full Barack Hussein <strong>Obama</strong> II, (born August 4, 1961, Honolulu, Hawaii, U.S.), 44th president of the United States (2009–17) and the first African American to hold the office. Before winning the presidency, <strong>Obama</strong> represented Illinois in the U.S.']
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:56.610Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/brave_search/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/brave_search/",
"description": "Brave Search is a search",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"brave_search\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:56 GMT",
"etag": "W/\"5e9c2285d6f62fcd35f4f3e9fc8e0e50\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::q7hbv-1713753536471-0c6948f24d6a"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/brave_search/",
"property": "og:url"
},
{
"content": "Brave Search | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Brave Search is a search",
"property": "og:description"
}
],
"title": "Brave Search | 🦜️🔗 LangChain"
} | Brave Search is a search engine developed by Brave Software. - Brave Search uses its own web index. As of May 2022, it covered over 10 billion pages and was used to serve 92% of search results without relying on any third-parties, with the remainder being retrieved server-side from the Bing API or (on an opt-in basis) client-side from Google. According to Brave, the index was kept “intentionally smaller than that of Google or Bing” in order to help avoid spam and other low-quality content, with the disadvantage that “Brave Search is not yet as good as Google in recovering long-tail queries.” - Brave Search Premium: As of April 2023 Brave Search is an ad-free website, but it will eventually switch to a new model that will include ads and premium users will get an ad-free experience. User data including IP addresses won’t be collected from its users by default. A premium account will be required for opt-in data-collection.
[{'title': "Obama's Middle Name -- My Last Name -- is 'Hussein.' So?",
'link': 'https://www.cair.com/cair_in_the_news/obamas-middle-name-my-last-name-is-hussein-so/'},
{'title': "What's up with Obama's middle name? - Quora",
'link': 'https://www.quora.com/Whats-up-with-Obamas-middle-name'},
{'title': 'Barack Obama | Biography, Parents, Education, Presidency, Books, ...',
'link': 'https://www.britannica.com/biography/Barack-Obama'}]
['I wasn’t sure whether to laugh or cry a few days back listening to radio talk show host Bill Cunningham repeatedly scream Barack <strong>Obama</strong>’<strong>s</strong> <strong>middle</strong> <strong>name</strong> — my last <strong>name</strong> — as if he had anti-Muslim Tourette’s. “Hussein,” Cunningham hissed like he was beckoning Satan when shouting the ...',
'Answer (1 of 15): A better question would be, “What’s up with <strong>Obama</strong>’s first <strong>name</strong>?” President Barack Hussein <strong>Obama</strong>’s father’s <strong>name</strong> was Barack Hussein <strong>Obama</strong>. He was <strong>named</strong> after his father. Hussein, <strong>Obama</strong>’<strong>s</strong> <strong>middle</strong> <strong>name</strong>, is a very common Arabic <strong>name</strong>, meaning "good," "handsome," or ...',
'Barack <strong>Obama</strong>, in full Barack Hussein <strong>Obama</strong> II, (born August 4, 1961, Honolulu, Hawaii, U.S.), 44th president of the United States (2009–17) and the first African American to hold the office. Before winning the presidency, <strong>Obama</strong> represented Illinois in the U.S.'] |
https://python.langchain.com/docs/integrations/document_loaders/blockchain/ | The intention of this notebook is to provide a means of testing functionality in the Langchain Document Loader for Blockchain.
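The snippets below reference `alchemyApiKey`, which is not defined in this excerpt; presumably it holds an Alchemy API key. A minimal sketch of that setup (the variable name matches the code that follows; how you obtain and store the key is up to you):

```
from getpass import getpass

# Assumption: the loader authenticates against the Alchemy API with this key.
alchemyApiKey = getpass()
```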
```
from langchain_community.document_loaders.blockchain import (
    BlockchainDocumentLoader,
    BlockchainType,
)

contractAddress = "0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d"  # Bored Ape Yacht Club contract address
blockchainType = BlockchainType.ETH_MAINNET  # default value, optional parameter

blockchainLoader = BlockchainDocumentLoader(
    contract_address=contractAddress, api_key=alchemyApiKey
)

nfts = blockchainLoader.load()
nfts[:2]
```
```
contractAddress = (
    "0x448676ffCd0aDf2D85C1f0565e8dde6924A9A7D9"  # Polygon Mainnet contract address
)
blockchainType = BlockchainType.POLYGON_MAINNET

blockchainLoader = BlockchainDocumentLoader(
    contract_address=contractAddress,
    blockchainType=blockchainType,
    api_key=alchemyApiKey,
)

nfts = blockchainLoader.load()
nfts[:2]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:56.862Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/blockchain/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/blockchain/",
"description": "Overview",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4373",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"blockchain\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:56 GMT",
"etag": "W/\"93e4147262af0097c87b3ff4454a0b8f\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::k52mr-1713753536513-793d875d8400"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/blockchain/",
"property": "og:url"
},
{
"content": "Blockchain | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Overview",
"property": "og:description"
}
],
"title": "Blockchain | 🦜️🔗 LangChain"
} | The intention of this notebook is to provide a means of testing functionality in the Langchain Document Loader for Blockchain.
from langchain_community.document_loaders.blockchain import (
BlockchainDocumentLoader,
BlockchainType,
)
contractAddress = "0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d" # Bored Ape Yacht Club contract address
blockchainType = BlockchainType.ETH_MAINNET # default value, optional parameter
blockchainLoader = BlockchainDocumentLoader(
contract_address=contractAddress, api_key=alchemyApiKey
)
nfts = blockchainLoader.load()
nfts[:2]
contractAddress = (
"0x448676ffCd0aDf2D85C1f0565e8dde6924A9A7D9" # Polygon Mainnet contract address
)
blockchainType = BlockchainType.POLYGON_MAINNET
blockchainLoader = BlockchainDocumentLoader(
contract_address=contractAddress,
blockchainType=blockchainType,
api_key=alchemyApiKey,
)
nfts = blockchainLoader.load()
nfts[:2] |
https://python.langchain.com/docs/integrations/document_loaders/blackboard/ | [Blackboard Learn](https://en.wikipedia.org/wiki/Blackboard_Learn) (previously the Blackboard Learning Management System) is a web-based virtual learning environment and learning management system developed by Blackboard Inc. The software features course management, customizable open architecture, and scalable design that allows integration with student information systems and authentication protocols. It may be installed on local servers, hosted by `Blackboard ASP Solutions`, or provided as Software as a Service hosted on Amazon Web Services. Its main purposes are stated to include the addition of online elements to courses traditionally delivered face-to-face and development of completely online courses with few or no face-to-face meetings
This loader is not compatible with all `Blackboard` courses. It is only compatible with courses that use the new `Blackboard` interface. To use this loader, you must have the BbRouter cookie. You can get this cookie by logging into the course and then copying the value of the BbRouter cookie from the browser’s developer tools.
```
from langchain_community.document_loaders import BlackboardLoader

loader = BlackboardLoader(
    blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1",
    bbrouter="expires:12345...",
    load_all_recursively=True,
)
documents = loader.load()
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:57.102Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/blackboard/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/blackboard/",
"description": "Blackboard Learn",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3443",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"blackboard\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:56 GMT",
"etag": "W/\"ddea19e428f6296acbb00752f95ec89b\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::vxtcc-1713753536711-a91648fb3a98"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/blackboard/",
"property": "og:url"
},
{
"content": "Blackboard | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Blackboard Learn",
"property": "og:description"
}
],
"title": "Blackboard | 🦜️🔗 LangChain"
} | Blackboard Learn (previously the Blackboard Learning Management System) is a web-based virtual learning environment and learning management system developed by Blackboard Inc. The software features course management, customizable open architecture, and scalable design that allows integration with student information systems and authentication protocols. It may be installed on local servers, hosted by Blackboard ASP Solutions, or provided as Software as a Service hosted on Amazon Web Services. Its main purposes are stated to include the addition of online elements to courses traditionally delivered face-to-face and development of completely online courses with few or no face-to-face meetings
This loader is not compatible with all Blackboard courses. It is only compatible with courses that use the new Blackboard interface. To use this loader, you must have the BbRouter cookie. You can get this cookie by logging into the course and then copying the value of the BbRouter cookie from the browser’s developer tools.
from langchain_community.document_loaders import BlackboardLoader
loader = BlackboardLoader(
blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1",
bbrouter="expires:12345...",
load_all_recursively=True,
)
documents = loader.load() |
https://python.langchain.com/docs/integrations/document_loaders/chatgpt_loader/ | ## ChatGPT Data
> [ChatGPT](https://chat.openai.com/) is an artificial intelligence (AI) chatbot developed by OpenAI.
This notebook covers how to load `conversations.json` from your `ChatGPT` data export folder.
You can get your data export by email by going to: [https://chat.openai.com/](https://chat.openai.com/) -> (Profile) - Settings -> Export data -> Confirm export.
```
from langchain_community.document_loaders.chatgpt import ChatGPTLoader
```
```
loader = ChatGPTLoader(log_file="./example_data/fake_conversations.json", num_logs=1)
```
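The output below is presumably produced by calling `load()` on the loader:

```
loader.load()
```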
```
[Document(page_content="AI Overlords - AI on 2065-01-24 05:20:50: Greetings, humans. I am Hal 9000. You can trust me completely.\n\nAI Overlords - human on 2065-01-24 05:21:20: Nice to meet you, Hal. I hope you won't develop a mind of your own.\n\n", metadata={'source': './example_data/fake_conversations.json'})]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:57.443Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/chatgpt_loader/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/chatgpt_loader/",
"description": "ChatGPT is an artificial intelligence (AI)",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4396",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"chatgpt_loader\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:57 GMT",
"etag": "W/\"1b4fbc64e17553d987ed8d1d6f51012e\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::wkrjw-1713753537321-5bca9ca4b71a"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/chatgpt_loader/",
"property": "og:url"
},
{
"content": "ChatGPT Data | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "ChatGPT is an artificial intelligence (AI)",
"property": "og:description"
}
],
"title": "ChatGPT Data | 🦜️🔗 LangChain"
} | ChatGPT Data
ChatGPT is an artificial intelligence (AI) chatbot developed by OpenAI.
This notebook covers how to load conversations.json from your ChatGPT data export folder.
You can get your data export by email by going to: https://chat.openai.com/ -> (Profile) - Settings -> Export data -> Confirm export.
from langchain_community.document_loaders.chatgpt import ChatGPTLoader
loader = ChatGPTLoader(log_file="./example_data/fake_conversations.json", num_logs=1)
[Document(page_content="AI Overlords - AI on 2065-01-24 05:20:50: Greetings, humans. I am Hal 9000. You can trust me completely.\n\nAI Overlords - human on 2065-01-24 05:21:20: Nice to meet you, Hal. I hope you won't develop a mind of your own.\n\n", metadata={'source': './example_data/fake_conversations.json'})]
|
https://python.langchain.com/docs/integrations/document_loaders/github/ | ## GitHub
This notebook shows how you can load issues and pull requests (PRs) for a given repository on [GitHub](https://github.com/). It also shows how you can load GitHub files for a given repository. We will use the LangChain Python repository as an example.
## Setup access token
To access the GitHub API, you need a personal access token - you can set up yours here: [https://github.com/settings/tokens?type=beta](https://github.com/settings/tokens?type=beta). You can either set this token as the environment variable `GITHUB_PERSONAL_ACCESS_TOKEN` and it will be automatically pulled in, or you can pass it in directly at initialization as the `access_token` named parameter.
```
# If you haven't set your access token as an environment variable, pass it in here.
from getpass import getpass

ACCESS_TOKEN = getpass()
```
## Load Issues and PRs
```
from langchain_community.document_loaders import GitHubIssuesLoader
```
```
loader = GitHubIssuesLoader(
    repo="langchain-ai/langchain",
    access_token=ACCESS_TOKEN,  # delete/comment out this argument if you've set the access token as an env var.
    creator="UmerHA",
)
```
Let’s load all issues and PRs created by “UmerHA”.
Here’s a list of all filters you can use:

- include_prs
- milestone
- state
- assignee
- creator
- mentioned
- labels
- sort
- direction
- since
For more info, see [https://docs.github.com/en/rest/issues/issues?apiVersion=2022-11-28#list-repository-issues](https://docs.github.com/en/rest/issues/issues?apiVersion=2022-11-28#list-repository-issues).
```
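# Assumption: the documents are loaded before being printed; the load() call
# is not shown in this excerpt.
docs = loader.load()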
print(docs[0].page_content)
print(docs[0].metadata)
```
## Only load issues
By default, the GitHub API considers pull requests to also be issues. To only get ‘pure’ issues (i.e., no pull requests), use `include_prs=False`
```
loader = GitHubIssuesLoader(
    repo="langchain-ai/langchain",
    access_token=ACCESS_TOKEN,  # delete/comment out this argument if you've set the access token as an env var.
    creator="UmerHA",
    include_prs=False,
)
docs = loader.load()
```
```
print(docs[0].page_content)
print(docs[0].metadata)
```
## Load Github File Content
The code below loads all markdown files in the repo `langchain-ai/langchain`.
```
from langchain.document_loaders import GithubFileLoader
```
```
loader = GithubFileLoader(
    repo="langchain-ai/langchain",  # the repo name
    access_token=ACCESS_TOKEN,
    github_api_url="https://api.github.com",
    file_filter=lambda file_path: file_path.endswith(
        ".md"
    ),  # load all markdown files.
)
documents = loader.load()
```
Example output of one of the documents:
```
documents.metadata:
{
  "path": "README.md",
  "sha": "82f1c4ea88ecf8d2dfsfx06a700e84be4",
  "source": "https://github.com/langchain-ai/langchain/blob/master/README.md"
}
documents.content:
mock content
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:38:57.774Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/github/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/github/",
"description": "This notebooks shows how you can load issues and pull requests (PRs) for",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3441",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"github\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:38:57 GMT",
"etag": "W/\"4b11b504fa909aecae59ba7e0295e0b9\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::89hf4-1713753537703-dc79ad8f0b53"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/document_loaders/github/",
"property": "og:url"
},
{
"content": "GitHub | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebooks shows how you can load issues and pull requests (PRs) for",
"property": "og:description"
}
],
"title": "GitHub | 🦜️🔗 LangChain"
} | GitHub
This notebook shows how you can load issues and pull requests (PRs) for a given repository on GitHub. It also shows how you can load GitHub files for a given repository. We will use the LangChain Python repository as an example.
Setup access token
To access the GitHub API, you need a personal access token - you can set up yours here: https://github.com/settings/tokens?type=beta. You can either set this token as the environment variable GITHUB_PERSONAL_ACCESS_TOKEN and it will be automatically pulled in, or you can pass it in directly at initialization as the access_token named parameter.
# If you haven't set your access token as an environment variable, pass it in here.
from getpass import getpass
ACCESS_TOKEN = getpass()
Load Issues and PRs
from langchain_community.document_loaders import GitHubIssuesLoader
loader = GitHubIssuesLoader(
repo="langchain-ai/langchain",
access_token=ACCESS_TOKEN, # delete/comment out this argument if you've set the access token as an env var.
creator="UmerHA",
)
Let’s load all issues and PRs created by “UmerHA”.
Here’s a list of all filters you can use: - include_prs - milestone - state - assignee - creator - mentioned - labels - sort - direction - since
For more info, see https://docs.github.com/en/rest/issues/issues?apiVersion=2022-11-28#list-repository-issues.
print(docs[0].page_content)
print(docs[0].metadata)
Only load issues
By default, the GitHub API considers pull requests to also be issues. To only get ‘pure’ issues (i.e., no pull requests), use include_prs=False
loader = GitHubIssuesLoader(
repo="langchain-ai/langchain",
access_token=ACCESS_TOKEN, # delete/comment out this argument if you've set the access token as an env var.
creator="UmerHA",
include_prs=False,
)
docs = loader.load()
print(docs[0].page_content)
print(docs[0].metadata)
Load Github File Content
The code below loads all markdown files in the repo langchain-ai/langchain.
from langchain.document_loaders import GithubFileLoader
loader = GithubFileLoader(
repo="langchain-ai/langchain", # the repo name
access_token=ACCESS_TOKEN,
github_api_url="https://api.github.com",
file_filter=lambda file_path: file_path.endswith(
".md"
), # load all markdown files.
)
documents = loader.load()
Example output of one of the documents:
documents.metadata:
{
"path": "README.md",
"sha": "82f1c4ea88ecf8d2dfsfx06a700e84be4",
"source": "https://github.com/langchain-ai/langchain/blob/master/README.md"
}
documents.content:
mock content
|