36b055cb0e2b-3
# with 2 messages overlapping
chunk_size = 8
overlap = 2
https://python.langchain.com/docs/integrations/chat_loaders/facebook
36b055cb0e2b-4
training_examples = [ conversation_messages[i: i + chunk_size] for conversation_messages in training_data for i in range( 0, len(conversation_messages) - chunk_size + 1, chunk_size - overlap) ] len(training_examples) 4. Fine-tune the model​ It's time to fine-tune the model. Make sure you have openai installed and ha...
https://python.langchain.com/docs/integrations/chat_loaders/facebook
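The windowing logic above can be run as a self-contained sketch, with plain lists standing in for chat messages and the same hypothetical chunk_size=8 / overlap=2 values:

```python
# Self-contained sketch of the overlapping-window chunking shown above.
# Plain lists stand in for conversations of chat messages.
chunk_size = 8
overlap = 2

training_data = [list(range(10)), list(range(14))]  # two fake conversations

training_examples = [
    conversation_messages[i : i + chunk_size]
    for conversation_messages in training_data
    for i in range(0, len(conversation_messages) - chunk_size + 1, chunk_size - overlap)
]

# Every window has chunk_size messages; adjacent windows from the same
# conversation share `overlap` messages.
print(len(training_examples))
```

Consecutive windows step by chunk_size - overlap = 6, so neighbouring examples from the same conversation share two messages.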
36b055cb0e2b-5
# OpenAI audits each training file for compliance reasons. # This may take a few minutes status = openai.File.retrieve(training_file.id).status start_time = time.time() while status != "processed": print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) status = openai.File.r...
https://python.langchain.com/docs/integrations/chat_loaders/facebook
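The polling loop above can be sketched without an OpenAI account by faking the status sequence; `fake_retrieve_status` below is a hypothetical stand-in for `openai.File.retrieve(...).status`:

```python
import time

# Hypothetical stand-in for openai.File.retrieve(training_file.id).status:
# the file "processes" after a couple of polls so the loop terminates.
_polls = iter(["uploaded", "pending", "processed"])

def fake_retrieve_status():
    return next(_polls)

status = fake_retrieve_status()
start_time = time.time()
while status != "processed":
    print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True)
    time.sleep(0.01)  # the real docs sleep 5s between polls
    status = fake_retrieve_status()
print(f"\nFile processed after {time.time() - start_time:.2f}s")
```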
52fcb0dc9e32-0
GMail This loader shows how to load data from GMail. There are many ways you might want to load data from GMail; this loader is currently fairly opinionated in how it does so. It first looks for all messages that you have sent. It then looks for messages where you are responding to a previous emai...
https://python.langchain.com/docs/integrations/chat_loaders/gmail
52fcb0dc9e32-1
SCOPES = ['https://www.googleapis.com/auth/gmail.readonly'] creds = None # The file token.json stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists('email_token.json'): creds = Credentials.from_authorized_user_file('e...
https://python.langchain.com/docs/integrations/chat_loaders/gmail
c3276c3bdad0-0
Slack This notebook shows how to use the Slack chat loader. This class helps map exported Slack conversations to LangChain chat messages. The process has three steps: Export the desired conversation thread by following the instructions here. Create the SlackChatLoader with the file path pointing to the JSON file or dire...
https://python.langchain.com/docs/integrations/chat_loaders/slack
c3276c3bdad0-1
raw_messages = loader.lazy_load() # Merge consecutive messages from the same sender into a single message merged_messages = merge_chat_runs(raw_messages) # Convert messages from "U0500003428" to AI messages messages: List[ChatSession] = list(map_ai_messages(merged_messages, sender="U0500003428")) Next Steps​ You can th...
https://python.langchain.com/docs/integrations/chat_loaders/slack
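What merge_chat_runs does can be sketched in plain Python with itertools.groupby; the message dicts below are hypothetical, not LangChain's message objects:

```python
from itertools import groupby

# Plain-Python sketch of what merge_chat_runs does: collapse consecutive
# messages from the same sender into a single message.
raw_messages = [
    {"sender": "U0500003428", "text": "Hi"},
    {"sender": "U0500003428", "text": "How are you?"},
    {"sender": "U123", "text": "Good, thanks!"},
]

merged = [
    {"sender": sender, "text": "\n".join(m["text"] for m in run)}
    for sender, run in groupby(raw_messages, key=lambda m: m["sender"])
]
print(merged)
```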
7ba310a3aed6-0
iMessage This notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages. On macOS, iMessage stores conversations in a SQLite database at ~/Library/Messages/chat.db (at least for macOS Ventura 13.4). The IMessageChatLoader loads from this database file...
https://python.langchain.com/docs/integrations/chat_loaders/imessage
7ba310a3aed6-1
# Download file to chat.db download_drive_file(url) 2. Create the Chat Loader Provide the loader with the file path to the chat.db file. You can optionally specify the user ID that maps to an AI message, as well as configure whether to merge message runs. from langchain.chat_loaders.imessage import IMessageChatLoader ...
https://python.langchain.com/docs/integrations/chat_loaders/imessage
7ba310a3aed6-2
raw_messages = loader.lazy_load() # Merge consecutive messages from the same sender into a single message merged_messages = merge_chat_runs(raw_messages) # Convert messages from "Tortoise" to AI messages. Do you have a guess who these conversations are between? chat_sessions: List[ChatSession] = list(map_ai_messages(me...
https://python.langchain.com/docs/integrations/chat_loaders/imessage
7ba310a3aed6-3
# OpenAI audits each training file for compliance reasons. # This may take a few minutes status = openai.File.retrieve(training_file.id).status start_time = time.time() while status != "processed": print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) status = openai.File.r...
https://python.langchain.com/docs/integrations/chat_loaders/imessage
12bf2cbe6826-0
Telegram This notebook shows how to use the Telegram chat loader. This class helps map exported Telegram conversations to LangChain chat messages. The process has three steps: Export the chat .txt file by copying chats from the Telegram app and pasting them in a file on your local computer Create the TelegramChatLoader 
https://python.langchain.com/docs/integrations/chat_loaders/telegram
12bf2cbe6826-1
"text_entities": [ { "type": "plain", "text": "What did you just say?" } ] } ] } 2. Create the Chat Loader All that's required is the file path. You can optionally specify the user name that maps to an AI message, as well as configure whether to merge message runs. from langchain.chat_loaders.telegram import TelegramCh...
https://python.langchain.com/docs/integrations/chat_loaders/telegram
12bf2cbe6826-2
raw_messages = loader.lazy_load() # Merge consecutive messages from the same sender into a single message merged_messages = merge_chat_runs(raw_messages) # Convert messages from "Jiminy Cricket" to AI messages messages: List[ChatSession] = list(map_ai_messages(merged_messages, sender="Jiminy Cricket")) Next Steps​ You ...
https://python.langchain.com/docs/integrations/chat_loaders/telegram
6009eeaf994e-0
This notebook shows how to use the WhatsApp chat loader. This class helps map exported WhatsApp conversations to LangChain chat messages. To export your WhatsApp conversation(s), complete the following steps: whatsapp_chat.txt [8/15/23, 9:12:33 AM] Dr. Feather: ‎Messages and calls are end-to-end encrypted. ...
https://python.langchain.com/docs/integrations/chat_loaders/whatsapp
6009eeaf994e-1
The load() (or lazy_load) methods return a list of "ChatSessions" that currently store the list of messages per loaded conversation. [{'messages': [AIMessage(content='I spotted a rare Hyacinth Macaw yesterday in the Amazon Rainforest. Such a magnificent creature!', additional_kwargs={'sender': 'Dr. Feather', 'events': ...
https://python.langchain.com/docs/integrations/chat_loaders/whatsapp
6009eeaf994e-2
You can then use these messages how you see fit, such as fine-tuning a model, selecting few-shot examples, or directly making predictions for the next message.
https://python.langchain.com/docs/integrations/chat_loaders/whatsapp
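Conceptually, map_ai_messages tags the chosen sender's messages as AI turns and everyone else's as human turns. A plain-dict sketch of that idea (not LangChain's actual types; the helper name and dict shape are hypothetical):

```python
# Plain-dict sketch of the idea behind map_ai_messages: messages from the
# chosen sender become "ai" turns, everyone else becomes "human". The
# session shape loosely mirrors the {'messages': [...]} output shown above.
def tag_ai_messages(session, sender):
    mapped = []
    for msg in session["messages"]:
        role = "ai" if msg["sender"] == sender else "human"
        mapped.append({"role": role, "content": msg["content"]})
    return {"messages": mapped}

session = {"messages": [
    {"sender": "Dr. Feather", "content": "I spotted a rare Hyacinth Macaw yesterday."},
    {"sender": "Jungle Jane", "content": "Wow, lucky you!"},
]}
tagged = tag_ai_messages(session, sender="Dr. Feather")
print(tagged["messages"])
```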
317dc440d55d-0
This notebook shows how to load chat messages from Twitter to fine-tune on. We do this by utilizing Apify. First, use Apify to export tweets. An example # Filter out tweets that reference other tweets, because it's a bit weird tweets = [d["full_text"] for d in data if "t.co" not in d['full_text']] # Create them as AI m...
https://python.langchain.com/docs/integrations/chat_loaders/twitter
1dd384fc67fa-0
Beautiful Soup Beautiful Soup offers fine-grained control over HTML content, enabling specific tag extraction, removal, and content cleaning. It's suited for cases where you want to extract specific information and clean up the HTML content according to your needs. For example, we can scrape text content within <p>, <...
https://python.langchain.com/docs/integrations/document_transformers/beautiful_soup
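The tag-level filtering described above can be sketched with just the standard library (the real transformer uses Beautiful Soup; the class and tag set below are hypothetical stand-ins):

```python
from html.parser import HTMLParser

# Stdlib sketch of the tag-filtering idea: keep only text that appears
# inside the listed tags, discarding everything else.
class TagTextExtractor(HTMLParser):
    KEEP = {"p", "li", "div", "a"}

    def __init__(self):
        super().__init__()
        self.depth = 0       # how many KEEP tags we are currently inside
        self.chunks = []     # extracted text fragments

    def handle_starttag(self, tag, attrs):
        if tag in self.KEEP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.KEEP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth and data.strip():
            self.chunks.append(data.strip())

parser = TagTextExtractor()
parser.feed("<h1>Ignored</h1><p>Kept paragraph</p><div>Kept div</div>")
print(parser.chunks)
```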
e6611293b361-0
docai from langchain.document_loaders.blob_loaders import Blob from langchain.document_loaders.parsers import DocAIParser DocAI is a Google Cloud platform to transform unstructured data from documents into structured data, making it easier to understand, analyze, and consume. You can read more about it: https://cloud.g...
https://python.langchain.com/docs/integrations/document_transformers/docai
e6611293b361-1
And when they're finished, you can parse the results: parser.is_running(operations) results = parser.get_results(operations) print(results[0]) DocAIParsingResults(source_path='gs://vertex-pgt/examples/goog-exhibit-99-1-q1-2023-19.pdf', parsed_path='gs://vertex-pgt/test/run1/16447136779727347991/0') And now we can final...
https://python.langchain.com/docs/integrations/document_transformers/docai
0827c79c4783-0
Doctran Extract Properties We can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata. Extracting metadata from documents is helpful for a variety of tasks, including: Classification: classifying documents into different categories Da...
https://python.langchain.com/docs/integrations/document_transformers/doctran_extract_properties
0827c79c4783-1
Marketing Initiatives and Campaigns Our marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully ...
https://python.langchain.com/docs/integrations/document_transformers/doctran_extract_properties
0827c79c4783-2
HR Updates and Employee Benefits Recently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our ...
https://python.langchain.com/docs/integrations/document_transformers/doctran_extract_properties
0827c79c4783-3
Jason Fan Cofounder & CEO Psychic jason@psychic.dev documents = [Document(page_content=sample_text)] properties = [ { "name": "category", "description": "What type of email this is.", "type": "string", "enum": ["update", "action_item", "customer_feedback", "announcement", "other"], "required": True, }, { "name": "ment...
https://python.langchain.com/docs/integrations/document_transformers/doctran_extract_properties
7f28b2046bda-0
Documents used in a vector store knowledge base are typically stored in narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of re...
https://python.langchain.com/docs/integrations/document_transformers/doctran_interrogate_document
7f28b2046bda-1
Marketing Initiatives and Campaigns Our marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully ...
https://python.langchain.com/docs/integrations/document_transformers/doctran_interrogate_document
7f28b2046bda-2
Best regards, Jason Fan Cofounder & CEO Psychic jason@psychic.dev """ print(sample_text) After interrogating a document, the result will be returned as a new document with questions and answers provided in the metadata. { "questions_and_answers": [ { "question": "What is the purpose of this document?", "answer": "The ...
https://python.langchain.com/docs/integrations/document_transformers/doctran_interrogate_document
6d553453f4cc-0
Comparing documents through embeddings has the benefit of working across multiple languages. "Harrison says hello" and "Harrison dice hola" will occupy similar positions in the vector space because they have the same meaning semantically. However, it can still be useful to use an LLM to translate documents into other langu...
https://python.langchain.com/docs/integrations/document_transformers/doctran_translate_document
6d553453f4cc-1
Marketing Initiatives and Campaigns Our marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully ...
https://python.langchain.com/docs/integrations/document_transformers/doctran_translate_document
6d553453f4cc-2
Medidas de seguridad y privacidad Como parte de nuestro compromiso continuo para garantizar la seguridad y privacidad de los datos de nuestros clientes, hemos implementado medidas robustas en todos nuestros sistemas. Nos gustaría elogiar a John Doe (correo electrónico: john.doe@example.com) del departamento de TI por s...
https://python.langchain.com/docs/integrations/document_transformers/doctran_translate_document
6d553453f4cc-3
Proyectos de investigación y desarrollo En nuestra búsqueda de la innovación, nuestro departamento de investigación y desarrollo ha estado trabajando incansablemente en varios proyectos. Me gustaría reconocer el excepcional trabajo de David Rodríguez (correo electrónico: david.rodriguez@example.com) en su papel de líde...
https://python.langchain.com/docs/integrations/document_transformers/doctran_translate_document
e5aa2ff097b4-0
html2text html2text is a Python script that converts a page of HTML into clean, easy-to-read plain ASCII text. The ASCII also happens to be valid Markdown (a text-to-HTML format). from langchain.document_loaders import AsyncHtmlLoader
https://python.langchain.com/docs/integrations/document_transformers/html2text
e5aa2ff097b4-1
urls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"] loader = AsyncHtmlLoader(urls) docs = loader.load() Fetching pages: 100%|############| 2/2 [00:00<00:00, 10.75it/s] from langchain.document_transformers import Html2TextTransformer urls = ["https://www.espn.com", "https://lilianweng...
https://python.langchain.com/docs/integrations/document_transformers/html2text
e5aa2ff097b4-2
docs_transformed[1].page_content[1000:2000] "t's brain,\ncomplemented by several key components:\n\n * **Planning**\n * Subgoal and decomposition: The agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks.\n * Reflection and refinement: The agent can do self-criti...
https://python.langchain.com/docs/integrations/document_transformers/html2text
7741490a5510-0
Nuclia automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing. The Nuclia Understanding API document transformer splits text into paragraphs...
https://python.langchain.com/docs/integrations/document_transformers/nuclia_transformer
adce1e8a80fe-0
OpenAI Functions Metadata Tagger It can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious. The OpenAI...
https://python.langchain.com/docs/integrations/document_transformers/openai_metadata_tagger
adce1e8a80fe-1
enhanced_documents = document_transformer.transform_documents(original_documents) import json print( *[d.page_content + "\n\n" + json.dumps(d.metadata) for d in enhanced_documents], sep="\n\n---------------\n\n" ) Review of The Bee Movie By Roger Ebert This is the greatest movie ever made. 4 out of 5 stars. {"movie_...
https://python.langchain.com/docs/integrations/document_transformers/openai_metadata_tagger
adce1e8a80fe-2
This movie was super boring. 1 out of 5 stars. {"movie_title": "The Godfather", "critic": "Anonymous", "tone": "negative", "rating": 1, "reliable": false} Customization​ You can pass the underlying tagging chain the standard LLMChain arguments in the document transformer constructor. For example, if you wanted to ask ...
https://python.langchain.com/docs/integrations/document_transformers/openai_metadata_tagger
7e1c03ed1626-0
YouTube transcripts YouTube is an online video sharing and social media platform owned by Google. This notebook covers how to load documents from YouTube transcripts. from langchain.document_loaders import YoutubeLoader # !pip install youtube-transcript-api loader = YoutubeLoader.from_youtube_url( "https://www.youtub...
https://python.langchain.com/docs/integrations/document_loaders/youtube_transcript
7e1c03ed1626-1
# Use Youtube Ids youtube_loader_ids = GoogleApiYoutubeLoader( google_api_client=google_api_client, video_ids=["TrdevFK_am4"], add_video_info=True ) # returns a list of Documents youtube_loader_ids.load()
https://python.langchain.com/docs/integrations/document_loaders/youtube_transcript
edccca7b1f22-0
Below is an example of how to load a local acreom vault into LangChain. As the local vault in acreom is a folder of plain text .md files, the loader requires the path to the directory. Vault files may contain some metadata which is stored as a YAML header. These values will be added to the document’s metadata if colle...
https://python.langchain.com/docs/integrations/document_loaders/acreom
2fe1c43e5b83-0
The Etherscan loader uses the Etherscan API to load transaction histories for a specific account on the Ethereum Mainnet. You will need an Etherscan API key to proceed. The free API key has a quota of 5 calls per second. If the account has no corresponding transactions, the loader will return a list with one document. The content of ...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
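The request behind the loader can be sketched as a plain URL build against Etherscan's documented account/txlist endpoint; the address and key below are placeholders, and nothing is sent over the network:

```python
from urllib.parse import urlencode

# Sketch of the Etherscan "account txlist" request the loader makes under
# the hood. Address and API key are placeholders; no request is sent.
def txlist_url(address, api_key):
    params = {
        "module": "account",
        "action": "txlist",
        "address": address,
        "startblock": 0,
        "endblock": 99999999,
        "sort": "asc",
        "apikey": api_key,
    }
    return "https://api.etherscan.io/api?" + urlencode(params)

url = txlist_url("0x0000000000000000000000000000000000000000", "YOUR_API_KEY")
print(url)
```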
2fe1c43e5b83-1
[Document(page_content="{'blockNumber': '1723771', 'timeStamp': '1466213371', 'hash': '0xe00abf5fa83a4b23ee1cc7f07f9dda04ab5fa5efe358b315df8b76699a83efc4', 'nonce': '3155', 'blockHash': '0xc2c2207bcaf341eed07f984c9a90b3f8e8bdbdbd2ac6562f8c2f5bfa4b51299d', 'transactionIndex': '5', 'from': '0x3763e6e1228bfeab94191c856412...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
2fe1c43e5b83-2
Document(page_content="{'blockNumber': '1727090', 'timeStamp': '1466262018', 'hash': '0xd5a779346d499aa722f72ffe7cd3c8594a9ddd91eb7e439e8ba92ceb7bc86928', 'nonce': '3267', 'blockHash': '0xc0cff378c3446b9b22d217c2c5f54b1c85b89a632c69c55b76cdffe88d2b9f4d', 'transactionIndex': '20', 'from': '0x3763e6e1228bfeab94191c856412...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
2fe1c43e5b83-3
Document(page_content="{'blockNumber': '1730337', 'timeStamp': '1466308222', 'hash': '0xceaffdb3766d2741057d402738eb41e1d1941939d9d438c102fb981fd47a87a4', 'nonce': '3344', 'blockHash': '0x3a52d28b8587d55c621144a161a0ad5c37dd9f7d63b629ab31da04fa410b2cfa', 'transactionIndex': '1', 'from': '0x3763e6e1228bfeab94191c856412d...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
2fe1c43e5b83-4
Document(page_content="{'blockNumber': '1733479', 'timeStamp': '1466352351', 'hash': '0x720d79bf78775f82b40280aae5abfc347643c5f6708d4bf4ec24d65cd01c7121', 'nonce': '3367', 'blockHash': '0x9928661e7ae125b3ae0bcf5e076555a3ee44c52ae31bd6864c9c93a6ebb3f43e', 'transactionIndex': '0', 'from': '0x3763e6e1228bfeab94191c856412d...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
2fe1c43e5b83-5
Document(page_content="{'blockNumber': '1734172', 'timeStamp': '1466362463', 'hash': '0x7a062d25b83bafc9fe6b22bc6f5718bca333908b148676e1ac66c0adeccef647', 'nonce': '1016', 'blockHash': '0x8a8afe2b446713db88218553cfb5dd202422928e5e0bc00475ed2f37d95649de', 'transactionIndex': '4', 'from': '0x16545fb79dbee1ad3a7f868b7661c...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
2fe1c43e5b83-6
Document(page_content="{'blockNumber': '1737276', 'timeStamp': '1466406037', 'hash': '0xa4e89bfaf075abbf48f96700979e6c7e11a776b9040113ba64ef9c29ac62b19b', 'nonce': '1024', 'blockHash': '0xe117cad73752bb485c3bef24556e45b7766b283229180fcabc9711f3524b9f79', 'transactionIndex': '35', 'from': '0x16545fb79dbee1ad3a7f868b7661...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
2fe1c43e5b83-7
Document(page_content="{'blockNumber': '1740314', 'timeStamp': '1466450262', 'hash': '0x6e1a22dcc6e2c77a9451426fb49e765c3c459dae88350e3ca504f4831ec20e8a', 'nonce': '1051', 'blockHash': '0x588d17842819a81afae3ac6644d8005c12ce55ddb66c8d4c202caa91d4e8fdbe', 'transactionIndex': '6', 'from': '0x16545fb79dbee1ad3a7f868b7661c...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
2fe1c43e5b83-8
Document(page_content="{'blockNumber': '1743384', 'timeStamp': '1466494099', 'hash': '0xdbfcc15f02269fc3ae27f69e344a1ac4e08948b12b76ebdd78a64d8cafd511ef', 'nonce': '1068', 'blockHash': '0x997245108c84250057fda27306b53f9438ad40978a95ca51d8fd7477e73fbaa7', 'transactionIndex': '2', 'from': '0x16545fb79dbee1ad3a7f868b7661c...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
2fe1c43e5b83-9
Document(page_content="{'blockNumber': '1746405', 'timeStamp': '1466538123', 'hash': '0xbd4f9602f7fff4b8cc2ab6286efdb85f97fa114a43f6df4e6abc88e85b89e97b', 'nonce': '1092', 'blockHash': '0x3af3966cdaf22e8b112792ee2e0edd21ceb5a0e7bf9d8c168a40cf22deb3690c', 'transactionIndex': '0', 'from': '0x16545fb79dbee1ad3a7f868b7661c...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
2fe1c43e5b83-10
Document(page_content="{'blockNumber': '1749459', 'timeStamp': '1466582044', 'hash': '0x28c327f462cc5013d81c8682c032f014083c6891938a7bdeee85a1c02c3e9ed4', 'nonce': '1096', 'blockHash': '0x5fc5d2a903977b35ce1239975ae23f9157d45d7bd8a8f6205e8ce270000797f9', 'transactionIndex': '1', 'from': '0x16545fb79dbee1ad3a7f868b7661c...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
2fe1c43e5b83-11
Document(page_content="{'blockNumber': '1752614', 'timeStamp': '1466626168', 'hash': '0xc3849e550ca5276d7b3c51fa95ad3ae62c1c164799d33f4388fe60c4e1d4f7d8', 'nonce': '1118', 'blockHash': '0x88ef054b98e47504332609394e15c0a4467f84042396717af6483f0bcd916127', 'transactionIndex': '11', 'from': '0x16545fb79dbee1ad3a7f868b7661...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
2fe1c43e5b83-12
Document(page_content="{'blockNumber': '1755659', 'timeStamp': '1466669931', 'hash': '0xb9f891b7c3d00fcd64483189890591d2b7b910eda6172e3bf3973c5fd3d5a5ae', 'nonce': '1133', 'blockHash': '0x2983972217a91343860415d1744c2a55246a297c4810908bbd3184785bc9b0c2', 'transactionIndex': '14', 'from': '0x16545fb79dbee1ad3a7f868b7661...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
2fe1c43e5b83-13
Document(page_content="{'blockNumber': '1758709', 'timeStamp': '1466713652', 'hash': '0xd6cce5b184dc7fce85f305ee832df647a9c4640b68e9b79b6f74dc38336d5622', 'nonce': '1147', 'blockHash': '0x1660de1e73067251be0109d267a21ffc7d5bde21719a3664c7045c32e771ecf9', 'transactionIndex': '1', 'from': '0x16545fb79dbee1ad3a7f868b7661c...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
2fe1c43e5b83-14
Document(page_content="{'blockNumber': '1761783', 'timeStamp': '1466757809', 'hash': '0xd01545872629956867cbd65fdf5e97d0dde1a112c12e76a1bfc92048d37f650f', 'nonce': '1169', 'blockHash': '0x7576961afa4218a3264addd37a41f55c444dd534e9410dbd6f93f7fe20e0363e', 'transactionIndex': '2', 'from': '0x16545fb79dbee1ad3a7f868b7661c...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
2fe1c43e5b83-15
Document(page_content="{'blockNumber': '1764895', 'timeStamp': '1466801683', 'hash': '0x620b91b12af7aac75553b47f15742e2825ea38919cfc8082c0666f404a0db28b', 'nonce': '1186', 'blockHash': '0x2e687643becd3c36e0c396a02af0842775e17ccefa0904de5aeca0a9a1aa795e', 'transactionIndex': '7', 'from': '0x16545fb79dbee1ad3a7f868b7661c...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
2fe1c43e5b83-16
Document(page_content="{'blockNumber': '1767936', 'timeStamp': '1466845682', 'hash': '0x758efa27576cd17ebe7b842db4892eac6609e3962a4f9f57b7c84b7b1909512f', 'nonce': '1211', 'blockHash': '0xb01d8fd47b3554a99352ac3e5baf5524f314cfbc4262afcfbea1467b2d682898', 'transactionIndex': '0', 'from': '0x16545fb79dbee1ad3a7f868b7661c...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
2fe1c43e5b83-17
Document(page_content="{'blockNumber': '1770911', 'timeStamp': '1466888890', 'hash': '0x9d84470b54ab44b9074b108a0e506cd8badf30457d221e595bb68d63e926b865', 'nonce': '1212', 'blockHash': '0x79a9de39276132dab8bf00dc3e060f0e8a14f5e16a0ee4e9cc491da31b25fe58', 'transactionIndex': '0', 'from': '0x16545fb79dbee1ad3a7f868b7661c...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
2fe1c43e5b83-18
Document(page_content="{'blockNumber': '1774044', 'timeStamp': '1466932983', 'hash': '0x958d85270b58b80f1ad228f716bbac8dd9da7c5f239e9f30d8edeb5bb9301d20', 'nonce': '1240', 'blockHash': '0x69cee390378c3b886f9543fb3a1cb2fc97621ec155f7884564d4c866348ce539', 'transactionIndex': '2', 'from': '0x16545fb79dbee1ad3a7f868b7661c...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
2fe1c43e5b83-19
Document(page_content="{'blockNumber': '1777057', 'timeStamp': '1466976422', 'hash': '0xe76ca3603d2f4e7134bdd7a1c3fd553025fc0b793f3fd2a75cd206b8049e74ab', 'nonce': '1248', 'blockHash': '0xc7cacda0ac38c99f1b9bccbeee1562a41781d2cfaa357e8c7b4af6a49584b968', 'transactionIndex': '7', 'from': '0x16545fb79dbee1ad3a7f868b7661c...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
2fe1c43e5b83-20
Document(page_content="{'blockNumber': '1780120', 'timeStamp': '1467020353', 'hash': '0xc5ec8cecdc9f5ed55a5b8b0ad79c964fb5c49dc1136b6a49e981616c3e70bbe6', 'nonce': '1266', 'blockHash': '0xfc0e066e5b613239e1a01e6d582e7ab162ceb3ca4f719dfbd1a0c965adcfe1c5', 'transactionIndex': '1', 'from': '0x16545fb79dbee1ad3a7f868b7661c...
https://python.langchain.com/docs/integrations/document_loaders/Etherscan
00b85849898b-0
Airbyte CDK Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. A lot of source connectors are implemented using the Airbyte CDK. This loader allows you to run any of these connectors and ...
https://python.langchain.com/docs/integrations/document_loaders/airbyte_cdk
00b85849898b-1
issues_loader = AirbyteCDKLoader(source_class=SourceGithub, config=config, stream_name="issues") Now you can load documents the usual way docs = issues_loader.load() As load returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the lazy_load method whic...
https://python.langchain.com/docs/integrations/document_loaders/airbyte_cdk
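The load() vs lazy_load() distinction can be sketched in plain Python, independent of Airbyte; the generator below is a hypothetical stand-in for a real loader:

```python
# Plain-Python sketch of the load() vs lazy_load() contrast: load()
# materializes everything up front, while lazy_load() yields documents
# one at a time so you can start processing before the sync finishes.
def lazy_load():
    for i in range(3):
        yield f"document-{i}"  # stand-in for a Document object

def load():
    # Blocks until every document has been produced.
    return list(lazy_load())

docs = load()
print(docs)

for doc in lazy_load():
    print("processing", doc)
```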
95d6e162c59f-0
Airbyte Hubspot Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. This loader exposes the Hubspot connector as a document loader, allowing you to load various Hubspot objects as docu...
https://python.langchain.com/docs/integrations/document_loaders/airbyte_hubspot
95d6e162c59f-1
loader = AirbyteHubspotLoader(config=config, record_handler=handle_record, stream_name="products") docs = loader.load() Incremental loads Some streams allow incremental loading; this means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and...
https://python.langchain.com/docs/integrations/document_loaders/airbyte_hubspot
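A minimal sketch of the incremental-load idea, with a hypothetical record shape and cursor field (real Airbyte connectors persist stream state for you):

```python
# Sketch of incremental loading: remember the last synced cursor so a
# later run only picks up newer records. Record shape is hypothetical.
records = [
    {"id": 1, "updated_at": 100},
    {"id": 2, "updated_at": 200},
    {"id": 3, "updated_at": 300},
]

def sync(records, state):
    cursor = state.get("cursor", 0)
    new = [r for r in records if r["updated_at"] > cursor]
    if new:
        state["cursor"] = max(r["updated_at"] for r in new)
    return new, state

state = {}
first, state = sync(records, state)   # first run: everything is new
second, state = sync(records, state)  # second run: nothing new to load
print(len(first), len(second))
```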
f610793ea366-0
Airbyte JSON Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. This covers how to load any source from Airbyte into a local JSON file that can be read in as a document. Prereqs: Have ...
https://python.langchain.com/docs/integrations/document_loaders/airbyte_json
8c7dc3a084d2-0
This loader exposes the Salesforce connector as a document loader, allowing you to load various Salesforce objects as documents. First, you need to install the airbyte-source-salesforce Python package. { "client_id": "<oauth client id>", "client_secret": "<oauth client secret>", "refresh_token": "<oauth refresh token>"...
https://python.langchain.com/docs/integrations/document_loaders/airbyte_salesforce
8c7dc3a084d2-1
loader = AirbyteSalesforceLoader(config=config, record_handler=handle_record, stream_name="asset") docs = loader.load() Some streams allow incremental loading; this means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are updated freque...
https://python.langchain.com/docs/integrations/document_loaders/airbyte_salesforce
905bac361ffc-0
Airbyte Shopify Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. This loader exposes the Shopify connector as a document loader, allowing you to load various Shopify objects as docu...
https://python.langchain.com/docs/integrations/document_loaders/airbyte_shopify
905bac361ffc-1
loader = AirbyteShopifyLoader(config=config, record_handler=handle_record, stream_name="orders") docs = loader.load() Incremental loads Some streams allow incremental loading; this means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and a...
https://python.langchain.com/docs/integrations/document_loaders/airbyte_shopify
2ee7b92982ad-0
Airbyte Stripe Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. This loader exposes the Stripe connector as a document loader, allowing you to load various Stripe objects as documen...
https://python.langchain.com/docs/integrations/document_loaders/airbyte_stripe
2ee7b92982ad-1
loader = AirbyteStripeLoader(config=config, record_handler=handle_record, stream_name="invoices") docs = loader.load() Incremental loads Some streams allow incremental loading; this means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and ...
https://python.langchain.com/docs/integrations/document_loaders/airbyte_stripe
a9a0d7edee5a-0
Airbyte Typeform Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. This loader exposes the Typeform connector as a document loader, allowing you to load various Typeform objects as d...
https://python.langchain.com/docs/integrations/document_loaders/airbyte_typeform
a9a0d7edee5a-1
loader = AirbyteTypeformLoader(config=config, record_handler=handle_record, stream_name="forms") docs = loader.load() Incremental loads Some streams allow incremental loading; this means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and a...
https://python.langchain.com/docs/integrations/document_loaders/airbyte_typeform
114029aa6c24-0
Airtable from langchain.document_loaders import AirtableLoader Get your API key here. Get the ID of your base here. Get your table ID from the table URL as shown here. api_key = "xxx" base_id = "xxx" table_id = "xxx" loader = AirtableLoader(api_key, table_id, base_id) docs = loader.load() Returns each table row as a dict. ev...
https://python.langchain.com/docs/integrations/document_loaders/airtable
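A sketch of turning Airtable-style rows into simple document dicts; the record shape below is hypothetical and is not the AirtableLoader's actual output format:

```python
import json

# Hypothetical Airtable-style records: each row's fields dict becomes
# the page content, with the record id kept as metadata.
rows = [
    {"id": "rec1", "fields": {"Name": "Widget", "Price": 9.99}},
    {"id": "rec2", "fields": {"Name": "Gadget", "Price": 19.99}},
]

docs = [
    {"page_content": json.dumps(r["fields"]), "metadata": {"record_id": r["id"]}}
    for r in rows
]
print(len(docs), docs[0]["metadata"])
```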
be31dcd2b7c6-0
Airbyte Zendesk Support Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. This loader exposes the Zendesk Support connector as a document loader, allowing you to load various objects...
https://python.langchain.com/docs/integrations/document_loaders/airbyte_zendesk_support
be31dcd2b7c6-1
loader = AirbyteZendeskSupportLoader(config=config, record_handler=handle_record, stream_name="tickets") docs = loader.load() Incremental loads​ Some streams allow incremental loading: the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of da...
https://python.langchain.com/docs/integrations/document_loaders/airbyte_zendesk_support
a9dd41f69fa6-0
Alibaba Cloud MaxCompute Alibaba Cloud MaxCompute (previously known as ODPS) is a general purpose, fully managed, multi-tenancy data processing platform for large-scale data warehousing. MaxCompute supports various data importing solutions and distributed computing models, enabling users to effectively query massive da...
https://python.langchain.com/docs/integrations/document_loaders/alibaba_cloud_maxcompute
a9dd41f69fa6-1
base_query = """ SELECT * FROM ( SELECT 1 AS id, 'content1' AS content, 'meta_info1' AS meta_info UNION ALL SELECT 2 AS id, 'content2' AS content, 'meta_info2' AS meta_info UNION ALL SELECT 3 AS id, 'content3' AS content, 'meta_info3' AS meta_info ) mydata; """ endpoint = "<ENDPOINT>" project = "<PROJECT>" ACCESS_ID = ...
https://python.langchain.com/docs/integrations/document_loaders/alibaba_cloud_maxcompute
92d4433edd32-0
Apify Dataset Apify Dataset is a scalable append-only storage with sequential access built for storing structured web scraping results, such as a list of products or Google SERPs, and then exporting them to various formats like JSON, CSV, or Excel. Datasets are mainly used to save results of Apify Actors—serverless cloud...
https://python.langchain.com/docs/integrations/document_loaders/apify_dataset
92d4433edd32-1
https://docs.apify.com/platform/actors, https://docs.apify.com/platform/actors/running/actors-in-store, https://docs.apify.com/platform/security, https://docs.apify.com/platform/actors/examples
https://python.langchain.com/docs/integrations/document_loaders/apify_dataset
6504cba28ec5-0
This notebook demonstrates the use of the langchain.document_loaders.ArcGISLoader class. You will need to install the ArcGIS API for Python arcgis and, optionally, bs4.BeautifulSoup. You can use an arcgis.gis.GIS object for authenticated data loading, or leave it blank to access public data. {'accessed': '2023-08-15T04...
https://python.langchain.com/docs/integrations/document_loaders/arcgis
6504cba28ec5-1
"imageData": "iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAAXNSR0IB2cksfwAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAJJJREFUOI3NkDEKg0AQRZ9kkSnSGBshR7DJqdJYeg7BMpcS0uQWQsqoCLExkcUJzGqT38zw2fcY1rEzbp7vjXz0EXC7gBxs1ABcG/8CYkCcDqwyLqsV+RlV0I/w7PzuJBArr1VB20H58Ls6h+xoFITkTwWpQJX7XSIBAnFwVj7MLAjJV/AC6G3QoAmK+74Lom04THTBEp/HCSc6AAAAAE...
https://python.langchain.com/docs/integrations/document_loaders/arcgis
6504cba28ec5-2
}, { "name": "Shape", "type": "esriFieldTypeGeometry", "alias": "Shape", "domain": null }, { "name": "AccessName", "type": "esriFieldTypeString", "alias": "AccessName", "length": 40, "domain": null }, { "name": "AccessID", "type": "esriFieldTypeString", "alias": "AccessID", "length": 50, "domain": null }, { "name": "Ac...
https://python.langchain.com/docs/integrations/document_loaders/arcgis
6504cba28ec5-3
"supportsStatistics": true, "supportsAdvancedQueries": true, "supportedQueryFormats": "JSON, geoJSON", "isDataVersioned": false, "ownershipBasedAccessControlForFeatures": { "allowOthersToQuery": true }, "useStandardizedQueries": true, "advancedQueryCapabilities": { "useStandardizedQueries": true, "supportsStatistics": ...
https://python.langchain.com/docs/integrations/document_loaders/arcgis
6504cba28ec5-4
{"OBJECTID": 11, "AccessName": "INTERNATIONAL SPEEDWAY BLVD", "AccessID": "DB-059", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "300 BLK S ATLANTIC AV", "MilePost": 15.27, "City": "DAYTONA BEACH", "AccessStatus": "CLOSED", "Entry_Date_Time": 1692039947000, "DrivingZone": "BOTH"} {"OBJECTID": 14, "AccessName": "GRA...
https://python.langchain.com/docs/integrations/document_loaders/arcgis
6504cba28ec5-5
{"OBJECTID": 42, "AccessName": "BOTEFUHR AV", "AccessID": "DBS-067", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "1900 BLK S ATLANTIC AV", "MilePost": 16.68, "City": "DAYTONA BEACH SHORES", "AccessStatus": "CLOSED", "Entry_Date_Time": 1692039947000, "DrivingZone": "YES"} {"OBJECTID": 43, "AccessName": "SILVER BEAC...
https://python.langchain.com/docs/integrations/document_loaders/arcgis
6504cba28ec5-6
{"OBJECTID": 64, "AccessName": "DUNLAWTON BLVD", "AccessID": "DBS-078", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "3400 BLK S ATLANTIC AV", "MilePost": 20.61, "City": "DAYTONA BEACH SHORES", "AccessStatus": "CLOSED", "Entry_Date_Time": 1692039947000, "DrivingZone": "YES"} {"OBJECTID": 69, "AccessName": "EMILIA A...
https://python.langchain.com/docs/integrations/document_loaders/arcgis
6504cba28ec5-7
{"OBJECTID": 124, "AccessName": "HARTFORD AV", "AccessID": "DB-043", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "1890 BLK N ATLANTIC AV", "MilePost": 12.76, "City": "DAYTONA BEACH", "AccessStatus": "CLOSED", "Entry_Date_Time": 1692039947000, "DrivingZone": "YES"} {"OBJECTID": 127, "AccessName": "WILLIAMS AV", "Ac...
https://python.langchain.com/docs/integrations/document_loaders/arcgis
6504cba28ec5-8
{"OBJECTID": 232, "AccessName": "VAN AV", "AccessID": "DBS-075", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "3100 BLK S ATLANTIC AV", "MilePost": 19.6, "City": "DAYTONA BEACH SHORES", "AccessStatus": "CLOSED", "Entry_Date_Time": 1692039947000, "DrivingZone": "YES"} {"OBJECTID": 234, "AccessName": "ROCKEFELLER DR"...
https://python.langchain.com/docs/integrations/document_loaders/arcgis
431aaa50ef04-0
Arxiv arXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics. This notebook shows how to load scientific articles from Arxiv.org into a doc...
https://python.langchain.com/docs/integrations/document_loaders/arxiv
431aaa50ef04-1
docs[0].page_content[:400] # all pages of the Document content 'arXiv:1605.08386v1 [math.CO] 26 May 2016\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\nCAPRICE STANLEY AND TOBIAS WINDISCH\nAbstract. Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that th...
https://python.langchain.com/docs/integrations/document_loaders/arxiv
e25fa3bb4f46-0
AssemblyAI Audio Transcripts The AssemblyAIAudioTranscriptLoader allows you to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents. To use it, you should have the assemblyai python package installed, and the environment variable ASSEMBLYAI_API_KEY set with your API key. Alternativel...
https://python.langchain.com/docs/integrations/document_loaders/assemblyai
e25fa3bb4f46-1
docs = loader.load() Transcription Config​ You can also specify the config argument to use different audio intelligence models. Visit the AssemblyAI API Documentation to get an overview of all available models! import assemblyai as aai config = aai.TranscriptionConfig(speaker_labels=True, auto_chapters=True, entity_de...
https://python.langchain.com/docs/integrations/document_loaders/assemblyai
80ba220b30a1-0
AsyncHtmlLoader AsyncHtmlLoader loads raw HTML from a list of urls concurrently. from langchain.document_loaders import AsyncHtmlLoader urls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"] loader = AsyncHtmlLoader(urls) docs = loader.load() Fetching pages: 100%|############| 2/2 [00:0...
https://python.langchain.com/docs/integrations/document_loaders/async_html
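The concurrent fetching that `AsyncHtmlLoader` performs over a list of URLs can be sketched with `asyncio.gather`; the dummy fetcher below stands in for the real HTTP request, and all names are illustrative.

```python
import asyncio

async def fetch(url):
    # Stand-in for an HTTP request; a real loader would issue
    # the request with an async HTTP client here.
    await asyncio.sleep(0.01)
    return f"<html>content of {url}</html>"

async def fetch_all(urls):
    # gather() schedules all fetches concurrently instead of one by one
    return await asyncio.gather(*(fetch(u) for u in urls))


urls = ["https://www.espn.com",
        "https://lilianweng.github.io/posts/2023-06-23-agent/"]
pages = asyncio.run(fetch_all(urls))
print(len(pages))  # 2
```

Because the awaits overlap, total wall time is roughly that of the slowest fetch, not the sum of all fetches.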
80ba220b30a1-1
docs[1].page_content[1000:2000] 'al" href="https://lilianweng.github.io/posts/2023-06-23-agent/" />\n<link crossorigin="anonymous" href="/assets/css/stylesheet.min.67a6fb6e33089cb29e856bcc95d7aa39f70049a42b123105531265a0d9f1258b.css" integrity="sha256-Z6b7bjMInLKehWvMldeqOfcASaQrEjEFUxJloNnxJYs=" rel="preload styleshee...
https://python.langchain.com/docs/integrations/document_loaders/async_html
40f50c078696-0
AWS S3 Directory Amazon Simple Storage Service (Amazon S3) is an object storage service. This covers how to load document objects from an AWS S3 Directory object. from langchain.document_loaders import S3DirectoryLoader loader = S3DirectoryLoader("testing-hwc") Specifying a prefix​ You can also specify ...
https://python.langchain.com/docs/integrations/document_loaders/aws_s3_directory
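Specifying a prefix narrows the load to object keys under a given path. The filtering idea can be sketched without S3 at all (hypothetical keys, plain Python — not the `S3DirectoryLoader` internals):

```python
def keys_under_prefix(keys, prefix):
    """Mimic S3 prefix filtering: keep only object keys that start
    with the given prefix, as a prefixed directory load would."""
    return [k for k in keys if k.startswith(prefix)]


keys = ["fake/one.txt", "fake/two.txt", "real/other.txt"]
print(keys_under_prefix(keys, "fake"))  # ['fake/one.txt', 'fake/two.txt']
```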
6db2ca1b04b3-0
Async Chromium Chromium is one of the browsers supported by Playwright, a library used to control browser automation. By running p.chromium.launch(headless=True), we are launching a headless instance of Chromium. Headless mode means that the browser is running without a graphical user interface. AsyncChromiumLoader l...
https://python.langchain.com/docs/integrations/document_loaders/async_chromium