https://python.langchain.com/docs/integrations/document_loaders/cassandra/
## Cassandra

[Cassandra](https://cassandra.apache.org/) is a NoSQL, row-oriented, highly scalable and highly available database. Starting with version 5.0, the database ships with [vector search capabilities](https://cassandra.apache.org/doc/trunk/cassandra/vector-search/overview.html).

## Overview

The Cassandra Document Loader returns a list of LangChain `Document` objects from a Cassandra database.

You must provide either a CQL query or a table name to retrieve the documents. The loader takes the following parameters:

* `table`: (Optional) The table to load the data from.
* `session`: (Optional) The Cassandra driver session. If not provided, the cassio-resolved session will be used.
* `keyspace`: (Optional) The keyspace of the table. If not provided, the cassio-resolved keyspace will be used.
* `query`: (Optional) The query used to load the data.
* `page_content_mapper`: (Optional) A function to convert a row to string page content. The default converts the row to JSON.
* `metadata_mapper`: (Optional) A function to convert a row to a metadata dict.
* `query_parameters`: (Optional) The query parameters used when calling `session.execute`.
* `query_timeout`: (Optional) The query timeout used when calling `session.execute`.
* `query_custom_payload`: (Optional) The query `custom_payload` used when calling `session.execute`.
* `query_execution_profile`: (Optional) The query `execution_profile` used when calling `session.execute`.
* `query_host`: (Optional) The query host used when calling `session.execute`.
* `query_execute_as`: (Optional) The query `execute_as` used when calling `session.execute`.

## Load documents with the Document Loader

```
from langchain_community.document_loaders import CassandraLoader
```

### Init from a cassandra driver Session

You need to create a `cassandra.cluster.Session` object, as described in the [Cassandra driver documentation](https://docs.datastax.com/en/developer/python-driver/latest/api/cassandra/cluster/#module-cassandra.cluster). The details vary (e.g. with network settings and authentication), but this might be something like:

```
from cassandra.cluster import Cluster

cluster = Cluster()
session = cluster.connect()
```

You need to provide the name of an existing keyspace of the Cassandra instance:

```
CASSANDRA_KEYSPACE = input("CASSANDRA_KEYSPACE = ")
```

Creating the document loader:

```
loader = CassandraLoader(
    table="movie_reviews",
    session=session,
    keyspace=CASSANDRA_KEYSPACE,
)
```

An example of a returned document:

```
Document(page_content='Row(_id=\'659bdffa16cbc4586b11a423\', title=\'Dangerous Men\', reviewtext=\'"Dangerous Men," the picture\\\'s production notes inform, took 26 years to reach the big screen. After having seen it, I wonder: What was the rush?\')', metadata={'table': 'movie_reviews', 'keyspace': 'default_keyspace'})
```
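The parameters listed above also allow loading from an explicit CQL query with custom row mappers instead of a whole table. The sketch below is illustrative rather than part of the original page: it reuses the `session` and `CASSANDRA_KEYSPACE` objects created above and assumes the `title` and `reviewtext` columns shown in the example document.

```
# Illustrative sketch: load via a CQL query and map rows to content/metadata explicitly.
loader = CassandraLoader(
    query="SELECT title, reviewtext FROM movie_reviews LIMIT 50;",
    session=session,
    keyspace=CASSANDRA_KEYSPACE,
    page_content_mapper=lambda row: row.reviewtext,     # page_content is just the review text
    metadata_mapper=lambda row: {"title": row.title},   # keep the title in metadata
)
docs = loader.load()
```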
### Init from cassio

It's also possible to use cassio to configure the session and keyspace.

```
import cassio

cassio.init(contact_points="127.0.0.1", keyspace=CASSANDRA_KEYSPACE)

loader = CassandraLoader(
    table="movie_reviews",
)

docs = loader.load()
```

#### Attribution statement

> Apache Cassandra, Cassandra and Apache are either registered trademarks or trademarks of the [Apache Software Foundation](http://www.apache.org/) in the United States and/or other countries.
https://python.langchain.com/docs/integrations/document_loaders/glue_catalog/
## Glue Catalog

The [AWS Glue Data Catalog](https://docs.aws.amazon.com/en_en/glue/latest/dg/catalog-and-crawler.html) is a centralized metadata repository that allows you to manage, access, and share metadata about your data stored in AWS. It acts as a metadata store for your data assets, enabling various AWS services and your applications to query and connect to the data they need efficiently.

When you define data sources, transformations, and targets in AWS Glue, the metadata about these elements is stored in the Data Catalog. This includes information about data locations, schema definitions, runtime metrics, and more. It supports various data store types, such as Amazon S3, Amazon RDS, Amazon Redshift, and external databases compatible with JDBC. It is also directly integrated with Amazon Athena, Amazon Redshift Spectrum, and Amazon EMR, allowing these services to directly access and query the data.

The LangChain `GlueCatalogLoader` returns the schema of every table in the given Glue database, in the same format as a Pandas dtype.

## Setting up

* Follow the [instructions to set up an AWS account](https://docs.aws.amazon.com/athena/latest/ug/setting-up.html).
* Install the boto3 library: `pip install boto3`

## Example

```
from langchain_community.document_loaders.glue_catalog import GlueCatalogLoader
```

```
database_name = "my_database"
profile_name = "my_profile"

loader = GlueCatalogLoader(
    database=database_name,
    profile_name=profile_name,
)

schemas = loader.load()
print(schemas)
```

## Example with table filtering

Table filtering allows you to selectively retrieve schema information for a specific subset of tables within a Glue database. Instead of loading the schemas for all tables, you can use the `table_filter` argument to specify exactly which tables you're interested in.

```
from langchain_community.document_loaders.glue_catalog import GlueCatalogLoader
```

```
database_name = "my_database"
profile_name = "my_profile"
table_filter = ["table1", "table2", "table3"]

loader = GlueCatalogLoader(
    database=database_name, profile_name=profile_name, table_filter=table_filter
)

schemas = loader.load()
print(schemas)
```
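Each schema comes back as a standard LangChain `Document`. The short snippet below is illustrative, not from the original page; it simply prints the content and metadata of each document in the `schemas` list produced above so you can inspect the dtype-style output for your own tables.

```
# Illustrative: inspect what the loader returned.
for doc in schemas:
    print(doc.page_content)   # dtype-style schema text for one table
    print(doc.metadata)       # loader-provided metadata for that table
```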
https://python.langchain.com/docs/integrations/document_loaders/google_alloydb/
## Google AlloyDB for PostgreSQL

> [AlloyDB](https://cloud.google.com/alloydb) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. AlloyDB is 100% compatible with PostgreSQL. Extend your database application to build AI-powered experiences leveraging AlloyDB's LangChain integrations.

This notebook goes over how to use `AlloyDB for PostgreSQL` to load Documents with the `AlloyDBLoader` class.

Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-alloydb-pg-python/).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-alloydb-pg-python/blob/main/docs/document_loader.ipynb)

## Before you begin

To run this notebook, you will need to do the following:

* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)
* [Enable the AlloyDB API](https://console.cloud.google.com/flows/enableapi?apiid=alloydb.googleapis.com)
* [Create an AlloyDB cluster and instance](https://cloud.google.com/alloydb/docs/cluster-create)
* [Create an AlloyDB database](https://cloud.google.com/alloydb/docs/quickstart/create-and-connect)
* [Add a user to the database](https://cloud.google.com/alloydb/docs/database-users/about)

### 🦜🔗 Library Installation

Install the integration library, `langchain-google-alloydb-pg`.

```
%pip install --upgrade --quiet langchain-google-alloydb-pg
```

**Colab only:** Uncomment the following cell to restart the kernel, or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top.

```
# # Automatically restart kernel after installs so that your environment can access the new packages
# import IPython

# app = IPython.Application.instance()
# app.kernel.do_shutdown(True)
```

### 🔐 Authentication

Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.

* If you are using Colab to run this notebook, use the cell below and continue.
* If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env).

```
from google.colab import auth

auth.authenticate_user()
```

### ☁ Set Your Google Cloud Project

Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.

If you don't know your project ID, try the following:

* Run `gcloud config list`.
* Run `gcloud projects list`.
* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113).

```
# @title Project { display-mode: "form" }
PROJECT_ID = "gcp_project_id"  # @param {type:"string"}

# Set the project id
! gcloud config set project {PROJECT_ID}
```

## Basic Usage

### Set AlloyDB database variables

Find your database values in the [AlloyDB Instances page](https://console.cloud.google.com/alloydb/clusters).
```
# @title Set Your Values Here { display-mode: "form" }
REGION = "us-central1"  # @param {type: "string"}
CLUSTER = "my-cluster"  # @param {type: "string"}
INSTANCE = "my-primary"  # @param {type: "string"}
DATABASE = "my-database"  # @param {type: "string"}
TABLE_NAME = "vector_store"  # @param {type: "string"}
```

### AlloyDBEngine Connection Pool

One of the requirements and arguments to establish AlloyDB as a vector store is an `AlloyDBEngine` object. The `AlloyDBEngine` configures a connection pool to your AlloyDB database, enabling successful connections from your application and following industry best practices.

To create an `AlloyDBEngine` using `AlloyDBEngine.from_instance()` you need to provide only 5 things:

1. `project_id`: Project ID of the Google Cloud Project where the AlloyDB instance is located.
2. `region`: Region where the AlloyDB instance is located.
3. `cluster`: The name of the AlloyDB cluster.
4. `instance`: The name of the AlloyDB instance.
5. `database`: The name of the database to connect to on the AlloyDB instance.

By default, [IAM database authentication](https://cloud.google.com/alloydb/docs/connect-iam) will be used as the method of database authentication. This library uses the IAM principal belonging to the [Application Default Credentials (ADC)](https://cloud.google.com/docs/authentication/application-default-credentials) sourced from the environment.

Optionally, [built-in database authentication](https://cloud.google.com/alloydb/docs/database-users/about) using a username and password to access the AlloyDB database can also be used. Just provide the optional `user` and `password` arguments to `AlloyDBEngine.from_instance()`:

* `user`: Database user to use for built-in database authentication and login.
* `password`: Database password to use for built-in database authentication and login.

**Note**: This tutorial demonstrates the async interface. All async methods have corresponding sync methods.

```
from langchain_google_alloydb_pg import AlloyDBEngine

engine = await AlloyDBEngine.afrom_instance(
    project_id=PROJECT_ID,
    region=REGION,
    cluster=CLUSTER,
    instance=INSTANCE,
    database=DATABASE,
)
```
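For comparison, using built-in database authentication only adds the two optional arguments described above. The sketch below is illustrative rather than taken from the original notebook; it reuses the variables defined earlier, uses hypothetical placeholder credentials, and assumes the async variant accepts the same `user`/`password` arguments as `from_instance()`.

```
# Illustrative sketch: built-in authentication instead of IAM.
# "my-db-user" and "my-db-password" are hypothetical placeholders.
engine = await AlloyDBEngine.afrom_instance(
    project_id=PROJECT_ID,
    region=REGION,
    cluster=CLUSTER,
    instance=INSTANCE,
    database=DATABASE,
    user="my-db-user",
    password="my-db-password",
)
```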
### Create AlloyDBLoader

```
from langchain_google_alloydb_pg import AlloyDBLoader

# Creating a basic AlloyDBLoader object
loader = await AlloyDBLoader.create(engine, table_name=TABLE_NAME)
```

### Load Documents via default table

The loader returns a list of Documents from the table, using the first column as page_content and all other columns as metadata. The default table has the first column as page_content and the second column as metadata (JSON). Each row becomes a document.

```
docs = await loader.aload()
print(docs)
```

### Load documents via custom table/metadata or custom page content columns

```
loader = await AlloyDBLoader.create(
    engine,
    table_name=TABLE_NAME,
    content_columns=["product_name"],  # Optional
    metadata_columns=["id"],  # Optional
)
docs = await loader.aload()
print(docs)
```

### Set page content format

The loader returns a list of Documents, one document per row, with the page content in the specified string format, i.e. text (space-separated concatenation), JSON, YAML, or CSV. The JSON and YAML formats include field headers, while text and CSV do not.

```
loader = await AlloyDBLoader.create(
    engine,
    table_name="products",
    content_columns=["product_name", "description"],
    format="YAML",
)
docs = await loader.aload()
print(docs)
```
https://python.langchain.com/docs/integrations/document_loaders/gitbook/
## GitBook

> [GitBook](https://docs.gitbook.com/) is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.

This notebook shows how to pull page data from any `GitBook`.

```
from langchain_community.document_loaders import GitbookLoader
```

### Load from single GitBook page

```
loader = GitbookLoader("https://docs.gitbook.com")
```

```
page_data = loader.load()
```

```
[Document(page_content='Introduction to GitBook\nGitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.\nWe want to help \nteams to work more efficiently\n by creating a simple yet powerful platform for them to \nshare their knowledge\n.\nOur mission is to make a \nuser-friendly\n and \ncollaborative\n product for everyone to create, edit and share knowledge through documentation.\nPublish your documentation in 5 easy steps\nImport\n\nMove your existing content to GitBook with ease.\nGit Sync\n\nBenefit from our bi-directional synchronisation with GitHub and GitLab.\nOrganise your content\n\nCreate pages and spaces and organize them into collections\nCollaborate\n\nInvite other users and collaborate asynchronously with ease.\nPublish your docs\n\nShare your documentation with selected users or with everyone.\nNext\n - Getting started\nOverview\nLast modified \n3mo ago', lookup_str='', metadata={'source': 'https://docs.gitbook.com', 'title': 'Introduction to GitBook'}, lookup_index=0)]
```

### Load from all paths in a given GitBook

For this to work, the GitbookLoader needs to be initialized with the root path (`https://docs.gitbook.com` in this example) and have `load_all_paths` set to `True`.
```
loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)
all_pages_data = loader.load()
```

```
Fetching text from https://docs.gitbook.com/
Fetching text from https://docs.gitbook.com/getting-started/overview
Fetching text from https://docs.gitbook.com/getting-started/import
Fetching text from https://docs.gitbook.com/getting-started/git-sync
Fetching text from https://docs.gitbook.com/getting-started/content-structure
Fetching text from https://docs.gitbook.com/getting-started/collaboration
Fetching text from https://docs.gitbook.com/getting-started/publishing
Fetching text from https://docs.gitbook.com/tour/quick-find
Fetching text from https://docs.gitbook.com/tour/editor
Fetching text from https://docs.gitbook.com/tour/customization
Fetching text from https://docs.gitbook.com/tour/member-management
Fetching text from https://docs.gitbook.com/tour/pdf-export
Fetching text from https://docs.gitbook.com/tour/activity-history
Fetching text from https://docs.gitbook.com/tour/insights
Fetching text from https://docs.gitbook.com/tour/notifications
Fetching text from https://docs.gitbook.com/tour/internationalization
Fetching text from https://docs.gitbook.com/tour/keyboard-shortcuts
Fetching text from https://docs.gitbook.com/tour/seo
Fetching text from https://docs.gitbook.com/advanced-guides/custom-domain
Fetching text from https://docs.gitbook.com/advanced-guides/advanced-sharing-and-security
Fetching text from https://docs.gitbook.com/advanced-guides/integrations
Fetching text from https://docs.gitbook.com/billing-and-admin/account-settings
Fetching text from https://docs.gitbook.com/billing-and-admin/plans
Fetching text from https://docs.gitbook.com/troubleshooting/faqs
Fetching text from https://docs.gitbook.com/troubleshooting/hard-refresh
Fetching text from https://docs.gitbook.com/troubleshooting/report-bugs
Fetching text from https://docs.gitbook.com/troubleshooting/connectivity-issues
Fetching text from https://docs.gitbook.com/troubleshooting/support
```

```
print(f"fetched {len(all_pages_data)} documents.")
# show second document
all_pages_data[2]
```

```
Document(page_content="Import\nFind out how to easily migrate your existing documentation and which formats are supported.\nThe import function allows you to migrate and unify existing documentation in GitBook. You can choose to import single or multiple pages although limits apply. \nPermissions\nAll members with editor permission or above can use the import feature.\nSupported formats\nGitBook supports imports from websites or files that are:\nMarkdown (.md or .markdown)\nHTML (.html)\nMicrosoft Word (.docx).\nWe also support import from:\nConfluence\nNotion\nGitHub Wiki\nQuip\nDropbox Paper\nGoogle Docs\nYou can also upload a ZIP\n \ncontaining HTML or Markdown files when \nimporting multiple pages.\nNote: this feature is in beta.\nFeel free to suggest import sources we don't support yet and \nlet us know\n if you have any issues.\nImport panel\nWhen you create a new space, you'll have the option to import content straight away:\nThe new page menu\nImport a page or subpage by selecting \nImport Page\n from the New Page menu, or \nImport Subpage\n in the page action menu, found in the table of contents:\nImport from the page action menu\nWhen you choose your input source, instructions will explain how to proceed.\nAlthough GitBook supports importing content from different kinds of sources, the end result might be different from your source due to differences in product features and document format.\nLimits\nGitBook currently has the following limits for imported content:\nThe maximum number of pages that can be uploaded in a single import is \n20.\nThe maximum number of files (images etc.) that can be uploaded in a single import is \n20.\nGetting started - \nPrevious\nOverview\nNext\n - Getting started\nGit Sync\nLast modified \n4mo ago", lookup_str='', metadata={'source': 'https://docs.gitbook.com/getting-started/import', 'title': 'Import'}, lookup_index=0)
```
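Once all pages are loaded, the `source` and `title` metadata visible in the outputs above make it easy to build a small index of the site. The snippet below is an illustrative follow-up, not part of the original page; it relies only on the `all_pages_data` list and the metadata keys shown above.

```
# Illustrative: map each page title to its source URL using the metadata shown above.
title_to_url = {doc.metadata["title"]: doc.metadata["source"] for doc in all_pages_data}
print(f"indexed {len(title_to_url)} pages")
```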
https://python.langchain.com/docs/integrations/document_loaders/google_bigquery/
## Google BigQuery

> [Google BigQuery](https://cloud.google.com/bigquery) is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data. `BigQuery` is a part of the `Google Cloud Platform`.

Load a `BigQuery` query with one document per row.

```
%pip install --upgrade --quiet google-cloud-bigquery
```

```
from langchain_community.document_loaders import BigQueryLoader
```

```
BASE_QUERY = """
SELECT
  id,
  dna_sequence,
  organism
FROM (
  SELECT
    ARRAY (
    SELECT
      AS STRUCT 1 AS id, "ATTCGA" AS dna_sequence, "Lokiarchaeum sp. (strain GC14_75)." AS organism
    UNION ALL
    SELECT
      AS STRUCT 2 AS id, "AGGCGA" AS dna_sequence, "Heimdallarchaeota archaeon (strain LC_2)." AS organism
    UNION ALL
    SELECT
      AS STRUCT 3 AS id, "TCCGGA" AS dna_sequence, "Acidianus hospitalis (strain W1)." AS organism) AS new_array),
  UNNEST(new_array)
"""
```

## Basic Usage

```
loader = BigQueryLoader(BASE_QUERY)

data = loader.load()
```

```
[Document(page_content='id: 1\ndna_sequence: ATTCGA\norganism: Lokiarchaeum sp. (strain GC14_75).', lookup_str='', metadata={}, lookup_index=0),
 Document(page_content='id: 2\ndna_sequence: AGGCGA\norganism: Heimdallarchaeota archaeon (strain LC_2).', lookup_str='', metadata={}, lookup_index=0),
 Document(page_content='id: 3\ndna_sequence: TCCGGA\norganism: Acidianus hospitalis (strain W1).', lookup_str='', metadata={}, lookup_index=0)]
```

## Specifying Which Columns are Content vs Metadata

```
loader = BigQueryLoader(
    BASE_QUERY,
    page_content_columns=["dna_sequence", "organism"],
    metadata_columns=["id"],
)

data = loader.load()
```

```
[Document(page_content='dna_sequence: ATTCGA\norganism: Lokiarchaeum sp. (strain GC14_75).', lookup_str='', metadata={'id': 1}, lookup_index=0),
 Document(page_content='dna_sequence: AGGCGA\norganism: Heimdallarchaeota archaeon (strain LC_2).', lookup_str='', metadata={'id': 2}, lookup_index=0),
 Document(page_content='dna_sequence: TCCGGA\norganism: Acidianus hospitalis (strain W1).', lookup_str='', metadata={'id': 3}, lookup_index=0)]
```

```
# Note that the `id` column is being returned twice, with one instance aliased as `source`
ALIASED_QUERY = """
SELECT
  id,
  dna_sequence,
  organism,
  id as source
FROM (
  SELECT
    ARRAY (
    SELECT
      AS STRUCT 1 AS id, "ATTCGA" AS dna_sequence, "Lokiarchaeum sp. (strain GC14_75)." AS organism
    UNION ALL
    SELECT
      AS STRUCT 2 AS id, "AGGCGA" AS dna_sequence, "Heimdallarchaeota archaeon (strain LC_2)." AS organism
    UNION ALL
    SELECT
      AS STRUCT 3 AS id, "TCCGGA" AS dna_sequence, "Acidianus hospitalis (strain W1)." AS organism) AS new_array),
  UNNEST(new_array)
"""
```

```
loader = BigQueryLoader(ALIASED_QUERY, metadata_columns=["source"])

data = loader.load()
```

```
[Document(page_content='id: 1\ndna_sequence: ATTCGA\norganism: Lokiarchaeum sp. (strain GC14_75).\nsource: 1', lookup_str='', metadata={'source': 1}, lookup_index=0),
 Document(page_content='id: 2\ndna_sequence: AGGCGA\norganism: Heimdallarchaeota archaeon (strain LC_2).\nsource: 2', lookup_str='', metadata={'source': 2}, lookup_index=0),
 Document(page_content='id: 3\ndna_sequence: TCCGGA\norganism: Acidianus hospitalis (strain W1).\nsource: 3', lookup_str='', metadata={'source': 3}, lookup_index=0)]
```
https://python.langchain.com/docs/integrations/document_loaders/google_bigtable/
## Google Bigtable

> [Bigtable](https://cloud.google.com/bigtable) is a key-value and wide-column store, ideal for fast access to structured, semi-structured, or unstructured data. Extend your database application to build AI-powered experiences leveraging Bigtable's LangChain integrations.

This notebook goes over how to use [Bigtable](https://cloud.google.com/bigtable) to [save, load and delete LangChain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `BigtableLoader` and `BigtableSaver`.

Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-bigtable-python/).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-bigtable-python/blob/main/docs/document_loader.ipynb)

## Before You Begin

To run this notebook, you will need to do the following:

* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)
* [Enable the Bigtable API](https://console.cloud.google.com/flows/enableapi?apiid=bigtable.googleapis.com)
* [Create a Bigtable instance](https://cloud.google.com/bigtable/docs/creating-instance)
* [Create a Bigtable table](https://cloud.google.com/bigtable/docs/managing-tables)
* [Create Bigtable access credentials](https://developers.google.com/workspace/guides/create-credentials)

After confirming access to the database in the runtime environment of this notebook, fill in the following values and run the cell before running the example scripts.

```
# @markdown Please specify an instance and a table for demo purpose.
INSTANCE_ID = "my_instance"  # @param {type:"string"}
TABLE_ID = "my_table"  # @param {type:"string"}
```

### 🦜🔗 Library Installation

The integration lives in its own `langchain-google-bigtable` package, so we need to install it.

```
%pip install --upgrade --quiet langchain-google-bigtable
```

**Colab only**: Uncomment the following cell to restart the kernel, or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top.

```
# # Automatically restart kernel after installs so that your environment can access the new packages
# import IPython

# app = IPython.Application.instance()
# app.kernel.do_shutdown(True)
```

### ☁ Set Your Google Cloud Project

Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.

If you don't know your project ID, try the following:

* Run `gcloud config list`.
* Run `gcloud projects list`.
* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113).

```
# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.
PROJECT_ID = "my-project-id"  # @param {type:"string"}

# Set the project id
!gcloud config set project {PROJECT_ID}
```

### 🔐 Authentication

Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.

* If you are using Colab to run this notebook, use the cell below and continue.
* If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env).
```
from google.colab import auth

auth.authenticate_user()
```

## Basic Usage

### Using the saver

Save LangChain documents with `BigtableSaver.add_documents(<documents>)`. To initialize the `BigtableSaver` class you need to provide 2 things:

1. `instance_id` - An instance of Bigtable.
2. `table_id` - The name of the table within Bigtable to store LangChain documents.

```
from langchain_core.documents import Document
from langchain_google_bigtable import BigtableSaver

test_docs = [
    Document(
        page_content="Apple Granny Smith 150 0.99 1",
        metadata={"fruit_id": 1},
    ),
    Document(
        page_content="Banana Cavendish 200 0.59 0",
        metadata={"fruit_id": 2},
    ),
    Document(
        page_content="Orange Navel 80 1.29 1",
        metadata={"fruit_id": 3},
    ),
]

saver = BigtableSaver(
    instance_id=INSTANCE_ID,
    table_id=TABLE_ID,
)

saver.add_documents(test_docs)
```

### Querying for Documents from Bigtable

For more details on connecting to a Bigtable table, please check the [Python SDK documentation](https://cloud.google.com/python/docs/reference/bigtable/latest/client).

#### Load documents from table

Load LangChain documents with `BigtableLoader.load()` or `BigtableLoader.lazy_load()`. `lazy_load` returns a generator that only queries the database during iteration. To initialize the `BigtableLoader` class you need to provide:

1. `instance_id` - An instance of Bigtable.
2. `table_id` - The name of the table within Bigtable to store LangChain documents.

```
from langchain_google_bigtable import BigtableLoader

loader = BigtableLoader(
    instance_id=INSTANCE_ID,
    table_id=TABLE_ID,
)

for doc in loader.lazy_load():
    print(doc)
    break
```

### Delete documents

Delete a list of LangChain documents from the Bigtable table with `BigtableSaver.delete(<documents>)`.

```
from langchain_google_bigtable import BigtableSaver

docs = loader.load()
print("Documents before delete: ", docs)

onedoc = test_docs[0]
saver.delete([onedoc])
print("Documents after delete: ", loader.load())
```

## Advanced Usage

### Limiting the returned rows

There are two ways to limit the returned rows:

1. Using a [filter](https://cloud.google.com/python/docs/reference/bigtable/latest/row-filters)
2. Using a [row_set](https://cloud.google.com/python/docs/reference/bigtable/latest/row-set#google.cloud.bigtable.row_set.RowSet)

```
import google.cloud.bigtable.row_filters as row_filters

filter_loader = BigtableLoader(
    INSTANCE_ID, TABLE_ID, filter=row_filters.ColumnQualifierRegexFilter(b"os_build")
)


from google.cloud.bigtable.row_set import RowSet

row_set = RowSet()
row_set.add_row_range_from_keys(
    start_key="phone#4c410523#20190501", end_key="phone#4c410523#201906201"
)

row_set_loader = BigtableLoader(
    INSTANCE_ID,
    TABLE_ID,
    row_set=row_set,
)
```
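Since `filter` and `row_set` are both constructor arguments, it should also be possible to pass them together to narrow rows and columns at the same time. The sketch below is illustrative rather than from the original notebook; it reuses the `row_filters` import, the `row_set`, and the IDs defined above.

```
# Illustrative sketch: combine a row_set (which rows) with a filter (which columns).
combined_loader = BigtableLoader(
    INSTANCE_ID,
    TABLE_ID,
    filter=row_filters.ColumnQualifierRegexFilter(b"os_build"),
    row_set=row_set,
)

for doc in combined_loader.lazy_load():
    print(doc)
    break
```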
### Custom client

By default, the client is created with only the `admin=True` option. To use a non-default client, a [custom client](https://cloud.google.com/python/docs/reference/bigtable/latest/client#class-googlecloudbigtableclientclientprojectnone-credentialsnone-readonlyfalse-adminfalse-clientinfonone-clientoptionsnone-adminclientoptionsnone-channelnone) can be passed to the constructor.

```
from google.cloud import bigtable

custom_client_loader = BigtableLoader(
    INSTANCE_ID,
    TABLE_ID,
    client=bigtable.Client(...),
)
```

### Custom content

The `BigtableLoader` assumes there is a column family called `langchain` with a column called `content`, containing values encoded in UTF-8. These defaults can be changed like so:

```
from langchain_google_bigtable import Encoding

custom_content_loader = BigtableLoader(
    INSTANCE_ID,
    TABLE_ID,
    content_encoding=Encoding.ASCII,
    content_column_family="my_content_family",
    content_column_name="my_content_column_name",
)
```

### Metadata mapping

By default, the `metadata` map on the `Document` object contains a single key, `rowkey`, with the value of the row's rowkey. To add more items to that map, use `metadata_mappings`.

```
import json

from langchain_google_bigtable import MetadataMapping

metadata_mapping_loader = BigtableLoader(
    INSTANCE_ID,
    TABLE_ID,
    metadata_mappings=[
        MetadataMapping(
            column_family="my_int_family",
            column_name="my_int_column",
            metadata_key="key_in_metadata_map",
            encoding=Encoding.INT_BIG_ENDIAN,
        ),
        MetadataMapping(
            column_family="my_custom_family",
            column_name="my_custom_column",
            metadata_key="custom_key",
            encoding=Encoding.CUSTOM,
            custom_decoding_func=lambda input: json.loads(input.decode()),
            custom_encoding_func=lambda input: str.encode(json.dumps(input)),
        ),
    ],
)
```

### Metadata as JSON

If there is a column in Bigtable that contains a JSON string you would like added to the output document metadata, you can pass the following parameters to `BigtableLoader`. Note that the default value for `metadata_as_json_encoding` is UTF-8.

```
metadata_as_json_loader = BigtableLoader(
    INSTANCE_ID,
    TABLE_ID,
    metadata_as_json_encoding=Encoding.ASCII,
    metadata_as_json_family="my_metadata_as_json_family",
    metadata_as_json_name="my_metadata_as_json_column_name",
)
```

### Customize BigtableSaver

The `BigtableSaver` is customizable in the same way as the `BigtableLoader`.

```
saver = BigtableSaver(
    INSTANCE_ID,
    TABLE_ID,
    client=bigtable.Client(...),
    content_encoding=Encoding.ASCII,
    content_column_family="my_content_family",
    content_column_name="my_content_column_name",
    metadata_mappings=[
        MetadataMapping(
            column_family="my_int_family",
            column_name="my_int_column",
            metadata_key="key_in_metadata_map",
            encoding=Encoding.INT_BIG_ENDIAN,
        ),
        MetadataMapping(
            column_family="my_custom_family",
            column_name="my_custom_column",
            metadata_key="custom_key",
            encoding=Encoding.CUSTOM,
            custom_decoding_func=lambda input: json.loads(input.decode()),
            custom_encoding_func=lambda input: str.encode(json.dumps(input)),
        ),
    ],
    metadata_as_json_encoding=Encoding.ASCII,
    metadata_as_json_family="my_metadata_as_json_family",
    metadata_as_json_name="my_metadata_as_json_column_name",
)
```
https://python.langchain.com/docs/integrations/document_loaders/google_cloud_sql_mssql/
> [Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers [MySQL](https://cloud.google.com/sql/mysql), [PostgreSQL](https://cloud.google.com/sql/postgres), and [SQL Server](https://cloud.google.com/sql/sqlserver) database engines.

Extend your database application to build AI-powered experiences leveraging Cloud SQL’s Langchain integrations. This notebook goes over how to use [Cloud SQL for SQL server](https://cloud.google.com/sql/sqlserver) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `MSSQLLoader` and `MSSQLDocumentSaver`. Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-mssql-python/).

[![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-cloud-sql-mssql-python/blob/main/docs/document_loader.ipynb) Open In Colab

## Before You Begin

To run this notebook, you will need to do the following:

* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)
* [Enable the Cloud SQL Admin API.](https://console.cloud.google.com/marketplace/product/google/sqladmin.googleapis.com)
* [Create a Cloud SQL for SQL server instance](https://cloud.google.com/sql/docs/sqlserver/create-instance)
* [Create a Cloud SQL database](https://cloud.google.com/sql/docs/sqlserver/create-manage-databases)
* [Add an IAM database user to the database](https://cloud.google.com/sql/docs/sqlserver/create-manage-users) (Optional)

After you have confirmed access to the database in the runtime environment of this notebook, fill in the following values and run the cell before running the example scripts.

```
# @markdown Please fill in both the Google Cloud region and name of your Cloud SQL instance.
REGION = "us-central1"  # @param {type:"string"}
INSTANCE = "test-instance"  # @param {type:"string"}

# @markdown Please fill in user name and password of your Cloud SQL instance.
DB_USER = "sqlserver"  # @param {type:"string"}
DB_PASS = "password"  # @param {type:"string"}

# @markdown Please specify a database and a table for demo purpose.
DATABASE = "test"  # @param {type:"string"}
TABLE_NAME = "test-default"  # @param {type:"string"}
```

### 🦜🔗 Library Installation

The integration lives in its own `langchain-google-cloud-sql-mssql` package, so we need to install it.

```
%pip install --upgrade --quiet langchain-google-cloud-sql-mssql
```

**Colab only**: Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top.

```
# # Automatically restart kernel after installs so that your environment can access the new packages
# import IPython
# app = IPython.Application.instance()
# app.kernel.do_shutdown(True)
```

### 🔐 Authentication

Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.

* If you are using Colab to run this notebook, use the cell below and continue.
* If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env).
```
from google.colab import auth

auth.authenticate_user()
```

### ☁ Set Your Google Cloud Project

Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.

If you don’t know your project ID, try the following:

* Run `gcloud config list`.
* Run `gcloud projects list`.
* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113).

```
# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.
PROJECT_ID = "my-project-id"  # @param {type:"string"}

# Set the project id
!gcloud config set project {PROJECT_ID}
```

### 💡 API Enablement

The `langchain-google-cloud-sql-mssql` package requires that you [enable the Cloud SQL Admin API](https://console.cloud.google.com/flows/enableapi?apiid=sqladmin.googleapis.com) in your Google Cloud Project.

```
# enable Cloud SQL Admin API
!gcloud services enable sqladmin.googleapis.com
```

## Basic Usage

### MSSQLEngine Connection Pool

Before saving or loading documents from an MSSQL table, we first need to configure a connection pool to the Cloud SQL database. The `MSSQLEngine` configures a [SQLAlchemy connection pool](https://docs.sqlalchemy.org/en/20/core/pooling.html#module-sqlalchemy.pool) to your Cloud SQL database, enabling successful connections from your application and following industry best practices.

To create a `MSSQLEngine` using `MSSQLEngine.from_instance()` you need to provide the following:

1. `project_id` : Project ID of the Google Cloud Project where the Cloud SQL instance is located.
2. `region` : Region where the Cloud SQL instance is located.
3. `instance` : The name of the Cloud SQL instance.
4. `database` : The name of the database to connect to on the Cloud SQL instance.
5. `user` : Database user to use for built-in database authentication and login.
6. `password` : Database password to use for built-in database authentication and login.

```
from langchain_google_cloud_sql_mssql import MSSQLEngine

engine = MSSQLEngine.from_instance(
    project_id=PROJECT_ID,
    region=REGION,
    instance=INSTANCE,
    database=DATABASE,
    user=DB_USER,
    password=DB_PASS,
)
```

### Initialize a table

Initialize a table of default schema via `MSSQLEngine.init_document_table(<table_name>)`. Table columns:

* page_content (type: text)
* langchain_metadata (type: JSON)

The `overwrite_existing=True` flag means the newly initialized table will replace any existing table of the same name.

```
engine.init_document_table(TABLE_NAME, overwrite_existing=True)
```

### Save documents

Save langchain documents with `MSSQLDocumentSaver.add_documents(<documents>)`. To initialize the `MSSQLDocumentSaver` class you need to provide 2 things:

1. `engine` - An instance of a `MSSQLEngine` engine.
2. `table_name` - The name of the table within the Cloud SQL database to store langchain documents.
```
from langchain_core.documents import Document
from langchain_google_cloud_sql_mssql import MSSQLDocumentSaver

test_docs = [
    Document(
        page_content="Apple Granny Smith 150 0.99 1",
        metadata={"fruit_id": 1},
    ),
    Document(
        page_content="Banana Cavendish 200 0.59 0",
        metadata={"fruit_id": 2},
    ),
    Document(
        page_content="Orange Navel 80 1.29 1",
        metadata={"fruit_id": 3},
    ),
]

saver = MSSQLDocumentSaver(engine=engine, table_name=TABLE_NAME)
saver.add_documents(test_docs)
```

### Load documents

Load langchain documents with `MSSQLLoader.load()` or `MSSQLLoader.lazy_load()`. `lazy_load` returns a generator that only queries the database during iteration. To initialize the `MSSQLLoader` class you need to provide:

1. `engine` - An instance of a `MSSQLEngine` engine.
2. `table_name` - The name of the table within the Cloud SQL database to store langchain documents.

```
from langchain_google_cloud_sql_mssql import MSSQLLoader

loader = MSSQLLoader(engine=engine, table_name=TABLE_NAME)
docs = loader.lazy_load()
for doc in docs:
    print("Loaded documents:", doc)
```

### Load documents via query

Other than loading documents from a table, we can also choose to load documents from a view generated from a SQL query. For example:

```
from langchain_google_cloud_sql_mssql import MSSQLLoader

loader = MSSQLLoader(
    engine=engine,
    query=f"select * from \"{TABLE_NAME}\" where JSON_VALUE(langchain_metadata, '$.fruit_id') = 1;",
)
onedoc = loader.load()
onedoc
```

The view generated from a SQL query can have a different schema than the default table. In such cases, the behavior of MSSQLLoader is the same as loading from a table with a non-default schema. Please refer to section [Load documents with customized document page content & metadata](#Load-documents-with-customized-document-page-content-&-metadata).

### Delete documents

Delete a list of langchain documents from an MSSQL table with `MSSQLDocumentSaver.delete(<documents>)`.

For a table with the default schema (page_content, langchain_metadata), the deletion criteria is:

A `row` should be deleted if there exists a `document` in the list, such that

* `document.page_content` equals `row[page_content]`
* `document.metadata` equals `row[langchain_metadata]`

```
from langchain_google_cloud_sql_mssql import MSSQLLoader

loader = MSSQLLoader(engine=engine, table_name=TABLE_NAME)
docs = loader.load()
print("Documents before delete:", docs)
saver.delete(onedoc)
print("Documents after delete:", loader.load())
```

## Advanced Usage

### Load documents with customized document page content & metadata

First we prepare an example table with non-default schema, and populate it with some arbitrary data.
```
import sqlalchemy

with engine.connect() as conn:
    conn.execute(sqlalchemy.text(f'DROP TABLE IF EXISTS "{TABLE_NAME}"'))
    conn.commit()
    conn.execute(
        sqlalchemy.text(
            f"""
            IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[{TABLE_NAME}]') AND type in (N'U'))
                BEGIN
                    CREATE TABLE [dbo].[{TABLE_NAME}](
                        fruit_id INT IDENTITY(1,1) PRIMARY KEY,
                        fruit_name VARCHAR(100) NOT NULL,
                        variety VARCHAR(50),
                        quantity_in_stock INT NOT NULL,
                        price_per_unit DECIMAL(6,2) NOT NULL,
                        organic BIT NOT NULL
                    )
                END
            """
        )
    )
    conn.execute(
        sqlalchemy.text(
            f"""
            INSERT INTO "{TABLE_NAME}" (fruit_name, variety, quantity_in_stock, price_per_unit, organic)
            VALUES
                ('Apple', 'Granny Smith', 150, 0.99, 1),
                ('Banana', 'Cavendish', 200, 0.59, 0),
                ('Orange', 'Navel', 80, 1.29, 1);
            """
        )
    )
    conn.commit()
```

If we still load langchain documents with the default parameters of `MSSQLLoader` from this example table, the `page_content` of the loaded documents will be the first column of the table, and `metadata` will consist of key-value pairs of all the other columns.

```
loader = MSSQLLoader(
    engine=engine,
    table_name=TABLE_NAME,
)
loader.load()
```

We can specify the content and metadata we want to load by setting the `content_columns` and `metadata_columns` when initializing the `MSSQLLoader`.

1. `content_columns`: The columns to write into the `page_content` of the document.
2. `metadata_columns`: The columns to write into the `metadata` of the document.

For example here, the values of the columns in `content_columns` will be joined together into a space-separated string as the `page_content` of the loaded documents, and the `metadata` of the loaded documents will only contain key-value pairs of the columns specified in `metadata_columns`.

```
loader = MSSQLLoader(
    engine=engine,
    table_name=TABLE_NAME,
    content_columns=[
        "variety",
        "quantity_in_stock",
        "price_per_unit",
        "organic",
    ],
    metadata_columns=["fruit_id", "fruit_name"],
)
loader.load()
```

### Save document with customized page content & metadata

In order to save a langchain document into a table with customized metadata fields, we first need to create such a table via `MSSQLEngine.init_document_table()`, and specify the list of `metadata_columns` we want it to have. In this example, the created table will have the following columns:

* description (type: text): for storing fruit description.
* fruit_name (type: text): for storing fruit name.
* organic (type: tinyint(1)): to tell if the fruit is organic.
* other_metadata (type: JSON): for storing other metadata information of the fruit.

We can use the following parameters with `MSSQLEngine.init_document_table()` to create the table:

1. `table_name`: The name of the table within the Cloud SQL database to store langchain documents.
2. `metadata_columns`: A list of `sqlalchemy.Column` indicating the list of metadata columns we need.
3. `content_column`: The name of the column to store the `page_content` of the langchain document. Default: `page_content`.
4. `metadata_json_column`: The name of the JSON column to store extra `metadata` of the langchain document. Default: `langchain_metadata`.
```
engine.init_document_table(
    TABLE_NAME,
    metadata_columns=[
        sqlalchemy.Column(
            "fruit_name",
            sqlalchemy.UnicodeText,
            primary_key=False,
            nullable=True,
        ),
        sqlalchemy.Column(
            "organic",
            sqlalchemy.Boolean,
            primary_key=False,
            nullable=True,
        ),
    ],
    content_column="description",
    metadata_json_column="other_metadata",
    overwrite_existing=True,
)
```

Save documents with `MSSQLDocumentSaver.add_documents(<documents>)`. As you can see in this example,

* `document.page_content` will be saved into the `description` column.
* `document.metadata.fruit_name` will be saved into the `fruit_name` column.
* `document.metadata.organic` will be saved into the `organic` column.
* `document.metadata.fruit_id` will be saved into the `other_metadata` column in JSON format.

```
test_docs = [
    Document(
        page_content="Granny Smith 150 0.99",
        metadata={"fruit_id": 1, "fruit_name": "Apple", "organic": 1},
    ),
]
saver = MSSQLDocumentSaver(
    engine=engine,
    table_name=TABLE_NAME,
    content_column="description",
    metadata_json_column="other_metadata",
)
saver.add_documents(test_docs)
```

```
with engine.connect() as conn:
    result = conn.execute(sqlalchemy.text(f'select * from "{TABLE_NAME}";'))
    print(result.keys())
    print(result.fetchall())
```

### Delete documents with customized page content & metadata

We can also delete documents from a table with customized metadata columns via `MSSQLDocumentSaver.delete(<documents>)`. The deletion criteria is:

A `row` should be deleted if there exists a `document` in the list, such that

* `document.page_content` equals `row[page_content]`
* For every metadata field `k` in `document.metadata`
    * `document.metadata[k]` equals `row[k]` or `document.metadata[k]` equals `row[langchain_metadata][k]`
* There is no extra metadata field present in `row` that is not in `document.metadata`.

```
loader = MSSQLLoader(engine=engine, table_name=TABLE_NAME)
docs = loader.load()
print("Documents before delete:", docs)
saver.delete(docs)
print("Documents after delete:", loader.load())
```
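To make the deletion criteria above easier to follow, here is a small illustrative helper (not part of the library) that restates the same rule in plain Python. The function name, and the assumption that `row` is a dict keyed by column name with the JSON metadata column already parsed, are hypothetical.

```
from langchain_core.documents import Document


def row_matches_document(
    row: dict,
    doc: Document,
    content_column: str = "description",
    metadata_json_column: str = "other_metadata",
) -> bool:
    # Hypothetical restatement of the deletion criteria described above.
    json_metadata = row.get(metadata_json_column) or {}

    # 1. The page content must equal the content column.
    if doc.page_content != row.get(content_column):
        return False

    # 2. Every metadata field must match either a top-level column
    #    or a key in the JSON metadata column.
    for key, value in doc.metadata.items():
        if row.get(key) != value and json_metadata.get(key) != value:
            return False

    # 3. The row must not carry metadata fields the document lacks.
    metadata_fields = (set(row) | set(json_metadata)) - {content_column, metadata_json_column}
    return metadata_fields <= set(doc.metadata)


# A row would be deleted if any document in the list matches it:
# should_delete = any(row_matches_document(row, d) for d in docs)
```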
https://python.langchain.com/docs/integrations/document_loaders/google_cloud_sql_pg/
> [Cloud SQL for PostgreSQL](https://cloud.google.com/sql/docs/postgres) is a fully-managed database service that helps you set up, maintain, manage, and administer your PostgreSQL relational databases on Google Cloud Platform.

Extend your database application to build AI-powered experiences leveraging Cloud SQL for PostgreSQL’s Langchain integrations. This notebook goes over how to use `Cloud SQL for PostgreSQL` to load Documents with the `PostgresLoader` class. Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-pg-python/).

[![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-cloud-sql-pg-python/blob/main/docs/document_loader.ipynb) Open In Colab

## Before you begin

To run this notebook, you will need to do the following:

* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)
* [Enable the Cloud SQL Admin API.](https://console.cloud.google.com/marketplace/product/google/sqladmin.googleapis.com)
* [Create a Cloud SQL for PostgreSQL instance.](https://cloud.google.com/sql/docs/postgres/create-instance)
* [Create a Cloud SQL for PostgreSQL database.](https://cloud.google.com/sql/docs/postgres/create-manage-databases)
* [Add a User to the database.](https://cloud.google.com/sql/docs/postgres/create-manage-users)

### 🦜🔗 Library Installation

Install the integration library, `langchain_google_cloud_sql_pg`.

```
%pip install --upgrade --quiet langchain_google_cloud_sql_pg
```

**Colab only:** Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top.

```
# # Automatically restart kernel after installs so that your environment can access the new packages
# import IPython
# app = IPython.Application.instance()
# app.kernel.do_shutdown(True)
```

### 🔐 Authentication

Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.

* If you are using Colab to run this notebook, use the cell below and continue.
* If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env).

```
from google.colab import auth

auth.authenticate_user()
```

### ☁ Set Your Google Cloud Project

Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.

If you don’t know your project ID, try the following:

* Run `gcloud config list`.
* Run `gcloud projects list`.
* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113).

```
# @title Project { display-mode: "form" }
PROJECT_ID = "gcp_project_id"  # @param {type:"string"}

# Set the project id
! gcloud config set project {PROJECT_ID}
```

## Basic Usage

### Set Cloud SQL database values

Find your database variables in the [Cloud SQL Instances page](https://console.cloud.google.com/sql/instances).
```
# @title Set Your Values Here { display-mode: "form" }
REGION = "us-central1"  # @param {type: "string"}
INSTANCE = "my-primary"  # @param {type: "string"}
DATABASE = "my-database"  # @param {type: "string"}
TABLE_NAME = "vector_store"  # @param {type: "string"}
```

### Cloud SQL Engine

One of the requirements and arguments to establish PostgreSQL as a document loader is a `PostgresEngine` object. The `PostgresEngine` configures a connection pool to your Cloud SQL for PostgreSQL database, enabling successful connections from your application and following industry best practices.

To create a `PostgresEngine` using `PostgresEngine.from_instance()` you need to provide only 4 things:

1. `project_id` : Project ID of the Google Cloud Project where the Cloud SQL instance is located.
2. `region` : Region where the Cloud SQL instance is located.
3. `instance` : The name of the Cloud SQL instance.
4. `database` : The name of the database to connect to on the Cloud SQL instance.

By default, [IAM database authentication](https://cloud.google.com/sql/docs/postgres/iam-authentication) will be used as the method of database authentication. This library uses the IAM principal belonging to the [Application Default Credentials (ADC)](https://cloud.google.com/docs/authentication/application-default-credentials) sourced from the environment.

Optionally, [built-in database authentication](https://cloud.google.com/sql/docs/postgres/users) using a username and password to access the Cloud SQL database can also be used. Just provide the optional `user` and `password` arguments to `PostgresEngine.from_instance()`:

* `user` : Database user to use for built-in database authentication and login
* `password` : Database password to use for built-in database authentication and login.

**Note**: This tutorial demonstrates the async interface. All async methods have corresponding sync methods.

```
from langchain_google_cloud_sql_pg import PostgresEngine

engine = await PostgresEngine.afrom_instance(
    project_id=PROJECT_ID,
    region=REGION,
    instance=INSTANCE,
    database=DATABASE,
)
```

### Create PostgresLoader

```
from langchain_google_cloud_sql_pg import PostgresLoader

# Creating a basic PostgreSQL object
loader = await PostgresLoader.create(engine, table_name=TABLE_NAME)
```

### Load Documents via default table

The loader returns a list of Documents from the table using the first column as page_content and all other columns as metadata. The default table will have the first column as page_content and the second column as metadata (JSON). Each row becomes a document. Please note that if you want your documents to have ids you will need to add them in.
```
from langchain_google_cloud_sql_pg import PostgresLoader

# Creating a basic PostgresLoader object
loader = await PostgresLoader.create(engine, table_name=TABLE_NAME)

docs = await loader.aload()
print(docs)
```

### Load documents via custom table/metadata or custom page content columns

```
loader = await PostgresLoader.create(
    engine,
    table_name=TABLE_NAME,
    content_columns=["product_name"],  # Optional
    metadata_columns=["id"],  # Optional
)
docs = await loader.aload()
print(docs)
```

### Set page content format

The loader returns a list of Documents, with one document per row, with the page content in a specified string format, e.g. text (space-separated concatenation), JSON, YAML, or CSV. JSON and YAML formats include headers, while text and CSV do not include field headers.

```
loader = await PostgresLoader.create(
    engine,
    table_name="products",
    content_columns=["product_name", "description"],
    format="YAML",
)
docs = await loader.aload()
print(docs)
```
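The note above says every async method has a sync counterpart. As a minimal sketch under that assumption, the example below uses `PostgresEngine.from_instance()` (named earlier on this page) together with a sync loader; the `create_sync` factory name is an assumption and may differ in the actual package, so verify it before relying on it.

```
from langchain_google_cloud_sql_pg import PostgresEngine, PostgresLoader

# Sync engine creation, as named in the Cloud SQL Engine section above.
engine = PostgresEngine.from_instance(
    project_id=PROJECT_ID,
    region=REGION,
    instance=INSTANCE,
    database=DATABASE,
)

# Assumed sync counterpart of PostgresLoader.create(); check the package
# for the exact method name.
loader = PostgresLoader.create_sync(engine, table_name=TABLE_NAME)
docs = loader.load()
print(docs)
```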
https://python.langchain.com/docs/integrations/document_loaders/google_cloud_sql_mysql/
> [Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers [MySQL](https://cloud.google.com/sql/mysql), [PostgreSQL](https://cloud.google.com/sql/postgresql), and [SQL Server](https://cloud.google.com/sql/sqlserver) database engines.

Extend your database application to build AI-powered experiences leveraging Cloud SQL’s Langchain integrations. This notebook goes over how to use [Cloud SQL for MySQL](https://cloud.google.com/sql/mysql) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `MySQLLoader` and `MySQLDocumentSaver`. Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-mysql-python/).

[![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-cloud-sql-mysql-python/blob/main/docs/document_loader.ipynb) Open In Colab

## Before You Begin

To run this notebook, you will need to do the following:

* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)
* [Enable the Cloud SQL Admin API.](https://console.cloud.google.com/marketplace/product/google/sqladmin.googleapis.com)
* [Create a Cloud SQL for MySQL instance](https://cloud.google.com/sql/docs/mysql/create-instance)
* [Create a Cloud SQL database](https://cloud.google.com/sql/docs/mysql/create-manage-databases)
* [Add an IAM database user to the database](https://cloud.google.com/sql/docs/mysql/add-manage-iam-users#creating-a-database-user) (Optional)

After you have confirmed access to the database in the runtime environment of this notebook, fill in the following values and run the cell before running the example scripts.

```
# @markdown Please fill in both the Google Cloud region and name of your Cloud SQL instance.
REGION = "us-central1"  # @param {type:"string"}
INSTANCE = "test-instance"  # @param {type:"string"}

# @markdown Please specify a database and a table for demo purpose.
DATABASE = "test"  # @param {type:"string"}
TABLE_NAME = "test-default"  # @param {type:"string"}
```

### 🦜🔗 Library Installation

The integration lives in its own `langchain-google-cloud-sql-mysql` package, so we need to install it.

```
%pip install --upgrade --quiet langchain-google-cloud-sql-mysql
```

**Colab only**: Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top.

```
# # Automatically restart kernel after installs so that your environment can access the new packages
# import IPython
# app = IPython.Application.instance()
# app.kernel.do_shutdown(True)
```

### ☁ Set Your Google Cloud Project

Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.

If you don’t know your project ID, try the following:

* Run `gcloud config list`.
* Run `gcloud projects list`.
* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113).
```
# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.
PROJECT_ID = "my-project-id"  # @param {type:"string"}

# Set the project id
!gcloud config set project {PROJECT_ID}
```

### 🔐 Authentication

Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.

* If you are using Colab to run this notebook, use the cell below and continue.
* If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env).

```
from google.colab import auth

auth.authenticate_user()
```

## Basic Usage

### MySQLEngine Connection Pool

Before saving or loading documents from a MySQL table, we first need to configure a connection pool to the Cloud SQL database. The `MySQLEngine` configures a connection pool to your Cloud SQL database, enabling successful connections from your application and following industry best practices.

To create a `MySQLEngine` using `MySQLEngine.from_instance()` you need to provide only 4 things:

1. `project_id` : Project ID of the Google Cloud Project where the Cloud SQL instance is located.
2. `region` : Region where the Cloud SQL instance is located.
3. `instance` : The name of the Cloud SQL instance.
4. `database` : The name of the database to connect to on the Cloud SQL instance.

By default, [IAM database authentication](https://cloud.google.com/sql/docs/mysql/iam-authentication#iam-db-auth) will be used as the method of database authentication. This library uses the IAM principal belonging to the [Application Default Credentials (ADC)](https://cloud.google.com/docs/authentication/application-default-credentials) sourced from the environment.

For more information on IAM database authentication, please see:

* [Configure an instance for IAM database authentication](https://cloud.google.com/sql/docs/mysql/create-edit-iam-instances)
* [Manage users with IAM database authentication](https://cloud.google.com/sql/docs/mysql/add-manage-iam-users)

Optionally, [built-in database authentication](https://cloud.google.com/sql/docs/mysql/built-in-authentication) using a username and password to access the Cloud SQL database can also be used. Just provide the optional `user` and `password` arguments to `MySQLEngine.from_instance()`:

* `user` : Database user to use for built-in database authentication and login
* `password` : Database password to use for built-in database authentication and login.

```
from langchain_google_cloud_sql_mysql import MySQLEngine

engine = MySQLEngine.from_instance(
    project_id=PROJECT_ID, region=REGION, instance=INSTANCE, database=DATABASE
)
```

### Initialize a table

Initialize a table of default schema via `MySQLEngine.init_document_table(<table_name>)`. Table columns:

* page_content (type: text)
* langchain_metadata (type: JSON)

The `overwrite_existing=True` flag means the newly initialized table will replace any existing table of the same name.

```
engine.init_document_table(TABLE_NAME, overwrite_existing=True)
```

### Save documents

Save langchain documents with `MySQLDocumentSaver.add_documents(<documents>)`. To initialize the `MySQLDocumentSaver` class you need to provide 2 things:
`engine` - An instance of a `MySQLEngine` engine. 2. `table_name` - The name of the table within the Cloud SQL database to store langchain documents. ``` from langchain_core.documents import Documentfrom langchain_google_cloud_sql_mysql import MySQLDocumentSavertest_docs = [ Document( page_content="Apple Granny Smith 150 0.99 1", metadata={"fruit_id": 1}, ), Document( page_content="Banana Cavendish 200 0.59 0", metadata={"fruit_id": 2}, ), Document( page_content="Orange Navel 80 1.29 1", metadata={"fruit_id": 3}, ),]saver = MySQLDocumentSaver(engine=engine, table_name=TABLE_NAME)saver.add_documents(test_docs) ``` ### Load documents[​](#load-documents "Direct link to Load documents") Load langchain documents with `MySQLLoader.load()` or `MySQLLoader.lazy_load()`. `lazy_load` returns a generator that only queries database during the iteration. To initialize `MySQLLoader` class you need to provide: 1. `engine` - An instance of a `MySQLEngine` engine. 2. `table_name` - The name of the table within the Cloud SQL database to store langchain documents. ``` from langchain_google_cloud_sql_mysql import MySQLLoaderloader = MySQLLoader(engine=engine, table_name=TABLE_NAME)docs = loader.lazy_load()for doc in docs: print("Loaded documents:", doc) ``` ### Load documents via query[​](#load-documents-via-query "Direct link to Load documents via query") Other than loading documents from a table, we can also choose to load documents from a view generated from a SQL query. For example: ``` from langchain_google_cloud_sql_mysql import MySQLLoaderloader = MySQLLoader( engine=engine, query=f"select * from `{TABLE_NAME}` where JSON_EXTRACT(langchain_metadata, '$.fruit_id') = 1;",)onedoc = loader.load()onedoc ``` The view generated from SQL query can have different schema than default table. In such cases, the behavior of MySQLLoader is the same as loading from table with non-default schema. Please refer to section [Load documents with customized document page content & metadata](#Load-documents-with-customized-document-page-content-&-metadata). ### Delete documents[​](#delete-documents "Direct link to Delete documents") Delete a list of langchain documents from MySQL table with `MySQLDocumentSaver.delete(<documents>)`. For table with default schema (page\_content, langchain\_metadata), the deletion criteria is: A `row` should be deleted if there exists a `document` in the list, such that * `document.page_content` equals `row[page_content]` * `document.metadata` equals `row[langchain_metadata]` ``` from langchain_google_cloud_sql_mysql import MySQLLoaderloader = MySQLLoader(engine=engine, table_name=TABLE_NAME)docs = loader.load()print("Documents before delete:", docs)saver.delete(onedoc)print("Documents after delete:", loader.load()) ``` ## Advanced Usage[​](#advanced-usage "Direct link to Advanced Usage") ### Load documents with customized document page content & metadata[​](#load-documents-with-customized-document-page-content-metadata "Direct link to Load documents with customized document page content & metadata") First we prepare an example table with non-default schema, and populate it with some arbitrary data. 
``` import sqlalchemywith engine.connect() as conn: conn.execute(sqlalchemy.text(f"DROP TABLE IF EXISTS `{TABLE_NAME}`")) conn.commit() conn.execute( sqlalchemy.text( f""" CREATE TABLE IF NOT EXISTS `{TABLE_NAME}`( fruit_id INT AUTO_INCREMENT PRIMARY KEY, fruit_name VARCHAR(100) NOT NULL, variety VARCHAR(50), quantity_in_stock INT NOT NULL, price_per_unit DECIMAL(6,2) NOT NULL, organic TINYINT(1) NOT NULL ) """ ) ) conn.execute( sqlalchemy.text( f""" INSERT INTO `{TABLE_NAME}` (fruit_name, variety, quantity_in_stock, price_per_unit, organic) VALUES ('Apple', 'Granny Smith', 150, 0.99, 1), ('Banana', 'Cavendish', 200, 0.59, 0), ('Orange', 'Navel', 80, 1.29, 1); """ ) ) conn.commit() ``` If we still load langchain documents with default parameters of `MySQLLoader` from this example table, the `page_content` of loaded documents will be the first column of the table, and `metadata` will be consisting of key-value pairs of all the other columns. ``` loader = MySQLLoader( engine=engine, table_name=TABLE_NAME,)loader.load() ``` We can specify the content and metadata we want to load by setting the `content_columns` and `metadata_columns` when initializing the `MySQLLoader`. 1. `content_columns`: The columns to write into the `page_content` of the document. 2. `metadata_columns`: The columns to write into the `metadata` of the document. For example here, the values of columns in `content_columns` will be joined together into a space-separated string, as `page_content` of loaded documents, and `metadata` of loaded documents will only contain key-value pairs of columns specified in `metadata_columns`. ``` loader = MySQLLoader( engine=engine, table_name=TABLE_NAME, content_columns=[ "variety", "quantity_in_stock", "price_per_unit", "organic", ], metadata_columns=["fruit_id", "fruit_name"],)loader.load() ``` ### Save document with customized page content & metadata[​](#save-document-with-customized-page-content-metadata "Direct link to Save document with customized page content & metadata") In order to save langchain document into table with customized metadata fields. We need first create such a table via `MySQLEngine.init_document_table()`, and specify the list of `metadata_columns` we want it to have. In this example, the created table will have table columns: * description (type: text): for storing fruit description. * fruit\_name (type text): for storing fruit name. * organic (type tinyint(1)): to tell if the fruit is organic. * other\_metadata (type: JSON): for storing other metadata information of the fruit. We can use the following parameters with `MySQLEngine.init_document_table()` to create the table: 1. `table_name`: The name of the table within the Cloud SQL database to store langchain documents. 2. `metadata_columns`: A list of `sqlalchemy.Column` indicating the list of metadata columns we need. 3. `content_column`: The name of column to store `page_content` of langchain document. Default: `page_content`. 4. `metadata_json_column`: The name of JSON column to store extra `metadata` of langchain document. Default: `langchain_metadata`. ``` engine.init_document_table( TABLE_NAME, metadata_columns=[ sqlalchemy.Column( "fruit_name", sqlalchemy.UnicodeText, primary_key=False, nullable=True, ), sqlalchemy.Column( "organic", sqlalchemy.Boolean, primary_key=False, nullable=True, ), ], content_column="description", metadata_json_column="other_metadata", overwrite_existing=True,) ``` Save documents with `MySQLDocumentSaver.add_documents(<documents>)`. 
As you can see in this example, * `document.page_content` will be saved into `description` column. * `document.metadata.fruit_name` will be saved into `fruit_name` column. * `document.metadata.organic` will be saved into `organic` column. * `document.metadata.fruit_id` will be saved into `other_metadata` column in JSON format. ``` test_docs = [ Document( page_content="Granny Smith 150 0.99", metadata={"fruit_id": 1, "fruit_name": "Apple", "organic": 1}, ),]saver = MySQLDocumentSaver( engine=engine, table_name=TABLE_NAME, content_column="description", metadata_json_column="other_metadata",)saver.add_documents(test_docs) ``` ``` with engine.connect() as conn: result = conn.execute(sqlalchemy.text(f"select * from `{TABLE_NAME}`;")) print(result.keys()) print(result.fetchall()) ``` ### Delete documents with customized page content & metadata[​](#delete-documents-with-customized-page-content-metadata "Direct link to Delete documents with customized page content & metadata") We can also delete documents from table with customized metadata columns via `MySQLDocumentSaver.delete(<documents>)`. The deletion criteria is: A `row` should be deleted if there exists a `document` in the list, such that * `document.page_content` equals `row[page_content]` * For every metadata field `k` in `document.metadata` * `document.metadata[k]` equals `row[k]` or `document.metadata[k]` equals `row[langchain_metadata][k]` * There no extra metadata field presents in `row` but not in `document.metadata`. ``` loader = MySQLLoader(engine=engine, table_name=TABLE_NAME)docs = loader.load()print("Documents before delete:", docs)saver.delete(docs)print("Documents after delete:", loader.load()) ```
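To make these criteria concrete, here is a minimal sketch (it reuses the `saver`, `loader`, and `test_docs` from this section and first re-inserts the document deleted above; the values are just the ones used in these examples, not part of the library's API). A document that matches the row's content but omits metadata columns that are populated in the row should not cause a deletion:

```
# Re-insert the example document from the save step above.
saver.add_documents(test_docs)

# This document matches the row's content, but the row's fruit_name and
# organic columns are populated and missing from the document's metadata,
# so per the criteria above the row should NOT be deleted.
partial_doc = Document(page_content="Granny Smith 150 0.99", metadata={"fruit_id": 1})
saver.delete([partial_doc])
print("Rows still present:", len(loader.load()))
```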
## Google Cloud Storage Directory

> [Google Cloud Storage](https://en.wikipedia.org/wiki/Google_Cloud_Storage) is a managed service for storing unstructured data.

This covers how to load document objects from a `Google Cloud Storage (GCS) directory (bucket)`.

```
%pip install --upgrade --quiet google-cloud-storage
```

```
from langchain_community.document_loaders import GCSDirectoryLoader
```

```
loader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc")
```

```
/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
  warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)
/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
  warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)
```

```
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpz37njh7u/fake.docx'}, lookup_index=0)]
```

## Specifying a prefix

You can also specify a prefix for more fine-grained control over what files to load, including loading all files from a specific folder.

```
loader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc", prefix="fake")
```

```
/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
  warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)
/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
  warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)
```

```
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpylg6291i/fake.docx'}, lookup_index=0)]
```

## Continue on failure to load a single file

Files in a GCS bucket may cause errors during processing. Enable the `continue_on_failure=True` argument to allow silent failure. This means that a failure to process a single file will not break the function; it will log a warning instead.

```
loader = GCSDirectoryLoader(
    project_name="aist", bucket="testing-hwc", continue_on_failure=True
)
```
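Loading then works exactly as before; a minimal usage sketch (assuming the `testing-hwc` bucket above exists and your credentials can read it):

```
# Files that fail to process are skipped and logged as warnings,
# so the call below still returns the documents that did load.
docs = loader.load()
print("Loaded documents:", len(docs))
```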
## Google Firestore in Datastore Mode

> [Firestore in Datastore Mode](https://cloud.google.com/datastore) is a NoSQL document database built for automatic scaling, high performance and ease of application development. Extend your database application to build AI-powered experiences leveraging Datastore’s Langchain integrations.

This notebook goes over how to use [Firestore in Datastore Mode](https://cloud.google.com/datastore) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `DatastoreLoader` and `DatastoreSaver`. Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-datastore-python/).

[![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-datastore-python/blob/main/docs/document_loader.ipynb) Open In Colab

## Before You Begin

To run this notebook, you will need to do the following:

* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)
* [Enable the Datastore API](https://console.cloud.google.com/flows/enableapi?apiid=datastore.googleapis.com)
* [Create a Firestore in Datastore Mode database](https://cloud.google.com/datastore/docs/manage-databases)

After you have confirmed access to the database in the runtime environment of this notebook, fill in the following values and run the cell before running the example scripts.

### 🦜🔗 Library Installation

The integration lives in its own `langchain-google-datastore` package, so we need to install it.

```
%pip install --upgrade --quiet langchain-google-datastore
```

**Colab only**: Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top.

```
# # Automatically restart kernel after installs so that your environment can access the new packages
# import IPython

# app = IPython.Application.instance()
# app.kernel.do_shutdown(True)
```

### ☁ Set Your Google Cloud Project

Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.

If you don’t know your project ID, try the following:

* Run `gcloud config list`.
* Run `gcloud projects list`.
* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113).

```
# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.
PROJECT_ID = "my-project-id"  # @param {type:"string"}

# Set the project id
!gcloud config set project {PROJECT_ID}
```

### 🔐 Authentication

Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.

* If you are using Colab to run this notebook, use the cell below and continue.
* If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env).

```
from google.colab import auth

auth.authenticate_user()
```

## Basic Usage

### Save documents

Save langchain documents with `DatastoreSaver.upsert_documents(<documents>)`. By default it will try to extract the entity key from the `key` in the Document metadata.

```
from langchain_core.documents import Document
from langchain_google_datastore import DatastoreSaver

saver = DatastoreSaver()

data = [Document(page_content="Hello, World!")]
saver.upsert_documents(data)
```

#### Save documents without key

If a `kind` is specified, the documents will be stored with an auto-generated id.

```
saver = DatastoreSaver("MyKind")

saver.upsert_documents(data)
```

### Load documents via Kind

Load langchain documents with `DatastoreLoader.load()` or `DatastoreLoader.lazy_load()`. `lazy_load` returns a generator that only queries the database during the iteration. To initialize the `DatastoreLoader` class you need to provide:

1. `source` - The source to load the documents. It can be an instance of Query or the name of the Datastore kind to read from.

```
from langchain_google_datastore import DatastoreLoader

loader = DatastoreLoader("MyKind")
data = loader.load()
```

### Load documents via query

Other than loading documents from a kind, we can also choose to load documents from a query. For example:

```
from google.cloud import datastore

client = datastore.Client(database="non-default-db", namespace="custom_namespace")
query_load = client.query(kind="MyKind")
query_load.add_filter("region", "=", "west_coast")

loader_document = DatastoreLoader(query_load)

data = loader_document.load()
```

### Delete documents

Delete a list of langchain documents from Datastore with `DatastoreSaver.delete_documents(<documents>)`.

```
saver = DatastoreSaver()

saver.delete_documents(data)

keys_to_delete = [
    ["Kind1", "identifier"],
    ["Kind2", 123],
    ["Kind3", "identifier", "NestedKind", 456],
]
# The Documents will be ignored and only the document ids will be used.
saver.delete_documents(data, keys_to_delete)
```

## Advanced Usage

### Load documents with customized document page content & metadata

The arguments of `page_content_properties` and `metadata_properties` will specify the Entity properties to be written into LangChain Document `page_content` and `metadata`.

```
loader = DatastoreLoader(
    source="MyKind",
    page_content_fields=["data_field"],
    metadata_fields=["metadata_field"],
)

data = loader.load()
```

### Customize Page Content Format

When the `page_content` contains only one field, the information will be the field value only. Otherwise the `page_content` will be in JSON format (see the short sketch at the end of this page).

### Customize Connection & Authentication

```
from google.auth import compute_engine
from google.cloud.firestore import Client

client = Client(database="non-default-db", creds=compute_engine.Credentials())

loader = DatastoreLoader(
    source="foo",
    client=client,
)
```
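To illustrate the "Customize Page Content Format" note above, here is a minimal sketch (it reuses the parameter names from the loader example in this section; `MyKind` and the property names are illustrative assumptions). With more than one page content property, each loaded document's `page_content` is a JSON string; with a single property it is just that property's value.

```
# With two page content properties, page_content is rendered as JSON,
# e.g. '{"data_field": ..., "extra_field": ...}'; with one property it
# would just be that property's value.
loader = DatastoreLoader(
    source="MyKind",
    page_content_fields=["data_field", "extra_field"],
    metadata_fields=["metadata_field"],
)
docs = loader.load()
print(docs[0].page_content)
```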
## Google El Carro for Oracle Workloads

> Google [El Carro Oracle Operator](https://github.com/GoogleCloudPlatform/elcarro-oracle-operator) offers a way to run Oracle databases in Kubernetes as a portable, open source, community-driven, no vendor lock-in container orchestration system. El Carro provides a powerful declarative API for comprehensive and consistent configuration and deployment as well as for real-time operations and monitoring. Extend your Oracle database’s capabilities to build AI-powered experiences by leveraging the El Carro Langchain integration.

This guide goes over how to use the El Carro Langchain integration to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `ElCarroLoader` and `ElCarroDocumentSaver`. This integration works for any Oracle database, regardless of where it is running. Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-el-carro-python/).

[![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-el-carro-python/blob/main/docs/document_loader.ipynb) Open In Colab

## Before You Begin

Please complete the [Getting Started](https://github.com/googleapis/langchain-google-el-carro-python/tree/main/README.md#getting-started) section of the README to set up your El Carro Oracle database.

### 🦜🔗 Library Installation

The integration lives in its own `langchain-google-el-carro` package, so we need to install it.

```
%pip install --upgrade --quiet langchain-google-el-carro
```

## Basic Usage

### Set Up Oracle Database Connection

Fill out the following variables with your Oracle database connection details.

```
# @title Set Your Values Here { display-mode: "form" }
HOST = "127.0.0.1"  # @param {type: "string"}
PORT = 3307  # @param {type: "integer"}
DATABASE = "my-database"  # @param {type: "string"}
TABLE_NAME = "message_store"  # @param {type: "string"}
USER = "my-user"  # @param {type: "string"}
PASSWORD = input("Please provide a password to be used for the database user: ")
```

If you are using El Carro, you can find the hostname and port values in the status of the El Carro Kubernetes instance. Use the user password you created for your PDB. Example Output:

```
kubectl get -w instances.oracle.db.anthosapis.com -n db
NAME   DB ENGINE   VERSION   EDITION   ENDPOINT      URL                DB NAMES    BACKUP ID   READYSTATUS   READYREASON      DBREADYSTATUS   DBREADYREASON
mydb   Oracle      18c       Express   mydb-svc.db   34.71.69.25:6021   ['pdbname']             TRUE          CreateComplete   True            CreateComplete
```

### ElCarroEngine Connection Pool

`ElCarroEngine` configures a connection pool to your Oracle database, enabling successful connections from your application and following industry best practices.

```
from langchain_google_el_carro import ElCarroEngine

elcarro_engine = ElCarroEngine.from_instance(
    db_host=HOST,
    db_port=PORT,
    db_name=DATABASE,
    db_user=USER,
    db_password=PASSWORD,
)
```

### Initialize a table

Initialize a table of the default schema via `elcarro_engine.init_document_table(<table_name>)`.

Table Columns:

* page\_content (type: text)
* langchain\_metadata (type: JSON)

```
elcarro_engine.drop_document_table(TABLE_NAME)
elcarro_engine.init_document_table(
    table_name=TABLE_NAME,
)
```

### Save documents

Save langchain documents with `ElCarroDocumentSaver.add_documents(<documents>)`. To initialize the `ElCarroDocumentSaver` class you need to provide 2 things:

1. `elcarro_engine` - An instance of an `ElCarroEngine` engine.
2. `table_name` - The name of the table within the Oracle database to store langchain documents.

```
from langchain_core.documents import Document
from langchain_google_el_carro import ElCarroDocumentSaver

doc = Document(
    page_content="Banana",
    metadata={"type": "fruit", "weight": 100, "organic": 1},
)

saver = ElCarroDocumentSaver(
    elcarro_engine=elcarro_engine,
    table_name=TABLE_NAME,
)
saver.add_documents([doc])
```

### Load documents

Load langchain documents with `ElCarroLoader.load()` or `ElCarroLoader.lazy_load()`. `lazy_load` returns a generator that only queries the database during the iteration. To initialize the `ElCarroLoader` class you need to provide:

1. `elcarro_engine` - An instance of an `ElCarroEngine` engine.
2. `table_name` - The name of the table within the Oracle database to store langchain documents.

```
from langchain_google_el_carro import ElCarroLoader

loader = ElCarroLoader(elcarro_engine=elcarro_engine, table_name=TABLE_NAME)
docs = loader.lazy_load()
for doc in docs:
    print("Loaded documents:", doc)
```

### Load documents via query

Other than loading documents from a table, we can also choose to load documents from a view generated from a SQL query. For example:

```
from langchain_google_el_carro import ElCarroLoader

loader = ElCarroLoader(
    elcarro_engine=elcarro_engine,
    query=f"SELECT * FROM {TABLE_NAME} WHERE json_value(langchain_metadata, '$.organic') = '1'",
)
onedoc = loader.load()
print(onedoc)
```

The view generated from a SQL query can have a different schema than the default table. In such cases, the behavior of `ElCarroLoader` is the same as loading from a table with a non-default schema. Please refer to section [Load documents with customized document page content & metadata](#load-documents-with-customized-document-page-content--metadata).

### Delete documents

Delete a list of langchain documents from an Oracle table with `ElCarroDocumentSaver.delete(<documents>)`.

For a table with a default schema (page\_content, langchain\_metadata), the deletion criteria is:

A `row` should be deleted if there exists a `document` in the list, such that

* `document.page_content` equals `row[page_content]`
* `document.metadata` equals `row[langchain_metadata]`

```
docs = loader.load()
print("Documents before delete:", docs)
saver.delete(onedoc)
print("Documents after delete:", loader.load())
```

## Advanced Usage

### Load documents with customized document page content & metadata

First we prepare an example table with a non-default schema, and populate it with some arbitrary data.

```
import sqlalchemy

create_table_query = f"""CREATE TABLE {TABLE_NAME} (
    fruit_id NUMBER GENERATED BY DEFAULT AS IDENTITY (START WITH 1),
    fruit_name VARCHAR2(100) NOT NULL,
    variety VARCHAR2(50),
    quantity_in_stock NUMBER(10) NOT NULL,
    price_per_unit NUMBER(6,2) NOT NULL,
    organic NUMBER(3) NOT NULL
)"""
elcarro_engine.drop_document_table(TABLE_NAME)

with elcarro_engine.connect() as conn:
    conn.execute(sqlalchemy.text(create_table_query))
    conn.commit()
    conn.execute(
        sqlalchemy.text(
            f"""
            INSERT INTO {TABLE_NAME} (fruit_name, variety, quantity_in_stock, price_per_unit, organic)
            VALUES ('Apple', 'Granny Smith', 150, 0.99, 1)
            """
        )
    )
    conn.execute(
        sqlalchemy.text(
            f"""
            INSERT INTO {TABLE_NAME} (fruit_name, variety, quantity_in_stock, price_per_unit, organic)
            VALUES ('Banana', 'Cavendish', 200, 0.59, 0)
            """
        )
    )
    conn.execute(
        sqlalchemy.text(
            f"""
            INSERT INTO {TABLE_NAME} (fruit_name, variety, quantity_in_stock, price_per_unit, organic)
            VALUES ('Orange', 'Navel', 80, 1.29, 1)
            """
        )
    )
    conn.commit()
```

If we still load langchain documents with the default parameters of `ElCarroLoader` from this example table, the `page_content` of the loaded documents will be the first column of the table, and `metadata` will consist of key-value pairs of all the other columns.

```
loader = ElCarroLoader(
    elcarro_engine=elcarro_engine,
    table_name=TABLE_NAME,
)
loaded_docs = loader.load()
print(f"Loaded Documents: [{loaded_docs}]")
```

We can specify the content and metadata we want to load by setting the `content_columns` and `metadata_columns` when initializing the `ElCarroLoader`.

1. `content_columns`: The columns to write into the `page_content` of the document.
2. `metadata_columns`: The columns to write into the `metadata` of the document.

For example here, the values of the columns in `content_columns` will be joined together into a space-separated string as the `page_content` of the loaded documents, and the `metadata` of the loaded documents will only contain key-value pairs of the columns specified in `metadata_columns`.

```
loader = ElCarroLoader(
    elcarro_engine=elcarro_engine,
    table_name=TABLE_NAME,
    content_columns=[
        "variety",
        "quantity_in_stock",
        "price_per_unit",
        "organic",
    ],
    metadata_columns=["fruit_id", "fruit_name"],
)
loaded_docs = loader.load()
print(f"Loaded Documents: [{loaded_docs}]")
```

### Save document with customized page content & metadata

In order to save a langchain document into a table with customized metadata fields, we first need to create such a table via `ElCarroEngine.init_document_table()`, and specify the list of `metadata_columns` we want it to have. In this example, the created table will have the following table columns:

* content (type: text): for storing fruit description.
* type (type: VARCHAR2(200)): for storing fruit type.
* weight (type: INT): for storing fruit weight.
* extra\_json\_metadata (type: JSON): for storing other metadata information of the fruit.

We can use the following parameters with `elcarro_engine.init_document_table()` to create the table:

1. `table_name`: The name of the table within the Oracle database to store langchain documents.
2. `metadata_columns`: A list of `sqlalchemy.Column` indicating the list of metadata columns we need.
3. `content_column`: The name of the column to store the `page_content` of the langchain document. Default: `"page_content", "VARCHAR2(4000)"`
4. `metadata_json_column`: The name of the column to store the extra JSON `metadata` of the langchain document. Default: `"langchain_metadata", "VARCHAR2(4000)"`.

```
elcarro_engine.drop_document_table(TABLE_NAME)
elcarro_engine.init_document_table(
    table_name=TABLE_NAME,
    metadata_columns=[
        sqlalchemy.Column("type", sqlalchemy.dialects.oracle.VARCHAR2(200)),
        sqlalchemy.Column("weight", sqlalchemy.INT),
    ],
    content_column="content",
    metadata_json_column="extra_json_metadata",
)
```

Save documents with `ElCarroDocumentSaver.add_documents(<documents>)`. As you can see in this example,

* `document.page_content` will be saved into the `content` column.
* `document.metadata.type` will be saved into the `type` column.
* `document.metadata.weight` will be saved into the `weight` column.
* `document.metadata.organic` will be saved into the `extra_json_metadata` column in JSON format.

```
doc = Document(
    page_content="Banana",
    metadata={"type": "fruit", "weight": 100, "organic": 1},
)
print(f"Original Document: [{doc}]")

saver = ElCarroDocumentSaver(
    elcarro_engine=elcarro_engine,
    table_name=TABLE_NAME,
    content_column="content",
    metadata_json_column="extra_json_metadata",
)
saver.add_documents([doc])

loader = ElCarroLoader(
    elcarro_engine=elcarro_engine,
    table_name=TABLE_NAME,
    content_columns=["content"],
    metadata_columns=[
        "type",
        "weight",
    ],
    metadata_json_column="extra_json_metadata",
)
loaded_docs = loader.load()
print(f"Loaded Document: [{loaded_docs[0]}]")
```

### Delete documents with customized page content & metadata

We can also delete documents from a table with customized metadata columns via `ElCarroDocumentSaver.delete(<documents>)`. The deletion criteria is:

A `row` should be deleted if there exists a `document` in the list, such that

* `document.page_content` equals `row[page_content]`
* For every metadata field `k` in `document.metadata`
    * `document.metadata[k]` equals `row[k]` or `document.metadata[k]` equals `row[langchain_metadata][k]`
* There is no extra metadata field present in `row` that is not in `document.metadata`.

```
loader = ElCarroLoader(elcarro_engine=elcarro_engine, table_name=TABLE_NAME)
saver.delete(loader.load())
print(f"Documents left: {len(loader.load())}")
```

## More examples

Please look at [demo\_doc\_loader\_basic.py](https://github.com/googleapis/langchain-google-el-carro-python/tree/main/samples/demo_doc_loader_basic.py) and [demo\_doc\_loader\_advanced.py](https://github.com/googleapis/langchain-google-el-carro-python/tree/main/samples/demo_doc_loader_advanced.py) for complete code examples.
https://python.langchain.com/docs/integrations/document_loaders/google_memorystore_redis/
## Google Memorystore for Redis

> [Google Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) is a fully-managed service that is powered by the Redis in-memory data store to build application caches that provide sub-millisecond data access. Extend your database application to build AI-powered experiences leveraging Memorystore for Redis’s Langchain integrations.

This notebook goes over how to use [Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `MemorystoreDocumentLoader` and `MemorystoreDocumentSaver`.

Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-memorystore-redis-python/).

[![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-memorystore-redis-python/blob/main/docs/document_loader.ipynb) Open In Colab

## Before You Begin[​](#before-you-begin "Direct link to Before You Begin")

To run this notebook, you will need to do the following:

* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)
* [Enable the Memorystore for Redis API](https://console.cloud.google.com/flows/enableapi?apiid=redis.googleapis.com)
* [Create a Memorystore for Redis instance](https://cloud.google.com/memorystore/docs/redis/create-instance-console). Ensure that the version is greater than or equal to 5.0.

After confirming access to the database in the runtime environment of this notebook, fill in the following values and run the cell before running the example scripts.

```
# @markdown Please specify an endpoint associated with the instance and a key prefix for demo purpose.
ENDPOINT = "redis://127.0.0.1:6379"  # @param {type:"string"}
KEY_PREFIX = "doc:"  # @param {type:"string"}
```

### 🦜🔗 Library Installation[​](#library-installation "Direct link to 🦜🔗 Library Installation")

The integration lives in its own `langchain-google-memorystore-redis` package, so we need to install it.

```
%pip install --upgrade --quiet langchain-google-memorystore-redis
```

**Colab only**: Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top.

```
# # Automatically restart kernel after installs so that your environment can access the new packages
# import IPython

# app = IPython.Application.instance()
# app.kernel.do_shutdown(True)
```

### ☁ Set Your Google Cloud Project[​](#set-your-google-cloud-project "Direct link to ☁ Set Your Google Cloud Project")

Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.

If you don’t know your project ID, try the following:

* Run `gcloud config list`.
* Run `gcloud projects list`.
* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113).

```
# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.
PROJECT_ID = "my-project-id"  # @param {type:"string"}

# Set the project id
!gcloud config set project {PROJECT_ID}
```

### 🔐 Authentication[​](#authentication "Direct link to 🔐 Authentication")

Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.

* If you are using Colab to run this notebook, use the cell below and continue.
* If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env).

```
from google.colab import auth

auth.authenticate_user()
```

## Basic Usage[​](#basic-usage "Direct link to Basic Usage")

### Save documents[​](#save-documents "Direct link to Save documents")

Save langchain documents with `MemorystoreDocumentSaver.add_documents(<documents>)`. To initialize the `MemorystoreDocumentSaver` class you need to provide 2 things:

1. `client` - A `redis.Redis` client object.
2. `key_prefix` - A prefix for the keys to store Documents in Redis.

The Documents will be stored under randomly generated keys with the specified `key_prefix`. Alternatively, you can designate the suffixes of the keys by specifying `ids` in the `add_documents` method.

```
import redis
from langchain_core.documents import Document
from langchain_google_memorystore_redis import MemorystoreDocumentSaver

test_docs = [
    Document(
        page_content="Apple Granny Smith 150 0.99 1",
        metadata={"fruit_id": 1},
    ),
    Document(
        page_content="Banana Cavendish 200 0.59 0",
        metadata={"fruit_id": 2},
    ),
    Document(
        page_content="Orange Navel 80 1.29 1",
        metadata={"fruit_id": 3},
    ),
]
doc_ids = [f"{i}" for i in range(len(test_docs))]

redis_client = redis.from_url(ENDPOINT)
saver = MemorystoreDocumentSaver(
    client=redis_client,
    key_prefix=KEY_PREFIX,
    content_field="page_content",
)
saver.add_documents(test_docs, ids=doc_ids)
```

### Load documents[​](#load-documents "Direct link to Load documents")

Initialize a loader that loads all documents stored in the Memorystore for Redis instance with a specific prefix.

Load langchain documents with `MemorystoreDocumentLoader.load()` or `MemorystoreDocumentLoader.lazy_load()`. `lazy_load` returns a generator that only queries the database during iteration. To initialize the `MemorystoreDocumentLoader` class you need to provide:

1. `client` - A `redis.Redis` client object.
2. `key_prefix` - A prefix for the keys to store Documents in Redis.

```
import redis
from langchain_google_memorystore_redis import MemorystoreDocumentLoader

redis_client = redis.from_url(ENDPOINT)
loader = MemorystoreDocumentLoader(
    client=redis_client,
    key_prefix=KEY_PREFIX,
    content_fields=set(["page_content"]),
)
for doc in loader.lazy_load():
    print("Loaded documents:", doc)
```

### Delete documents[​](#delete-documents "Direct link to Delete documents")

Delete all keys with the specified prefix in the Memorystore for Redis instance with `MemorystoreDocumentSaver.delete()`. You can also specify the suffixes of the keys if you know them.

```
docs = loader.load()
print("Documents before delete:", docs)

saver.delete(ids=[0])
print("Documents after delete:", loader.load())

saver.delete()
print("Documents after delete all:", loader.load())
```

## Advanced Usage[​](#advanced-usage "Direct link to Advanced Usage")

### Customize Document Page Content & Metadata[​](#customize-document-page-content-metadata "Direct link to Customize Document Page Content & Metadata")

When initializing a loader with more than one content field, the `page_content` of the loaded Documents will contain a JSON-encoded string whose top-level fields are the fields specified in `content_fields`. If `metadata_fields` are specified, the `metadata` field of the loaded Documents will only contain the top-level fields matching the specified `metadata_fields`. If any value of a metadata field is stored as a JSON-encoded string, it will be decoded before being loaded into the metadata fields.
```
loader = MemorystoreDocumentLoader(
    client=redis_client,
    key_prefix=KEY_PREFIX,
    content_fields=set(["content_field_1", "content_field_2"]),
    metadata_fields=set(["title", "author"]),
)
```
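To see what this customization yields, a minimal usage sketch (reusing the `loader` configured above, with the hypothetical field names from that snippet; the printed shape is an illustration based on the description, not captured output):

```
for doc in loader.lazy_load():
    # With more than one content field, page_content is a JSON-encoded string,
    # e.g. '{"content_field_1": "...", "content_field_2": "..."}'
    print(doc.page_content)
    # metadata only contains the requested top-level fields
    print(doc.metadata.get("title"), doc.metadata.get("author"))
```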
https://python.langchain.com/docs/integrations/document_loaders/google_firestore/
## Google Firestore (Native Mode)

> [Firestore](https://cloud.google.com/firestore) is a serverless document-oriented database that scales to meet any demand. Extend your database application to build AI-powered experiences leveraging Firestore’s Langchain integrations.

This notebook goes over how to use [Firestore](https://cloud.google.com/firestore) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `FirestoreLoader` and `FirestoreSaver`.

Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-firestore-python/).

[![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-firestore-python/blob/main/docs/document_loader.ipynb) Open In Colab

## Before You Begin[​](#before-you-begin "Direct link to Before You Begin")

To run this notebook, you will need to do the following:

* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)
* [Enable the Firestore API](https://console.cloud.google.com/flows/enableapi?apiid=firestore.googleapis.com)
* [Create a Firestore database](https://cloud.google.com/firestore/docs/manage-databases)

After confirming access to the database in the runtime environment of this notebook, fill in the following values and run the cell before running the example scripts.

```
# @markdown Please specify a source for demo purpose.
SOURCE = "test"  # @param {type:"Query"|"CollectionGroup"|"DocumentReference"|"string"}
```

### 🦜🔗 Library Installation[​](#library-installation "Direct link to 🦜🔗 Library Installation")

The integration lives in its own `langchain-google-firestore` package, so we need to install it.

```
%pip install --upgrade --quiet langchain-google-firestore
```

**Colab only**: Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top.

```
# # Automatically restart kernel after installs so that your environment can access the new packages
# import IPython

# app = IPython.Application.instance()
# app.kernel.do_shutdown(True)
```

### ☁ Set Your Google Cloud Project[​](#set-your-google-cloud-project "Direct link to ☁ Set Your Google Cloud Project")

Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.

If you don’t know your project ID, try the following:

* Run `gcloud config list`.
* Run `gcloud projects list`.
* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113).

```
# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.
PROJECT_ID = "my-project-id"  # @param {type:"string"}

# Set the project id
!gcloud config set project {PROJECT_ID}
```

### 🔐 Authentication[​](#authentication "Direct link to 🔐 Authentication")

Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.

* If you are using Colab to run this notebook, use the cell below and continue.
* If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env).

```
from google.colab import auth

auth.authenticate_user()
```

## Basic Usage[​](#basic-usage "Direct link to Basic Usage")

### Save documents[​](#save-documents "Direct link to Save documents")

`FirestoreSaver` can store Documents into Firestore.
By default it will try to extract the Document reference from the metadata.

Save langchain documents with `FirestoreSaver.upsert_documents(<documents>)`.

```
from langchain_core.documents import Document
from langchain_google_firestore import FirestoreSaver

saver = FirestoreSaver()

data = [Document(page_content="Hello, World!")]
saver.upsert_documents(data)
```

#### Save documents without reference[​](#save-documents-without-reference "Direct link to Save documents without reference")

If a collection is specified, the documents will be stored with an auto-generated id.

```
saver = FirestoreSaver("Collection")

saver.upsert_documents(data)
```

#### Save documents with other references[​](#save-documents-with-other-references "Direct link to Save documents with other references")

```
doc_ids = ["AnotherCollection/doc_id", "foo/bar"]
saver = FirestoreSaver()

saver.upsert_documents(documents=data, document_ids=doc_ids)
```

### Load from Collection or SubCollection[​](#load-from-collection-or-subcollection "Direct link to Load from Collection or SubCollection")

Load langchain documents with `FirestoreLoader.load()` or `FirestoreLoader.lazy_load()`. `lazy_load` returns a generator that only queries the database during iteration. To initialize the `FirestoreLoader` class you need to provide:

1. `source` - An instance of a Query, CollectionGroup, DocumentReference, or a single `/`-delimited path to a Firestore collection.

```
from langchain_google_firestore import FirestoreLoader

loader_collection = FirestoreLoader("Collection")
loader_subcollection = FirestoreLoader("Collection/doc/SubCollection")

data_collection = loader_collection.load()
data_subcollection = loader_subcollection.load()
```

### Load a single Document[​](#load-a-single-document "Direct link to Load a single Document")

```
from google.cloud import firestore

client = firestore.Client()
doc_ref = client.collection("foo").document("bar")

loader_document = FirestoreLoader(doc_ref)

data = loader_document.load()
```

### Load from CollectionGroup or Query[​](#load-from-collectiongroup-or-query "Direct link to Load from CollectionGroup or Query")

```
from google.cloud.firestore import CollectionGroup, FieldFilter, Query

col_ref = client.collection("col_group")
collection_group = CollectionGroup(col_ref)

loader_group = FirestoreLoader(collection_group)

col_ref = client.collection("collection")
query = col_ref.where(filter=FieldFilter("region", "==", "west_coast"))

loader_query = FirestoreLoader(query)
```

### Delete documents[​](#delete-documents "Direct link to Delete documents")

Delete a list of langchain documents from a Firestore collection with `FirestoreSaver.delete_documents(<documents>)`. If document ids are provided, the Documents will be ignored and only the ids will be used.

```
saver = FirestoreSaver()

saver.delete_documents(data)

# The Documents will be ignored and only the document ids will be used.
saver.delete_documents(data, doc_ids)
```

## Advanced Usage[​](#advanced-usage "Direct link to Advanced Usage")

### Load documents with customized document page content & metadata[​](#load-documents-with-customize-document-page-content-metadata "Direct link to Load documents with customize document page content & metadata")

The arguments `page_content_fields` and `metadata_fields` specify the Firestore Document fields to be written into the LangChain Document `page_content` and `metadata`.
```
loader = FirestoreLoader(
    source="foo/bar/subcol",
    page_content_fields=["data_field"],
    metadata_fields=["metadata_field"],
)

data = loader.load()
```

#### Customize Page Content Format[​](#customize-page-content-format "Direct link to Customize Page Content Format")

When the `page_content` contains only one field, the information will be the field value only. Otherwise the `page_content` will be in JSON format.

### Customize Connection & Authentication[​](#customize-connection-authentication "Direct link to Customize Connection & Authentication")

```
from google.auth import compute_engine
from google.cloud.firestore import Client

client = Client(database="non-default-db", credentials=compute_engine.Credentials())
loader = FirestoreLoader(
    source="foo",
    client=client,
)
```
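If your environment authenticates with a service-account key file rather than the Compute Engine metadata server, a similar client can be constructed. A minimal sketch, assuming a hypothetical key-file path and project ID:

```
from google.oauth2 import service_account
from google.cloud.firestore import Client

from langchain_google_firestore import FirestoreLoader

# Hypothetical key file and project; replace with your own values.
creds = service_account.Credentials.from_service_account_file("/path/to/service-account.json")
client = Client(project="my-project-id", credentials=creds, database="non-default-db")

loader = FirestoreLoader(
    source="foo",
    client=client,
)
```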
https://python.langchain.com/docs/integrations/document_loaders/google_drive/
## Google Drive

> [Google Drive](https://en.wikipedia.org/wiki/Google_Drive) is a file storage and synchronization service developed by Google.

This notebook covers how to load documents from `Google Drive`. Currently, only `Google Docs` are supported.

## Prerequisites[​](#prerequisites "Direct link to Prerequisites")

1. Create a Google Cloud project or use an existing project
2. Enable the [Google Drive API](https://console.cloud.google.com/flows/enableapi?apiid=drive.googleapis.com)
3. [Authorize credentials for desktop app](https://developers.google.com/drive/api/quickstart/python#authorize_credentials_for_a_desktop_application)
4. `pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib`

## 🧑 Instructions for ingesting your Google Docs data[​](#instructions-for-ingesting-your-google-docs-data "Direct link to 🧑 Instructions for ingesting your Google Docs data")

Set the environment variable `GOOGLE_APPLICATION_CREDENTIALS` to an empty string (`""`).

By default, the `GoogleDriveLoader` expects the `credentials.json` file to be located at `~/.credentials/credentials.json`, but this is configurable using the `credentials_path` keyword argument. The same goes for `token.json` - default path: `~/.credentials/token.json`, constructor param: `token_path`.

The first time you use `GoogleDriveLoader`, you will be presented with the consent screen in your browser for user authentication. After authentication, `token.json` will be created automatically at the provided or the default path. Also, if a `token.json` already exists at that path, you will not be prompted for authentication.

`GoogleDriveLoader` can load from a list of Google Docs document ids or a folder id. You can obtain your folder and document id from the URL:

* Folder: [https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5](https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5) -\> folder id is `"1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5"`
* Document: [https://docs.google.com/document/d/1bfaMQ18\_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit](https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit) -\> document id is `"1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw"`

```
%pip install --upgrade --quiet google-api-python-client google-auth-httplib2 google-auth-oauthlib
```

```
from langchain_community.document_loaders import GoogleDriveLoader
```

```
loader = GoogleDriveLoader(
    folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5",
    token_path="/path/where/you/want/token/to/be/created/google_token.json",
    # Optional: configure whether to recursively fetch files from subfolders. Defaults to False.
    recursive=False,
)
```

When you pass a `folder_id`, by default all files of type document, sheet and pdf are loaded. You can modify this behaviour by passing a `file_types` argument:

```
loader = GoogleDriveLoader(
    folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5",
    file_types=["document", "sheet"],
    recursive=False,
)
```

## Passing in Optional File Loaders[​](#passing-in-optional-file-loaders "Direct link to Passing in Optional File Loaders")

When processing files other than Google Docs and Google Sheets, it can be helpful to pass an optional file loader to `GoogleDriveLoader`. If you pass in a file loader, that file loader will be used on documents that do not have a Google Docs or Google Sheets MIME type. Here is an example of how to load an Excel document from Google Drive using a file loader.
```
from langchain_community.document_loaders import (
    GoogleDriveLoader,
    UnstructuredFileIOLoader,
)
```

```
file_id = "1x9WBtFPWMEAdjcJzPScRsjpjQvpSo_kz"
loader = GoogleDriveLoader(
    file_ids=[file_id],
    file_loader_cls=UnstructuredFileIOLoader,
    file_loader_kwargs={"mode": "elements"},
)
```

You can also process a folder with a mix of files and Google Docs/Sheets using the following pattern:

```
folder_id = "1asMOHY1BqBS84JcRbOag5LOJac74gpmD"
loader = GoogleDriveLoader(
    folder_id=folder_id,
    file_loader_cls=UnstructuredFileIOLoader,
    file_loader_kwargs={"mode": "elements"},
)
```

## Extended usage[​](#extended-usage "Direct link to Extended usage")

An external component can manage the complexity of Google Drive: `langchain-googledrive`. It’s compatible with `langchain_community.document_loaders.GoogleDriveLoader` and can be used in its place.

To be compatible with containers, authentication uses the environment variable `GOOGLE_ACCOUNT_FILE`, which points to the credentials file (for a user or a service account).

```
%pip install --upgrade --quiet langchain-googledrive
```

```
folder_id = "root"
# folder_id='1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5'
```

```
# Use the advanced version.
from langchain_googledrive.document_loaders import GoogleDriveLoader
```

```
loader = GoogleDriveLoader(
    folder_id=folder_id,
    recursive=False,
    num_results=2,  # Maximum number of files to load
)
```

By default, all files with these MIME types can be converted to `Document`:

* text/text
* text/plain
* text/html
* text/csv
* text/markdown
* image/png
* image/jpeg
* application/epub+zip
* application/pdf
* application/rtf
* application/vnd.google-apps.document (GDoc)
* application/vnd.google-apps.presentation (GSlide)
* application/vnd.google-apps.spreadsheet (GSheet)
* application/vnd.google.colaboratory (Notebook colab)
* application/vnd.openxmlformats-officedocument.presentationml.presentation (PPTX)
* application/vnd.openxmlformats-officedocument.wordprocessingml.document (DOCX)

It’s possible to update or customize this; see the documentation of `GDriveLoader`. However, the corresponding packages must be installed.

```
%pip install --upgrade --quiet unstructured
```

```
for doc in loader.load():
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```

### Loading auth Identities[​](#loading-auth-identities "Direct link to Loading auth Identities")

Authorized identities for each file ingested by the Google Drive Loader can be loaded along with metadata per Document.

```
from langchain_community.document_loaders import GoogleDriveLoader

loader = GoogleDriveLoader(
    folder_id=folder_id,
    load_auth=True,
    # Optional: configure whether to load authorized identities for each Document.
)

doc = loader.load()
```

You can pass `load_auth=True` to add Google Drive document access identities to the metadata.

### Customize the search pattern[​](#customize-the-search-pattern "Direct link to Customize the search pattern")

All parameters compatible with the Google [`list()`](https://developers.google.com/drive/api/v3/reference/files/list) API can be set.

To specify the new pattern of the Google request, you can use a `PromptTemplate()`. The variables for the prompt can be set with `kwargs` in the constructor. Some pre-formatted requests are proposed (use `{query}`, `{folder_id}` and/or `{mime_type}`):

You can customize the criteria to select the files.
A set of predefined filters is proposed:

| template | description |
| --- | --- |
| gdrive-all-in-folder | Return all compatible files from a `folder_id` |
| gdrive-query | Search `query` in all drives |
| gdrive-by-name | Search file with name `query` |
| gdrive-query-in-folder | Search `query` in `folder_id` (and sub-folders if `recursive=true`) |
| gdrive-mime-type | Search a specific `mime_type` |
| gdrive-mime-type-in-folder | Search a specific `mime_type` in `folder_id` |
| gdrive-query-with-mime-type | Search `query` with a specific `mime_type` |
| gdrive-query-with-mime-type-and-folder | Search `query` with a specific `mime_type` and in `folder_id` |

```
loader = GoogleDriveLoader(
    folder_id=folder_id,
    recursive=False,
    template="gdrive-query",  # Default template to use
    query="machine learning",
    num_results=2,  # Maximum number of files to load
    supportsAllDrives=False,  # GDrive `list()` parameter
)
for doc in loader.load():
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```

You can customize your pattern.

```
from langchain_core.prompts.prompt import PromptTemplate

loader = GoogleDriveLoader(
    folder_id=folder_id,
    recursive=False,
    template=PromptTemplate(
        input_variables=["query", "query_name"],
        template="fullText contains '{query}' and name contains '{query_name}' and trashed=false",
    ),  # Default template to use
    query="machine learning",
    query_name="ML",
    num_results=2,  # Maximum number of files to load
)
for doc in loader.load():
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```

The conversion can be managed in Markdown format: bullets, links, tables and titles. Set the attribute `return_link` to `True` to export links.

#### Modes for GSlide and GSheet[​](#modes-for-gslide-and-gsheet "Direct link to Modes for GSlide and GSheet")

The parameter `mode` accepts different values:

* “document”: return the body of each document
* “snippets”: return the description of each file (set in the metadata of Google Drive files).

The parameter `gslide_mode` accepts different values:

* “single”: one document with \<PAGE BREAK\>
* “slide”: one document per slide
* “elements”: one document for each element.

```
loader = GoogleDriveLoader(
    template="gdrive-mime-type",
    mime_type="application/vnd.google-apps.presentation",  # Only GSlide files
    gslide_mode="slide",
    num_results=2,  # Maximum number of files to load
)
for doc in loader.load():
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```

The parameter `gsheet_mode` accepts different values:

* `"single"`: one document per line
* `"elements"`: one document with a markdown array and \<PAGE BREAK\> tags.

```
loader = GoogleDriveLoader(
    template="gdrive-mime-type",
    mime_type="application/vnd.google-apps.spreadsheet",  # Only GSheet files
    gsheet_mode="elements",
    num_results=2,  # Maximum number of files to load
)
for doc in loader.load():
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```

### Advanced usage[​](#advanced-usage "Direct link to Advanced usage")

All Google files have a ‘description’ in their metadata. This field can be used to store a summary of the document or other indexed tags (see the method `lazy_update_description_with_summary()`).

If you use `mode="snippets"`, only the description will be used for the body. Otherwise, `metadata['summary']` holds this field.

Sometimes, a specific filter can be used to extract information from the filename or to select files matching specific criteria. You can use a filter for this.

Sometimes, many documents are returned. It’s not necessary to have all documents in memory at the same time.
You can use the lazy versions of the methods to get one document at a time.

It’s better to use a complex query than a recursive search: if you activate `recursive=True`, a separate query must be applied for each folder.

```
import os

loader = GoogleDriveLoader(
    gdrive_api_file=os.environ["GOOGLE_ACCOUNT_FILE"],
    num_results=2,
    template="gdrive-query",
    filter=lambda search, file: "#test" not in file.get("description", ""),
    query="machine learning",
    supportsAllDrives=False,
)
for doc in loader.load():
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```
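As a follow-up to the note about lazy methods above, a minimal sketch of iterating lazily (assuming the loader exposes the standard LangChain `lazy_load()` generator, and reusing the `loader` configured just above):

```
# Documents are fetched one at a time instead of being loaded all at once.
for doc in loader.lazy_load():
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```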
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:04.997Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/google_drive/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/google_drive/", "description": "Google Drive is a file", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3446", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"google_drive\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:04 GMT", "etag": "W/\"c8f743de34bd467fd1c4b30bc310b664\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::757mv-1713753544356-f9b2946e4889" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/google_drive/", "property": "og:url" }, { "content": "Google Drive | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Google Drive is a file", "property": "og:description" } ], "title": "Google Drive | 🦜️🔗 LangChain" }
## Google Drive

> Google Drive is a file storage and synchronization service developed by Google.

This notebook covers how to load documents from Google Drive. Currently, only Google Docs are supported.

### Prerequisites

1. Create a Google Cloud project or use an existing project
2. Enable the Google Drive API
3. Authorize credentials for a desktop app
4. `pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib`

### 🧑 Instructions for ingesting your Google Docs data

Set the environment variable `GOOGLE_APPLICATION_CREDENTIALS` to an empty string (`""`).

By default, the `GoogleDriveLoader` expects the `credentials.json` file to be located at `~/.credentials/credentials.json`, but this is configurable using the `credentials_path` keyword argument. The same applies to `token.json` (default path: `~/.credentials/token.json`, constructor parameter: `token_path`).

The first time you use `GoogleDriveLoader`, you will be shown a consent screen in your browser for user authentication. After authentication, `token.json` is created automatically at the provided or default path. If there is already a `token.json` at that path, you will not be prompted for authentication.

`GoogleDriveLoader` can load from a list of Google Docs document ids or a folder id. You can obtain your folder and document id from the URL:

* Folder: `https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5` -> folder id is `"1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5"`
* Document: `https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit` -> document id is `"1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw"`

```
%pip install --upgrade --quiet google-api-python-client google-auth-httplib2 google-auth-oauthlib
```

```
from langchain_community.document_loaders import GoogleDriveLoader
```

```
loader = GoogleDriveLoader(
    folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5",
    token_path="/path/where/you/want/token/to/be/created/google_token.json",
    # Optional: configure whether to recursively fetch files from subfolders. Defaults to False.
    recursive=False,
)
```

When you pass a `folder_id`, by default all files of type document, sheet and pdf are loaded. You can modify this behaviour by passing a `file_types` argument:

```
loader = GoogleDriveLoader(
    folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5",
    file_types=["document", "sheet"],
    recursive=False,
)
```

### Passing in Optional File Loaders

When processing files other than Google Docs and Google Sheets, it can be helpful to pass an optional file loader to `GoogleDriveLoader`. If you pass in a file loader, that file loader will be used on documents that do not have a Google Docs or Google Sheets MIME type.

Here is an example of how to load an Excel document from Google Drive using a file loader:

```
from langchain_community.document_loaders import (
    GoogleDriveLoader,
    UnstructuredFileIOLoader,
)
```

```
file_id = "1x9WBtFPWMEAdjcJzPScRsjpjQvpSo_kz"
loader = GoogleDriveLoader(
    file_ids=[file_id],
    file_loader_cls=UnstructuredFileIOLoader,
    file_loader_kwargs={"mode": "elements"},
)
```

You can also process a folder with a mix of files and Google Docs/Sheets using the following pattern:

```
folder_id = "1asMOHY1BqBS84JcRbOag5LOJac74gpmD"
loader = GoogleDriveLoader(
    folder_id=folder_id,
    file_loader_cls=UnstructuredFileIOLoader,
    file_loader_kwargs={"mode": "elements"},
)
```

### Extended usage

An external component, `langchain-googledrive`, can manage the complexity of Google Drive. It is compatible with `langchain_community.document_loaders.GoogleDriveLoader` and can be used in its place.

To be compatible with containers, the authentication uses the environment variable `GOOGLE_ACCOUNT_FILE`, which points to the credentials file (for a user or a service account).

```
%pip install --upgrade --quiet langchain-googledrive
```

```
folder_id = "root"
# folder_id='1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5'
```

```
# Use the advanced version.
from langchain_googledrive.document_loaders import GoogleDriveLoader
```

```
loader = GoogleDriveLoader(
    folder_id=folder_id,
    recursive=False,
    num_results=2,  # Maximum number of files to load
)
```

By default, all files with the following MIME types can be converted to `Document`:

* text/text
* text/plain
* text/html
* text/csv
* text/markdown
* image/png
* image/jpeg
* application/epub+zip
* application/pdf
* application/rtf
* application/vnd.google-apps.document (GDoc)
* application/vnd.google-apps.presentation (GSlide)
* application/vnd.google-apps.spreadsheet (GSheet)
* application/vnd.google.colaboratory (Notebook colab)
* application/vnd.openxmlformats-officedocument.presentationml.presentation (PPTX)
* application/vnd.openxmlformats-officedocument.wordprocessingml.document (DOCX)

It is possible to update or customize this; see the documentation of `GDriveLoader`. The corresponding packages must be installed.

```
%pip install --upgrade --quiet unstructured
```

```
for doc in loader.load():
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```

### Loading auth Identities

Authorized identities for each file ingested by the Google Drive Loader can be loaded along with metadata per Document.

```
from langchain_community.document_loaders import GoogleDriveLoader

loader = GoogleDriveLoader(
    folder_id=folder_id,
    load_auth=True,  # Optional: configure whether to load authorized identities for each Document.
)

doc = loader.load()
```

You can pass `load_auth=True` to add the Google Drive document access identities to the metadata.

### Customize the search pattern

All parameters compatible with the Google `list()` API can be set.

To specify the new pattern of the Google request, you can use a `PromptTemplate()`. The variables for the prompt can be set with `kwargs` in the constructor. Some pre-formatted requests are provided (use `{query}`, `{folder_id}` and/or `{mime_type}`), so you can customize the criteria used to select the files. A set of predefined templates is available:

| template | description |
| --- | --- |
| gdrive-all-in-folder | Return all compatible files from a folder_id |
| gdrive-query | Search query in all drives |
| gdrive-by-name | Search file with name query |
| gdrive-query-in-folder | Search query in folder_id (and sub-folders if recursive=true) |
| gdrive-mime-type | Search a specific mime_type |
| gdrive-mime-type-in-folder | Search a specific mime_type in folder_id |
| gdrive-query-with-mime-type | Search query with a specific mime_type |
| gdrive-query-with-mime-type-and-folder | Search query with a specific mime_type and in folder_id |

```
loader = GoogleDriveLoader(
    folder_id=folder_id,
    recursive=False,
    template="gdrive-query",  # Default template to use
    query="machine learning",
    num_results=2,  # Maximum number of files to load
    supportsAllDrives=False,  # GDrive `list()` parameter
)

for doc in loader.load():
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```

You can also customize your own pattern:

```
from langchain_core.prompts.prompt import PromptTemplate

loader = GoogleDriveLoader(
    folder_id=folder_id,
    recursive=False,
    template=PromptTemplate(
        input_variables=["query", "query_name"],
        template="fullText contains '{query}' and name contains '{query_name}' and trashed=false",
    ),  # Default template to use
    query="machine learning",
    query_name="ML",
    num_results=2,  # Maximum number of files to load
)

for doc in loader.load():
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```

The conversion to Markdown can handle:

* bullets
* links
* tables
* titles

Set the attribute `return_link` to `True` to export links.

### Modes for GSlide and GSheet

The parameter `mode` accepts different values:

* "document": return the body of each document
* "snippets": return the description of each file (set in the metadata of Google Drive files).

The parameter `gslide_mode` accepts different values:

* "single": one document with `<PAGE BREAK>`
* "slide": one document per slide
* "elements": one document for each element.

```
loader = GoogleDriveLoader(
    template="gdrive-mime-type",
    mime_type="application/vnd.google-apps.presentation",  # Only GSlide files
    gslide_mode="slide",
    num_results=2,  # Maximum number of files to load
)

for doc in loader.load():
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```

The parameter `gsheet_mode` accepts different values:

* "single": generate one document per line
* "elements": one document with a markdown array and `<PAGE BREAK>` tags.

```
loader = GoogleDriveLoader(
    template="gdrive-mime-type",
    mime_type="application/vnd.google-apps.spreadsheet",  # Only GSheet files
    gsheet_mode="elements",
    num_results=2,  # Maximum number of files to load
)

for doc in loader.load():
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```

### Advanced usage

All Google Drive files have a 'description' field in their metadata. This field can be used to store a summary of the document or other indexed tags (see the method `lazy_update_description_with_summary()`). If you use `mode="snippets"`, only the description will be used for the body; otherwise, `metadata['summary']` contains the field.

Sometimes a specific filter is needed, for example to extract information from the filename or to select files matching specific criteria. You can use a `filter` for this.

Sometimes many documents are returned, and it is not necessary to have all of them in memory at the same time: use the lazy versions of the methods to get one document at a time.

It is better to use a complex query in place of a recursive search: if you activate `recursive=True`, a query must be applied for each folder.

```
import os

loader = GoogleDriveLoader(
    gdrive_api_file=os.environ["GOOGLE_ACCOUNT_FILE"],
    num_results=2,
    template="gdrive-query",
    filter=lambda search, file: "#test" not in file.get("description", ""),
    query="machine learning",
    supportsAllDrives=False,
)

for doc in loader.load():
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```
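The templates, lazy loading, and container-friendly authentication are described separately above; the following is a minimal sketch (not from the original page) that combines them. The folder id is a placeholder, and `lazy_load` is assumed to be the lazy variant of `load` mentioned above.

```
import os

from langchain_googledrive.document_loaders import GoogleDriveLoader

loader = GoogleDriveLoader(
    gdrive_api_file=os.environ["GOOGLE_ACCOUNT_FILE"],  # credentials file, as described above
    template="gdrive-query-in-folder",  # predefined template: query restricted to a folder
    folder_id="<YOUR_FOLDER_ID>",  # placeholder
    query="machine learning",
    num_results=5,  # maximum number of files to load
    recursive=False,
)

# Lazy iteration keeps only one document in memory at a time.
for doc in loader.lazy_load():
    print(doc.metadata)
    print(doc.page_content.strip()[:60] + "...")
```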
https://python.langchain.com/docs/integrations/document_loaders/google_cloud_storage_file/
## Google Cloud Storage File

> [Google Cloud Storage](https://en.wikipedia.org/wiki/Google_Cloud_Storage) is a managed service for storing unstructured data.

This covers how to load document objects from a `Google Cloud Storage (GCS)` file object (blob).

```
%pip install --upgrade --quiet google-cloud-storage
```

```
from langchain_community.document_loaders import GCSFileLoader
```

```
loader = GCSFileLoader(project_name="aist", bucket="testing-hwc", blob="fake.docx")
```

```
/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
  warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)
```

```
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmp3srlf8n8/fake.docx'}, lookup_index=0)]
```

If you want to use an alternative loader, you can provide a custom function, for example:

```
from langchain_community.document_loaders import PyPDFLoader


def load_pdf(file_path):
    return PyPDFLoader(file_path)


loader = GCSFileLoader(
    project_name="aist", bucket="testing-hwc", blob="fake.pdf", loader_func=load_pdf
)
```
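The page shows a fixed `load_pdf` helper; below is a minimal sketch (an illustration, not from the original page) of a `loader_func` that dispatches on the file extension, so one `GCSFileLoader` configuration can handle both PDFs and other files. It assumes, as the `load_pdf` example suggests, that `loader_func` receives the local path of the downloaded blob; the project, bucket, and blob names are placeholders.

```
from langchain_community.document_loaders import (
    GCSFileLoader,
    PyPDFLoader,
    UnstructuredFileLoader,
)


def pick_loader(file_path: str):
    # Dispatch on the downloaded file's extension (a sketch; extend as needed).
    if file_path.lower().endswith(".pdf"):
        return PyPDFLoader(file_path)
    return UnstructuredFileLoader(file_path)


loader = GCSFileLoader(
    project_name="my-project",  # placeholder project
    bucket="my-bucket",  # placeholder bucket
    blob="reports/example.pdf",  # placeholder blob
    loader_func=pick_loader,
)
docs = loader.load()  # loading is triggered explicitly by load()
```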
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:05.609Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/google_cloud_storage_file/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/google_cloud_storage_file/", "description": "[Google Cloud", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3447", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"google_cloud_storage_file\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:05 GMT", "etag": "W/\"0f1d48fd571178e2f83ae46541bb2640\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::mmd2j-1713753545044-eef6f4d9ba96" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/google_cloud_storage_file/", "property": "og:url" }, { "content": "Google Cloud Storage File | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "[Google Cloud", "property": "og:description" } ], "title": "Google Cloud Storage File | 🦜️🔗 LangChain" }
https://python.langchain.com/docs/integrations/document_loaders/google_spanner/
## Google Spanner

> [Spanner](https://cloud.google.com/spanner) is a highly scalable database that combines unlimited scalability with relational semantics, such as secondary indexes, strong consistency, schemas, and SQL, providing 99.999% availability in one easy solution.

This notebook goes over how to use [Spanner](https://cloud.google.com/spanner) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `SpannerLoader` and `SpannerDocumentSaver`.

Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-spanner-python/).

[![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-spanner-python/blob/main/docs/document_loader.ipynb) Open In Colab

## Before You Begin[​](#before-you-begin "Direct link to Before You Begin")

To run this notebook, you will need to do the following:

* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)
* [Enable the Cloud Spanner API](https://console.cloud.google.com/flows/enableapi?apiid=spanner.googleapis.com)
* [Create a Spanner instance](https://cloud.google.com/spanner/docs/create-manage-instances)
* [Create a Spanner database](https://cloud.google.com/spanner/docs/create-manage-databases)
* [Create a Spanner table](https://cloud.google.com/spanner/docs/create-query-database-console#create-schema)

After confirming access to the database in the runtime environment of this notebook, fill in the following values and run the cell before running the example scripts.

```
# @markdown Please specify an instance id, a database, and a table for demo purpose.
INSTANCE_ID = "test_instance"  # @param {type:"string"}
DATABASE_ID = "test_database"  # @param {type:"string"}
TABLE_NAME = "test_table"  # @param {type:"string"}
```

### 🦜🔗 Library Installation[​](#library-installation "Direct link to 🦜🔗 Library Installation")

The integration lives in its own `langchain-google-spanner` package, so we need to install it.

```
%pip install --upgrade --quiet langchain-google-spanner langchain
```

**Colab only**: Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top.

```
# # Automatically restart kernel after installs so that your environment can access the new packages
# import IPython

# app = IPython.Application.instance()
# app.kernel.do_shutdown(True)
```

### ☁ Set Your Google Cloud Project[​](#set-your-google-cloud-project "Direct link to ☁ Set Your Google Cloud Project")

Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.

If you don’t know your project ID, try the following:

* Run `gcloud config list`.
* Run `gcloud projects list`.
* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113).

```
# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.
PROJECT_ID = "my-project-id"  # @param {type:"string"}

# Set the project id
!gcloud config set project {PROJECT_ID}
```

### 🔐 Authentication[​](#authentication "Direct link to 🔐 Authentication")

Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.

* If you are using Colab to run this notebook, use the cell below and continue.
* If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env).

```
from google.colab import auth

auth.authenticate_user()
```

## Basic Usage[​](#basic-usage "Direct link to Basic Usage")

### Save documents[​](#save-documents "Direct link to Save documents")

Save langchain documents with `SpannerDocumentSaver.add_documents(<documents>)`. To initialize the `SpannerDocumentSaver` class you need to provide 3 things:

1. `instance_id` - An instance of Spanner to load data from.
2. `database_id` - An instance of Spanner database to load data from.
3. `table_name` - The name of the table within the Spanner database to store langchain documents.

```
from langchain_core.documents import Document
from langchain_google_spanner import SpannerDocumentSaver

test_docs = [
    Document(
        page_content="Apple Granny Smith 150 0.99 1",
        metadata={"fruit_id": 1},
    ),
    Document(
        page_content="Banana Cavendish 200 0.59 0",
        metadata={"fruit_id": 2},
    ),
    Document(
        page_content="Orange Navel 80 1.29 1",
        metadata={"fruit_id": 3},
    ),
]

saver = SpannerDocumentSaver(
    instance_id=INSTANCE_ID,
    database_id=DATABASE_ID,
    table_name=TABLE_NAME,
)
saver.add_documents(test_docs)
```

### Querying for Documents from Spanner[​](#querying-for-documents-from-spanner "Direct link to Querying for Documents from Spanner")

For more details on connecting to a Spanner table, please check the [Python SDK documentation](https://cloud.google.com/python/docs/reference/spanner/latest).

#### Load documents from table[​](#load-documents-from-table "Direct link to Load documents from table")

Load langchain documents with `SpannerLoader.load()` or `SpannerLoader.lazy_load()`. `lazy_load` returns a generator that only queries the database during iteration. To initialize the `SpannerLoader` class you need to provide:

1. `instance_id` - An instance of Spanner to load data from.
2. `database_id` - An instance of Spanner database to load data from.
3. `query` - A query of the database dialect.

```
from langchain_google_spanner import SpannerLoader

query = f"SELECT * from {TABLE_NAME}"
loader = SpannerLoader(
    instance_id=INSTANCE_ID,
    database_id=DATABASE_ID,
    query=query,
)

for doc in loader.lazy_load():
    print(doc)
    break
```

### Delete documents[​](#delete-documents "Direct link to Delete documents")

Delete a list of langchain documents from the table with `SpannerDocumentSaver.delete(<documents>)`.

```
docs = loader.load()
print("Documents before delete:", docs)

doc = test_docs[0]
saver.delete([doc])
print("Documents after delete:", loader.load())
```

## Advanced Usage[​](#advanced-usage "Direct link to Advanced Usage")

### Custom client[​](#custom-client "Direct link to Custom client")

The client created by default is the default client. To pass in `credentials` and `project` explicitly, a custom client can be passed to the constructor.

```
from google.cloud import spanner
from google.oauth2 import service_account

creds = service_account.Credentials.from_service_account_file("/path/to/key.json")
custom_client = spanner.Client(project="my-project", credentials=creds)
loader = SpannerLoader(
    INSTANCE_ID,
    DATABASE_ID,
    query,
    client=custom_client,
)
```

### Customize Document Page Content & Metadata[​](#customize-document-page-content-metadata "Direct link to Customize Document Page Content & Metadata")

The loader returns a list of Documents, with the page content taken from specific data columns. All other data columns are added to the metadata. Each row becomes a document.
#### Customize page content format[​](#customize-page-content-format "Direct link to Customize page content format")

The SpannerLoader assumes there is a column called `page_content`. These defaults can be changed like so:

```
custom_content_loader = SpannerLoader(
    INSTANCE_ID, DATABASE_ID, query, content_columns=["custom_content"]
)
```

If multiple columns are specified, the page content’s string format will default to `text` (space-separated string concatenation). There are other formats that the user can specify, including `text`, `JSON`, `YAML`, and `CSV`.

#### Customize metadata format[​](#customize-metadata-format "Direct link to Customize metadata format")

The SpannerLoader assumes there is a metadata column called `langchain_metadata` that stores JSON data. The metadata column will be used as the base dictionary. By default, all other column data will be added and may overwrite the original value. These defaults can be changed like so:

```
custom_metadata_loader = SpannerLoader(
    INSTANCE_ID, DATABASE_ID, query, metadata_columns=["column1", "column2"]
)
```

#### Customize JSON metadata column name[​](#customize-json-metadata-column-name "Direct link to Customize JSON metadata column name")

By default, the loader uses `langchain_metadata` as the base dictionary. This can be customized to select a different JSON column to use as the base dictionary for the Document’s metadata.

```
custom_metadata_json_loader = SpannerLoader(
    INSTANCE_ID, DATABASE_ID, query, metadata_json_column="another-json-column"
)
```

### Custom staleness[​](#custom-staleness "Direct link to Custom staleness")

The default [staleness](https://cloud.google.com/python/docs/reference/spanner/latest/snapshot-usage#beginning-a-snapshot) is 15s. This can be customized by specifying a weaker bound: either perform all reads as of a given timestamp, or as of a given duration in the past.

```
import datetime

timestamp = datetime.datetime.utcnow()
custom_timestamp_loader = SpannerLoader(
    INSTANCE_ID,
    DATABASE_ID,
    query,
    staleness=timestamp,
)
```

```
duration = 20.0
custom_duration_loader = SpannerLoader(
    INSTANCE_ID,
    DATABASE_ID,
    query,
    staleness=duration,
)
```

### Turn on data boost[​](#turn-on-data-boost "Direct link to Turn on data boost")

By default, the loader will not use [data boost](https://cloud.google.com/spanner/docs/databoost/databoost-overview), since it has additional costs associated and requires additional IAM permissions. However, the user can choose to turn it on.

```
custom_databoost_loader = SpannerLoader(
    INSTANCE_ID,
    DATABASE_ID,
    query,
    databoost=True,
)
```

### Custom client[​](#custom-client-1 "Direct link to Custom client")

The client created by default is the default client. To pass in `credentials` and `project` explicitly, a custom client can be passed to the constructor.

```
from google.cloud import spanner

custom_client = spanner.Client(project="my-project", credentials=creds)
saver = SpannerDocumentSaver(
    INSTANCE_ID,
    DATABASE_ID,
    TABLE_NAME,
    client=custom_client,
)
```

### Custom initialization for SpannerDocumentSaver[​](#custom-initialization-for-spannerdocumentsaver "Direct link to Custom initialization for SpannerDocumentSaver")

The SpannerDocumentSaver allows custom initialization. This allows the user to specify how the Document is saved into the table.

* `content_column`: This will be used as the column name for the Document’s page content. Defaults to `page_content`.
* `metadata_columns`: These metadata fields will be saved into specific columns if the key exists in the Document’s metadata.
* `metadata_json_column`: This will be the column name for the special JSON column. Defaults to `langchain_metadata`.

```
custom_saver = SpannerDocumentSaver(
    INSTANCE_ID,
    DATABASE_ID,
    TABLE_NAME,
    content_column="my-content",
    metadata_columns=["foo"],
    metadata_json_column="my-special-json-column",
)
```

### Initialize custom schema for Spanner[​](#initialize-custom-schema-for-spanner "Direct link to Initialize custom schema for Spanner")

The SpannerDocumentSaver has an `init_document_table` method to create a new table to store docs with a custom schema.

```
from langchain_google_spanner import Column

new_table_name = "my_new_table"

SpannerDocumentSaver.init_document_table(
    INSTANCE_ID,
    DATABASE_ID,
    new_table_name,
    content_column="my-page-content",
    metadata_columns=[
        Column("category", "STRING(36)", True),
        Column("price", "FLOAT64", False),
    ],
)
```
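Pulling the pieces above together, here is a minimal end-to-end sketch (an illustration, not from the original page): create a table with a custom schema, save one document, and read everything back with a query. The instance, database, and table names are placeholders, and the table is assumed not to exist yet.

```
from langchain_core.documents import Document
from langchain_google_spanner import Column, SpannerDocumentSaver, SpannerLoader

INSTANCE_ID = "test_instance"  # placeholder
DATABASE_ID = "test_database"  # placeholder
TABLE_NAME = "products"  # placeholder

# Create a table with a custom metadata column (see "Initialize custom schema" above).
SpannerDocumentSaver.init_document_table(
    INSTANCE_ID,
    DATABASE_ID,
    TABLE_NAME,
    metadata_columns=[Column("category", "STRING(36)", True)],
)

# Save one document; the "category" metadata key lands in the dedicated column.
saver = SpannerDocumentSaver(INSTANCE_ID, DATABASE_ID, TABLE_NAME)
saver.add_documents(
    [Document(page_content="Apple Granny Smith", metadata={"category": "fruit"})]
)

# Read everything back.
loader = SpannerLoader(INSTANCE_ID, DATABASE_ID, f"SELECT * FROM {TABLE_NAME}")
print(loader.load())
```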
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:05.914Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/google_spanner/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/google_spanner/", "description": "Spanner is a highly scalable", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4416", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"google_spanner\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:05 GMT", "etag": "W/\"c8fc7df8ea3a79c2172a5a5b80bcc427\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::swct2-1713753545015-f17408cfa4cf" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/google_spanner/", "property": "og:url" }, { "content": "Google Spanner | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Spanner is a highly scalable", "property": "og:description" } ], "title": "Google Spanner | 🦜️🔗 LangChain" }
https://python.langchain.com/docs/integrations/document_loaders/google_speech_to_text/
The `GoogleSpeechToTextLoader` allows you to transcribe audio files with the [Google Cloud Speech-to-Text API](https://cloud.google.com/speech-to-text) and loads the transcribed text into documents.

To use it, you should have the `google-cloud-speech` python package installed, and a Google Cloud project with the [Speech-to-Text API enabled](https://cloud.google.com/speech-to-text/v2/docs/transcribe-client-libraries#before_you_begin).

First, you need to install the `google-cloud-speech` python package. Follow the [quickstart guide](https://cloud.google.com/speech-to-text/v2/docs/sync-recognize) in the Google Cloud documentation to create a project and enable the API.

The `GoogleSpeechToTextLoader` must include the `project_id` and `file_path` arguments. Audio files can be specified as a Google Cloud Storage URI (`gs://...`) or a local file path.

Only synchronous requests are supported by the loader, which has a [limit of 60 seconds or 10MB](https://cloud.google.com/speech-to-text/v2/docs/sync-recognize#:~:text=60%20seconds%20and/or%2010%20MB) per audio file.

```
from langchain_community.document_loaders import GoogleSpeechToTextLoader

project_id = "<PROJECT_ID>"
file_path = "gs://cloud-samples-data/speech/audio.flac"
# or a local file path: file_path = "./audio.wav"

loader = GoogleSpeechToTextLoader(project_id=project_id, file_path=file_path)

docs = loader.load()
```

Note: Calling `loader.load()` blocks until the transcription is finished.

```
"How old is the Brooklyn Bridge?"
```

You can specify the `config` argument to use different speech recognition models and enable specific features.

If you don’t specify a `config`, the following options will be selected automatically:

```
from google.cloud.speech_v2 import (
    AutoDetectDecodingConfig,
    RecognitionConfig,
    RecognitionFeatures,
)
from langchain_community.document_loaders import GoogleSpeechToTextLoader

project_id = "<PROJECT_ID>"
location = "global"
recognizer_id = "<RECOGNIZER_ID>"
file_path = "./audio.wav"

config = RecognitionConfig(
    auto_decoding_config=AutoDetectDecodingConfig(),
    language_codes=["en-US"],
    model="long",
    features=RecognitionFeatures(
        enable_automatic_punctuation=False,
        profanity_filter=True,
        enable_spoken_punctuation=True,
        enable_spoken_emojis=True,
    ),
)

loader = GoogleSpeechToTextLoader(
    project_id=project_id,
    location=location,
    recognizer_id=recognizer_id,
    file_path=file_path,
    config=config,
)
```
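Because each loader handles a single `file_path`, transcribing several recordings means constructing one loader per file. A minimal sketch (not from the original page) that collects all transcripts into one list; the project id and the second URI are placeholders.

```
from langchain_community.document_loaders import GoogleSpeechToTextLoader

project_id = "<PROJECT_ID>"  # placeholder
audio_files = [
    "gs://cloud-samples-data/speech/audio.flac",
    "./audio.wav",  # local files work as well (placeholder path)
]

docs = []
for file_path in audio_files:
    loader = GoogleSpeechToTextLoader(project_id=project_id, file_path=file_path)
    docs.extend(loader.load())  # blocks until each transcription finishes

for doc in docs:
    print(doc.page_content)
```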
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:07.018Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/google_speech_to_text/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/google_speech_to_text/", "description": "The GoogleSpeechToTextLoader allows to transcribe audio files with the", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4738", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"google_speech_to_text\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:06 GMT", "etag": "W/\"cf8e08202048ba1d1b1246bd4f8a098e\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "iad1::nkbvf-1713753546940-aa91c8cbbcfa" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/google_speech_to_text/", "property": "og:url" }, { "content": "Google Speech-to-Text Audio Transcripts | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "The GoogleSpeechToTextLoader allows to transcribe audio files with the", "property": "og:description" } ], "title": "Google Speech-to-Text Audio Transcripts | 🦜️🔗 LangChain" }
https://python.langchain.com/docs/integrations/document_loaders/grobid/
GROBID is a machine learning library for extracting, parsing, and re-structuring raw documents. It is designed and expected to be used to parse academic papers, where it works particularly well. Note: if the articles supplied to Grobid are large documents (e.g. dissertations) exceeding a certain number of elements, they might not be processed. This loader uses Grobid to parse PDFs into `Documents` that retain metadata associated with the section of text. Once grobid is up-and-running you can interact as described below. Now, we can use the data loader. ``` 'Unlike Chinchilla, PaLM, or GPT-3, we only use publicly available data, making our work compatible with open-sourcing, while most existing models rely on data which is either not publicly available or undocumented (e.g."Books -2TB" or "Social media conversations").There exist some exceptions, notably OPT (Zhang et al., 2022), GPT-NeoX (Black et al., 2022), BLOOM (Scao et al., 2022) and GLM (Zeng et al., 2022), but none that are competitive with PaLM-62B or Chinchilla.' ``` ``` {'text': 'Unlike Chinchilla, PaLM, or GPT-3, we only use publicly available data, making our work compatible with open-sourcing, while most existing models rely on data which is either not publicly available or undocumented (e.g."Books -2TB" or "Social media conversations").There exist some exceptions, notably OPT (Zhang et al., 2022), GPT-NeoX (Black et al., 2022), BLOOM (Scao et al., 2022) and GLM (Zeng et al., 2022), but none that are competitive with PaLM-62B or Chinchilla.', 'para': '2', 'bboxes': "[[{'page': '1', 'x': '317.05', 'y': '509.17', 'h': '207.73', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '522.72', 'h': '220.08', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '536.27', 'h': '218.27', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '549.82', 'h': '218.65', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '563.37', 'h': '136.98', 'w': '9.46'}], [{'page': '1', 'x': '446.49', 'y': '563.37', 'h': '78.11', 'w': '9.46'}, {'page': '1', 'x': '304.69', 'y': '576.92', 'h': '138.32', 'w': '9.46'}], [{'page': '1', 'x': '447.75', 'y': '576.92', 'h': '76.66', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '590.47', 'h': '219.63', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '604.02', 'h': '218.27', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '617.56', 'h': '218.27', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '631.11', 'h': '220.18', 'w': '9.46'}]]", 'pages': "('1', '1')", 'section_title': 'Introduction', 'section_number': '1', 'paper_title': 'LLaMA: Open and Efficient Foundation Language Models', 'file_path': '/Users/31treehaus/Desktop/Papers/2302.13971.pdf'} ```
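The loader invocation that produces output like the above does not appear on this page. Below is a minimal sketch of the typical pattern, under the assumptions that a Grobid server is running locally and that the PDFs live in a placeholder folder; adjust the path and glob to your setup.

```
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import GrobidParser

# Point the loader at a folder of PDFs; "./papers/" is a placeholder path.
# GrobidParser sends each PDF to the running Grobid server for parsing.
loader = GenericLoader.from_filesystem(
    "./papers/",
    glob="*",
    suffixes=[".pdf"],
    parser=GrobidParser(segment_sentences=False),
)
docs = loader.load()

# Each Document keeps section-level metadata, as in the example output above.
print(docs[0].metadata.get("section_title"))
print(docs[0].page_content[:120])
```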
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:07.660Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/grobid/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/grobid/", "description": "GROBID is a machine learning library for extracting, parsing, and", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3449", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"grobid\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:07 GMT", "etag": "W/\"fa968c5857a49febe2452d71dd988cba\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::vlt2t-1713753547595-f48093fd298b" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/grobid/", "property": "og:url" }, { "content": "Grobid | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "GROBID is a machine learning library for extracting, parsing, and", "property": "og:description" } ], "title": "Grobid | 🦜️🔗 LangChain" }
https://python.langchain.com/docs/integrations/document_loaders/gutenberg/
## Gutenberg

> [Project Gutenberg](https://www.gutenberg.org/about/) is an online library of free eBooks.

This notebook covers how to load links to `Gutenberg` e-books into a document format that we can use downstream.

```
from langchain_community.document_loaders import GutenbergLoader
```

```
loader = GutenbergLoader("https://www.gutenberg.org/cache/epub/69972/pg69972.txt")
data = loader.load()  # fetch the e-book text into Documents
```

```
data[0].page_content[:300]
```

```
'The Project Gutenberg eBook of The changed brides, by Emma Dorothy\r\n\n\nEliza Nevitte Southworth\r\n\n\n\r\n\n\nThis eBook is for the use of anyone anywhere in the United States and\r\n\n\nmost other parts of the world at no cost and with almost no restrictions\r\n\n\nwhatsoever. You may copy it, give it away or re-u' 
```

```
{'source': 'https://www.gutenberg.org/cache/epub/69972/pg69972.txt'}
```
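Since the loader takes one plain-text URL at a time, loading several e-books is just a loop. A minimal sketch (not from the original page); the URL list is a placeholder, and the `source` metadata key is the one shown in the output above.

```
from langchain_community.document_loaders import GutenbergLoader

# Plain-text URLs from the Project Gutenberg cache; swap in the books you actually want.
book_urls = [
    "https://www.gutenberg.org/cache/epub/69972/pg69972.txt",
]

all_docs = []
for url in book_urls:
    all_docs.extend(GutenbergLoader(url).load())

for doc in all_docs:
    print(doc.metadata["source"], len(doc.page_content), "characters")
```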
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:08.440Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/gutenberg/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/gutenberg/", "description": "Project Gutenberg is an online", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3450", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"gutenberg\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:08 GMT", "etag": "W/\"fe60eb0e4ed4ed7b4150cc0b740051a8\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::w5r7l-1713753548369-aace5cf44084" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/gutenberg/", "property": "og:url" }, { "content": "Gutenberg | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Project Gutenberg is an online", "property": "og:description" } ], "title": "Gutenberg | 🦜️🔗 LangChain" }
https://python.langchain.com/docs/integrations/document_loaders/hugging_face_dataset/
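The page below shows sample documents but not the loader call that produces them. Here is a minimal sketch of how such documents would typically be loaded; the dataset name (`imdb`) and text column (`text`) are inferred from the samples that follow, so treat them as assumptions rather than part of the original page.

```
from langchain_community.document_loaders import HuggingFaceDatasetLoader

# Assumed dataset and column, based on the IMDB-style reviews with a "label" metadata
# field shown below.
dataset_name = "imdb"
page_content_column = "text"

loader = HuggingFaceDatasetLoader(dataset_name, page_content_column)
docs = loader.load()

print(docs[0].page_content[:100])
print(docs[0].metadata)
```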
This notebook shows how to load `Hugging Face Hub` datasets to LangChain. ``` [Document(page_content='I rented I AM CURIOUS-YELLOW from my video store because of all the controversy that surrounded it when it was first released in 1967. I also heard that at first it was seized by U.S. customs if it ever tried to enter this country, therefore being a fan of films considered "controversial" I really had to see this for myself.<br /><br />The plot is centered around a young Swedish drama student named Lena who wants to learn everything she can about life. In particular she wants to focus her attentions to making some sort of documentary on what the average Swede thought about certain political issues such as the Vietnam War and race issues in the United States. In between asking politicians and ordinary denizens of Stockholm about their opinions on politics, she has sex with her drama teacher, classmates, and married men.<br /><br />What kills me about I AM CURIOUS-YELLOW is that 40 years ago, this was considered pornographic. Really, the sex and nudity scenes are few and far between, even then it\'s not shot like some cheaply made porno. While my countrymen mind find it shocking, in reality sex and nudity are a major staple in Swedish cinema. Even Ingmar Bergman, arguably their answer to good old boy John Ford, had sex scenes in his films.<br /><br />I do commend the filmmakers for the fact that any sex shown in the film is shown for artistic purposes rather than just to shock people and make money to be shown in pornographic theaters in America. I AM CURIOUS-YELLOW is a good film for anyone wanting to study the meat and potatoes (no pun intended) of Swedish cinema. But really, this film doesn\'t have much of a plot.', metadata={'label': 0}), Document(page_content='"I Am Curious: Yellow" is a risible and pretentious steaming pile. It doesn\'t matter what one\'s political views are because this film can hardly be taken seriously on any level. As for the claim that frontal male nudity is an automatic NC-17, that isn\'t true. I\'ve seen R-rated films with male nudity. Granted, they only offer some fleeting views, but where are the R-rated films with gaping vulvas and flapping labia? Nowhere, because they don\'t exist. The same goes for those crappy cable shows: schlongs swinging in the breeze but not a clitoris in sight. And those pretentious indie movies like The Brown Bunny, in which we\'re treated to the site of Vincent Gallo\'s throbbing johnson, but not a trace of pink visible on Chloe Sevigny. Before crying (or implying) "double-standard" in matters of nudity, the mentally obtuse should take into account one unavoidably obvious anatomical difference between men and women: there are no genitals on display when actresses appears nude, and the same cannot be said for a man. In fact, you generally won\'t see female genitals in an American film in anything short of porn or explicit erotica. This alleged double-standard is less a double standard than an admittedly depressing ability to come to terms culturally with the insides of women\'s bodies.', metadata={'label': 0}), Document(page_content="If only to avoid making this type of film in the future. This film is interesting as an experiment but tells no cogent story.<br /><br />One might feel virtuous for sitting thru it because it touches on so many IMPORTANT issues but it does so without any discernable motive. 
The viewer comes away with no new perspectives (unless one comes up with one while one's mind wanders, as it will invariably do during this pointless film).<br /><br />One might better spend one's time staring out a window at a tree growing.<br /><br />", metadata={'label': 0}), Document(page_content="This film was probably inspired by Godard's Masculin, féminin and I urge you to see that film instead.<br /><br />The film has two strong elements and those are, (1) the realistic acting (2) the impressive, undeservedly good, photo. Apart from that, what strikes me most is the endless stream of silliness. Lena Nyman has to be most annoying actress in the world. She acts so stupid and with all the nudity in this film,...it's unattractive. Comparing to Godard's film, intellectuality has been replaced with stupidity. Without going too far on this subject, I would say that follows from the difference in ideals between the French and the Swedish society.<br /><br />A movie of its time, and place. 2/10.", metadata={'label': 0}), Document(page_content='Oh, brother...after hearing about this ridiculous film for umpteen years all I can think of is that old Peggy Lee song..<br /><br />"Is that all there is??" ...I was just an early teen when this smoked fish hit the U.S. I was too young to get in the theater (although I did manage to sneak into "Goodbye Columbus"). Then a screening at a local film museum beckoned - Finally I could see this film, except now I was as old as my parents were when they schlepped to see it!!<br /><br />The ONLY reason this film was not condemned to the anonymous sands of time was because of the obscenity case sparked by its U.S. release. MILLIONS of people flocked to this stinker, thinking they were going to see a sex film...Instead, they got lots of closeups of gnarly, repulsive Swedes, on-street interviews in bland shopping malls, asinie political pretension...and feeble who-cares simulated sex scenes with saggy, pale actors.<br /><br />Cultural icon, holy grail, historic artifact..whatever this thing was, shred it, burn it, then stuff the ashes in a lead box!<br /><br />Elite esthetes still scrape to find value in its boring pseudo revolutionary political spewings..But if it weren\'t for the censorship scandal, it would have been ignored, then forgotten.<br /><br />Instead, the "I Am Blank, Blank" rhythymed title was repeated endlessly for years as a titilation for porno films (I am Curious, Lavender - for gay films, I Am Curious, Black - for blaxploitation films, etc..) and every ten years or so the thing rises from the dead, to be viewed by a new generation of suckers who want to see that "naughty sex film" that "revolutionized the film industry"...<br /><br />Yeesh, avoid like the plague..Or if you MUST see it - rent the video and fast forward to the "dirty" parts, just to get it over with.<br /><br />', metadata={'label': 0}), Document(page_content="I would put this at the top of my list of films in the category of unwatchable trash! There are films that are bad, but the worst kind are the ones that are unwatchable but you are suppose to like them because they are supposed to be good for you! The sex sequences, so shocking in its day, couldn't even arouse a rabbit. The so called controversial politics is strictly high school sophomore amateur night Marxism. The film is self-consciously arty in the worst sense of the term. The photography is in a harsh grainy black and white. Some scenes are out of focus or taken from the wrong angle. Even the sound is bad! 
And some people call this art?<br /><br />", metadata={'label': 0}), Document(page_content="Whoever wrote the screenplay for this movie obviously never consulted any books about Lucille Ball, especially her autobiography. I've never seen so many mistakes in a biopic, ranging from her early years in Celoron and Jamestown to her later years with Desi. I could write a whole list of factual errors, but it would go on for pages. In all, I believe that Lucille Ball is one of those inimitable people who simply cannot be portrayed by anyone other than themselves. If I were Lucie Arnaz and Desi, Jr., I would be irate at how many mistakes were made in this film. The filmmakers tried hard, but the movie seems awfully sloppy to me.", metadata={'label': 0}), Document(page_content='When I first saw a glimpse of this movie, I quickly noticed the actress who was playing the role of Lucille Ball. Rachel York\'s portrayal of Lucy is absolutely awful. Lucille Ball was an astounding comedian with incredible talent. To think about a legend like Lucille Ball being portrayed the way she was in the movie is horrendous. I cannot believe out of all the actresses in the world who could play a much better Lucy, the producers decided to get Rachel York. She might be a good actress in other roles but to play the role of Lucille Ball is tough. It is pretty hard to find someone who could resemble Lucille Ball, but they could at least find someone a bit similar in looks and talent. If you noticed York\'s portrayal of Lucy in episodes of I Love Lucy like the chocolate factory or vitavetavegamin, nothing is similar in any way-her expression, voice, or movement.<br /><br />To top it all off, Danny Pino playing Desi Arnaz is horrible. Pino does not qualify to play as Ricky. He\'s small and skinny, his accent is unreal, and once again, his acting is unbelievable. Although Fred and Ethel were not similar either, they were not as bad as the characters of Lucy and Ricky.<br /><br />Overall, extremely horrible casting and the story is badly told. If people want to understand the real life situation of Lucille Ball, I suggest watching A&E Biography of Lucy and Desi, read the book from Lucille Ball herself, or PBS\' American Masters: Finding Lucy. If you want to see a docudrama, "Before the Laughter" would be a better choice. The casting of Lucille Ball and Desi Arnaz in "Before the Laughter" is much better compared to this. At least, a similar aspect is shown rather than nothing.', metadata={'label': 0}), Document(page_content='Who are these "They"- the actors? the filmmakers? Certainly couldn\'t be the audience- this is among the most air-puffed productions in existence. It\'s the kind of movie that looks like it was a lot of fun to shoot\x97 TOO much fun, nobody is getting any actual work done, and that almost always makes for a movie that\'s no fun to watch.<br /><br />Ritter dons glasses so as to hammer home his character\'s status as a sort of doppleganger of the bespectacled Bogdanovich; the scenes with the breezy Ms. Stratten are sweet, but have an embarrassing, look-guys-I\'m-dating-the-prom-queen feel to them. Ben Gazzara sports his usual cat\'s-got-canary grin in a futile attempt to elevate the meager plot, which requires him to pursue Audrey Hepburn with all the interest of a narcoleptic at an insomnia clinic. In the meantime, the budding couple\'s respective children (nepotism alert: Bogdanovich\'s daughters) spew cute and pick up some fairly disturbing pointers on \'love\' while observing their parents. (Ms. 
Hepburn, drawing on her dignity, manages to rise above the proceedings- but she has the monumental challenge of playing herself, ostensibly.) Everybody looks great, but so what? It\'s a movie and we can expect that much, if that\'s what you\'re looking for you\'d be better off picking up a copy of Vogue.<br /><br />Oh- and it has to be mentioned that Colleen Camp thoroughly annoys, even apart from her singing, which, while competent, is wholly unconvincing... the country and western numbers are woefully mismatched with the standards on the soundtrack. Surely this is NOT what Gershwin (who wrote the song from which the movie\'s title is derived) had in mind; his stage musicals of the 20\'s may have been slight, but at least they were long on charm. "They All Laughed" tries to coast on its good intentions, but nobody- least of all Peter Bogdanovich - has the good sense to put on the brakes.<br /><br />Due in no small part to the tragic death of Dorothy Stratten, this movie has a special place in the heart of Mr. Bogdanovich- he even bought it back from its producers, then distributed it on his own and went bankrupt when it didn\'t prove popular. His rise and fall is among the more sympathetic and tragic of Hollywood stories, so there\'s no joy in criticizing the film... there _is_ real emotional investment in Ms. Stratten\'s scenes. But "Laughed" is a faint echo of "The Last Picture Show", "Paper Moon" or "What\'s Up, Doc"- following "Daisy Miller" and "At Long Last Love", it was a thundering confirmation of the phase from which P.B. has never emerged.<br /><br />All in all, though, the movie is harmless, only a waste of rental. I want to watch people having a good time, I\'ll go to the park on a sunny day. For filmic expressions of joy and love, I\'ll stick to Ernest Lubitsch and Jaques Demy...', metadata={'label': 0}), Document(page_content="This is said to be a personal film for Peter Bogdonavitch. He based it on his life but changed things around to fit the characters, who are detectives. These detectives date beautiful models and have no problem getting them. Sounds more like a millionaire playboy filmmaker than a detective, doesn't it? This entire movie was written by Peter, and it shows how out of touch with real people he was. You're supposed to write what you know, and he did that, indeed. And leaves the audience bored and confused, and jealous, for that matter. This is a curio for people who want to see Dorothy Stratten, who was murdered right after filming. But Patti Hanson, who would, in real life, marry Keith Richards, was also a model, like Stratten, but is a lot better and has a more ample part. In fact, Stratten's part seemed forced; added. She doesn't have a lot to do with the story, which is pretty convoluted to begin with. All in all, every character in this film is somebody that very few people can relate with, unless you're millionaire from Manhattan with beautiful supermodels at your beckon call. For the rest of us, it's an irritating snore fest. That's what happens when you're out of touch. You entertain your few friends with inside jokes, and bore all the rest.", metadata={'label': 0}), Document(page_content='It was great to see some of my favorite stars of 30 years ago including John Ritter, Ben Gazarra and Audrey Hepburn. They looked quite wonderful. But that was it. They were not given any characters or good lines to work with. 
I neither understood or cared what the characters were doing.<br /><br />Some of the smaller female roles were fine, Patty Henson and Colleen Camp were quite competent and confident in their small sidekick parts. They showed some talent and it is sad they didn\'t go on to star in more and better films. Sadly, I didn\'t think Dorothy Stratten got a chance to act in this her only important film role.<br /><br />The film appears to have some fans, and I was very open-minded when I started watching it. I am a big Peter Bogdanovich fan and I enjoyed his last movie, "Cat\'s Meow" and all his early ones from "Targets" to "Nickleodeon". So, it really surprised me that I was barely able to keep awake watching this one.<br /><br />It is ironic that this movie is about a detective agency where the detectives and clients get romantically involved with each other. Five years later, Bogdanovich\'s ex-girlfriend, Cybil Shepherd had a hit television series called "Moonlighting" stealing the story idea from Bogdanovich. Of course, there was a great difference in that the series relied on tons of witty dialogue, while this tries to make do with slapstick and a few screwball lines.<br /><br />Bottom line: It ain\'t no "Paper Moon" and only a very pale version of "What\'s Up, Doc".', metadata={'label': 0}), Document(page_content="I can't believe that those praising this movie herein aren't thinking of some other film. I was prepared for the possibility that this would be awful, but the script (or lack thereof) makes for a film that's also pointless. On the plus side, the general level of craft on the part of the actors and technical crew is quite competent, but when you've got a sow's ear to work with you can't make a silk purse. Ben G fans should stick with just about any other movie he's been in. Dorothy S fans should stick to Galaxina. Peter B fans should stick to Last Picture Show and Target. Fans of cheap laughs at the expense of those who seem to be asking for it should stick to Peter B's amazingly awful book, Killing of the Unicorn.", metadata={'label': 0}), Document(page_content='Never cast models and Playboy bunnies in your films! Bob Fosse\'s "Star 80" about Dorothy Stratten, of whom Bogdanovich was obsessed enough to have married her SISTER after her murder at the hands of her low-life husband, is a zillion times more interesting than Dorothy herself on the silver screen. Patty Hansen is no actress either..I expected to see some sort of lost masterpiece a la Orson Welles but instead got Audrey Hepburn cavorting in jeans and a god-awful "poodlesque" hair-do....Very disappointing...."Paper Moon" and "The Last Picture Show" I could watch again and again. This clunker I could barely sit through once. This movie was reputedly not released because of the brouhaha surrounding Ms. Stratten\'s tawdry death; I think the real reason was because it was so bad!', metadata={'label': 0}), Document(page_content="Its not the cast. A finer group of actors, you could not find. Its not the setting. The director is in love with New York City, and by the end of the film, so are we all! Woody Allen could not improve upon what Bogdonovich has done here. If you are going to fall in love, or find love, Manhattan is the place to go. No, the problem with the movie is the script. There is none. The actors fall in love at first sight, words are unnecessary. In the director's own experience in Hollywood that is what happens when they go to work on the set. 
It is reality to him, and his peers, but it is a fantasy to most of us in the real world. So, in the end, the movie is hollow, and shallow, and message-less.", metadata={'label': 0}), Document(page_content='Today I found "They All Laughed" on VHS on sale in a rental. It was a really old and very used VHS, I had no information about this movie, but I liked the references listed on its cover: the names of Peter Bogdanovich, Audrey Hepburn, John Ritter and specially Dorothy Stratten attracted me, the price was very low and I decided to risk and buy it. I searched IMDb, and the User Rating of 6.0 was an excellent reference. I looked in "Mick Martin & Marsha Porter Video & DVD Guide 2003" and \x96 wow \x96 four stars! So, I decided that I could not waste more time and immediately see it. Indeed, I have just finished watching "They All Laughed" and I found it a very boring overrated movie. The characters are badly developed, and I spent lots of minutes to understand their roles in the story. The plot is supposed to be funny (private eyes who fall in love for the women they are chasing), but I have not laughed along the whole story. The coincidences, in a huge city like New York, are ridiculous. Ben Gazarra as an attractive and very seductive man, with the women falling for him as if her were a Brad Pitt, Antonio Banderas or George Clooney, is quite ridiculous. In the end, the greater attractions certainly are the presence of the Playboy centerfold and playmate of the year Dorothy Stratten, murdered by her husband pretty after the release of this movie, and whose life was showed in "Star 80" and "Death of a Centerfold: The Dorothy Stratten Story"; the amazing beauty of the sexy Patti Hansen, the future Mrs. Keith Richards; the always wonderful, even being fifty-two years old, Audrey Hepburn; and the song "Amigo", from Roberto Carlos. Although I do not like him, Roberto Carlos has been the most popular Brazilian singer since the end of the 60\'s and is called by his fans as "The King". I will keep this movie in my collection only because of these attractions (manly Dorothy Stratten). My vote is four.<br /><br />Title (Brazil): "Muito Riso e Muita Alegria" ("Many Laughs and Lots of Happiness")', metadata={'label': 0})] ``` ``` Found cached dataset tweet_evalUsing embedded DuckDB without persistence: data will be transient ``` ``` 0%| | 0/3 [00:00<?, ?it/s] ``` ``` ' The most used hashtags in this context are #UKClimate2015, #Sustainability, #TakeDownTheFlag, #LoveWins, #CSOTA, #ClimateSummitoftheAmericas, #SM, and #SocialMedia.' ```
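The document list above comes from the IMDb reviews dataset, and the closing hashtag answer from a climate-stance subset of tweet_eval. As a rough, illustrative sketch of pulling such rows in as LangChain documents with `HuggingFaceDatasetLoader` (the `"imdb"` and `"tweet_eval"` names match the outputs above; the `"stance_climate"` config name is an assumption based on the hashtags in the answer):

```python
from langchain_community.document_loaders import HuggingFaceDatasetLoader

# IMDb reviews: the "text" column becomes page_content, remaining columns
# (such as "label") end up in each Document's metadata, as shown above.
imdb_loader = HuggingFaceDatasetLoader(path="imdb", page_content_column="text")
imdb_docs = imdb_loader.load()

# tweet_eval: the third argument selects a dataset config; "stance_climate"
# is assumed here from the climate-related hashtags in the answer above.
tweet_loader = HuggingFaceDatasetLoader(
    path="tweet_eval", page_content_column="text", name="stance_climate"
)
tweet_docs = tweet_loader.load()
```

Each row's chosen column becomes `page_content` and the remaining columns (such as `label`) land in `metadata`, which is exactly the shape of the documents printed above.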
## iFixit

https://python.langchain.com/docs/integrations/document_loaders/ifixit/
This loader will allow you to download the text of a repair guide, text of Q&A’s and wikis from devices on `iFixit` using their open APIs. It’s incredibly useful for context related to technical documents and answers to questions about devices in the corpus of data on `iFixit`. ``` [Document(page_content="# Banana Teardown\nIn this teardown, we open a banana to see what's inside. Yellow and delicious, but most importantly, yellow.\n\n\n###Tools Required:\n\n - Fingers\n\n - Teeth\n\n - Thumbs\n\n\n###Parts Required:\n\n - None\n\n\n## Step 1\nTake one banana from the bunch.\nDon't squeeze too hard!\n\n\n## Step 2\nHold the banana in your left hand and grip the stem between your right thumb and forefinger.\n\n\n## Step 3\nPull the stem downward until the peel splits.\n\n\n## Step 4\nInsert your thumbs into the split of the peel and pull the two sides apart.\nExpose the top of the banana. It may be slightly squished from pulling on the stem, but this will not affect the flavor.\n\n\n## Step 5\nPull open the peel, starting from your original split, and opening it along the length of the banana.\n\n\n## Step 6\nRemove fruit from peel.\n\n\n## Step 7\nEat and enjoy!\nThis is where you'll need your teeth.\nDo not choke on banana!\n", lookup_str='', metadata={'source': 'https://www.ifixit.com/Teardown/Banana+Teardown/811', 'title': 'Banana Teardown'}, lookup_index=0)] ``` ``` [Document(page_content='# My iPhone 6 is typing and opening apps by itself\nmy iphone 6 is typing and opening apps by itself. How do i fix this. I just bought it last week.\nI restored as manufactures cleaned up the screen\nthe problem continues\n\n## 27 Answers\n\nFilter by: \n\nMost Helpful\nNewest\nOldest\n\n### Accepted Answer\nHi,\nWhere did you buy it? If you bought it from Apple or from an official retailer like Carphone warehouse etc. Then you\'ll have a year warranty and can get it replaced free.\nIf you bought it second hand, from a third part repair shop or online, then it may still have warranty, unless it is refurbished and has been repaired elsewhere.\nIf this is the case, it may be the screen that needs replacing to solve your issue.\nEither way, wherever you got it, it\'s best to return it and get a refund or a replacement device. :-)\n\n\n\n### Most Helpful Answer\nI had the same issues, screen freezing, opening apps by itself, selecting the screens and typing on it\'s own. I first suspected aliens and then ghosts and then hackers.\niPhone 6 is weak physically and tend to bend on pressure. And my phone had no case or cover.\nI took the phone to apple stores and they said sensors need to be replaced and possibly screen replacement as well. My phone is just 17 months old.\nHere is what I did two days ago and since then it is working like a charm..\nHold the phone in portrait (as if watching a movie). Twist it very very gently. do it few times.Rest the phone for 10 mins (put it on a flat surface). You can now notice those self typing things gone and screen getting stabilized.\nThen, reset the hardware (hold the power and home button till the screen goes off and comes back with apple logo). release the buttons when you see this.\nThen, connect to your laptop and log in to iTunes and reset your phone completely. 
(please take a back-up first).\nAnd your phone should be good to use again.\nWhat really happened here for me is that the sensors might have stuck to the screen and with mild twisting, they got disengaged/released.\nI posted this in Apple Community and the moderators deleted it, for the best reasons known to them.\nInstead of throwing away your phone (or selling cheaply), try this and you could be saving your phone.\nLet me know how it goes.\n\n\n\n### Other Answer\nIt was the charging cord! I bought a gas station braided cord and it was the culprit. Once I plugged my OEM cord into the phone the GHOSTS went away.\n\n\n\n### Other Answer\nI\'ve same issue that I just get resolved. I first tried to restore it from iCloud back, however it was not a software issue or any virus issue, so after restore same problem continues. Then I get my phone to local area iphone repairing lab, and they detected that it is an LCD issue. LCD get out of order without any reason (It was neither hit or nor slipped, but LCD get out of order all and sudden, while using it) it started opening things at random. I get LCD replaced with new one, that cost me $80.00 in total ($70.00 LCD charges + $10.00 as labor charges to fix it). iPhone is back to perfect mode now. It was iphone 6s. Thanks.\n\n\n\n### Other Answer\nI was having the same issue with my 6 plus, I took it to a repair shop, they opened the phone, disconnected the three ribbons the screen has, blew up and cleaned the connectors and connected the screen again and it solved the issue… it’s hardware, not software.\n\n\n\n### Other Answer\nHey.\nJust had this problem now. As it turns out, you just need to plug in your phone. I use a case and when I took it off I noticed that there was a lot of dust and dirt around the areas that the case didn\'t cover. I shined a light in my ports and noticed they were filled with dust. Tomorrow I plan on using pressurized air to clean it out and the problem should be solved. If you plug in your phone and unplug it and it stops the issue, I recommend cleaning your phone thoroughly.\n\n\n\n### Other Answer\nI simply changed the power supply and problem was gone. The block that plugs in the wall not the sub cord. The cord was fine but not the block.\n\n\n\n### Other Answer\nSomeone ask! I purchased my iPhone 6s Plus for 1000 from at&t. Before I touched it, I purchased a otter defender case. I read where at&t said touch desease was due to dropping! Bullshit!! I am 56 I have never dropped it!! Looks brand new! Never dropped or abused any way! I have my original charger. I am going to clean it and try everyone’s advice. It really sucks! I had 40,000,000 on my heart of Vegas slots! I play every day. I would be spinning and my fingers were no where max buttons and it would light up and switch to max. It did it 3 times before I caught it light up by its self. It sucks. Hope I can fix it!!!!\n\n\n\n### Other Answer\nNo answer, but same problem with iPhone 6 plus--random, self-generated jumping amongst apps and typing on its own--plus freezing regularly (aha--maybe that\'s what the "plus" in "6 plus" refers to?). An Apple Genius recommended upgrading to iOS 11.3.1 from 11.2.2, to see if that fixed the trouble. If it didn\'t, Apple will sell me a new phone for $168! Of couese the OS upgrade didn\'t fix the problem. Thanks for helping me figure out that it\'s most likely a hardware problem--which the "genius" probably knows too.\nI\'m getting ready to go Android.\n\n\n\n### Other Answer\nI experienced similar ghost touches. 
Two weeks ago, I changed my iPhone 6 Plus shell (I had forced the phone into it because it’s pretty tight), and also put a new glass screen protector (the edges of the protector don’t stick to the screen, weird, so I brushed pressure on the edges at times to see if they may smooth out one day miraculously). I’m not sure if I accidentally bend the phone when I installed the shell, or, if I got a defective glass protector that messes up the touch sensor. Well, yesterday was the worse day, keeps dropping calls and ghost pressing keys for me when I was on a call. I got fed up, so I removed the screen protector, and so far problems have not reoccurred yet. I’m crossing my fingers that problems indeed solved.\n\n\n\n### Other Answer\nthank you so much for this post! i was struggling doing the reset because i cannot type userids and passwords correctly because the iphone 6 plus i have kept on typing letters incorrectly. I have been doing it for a day until i come across this article. Very helpful! God bless you!!\n\n\n\n### Other Answer\nI just turned it off, and turned it back on.\n\n\n\n### Other Answer\nMy problem has not gone away completely but its better now i changed my charger and turned off prediction ....,,,now it rarely happens\n\n\n\n### Other Answer\nI tried all of the above. I then turned off my home cleaned it with isopropyl alcohol 90%. Then I baked it in my oven on warm for an hour and a half over foil. Took it out and set it cool completely on the glass top stove. Then I turned on and it worked.\n\n\n\n### Other Answer\nI think at& t should man up and fix your phone for free! You pay a lot for a Apple they should back it. I did the next 30 month payments and finally have it paid off in June. My iPad sept. Looking forward to a almost 100 drop in my phone bill! Now this crap!!! Really\n\n\n\n### Other Answer\nIf your phone is JailBroken, suggest downloading a virus. While all my symptoms were similar, there was indeed a virus/malware on the phone which allowed for remote control of my iphone (even while in lock mode). My mistake for buying a third party iphone i suppose. Anyway i have since had the phone restored to factory and everything is working as expected for now. I will of course keep you posted if this changes. Thanks to all for the helpful posts, really helped me narrow a few things down.\n\n\n\n### Other Answer\nWhen my phone was doing this, it ended up being the screen protector that i got from 5 below. I took it off and it stopped. I ordered more protectors from amazon and replaced it\n\n\n\n### Other Answer\niPhone 6 Plus first generation….I had the same issues as all above, apps opening by themselves, self typing, ultra sensitive screen, items jumping around all over….it even called someone on FaceTime twice by itself when I was not in the room…..I thought the phone was toast and i’d have to buy a new one took me a while to figure out but it was the extra cheap block plug I bought at a dollar store for convenience of an extra charging station when I move around the house from den to living room…..cord was fine but bought a new Apple brand block plug…no more problems works just fine now. 
This issue was a recent event so had to narrow things down to what had changed recently to my phone so I could figure it out.\nI even had the same problem on a laptop with documents opening up by themselves…..a laptop that was plugged in to the same wall plug as my phone charger with the dollar store block plug….until I changed the block plug.\n\n\n\n### Other Answer\nHad the problem: Inherited a 6s Plus from my wife. She had no problem with it.\nLooks like it was merely the cheap phone case I purchased on Amazon. It was either pinching the edges or torquing the screen/body of the phone. Problem solved.\n\n\n\n### Other Answer\nI bought my phone on march 6 and it was a brand new, but It sucks me uo because it freezing, shaking and control by itself. I went to the store where I bought this and I told them to replacr it, but they told me I have to pay it because Its about lcd issue. Please help me what other ways to fix it. Or should I try to remove the screen or should I follow your step above.\n\n\n\n### Other Answer\nI tried everything and it seems to come back to needing the original iPhone cable…or at least another 1 that would have come with another iPhone…not the $5 Store fast charging cables. My original cable is pretty beat up - like most that I see - but I’ve been beaten up much MUCH less by sticking with its use! I didn’t find that the casing/shell around it or not made any diff.\n\n\n\n### Other Answer\ngreat now I have to wait one more hour to reset my phone and while I was tryin to connect my phone to my computer the computer also restarted smh does anyone else knows how I can get my phone to work… my problem is I have a black dot on the bottom left of my screen an it wont allow me to touch a certain part of my screen unless I rotate my phone and I know the password but the first number is a 2 and it won\'t let me touch 1,2, or 3 so now I have to find a way to get rid of my password and all of a sudden my phone wants to touch stuff on its own which got my phone disabled many times to the point where I have to wait a whole hour and I really need to finish something on my phone today PLEASE HELPPPP\n\n\n\n### Other Answer\nIn my case , iphone 6 screen was faulty. I got it replaced at local repair shop, so far phone is working fine.\n\n\n\n### Other Answer\nthis problem in iphone 6 has many different scenarios and solutions, first try to reconnect the lcd screen to the motherboard again, if didnt solve, try to replace the lcd connector on the motherboard, if not solved, then remains two issues, lcd screen it self or touch IC. in my country some repair shops just change them all for almost 40$ since they dont want to troubleshoot one by one. readers of this comment also should know that partial screen not responding in other iphone models might also have an issue in LCD connector on the motherboard, specially if you lock/unlock screen and screen works again for sometime. lcd connectors gets disconnected lightly from the motherboard due to multiple falls and hits after sometime. best of luck for all\n\n\n\n### Other Answer\nI am facing the same issue whereby these ghost touches type and open apps , I am using an original Iphone cable , how to I fix this issue.\n\n\n\n### Other Answer\nThere were two issues with the phone I had troubles with. It was my dads and turns out he carried it in his pocket. The phone itself had a little bend in it as a result. A little pressure in the opposite direction helped the issue. 
But it also had a tiny crack in the screen which wasnt obvious, once we added a screen protector this fixed the issues entirely.\n\n\n\n### Other Answer\nI had the same problem with my 64Gb iPhone 6+. Tried a lot of things and eventually downloaded all my images and videos to my PC and restarted the phone - problem solved. Been working now for two days.', lookup_str='', metadata={'source': 'https://www.ifixit.com/Answers/View/318583/My+iPhone+6+is+typing+and+opening+apps+by+itself', 'title': 'My iPhone 6 is typing and opening apps by itself'}, lookup_index=0)] ``` ``` [Document(page_content="Standard iPad\nThe standard edition of the tablet computer made by Apple.\n== Background Information ==\n\nOriginally introduced in January 2010, the iPad is Apple's standard edition of their tablet computer. In total, there have been ten generations of the standard edition of the iPad.\n\n== Additional Information ==\n\n* [link|https://www.apple.com/ipad-select/|Official Apple Product Page]\n* [link|https://en.wikipedia.org/wiki/IPad#iPad|Official iPad Wikipedia]", lookup_str='', metadata={'source': 'https://www.ifixit.com/Device/Standard_iPad', 'title': 'Standard iPad'}, lookup_index=0)] ``` If you’re looking for a more general way to search iFixit based on a keyword or phrase, the /suggest endpoint will return content related to the search term, then the loader will load the content from each of the suggested items and prep and return the documents. ``` [Document(page_content='Banana\nTasty fruit. Good source of potassium. Yellow.\n== Background Information ==\n\nCommonly misspelled, this wildly popular, phone shaped fruit serves as nutrition and an obstacle to slow down vehicles racing close behind you. Also used commonly as a synonym for “crazy” or “insane”.\n\nBotanically, the banana is considered a berry, although it isn’t included in the culinary berry category containing strawberries and raspberries. Belonging to the genus Musa, the banana originated in Southeast Asia and Australia. Now largely cultivated throughout South and Central America, bananas are largely available throughout the world. They are especially valued as a staple food group in developing countries due to the banana tree’s ability to produce fruit year round.\n\nThe banana can be easily opened. Simply remove the outer yellow shell by cracking the top of the stem. Then, with the broken piece, peel downward on each side until the fruity components on the inside are exposed. Once the shell has been removed it cannot be put back together.\n\n== Technical Specifications ==\n\n* Dimensions: Variable depending on genetics of the parent tree\n* Color: Variable depending on ripeness, region, and season\n\n== Additional Information ==\n\n[link|https://en.wikipedia.org/wiki/Banana|Wiki: Banana]', lookup_str='', metadata={'source': 'https://www.ifixit.com/Device/Banana', 'title': 'Banana'}, lookup_index=0), Document(page_content="# Banana Teardown\nIn this teardown, we open a banana to see what's inside. Yellow and delicious, but most importantly, yellow.\n\n\n###Tools Required:\n\n - Fingers\n\n - Teeth\n\n - Thumbs\n\n\n###Parts Required:\n\n - None\n\n\n## Step 1\nTake one banana from the bunch.\nDon't squeeze too hard!\n\n\n## Step 2\nHold the banana in your left hand and grip the stem between your right thumb and forefinger.\n\n\n## Step 3\nPull the stem downward until the peel splits.\n\n\n## Step 4\nInsert your thumbs into the split of the peel and pull the two sides apart.\nExpose the top of the banana. 
It may be slightly squished from pulling on the stem, but this will not affect the flavor.\n\n\n## Step 5\nPull open the peel, starting from your original split, and opening it along the length of the banana.\n\n\n## Step 6\nRemove fruit from peel.\n\n\n## Step 7\nEat and enjoy!\nThis is where you'll need your teeth.\nDo not choke on banana!\n", lookup_str='', metadata={'source': 'https://www.ifixit.com/Teardown/Banana+Teardown/811', 'title': 'Banana Teardown'}, lookup_index=0)] ```
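The guide, Q&A thread, and device wiki shown above are each loaded from a single iFixit URL, while keyword search goes through the suggest endpoint described earlier. A minimal sketch, assuming the `IFixitLoader` class from `langchain_community` (the URLs come from the `source` metadata in the outputs above):

```python
from langchain_community.document_loaders import IFixitLoader

# Load one teardown/guide, Q&A thread, or device wiki directly from its URL.
loader = IFixitLoader("https://www.ifixit.com/Teardown/Banana+Teardown/811")
guide_docs = loader.load()

qa_loader = IFixitLoader(
    "https://www.ifixit.com/Answers/View/318583/My+iPhone+6+is+typing+and+opening+apps+by+itself"
)
qa_docs = qa_loader.load()

# Keyword search: the suggest endpoint returns related devices and guides,
# and a document is loaded for each suggestion.
suggested_docs = IFixitLoader.load_suggestions("Banana")
```

Both paths return plain LangChain `Document` objects, so the results can be fed into the same splitters and vector stores as any other loader's output.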
It may be slightly squished from pulling on the stem, but this will not affect the flavor.\n\n\n## Step 5\nPull open the peel, starting from your original split, and opening it along the length of the banana.\n\n\n## Step 6\nRemove fruit from peel.\n\n\n## Step 7\nEat and enjoy!\nThis is where you'll need your teeth.\nDo not choke on banana!\n", lookup_str='', metadata={'source': 'https://www.ifixit.com/Teardown/Banana+Teardown/811', 'title': 'Banana Teardown'}, lookup_index=0)]
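The iFixit content above mentions that the /suggest endpoint can be used for keyword searches, with the loader then fetching each suggested item. A minimal sketch of that flow, assuming the `IFixitLoader.load_suggestions` helper and the metadata fields (`title`, `source`) shown in the outputs above:

```python
from langchain_community.document_loaders import IFixitLoader

# Keyword search via iFixit's /suggest endpoint: each suggested device,
# guide, or answers page is fetched and returned as a Document.
# (Sketch only -- the helper name `load_suggestions` is assumed here.)
suggestion_docs = IFixitLoader.load_suggestions("Banana")

for doc in suggestion_docs[:2]:
    print(doc.metadata["title"], "->", doc.metadata["source"])
```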
https://python.langchain.com/docs/integrations/document_loaders/image/
This covers how to load images such as `JPG` or `PNG` into a document format that we can use downstream. ``` Document(page_content="LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n\n\n‘Zxjiang Shen' (F3}, Ruochen Zhang”, Melissa Dell*, Benjamin Charles Germain\nLeet, Jacob Carlson, and Weining LiF\n\n\nsugehen\n\nshangthrows, et\n\n“Abstract. Recent advanocs in document image analysis (DIA) have been\n‘pimarliy driven bythe application of neural networks dell roar\n{uteomer could be aly deployed in production and extended fo farther\n[nvetigtion. However, various factory ke lcely organize codebanee\nsnd sophisticated modal cnigurations compat the ey ree of\n‘erin! innovation by wide sence, Though there have been sng\n‘Hors to improve reuablty and simplify deep lees (DL) mode\n‘aon, sone of them ae optimized for challenge inthe demain of DIA,\nThis roprscte a major gap in the extng fol, sw DIA i eal to\nscademic research acon wie range of dpi in the social ssencee\n[rary for streamlining the sage of DL in DIA research and appicn\n‘tons The core LayoutFaraer brary comes with a sch of simple and\nIntative interfaee or applying and eutomiing DI. odel fr Inyo de\npltfom for sharing both protrined modes an fal document dist\n{ation pipeline We demonutate that LayootPareer shea fr both\nlightweight and lrgeseledgtieation pipelines in eal-word uae ces\nThe leary pblely smal at Btspe://layost-pareergsthab So\n\n\n\n‘Keywords: Document Image Analysis» Deep Learning Layout Analysis\n‘Character Renguition - Open Serres dary « Tol\n\n\nIntroduction\n\n\n‘Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndoctiment image analysis (DIA) tea including document image clasiffeation [I]\n", lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg'}, lookup_index=0) ``` Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `mode="elements"`. ``` Document(page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg', 'filename': 'layout-parser-paper-fast.jpg', 'page_number': 1, 'category': 'Title'}, lookup_index=0) ```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:09.695Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/image/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/image/", "description": "This covers how to load images such as JPG or PNG into a document", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3450", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"image\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:09 GMT", "etag": "W/\"40c9a6934a8ae89a9671f7a302c72f1a\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::bnwhw-1713753549419-352758c19bf3" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/image/", "property": "og:url" }, { "content": "Images | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "This covers how to load images such as JPG or PNG into a document", "property": "og:description" } ], "title": "Images | 🦜️🔗 LangChain" }
This covers how to load images such as JPG or PNG into a document format that we can use downstream. Document(page_content="LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n\n\n‘Zxjiang Shen' (F3}, Ruochen Zhang”, Melissa Dell*, Benjamin Charles Germain\nLeet, Jacob Carlson, and Weining LiF\n\n\nsugehen\n\nshangthrows, et\n\n“Abstract. Recent advanocs in document image analysis (DIA) have been\n‘pimarliy driven bythe application of neural networks dell roar\n{uteomer could be aly deployed in production and extended fo farther\n[nvetigtion. However, various factory ke lcely organize codebanee\nsnd sophisticated modal cnigurations compat the ey ree of\n‘erin! innovation by wide sence, Though there have been sng\n‘Hors to improve reuablty and simplify deep lees (DL) mode\n‘aon, sone of them ae optimized for challenge inthe demain of DIA,\nThis roprscte a major gap in the extng fol, sw DIA i eal to\nscademic research acon wie range of dpi in the social ssencee\n[rary for streamlining the sage of DL in DIA research and appicn\n‘tons The core LayoutFaraer brary comes with a sch of simple and\nIntative interfaee or applying and eutomiing DI. odel fr Inyo de\npltfom for sharing both protrined modes an fal document dist\n{ation pipeline We demonutate that LayootPareer shea fr both\nlightweight and lrgeseledgtieation pipelines in eal-word uae ces\nThe leary pblely smal at Btspe://layost-pareergsthab So\n\n\n\n‘Keywords: Document Image Analysis» Deep Learning Layout Analysis\n‘Character Renguition - Open Serres dary « Tol\n\n\nIntroduction\n\n\n‘Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndoctiment image analysis (DIA) tea including document image clasiffeation [I]\n", lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg'}, lookup_index=0) Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements". Document(page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg', 'filename': 'layout-parser-paper-fast.jpg', 'page_number': 1, 'category': 'Title'}, lookup_index=0)
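The outputs above were produced by Unstructured's image loader, but this page never shows the loader being constructed. A minimal sketch, assuming the `UnstructuredImageLoader` class, an installed `unstructured` package with image support, and the `layout-parser-paper-fast.jpg` file referenced in the outputs:

```python
from langchain_community.document_loaders import UnstructuredImageLoader

# Whole-image load: one Document per image (the default "single" mode).
loader = UnstructuredImageLoader("layout-parser-paper-fast.jpg")
data = loader.load()
print(data[0].page_content[:200])

# Element-level load: keep Unstructured's per-chunk elements (Title, NarrativeText, ...).
loader = UnstructuredImageLoader("layout-parser-paper-fast.jpg", mode="elements")
data = loader.load()
print(data[0].metadata["category"])
```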
https://python.langchain.com/docs/integrations/document_loaders/huawei_obs_directory/
## Huawei OBS Directory The following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents. ``` # Install the required package# pip install esdk-obs-python ``` ``` from langchain_community.document_loaders import OBSDirectoryLoader ``` ``` endpoint = "your-endpoint" ``` ``` # Configure your access credentials\nconfig = {"ak": "your-access-key", "sk": "your-secret-key"}loader = OBSDirectoryLoader("your-bucket-name", endpoint=endpoint, config=config) ``` ## Specify a Prefix for Loading[​](#specify-a-prefix-for-loading "Direct link to Specify a Prefix for Loading") If you want to load objects with a specific prefix from the bucket, you can use the following code: ``` loader = OBSDirectoryLoader( "your-bucket-name", endpoint=endpoint, config=config, prefix="test_prefix") ``` ## Get Authentication Information from ECS[​](#get-authentication-information-from-ecs "Direct link to Get Authentication Information from ECS") If your langchain is deployed on Huawei Cloud ECS and [Agency is set up](https://support.huaweicloud.com/intl/en-us/usermanual-ecs/ecs_03_0166.html#section7), the loader can directly get the security token from ECS without needing access key and secret key. ``` config = {"get_token_from_ecs": True}loader = OBSDirectoryLoader("your-bucket-name", endpoint=endpoint, config=config) ``` ## Use a Public Bucket[​](#use-a-public-bucket "Direct link to Use a Public Bucket") If your bucket’s bucket policy allows anonymous access (anonymous users have `listBucket` and `GetObject` permissions), you can directly load the objects without configuring the `config` parameter. ``` loader = OBSDirectoryLoader("your-bucket-name", endpoint=endpoint) ```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:09.937Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/huawei_obs_directory/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/huawei_obs_directory/", "description": "The following code demonstrates how to load objects from the Huawei OBS", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3451", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"huawei_obs_directory\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:09 GMT", "etag": "W/\"9211a53ce29ab755991d70b2d84b90ec\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::h4p4l-1713753549416-eef0c0093759" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/huawei_obs_directory/", "property": "og:url" }, { "content": "Huawei OBS Directory | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "The following code demonstrates how to load objects from the Huawei OBS", "property": "og:description" } ], "title": "Huawei OBS Directory | 🦜️🔗 LangChain" }
Huawei OBS Directory The following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents. # Install the required package # pip install esdk-obs-python from langchain_community.document_loaders import OBSDirectoryLoader endpoint = "your-endpoint" # Configure your access credentials\n config = {"ak": "your-access-key", "sk": "your-secret-key"} loader = OBSDirectoryLoader("your-bucket-name", endpoint=endpoint, config=config) Specify a Prefix for Loading​ If you want to load objects with a specific prefix from the bucket, you can use the following code: loader = OBSDirectoryLoader( "your-bucket-name", endpoint=endpoint, config=config, prefix="test_prefix" ) Get Authentication Information from ECS​ If your langchain is deployed on Huawei Cloud ECS and Agency is set up, the loader can directly get the security token from ECS without needing access key and secret key. config = {"get_token_from_ecs": True} loader = OBSDirectoryLoader("your-bucket-name", endpoint=endpoint, config=config) Use a Public Bucket​ If your bucket’s bucket policy allows anonymous access (anonymous users have listBucket and GetObject permissions), you can directly load the objects without configuring the config parameter. loader = OBSDirectoryLoader("your-bucket-name", endpoint=endpoint)
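None of the configurations above actually read the bucket until `load()` is called. A minimal sketch of that last step (exactly what ends up in each document's metadata depends on the loader version):

```python
# Works after any of the loader configurations shown above.
docs = loader.load()

print(f"Loaded {len(docs)} document(s) from the bucket")
print(docs[0].metadata)  # typically records the source object for each document
```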
https://python.langchain.com/docs/integrations/document_loaders/huawei_obs_file/
## Huawei OBS File The following code demonstrates how to load an object from the Huawei OBS (Object Storage Service) as document. ``` # Install the required package# pip install esdk-obs-python ``` ``` from langchain_community.document_loaders.obs_file import OBSFileLoader ``` ``` endpoint = "your-endpoint" ``` ``` from obs import ObsClientobs_client = ObsClient( access_key_id="your-access-key", secret_access_key="your-secret-key", server=endpoint,)loader = OBSFileLoader("your-bucket-name", "your-object-key", client=obs_client) ``` ## Each Loader with Separate Authentication Information[​](#each-loader-with-separate-authentication-information "Direct link to Each Loader with Separate Authentication Information") If you don’t need to reuse OBS connections between different loaders, you can directly configure the `config`. The loader will use the config information to initialize its own OBS client. ``` # Configure your access credentials\nconfig = {"ak": "your-access-key", "sk": "your-secret-key"}loader = OBSFileLoader( "your-bucket-name", "your-object-key", endpoint=endpoint, config=config) ``` ## Get Authentication Information from ECS[​](#get-authentication-information-from-ecs "Direct link to Get Authentication Information from ECS") If your langchain is deployed on Huawei Cloud ECS and [Agency is set up](https://support.huaweicloud.com/intl/en-us/usermanual-ecs/ecs_03_0166.html#section7), the loader can directly get the security token from ECS without needing access key and secret key. ``` config = {"get_token_from_ecs": True}loader = OBSFileLoader( "your-bucket-name", "your-object-key", endpoint=endpoint, config=config) ``` ## Access a Publicly Accessible Object[​](#access-a-publicly-accessible-object "Direct link to Access a Publicly Accessible Object") If the object you want to access allows anonymous user access (anonymous users have `GetObject` permission), you can directly load the object without configuring the `config` parameter. ``` loader = OBSFileLoader("your-bucket-name", "your-object-key", endpoint=endpoint) ```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:10.181Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/huawei_obs_file/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/huawei_obs_file/", "description": "The following code demonstrates how to load an object from the Huawei", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "0", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"huawei_obs_file\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:09 GMT", "etag": "W/\"75bddcee4afb71ba59e58e366b809ae2\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "iad1::cp5p8-1713753549441-4e38f46d3ec3" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/huawei_obs_file/", "property": "og:url" }, { "content": "Huawei OBS File | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "The following code demonstrates how to load an object from the Huawei", "property": "og:description" } ], "title": "Huawei OBS File | 🦜️🔗 LangChain" }
Huawei OBS File The following code demonstrates how to load an object from the Huawei OBS (Object Storage Service) as document. # Install the required package # pip install esdk-obs-python from langchain_community.document_loaders.obs_file import OBSFileLoader endpoint = "your-endpoint" from obs import ObsClient obs_client = ObsClient( access_key_id="your-access-key", secret_access_key="your-secret-key", server=endpoint, ) loader = OBSFileLoader("your-bucket-name", "your-object-key", client=obs_client) Each Loader with Separate Authentication Information​ If you don’t need to reuse OBS connections between different loaders, you can directly configure the config. The loader will use the config information to initialize its own OBS client. # Configure your access credentials\n config = {"ak": "your-access-key", "sk": "your-secret-key"} loader = OBSFileLoader( "your-bucket-name", "your-object-key", endpoint=endpoint, config=config ) Get Authentication Information from ECS​ If your langchain is deployed on Huawei Cloud ECS and Agency is set up, the loader can directly get the security token from ECS without needing access key and secret key. config = {"get_token_from_ecs": True} loader = OBSFileLoader( "your-bucket-name", "your-object-key", endpoint=endpoint, config=config ) Access a Publicly Accessible Object​ If the object you want to access allows anonymous user access (anonymous users have GetObject permission), you can directly load the object without configuring the config parameter. loader = OBSFileLoader("your-bucket-name", "your-object-key", endpoint=endpoint)
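As with the directory loader, the object is only fetched when `load()` is called. A minimal sketch:

```python
# Fetch the object and parse it into LangChain Documents.
documents = loader.load()

print(documents[0].page_content[:200])
print(documents[0].metadata)
```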
https://python.langchain.com/docs/integrations/document_loaders/hacker_news/
## Hacker News > [Hacker News](https://en.wikipedia.org/wiki/Hacker_News) (sometimes abbreviated as `HN`) is a social news website focusing on computer science and entrepreneurship. It is run by the investment fund and startup incubator `Y Combinator`. In general, content that can be submitted is defined as “anything that gratifies one’s intellectual curiosity.” This notebook covers how to pull page data and comments from [Hacker News](https://news.ycombinator.com/) ``` from langchain_community.document_loaders import HNLoader ``` ``` loader = HNLoader("https://news.ycombinator.com/item?id=34817881") ``` ``` data[0].page_content[:300] ``` ``` "delta_p_delta_x 73 days ago \n | next [–] \n\nAstrophysical and cosmological simulations are often insightful. They're also very cross-disciplinary; besides the obvious astrophysics, there's networking and sysadmin, parallel computing and algorithm theory (so that the simulation programs a" ``` ``` {'source': 'https://news.ycombinator.com/item?id=34817881', 'title': 'What Lights the Universe’s Standard Candles?'} ``` * * * #### Help us out by providing feedback on this documentation page:
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:11.026Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/hacker_news/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/hacker_news/", "description": "Hacker News (sometimes", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4381", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"hacker_news\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:09 GMT", "etag": "W/\"15c91130541e5d8daae517fea9392dd1\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::tjlr2-1713753549735-75a6bebb36d0" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/hacker_news/", "property": "og:url" }, { "content": "Hacker News | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Hacker News (sometimes", "property": "og:description" } ], "title": "Hacker News | 🦜️🔗 LangChain" }
Hacker News Hacker News (sometimes abbreviated as HN) is a social news website focusing on computer science and entrepreneurship. It is run by the investment fund and startup incubator Y Combinator. In general, content that can be submitted is defined as “anything that gratifies one’s intellectual curiosity.” This notebook covers how to pull page data and comments from Hacker News from langchain_community.document_loaders import HNLoader loader = HNLoader("https://news.ycombinator.com/item?id=34817881") data[0].page_content[:300] "delta_p_delta_x 73 days ago \n | next [–] \n\nAstrophysical and cosmological simulations are often insightful. They're also very cross-disciplinary; besides the obvious astrophysics, there's networking and sysadmin, parallel computing and algorithm theory (so that the simulation programs a" {'source': 'https://news.ycombinator.com/item?id=34817881', 'title': 'What Lights the Universe’s Standard Candles?'} Help us out by providing feedback on this documentation page:
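The `data[0].page_content[:300]` call above assumes the documents have already been loaded, but the intermediate `load()` step is missing from this page. A minimal sketch of the full sequence:

```python
from langchain_community.document_loaders import HNLoader

loader = HNLoader("https://news.ycombinator.com/item?id=34817881")

# The step the page omits: fetch the item page and its comments.
data = loader.load()

print(data[0].page_content[:300])
print(data[0].metadata)
```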
https://python.langchain.com/docs/integrations/document_loaders/image_captions/
## Image captions By default, the loader utilizes the pre-trained [Salesforce BLIP image captioning model](https://huggingface.co/Salesforce/blip-image-captioning-base). This notebook shows how to use the `ImageCaptionLoader` to generate a query-able index of image captions ``` %pip install --upgrade --quiet transformers ``` ``` from langchain_community.document_loaders import ImageCaptionLoader ``` ### Prepare a list of image urls from Wikimedia[​](#prepare-a-list-of-image-urls-from-wikimedia "Direct link to Prepare a list of image urls from Wikimedia") ``` list_image_urls = [ "https://upload.wikimedia.org/wikipedia/commons/thumb/5/5a/Hyla_japonica_sep01.jpg/260px-Hyla_japonica_sep01.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/7/71/Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg/270px-Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg/251px-Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Passion_fruits_-_whole_and_halved.jpg/270px-Passion_fruits_-_whole_and_halved.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Messier83_-_Heic1403a.jpg/277px-Messier83_-_Heic1403a.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/b/b6/2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg/288px-2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/9/99/Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg/224px-Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg",] ``` ### Create the loader[​](#create-the-loader "Direct link to Create the loader") ``` loader = ImageCaptionLoader(path_images=list_image_urls)list_docs = loader.load()list_docs ``` ``` import requestsfrom PIL import ImageImage.open(requests.get(list_image_urls[0], stream=True).raw).convert("RGB") ``` ### Create the index[​](#create-the-index "Direct link to Create the index") ``` from langchain.indexes import VectorstoreIndexCreatorindex = VectorstoreIndexCreator().from_loaders([loader]) ``` ### Query[​](#query "Direct link to Query") ``` query = "What's the painting about?"index.query(query) ``` ``` query = "What kind of images are there?"index.query(query) ``` * * * #### Help us out by providing feedback on this documentation page:
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:11.155Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/image_captions/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/image_captions/", "description": "By default, the loader utilizes the pre-trained [Salesforce BLIP image", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4380", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"image_captions\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:09 GMT", "etag": "W/\"244312576a77d0ef2d78b717e1e84a57\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::wkrjw-1713753549966-546603636d67" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/image_captions/", "property": "og:url" }, { "content": "Image captions | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "By default, the loader utilizes the pre-trained [Salesforce BLIP image", "property": "og:description" } ], "title": "Image captions | 🦜️🔗 LangChain" }
Image captions By default, the loader utilizes the pre-trained Salesforce BLIP image captioning model. This notebook shows how to use the ImageCaptionLoader to generate a query-able index of image captions %pip install --upgrade --quiet transformers from langchain_community.document_loaders import ImageCaptionLoader Prepare a list of image urls from Wikimedia​ list_image_urls = [ "https://upload.wikimedia.org/wikipedia/commons/thumb/5/5a/Hyla_japonica_sep01.jpg/260px-Hyla_japonica_sep01.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/7/71/Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg/270px-Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg/251px-Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Passion_fruits_-_whole_and_halved.jpg/270px-Passion_fruits_-_whole_and_halved.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Messier83_-_Heic1403a.jpg/277px-Messier83_-_Heic1403a.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/b/b6/2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg/288px-2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/9/99/Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg/224px-Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg", ] Create the loader​ loader = ImageCaptionLoader(path_images=list_image_urls) list_docs = loader.load() list_docs import requests from PIL import Image Image.open(requests.get(list_image_urls[0], stream=True).raw).convert("RGB") Create the index​ from langchain.indexes import VectorstoreIndexCreator index = VectorstoreIndexCreator().from_loaders([loader]) Query​ query = "What's the painting about?" index.query(query) query = "What kind of images are there?" index.query(query) Help us out by providing feedback on this documentation page:
https://python.langchain.com/docs/integrations/document_loaders/imsdb/
## IMSDb > [IMSDb](https://imsdb.com/) is the `Internet Movie Script Database`. This covers how to load `IMSDb` webpages into a document format that we can use downstream. ``` from langchain_community.document_loaders import IMSDbLoader ``` ``` loader = IMSDbLoader("https://imsdb.com/scripts/BlacKkKlansman.html") ``` ``` data[0].page_content[:500] ``` ``` '\n\r\n\r\n\r\n\r\n BLACKKKLANSMAN\r\n \r\n \r\n \r\n \r\n Written by\r\n\r\n Charlie Wachtel & David Rabinowitz\r\n\r\n and\r\n\r\n Kevin Willmott & Spike Lee\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n FADE IN:\r\n \r\n SCENE FROM "GONE WITH' ``` ``` {'source': 'https://imsdb.com/scripts/BlacKkKlansman.html'} ``` * * * #### Help us out by providing feedback on this documentation page:
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:11.672Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/imsdb/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/imsdb/", "description": "IMSDb is the Internet Movie Script Database.", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3452", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"imsdb\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:11 GMT", "etag": "W/\"42a6a88237b05680ec65263b15932720\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::qk8bd-1713753551601-d7f940be2047" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/imsdb/", "property": "og:url" }, { "content": "IMSDb | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "IMSDb is the Internet Movie Script Database.", "property": "og:description" } ], "title": "IMSDb | 🦜️🔗 LangChain" }
IMSDb IMSDb is the Internet Movie Script Database. This covers how to load IMSDb webpages into a document format that we can use downstream. from langchain_community.document_loaders import IMSDbLoader loader = IMSDbLoader("https://imsdb.com/scripts/BlacKkKlansman.html") data[0].page_content[:500] '\n\r\n\r\n\r\n\r\n BLACKKKLANSMAN\r\n \r\n \r\n \r\n \r\n Written by\r\n\r\n Charlie Wachtel & David Rabinowitz\r\n\r\n and\r\n\r\n Kevin Willmott & Spike Lee\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n FADE IN:\r\n \r\n SCENE FROM "GONE WITH' {'source': 'https://imsdb.com/scripts/BlacKkKlansman.html'} Help us out by providing feedback on this documentation page:
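As on the Hacker News page, the `data` variable above is never defined here; the missing step is the `load()` call. A minimal sketch:

```python
from langchain_community.document_loaders import IMSDbLoader

loader = IMSDbLoader("https://imsdb.com/scripts/BlacKkKlansman.html")

# Fetch the script page and turn it into Documents.
data = loader.load()

print(data[0].page_content[:500])
print(data[0].metadata)
```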
https://python.langchain.com/docs/integrations/document_loaders/joplin/
This notebook covers how to load documents from a `Joplin` database. `Joplin` has a [REST API](https://joplinapp.org/api/references/rest_api/) for accessing its local database. This loader uses the API to retrieve all notes in the database and their metadata. This requires an access token, which can be obtained from the app's Web Clipper settings. You may either initialize the loader directly with the access token, or store it in the environment variable JOPLIN\_ACCESS\_TOKEN. An alternative to this approach is to export `Joplin`’s note database to Markdown files (optionally, with Front Matter metadata) and use a Markdown loader, such as ObsidianLoader, to load them. ``` from langchain_community.document_loaders import JoplinLoader ```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:11.777Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/joplin/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/joplin/", "description": "Joplin is an open-source note-taking app.", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "0", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"joplin\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:11 GMT", "etag": "W/\"2d15254b88b96fee6743237f3732ef3b\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "iad1::6nz8d-1713753551662-d29ac475cd43" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/joplin/", "property": "og:url" }, { "content": "Joplin | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Joplin is an open-source note-taking app.", "property": "og:description" } ], "title": "Joplin | 🦜️🔗 LangChain" }
This notebook covers how to load documents from a Joplin database. Joplin has a REST API for accessing its local database. This loader uses the API to retrieve all notes in the database and their metadata. This requires an access token, which can be obtained from the app's Web Clipper settings. You may either initialize the loader directly with the access token, or store it in the environment variable JOPLIN_ACCESS_TOKEN. An alternative to this approach is to export Joplin’s note database to Markdown files (optionally, with Front Matter metadata) and use a Markdown loader, such as ObsidianLoader, to load them. from langchain_community.document_loaders import JoplinLoader
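Only the import is shown above. A minimal usage sketch, assuming the access token is passed to the constructor (it can also be supplied via the JOPLIN_ACCESS_TOKEN environment variable, as noted above):

```python
from langchain_community.document_loaders import JoplinLoader

# The token can also be read from the JOPLIN_ACCESS_TOKEN environment variable.
loader = JoplinLoader(access_token="<joplin-access-token>")

docs = loader.load()
for doc in docs[:3]:
    print(doc.metadata.get("title"), len(doc.page_content))
```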
https://python.langchain.com/docs/integrations/document_loaders/jupyter_notebook/
This notebook covers how to load data from a `Jupyter notebook (.html)` into a format suitable by LangChain. `NotebookLoader.load()` loads the `.html` notebook file into a `Document` object. ``` [Document(page_content='\'markdown\' cell: \'[\'# Notebook\', \'\', \'This notebook covers how to load data from an .html notebook into a format suitable by LangChain.\']\'\n\n \'code\' cell: \'[\'from langchain_community.document_loaders import NotebookLoader\']\'\n\n \'code\' cell: \'[\'loader = NotebookLoader("example_data/notebook.html")\']\'\n\n \'markdown\' cell: \'[\'`NotebookLoader.load()` loads the `.html` notebook file into a `Document` object.\', \'\', \'**Parameters**:\', \'\', \'* `include_outputs` (bool): whether to include cell outputs in the resulting document (default is False).\', \'* `max_output_length` (int): the maximum number of characters to include from each cell output (default is 10).\', \'* `remove_newline` (bool): whether to remove newline characters from the cell sources and outputs (default is False).\', \'* `traceback` (bool): whether to include full traceback (default is False).\']\'\n\n \'code\' cell: \'[\'loader.load(include_outputs=True, max_output_length=20, remove_newline=True)\']\'\n\n', metadata={'source': 'example_data/notebook.html'})] ```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:12.038Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/jupyter_notebook/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/jupyter_notebook/", "description": "[Jupyter", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4802", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"jupyter_notebook\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:11 GMT", "etag": "W/\"f9eae6adc6d8c05a13848b9ee229adbc\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::6vv8w-1713753551788-969d2e50e93d" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/jupyter_notebook/", "property": "og:url" }, { "content": "Jupyter Notebook | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "[Jupyter", "property": "og:description" } ], "title": "Jupyter Notebook | 🦜️🔗 LangChain" }
This notebook covers how to load data from a Jupyter notebook (.html) into a format suitable by LangChain. NotebookLoader.load() loads the .html notebook file into a Document object. [Document(page_content='\'markdown\' cell: \'[\'# Notebook\', \'\', \'This notebook covers how to load data from an .html notebook into a format suitable by LangChain.\']\'\n\n \'code\' cell: \'[\'from langchain_community.document_loaders import NotebookLoader\']\'\n\n \'code\' cell: \'[\'loader = NotebookLoader("example_data/notebook.html")\']\'\n\n \'markdown\' cell: \'[\'`NotebookLoader.load()` loads the `.html` notebook file into a `Document` object.\', \'\', \'**Parameters**:\', \'\', \'* `include_outputs` (bool): whether to include cell outputs in the resulting document (default is False).\', \'* `max_output_length` (int): the maximum number of characters to include from each cell output (default is 10).\', \'* `remove_newline` (bool): whether to remove newline characters from the cell sources and outputs (default is False).\', \'* `traceback` (bool): whether to include full traceback (default is False).\']\'\n\n \'code\' cell: \'[\'loader.load(include_outputs=True, max_output_length=20, remove_newline=True)\']\'\n\n', metadata={'source': 'example_data/notebook.html'})]
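The loader calls are only visible inside the embedded example output above, which makes them easy to miss. A minimal sketch that mirrors that output (in some library versions these options are passed to the `NotebookLoader` constructor rather than to `load()`):

```python
from langchain_community.document_loaders import NotebookLoader

loader = NotebookLoader("example_data/notebook.html")

# Mirrors the call shown in the embedded output above:
#   include_outputs    include cell outputs in the resulting Document
#   max_output_length  truncate each output to this many characters
#   remove_newline     strip newline characters from sources and outputs
docs = loader.load(include_outputs=True, max_output_length=20, remove_newline=True)

print(docs[0].page_content[:300])
```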
https://python.langchain.com/docs/integrations/document_loaders/larksuite/
## LarkSuite (FeiShu) > [LarkSuite](https://www.larksuite.com/) is an enterprise collaboration platform developed by ByteDance. This notebook covers how to load data from the `LarkSuite` REST API into a format that can be ingested into LangChain, along with example usage for text summarization. The LarkSuite API requires an access token (tenant\_access\_token or user\_access\_token), checkout [LarkSuite open platform document](https://open.larksuite.com/document) for API details. ``` from getpass import getpassfrom langchain_community.document_loaders.larksuite import LarkSuiteDocLoaderDOMAIN = input("larksuite domain")ACCESS_TOKEN = getpass("larksuite tenant_access_token or user_access_token")DOCUMENT_ID = input("larksuite document id") ``` ``` from pprint import pprintlarksuite_loader = LarkSuiteDocLoader(DOMAIN, ACCESS_TOKEN, DOCUMENT_ID)docs = larksuite_loader.load()pprint(docs) ``` ``` [Document(page_content='Test Doc\nThis is a Test Doc\n\n1\n2\n3\n\n', metadata={'document_id': 'V76kdbd2HoBbYJxdiNNccajunPf', 'revision_id': 11, 'title': 'Test Doc'})] ``` ``` # see https://python.langchain.com/docs/use_cases/summarization for more detailsfrom langchain.chains.summarize import load_summarize_chainfrom langchain_community.llms.fake import FakeListLLMllm = FakeListLLM()chain = load_summarize_chain(llm, chain_type="map_reduce")chain.run(docs) ``` * * * #### Help us out by providing feedback on this documentation page:
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:12.124Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/larksuite/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/larksuite/", "description": "LarkSuite is an enterprise collaboration", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3452", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"larksuite\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:12 GMT", "etag": "W/\"7da42084c30deb9c3414902fe4191c30\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::5czlr-1713753552044-3e4b7aa69082" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/larksuite/", "property": "og:url" }, { "content": "LarkSuite (FeiShu) | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "LarkSuite is an enterprise collaboration", "property": "og:description" } ], "title": "LarkSuite (FeiShu) | 🦜️🔗 LangChain" }
LarkSuite (FeiShu) LarkSuite is an enterprise collaboration platform developed by ByteDance. This notebook covers how to load data from the LarkSuite REST API into a format that can be ingested into LangChain, along with example usage for text summarization. The LarkSuite API requires an access token (tenant_access_token or user_access_token), checkout LarkSuite open platform document for API details. from getpass import getpass from langchain_community.document_loaders.larksuite import LarkSuiteDocLoader DOMAIN = input("larksuite domain") ACCESS_TOKEN = getpass("larksuite tenant_access_token or user_access_token") DOCUMENT_ID = input("larksuite document id") from pprint import pprint larksuite_loader = LarkSuiteDocLoader(DOMAIN, ACCESS_TOKEN, DOCUMENT_ID) docs = larksuite_loader.load() pprint(docs) [Document(page_content='Test Doc\nThis is a Test Doc\n\n1\n2\n3\n\n', metadata={'document_id': 'V76kdbd2HoBbYJxdiNNccajunPf', 'revision_id': 11, 'title': 'Test Doc'})] # see https://python.langchain.com/docs/use_cases/summarization for more details from langchain.chains.summarize import load_summarize_chain from langchain_community.llms.fake import FakeListLLM llm = FakeListLLM() chain = load_summarize_chain(llm, chain_type="map_reduce") chain.run(docs) Help us out by providing feedback on this documentation page:
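One caveat about the summarization snippet above: `FakeListLLM` is a test stub that replays canned responses, and its `responses` field is normally required, so the bare `FakeListLLM()` call would typically fail validation. A hedged correction (the response text here is made up purely for illustration):

```python
from langchain.chains.summarize import load_summarize_chain
from langchain_community.llms.fake import FakeListLLM

# FakeListLLM replays the canned responses in order (cycling if needed),
# which is enough to exercise the map_reduce summarization chain offline.
llm = FakeListLLM(responses=["A short test document listing the numbers 1, 2 and 3."])

chain = load_summarize_chain(llm, chain_type="map_reduce")
chain.run(docs)
```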
https://python.langchain.com/docs/integrations/document_loaders/iugu/
This notebook covers how to load data from the `Iugu REST API` into a format that can be ingested into LangChain, along with example usage for vectorization. The Iugu API requires an access token, which can be found inside of the Iugu dashboard. This document loader also requires a `resource` option which defines what data you want to load. ``` # Create a vectorstore retriever from the loader# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more detailsindex = VectorstoreIndexCreator().from_loaders([iugu_loader])iugu_doc_retriever = index.vectorstore.as_retriever() ```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:12.491Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/iugu/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/iugu/", "description": "Iugu is a Brazilian services and software as", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4382", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"iugu\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:11 GMT", "etag": "W/\"e052fb147499ebb75f4a04edc1f88dfd\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::w7sgp-1713753551908-bad9a1fb71a5" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/iugu/", "property": "og:url" }, { "content": "Iugu | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Iugu is a Brazilian services and software as", "property": "og:description" } ], "title": "Iugu | 🦜️🔗 LangChain" }
This notebook covers how to load data from the Iugu REST API into a format that can be ingested into LangChain, along with example usage for vectorization. The Iugu API requires an access token, which can be found inside of the Iugu dashboard. This document loader also requires a resource option which defines what data you want to load. # Create a vectorstore retriever from the loader # see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more details index = VectorstoreIndexCreator().from_loaders([iugu_loader]) iugu_doc_retriever = index.vectorstore.as_retriever()
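The snippet above builds an index from an `iugu_loader` that is never constructed on this page. A minimal sketch, assuming the `IuguLoader` class, an illustrative "charges" resource, and the IUGU_API_TOKEN environment variable for authentication:

```python
from langchain.indexes import VectorstoreIndexCreator
from langchain_community.document_loaders import IuguLoader

# "charges" is only an example resource name; the API token may be passed
# explicitly or picked up from the IUGU_API_TOKEN environment variable.
iugu_loader = IuguLoader("charges")

index = VectorstoreIndexCreator().from_loaders([iugu_loader])
iugu_doc_retriever = index.vectorstore.as_retriever()
```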
https://python.langchain.com/docs/integrations/document_loaders/lakefs/
## lakeFS > [lakeFS](https://docs.lakefs.io/) provides scalable version control over the data lake, and uses Git-like semantics to create and access those versions. This notebooks covers how to load document objects from a `lakeFS` path (whether it’s an object or a prefix). ## Initializing the lakeFS loader[​](#initializing-the-lakefs-loader "Direct link to Initializing the lakeFS loader") Replace `ENDPOINT`, `LAKEFS_ACCESS_KEY`, and `LAKEFS_SECRET_KEY` values with your own. ``` from langchain_community.document_loaders import LakeFSLoader ``` ``` ENDPOINT = ""LAKEFS_ACCESS_KEY = ""LAKEFS_SECRET_KEY = ""lakefs_loader = LakeFSLoader( lakefs_access_key=LAKEFS_ACCESS_KEY, lakefs_secret_key=LAKEFS_SECRET_KEY, lakefs_endpoint=ENDPOINT,) ``` ## Specifying a path[​](#specifying-a-path "Direct link to Specifying a path") You can specify a prefix or a complete object path to control which files to load. Specify the repository, reference (branch, commit id, or tag), and path in the corresponding `REPO`, `REF`, and `PATH` to load the documents from: ``` REPO = ""REF = ""PATH = ""lakefs_loader.set_repo(REPO)lakefs_loader.set_ref(REF)lakefs_loader.set_path(PATH)docs = lakefs_loader.load()docs ``` * * * #### Help us out by providing feedback on this documentation page:
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:12.629Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/lakefs/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/lakefs/", "description": "lakeFS provides scalable version control", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4381", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"lakefs\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:12 GMT", "etag": "W/\"1dfb104c2faeefcb13d066adc8389bc6\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::j6fmw-1713753552220-6dfb057ca70b" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/lakefs/", "property": "og:url" }, { "content": "lakeFS | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "lakeFS provides scalable version control", "property": "og:description" } ], "title": "lakeFS | 🦜️🔗 LangChain" }
lakeFS lakeFS provides scalable version control over the data lake, and uses Git-like semantics to create and access those versions. This notebooks covers how to load document objects from a lakeFS path (whether it’s an object or a prefix). Initializing the lakeFS loader​ Replace ENDPOINT, LAKEFS_ACCESS_KEY, and LAKEFS_SECRET_KEY values with your own. from langchain_community.document_loaders import LakeFSLoader ENDPOINT = "" LAKEFS_ACCESS_KEY = "" LAKEFS_SECRET_KEY = "" lakefs_loader = LakeFSLoader( lakefs_access_key=LAKEFS_ACCESS_KEY, lakefs_secret_key=LAKEFS_SECRET_KEY, lakefs_endpoint=ENDPOINT, ) Specifying a path​ You can specify a prefix or a complete object path to control which files to load. Specify the repository, reference (branch, commit id, or tag), and path in the corresponding REPO, REF, and PATH to load the documents from: REPO = "" REF = "" PATH = "" lakefs_loader.set_repo(REPO) lakefs_loader.set_ref(REF) lakefs_loader.set_path(PATH) docs = lakefs_loader.load() docs Help us out by providing feedback on this documentation page:
https://python.langchain.com/docs/integrations/document_loaders/mastodon/
## Mastodon > [Mastodon](https://joinmastodon.org/) is a federated social media and social networking service. This loader fetches the text from the “toots” of a list of `Mastodon` accounts, using the `Mastodon.py` Python package. Public accounts can the queried by default without any authentication. If non-public accounts or instances are queried, you have to register an application for your account which gets you an access token, and set that token and your account’s API base URL. Then you need to pass in the Mastodon account names you want to extract, in the `@account@instance` format. ``` from langchain_community.document_loaders import MastodonTootsLoader ``` ``` %pip install --upgrade --quiet Mastodon.py ``` ``` loader = MastodonTootsLoader( mastodon_accounts=["@Gargron@mastodon.social"], number_toots=50, # Default value is 100)# Or set up access information to use a Mastodon app.# Note that the access token can either be passed into# constructor or you can set the environment "MASTODON_ACCESS_TOKEN".# loader = MastodonTootsLoader(# access_token="<ACCESS TOKEN OF MASTODON APP>",# api_base_url="<API BASE URL OF MASTODON APP INSTANCE>",# mastodon_accounts=["@Gargron@mastodon.social"],# number_toots=50, # Default value is 100# ) ``` ``` documents = loader.load()for doc in documents[:3]: print(doc.page_content) print("=" * 80) ``` ``` <p>It is tough to leave this behind and go back to reality. And some people live here! I’m sure there are downsides but it sounds pretty good to me right now.</p>================================================================================<p>I wish we could stay here a little longer, but it is time to go home 🥲</p>================================================================================<p>Last day of the honeymoon. And it’s <a href="https://mastodon.social/tags/caturday" class="mention hashtag" rel="tag">#<span>caturday</span></a>! This cute tabby came to the restaurant to beg for food and got some chicken.</p>================================================================================ ``` The toot texts (the documents’ `page_content`) is by default HTML as returned by the Mastodon API.
https://python.langchain.com/docs/integrations/document_loaders/mediawikidump/
## MediaWiki Dump

> [MediaWiki XML Dumps](https://www.mediawiki.org/wiki/Manual:Importing_XML_dumps) contain the content of a wiki (wiki pages with all their revisions), without the site-related data. An XML dump does not create a full backup of the wiki database; the dump does not contain user accounts, images, edit logs, etc.

This covers how to load a MediaWiki XML dump file into a document format that we can use downstream. It uses `mwxml` from `mediawiki-utilities` to parse the dump and `mwparserfromhell` from `earwig` to parse MediaWiki wikicode.

Dump files can be obtained with dumpBackup.php or on the Special:Statistics page of the wiki.

```
# mediawiki-utilities supports XML schema 0.11 in unmerged branches
%pip install --upgrade --quiet git+https://github.com/mediawiki-utilities/python-mwtypes@updates_schema_0.11
# mediawiki-utilities mwxml has a bug, fix PR pending
%pip install --upgrade --quiet git+https://github.com/gdedrouas/python-mwxml@xml_format_0.11
%pip install --upgrade --quiet mwparserfromhell
```

```
from langchain_community.document_loaders import MWDumpLoader
```

```
loader = MWDumpLoader(
    file_path="example_data/testmw_pages_current.xml",
    encoding="utf8",
    # namespaces = [0,2,3] Optional list to load only specific namespaces. Loads all namespaces by default.
    skip_redirects=True,  # will skip over pages that just redirect to other pages (or not if False)
    stop_on_error=False,  # will skip over pages that cause parsing errors (or not if False)
)
documents = loader.load()
print(f"You have {len(documents)} document(s) in your data ")
```

```
You have 177 document(s) in your data 
```

```
[Document(page_content='\t\n\t\n\tArtist\n\tReleased\n\tRecorded\n\tLength\n\tLabel\n\tProducer', metadata={'source': 'Album'}),
 Document(page_content='{| class="article-table plainlinks" style="width:100%;"\n|- style="font-size:18px;"\n! style="padding:0px;" | Template documentation\n|-\n| Note: portions of the template sample may not be visible without values provided.\n|-\n| View or edit this documentation. (About template documentation)\n|-\n| Editors can experiment in this template\'s [ sandbox] and [ test case] pages.\n|}Category:Documentation templates', metadata={'source': 'Documentation'}),
 Document(page_content='Description\nThis template is used to insert descriptions on template pages.\n\nSyntax\nAdd <noinclude></noinclude> at the end of the template page.\n\nAdd <noinclude></noinclude> to transclude an alternative page from the /doc subpage.\n\nUsage\n\nOn the Template page\nThis is the normal format when used:\n\nTEMPLATE CODE\n<includeonly>Any categories to be inserted into articles by the template</includeonly>\n<noinclude>{{Documentation}}</noinclude>\n\nIf your template is not a completed div or table, you may need to close the tags just before {{Documentation}} is inserted (within the noinclude tags).\n\nA line break right before {{Documentation}} can also be useful as it helps prevent the documentation template "running into" previous code.\n\nOn the documentation page\nThe documentation page is usually located on the /doc subpage for a template, but a different page can be specified with the first parameter of the template (see Syntax).\n\nNormally, you will want to write something like the following on the documentation page:\n\n==Description==\nThis template is used to do something.\n\n==Syntax==\nType <code>{{t|templatename}}</code> somewhere.\n\n==Samples==\n<code><nowiki>{{templatename|input}}</nowiki></code> \n\nresults in...\n\n{{templatename|input}}\n\n<includeonly>Any categories for the template itself</includeonly>\n<noinclude>[[Category:Template documentation]]</noinclude>\n\nUse any or all of the above description/syntax/sample output sections. You may also want to add "see also" or other sections.\n\nNote that the above example also uses the Template:T template.\n\nCategory:Documentation templatesCategory:Template documentation', metadata={'source': 'Documentation/doc'}),
 Document(page_content='Description\nA template link with a variable number of parameters (0-20).\n\nSyntax\n \n\nSource\nImproved version not needing t/piece subtemplate developed on Templates wiki see the list of authors. Copied here via CC-By-SA 3.0 license.\n\nExample\n\nCategory:General wiki templates\nCategory:Template documentation', metadata={'source': 'T/doc'}),
 Document(page_content='\t\n\t\t \n\t\n\t\t Aliases\n\t Relatives\n\t Affiliation\n Occupation\n \n Biographical information\n Marital status\n \tDate of birth\n Place of birth\n Date of death\n Place of death\n \n Physical description\n Species\n Gender\n Height\n Weight\n Eye color\n\t\n Appearances\n Portrayed by\n Appears in\n Debut\n ', metadata={'source': 'Character'})]
```
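The commented-out `namespaces` argument above restricts loading to specific namespaces. As an illustrative sketch, assuming namespace `0` is the main article namespace (the usual MediaWiki default), you could load only regular articles like this:

```
loader_main_only = MWDumpLoader(
    file_path="example_data/testmw_pages_current.xml",
    encoding="utf8",
    namespaces=[0],  # only the main (article) namespace
    skip_redirects=True,
    stop_on_error=False,
)
main_docs = loader_main_only.load()
print(f"Loaded {len(main_docs)} article-namespace document(s)")
```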
https://python.langchain.com/docs/integrations/document_loaders/llmsherpa/
## LLM Sherpa

This notebook covers how to use `LLM Sherpa` to load files of many types. `LLM Sherpa` supports different file formats including DOCX, PPTX, HTML, TXT, and XML.

`LLMSherpaFileLoader` uses LayoutPDFReader, which is part of the LLMSherpa library. This tool is designed to parse PDFs while preserving their layout information, which is often lost when using most PDF to text parsers.

Here are some key features of LayoutPDFReader:

* It can identify and extract sections and subsections along with their levels.
* It combines lines to form paragraphs.
* It can identify links between sections and paragraphs.
* It can extract tables along with the section the tables are found in.
* It can identify and extract lists and nested lists.
* It can join content spread across pages.
* It can remove repeating headers and footers.
* It can remove watermarks.

Check the [llmsherpa](https://llmsherpa.readthedocs.io/en/latest/) documentation.

`INFO: this library can fail with some PDF files, so use it with caution.`

```
# Install package
# !pip install --upgrade --quiet llmsherpa
```

## LLMSherpaFileLoader[​](#llmsherpafileloader "Direct link to LLMSherpaFileLoader")

Under the hood, LLMSherpaFileLoader defines several strategies to load file content: \[“sections”, “chunks”, “html”, “text”\]. Set up [nlm-ingestor](https://github.com/nlmatics/nlm-ingestor) to get your own `llmsherpa_api_url`, or use the default.

### sections strategy: return the file parsed into sections[​](#sections-strategy-return-the-file-parsed-into-sections "Direct link to sections strategy: return the file parsed into sections")

```
from langchain_community.document_loaders.llmsherpa import LLMSherpaFileLoader

loader = LLMSherpaFileLoader(
    file_path="https://arxiv.org/pdf/2402.14207.pdf",
    new_indent_parser=True,
    apply_ocr=True,
    strategy="sections",
    llmsherpa_api_url="http://localhost:5010/api/parseDocument?renderFormat=all",
)
docs = loader.load()
```

```
Document(page_content='Abstract\nWe study how to apply large language models to write grounded and organized long-form articles from scratch, with comparable breadth and depth to Wikipedia pages.\nThis underexplored problem poses new challenges at the pre-writing stage, including how to research the topic and prepare an outline prior to writing.\nWe propose STORM, a writing system for the Synthesis of Topic Outlines through\nReferences\nFull-length Article\nTopic\nOutline\n2022 Winter Olympics\nOpening Ceremony\nResearch via Question Asking\nRetrieval and Multi-perspective Question Asking.\nSTORM models the pre-writing stage by\nLLM\n(1) discovering diverse perspectives in researching the given topic, (2) simulating conversations where writers carrying different perspectives pose questions to a topic expert grounded on trusted Internet sources, (3) curating the collected information to create an outline.\nFor evaluation, we curate FreshWiki, a dataset of recent high-quality Wikipedia articles, and formulate outline assessments to evaluate the pre-writing stage.\nWe further gather feedback from experienced Wikipedia editors.\nCompared to articles generated by an outlinedriven retrieval-augmented baseline, more of STORM’s articles are deemed to be organized (by a 25% absolute increase) and broad in coverage (by 10%).\nThe expert feedback also helps identify new challenges for generating grounded long articles, such as source bias transfer and over-association of unrelated facts.\n1. Can you provide any information about the transportation arrangements for the opening ceremony?\nLLM\n2. Can you provide any information about the budget for the 2022 Winter Olympics opening ceremony?…\nLLM- Role1\nLLM- Role2\nLLM- Role1', metadata={'source': 'https://arxiv.org/pdf/2402.14207.pdf', 'section_number': 1, 'section_title': 'Abstract'})
```

### chunks strategy: return the file parsed into chunks[​](#chunks-strategy-return-the-file-parsed-into-chunks "Direct link to chunks strategy: return the file parsed into chunks")

```
from langchain_community.document_loaders.llmsherpa import LLMSherpaFileLoader

loader = LLMSherpaFileLoader(
    file_path="https://arxiv.org/pdf/2402.14207.pdf",
    new_indent_parser=True,
    apply_ocr=True,
    strategy="chunks",
    llmsherpa_api_url="http://localhost:5010/api/parseDocument?renderFormat=all",
)
docs = loader.load()
```

```
Document(page_content='Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models\nStanford University {shaoyj, yuchengj, tkanell, peterxu, okhattab}@stanford.edu lam@cs.stanford.edu', metadata={'source': 'https://arxiv.org/pdf/2402.14207.pdf', 'chunk_number': 1, 'chunk_type': 'para'})
```

### html strategy: return the file as one html document[​](#html-strategy-return-the-file-as-one-html-document "Direct link to html strategy: return the file as one html document")

```
from langchain_community.document_loaders.llmsherpa import LLMSherpaFileLoader

loader = LLMSherpaFileLoader(
    file_path="https://arxiv.org/pdf/2402.14207.pdf",
    new_indent_parser=True,
    apply_ocr=True,
    strategy="html",
    llmsherpa_api_url="http://localhost:5010/api/parseDocument?renderFormat=all",
)
docs = loader.load()
```

```
docs[0].page_content[:400]
```

```
'<html><h1>Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models</h1><table><th><td colSpan=1>Yijia Shao</td><td colSpan=1>Yucheng Jiang</td><td colSpan=1>Theodore A. Kanell</td><td colSpan=1>Peter Xu</td></th><tr><td colSpan=1></td><td colSpan=1>Omar Khattab</td><td colSpan=1>Monica S. Lam</td><td colSpan=1></td></tr></table><p>Stanford University {shaoyj, yuchengj, '
```

### text strategy: return the file as one text document[​](#text-strategy-return-the-file-as-one-text-document "Direct link to text strategy: return the file as one text document")

```
from langchain_community.document_loaders.llmsherpa import LLMSherpaFileLoader

loader = LLMSherpaFileLoader(
    file_path="https://arxiv.org/pdf/2402.14207.pdf",
    new_indent_parser=True,
    apply_ocr=True,
    strategy="text",
    llmsherpa_api_url="http://localhost:5010/api/parseDocument?renderFormat=all",
)
docs = loader.load()
```

```
docs[0].page_content[:400]
```

```
'Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models\n | Yijia Shao | Yucheng Jiang | Theodore A. Kanell | Peter Xu\n | --- | --- | --- | ---\n | | Omar Khattab | Monica S. Lam | \n\nStanford University {shaoyj, yuchengj, tkanell, peterxu, okhattab}@stanford.edu lam@cs.stanford.edu\nAbstract\nWe study how to apply large language models to write grounded and organized long'
```
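Since the chunks strategy attaches a `chunk_type` to each document’s metadata (visible in the sample output above), you can filter the results after loading. A small sketch, assuming `docs` was produced with `strategy="chunks"`:

```
# Keep only paragraph chunks; other chunk types (e.g. tables) are dropped.
para_docs = [doc for doc in docs if doc.metadata.get("chunk_type") == "para"]
print(f"{len(para_docs)} paragraph chunk(s) out of {len(docs)} total")
```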
https://python.langchain.com/docs/integrations/document_loaders/merge_doc/
## Merge Documents Loader

Merge the documents returned from a set of specified data loaders.

```
from langchain_community.document_loaders import WebBaseLoader

loader_web = WebBaseLoader(
    "https://github.com/basecamp/handbook/blob/master/37signals-is-you.md"
)
```

```
from langchain_community.document_loaders import PyPDFLoader

loader_pdf = PyPDFLoader("../MachineLearning-Lecture01.pdf")
```

```
from langchain_community.document_loaders.merge import MergedDataLoader

loader_all = MergedDataLoader(loaders=[loader_web, loader_pdf])
```

```
docs_all = loader_all.load()
```
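`MergedDataLoader` follows the standard document loader interface, so you can also stream the merged documents instead of materializing them all at once. A minimal sketch, assuming the usual `lazy_load` method available on LangChain document loaders:

```
# Iterate over documents from both loaders without holding them all in memory.
for doc in loader_all.lazy_load():
    print(doc.metadata.get("source"))
```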
https://python.langchain.com/docs/integrations/document_loaders/mhtml/
## mhtml

MHTML is used both for emails and for archived webpages. MHTML, sometimes referred to as MHT, stands for MIME HTML and is a single file in which an entire webpage is archived. When a webpage is saved in MHTML format, the file contains the page’s HTML code along with its images, audio files, Flash animations, etc.

```
from langchain_community.document_loaders import MHTMLLoader
```

```
# Create a new loader object for the MHTML file
loader = MHTMLLoader(
    file_path="../../../../../../tests/integration_tests/examples/example.mht"
)

# Load the document from the file
documents = loader.load()

# Print the documents to see the results
for doc in documents:
    print(doc)
```

```
page_content='LangChain\nLANG CHAIN 🦜️🔗Official Home Page\xa0\n\n\n\n\n\n\n\nIntegrations\n\n\n\nFeatures\n\n\n\n\nBlog\n\n\n\nConceptual Guide\n\n\n\n\nPython Repo\n\n\nJavaScript Repo\n\n\n\nPython Documentation \n\n\nJavaScript Documentation\n\n\n\n\nPython ChatLangChain \n\n\nJavaScript ChatLangChain\n\n\n\n\nDiscord \n\n\nTwitter\n\n\n\n\nIf you have any comments about our WEB page, you can \nwrite us at the address shown above. However, due to \nthe limited number of personnel in our corporate office, we are unable to \nprovide a direct response.\n\nCopyright © 2023-2023 LangChain Inc.\n\n\n' metadata={'source': '../../../../../../tests/integration_tests/examples/example.mht', 'title': 'LangChain'}
```
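As the output shows, the loader also records the archived page’s title and the source path in each document’s metadata; a small follow-up sketch:

```
# Inspect the metadata captured for each loaded document.
for doc in documents:
    print(doc.metadata["title"], "-", doc.metadata["source"])
```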
https://python.langchain.com/docs/integrations/document_loaders/microsoft_onedrive/
## Microsoft OneDrive

> [Microsoft OneDrive](https://en.wikipedia.org/wiki/OneDrive) (formerly `SkyDrive`) is a file hosting service operated by Microsoft.

This notebook covers how to load documents from `OneDrive`. Currently, only docx, doc, and pdf files are supported.

## Prerequisites[​](#prerequisites "Direct link to Prerequisites")

1. Register an application following the [Microsoft identity platform](https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app) instructions.
2. When registration finishes, the Azure portal displays the app registration’s Overview pane. You see the Application (client) ID. Also called the `client ID`, this value uniquely identifies your application in the Microsoft identity platform.
3. During the steps you will be following at **item 1**, you can set the redirect URI as `http://localhost:8000/callback`
4. During the steps you will be following at **item 1**, generate a new password (`client_secret`) under the Application Secrets section.
5. Follow the instructions at this [document](https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-configure-app-expose-web-apis#add-a-scope) to add the following `SCOPES` (`offline_access` and `Files.Read.All`) to your application.
6. Visit the [Graph Explorer Playground](https://developer.microsoft.com/en-us/graph/graph-explorer) to obtain your `OneDrive ID`. The first step is to ensure you are logged in with the account associated with your OneDrive account. Then you need to make a request to `https://graph.microsoft.com/v1.0/me/drive` and the response will return a payload with a field `id` that holds the ID of your OneDrive account.
7. You need to install the o365 package using the command `pip install o365`.
8. At the end of the steps you must have the following values:

* `CLIENT_ID`
* `CLIENT_SECRET`
* `DRIVE_ID`

## 🧑 Instructions for ingesting your documents from OneDrive[​](#instructions-for-ingesting-your-documents-from-onedrive "Direct link to 🧑 Instructions for ingesting your documents from OneDrive")

### 🔑 Authentication[​](#authentication "Direct link to 🔑 Authentication")

By default, the `OneDriveLoader` expects the values of `CLIENT_ID` and `CLIENT_SECRET` to be stored as environment variables named `O365_CLIENT_ID` and `O365_CLIENT_SECRET` respectively. You could pass those environment variables through a `.env` file at the root of your application or using the following command in your script.

```
os.environ['O365_CLIENT_ID'] = "YOUR CLIENT ID"
os.environ['O365_CLIENT_SECRET'] = "YOUR CLIENT SECRET"
```

This loader uses an authentication flow called [_on behalf of a user_](https://learn.microsoft.com/en-us/graph/auth-v2-user?context=graph%2Fapi%2F1.0&view=graph-rest-1.0). It is a two-step authentication with user consent. When you instantiate the loader, it will print a URL that the user must visit to give consent to the app on the required permissions. The user must then visit this URL and give consent to the application. Then the user must copy the resulting page URL and paste it back on the console. The method will then return True if the login attempt was successful.

```
from langchain_community.document_loaders.onedrive import OneDriveLoader

loader = OneDriveLoader(drive_id="YOUR DRIVE ID")
```

Once the authentication has been done, the loader will store a token (`o365_token.txt`) in the `~/.credentials/` folder. This token can be used later to authenticate without the copy/paste steps explained earlier. To use this token for authentication, you need to set the `auth_with_token` parameter to True when instantiating the loader.

```
from langchain_community.document_loaders.onedrive import OneDriveLoader

loader = OneDriveLoader(drive_id="YOUR DRIVE ID", auth_with_token=True)
```

### 🗂️ Documents loader[​](#documents-loader "Direct link to 🗂️ Documents loader")

#### 📑 Loading documents from a OneDrive Directory[​](#loading-documents-from-a-onedrive-directory "Direct link to 📑 Loading documents from a OneDrive Directory")

`OneDriveLoader` can load documents from a specific folder within your OneDrive. For instance, suppose you want to load all documents that are stored in the `Documents/clients` folder within your OneDrive.

```
from langchain_community.document_loaders.onedrive import OneDriveLoader

loader = OneDriveLoader(drive_id="YOUR DRIVE ID", folder_path="Documents/clients", auth_with_token=True)
documents = loader.load()
```

#### 📑 Loading documents from a list of Documents IDs[​](#loading-documents-from-a-list-of-documents-ids "Direct link to 📑 Loading documents from a list of Documents IDs")

Another possibility is to provide a list of `object_id` values, one for each document you want to load. For that, you will need to query the [Microsoft Graph API](https://developer.microsoft.com/en-us/graph/graph-explorer) to find all the document IDs that you are interested in. This [link](https://learn.microsoft.com/en-us/graph/api/resources/onedrive?view=graph-rest-1.0#commonly-accessed-resources) provides a list of endpoints that will be helpful to retrieve the document IDs. For instance, to retrieve information about all objects that are stored at the root of the Documents folder, you need to make a request to `https://graph.microsoft.com/v1.0/drives/{YOUR DRIVE ID}/root/children`. Once you have the list of IDs that you are interested in, you can instantiate the loader with the following parameters.

```
from langchain_community.document_loaders.onedrive import OneDriveLoader

loader = OneDriveLoader(drive_id="YOUR DRIVE ID", object_ids=["ID_1", "ID_2"], auth_with_token=True)
documents = loader.load()
```
https://python.langchain.com/docs/integrations/document_loaders/microsoft_excel/
## Microsoft Excel

The `UnstructuredExcelLoader` is used to load `Microsoft Excel` files. The loader works with both `.xlsx` and `.xls` files. The page content will be the raw text of the Excel file. If you use the loader in `"elements"` mode, an HTML representation of the Excel file will be available in the document metadata under the `text_as_html` key.

```
from langchain_community.document_loaders import UnstructuredExcelLoader
```

```
loader = UnstructuredExcelLoader("example_data/stanley-cups.xlsx", mode="elements")
docs = loader.load()
docs[0]
```

```
Document(page_content='\n \n \n Team\n Location\n Stanley Cups\n \n \n Blues\n STL\n 1\n \n \n Flyers\n PHI\n 2\n \n \n Maple Leafs\n TOR\n 13\n \n \n', metadata={'source': 'example_data/stanley-cups.xlsx', 'filename': 'stanley-cups.xlsx', 'file_directory': 'example_data', 'filetype': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', 'page_number': 1, 'page_name': 'Stanley Cups', 'text_as_html': '<table border="1" class="dataframe">\n <tbody>\n <tr>\n <td>Team</td>\n <td>Location</td>\n <td>Stanley Cups</td>\n </tr>\n <tr>\n <td>Blues</td>\n <td>STL</td>\n <td>1</td>\n </tr>\n <tr>\n <td>Flyers</td>\n <td>PHI</td>\n <td>2</td>\n </tr>\n <tr>\n <td>Maple Leafs</td>\n <td>TOR</td>\n <td>13</td>\n </tr>\n </tbody>\n</table>', 'category': 'Table'})
```

## Using Azure AI Document Intelligence[​](#using-azure-ai-document-intelligence "Direct link to Using Azure AI Document Intelligence")

> [Azure AI Document Intelligence](https://aka.ms/doc-intelligence) (formerly known as `Azure Form Recognizer`) is a machine-learning based service that extracts texts (including handwriting), tables, document structures (e.g., titles, section headings, etc.) and key-value pairs from digital or scanned PDFs, images, Office and HTML files.
>
> Document Intelligence supports `PDF`, `JPEG/JPG`, `PNG`, `BMP`, `TIFF`, `HEIF`, `DOCX`, `XLSX`, `PPTX` and `HTML`.

This current implementation of a loader using `Document Intelligence` can incorporate content page-wise and turn it into LangChain documents. The default output format is markdown, which can be easily chained with `MarkdownHeaderTextSplitter` for semantic document chunking. You can also use `mode="single"` or `mode="page"` to return pure texts in a single page or document split by page.

### Prerequisite[​](#prerequisite "Direct link to Prerequisite")

An Azure AI Document Intelligence resource in one of the 3 preview regions: **East US**, **West US2**, **West Europe** - follow [this document](https://learn.microsoft.com/azure/ai-services/document-intelligence/create-document-intelligence-resource?view=doc-intel-4.0.0) to create one if you don’t have one. You will be passing `<endpoint>` and `<key>` as parameters to the loader.

```
%pip install --upgrade --quiet langchain langchain-community azure-ai-documentintelligence
```

```
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader

file_path = "<filepath>"
endpoint = "<endpoint>"
key = "<key>"
loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint=endpoint, api_key=key, file_path=file_path, api_model="prebuilt-layout"
)
documents = loader.load()
```
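As mentioned above, the Document Intelligence loader’s default markdown output can be chained with `MarkdownHeaderTextSplitter` for semantic chunking. A minimal sketch; the header levels listed here are just an illustrative choice:

```
from langchain.text_splitter import MarkdownHeaderTextSplitter

# Split the loader's markdown output on its headings.
headers_to_split_on = [
    ("#", "Header 1"),
    ("##", "Header 2"),
    ("###", "Header 3"),
]
splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
chunks = splitter.split_text(documents[0].page_content)
```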
https://python.langchain.com/docs/integrations/document_loaders/college_confidential/
This covers how to load `College Confidential` webpages into a document format that we can use downstream. ``` [Document(page_content='\n\n\n\n\n\n\n\nA68FEB02-9D19-447C-B8BC-818149FD6EAF\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Media (2)\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nE45B8B13-33D4-450E-B7DB-F66EFE8F2097\n\n\n\n\n\n\n\n\n\nE45B8B13-33D4-450E-B7DB-F66EFE8F2097\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAbout Brown\n\n\n\n\n\n\nBrown University Overview\nBrown University is a private, nonprofit school in the urban setting of Providence, Rhode Island. Brown was founded in 1764 and the school currently enrolls around 10,696 students a year, including 7,349 undergraduates. Brown provides on-campus housing for students. Most students live in off campus housing.\n📆 Mark your calendar! January 5, 2023 is the final deadline to submit an application for the Fall 2023 semester. \nThere are many ways for students to get involved at Brown! \nLove music or performing? Join a campus band, sing in a chorus, or perform with one of the school\'s theater groups.\nInterested in journalism or communications? Brown students can write for the campus newspaper, host a radio show or be a producer for the student-run television channel.\nInterested in joining a fraternity or sorority? Brown has fraternities and sororities.\nPlanning to play sports? Brown has many options for athletes. See them all and learn more about life at Brown on the Student Life page.\n\n\n\n2022 Brown Facts At-A-Glance\n\n\n\n\n\nAcademic Calendar\nOther\n\n\nOverall Acceptance Rate\n6%\n\n\nEarly Decision Acceptance Rate\n16%\n\n\nEarly Action Acceptance Rate\nEA not offered\n\n\nApplicants Submitting SAT scores\n51%\n\n\nTuition\n$62,680\n\n\nPercent of Need Met\n100%\n\n\nAverage First-Year Financial Aid Package\n$59,749\n\n\n\n\nIs Brown a Good School?\n\nDifferent people have different ideas about what makes a "good" school. Some factors that can help you determine what a good school for you might be include admissions criteria, acceptance rate, tuition costs, and more.\nLet\'s take a look at these factors to get a clearer sense of what Brown offers and if it could be the right college for you.\nBrown Acceptance Rate 2022\nIt is extremely difficult to get into Brown. Around 6% of applicants get into Brown each year. In 2022, just 2,568 out of the 46,568 students who applied were accepted.\nRetention and Graduation Rates at Brown\nRetention refers to the number of students that stay enrolled at a school over time. This is a way to get a sense of how satisfied students are with their school experience, and if they have the support necessary to succeed in college. \nApproximately 98% of first-year, full-time undergrads who start at Browncome back their sophomore year. 95% of Brown undergrads graduate within six years. The average six-year graduation rate for U.S. colleges and universities is 61% for public schools, and 67% for private, non-profit schools.\nJob Outcomes for Brown Grads\nJob placement stats are a good resource for understanding the value of a degree from Brown by providing a look on how job placement has gone for other grads. \nCheck with Brown directly, for information on any information on starting salaries for recent grads.\nBrown\'s Endowment\nAn endowment is the total value of a school\'s investments, donations, and assets. 
Endowment is not necessarily an indicator of the quality of a school, but it can give you a sense of how much money a college can afford to invest in expanding programs, improving facilities, and support students. \nAs of 2022, the total market value of Brown University\'s endowment was $4.7 billion. The average college endowment was $905 million in 2021. The school spends $34,086 for each full-time student enrolled. \nTuition and Financial Aid at Brown\nTuition is another important factor when choose a college. Some colleges may have high tuition, but do a better job at meeting students\' financial need.\nBrown meets 100% of the demonstrated financial need for undergraduates. The average financial aid package for a full-time, first-year student is around $59,749 a year. \nThe average student debt for graduates in the class of 2022 was around $24,102 per student, not including those with no debt. For context, compare this number with the average national debt, which is around $36,000 per borrower. \nThe 2023-2024 FAFSA Opened on October 1st, 2022\nSome financial aid is awarded on a first-come, first-served basis, so fill out the FAFSA as soon as you can. Visit the FAFSA website to apply for student aid. Remember, the first F in FAFSA stands for FREE! You should never have to pay to submit the Free Application for Federal Student Aid (FAFSA), so be very wary of anyone asking you for money.\nLearn more about Tuition and Financial Aid at Brown.\nBased on this information, does Brown seem like a good fit? Remember, a school that is perfect for one person may be a terrible fit for someone else! So ask yourself: Is Brown a good school for you?\nIf Brown University seems like a school you want to apply to, click the heart button to save it to your college list.\n\nStill Exploring Schools?\nChoose one of the options below to learn more about Brown:\nAdmissions\nStudent Life\nAcademics\nTuition & Aid\nBrown Community Forums\nThen use the college admissions predictor to take a data science look at your chances of getting into some of the best colleges and universities in the U.S.\nWhere is Brown?\nBrown is located in the urban setting of Providence, Rhode Island, less than an hour from Boston. \nIf you would like to see Brown for yourself, plan a visit. The best way to reach campus is to take Interstate 95 to Providence, or book a flight to the nearest airport, T.F. Green.\nYou can also take a virtual campus tour to get a sense of what Brown and Providence are like without leaving home.\nConsidering Going to School in Rhode Island?\nSee a full list of colleges in Rhode Island and save your favorites to your college list.\n\n\n\nCollege Info\n\n\n\n\n\n\n\n\n\n Providence, RI 02912\n \n\n\n\n Campus Setting: Urban\n \n\n\n\n\n\n\n\n (401) 863-2378\n \n\n Website\n \n\n Virtual Tour\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nBrown Application Deadline\n\n\n\nFirst-Year Applications are Due\n\nJan 5\n\nTransfer Applications are Due\n\nMar 1\n\n\n\n \n The deadline for Fall first-year applications to Brown is \n Jan 5. \n \n \n \n\n \n The deadline for Fall transfer applications to Brown is \n Mar 1. 
\n \n \n \n\n \n Check the school website \n for more information about deadlines for specific programs or special admissions programs\n \n \n\n\n\n\n\n\nBrown ACT Scores\n\n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nACT Range\n\n\n \n 33 - 35\n \n \n\n\n\nEstimated Chance of Acceptance by ACT Score\n\n\nACT Score\nEstimated Chance\n\n\n35 and Above\nGood\n\n\n33 to 35\nAvg\n\n\n33 and Less\nLow\n\n\n\n\n\n\nStand out on your college application\n\n• Qualify for scholarships\n• Most students who retest improve their score\n\nSponsored by ACT\n\n\n Take the Next ACT Test\n \n\n\n\n\n\nBrown SAT Scores\n\n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nComposite SAT Range\n\n\n \n 720 - 770\n \n \n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nMath SAT Range\n\n\n \n Not available\n \n \n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nReading SAT Range\n\n\n \n 740 - 800\n \n \n\n\n\n\n\n\n Brown Tuition & Fees\n \n\n\n\nTuition & Fees\n\n\n\n $82,286\n \nIn State\n\n\n\n\n $82,286\n \nOut-of-State\n\n\n\n\n\n\n\nCost Breakdown\n\n\nIn State\n\n\nOut-of-State\n\n\n\n\nState Tuition\n\n\n\n $62,680\n \n\n\n\n $62,680\n \n\n\n\n\nFees\n\n\n\n $2,466\n \n\n\n\n $2,466\n \n\n\n\n\nHousing\n\n\n\n $15,840\n \n\n\n\n $15,840\n \n\n\n\n\nBooks\n\n\n\n $1,300\n \n\n\n\n $1,300\n \n\n\n\n\n\n Total (Before Financial Aid):\n \n\n\n\n $82,286\n \n\n\n\n $82,286\n \n\n\n\n\n\n\n\n\n\n\n\nStudent Life\n\n Wondering what life at Brown is like? There are approximately \n 10,696 students enrolled at \n Brown, \n including 7,349 undergraduate students and \n 3,347 graduate students.\n 96% percent of students attend school \n full-time, \n 6% percent are from RI and \n 94% percent of students are from other states.\n \n\n\n\n\n\n None\n \n\n\n\n\nUndergraduate Enrollment\n\n\n\n 96%\n \nFull Time\n\n\n\n\n 4%\n \nPart Time\n\n\n\n\n\n\n\n 94%\n \n\n\n\n\nResidency\n\n\n\n 6%\n \nIn State\n\n\n\n\n 94%\n \nOut-of-State\n\n\n\n\n\n\n\n Data Source: IPEDs and Peterson\'s Databases © 2022 Peterson\'s LLC All rights reserved\n \n', lookup_str='', metadata={'source': 'https://www.collegeconfidential.com/colleges/brown-university/'}, lookup_index=0)] ```
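For context, the Document above is the kind of output the loader itself returns. A minimal sketch of how it might be produced (assuming the `CollegeConfidentialLoader` from `langchain_community` and using the Brown University URL that appears in the Document's metadata):

```
from langchain_community.document_loaders import CollegeConfidentialLoader

# Point the loader at a College Confidential college page
loader = CollegeConfidentialLoader(
    "https://www.collegeconfidential.com/colleges/brown-university/"
)

# Each page is returned as a single LangChain Document
docs = loader.load()
print(docs[0].metadata)
```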
https://python.langchain.com/docs/integrations/document_loaders/concurrent/
## Concurrent Loader

Works just like the `GenericLoader`, but concurrently, for those who want to optimize their workflow.

```
from langchain_community.document_loaders import ConcurrentLoader

loader = ConcurrentLoader.from_filesystem("example_data/", glob="**/*.txt")
```
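For reference, a short usage sketch built on the snippet above (it assumes an `example_data/` directory containing `.txt` files, and that each loaded Document carries a `source` entry in its metadata):

```
from langchain_community.document_loaders import ConcurrentLoader

# Discover and load every .txt file under example_data/
loader = ConcurrentLoader.from_filesystem("example_data/", glob="**/*.txt")
docs = loader.load()

for doc in docs:
    # "source" is the conventional metadata key; check doc.metadata if it differs
    print(doc.metadata.get("source"), len(doc.page_content))
```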
https://python.langchain.com/docs/integrations/document_loaders/microsoft_onenote/
## Microsoft OneNote

This notebook covers how to load documents from `OneNote`.

## Prerequisites

1. Register an application following the [Microsoft identity platform](https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app) instructions.
2. When registration finishes, the Azure portal displays the app registration's Overview pane. You see the Application (client) ID. Also called the `client ID`, this value uniquely identifies your application in the Microsoft identity platform.
3. During the steps you will be following at **item 1**, you can set the redirect URI as `http://localhost:8000/callback`.
4. During the steps you will be following at **item 1**, generate a new password (`client_secret`) under the Application Secrets section.
5. Follow the instructions at this [document](https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-configure-app-expose-web-apis#add-a-scope) to add the following `SCOPES` (`Notes.Read`) to your application.
6. You need to install the msal and bs4 packages using the commands `pip install msal` and `pip install beautifulsoup4`.
7. At the end of the steps you must have the following values:
   * `CLIENT_ID`
   * `CLIENT_SECRET`

## 🧑 Instructions for ingesting your documents from OneNote

### 🔑 Authentication

By default, the `OneNoteLoader` expects the values of `CLIENT_ID` and `CLIENT_SECRET` to be stored as environment variables named `MS_GRAPH_CLIENT_ID` and `MS_GRAPH_CLIENT_SECRET` respectively. You could pass those environment variables through a `.env` file at the root of your application or set them in your script:

```
os.environ['MS_GRAPH_CLIENT_ID'] = "YOUR CLIENT ID"
os.environ['MS_GRAPH_CLIENT_SECRET'] = "YOUR CLIENT SECRET"
```

This loader uses an authentication flow called [_on behalf of a user_](https://learn.microsoft.com/en-us/graph/auth-v2-user?context=graph%2Fapi%2F1.0&view=graph-rest-1.0). It is a two-step authentication with user consent. When you instantiate the loader, it will print a URL that the user must visit to give consent to the app for the required permissions. The user must then visit this URL and give consent to the application. Then the user must copy the resulting page URL and paste it back into the console. The method will then return True if the login attempt was successful.

```
from langchain_community.document_loaders.onenote import OneNoteLoader

loader = OneNoteLoader(notebook_name="NOTEBOOK NAME", section_name="SECTION NAME", page_title="PAGE TITLE")
```

Once the authentication has been done, the loader will store a token (`onenote_graph_token.txt`) in the `~/.credentials/` folder. This token can be used later to authenticate without the copy/paste steps explained earlier. To use this token for authentication, you need to set the `auth_with_token` parameter to True when instantiating the loader.

```
from langchain_community.document_loaders.onenote import OneNoteLoader

loader = OneNoteLoader(notebook_name="NOTEBOOK NAME", section_name="SECTION NAME", page_title="PAGE TITLE", auth_with_token=True)
```

Alternatively, you can also pass the token directly to the loader. This is useful when you want to authenticate with a token that was generated by another application.
For instance, you can use the [Microsoft Graph Explorer](https://developer.microsoft.com/en-us/graph/graph-explorer) to generate a token and then pass it to the loader.

```
from langchain_community.document_loaders.onenote import OneNoteLoader

loader = OneNoteLoader(notebook_name="NOTEBOOK NAME", section_name="SECTION NAME", page_title="PAGE TITLE", access_token="TOKEN")
```

### 🗂️ Documents loader

#### 📑 Loading pages from a OneNote Notebook

`OneNoteLoader` can load pages from OneNote notebooks stored in OneDrive. You can specify any combination of `notebook_name`, `section_name`, and `page_title` to filter for pages under a specific notebook, under a specific section, or with a specific title, respectively. For instance, suppose you want to load all pages stored under a section called `Recipes` within any of your OneDrive notebooks.

```
from langchain_community.document_loaders.onenote import OneNoteLoader

loader = OneNoteLoader(section_name="Recipes", auth_with_token=True)
documents = loader.load()
```

#### 📑 Loading pages from a list of Page IDs

Another possibility is to provide a list of `object_ids` for the pages you want to load. For that, you will need to query the [Microsoft Graph API](https://developer.microsoft.com/en-us/graph/graph-explorer) to find all the document IDs that you are interested in. This [link](https://learn.microsoft.com/en-us/graph/onenote-get-content#page-collection) provides a list of endpoints that will be helpful for retrieving the document IDs. For instance, to retrieve information about all pages that are stored in your notebooks, you need to make a request to `https://graph.microsoft.com/v1.0/me/onenote/pages`. Once you have the list of IDs that you are interested in, you can instantiate the loader with the following parameters.

```
from langchain_community.document_loaders.onenote import OneNoteLoader

loader = OneNoteLoader(object_ids=["ID_1", "ID_2"], auth_with_token=True)
documents = loader.load()
```
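Whichever of the approaches above you use, `loader.load()` returns a list of LangChain Documents. A small inspection sketch (the exact metadata keys populated may vary by version, so treat them as an assumption and check `doc.metadata`):

```
for doc in documents:
    # Each OneNote page becomes one Document; preview the first 80 characters
    print(doc.metadata, "->", doc.page_content[:80])
```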
https://python.langchain.com/docs/integrations/document_loaders/conll-u/
## CoNLL-U

> [CoNLL-U](https://universaldependencies.org/format.html) is a revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:
> - Word lines containing the annotation of a word/token in 10 fields separated by single tab characters; see below.
> - Blank lines marking sentence boundaries.
> - Comment lines starting with hash (#).

This is an example of how to load a file in [CoNLL-U](https://universaldependencies.org/format.html) format. The whole file is treated as one document. The example data (`conllu.conllu`) is based on one of the standard UD/CoNLL-U examples.

```
from langchain_community.document_loaders import CoNLLULoader
```

```
loader = CoNLLULoader("example_data/conllu.conllu")
```

```
[Document(page_content='They buy and sell books.', metadata={'source': 'example_data/conllu.conllu'})]
```
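The Document shown above is what you get back from calling `load()` on the loader; for completeness, a minimal sketch of that step:

```
from langchain_community.document_loaders import CoNLLULoader

loader = CoNLLULoader("example_data/conllu.conllu")
docs = loader.load()  # the whole file is returned as a single Document
print(docs[0].page_content)
```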
https://python.langchain.com/docs/integrations/document_loaders/confluence/
## Confluence

> [Confluence](https://www.atlassian.com/software/confluence) is a wiki collaboration platform that saves and organizes all of the project-related material. `Confluence` is a knowledge base that primarily handles content management activities.

A loader for `Confluence` pages. This currently supports `username/api_key` and `OAuth2 login`. Additionally, on-prem installations also support `token` authentication.

Specify a list of `page_id`-s and/or a `space_key` to load the corresponding pages into Document objects; if both are specified, the union of both sets will be returned.

You can also specify a boolean `include_attachments` to include attachments. This is set to False by default; if set to True, all attachments will be downloaded and ConfluenceReader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: `PDF`, `PNG`, `JPEG/JPG`, `SVG`, `Word` and `Excel`.

Hint: `space_key` and `page_id` can both be found in the URL of a page in Confluence - [https://yoursite.atlassian.com/wiki/spaces/](https://yoursite.atlassian.com/wiki/spaces/)<space_key>/pages/<page_id>

Before using ConfluenceLoader make sure you have the latest version of the atlassian-python-api package installed:

```
%pip install --upgrade --quiet atlassian-python-api
```

## Examples

### Username and Password or Username and API Token (Atlassian Cloud only)

This example authenticates using either a username and password or, if you're connecting to an Atlassian Cloud hosted version of Confluence, a username and an API Token. You can generate an API token at: [https://id.atlassian.com/manage-profile/security/api-tokens](https://id.atlassian.com/manage-profile/security/api-tokens).

The `limit` parameter specifies how many documents will be retrieved in a single call, not how many documents will be retrieved in total. By default the code will return up to 1000 documents in batches of 50 documents. To control the total number of documents use the `max_pages` parameter. Please note that the maximum value for the `limit` parameter in the atlassian-python-api package is currently 100.

```
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://yoursite.atlassian.com/wiki", username="me", api_key="12345"
)
documents = loader.load(space_key="SPACE", include_attachments=True, limit=50)
```

### Personal Access Token (Server/On-Prem only)

This method is valid for the Data Center/Server on-prem edition only. For more information on how to generate a Personal Access Token (PAT) check the official Confluence documentation at: [https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html](https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html). When using a PAT you provide only the token value; you cannot provide a username. Please note that ConfluenceLoader will run under the permissions of the user that generated the PAT and will only be able to load documents that said user has access to.
```
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(url="https://yoursite.atlassian.com/wiki", token="12345")
documents = loader.load(
    space_key="SPACE", include_attachments=True, limit=50, max_pages=50
)
```
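As noted in the introduction, you can also target individual pages rather than a whole space by passing a list of page IDs. A sketch under the same credentials (the IDs below are placeholders, and the `page_ids` keyword is assumed to match the behaviour described above):

```
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://yoursite.atlassian.com/wiki", username="me", api_key="12345"
)

# Load only the listed pages; if space_key were also given, the union of both
# sets of pages would be returned, as described above.
documents = loader.load(page_ids=["123456", "7891011"], include_attachments=False)
```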
https://python.langchain.com/docs/integrations/document_loaders/microsoft_powerpoint/
## Microsoft PowerPoint

> [Microsoft PowerPoint](https://en.wikipedia.org/wiki/Microsoft_PowerPoint) is a presentation program by Microsoft.

This covers how to load `Microsoft PowerPoint` documents into a document format that we can use downstream.

```
from langchain_community.document_loaders import UnstructuredPowerPointLoader
```

```
loader = UnstructuredPowerPointLoader("example_data/fake-power-point.pptx")
```

```
[Document(page_content='Adding a Bullet Slide\n\nFind the bullet slide layout\n\nUse _TextFrame.text for first bullet\n\nUse _TextFrame.add_paragraph() for subsequent bullets\n\nHere is a lot of text!\n\nHere is some text in a text box!', metadata={'source': 'example_data/fake-power-point.pptx'})]
```

### Retain Elements

Under the hood, `Unstructured` creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `mode="elements"`.

```
loader = UnstructuredPowerPointLoader(
    "example_data/fake-power-point.pptx", mode="elements"
)
```

```
Document(page_content='Adding a Bullet Slide', lookup_str='', metadata={'source': 'example_data/fake-power-point.pptx'}, lookup_index=0)
```

## Using Azure AI Document Intelligence

> [Azure AI Document Intelligence](https://aka.ms/doc-intelligence) (formerly known as `Azure Form Recognizer`) is a machine-learning-based service that extracts texts (including handwriting), tables, document structures (e.g., titles, section headings, etc.) and key-value pairs from digital or scanned PDFs, images, Office and HTML files.
>
> Document Intelligence supports `PDF`, `JPEG/JPG`, `PNG`, `BMP`, `TIFF`, `HEIF`, `DOCX`, `XLSX`, `PPTX` and `HTML`.

This current implementation of a loader using `Document Intelligence` can incorporate content page-wise and turn it into LangChain documents. The default output format is markdown, which can be easily chained with `MarkdownHeaderTextSplitter` for semantic document chunking. You can also use `mode="single"` or `mode="page"` to return pure texts in a single page or document split by page.

## Prerequisite

An Azure AI Document Intelligence resource in one of the 3 preview regions: **East US**, **West US2**, **West Europe** - follow [this document](https://learn.microsoft.com/azure/ai-services/document-intelligence/create-document-intelligence-resource?view=doc-intel-4.0.0) to create one if you don't have one. You will be passing `<endpoint>` and `<key>` as parameters to the loader.

```
%pip install --upgrade --quiet langchain langchain-community azure-ai-documentintelligence
```

```
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader

file_path = "<filepath>"
endpoint = "<endpoint>"
key = "<key>"
loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint=endpoint, api_key=key, file_path=file_path, api_model="prebuilt-layout"
)
documents = loader.load()
```
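Since the Document Intelligence loader emits markdown by default, a natural next step is the chaining with `MarkdownHeaderTextSplitter` mentioned above. A minimal sketch (assuming the `langchain-text-splitters` package is installed and `documents` comes from the loader call above):

```
from langchain_text_splitters import MarkdownHeaderTextSplitter

# Split the markdown emitted by the loader on its headings to get
# semantically meaningful chunks for downstream indexing.
splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=[("#", "Header 1"), ("##", "Header 2")]
)
chunks = splitter.split_text(documents[0].page_content)
print(len(chunks))
```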
https://python.langchain.com/docs/integrations/document_loaders/microsoft_sharepoint/
## Microsoft SharePoint

> [Microsoft SharePoint](https://en.wikipedia.org/wiki/SharePoint) is a website-based collaboration system, developed by Microsoft, that uses workflow applications, “list” databases, and other web parts and security features to empower business teams to work together.

This notebook covers how to load documents from the [SharePoint Document Library](https://support.microsoft.com/en-us/office/what-is-a-document-library-3b5976dd-65cf-4c9e-bf5a-713c10ca2872). Currently, only docx, doc, and pdf files are supported.

## Prerequisites

1. Register an application with the [Microsoft identity platform](https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app) instructions.
2. When registration finishes, the Azure portal displays the app registration's Overview pane. You see the Application (client) ID. Also called the `client ID`, this value uniquely identifies your application in the Microsoft identity platform.
3. During the steps you will be following at **item 1**, you can set the redirect URI as `https://login.microsoftonline.com/common/oauth2/nativeclient`.
4. During the steps you will be following at **item 1**, generate a new password (`client_secret`) under the Application Secrets section.
5. Follow the instructions at this [document](https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-configure-app-expose-web-apis#add-a-scope) to add the following `SCOPES` (`offline_access` and `Sites.Read.All`) to your application.
6. To retrieve files from your **Document Library**, you will need its ID. To obtain it, you will need the values of `Tenant Name`, `Collection ID`, and `Subsite ID`.
7. To find your `Tenant Name` follow the instructions at this [document](https://learn.microsoft.com/en-us/azure/active-directory-b2c/tenant-management-read-tenant-name). Once you have this, just remove `.onmicrosoft.com` from the value and keep the rest as your `Tenant Name`.
8. To obtain your `Collection ID` and `Subsite ID`, you will need your **SharePoint** `site-name`. Your `SharePoint` site URL has the following format `https://<tenant-name>.sharepoint.com/sites/<site-name>`. The last part of this URL is the `site-name`.
9. To get the Site `Collection ID`, hit this URL in the browser: `https://<tenant>.sharepoint.com/sites/<site-name>/_api/site/id` and copy the value of the `Edm.Guid` property.
10. To get the `Subsite ID` (or web ID) use: `https://<tenant>.sharepoint.com/sites/<site-name>/_api/web/id` and copy the value of the `Edm.Guid` property.
11. The `SharePoint site ID` has the following format: `<tenant-name>.sharepoint.com,<Collection ID>,<subsite ID>`. You can hold on to that value to use in the next step.
12. Visit the [Graph Explorer Playground](https://developer.microsoft.com/en-us/graph/graph-explorer) to obtain your `Document Library ID`. The first step is to ensure you are logged in with the account associated with your **SharePoint** site. Then you need to make a request to `https://graph.microsoft.com/v1.0/sites/<SharePoint site ID>/drive` and the response will return a payload with a field `id` that holds your `Document Library ID`.

### 🔑 Authentication

By default, the `SharePointLoader` expects the values of `CLIENT_ID` and `CLIENT_SECRET` to be stored as environment variables named `O365_CLIENT_ID` and `O365_CLIENT_SECRET` respectively.
You could pass those environment variables through a `.env` file at the root of your application or set them in your script:

```
os.environ['O365_CLIENT_ID'] = "YOUR CLIENT ID"
os.environ['O365_CLIENT_SECRET'] = "YOUR CLIENT SECRET"
```

This loader uses an authentication flow called [_on behalf of a user_](https://learn.microsoft.com/en-us/graph/auth-v2-user?context=graph%2Fapi%2F1.0&view=graph-rest-1.0). It is a two-step authentication with user consent. When you instantiate the loader, it will print a URL that the user must visit to give consent to the app for the required permissions. The user must then visit this URL and give consent to the application. Then the user must copy the resulting page URL and paste it back into the console. The method will then return True if the login attempt was successful.

```
from langchain_community.document_loaders.sharepoint import SharePointLoader

loader = SharePointLoader(document_library_id="YOUR DOCUMENT LIBRARY ID")
```

Once the authentication has been done, the loader will store a token (`o365_token.txt`) in the `~/.credentials/` folder. This token can be used later to authenticate without the copy/paste steps explained earlier. To use this token for authentication, you need to set the `auth_with_token` parameter to True when instantiating the loader.

```
from langchain_community.document_loaders.sharepoint import SharePointLoader

loader = SharePointLoader(document_library_id="YOUR DOCUMENT LIBRARY ID", auth_with_token=True)
```

### 🗂️ Documents loader

#### 📑 Loading documents from a Document Library Directory

`SharePointLoader` can load documents from a specific folder within your Document Library. For instance, suppose you want to load all documents stored in the `Documents/marketing` folder within your Document Library.

```
from langchain_community.document_loaders.sharepoint import SharePointLoader

loader = SharePointLoader(document_library_id="YOUR DOCUMENT LIBRARY ID", folder_path="Documents/marketing", auth_with_token=True)
documents = loader.load()
```

If you are receiving the error `Resource not found for the segment`, try using the `folder_id` instead of the folder path, which can be obtained from the [Microsoft Graph API](https://developer.microsoft.com/en-us/graph/graph-explorer).

```
loader = SharePointLoader(document_library_id="YOUR DOCUMENT LIBRARY ID", auth_with_token=True, folder_id="<folder-id>")
documents = loader.load()
```

If you wish to load documents from the root directory, you can omit `folder_id`, `folder_path` and `documents_ids`, and the loader will load the root directory.

```
# loads documents from root directory
loader = SharePointLoader(document_library_id="YOUR DOCUMENT LIBRARY ID", auth_with_token=True)
documents = loader.load()
```

Combined with `recursive=True` you can simply load all documents from the whole SharePoint site:

```
# loads documents from root directory
loader = SharePointLoader(document_library_id="YOUR DOCUMENT LIBRARY ID", recursive=True, auth_with_token=True)
documents = loader.load()
```

#### 📑 Loading documents from a list of Documents IDs

Another possibility is to provide a list of `object_ids`, one for each document you want to load.
For that, you will need to query the [Microsoft Graph API](https://developer.microsoft.com/en-us/graph/graph-explorer) to find all the document IDs that you are interested in. This [link](https://learn.microsoft.com/en-us/graph/api/resources/onedrive?view=graph-rest-1.0#commonly-accessed-resources) provides a list of endpoints that will be helpful for retrieving the document IDs. For instance, to retrieve information about all objects stored in the `data/finance/` folder, you need to make a request to `https://graph.microsoft.com/v1.0/drives/<document-library-id>/root:/data/finance:/children`. Once you have the list of IDs that you are interested in, you can instantiate the loader with the following parameters.

```
from langchain_community.document_loaders.sharepoint import SharePointLoader

loader = SharePointLoader(document_library_id="YOUR DOCUMENT LIBRARY ID", object_ids=["ID_1", "ID_2"], auth_with_token=True)
documents = loader.load()
```
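Whichever way you instantiate the loader, the result is a list of Documents. A quick way to see what was picked up (the `source` metadata key is an assumption; inspect `doc.metadata` for the keys your version populates):

```
for doc in documents:
    print(doc.metadata.get("source"), "-", len(doc.page_content), "characters")
```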
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:16.190Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/microsoft_sharepoint/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/microsoft_sharepoint/", "description": "Microsoft SharePoint is a", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "0", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"microsoft_sharepoint\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:16 GMT", "etag": "W/\"69b7754e6cd0f0d91bca9c4adbc89853\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "iad1::pf6jg-1713753555964-2ccab1784946" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/microsoft_sharepoint/", "property": "og:url" }, { "content": "Microsoft SharePoint | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Microsoft SharePoint is a", "property": "og:description" } ], "title": "Microsoft SharePoint | 🦜️🔗 LangChain" }
Microsoft SharePoint Microsoft SharePoint is a website-based collaboration system that uses workflow applications, “list” databases, and other web parts and security features to empower business teams to work together developed by Microsoft. This notebook covers how to load documents from the SharePoint Document Library. Currently, only docx, doc, and pdf files are supported. Prerequisites​ Register an application with the Microsoft identity platform instructions. When registration finishes, the Azure portal displays the app registration’s Overview pane. You see the Application (client) ID. Also called the client ID, this value uniquely identifies your application in the Microsoft identity platform. During the steps you will be following at item 1, you can set the redirect URI as https://login.microsoftonline.com/common/oauth2/nativeclient During the steps you will be following at item 1, generate a new password (client_secret) under Application Secrets section. Follow the instructions at this document to add the following SCOPES (offline_access and Sites.Read.All) to your application. To retrieve files from your Document Library, you will need its ID. To obtain it, you will need values of Tenant Name, Collection ID, and Subsite ID. To find your Tenant Name follow the instructions at this document. Once you got this, just remove .onmicrosoft.com from the value and hold the rest as your Tenant Name. To obtain your Collection ID and Subsite ID, you will need your SharePoint site-name. Your SharePoint site URL has the following format https://<tenant-name>.sharepoint.com/sites/<site-name>. The last part of this URL is the site-name. To Get the Site Collection ID, hit this URL in the browser: https://<tenant>.sharepoint.com/sites/<site-name>/_api/site/id and copy the value of the Edm.Guid property. To get the Subsite ID (or web ID) use: https://<tenant>.sharepoint.com/sites/<site-name>/_api/web/id and copy the value of the Edm.Guid property. The SharePoint site ID has the following format: <tenant-name>.sharepoint.com,<Collection ID>,<subsite ID>. You can hold that value to use in the next step. Visit the Graph Explorer Playground to obtain your Document Library ID. The first step is to ensure you are logged in with the account associated with your SharePoint site. Then you need to make a request to https://graph.microsoft.com/v1.0/sites/<SharePoint site ID>/drive and the response will return a payload with a field id that holds the ID of your Document Library ID. 🔑 Authentication​ By default, the SharePointLoader expects that the values of CLIENT_ID and CLIENT_SECRET must be stored as environment variables named O365_CLIENT_ID and O365_CLIENT_SECRET respectively. You could pass those environment variables through a .env file at the root of your application or using the following command in your script. os.environ['O365_CLIENT_ID'] = "YOUR CLIENT ID" os.environ['O365_CLIENT_SECRET'] = "YOUR CLIENT SECRET" This loader uses an authentication called on behalf of a user. It is a 2 step authentication with user consent. When you instantiate the loader, it will call will print a url that the user must visit to give consent to the app on the required permissions. The user must then visit this url and give consent to the application. Then the user must copy the resulting page url and paste it back on the console. The method will then return True if the login attempt was succesful. 
```
from langchain_community.document_loaders.sharepoint import SharePointLoader

loader = SharePointLoader(document_library_id="YOUR DOCUMENT LIBRARY ID")
```

Once the authentication has been done, the loader will store a token (`o365_token.txt`) in the `~/.credentials/` folder. This token can be used later to authenticate without the copy/paste steps explained earlier. To use this token for authentication, you need to set the `auth_with_token` parameter to True when instantiating the loader.

```
from langchain_community.document_loaders.sharepoint import SharePointLoader

loader = SharePointLoader(document_library_id="YOUR DOCUMENT LIBRARY ID", auth_with_token=True)
```

## 🗂️ Documents loader

### 📑 Loading documents from a Document Library Directory

`SharePointLoader` can load documents from a specific folder within your Document Library. For instance, say you want to load all documents that are stored in the `Documents/marketing` folder within your Document Library.

```
from langchain_community.document_loaders.sharepoint import SharePointLoader

loader = SharePointLoader(document_library_id="YOUR DOCUMENT LIBRARY ID", folder_path="Documents/marketing", auth_with_token=True)
documents = loader.load()
```

If you are receiving the error `Resource not found for the segment`, try using the `folder_id` instead of the folder path, which can be obtained from the Microsoft Graph API.

```
loader = SharePointLoader(document_library_id="YOUR DOCUMENT LIBRARY ID", auth_with_token=True, folder_id="<folder-id>")
documents = loader.load()
```

If you wish to load documents from the root directory, you can omit `folder_id`, `folder_path`, and `object_ids`, and the loader will load the root directory.

```
# loads documents from root directory
loader = SharePointLoader(document_library_id="YOUR DOCUMENT LIBRARY ID", auth_with_token=True)
documents = loader.load()
```

Combined with `recursive=True`, you can simply load all documents from the whole SharePoint site:

```
# loads documents from root directory
loader = SharePointLoader(document_library_id="YOUR DOCUMENT LIBRARY ID", recursive=True, auth_with_token=True)
documents = loader.load()
```

### 📑 Loading documents from a list of Documents IDs

Another possibility is to provide a list of `object_id` values, one for each document you want to load. For that, you will need to query the Microsoft Graph API to find all the document IDs that you are interested in. The Microsoft Graph documentation provides a list of endpoints that are helpful for retrieving document IDs. For instance, to retrieve information about all objects that are stored in the `data/finance/` folder, you need to make a request to `https://graph.microsoft.com/v1.0/drives/<document-library-id>/root:/data/finance:/children` (a sketch of this request is shown after the example below). Once you have the list of IDs that you are interested in, you can instantiate the loader with the following parameters.

```
from langchain_community.document_loaders.sharepoint import SharePointLoader

loader = SharePointLoader(document_library_id="YOUR DOCUMENT LIBRARY ID", object_ids=["ID_1", "ID_2"], auth_with_token=True)
documents = loader.load()
```
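For reference, here is a minimal, hedged sketch of how those object IDs might be collected programmatically with the endpoint mentioned above, using the `requests` library. The placeholder values and the way the Microsoft Graph access token is obtained are assumptions that depend on your own setup:

```
import requests

# Hypothetical placeholders -- substitute your own values.
document_library_id = "YOUR DOCUMENT LIBRARY ID"
folder_path = "data/finance"
access_token = "A VALID MICROSOFT GRAPH ACCESS TOKEN"

# List the children of the folder and collect their object IDs.
url = (
    f"https://graph.microsoft.com/v1.0/drives/{document_library_id}"
    f"/root:/{folder_path}:/children"
)
response = requests.get(url, headers={"Authorization": f"Bearer {access_token}"})
response.raise_for_status()

object_ids = [item["id"] for item in response.json()["value"]]
```

The resulting `object_ids` list can then be passed to the loader as shown in the example above.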
https://python.langchain.com/docs/integrations/document_loaders/copypaste/
## Copy Paste

This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don’t even need to use a DocumentLoader, but rather can just construct the Document directly.

```
from langchain_community.docstore.document import Document
```

```
text = "..... put the text you copy pasted here......"
```

```
doc = Document(page_content=text)
```

If you want to add metadata about where you got this piece of text, you can easily do so with the metadata key.

```
metadata = {"source": "internet", "date": "Friday"}
```

```
doc = Document(page_content=text, metadata=metadata)
```
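The same pattern extends naturally to several pasted snippets at once. The following is a minimal sketch; the snippet texts and metadata values are purely illustrative:

```
from langchain_community.docstore.document import Document

# Illustrative snippets you might have copy-pasted from different places.
snippets = [
    ("First pasted snippet...", {"source": "internet"}),
    ("Second pasted snippet...", {"source": "email"}),
]

docs = [Document(page_content=text, metadata=meta) for text, meta in snippets]
```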
https://python.langchain.com/docs/integrations/document_loaders/microsoft_word/
## Microsoft Word

> [Microsoft Word](https://www.microsoft.com/en-us/microsoft-365/word) is a word processor developed by Microsoft.

This covers how to load `Word` documents into a document format that we can use downstream.

## Using Docx2txt[​](#using-docx2txt "Direct link to Using Docx2txt")

Load .docx using `Docx2txt` into a document.

```
%pip install --upgrade --quiet docx2txt
```

```
from langchain_community.document_loaders import Docx2txtLoader
```

```
loader = Docx2txtLoader("example_data/fake.docx")
data = loader.load()
data
```

```
[Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.docx'})]
```

## Using Unstructured[​](#using-unstructured "Direct link to Using Unstructured")

```
from langchain_community.document_loaders import UnstructuredWordDocumentLoader
```

```
loader = UnstructuredWordDocumentLoader("example_data/fake.docx")
data = loader.load()
data
```

```
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 'fake.docx'}, lookup_index=0)]
```

### Retain Elements[​](#retain-elements "Direct link to Retain Elements")

Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `mode="elements"`.

```
loader = UnstructuredWordDocumentLoader("example_data/fake.docx", mode="elements")
data = loader.load()
data[0]
```

```
Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 'fake.docx', 'filename': 'fake.docx', 'category': 'Title'}, lookup_index=0)
```

## Using Azure AI Document Intelligence[​](#using-azure-ai-document-intelligence "Direct link to Using Azure AI Document Intelligence")

> [Azure AI Document Intelligence](https://aka.ms/doc-intelligence) (formerly known as `Azure Form Recognizer`) is a machine-learning-based service that extracts texts (including handwriting), tables, document structures (e.g., titles, section headings, etc.) and key-value pairs from digital or scanned PDFs, images, Office and HTML files.
>
> Document Intelligence supports `PDF`, `JPEG/JPG`, `PNG`, `BMP`, `TIFF`, `HEIF`, `DOCX`, `XLSX`, `PPTX` and `HTML`.

This current implementation of a loader using `Document Intelligence` can incorporate content page-wise and turn it into LangChain documents. The default output format is markdown, which can be easily chained with `MarkdownHeaderTextSplitter` for semantic document chunking (see the sketch at the end of this page). You can also use `mode="single"` or `mode="page"` to return pure texts in a single page or document split by page.

## Prerequisite[​](#prerequisite "Direct link to Prerequisite")

An Azure AI Document Intelligence resource in one of the 3 preview regions: **East US**, **West US2**, **West Europe** - follow [this document](https://learn.microsoft.com/azure/ai-services/document-intelligence/create-document-intelligence-resource?view=doc-intel-4.0.0) to create one if you don’t have one. You will be passing `<endpoint>` and `<key>` as parameters to the loader.

```
%pip install --upgrade --quiet langchain langchain-community azure-ai-documentintelligence
```

```
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader

file_path = "<filepath>"
endpoint = "<endpoint>"
key = "<key>"
loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint=endpoint, api_key=key, file_path=file_path, api_model="prebuilt-layout"
)
documents = loader.load()
```
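As noted above, the markdown output pairs well with `MarkdownHeaderTextSplitter`. A minimal sketch of that chaining might look like the following; the header mapping is an arbitrary example, and `documents` is assumed to come from the loader call above:

```
from langchain_text_splitters import MarkdownHeaderTextSplitter

# Arbitrary example mapping of markdown headings to metadata keys.
headers_to_split_on = [
    ("#", "Header 1"),
    ("##", "Header 2"),
]
splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)

# Split each loaded document's markdown content into heading-scoped chunks.
chunks = []
for doc in documents:
    chunks.extend(splitter.split_text(doc.page_content))
```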
https://python.langchain.com/docs/integrations/document_loaders/couchbase/
## Couchbase > [Couchbase](http://couchbase.com/) is an award-winning distributed NoSQL cloud database that delivers unmatched versatility, performance, scalability, and financial value for all of your cloud, mobile, AI, and edge computing applications. ## Installation[​](#installation "Direct link to Installation") ``` %pip install --upgrade --quiet couchbase ``` ## Querying for Documents from Couchbase[​](#querying-for-documents-from-couchbase "Direct link to Querying for Documents from Couchbase") For more details on connecting to a Couchbase cluster, please check the [Python SDK documentation](https://docs.couchbase.com/python-sdk/current/howtos/managing-connections.html#connection-strings). For help with querying for documents using SQL++ (SQL for JSON), please check the [documentation](https://docs.couchbase.com/server/current/n1ql/n1ql-language-reference/index.html). ``` from langchain_community.document_loaders.couchbase import CouchbaseLoaderconnection_string = "couchbase://localhost" # valid Couchbase connection stringdb_username = ( "Administrator" # valid database user with read access to the bucket being queried)db_password = "Password" # password for the database user# query is a valid SQL++ queryquery = """ SELECT h.* FROM `travel-sample`.inventory.hotel h WHERE h.country = 'United States' LIMIT 1 """ ``` ## Create the Loader[​](#create-the-loader "Direct link to Create the Loader") ``` loader = CouchbaseLoader( connection_string, db_username, db_password, query,) ``` You can fetch the documents by calling the `load` method of the loader. It will return a list with all the documents. If you want to avoid this blocking call, you can call `lazy_load` method that returns an Iterator. ``` docs = loader.load()print(docs) ``` ``` [Document(page_content='address: 8301 Hollister Ave\nalias: None\ncheckin: 12PM\ncheckout: 4PM\ncity: Santa Barbara\ncountry: United States\ndescription: Located on 78 acres of oceanfront property, this resort is an upscale experience that caters to luxury travelers. There are 354 guest rooms in 19 separate villas, each in a Spanish style. Property amenities include saline infinity pools, a private beach, clay tennis courts, a 42,000 foot spa and fitness center, and nature trails through the adjoining wetland and forest. The onsite Miro restaurant provides great views of the coast with excellent food and service. With all that said, you pay for the experience, and this resort is not for the budget traveler. In addition to quoted rates there is a $25 per day resort fee that includes a bottle of wine in your room, two bottles of water, access to fitness center and spa, and internet access.\ndirections: None\nemail: None\nfax: None\nfree_breakfast: True\nfree_internet: False\nfree_parking: False\ngeo: {\'accuracy\': \'ROOFTOP\', \'lat\': 34.43429, \'lon\': -119.92137}\nid: 10180\nname: Bacara Resort & Spa\npets_ok: False\nphone: None\nprice: $300-$1000+\npublic_likes: [\'Arnoldo Towne\', \'Olaf Turcotte\', \'Ruben Volkman\', \'Adella Aufderhar\', \'Elwyn Franecki\']\nreviews: [{\'author\': \'Delmer Cole\', \'content\': "Jane and Joyce make every effort to see to your personal needs and comfort. The rooms take one back in time to the original styles and designs of the 1800\'s. A real connection to local residents, the 905 is a regular tour stop and the oldest hotel in the French Quarter. My wife and I prefer to stay in the first floor rooms where there is a sitting room with TV, bedroom, bath and kitchen. 
The kitchen has a stove and refrigerator, sink, coffeemaker, etc. Plus there is a streetside private entrance (very good security system) and a covered balcony area with seating so you can watch passersby. Quaint, cozy, and most of all: ORIGINAL. No plastic remods. Feels like my great Grandmother\'s place. While there are more luxurious places to stay, if you want the real flavor and eclectic style of N.O. you have to stay here. It just FEELS like New Orleans. The location is one block towards the river from Bourbon Street and smack dab in the middle of everything. Royal street is one of the nicest residential streets in the Quarter and you can walk back to your room and get some peace and quiet whenever you like. The French Quarter is always busy so we bring a small fan to turn on to make some white noise so we can sleep more soundly. Works great. You might not need it at the 905 but it\'s a necessity it if you stay on or near Bourbon Street, which is very loud all the time. Parking tips: You can park right in front to unload and it\'s only a couple blocks to the secure riverfront parking area. Plus there are several public parking lots nearby. My strategy is to get there early, unload, and drive around for a while near the hotel. It\'s not too hard to find a parking place but be careful about where it is. Stay away from corner spots since streets are narrow and delivery trucks don\'t have the room to turn and they will hit your car. Take note of the signs. Tuesday and Thursday they clean the streets and you can\'t park in many areas when they do or they will tow your car. Once you find a spot don\'t move it since everything is walking distance. If you find a good spot and get a ticket it will cost $20, which is cheaper than the daily rate at most parking garages. Even if you don\'t get a ticket make sure to go online to N.O. traffic ticket site to check your license number for violations. Some local kids think it\'s funny to take your ticket and throw it away since the fine doubles every month it\'s not paid. You don\'t know you got a ticket but your fine is getting bigger. We\'ve been coming to the French Quarter for years and have stayed at many of the local hotels. The 905 Royal is our favorite.", \'date\': \'2013-12-05 09:27:07 +0300\', \'ratings\': {\'Cleanliness\': 5, \'Location\': 5, \'Overall\': 5, \'Rooms\': 5, \'Service\': 5, \'Sleep Quality\': 5, \'Value\': 5}}, {\'author\': \'Orval Lebsack\', \'content\': \'I stayed there with a friend for a girls trip around St. Patricks Day. This was my third time to NOLA, my first at Chateau Lemoyne. The location is excellent....very easy walking distance to everything, without the chaos of staying right on Bourbon Street. Even though its a Holiday Inn, it still has the historical feel and look of NOLA. The pool looked nice too, even though we never used it. The staff was friendly and helpful. Chateau Lemoyne would be hard to top, considering the price.\', \'date\': \'2013-10-26 15:01:39 +0300\', \'ratings\': {\'Cleanliness\': 5, \'Location\': 5, \'Overall\': 4, \'Rooms\': 4, \'Service\': 4, \'Sleep Quality\': 5, \'Value\': 4}}, {\'author\': \'Hildegard Larkin\', \'content\': \'This hotel is a safe bet for a value stay in French Quarter. Close enough to all sites and action but just out of the real loud & noisy streets. Check in is quick and friendly and room ( king side balcony) while dated was good size and clean. Small balcony with table & chairs is a nice option for evening drink & passing sites below. 
Down side is no mimi bar fridge ( they are available upon request on a first come basis apparently, so book one when you make initial reservation if necessary) Bathroom is adequate with ok shower pressure and housekeeping is quick and efficient. TIP; forget paying high price for conducted local tours, just take the red trams to end of line and back and then next day the green tram to cross town garden district and zoo and museums. cost for each ride $2.00 each way!! fantastic. Tip: If you stay during hot weather make sure you top up on ice early as later guests can "run the machine dry" for short time. Overall experience met expectations and would recommend for value stay.\', \'date\': \'2012-01-01 18:48:30 +0300\', \'ratings\': {\'Cleanliness\': 4, \'Location\': 4, \'Overall\': 4, \'Rooms\': 3, \'Service\': 4, \'Sleep Quality\': 3, \'Value\': 4}}, {\'author\': \'Uriah Rohan\', \'content\': \'The Chateau Le Moyne Holiday Inn is in a perfect location in the French Quarter, a block away from the craziness on Bourbon St. We got a fantastic deal on Priceline and were expecting a standard room for the price. The pleasant hotel clerk upgraded our room much to our delight, without us asking and the concierge also went above an beyond to assist us with information and suggestions for places to dine and possessed an "can do" attitude. Nice pool area to cool off in during the midday NOLA heat. It is definitely a three star establishment, not super luxurious but the beds were comfy and the location superb! If you can get a deal on Priceline, etc, it\\\'s a great value.\', \'date\': \'2014-08-04 15:17:49 +0300\', \'ratings\': {\'Cleanliness\': 4, \'Location\': 5, \'Overall\': 4, \'Rooms\': 3, \'Service\': 5, \'Sleep Quality\': 4, \'Value\': 4}}]\nstate: California\ntitle: Goleta\ntollfree: None\ntype: hotel\nurl: http://www.bacararesort.com/\nvacancy: True')] ``` ``` docs_iterator = loader.lazy_load()for doc in docs_iterator: print(doc) break ``` ``` page_content='address: 8301 Hollister Ave\nalias: None\ncheckin: 12PM\ncheckout: 4PM\ncity: Santa Barbara\ncountry: United States\ndescription: Located on 78 acres of oceanfront property, this resort is an upscale experience that caters to luxury travelers. There are 354 guest rooms in 19 separate villas, each in a Spanish style. Property amenities include saline infinity pools, a private beach, clay tennis courts, a 42,000 foot spa and fitness center, and nature trails through the adjoining wetland and forest. The onsite Miro restaurant provides great views of the coast with excellent food and service. With all that said, you pay for the experience, and this resort is not for the budget traveler. In addition to quoted rates there is a $25 per day resort fee that includes a bottle of wine in your room, two bottles of water, access to fitness center and spa, and internet access.\ndirections: None\nemail: None\nfax: None\nfree_breakfast: True\nfree_internet: False\nfree_parking: False\ngeo: {\'accuracy\': \'ROOFTOP\', \'lat\': 34.43429, \'lon\': -119.92137}\nid: 10180\nname: Bacara Resort & Spa\npets_ok: False\nphone: None\nprice: $300-$1000+\npublic_likes: [\'Arnoldo Towne\', \'Olaf Turcotte\', \'Ruben Volkman\', \'Adella Aufderhar\', \'Elwyn Franecki\']\nreviews: [{\'author\': \'Delmer Cole\', \'content\': "Jane and Joyce make every effort to see to your personal needs and comfort. The rooms take one back in time to the original styles and designs of the 1800\'s. 
A real connection to local residents, the 905 is a regular tour stop and the oldest hotel in the French Quarter. My wife and I prefer to stay in the first floor rooms where there is a sitting room with TV, bedroom, bath and kitchen. The kitchen has a stove and refrigerator, sink, coffeemaker, etc. Plus there is a streetside private entrance (very good security system) and a covered balcony area with seating so you can watch passersby. Quaint, cozy, and most of all: ORIGINAL. No plastic remods. Feels like my great Grandmother\'s place. While there are more luxurious places to stay, if you want the real flavor and eclectic style of N.O. you have to stay here. It just FEELS like New Orleans. The location is one block towards the river from Bourbon Street and smack dab in the middle of everything. Royal street is one of the nicest residential streets in the Quarter and you can walk back to your room and get some peace and quiet whenever you like. The French Quarter is always busy so we bring a small fan to turn on to make some white noise so we can sleep more soundly. Works great. You might not need it at the 905 but it\'s a necessity it if you stay on or near Bourbon Street, which is very loud all the time. Parking tips: You can park right in front to unload and it\'s only a couple blocks to the secure riverfront parking area. Plus there are several public parking lots nearby. My strategy is to get there early, unload, and drive around for a while near the hotel. It\'s not too hard to find a parking place but be careful about where it is. Stay away from corner spots since streets are narrow and delivery trucks don\'t have the room to turn and they will hit your car. Take note of the signs. Tuesday and Thursday they clean the streets and you can\'t park in many areas when they do or they will tow your car. Once you find a spot don\'t move it since everything is walking distance. If you find a good spot and get a ticket it will cost $20, which is cheaper than the daily rate at most parking garages. Even if you don\'t get a ticket make sure to go online to N.O. traffic ticket site to check your license number for violations. Some local kids think it\'s funny to take your ticket and throw it away since the fine doubles every month it\'s not paid. You don\'t know you got a ticket but your fine is getting bigger. We\'ve been coming to the French Quarter for years and have stayed at many of the local hotels. The 905 Royal is our favorite.", \'date\': \'2013-12-05 09:27:07 +0300\', \'ratings\': {\'Cleanliness\': 5, \'Location\': 5, \'Overall\': 5, \'Rooms\': 5, \'Service\': 5, \'Sleep Quality\': 5, \'Value\': 5}}, {\'author\': \'Orval Lebsack\', \'content\': \'I stayed there with a friend for a girls trip around St. Patricks Day. This was my third time to NOLA, my first at Chateau Lemoyne. The location is excellent....very easy walking distance to everything, without the chaos of staying right on Bourbon Street. Even though its a Holiday Inn, it still has the historical feel and look of NOLA. The pool looked nice too, even though we never used it. The staff was friendly and helpful. Chateau Lemoyne would be hard to top, considering the price.\', \'date\': \'2013-10-26 15:01:39 +0300\', \'ratings\': {\'Cleanliness\': 5, \'Location\': 5, \'Overall\': 4, \'Rooms\': 4, \'Service\': 4, \'Sleep Quality\': 5, \'Value\': 4}}, {\'author\': \'Hildegard Larkin\', \'content\': \'This hotel is a safe bet for a value stay in French Quarter. 
Close enough to all sites and action but just out of the real loud & noisy streets. Check in is quick and friendly and room ( king side balcony) while dated was good size and clean. Small balcony with table & chairs is a nice option for evening drink & passing sites below. Down side is no mimi bar fridge ( they are available upon request on a first come basis apparently, so book one when you make initial reservation if necessary) Bathroom is adequate with ok shower pressure and housekeeping is quick and efficient. TIP; forget paying high price for conducted local tours, just take the red trams to end of line and back and then next day the green tram to cross town garden district and zoo and museums. cost for each ride $2.00 each way!! fantastic. Tip: If you stay during hot weather make sure you top up on ice early as later guests can "run the machine dry" for short time. Overall experience met expectations and would recommend for value stay.\', \'date\': \'2012-01-01 18:48:30 +0300\', \'ratings\': {\'Cleanliness\': 4, \'Location\': 4, \'Overall\': 4, \'Rooms\': 3, \'Service\': 4, \'Sleep Quality\': 3, \'Value\': 4}}, {\'author\': \'Uriah Rohan\', \'content\': \'The Chateau Le Moyne Holiday Inn is in a perfect location in the French Quarter, a block away from the craziness on Bourbon St. We got a fantastic deal on Priceline and were expecting a standard room for the price. The pleasant hotel clerk upgraded our room much to our delight, without us asking and the concierge also went above an beyond to assist us with information and suggestions for places to dine and possessed an "can do" attitude. Nice pool area to cool off in during the midday NOLA heat. It is definitely a three star establishment, not super luxurious but the beds were comfy and the location superb! If you can get a deal on Priceline, etc, it\\\'s a great value.\', \'date\': \'2014-08-04 15:17:49 +0300\', \'ratings\': {\'Cleanliness\': 4, \'Location\': 5, \'Overall\': 4, \'Rooms\': 3, \'Service\': 5, \'Sleep Quality\': 4, \'Value\': 4}}]\nstate: California\ntitle: Goleta\ntollfree: None\ntype: hotel\nurl: http://www.bacararesort.com/\nvacancy: True' ``` ## Specifying Fields with Content and Metadata[​](#specifying-fields-with-content-and-metadata "Direct link to Specifying Fields with Content and Metadata") The fields that are part of the Document content can be specified using the `page_content_fields` parameter. The metadata fields for the Document can be specified using the `metadata_fields` parameter. ``` loader_with_selected_fields = CouchbaseLoader( connection_string, db_username, db_password, query, page_content_fields=[ "address", "name", "city", "phone", "country", "geo", "description", "reviews", ], metadata_fields=["id"],)docs_with_selected_fields = loader_with_selected_fields.load()print(docs_with_selected_fields) ``` ``` [Document(page_content='address: 8301 Hollister Ave\ncity: Santa Barbara\ncountry: United States\ndescription: Located on 78 acres of oceanfront property, this resort is an upscale experience that caters to luxury travelers. There are 354 guest rooms in 19 separate villas, each in a Spanish style. Property amenities include saline infinity pools, a private beach, clay tennis courts, a 42,000 foot spa and fitness center, and nature trails through the adjoining wetland and forest. The onsite Miro restaurant provides great views of the coast with excellent food and service. With all that said, you pay for the experience, and this resort is not for the budget traveler. 
In addition to quoted rates there is a $25 per day resort fee that includes a bottle of wine in your room, two bottles of water, access to fitness center and spa, and internet access.\ngeo: {\'accuracy\': \'ROOFTOP\', \'lat\': 34.43429, \'lon\': -119.92137}\nname: Bacara Resort & Spa\nphone: None\nreviews: [{\'author\': \'Delmer Cole\', \'content\': "Jane and Joyce make every effort to see to your personal needs and comfort. The rooms take one back in time to the original styles and designs of the 1800\'s. A real connection to local residents, the 905 is a regular tour stop and the oldest hotel in the French Quarter. My wife and I prefer to stay in the first floor rooms where there is a sitting room with TV, bedroom, bath and kitchen. The kitchen has a stove and refrigerator, sink, coffeemaker, etc. Plus there is a streetside private entrance (very good security system) and a covered balcony area with seating so you can watch passersby. Quaint, cozy, and most of all: ORIGINAL. No plastic remods. Feels like my great Grandmother\'s place. While there are more luxurious places to stay, if you want the real flavor and eclectic style of N.O. you have to stay here. It just FEELS like New Orleans. The location is one block towards the river from Bourbon Street and smack dab in the middle of everything. Royal street is one of the nicest residential streets in the Quarter and you can walk back to your room and get some peace and quiet whenever you like. The French Quarter is always busy so we bring a small fan to turn on to make some white noise so we can sleep more soundly. Works great. You might not need it at the 905 but it\'s a necessity it if you stay on or near Bourbon Street, which is very loud all the time. Parking tips: You can park right in front to unload and it\'s only a couple blocks to the secure riverfront parking area. Plus there are several public parking lots nearby. My strategy is to get there early, unload, and drive around for a while near the hotel. It\'s not too hard to find a parking place but be careful about where it is. Stay away from corner spots since streets are narrow and delivery trucks don\'t have the room to turn and they will hit your car. Take note of the signs. Tuesday and Thursday they clean the streets and you can\'t park in many areas when they do or they will tow your car. Once you find a spot don\'t move it since everything is walking distance. If you find a good spot and get a ticket it will cost $20, which is cheaper than the daily rate at most parking garages. Even if you don\'t get a ticket make sure to go online to N.O. traffic ticket site to check your license number for violations. Some local kids think it\'s funny to take your ticket and throw it away since the fine doubles every month it\'s not paid. You don\'t know you got a ticket but your fine is getting bigger. We\'ve been coming to the French Quarter for years and have stayed at many of the local hotels. The 905 Royal is our favorite.", \'date\': \'2013-12-05 09:27:07 +0300\', \'ratings\': {\'Cleanliness\': 5, \'Location\': 5, \'Overall\': 5, \'Rooms\': 5, \'Service\': 5, \'Sleep Quality\': 5, \'Value\': 5}}, {\'author\': \'Orval Lebsack\', \'content\': \'I stayed there with a friend for a girls trip around St. Patricks Day. This was my third time to NOLA, my first at Chateau Lemoyne. The location is excellent....very easy walking distance to everything, without the chaos of staying right on Bourbon Street. Even though its a Holiday Inn, it still has the historical feel and look of NOLA. 
The pool looked nice too, even though we never used it. The staff was friendly and helpful. Chateau Lemoyne would be hard to top, considering the price.\', \'date\': \'2013-10-26 15:01:39 +0300\', \'ratings\': {\'Cleanliness\': 5, \'Location\': 5, \'Overall\': 4, \'Rooms\': 4, \'Service\': 4, \'Sleep Quality\': 5, \'Value\': 4}}, {\'author\': \'Hildegard Larkin\', \'content\': \'This hotel is a safe bet for a value stay in French Quarter. Close enough to all sites and action but just out of the real loud & noisy streets. Check in is quick and friendly and room ( king side balcony) while dated was good size and clean. Small balcony with table & chairs is a nice option for evening drink & passing sites below. Down side is no mimi bar fridge ( they are available upon request on a first come basis apparently, so book one when you make initial reservation if necessary) Bathroom is adequate with ok shower pressure and housekeeping is quick and efficient. TIP; forget paying high price for conducted local tours, just take the red trams to end of line and back and then next day the green tram to cross town garden district and zoo and museums. cost for each ride $2.00 each way!! fantastic. Tip: If you stay during hot weather make sure you top up on ice early as later guests can "run the machine dry" for short time. Overall experience met expectations and would recommend for value stay.\', \'date\': \'2012-01-01 18:48:30 +0300\', \'ratings\': {\'Cleanliness\': 4, \'Location\': 4, \'Overall\': 4, \'Rooms\': 3, \'Service\': 4, \'Sleep Quality\': 3, \'Value\': 4}}, {\'author\': \'Uriah Rohan\', \'content\': \'The Chateau Le Moyne Holiday Inn is in a perfect location in the French Quarter, a block away from the craziness on Bourbon St. We got a fantastic deal on Priceline and were expecting a standard room for the price. The pleasant hotel clerk upgraded our room much to our delight, without us asking and the concierge also went above an beyond to assist us with information and suggestions for places to dine and possessed an "can do" attitude. Nice pool area to cool off in during the midday NOLA heat. It is definitely a three star establishment, not super luxurious but the beds were comfy and the location superb! If you can get a deal on Priceline, etc, it\\\'s a great value.\', \'date\': \'2014-08-04 15:17:49 +0300\', \'ratings\': {\'Cleanliness\': 4, \'Location\': 5, \'Overall\': 4, \'Rooms\': 3, \'Service\': 5, \'Sleep Quality\': 4, \'Value\': 4}}]', metadata={'id': 10180})] ```
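When the query returns a large number of documents, it can be preferable to process them as they stream in rather than materializing the full list in memory. The following is a minimal, hedged sketch that combines `lazy_load` with `RecursiveCharacterTextSplitter`; the `loader` is assumed to be the one created above, and the chunking parameters are arbitrary:

```
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Arbitrary chunking parameters -- tune them for your documents.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

chunks = []
for doc in loader.lazy_load():
    # Split each document as it arrives instead of loading everything first.
    chunks.extend(splitter.split_documents([doc]))
```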
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:17.094Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/couchbase/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/couchbase/", "description": "Couchbase is an award-winning distributed", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "0", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"couchbase\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:16 GMT", "etag": "W/\"420ff8b8d28cc73dfaf2d199a3c5755f\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::5n47r-1713753556513-01125f49c985" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/couchbase/", "property": "og:url" }, { "content": "Couchbase | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Couchbase is an award-winning distributed", "property": "og:description" } ], "title": "Couchbase | 🦜️🔗 LangChain" }
Couchbase Couchbase is an award-winning distributed NoSQL cloud database that delivers unmatched versatility, performance, scalability, and financial value for all of your cloud, mobile, AI, and edge computing applications. Installation​ %pip install --upgrade --quiet couchbase Querying for Documents from Couchbase​ For more details on connecting to a Couchbase cluster, please check the Python SDK documentation. For help with querying for documents using SQL++ (SQL for JSON), please check the documentation. from langchain_community.document_loaders.couchbase import CouchbaseLoader connection_string = "couchbase://localhost" # valid Couchbase connection string db_username = ( "Administrator" # valid database user with read access to the bucket being queried ) db_password = "Password" # password for the database user # query is a valid SQL++ query query = """ SELECT h.* FROM `travel-sample`.inventory.hotel h WHERE h.country = 'United States' LIMIT 1 """ Create the Loader​ loader = CouchbaseLoader( connection_string, db_username, db_password, query, ) You can fetch the documents by calling the load method of the loader. It will return a list with all the documents. If you want to avoid this blocking call, you can call lazy_load method that returns an Iterator. docs = loader.load() print(docs) [Document(page_content='address: 8301 Hollister Ave\nalias: None\ncheckin: 12PM\ncheckout: 4PM\ncity: Santa Barbara\ncountry: United States\ndescription: Located on 78 acres of oceanfront property, this resort is an upscale experience that caters to luxury travelers. There are 354 guest rooms in 19 separate villas, each in a Spanish style. Property amenities include saline infinity pools, a private beach, clay tennis courts, a 42,000 foot spa and fitness center, and nature trails through the adjoining wetland and forest. The onsite Miro restaurant provides great views of the coast with excellent food and service. With all that said, you pay for the experience, and this resort is not for the budget traveler. In addition to quoted rates there is a $25 per day resort fee that includes a bottle of wine in your room, two bottles of water, access to fitness center and spa, and internet access.\ndirections: None\nemail: None\nfax: None\nfree_breakfast: True\nfree_internet: False\nfree_parking: False\ngeo: {\'accuracy\': \'ROOFTOP\', \'lat\': 34.43429, \'lon\': -119.92137}\nid: 10180\nname: Bacara Resort & Spa\npets_ok: False\nphone: None\nprice: $300-$1000+\npublic_likes: [\'Arnoldo Towne\', \'Olaf Turcotte\', \'Ruben Volkman\', \'Adella Aufderhar\', \'Elwyn Franecki\']\nreviews: [{\'author\': \'Delmer Cole\', \'content\': "Jane and Joyce make every effort to see to your personal needs and comfort. The rooms take one back in time to the original styles and designs of the 1800\'s. A real connection to local residents, the 905 is a regular tour stop and the oldest hotel in the French Quarter. My wife and I prefer to stay in the first floor rooms where there is a sitting room with TV, bedroom, bath and kitchen. The kitchen has a stove and refrigerator, sink, coffeemaker, etc. Plus there is a streetside private entrance (very good security system) and a covered balcony area with seating so you can watch passersby. Quaint, cozy, and most of all: ORIGINAL. No plastic remods. Feels like my great Grandmother\'s place. While there are more luxurious places to stay, if you want the real flavor and eclectic style of N.O. you have to stay here. It just FEELS like New Orleans. 
The location is one block towards the river from Bourbon Street and smack dab in the middle of everything. Royal street is one of the nicest residential streets in the Quarter and you can walk back to your room and get some peace and quiet whenever you like. The French Quarter is always busy so we bring a small fan to turn on to make some white noise so we can sleep more soundly. Works great. You might not need it at the 905 but it\'s a necessity it if you stay on or near Bourbon Street, which is very loud all the time. Parking tips: You can park right in front to unload and it\'s only a couple blocks to the secure riverfront parking area. Plus there are several public parking lots nearby. My strategy is to get there early, unload, and drive around for a while near the hotel. It\'s not too hard to find a parking place but be careful about where it is. Stay away from corner spots since streets are narrow and delivery trucks don\'t have the room to turn and they will hit your car. Take note of the signs. Tuesday and Thursday they clean the streets and you can\'t park in many areas when they do or they will tow your car. Once you find a spot don\'t move it since everything is walking distance. If you find a good spot and get a ticket it will cost $20, which is cheaper than the daily rate at most parking garages. Even if you don\'t get a ticket make sure to go online to N.O. traffic ticket site to check your license number for violations. Some local kids think it\'s funny to take your ticket and throw it away since the fine doubles every month it\'s not paid. You don\'t know you got a ticket but your fine is getting bigger. We\'ve been coming to the French Quarter for years and have stayed at many of the local hotels. The 905 Royal is our favorite.", \'date\': \'2013-12-05 09:27:07 +0300\', \'ratings\': {\'Cleanliness\': 5, \'Location\': 5, \'Overall\': 5, \'Rooms\': 5, \'Service\': 5, \'Sleep Quality\': 5, \'Value\': 5}}, {\'author\': \'Orval Lebsack\', \'content\': \'I stayed there with a friend for a girls trip around St. Patricks Day. This was my third time to NOLA, my first at Chateau Lemoyne. The location is excellent....very easy walking distance to everything, without the chaos of staying right on Bourbon Street. Even though its a Holiday Inn, it still has the historical feel and look of NOLA. The pool looked nice too, even though we never used it. The staff was friendly and helpful. Chateau Lemoyne would be hard to top, considering the price.\', \'date\': \'2013-10-26 15:01:39 +0300\', \'ratings\': {\'Cleanliness\': 5, \'Location\': 5, \'Overall\': 4, \'Rooms\': 4, \'Service\': 4, \'Sleep Quality\': 5, \'Value\': 4}}, {\'author\': \'Hildegard Larkin\', \'content\': \'This hotel is a safe bet for a value stay in French Quarter. Close enough to all sites and action but just out of the real loud & noisy streets. Check in is quick and friendly and room ( king side balcony) while dated was good size and clean. Small balcony with table & chairs is a nice option for evening drink & passing sites below. Down side is no mimi bar fridge ( they are available upon request on a first come basis apparently, so book one when you make initial reservation if necessary) Bathroom is adequate with ok shower pressure and housekeeping is quick and efficient. TIP; forget paying high price for conducted local tours, just take the red trams to end of line and back and then next day the green tram to cross town garden district and zoo and museums. cost for each ride $2.00 each way!! fantastic. 
Tip: If you stay during hot weather make sure you top up on ice early as later guests can "run the machine dry" for short time. Overall experience met expectations and would recommend for value stay.\', \'date\': \'2012-01-01 18:48:30 +0300\', \'ratings\': {\'Cleanliness\': 4, \'Location\': 4, \'Overall\': 4, \'Rooms\': 3, \'Service\': 4, \'Sleep Quality\': 3, \'Value\': 4}}, {\'author\': \'Uriah Rohan\', \'content\': \'The Chateau Le Moyne Holiday Inn is in a perfect location in the French Quarter, a block away from the craziness on Bourbon St. We got a fantastic deal on Priceline and were expecting a standard room for the price. The pleasant hotel clerk upgraded our room much to our delight, without us asking and the concierge also went above an beyond to assist us with information and suggestions for places to dine and possessed an "can do" attitude. Nice pool area to cool off in during the midday NOLA heat. It is definitely a three star establishment, not super luxurious but the beds were comfy and the location superb! If you can get a deal on Priceline, etc, it\\\'s a great value.\', \'date\': \'2014-08-04 15:17:49 +0300\', \'ratings\': {\'Cleanliness\': 4, \'Location\': 5, \'Overall\': 4, \'Rooms\': 3, \'Service\': 5, \'Sleep Quality\': 4, \'Value\': 4}}]\nstate: California\ntitle: Goleta\ntollfree: None\ntype: hotel\nurl: http://www.bacararesort.com/\nvacancy: True')] docs_iterator = loader.lazy_load() for doc in docs_iterator: print(doc) break page_content='address: 8301 Hollister Ave\nalias: None\ncheckin: 12PM\ncheckout: 4PM\ncity: Santa Barbara\ncountry: United States\ndescription: Located on 78 acres of oceanfront property, this resort is an upscale experience that caters to luxury travelers. There are 354 guest rooms in 19 separate villas, each in a Spanish style. Property amenities include saline infinity pools, a private beach, clay tennis courts, a 42,000 foot spa and fitness center, and nature trails through the adjoining wetland and forest. The onsite Miro restaurant provides great views of the coast with excellent food and service. With all that said, you pay for the experience, and this resort is not for the budget traveler. In addition to quoted rates there is a $25 per day resort fee that includes a bottle of wine in your room, two bottles of water, access to fitness center and spa, and internet access.\ndirections: None\nemail: None\nfax: None\nfree_breakfast: True\nfree_internet: False\nfree_parking: False\ngeo: {\'accuracy\': \'ROOFTOP\', \'lat\': 34.43429, \'lon\': -119.92137}\nid: 10180\nname: Bacara Resort & Spa\npets_ok: False\nphone: None\nprice: $300-$1000+\npublic_likes: [\'Arnoldo Towne\', \'Olaf Turcotte\', \'Ruben Volkman\', \'Adella Aufderhar\', \'Elwyn Franecki\']\nreviews: [{\'author\': \'Delmer Cole\', \'content\': "Jane and Joyce make every effort to see to your personal needs and comfort. The rooms take one back in time to the original styles and designs of the 1800\'s. A real connection to local residents, the 905 is a regular tour stop and the oldest hotel in the French Quarter. My wife and I prefer to stay in the first floor rooms where there is a sitting room with TV, bedroom, bath and kitchen. The kitchen has a stove and refrigerator, sink, coffeemaker, etc. Plus there is a streetside private entrance (very good security system) and a covered balcony area with seating so you can watch passersby. Quaint, cozy, and most of all: ORIGINAL. No plastic remods. Feels like my great Grandmother\'s place. 
While there are more luxurious places to stay, if you want the real flavor and eclectic style of N.O. you have to stay here. It just FEELS like New Orleans. The location is one block towards the river from Bourbon Street and smack dab in the middle of everything. Royal street is one of the nicest residential streets in the Quarter and you can walk back to your room and get some peace and quiet whenever you like. The French Quarter is always busy so we bring a small fan to turn on to make some white noise so we can sleep more soundly. Works great. You might not need it at the 905 but it\'s a necessity it if you stay on or near Bourbon Street, which is very loud all the time. Parking tips: You can park right in front to unload and it\'s only a couple blocks to the secure riverfront parking area. Plus there are several public parking lots nearby. My strategy is to get there early, unload, and drive around for a while near the hotel. It\'s not too hard to find a parking place but be careful about where it is. Stay away from corner spots since streets are narrow and delivery trucks don\'t have the room to turn and they will hit your car. Take note of the signs. Tuesday and Thursday they clean the streets and you can\'t park in many areas when they do or they will tow your car. Once you find a spot don\'t move it since everything is walking distance. If you find a good spot and get a ticket it will cost $20, which is cheaper than the daily rate at most parking garages. Even if you don\'t get a ticket make sure to go online to N.O. traffic ticket site to check your license number for violations. Some local kids think it\'s funny to take your ticket and throw it away since the fine doubles every month it\'s not paid. You don\'t know you got a ticket but your fine is getting bigger. We\'ve been coming to the French Quarter for years and have stayed at many of the local hotels. The 905 Royal is our favorite.", \'date\': \'2013-12-05 09:27:07 +0300\', \'ratings\': {\'Cleanliness\': 5, \'Location\': 5, \'Overall\': 5, \'Rooms\': 5, \'Service\': 5, \'Sleep Quality\': 5, \'Value\': 5}}, {\'author\': \'Orval Lebsack\', \'content\': \'I stayed there with a friend for a girls trip around St. Patricks Day. This was my third time to NOLA, my first at Chateau Lemoyne. The location is excellent....very easy walking distance to everything, without the chaos of staying right on Bourbon Street. Even though its a Holiday Inn, it still has the historical feel and look of NOLA. The pool looked nice too, even though we never used it. The staff was friendly and helpful. Chateau Lemoyne would be hard to top, considering the price.\', \'date\': \'2013-10-26 15:01:39 +0300\', \'ratings\': {\'Cleanliness\': 5, \'Location\': 5, \'Overall\': 4, \'Rooms\': 4, \'Service\': 4, \'Sleep Quality\': 5, \'Value\': 4}}, {\'author\': \'Hildegard Larkin\', \'content\': \'This hotel is a safe bet for a value stay in French Quarter. Close enough to all sites and action but just out of the real loud & noisy streets. Check in is quick and friendly and room ( king side balcony) while dated was good size and clean. Small balcony with table & chairs is a nice option for evening drink & passing sites below. Down side is no mimi bar fridge ( they are available upon request on a first come basis apparently, so book one when you make initial reservation if necessary) Bathroom is adequate with ok shower pressure and housekeeping is quick and efficient. 
TIP; forget paying high price for conducted local tours, just take the red trams to end of line and back and then next day the green tram to cross town garden district and zoo and museums. cost for each ride $2.00 each way!! fantastic. Tip: If you stay during hot weather make sure you top up on ice early as later guests can "run the machine dry" for short time. Overall experience met expectations and would recommend for value stay.\', \'date\': \'2012-01-01 18:48:30 +0300\', \'ratings\': {\'Cleanliness\': 4, \'Location\': 4, \'Overall\': 4, \'Rooms\': 3, \'Service\': 4, \'Sleep Quality\': 3, \'Value\': 4}}, {\'author\': \'Uriah Rohan\', \'content\': \'The Chateau Le Moyne Holiday Inn is in a perfect location in the French Quarter, a block away from the craziness on Bourbon St. We got a fantastic deal on Priceline and were expecting a standard room for the price. The pleasant hotel clerk upgraded our room much to our delight, without us asking and the concierge also went above an beyond to assist us with information and suggestions for places to dine and possessed an "can do" attitude. Nice pool area to cool off in during the midday NOLA heat. It is definitely a three star establishment, not super luxurious but the beds were comfy and the location superb! If you can get a deal on Priceline, etc, it\\\'s a great value.\', \'date\': \'2014-08-04 15:17:49 +0300\', \'ratings\': {\'Cleanliness\': 4, \'Location\': 5, \'Overall\': 4, \'Rooms\': 3, \'Service\': 5, \'Sleep Quality\': 4, \'Value\': 4}}]\nstate: California\ntitle: Goleta\ntollfree: None\ntype: hotel\nurl: http://www.bacararesort.com/\nvacancy: True' Specifying Fields with Content and Metadata​ The fields that are part of the Document content can be specified using the page_content_fields parameter. The metadata fields for the Document can be specified using the metadata_fields parameter. loader_with_selected_fields = CouchbaseLoader( connection_string, db_username, db_password, query, page_content_fields=[ "address", "name", "city", "phone", "country", "geo", "description", "reviews", ], metadata_fields=["id"], ) docs_with_selected_fields = loader_with_selected_fields.load() print(docs_with_selected_fields) [Document(page_content='address: 8301 Hollister Ave\ncity: Santa Barbara\ncountry: United States\ndescription: Located on 78 acres of oceanfront property, this resort is an upscale experience that caters to luxury travelers. There are 354 guest rooms in 19 separate villas, each in a Spanish style. Property amenities include saline infinity pools, a private beach, clay tennis courts, a 42,000 foot spa and fitness center, and nature trails through the adjoining wetland and forest. The onsite Miro restaurant provides great views of the coast with excellent food and service. With all that said, you pay for the experience, and this resort is not for the budget traveler. In addition to quoted rates there is a $25 per day resort fee that includes a bottle of wine in your room, two bottles of water, access to fitness center and spa, and internet access.\ngeo: {\'accuracy\': \'ROOFTOP\', \'lat\': 34.43429, \'lon\': -119.92137}\nname: Bacara Resort & Spa\nphone: None\nreviews: [{\'author\': \'Delmer Cole\', \'content\': "Jane and Joyce make every effort to see to your personal needs and comfort. The rooms take one back in time to the original styles and designs of the 1800\'s. A real connection to local residents, the 905 is a regular tour stop and the oldest hotel in the French Quarter. 
My wife and I prefer to stay in the first floor rooms where there is a sitting room with TV, bedroom, bath and kitchen. The kitchen has a stove and refrigerator, sink, coffeemaker, etc. Plus there is a streetside private entrance (very good security system) and a covered balcony area with seating so you can watch passersby. Quaint, cozy, and most of all: ORIGINAL. No plastic remods. Feels like my great Grandmother\'s place. While there are more luxurious places to stay, if you want the real flavor and eclectic style of N.O. you have to stay here. It just FEELS like New Orleans. The location is one block towards the river from Bourbon Street and smack dab in the middle of everything. Royal street is one of the nicest residential streets in the Quarter and you can walk back to your room and get some peace and quiet whenever you like. The French Quarter is always busy so we bring a small fan to turn on to make some white noise so we can sleep more soundly. Works great. You might not need it at the 905 but it\'s a necessity it if you stay on or near Bourbon Street, which is very loud all the time. Parking tips: You can park right in front to unload and it\'s only a couple blocks to the secure riverfront parking area. Plus there are several public parking lots nearby. My strategy is to get there early, unload, and drive around for a while near the hotel. It\'s not too hard to find a parking place but be careful about where it is. Stay away from corner spots since streets are narrow and delivery trucks don\'t have the room to turn and they will hit your car. Take note of the signs. Tuesday and Thursday they clean the streets and you can\'t park in many areas when they do or they will tow your car. Once you find a spot don\'t move it since everything is walking distance. If you find a good spot and get a ticket it will cost $20, which is cheaper than the daily rate at most parking garages. Even if you don\'t get a ticket make sure to go online to N.O. traffic ticket site to check your license number for violations. Some local kids think it\'s funny to take your ticket and throw it away since the fine doubles every month it\'s not paid. You don\'t know you got a ticket but your fine is getting bigger. We\'ve been coming to the French Quarter for years and have stayed at many of the local hotels. The 905 Royal is our favorite.", \'date\': \'2013-12-05 09:27:07 +0300\', \'ratings\': {\'Cleanliness\': 5, \'Location\': 5, \'Overall\': 5, \'Rooms\': 5, \'Service\': 5, \'Sleep Quality\': 5, \'Value\': 5}}, {\'author\': \'Orval Lebsack\', \'content\': \'I stayed there with a friend for a girls trip around St. Patricks Day. This was my third time to NOLA, my first at Chateau Lemoyne. The location is excellent....very easy walking distance to everything, without the chaos of staying right on Bourbon Street. Even though its a Holiday Inn, it still has the historical feel and look of NOLA. The pool looked nice too, even though we never used it. The staff was friendly and helpful. Chateau Lemoyne would be hard to top, considering the price.\', \'date\': \'2013-10-26 15:01:39 +0300\', \'ratings\': {\'Cleanliness\': 5, \'Location\': 5, \'Overall\': 4, \'Rooms\': 4, \'Service\': 4, \'Sleep Quality\': 5, \'Value\': 4}}, {\'author\': \'Hildegard Larkin\', \'content\': \'This hotel is a safe bet for a value stay in French Quarter. Close enough to all sites and action but just out of the real loud & noisy streets. Check in is quick and friendly and room ( king side balcony) while dated was good size and clean. 
Small balcony with table & chairs is a nice option for evening drink & passing sites below. Down side is no mimi bar fridge ( they are available upon request on a first come basis apparently, so book one when you make initial reservation if necessary) Bathroom is adequate with ok shower pressure and housekeeping is quick and efficient. TIP; forget paying high price for conducted local tours, just take the red trams to end of line and back and then next day the green tram to cross town garden district and zoo and museums. cost for each ride $2.00 each way!! fantastic. Tip: If you stay during hot weather make sure you top up on ice early as later guests can "run the machine dry" for short time. Overall experience met expectations and would recommend for value stay.\', \'date\': \'2012-01-01 18:48:30 +0300\', \'ratings\': {\'Cleanliness\': 4, \'Location\': 4, \'Overall\': 4, \'Rooms\': 3, \'Service\': 4, \'Sleep Quality\': 3, \'Value\': 4}}, {\'author\': \'Uriah Rohan\', \'content\': \'The Chateau Le Moyne Holiday Inn is in a perfect location in the French Quarter, a block away from the craziness on Bourbon St. We got a fantastic deal on Priceline and were expecting a standard room for the price. The pleasant hotel clerk upgraded our room much to our delight, without us asking and the concierge also went above an beyond to assist us with information and suggestions for places to dine and possessed an "can do" attitude. Nice pool area to cool off in during the midday NOLA heat. It is definitely a three star establishment, not super luxurious but the beds were comfy and the location superb! If you can get a deal on Priceline, etc, it\\\'s a great value.\', \'date\': \'2014-08-04 15:17:49 +0300\', \'ratings\': {\'Cleanliness\': 4, \'Location\': 5, \'Overall\': 4, \'Rooms\': 3, \'Service\': 5, \'Sleep Quality\': 4, \'Value\': 4}}]', metadata={'id': 10180})]
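The Couchbase example above shows the loader's constructor arguments and a sample of its output, but the extraction does not keep a complete end-to-end cell. Below is a minimal, self-contained sketch. The connection string, credentials, and the `travel-sample` SQL++ query are placeholder assumptions, as is the submodule import path; only the argument names (`connection_string`, `db_username`, `db_password`, `query`, `page_content_fields`, `metadata_fields`) come from the page content above.

```
# Hedged sketch: cluster address, credentials, and the query are placeholders.
# Requires the `couchbase` Python SDK in addition to langchain-community.
from langchain_community.document_loaders.couchbase import CouchbaseLoader

connection_string = "couchbase://localhost"  # placeholder cluster address
db_username = "Administrator"                # placeholder credentials
db_password = "password"

# SQL++ query returning hotel documents similar to the sample output above
query = """
    SELECT META(h).id AS id, h.*
    FROM `travel-sample`.inventory.hotel AS h
    WHERE h.country = 'United States'
    LIMIT 5
"""

loader = CouchbaseLoader(
    connection_string,
    db_username,
    db_password,
    query,
    # only these fields are rendered into page_content
    page_content_fields=["name", "city", "country", "description", "reviews"],
    # these fields become Document.metadata
    metadata_fields=["id"],
)

docs = loader.load()
for doc in docs:
    print(doc.metadata, doc.page_content[:120])
```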
https://python.langchain.com/docs/integrations/document_loaders/modern_treasury/
## Modern Treasury

> [Modern Treasury](https://www.moderntreasury.com/) simplifies complex payment operations. It is a unified platform to power products and processes that move money.
>
> - Connect to banks and payment systems
> - Track transactions and balances in real-time
> - Automate payment operations for scale

This notebook covers how to load data from the `Modern Treasury REST API` into a format that can be ingested into LangChain, along with example usage for vectorization.

```
from langchain.indexes import VectorstoreIndexCreator
from langchain_community.document_loaders import ModernTreasuryLoader
```

The Modern Treasury API requires an organization ID and API key, which can be found in the Modern Treasury dashboard within developer settings. This document loader also requires a `resource` option which defines what data you want to load. The following resources are available:

- `payment_orders` [Documentation](https://docs.moderntreasury.com/reference/payment-order-object)
- `expected_payments` [Documentation](https://docs.moderntreasury.com/reference/expected-payment-object)
- `returns` [Documentation](https://docs.moderntreasury.com/reference/return-object)
- `incoming_payment_details` [Documentation](https://docs.moderntreasury.com/reference/incoming-payment-detail-object)
- `counterparties` [Documentation](https://docs.moderntreasury.com/reference/counterparty-object)
- `internal_accounts` [Documentation](https://docs.moderntreasury.com/reference/internal-account-object)
- `external_accounts` [Documentation](https://docs.moderntreasury.com/reference/external-account-object)
- `transactions` [Documentation](https://docs.moderntreasury.com/reference/transaction-object)
- `ledgers` [Documentation](https://docs.moderntreasury.com/reference/ledger-object)
- `ledger_accounts` [Documentation](https://docs.moderntreasury.com/reference/ledger-account-object)
- `ledger_transactions` [Documentation](https://docs.moderntreasury.com/reference/ledger-transaction-object)
- `events` [Documentation](https://docs.moderntreasury.com/reference/events)
- `invoices` [Documentation](https://docs.moderntreasury.com/reference/invoices)

```
modern_treasury_loader = ModernTreasuryLoader("payment_orders")
```

```
# Create a vectorstore retriever from the loader
# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more details
index = VectorstoreIndexCreator().from_loaders([modern_treasury_loader])
modern_treasury_doc_retriever = index.vectorstore.as_retriever()
```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:18.935Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/modern_treasury/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/modern_treasury/", "description": "Modern Treasury simplifies complex", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4387", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"modern_treasury\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:18 GMT", "etag": "W/\"85a4ae6b3f9f303c7c2618cacc020419\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::klsh9-1713753558594-c1050a49e865" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/modern_treasury/", "property": "og:url" }, { "content": "Modern Treasury | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Modern Treasury simplifies complex", "property": "og:description" } ], "title": "Modern Treasury | 🦜️🔗 LangChain" }
Modern Treasury Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money. - Connect to banks and payment systems - Track transactions and balances in real-time - Automate payment operations for scale This notebook covers how to load data from the Modern Treasury REST API into a format that can be ingested into LangChain, along with example usage for vectorization. from langchain.indexes import VectorstoreIndexCreator from langchain_community.document_loaders import ModernTreasuryLoader The Modern Treasury API requires an organization ID and API key, which can be found in the Modern Treasury dashboard within developer settings. This document loader also requires a resource option which defines what data you want to load. The following resources are available: payment_orders Documentation expected_payments Documentation returns Documentation incoming_payment_details Documentation counterparties Documentation internal_accounts Documentation external_accounts Documentation transactions Documentation ledgers Documentation ledger_accounts Documentation ledger_transactions Documentation events Documentation invoices Documentation modern_treasury_loader = ModernTreasuryLoader("payment_orders") # Create a vectorstore retriever from the loader # see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more details index = VectorstoreIndexCreator().from_loaders([modern_treasury_loader]) modern_treasury_doc_retriever = index.vectorstore.as_retriever()
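As a small extension of the snippet above, the sketch below loads a different resource (`counterparties`, one of the resources listed) and inspects the returned documents. It assumes your Modern Treasury organization ID and API key are already configured for the loader as described above; nothing beyond the documented `ModernTreasuryLoader(resource)` call and `load()` is relied on.

```
# Hedged sketch: assumes Modern Treasury credentials are already configured.
from langchain_community.document_loaders import ModernTreasuryLoader

# "counterparties" is one of the resources listed above
counterparties_loader = ModernTreasuryLoader("counterparties")
docs = counterparties_loader.load()

print(f"Loaded {len(docs)} counterparty documents")
if docs:
    # each Document wraps one Modern Treasury API object
    print(docs[0].page_content[:200])
```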
https://python.langchain.com/docs/integrations/document_loaders/cube_semantic/
This notebook demonstrates the process of retrieving Cube’s data model metadata in a format suitable for passing to LLMs as embeddings, thereby enhancing contextual information.

[Cube](https://cube.dev/) is the Semantic Layer for building data apps. It helps data engineers and application developers access data from modern data stores, organize it into consistent definitions, and deliver it to every application.

Cube’s data model provides structure and definitions that are used as context for the LLM to understand data and generate correct queries. The LLM doesn’t need to navigate complex joins and metrics calculations because Cube abstracts those away and provides a simple interface that operates on business-level terminology instead of SQL table and column names. This simplification helps the LLM be less error-prone and avoid hallucinations.

```
import jwt

from langchain_community.document_loaders import CubeSemanticLoader

api_url = "https://api-example.gcp-us-central1.cubecloudapp.dev/cubejs-api/v1/meta"
cubejs_api_secret = "api-secret-here"
security_context = {}
# Read more about security context here: https://cube.dev/docs/security
api_token = jwt.encode(security_context, cubejs_api_secret, algorithm="HS256")

loader = CubeSemanticLoader(api_url, api_token)
documents = loader.load()
```

Each returned document describes one column of a Cube view or cube, for example:

```
# Given string containing page content
page_content = "Users View City, None"

# Given dictionary containing metadata
metadata = {
    "table_name": "users_view",
    "column_name": "users_view.city",
    "column_data_type": "string",
    "column_title": "Users View City",
    "column_description": "None",
    "column_member_type": "dimension",
    "column_values": [
        "Austin",
        "Chicago",
        "Los Angeles",
        "Mountain View",
        "New York",
        "Palo Alto",
        "San Francisco",
        "Seattle",
    ],
    "cube_data_obj_type": "view",
}
```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:20.305Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/cube_semantic/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/cube_semantic/", "description": "This notebook demonstrates the process of retrieving Cube’s data model", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3465", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"cube_semantic\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:19 GMT", "etag": "W/\"7049523080a369d6aa39e5fa03d176a0\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::57h9m-1713753559905-6403bf13d4de" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/cube_semantic/", "property": "og:url" }, { "content": "Cube Semantic Layer | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "This notebook demonstrates the process of retrieving Cube’s data model", "property": "og:description" } ], "title": "Cube Semantic Layer | 🦜️🔗 LangChain" }
This notebook demonstrates the process of retrieving Cube’s data model metadata in a format suitable for passing to LLMs as embeddings, thereby enhancing contextual information. Cube is the Semantic Layer for building data apps. It helps data engineers and application developers access data from modern data stores, organize it into consistent definitions, and deliver it to every application. Cube’s data model provides structure and definitions that are used as context for the LLM to understand data and generate correct queries. The LLM doesn’t need to navigate complex joins and metrics calculations because Cube abstracts those away and provides a simple interface that operates on business-level terminology instead of SQL table and column names. This simplification helps the LLM be less error-prone and avoid hallucinations. import jwt from langchain_community.document_loaders import CubeSemanticLoader api_url = "https://api-example.gcp-us-central1.cubecloudapp.dev/cubejs-api/v1/meta" cubejs_api_secret = "api-secret-here" security_context = {} # Read more about security context here: https://cube.dev/docs/security api_token = jwt.encode(security_context, cubejs_api_secret, algorithm="HS256") loader = CubeSemanticLoader(api_url, api_token) documents = loader.load() # Given string containing page content page_content = "Users View City, None" # Given dictionary containing metadata metadata = { "table_name": "users_view", "column_name": "users_view.city", "column_data_type": "string", "column_title": "Users View City", "column_description": "None", "column_member_type": "dimension", "column_values": [ "Austin", "Chicago", "Los Angeles", "Mountain View", "New York", "Palo Alto", "San Francisco", "Seattle", ], "cube_data_obj_type": "view", }
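Building on the example above, here is a small, hypothetical helper (not part of the original page) that turns the loaded documents into a compact context string for an LLM prompt. It relies only on the metadata keys shown in the sample record (`table_name`, `column_name`, `column_data_type`, `column_title`, `column_member_type`, `column_values`).

```
# Hypothetical helper built on the metadata keys shown above.
def build_table_context(documents, table_name):
    """Summarize the columns of one Cube view/cube as plain text for a prompt."""
    lines = [f"Columns of {table_name}:"]
    for doc in documents:
        meta = doc.metadata
        if meta.get("table_name") != table_name:
            continue
        line = (
            f"- {meta['column_name']} ({meta['column_data_type']}, "
            f"{meta['column_member_type']}): {meta['column_title']}"
        )
        values = meta.get("column_values") or []
        if values:
            line += f"; sample values: {', '.join(values[:5])}"
        lines.append(line)
    return "\n".join(lines)

# Example usage with the `documents` returned by CubeSemanticLoader above:
# print(build_table_context(documents, "users_view"))
```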
https://python.langchain.com/docs/integrations/document_loaders/csv/
``` [Document(page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0}, lookup_index=0), Document(page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}, lookup_index=0), Document(page_content='Team: Yankees\n"Payroll (millions)": 197.96\n"Wins": 95', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 2}, lookup_index=0), Document(page_content='Team: Giants\n"Payroll (millions)": 117.62\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 3}, lookup_index=0), Document(page_content='Team: Braves\n"Payroll (millions)": 83.31\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 4}, lookup_index=0), Document(page_content='Team: Athletics\n"Payroll (millions)": 55.37\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 5}, lookup_index=0), Document(page_content='Team: Rangers\n"Payroll (millions)": 120.51\n"Wins": 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 6}, lookup_index=0), Document(page_content='Team: Orioles\n"Payroll (millions)": 81.43\n"Wins": 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 7}, lookup_index=0), Document(page_content='Team: Rays\n"Payroll (millions)": 64.17\n"Wins": 90', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 8}, lookup_index=0), Document(page_content='Team: Angels\n"Payroll (millions)": 154.49\n"Wins": 89', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 9}, lookup_index=0), Document(page_content='Team: Tigers\n"Payroll (millions)": 132.30\n"Wins": 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 10}, lookup_index=0), Document(page_content='Team: Cardinals\n"Payroll (millions)": 110.30\n"Wins": 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 11}, lookup_index=0), Document(page_content='Team: Dodgers\n"Payroll (millions)": 95.14\n"Wins": 86', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 12}, lookup_index=0), Document(page_content='Team: White Sox\n"Payroll (millions)": 96.92\n"Wins": 85', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 13}, lookup_index=0), Document(page_content='Team: Brewers\n"Payroll (millions)": 97.65\n"Wins": 83', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 14}, lookup_index=0), Document(page_content='Team: Phillies\n"Payroll (millions)": 174.54\n"Wins": 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 15}, lookup_index=0), Document(page_content='Team: Diamondbacks\n"Payroll (millions)": 74.28\n"Wins": 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 16}, lookup_index=0), Document(page_content='Team: Pirates\n"Payroll (millions)": 63.43\n"Wins": 79', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 17}, lookup_index=0), Document(page_content='Team: Padres\n"Payroll (millions)": 55.24\n"Wins": 76', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 18}, lookup_index=0), Document(page_content='Team: Mariners\n"Payroll (millions)": 81.97\n"Wins": 75', lookup_str='', 
metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 19}, lookup_index=0), Document(page_content='Team: Mets\n"Payroll (millions)": 93.35\n"Wins": 74', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 20}, lookup_index=0), Document(page_content='Team: Blue Jays\n"Payroll (millions)": 75.48\n"Wins": 73', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 21}, lookup_index=0), Document(page_content='Team: Royals\n"Payroll (millions)": 60.91\n"Wins": 72', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 22}, lookup_index=0), Document(page_content='Team: Marlins\n"Payroll (millions)": 118.07\n"Wins": 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 23}, lookup_index=0), Document(page_content='Team: Red Sox\n"Payroll (millions)": 173.18\n"Wins": 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 24}, lookup_index=0), Document(page_content='Team: Indians\n"Payroll (millions)": 78.43\n"Wins": 68', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 25}, lookup_index=0), Document(page_content='Team: Twins\n"Payroll (millions)": 94.08\n"Wins": 66', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 26}, lookup_index=0), Document(page_content='Team: Rockies\n"Payroll (millions)": 78.06\n"Wins": 64', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 27}, lookup_index=0), Document(page_content='Team: Cubs\n"Payroll (millions)": 88.19\n"Wins": 61', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 28}, lookup_index=0), Document(page_content='Team: Astros\n"Payroll (millions)": 60.65\n"Wins": 55', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 29}, lookup_index=0)] ``` See the [csv module](https://docs.python.org/3/library/csv.html) documentation for more information of what csv args are supported. 
``` [Document(page_content='MLB Team: Team\nPayroll in millions: "Payroll (millions)"\nWins: "Wins"', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0}, lookup_index=0), Document(page_content='MLB Team: Nationals\nPayroll in millions: 81.34\nWins: 98', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}, lookup_index=0), Document(page_content='MLB Team: Reds\nPayroll in millions: 82.20\nWins: 97', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 2}, lookup_index=0), Document(page_content='MLB Team: Yankees\nPayroll in millions: 197.96\nWins: 95', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 3}, lookup_index=0), Document(page_content='MLB Team: Giants\nPayroll in millions: 117.62\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 4}, lookup_index=0), Document(page_content='MLB Team: Braves\nPayroll in millions: 83.31\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 5}, lookup_index=0), Document(page_content='MLB Team: Athletics\nPayroll in millions: 55.37\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 6}, lookup_index=0), Document(page_content='MLB Team: Rangers\nPayroll in millions: 120.51\nWins: 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 7}, lookup_index=0), Document(page_content='MLB Team: Orioles\nPayroll in millions: 81.43\nWins: 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 8}, lookup_index=0), Document(page_content='MLB Team: Rays\nPayroll in millions: 64.17\nWins: 90', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 9}, lookup_index=0), Document(page_content='MLB Team: Angels\nPayroll in millions: 154.49\nWins: 89', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 10}, lookup_index=0), Document(page_content='MLB Team: Tigers\nPayroll in millions: 132.30\nWins: 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 11}, lookup_index=0), Document(page_content='MLB Team: Cardinals\nPayroll in millions: 110.30\nWins: 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 12}, lookup_index=0), Document(page_content='MLB Team: Dodgers\nPayroll in millions: 95.14\nWins: 86', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 13}, lookup_index=0), Document(page_content='MLB Team: White Sox\nPayroll in millions: 96.92\nWins: 85', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 14}, lookup_index=0), Document(page_content='MLB Team: Brewers\nPayroll in millions: 97.65\nWins: 83', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 15}, lookup_index=0), Document(page_content='MLB Team: Phillies\nPayroll in millions: 174.54\nWins: 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 16}, lookup_index=0), Document(page_content='MLB Team: Diamondbacks\nPayroll in millions: 74.28\nWins: 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 17}, lookup_index=0), Document(page_content='MLB Team: Pirates\nPayroll in millions: 63.43\nWins: 79', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 18}, lookup_index=0), Document(page_content='MLB Team: Padres\nPayroll in millions: 55.24\nWins: 
76', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 19}, lookup_index=0), Document(page_content='MLB Team: Mariners\nPayroll in millions: 81.97\nWins: 75', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 20}, lookup_index=0), Document(page_content='MLB Team: Mets\nPayroll in millions: 93.35\nWins: 74', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 21}, lookup_index=0), Document(page_content='MLB Team: Blue Jays\nPayroll in millions: 75.48\nWins: 73', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 22}, lookup_index=0), Document(page_content='MLB Team: Royals\nPayroll in millions: 60.91\nWins: 72', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 23}, lookup_index=0), Document(page_content='MLB Team: Marlins\nPayroll in millions: 118.07\nWins: 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 24}, lookup_index=0), Document(page_content='MLB Team: Red Sox\nPayroll in millions: 173.18\nWins: 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 25}, lookup_index=0), Document(page_content='MLB Team: Indians\nPayroll in millions: 78.43\nWins: 68', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 26}, lookup_index=0), Document(page_content='MLB Team: Twins\nPayroll in millions: 94.08\nWins: 66', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 27}, lookup_index=0), Document(page_content='MLB Team: Rockies\nPayroll in millions: 78.06\nWins: 64', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 28}, lookup_index=0), Document(page_content='MLB Team: Cubs\nPayroll in millions: 88.19\nWins: 61', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 29}, lookup_index=0), Document(page_content='MLB Team: Astros\nPayroll in millions: 60.65\nWins: 55', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 30}, lookup_index=0)] ``` Use the `source_column` argument to specify a source for the document created from each row. Otherwise `file_path` will be used as the source for all documents created from the CSV file. This is useful when using documents loaded from CSV files for chains that answer questions using sources. 
``` [Document(page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98', lookup_str='', metadata={'source': 'Nationals', 'row': 0}, lookup_index=0), Document(page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97', lookup_str='', metadata={'source': 'Reds', 'row': 1}, lookup_index=0), Document(page_content='Team: Yankees\n"Payroll (millions)": 197.96\n"Wins": 95', lookup_str='', metadata={'source': 'Yankees', 'row': 2}, lookup_index=0), Document(page_content='Team: Giants\n"Payroll (millions)": 117.62\n"Wins": 94', lookup_str='', metadata={'source': 'Giants', 'row': 3}, lookup_index=0), Document(page_content='Team: Braves\n"Payroll (millions)": 83.31\n"Wins": 94', lookup_str='', metadata={'source': 'Braves', 'row': 4}, lookup_index=0), Document(page_content='Team: Athletics\n"Payroll (millions)": 55.37\n"Wins": 94', lookup_str='', metadata={'source': 'Athletics', 'row': 5}, lookup_index=0), Document(page_content='Team: Rangers\n"Payroll (millions)": 120.51\n"Wins": 93', lookup_str='', metadata={'source': 'Rangers', 'row': 6}, lookup_index=0), Document(page_content='Team: Orioles\n"Payroll (millions)": 81.43\n"Wins": 93', lookup_str='', metadata={'source': 'Orioles', 'row': 7}, lookup_index=0), Document(page_content='Team: Rays\n"Payroll (millions)": 64.17\n"Wins": 90', lookup_str='', metadata={'source': 'Rays', 'row': 8}, lookup_index=0), Document(page_content='Team: Angels\n"Payroll (millions)": 154.49\n"Wins": 89', lookup_str='', metadata={'source': 'Angels', 'row': 9}, lookup_index=0), Document(page_content='Team: Tigers\n"Payroll (millions)": 132.30\n"Wins": 88', lookup_str='', metadata={'source': 'Tigers', 'row': 10}, lookup_index=0), Document(page_content='Team: Cardinals\n"Payroll (millions)": 110.30\n"Wins": 88', lookup_str='', metadata={'source': 'Cardinals', 'row': 11}, lookup_index=0), Document(page_content='Team: Dodgers\n"Payroll (millions)": 95.14\n"Wins": 86', lookup_str='', metadata={'source': 'Dodgers', 'row': 12}, lookup_index=0), Document(page_content='Team: White Sox\n"Payroll (millions)": 96.92\n"Wins": 85', lookup_str='', metadata={'source': 'White Sox', 'row': 13}, lookup_index=0), Document(page_content='Team: Brewers\n"Payroll (millions)": 97.65\n"Wins": 83', lookup_str='', metadata={'source': 'Brewers', 'row': 14}, lookup_index=0), Document(page_content='Team: Phillies\n"Payroll (millions)": 174.54\n"Wins": 81', lookup_str='', metadata={'source': 'Phillies', 'row': 15}, lookup_index=0), Document(page_content='Team: Diamondbacks\n"Payroll (millions)": 74.28\n"Wins": 81', lookup_str='', metadata={'source': 'Diamondbacks', 'row': 16}, lookup_index=0), Document(page_content='Team: Pirates\n"Payroll (millions)": 63.43\n"Wins": 79', lookup_str='', metadata={'source': 'Pirates', 'row': 17}, lookup_index=0), Document(page_content='Team: Padres\n"Payroll (millions)": 55.24\n"Wins": 76', lookup_str='', metadata={'source': 'Padres', 'row': 18}, lookup_index=0), Document(page_content='Team: Mariners\n"Payroll (millions)": 81.97\n"Wins": 75', lookup_str='', metadata={'source': 'Mariners', 'row': 19}, lookup_index=0), Document(page_content='Team: Mets\n"Payroll (millions)": 93.35\n"Wins": 74', lookup_str='', metadata={'source': 'Mets', 'row': 20}, lookup_index=0), Document(page_content='Team: Blue Jays\n"Payroll (millions)": 75.48\n"Wins": 73', lookup_str='', metadata={'source': 'Blue Jays', 'row': 21}, lookup_index=0), Document(page_content='Team: Royals\n"Payroll (millions)": 60.91\n"Wins": 72', lookup_str='', metadata={'source': 'Royals', 'row': 22}, 
lookup_index=0), Document(page_content='Team: Marlins\n"Payroll (millions)": 118.07\n"Wins": 69', lookup_str='', metadata={'source': 'Marlins', 'row': 23}, lookup_index=0), Document(page_content='Team: Red Sox\n"Payroll (millions)": 173.18\n"Wins": 69', lookup_str='', metadata={'source': 'Red Sox', 'row': 24}, lookup_index=0), Document(page_content='Team: Indians\n"Payroll (millions)": 78.43\n"Wins": 68', lookup_str='', metadata={'source': 'Indians', 'row': 25}, lookup_index=0), Document(page_content='Team: Twins\n"Payroll (millions)": 94.08\n"Wins": 66', lookup_str='', metadata={'source': 'Twins', 'row': 26}, lookup_index=0), Document(page_content='Team: Rockies\n"Payroll (millions)": 78.06\n"Wins": 64', lookup_str='', metadata={'source': 'Rockies', 'row': 27}, lookup_index=0), Document(page_content='Team: Cubs\n"Payroll (millions)": 88.19\n"Wins": 61', lookup_str='', metadata={'source': 'Cubs', 'row': 28}, lookup_index=0), Document(page_content='Team: Astros\n"Payroll (millions)": 60.65\n"Wins": 55', lookup_str='', metadata={'source': 'Astros', 'row': 29}, lookup_index=0)] ``` You can also load the table using the `UnstructuredCSVLoader`. One advantage of using `UnstructuredCSVLoader` is that if you use it in `"elements"` mode, an HTML representation of the table will be available in the metadata. ``` <table border="1" class="dataframe"> <tbody> <tr> <td>Nationals</td> <td>81.34</td> <td>98</td> </tr> <tr> <td>Reds</td> <td>82.20</td> <td>97</td> </tr> <tr> <td>Yankees</td> <td>197.96</td> <td>95</td> </tr> <tr> <td>Giants</td> <td>117.62</td> <td>94</td> </tr> <tr> <td>Braves</td> <td>83.31</td> <td>94</td> </tr> <tr> <td>Athletics</td> <td>55.37</td> <td>94</td> </tr> <tr> <td>Rangers</td> <td>120.51</td> <td>93</td> </tr> <tr> <td>Orioles</td> <td>81.43</td> <td>93</td> </tr> <tr> <td>Rays</td> <td>64.17</td> <td>90</td> </tr> <tr> <td>Angels</td> <td>154.49</td> <td>89</td> </tr> <tr> <td>Tigers</td> <td>132.30</td> <td>88</td> </tr> <tr> <td>Cardinals</td> <td>110.30</td> <td>88</td> </tr> <tr> <td>Dodgers</td> <td>95.14</td> <td>86</td> </tr> <tr> <td>White Sox</td> <td>96.92</td> <td>85</td> </tr> <tr> <td>Brewers</td> <td>97.65</td> <td>83</td> </tr> <tr> <td>Phillies</td> <td>174.54</td> <td>81</td> </tr> <tr> <td>Diamondbacks</td> <td>74.28</td> <td>81</td> </tr> <tr> <td>Pirates</td> <td>63.43</td> <td>79</td> </tr> <tr> <td>Padres</td> <td>55.24</td> <td>76</td> </tr> <tr> <td>Mariners</td> <td>81.97</td> <td>75</td> </tr> <tr> <td>Mets</td> <td>93.35</td> <td>74</td> </tr> <tr> <td>Blue Jays</td> <td>75.48</td> <td>73</td> </tr> <tr> <td>Royals</td> <td>60.91</td> <td>72</td> </tr> <tr> <td>Marlins</td> <td>118.07</td> <td>69</td> </tr> <tr> <td>Red Sox</td> <td>173.18</td> <td>69</td> </tr> <tr> <td>Indians</td> <td>78.43</td> <td>68</td> </tr> <tr> <td>Twins</td> <td>94.08</td> <td>66</td> </tr> <tr> <td>Rockies</td> <td>78.06</td> <td>64</td> </tr> <tr> <td>Cubs</td> <td>88.19</td> <td>61</td> </tr> <tr> <td>Astros</td> <td>60.65</td> <td>55</td> </tr> </tbody></table> ```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:19.976Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/csv/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/csv/", "description": "A [comma-separated values", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "7426", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"csv\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:19 GMT", "etag": "W/\"2bea179a00333490d99524ea830ecbda\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::fc95f-1713753559842-a4a0d9496fad" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/csv/", "property": "og:url" }, { "content": "CSV | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "A [comma-separated values", "property": "og:description" } ], "title": "CSV | 🦜️🔗 LangChain" }
[Document(page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0}, lookup_index=0), Document(page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}, lookup_index=0), Document(page_content='Team: Yankees\n"Payroll (millions)": 197.96\n"Wins": 95', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 2}, lookup_index=0), Document(page_content='Team: Giants\n"Payroll (millions)": 117.62\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 3}, lookup_index=0), Document(page_content='Team: Braves\n"Payroll (millions)": 83.31\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 4}, lookup_index=0), Document(page_content='Team: Athletics\n"Payroll (millions)": 55.37\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 5}, lookup_index=0), Document(page_content='Team: Rangers\n"Payroll (millions)": 120.51\n"Wins": 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 6}, lookup_index=0), Document(page_content='Team: Orioles\n"Payroll (millions)": 81.43\n"Wins": 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 7}, lookup_index=0), Document(page_content='Team: Rays\n"Payroll (millions)": 64.17\n"Wins": 90', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 8}, lookup_index=0), Document(page_content='Team: Angels\n"Payroll (millions)": 154.49\n"Wins": 89', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 9}, lookup_index=0), Document(page_content='Team: Tigers\n"Payroll (millions)": 132.30\n"Wins": 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 10}, lookup_index=0), Document(page_content='Team: Cardinals\n"Payroll (millions)": 110.30\n"Wins": 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 11}, lookup_index=0), Document(page_content='Team: Dodgers\n"Payroll (millions)": 95.14\n"Wins": 86', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 12}, lookup_index=0), Document(page_content='Team: White Sox\n"Payroll (millions)": 96.92\n"Wins": 85', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 13}, lookup_index=0), Document(page_content='Team: Brewers\n"Payroll (millions)": 97.65\n"Wins": 83', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 14}, lookup_index=0), Document(page_content='Team: Phillies\n"Payroll (millions)": 174.54\n"Wins": 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 15}, lookup_index=0), Document(page_content='Team: Diamondbacks\n"Payroll (millions)": 74.28\n"Wins": 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 16}, lookup_index=0), Document(page_content='Team: Pirates\n"Payroll (millions)": 63.43\n"Wins": 79', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 17}, lookup_index=0), Document(page_content='Team: Padres\n"Payroll (millions)": 55.24\n"Wins": 76', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 18}, lookup_index=0), Document(page_content='Team: Mariners\n"Payroll (millions)": 81.97\n"Wins": 75', lookup_str='', 
metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 19}, lookup_index=0), Document(page_content='Team: Mets\n"Payroll (millions)": 93.35\n"Wins": 74', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 20}, lookup_index=0), Document(page_content='Team: Blue Jays\n"Payroll (millions)": 75.48\n"Wins": 73', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 21}, lookup_index=0), Document(page_content='Team: Royals\n"Payroll (millions)": 60.91\n"Wins": 72', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 22}, lookup_index=0), Document(page_content='Team: Marlins\n"Payroll (millions)": 118.07\n"Wins": 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 23}, lookup_index=0), Document(page_content='Team: Red Sox\n"Payroll (millions)": 173.18\n"Wins": 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 24}, lookup_index=0), Document(page_content='Team: Indians\n"Payroll (millions)": 78.43\n"Wins": 68', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 25}, lookup_index=0), Document(page_content='Team: Twins\n"Payroll (millions)": 94.08\n"Wins": 66', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 26}, lookup_index=0), Document(page_content='Team: Rockies\n"Payroll (millions)": 78.06\n"Wins": 64', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 27}, lookup_index=0), Document(page_content='Team: Cubs\n"Payroll (millions)": 88.19\n"Wins": 61', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 28}, lookup_index=0), Document(page_content='Team: Astros\n"Payroll (millions)": 60.65\n"Wins": 55', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 29}, lookup_index=0)] See the csv module documentation for more information of what csv args are supported. 
[Document(page_content='MLB Team: Team\nPayroll in millions: "Payroll (millions)"\nWins: "Wins"', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0}, lookup_index=0), Document(page_content='MLB Team: Nationals\nPayroll in millions: 81.34\nWins: 98', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}, lookup_index=0), Document(page_content='MLB Team: Reds\nPayroll in millions: 82.20\nWins: 97', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 2}, lookup_index=0), Document(page_content='MLB Team: Yankees\nPayroll in millions: 197.96\nWins: 95', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 3}, lookup_index=0), Document(page_content='MLB Team: Giants\nPayroll in millions: 117.62\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 4}, lookup_index=0), Document(page_content='MLB Team: Braves\nPayroll in millions: 83.31\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 5}, lookup_index=0), Document(page_content='MLB Team: Athletics\nPayroll in millions: 55.37\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 6}, lookup_index=0), Document(page_content='MLB Team: Rangers\nPayroll in millions: 120.51\nWins: 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 7}, lookup_index=0), Document(page_content='MLB Team: Orioles\nPayroll in millions: 81.43\nWins: 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 8}, lookup_index=0), Document(page_content='MLB Team: Rays\nPayroll in millions: 64.17\nWins: 90', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 9}, lookup_index=0), Document(page_content='MLB Team: Angels\nPayroll in millions: 154.49\nWins: 89', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 10}, lookup_index=0), Document(page_content='MLB Team: Tigers\nPayroll in millions: 132.30\nWins: 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 11}, lookup_index=0), Document(page_content='MLB Team: Cardinals\nPayroll in millions: 110.30\nWins: 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 12}, lookup_index=0), Document(page_content='MLB Team: Dodgers\nPayroll in millions: 95.14\nWins: 86', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 13}, lookup_index=0), Document(page_content='MLB Team: White Sox\nPayroll in millions: 96.92\nWins: 85', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 14}, lookup_index=0), Document(page_content='MLB Team: Brewers\nPayroll in millions: 97.65\nWins: 83', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 15}, lookup_index=0), Document(page_content='MLB Team: Phillies\nPayroll in millions: 174.54\nWins: 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 16}, lookup_index=0), Document(page_content='MLB Team: Diamondbacks\nPayroll in millions: 74.28\nWins: 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 17}, lookup_index=0), Document(page_content='MLB Team: Pirates\nPayroll in millions: 63.43\nWins: 79', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 18}, lookup_index=0), Document(page_content='MLB Team: Padres\nPayroll in millions: 55.24\nWins: 
76', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 19}, lookup_index=0), Document(page_content='MLB Team: Mariners\nPayroll in millions: 81.97\nWins: 75', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 20}, lookup_index=0), Document(page_content='MLB Team: Mets\nPayroll in millions: 93.35\nWins: 74', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 21}, lookup_index=0), Document(page_content='MLB Team: Blue Jays\nPayroll in millions: 75.48\nWins: 73', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 22}, lookup_index=0), Document(page_content='MLB Team: Royals\nPayroll in millions: 60.91\nWins: 72', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 23}, lookup_index=0), Document(page_content='MLB Team: Marlins\nPayroll in millions: 118.07\nWins: 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 24}, lookup_index=0), Document(page_content='MLB Team: Red Sox\nPayroll in millions: 173.18\nWins: 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 25}, lookup_index=0), Document(page_content='MLB Team: Indians\nPayroll in millions: 78.43\nWins: 68', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 26}, lookup_index=0), Document(page_content='MLB Team: Twins\nPayroll in millions: 94.08\nWins: 66', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 27}, lookup_index=0), Document(page_content='MLB Team: Rockies\nPayroll in millions: 78.06\nWins: 64', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 28}, lookup_index=0), Document(page_content='MLB Team: Cubs\nPayroll in millions: 88.19\nWins: 61', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 29}, lookup_index=0), Document(page_content='MLB Team: Astros\nPayroll in millions: 60.65\nWins: 55', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 30}, lookup_index=0)] Use the source_column argument to specify a source for the document created from each row. Otherwise file_path will be used as the source for all documents created from the CSV file. This is useful when using documents loaded from CSV files for chains that answer questions using sources. 
[Document(page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98', lookup_str='', metadata={'source': 'Nationals', 'row': 0}, lookup_index=0), Document(page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97', lookup_str='', metadata={'source': 'Reds', 'row': 1}, lookup_index=0), Document(page_content='Team: Yankees\n"Payroll (millions)": 197.96\n"Wins": 95', lookup_str='', metadata={'source': 'Yankees', 'row': 2}, lookup_index=0), Document(page_content='Team: Giants\n"Payroll (millions)": 117.62\n"Wins": 94', lookup_str='', metadata={'source': 'Giants', 'row': 3}, lookup_index=0), Document(page_content='Team: Braves\n"Payroll (millions)": 83.31\n"Wins": 94', lookup_str='', metadata={'source': 'Braves', 'row': 4}, lookup_index=0), Document(page_content='Team: Athletics\n"Payroll (millions)": 55.37\n"Wins": 94', lookup_str='', metadata={'source': 'Athletics', 'row': 5}, lookup_index=0), Document(page_content='Team: Rangers\n"Payroll (millions)": 120.51\n"Wins": 93', lookup_str='', metadata={'source': 'Rangers', 'row': 6}, lookup_index=0), Document(page_content='Team: Orioles\n"Payroll (millions)": 81.43\n"Wins": 93', lookup_str='', metadata={'source': 'Orioles', 'row': 7}, lookup_index=0), Document(page_content='Team: Rays\n"Payroll (millions)": 64.17\n"Wins": 90', lookup_str='', metadata={'source': 'Rays', 'row': 8}, lookup_index=0), Document(page_content='Team: Angels\n"Payroll (millions)": 154.49\n"Wins": 89', lookup_str='', metadata={'source': 'Angels', 'row': 9}, lookup_index=0), Document(page_content='Team: Tigers\n"Payroll (millions)": 132.30\n"Wins": 88', lookup_str='', metadata={'source': 'Tigers', 'row': 10}, lookup_index=0), Document(page_content='Team: Cardinals\n"Payroll (millions)": 110.30\n"Wins": 88', lookup_str='', metadata={'source': 'Cardinals', 'row': 11}, lookup_index=0), Document(page_content='Team: Dodgers\n"Payroll (millions)": 95.14\n"Wins": 86', lookup_str='', metadata={'source': 'Dodgers', 'row': 12}, lookup_index=0), Document(page_content='Team: White Sox\n"Payroll (millions)": 96.92\n"Wins": 85', lookup_str='', metadata={'source': 'White Sox', 'row': 13}, lookup_index=0), Document(page_content='Team: Brewers\n"Payroll (millions)": 97.65\n"Wins": 83', lookup_str='', metadata={'source': 'Brewers', 'row': 14}, lookup_index=0), Document(page_content='Team: Phillies\n"Payroll (millions)": 174.54\n"Wins": 81', lookup_str='', metadata={'source': 'Phillies', 'row': 15}, lookup_index=0), Document(page_content='Team: Diamondbacks\n"Payroll (millions)": 74.28\n"Wins": 81', lookup_str='', metadata={'source': 'Diamondbacks', 'row': 16}, lookup_index=0), Document(page_content='Team: Pirates\n"Payroll (millions)": 63.43\n"Wins": 79', lookup_str='', metadata={'source': 'Pirates', 'row': 17}, lookup_index=0), Document(page_content='Team: Padres\n"Payroll (millions)": 55.24\n"Wins": 76', lookup_str='', metadata={'source': 'Padres', 'row': 18}, lookup_index=0), Document(page_content='Team: Mariners\n"Payroll (millions)": 81.97\n"Wins": 75', lookup_str='', metadata={'source': 'Mariners', 'row': 19}, lookup_index=0), Document(page_content='Team: Mets\n"Payroll (millions)": 93.35\n"Wins": 74', lookup_str='', metadata={'source': 'Mets', 'row': 20}, lookup_index=0), Document(page_content='Team: Blue Jays\n"Payroll (millions)": 75.48\n"Wins": 73', lookup_str='', metadata={'source': 'Blue Jays', 'row': 21}, lookup_index=0), Document(page_content='Team: Royals\n"Payroll (millions)": 60.91\n"Wins": 72', lookup_str='', metadata={'source': 'Royals', 'row': 22}, 
lookup_index=0), Document(page_content='Team: Marlins\n"Payroll (millions)": 118.07\n"Wins": 69', lookup_str='', metadata={'source': 'Marlins', 'row': 23}, lookup_index=0), Document(page_content='Team: Red Sox\n"Payroll (millions)": 173.18\n"Wins": 69', lookup_str='', metadata={'source': 'Red Sox', 'row': 24}, lookup_index=0), Document(page_content='Team: Indians\n"Payroll (millions)": 78.43\n"Wins": 68', lookup_str='', metadata={'source': 'Indians', 'row': 25}, lookup_index=0), Document(page_content='Team: Twins\n"Payroll (millions)": 94.08\n"Wins": 66', lookup_str='', metadata={'source': 'Twins', 'row': 26}, lookup_index=0), Document(page_content='Team: Rockies\n"Payroll (millions)": 78.06\n"Wins": 64', lookup_str='', metadata={'source': 'Rockies', 'row': 27}, lookup_index=0), Document(page_content='Team: Cubs\n"Payroll (millions)": 88.19\n"Wins": 61', lookup_str='', metadata={'source': 'Cubs', 'row': 28}, lookup_index=0), Document(page_content='Team: Astros\n"Payroll (millions)": 60.65\n"Wins": 55', lookup_str='', metadata={'source': 'Astros', 'row': 29}, lookup_index=0)] You can also load the table using the UnstructuredCSVLoader. One advantage of using UnstructuredCSVLoader is that if you use it in "elements" mode, an HTML representation of the table will be available in the metadata. <table border="1" class="dataframe"> <tbody> <tr> <td>Nationals</td> <td>81.34</td> <td>98</td> </tr> <tr> <td>Reds</td> <td>82.20</td> <td>97</td> </tr> <tr> <td>Yankees</td> <td>197.96</td> <td>95</td> </tr> <tr> <td>Giants</td> <td>117.62</td> <td>94</td> </tr> <tr> <td>Braves</td> <td>83.31</td> <td>94</td> </tr> <tr> <td>Athletics</td> <td>55.37</td> <td>94</td> </tr> <tr> <td>Rangers</td> <td>120.51</td> <td>93</td> </tr> <tr> <td>Orioles</td> <td>81.43</td> <td>93</td> </tr> <tr> <td>Rays</td> <td>64.17</td> <td>90</td> </tr> <tr> <td>Angels</td> <td>154.49</td> <td>89</td> </tr> <tr> <td>Tigers</td> <td>132.30</td> <td>88</td> </tr> <tr> <td>Cardinals</td> <td>110.30</td> <td>88</td> </tr> <tr> <td>Dodgers</td> <td>95.14</td> <td>86</td> </tr> <tr> <td>White Sox</td> <td>96.92</td> <td>85</td> </tr> <tr> <td>Brewers</td> <td>97.65</td> <td>83</td> </tr> <tr> <td>Phillies</td> <td>174.54</td> <td>81</td> </tr> <tr> <td>Diamondbacks</td> <td>74.28</td> <td>81</td> </tr> <tr> <td>Pirates</td> <td>63.43</td> <td>79</td> </tr> <tr> <td>Padres</td> <td>55.24</td> <td>76</td> </tr> <tr> <td>Mariners</td> <td>81.97</td> <td>75</td> </tr> <tr> <td>Mets</td> <td>93.35</td> <td>74</td> </tr> <tr> <td>Blue Jays</td> <td>75.48</td> <td>73</td> </tr> <tr> <td>Royals</td> <td>60.91</td> <td>72</td> </tr> <tr> <td>Marlins</td> <td>118.07</td> <td>69</td> </tr> <tr> <td>Red Sox</td> <td>173.18</td> <td>69</td> </tr> <tr> <td>Indians</td> <td>78.43</td> <td>68</td> </tr> <tr> <td>Twins</td> <td>94.08</td> <td>66</td> </tr> <tr> <td>Rockies</td> <td>78.06</td> <td>64</td> </tr> <tr> <td>Cubs</td> <td>88.19</td> <td>61</td> </tr> <tr> <td>Astros</td> <td>60.65</td> <td>55</td> </tr> </tbody> </table>
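The extraction above keeps the CSV page's outputs but not the code cells that produced them, so the following is a hedged reconstruction of the four loader calls implied by those outputs: a default load, a load with custom `csv_args`, a load with `source_column`, and `UnstructuredCSVLoader` in `"elements"` mode. The file path matches the `source` metadata above; the `fieldnames` values are inferred from the "MLB Team / Payroll in millions / Wins" output, and the `text_as_html` metadata key is an assumption about where the HTML table lands.

```
# Reconstruction (assumptions noted above) of the loader calls behind the outputs.
from langchain_community.document_loaders.csv_loader import (
    CSVLoader,
    UnstructuredCSVLoader,
)

file_path = "./example_data/mlb_teams_2012.csv"

# 1. Default: one Document per row, columns rendered as "key: value" lines.
loader = CSVLoader(file_path=file_path)
docs = loader.load()

# 2. Custom csv args: overriding fieldnames renames the keys; note that the
#    header row is then treated as data (it appears as row 0 in the output).
loader = CSVLoader(
    file_path=file_path,
    csv_args={
        "delimiter": ",",
        "quotechar": '"',
        "fieldnames": ["MLB Team", "Payroll in millions", "Wins"],
    },
)
docs = loader.load()

# 3. source_column: use the "Team" column as each Document's source.
loader = CSVLoader(file_path=file_path, source_column="Team")
docs = loader.load()

# 4. UnstructuredCSVLoader in "elements" mode (requires the `unstructured`
#    package) keeps an HTML rendering of the table in the metadata
#    (key assumed here to be "text_as_html").
loader = UnstructuredCSVLoader(file_path=file_path, mode="elements")
docs = loader.load()
print(docs[0].metadata.get("text_as_html", "")[:200])
```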
https://python.langchain.com/docs/integrations/document_loaders/notion/
## Notion DB 1/2

> [Notion](https://www.notion.so/) is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.

This notebook covers how to load documents from a Notion database dump. In order to get this Notion dump, follow these instructions:

## 🧑 Instructions for ingesting your own dataset[​](#instructions-for-ingesting-your-own-dataset "Direct link to 🧑 Instructions for ingesting your own dataset")

Export your dataset from Notion. You can do this by clicking on the three dots in the upper right-hand corner and then clicking `Export`. When exporting, make sure to select the `Markdown & CSV` format option. This will produce a `.zip` file in your Downloads folder. Move the `.zip` file into this repository.

Run the following command to unzip the zip file (replace the `Export...` with your own file name as needed).

```
unzip Export-d3adfe0f-3131-4bf3-8987-a52017fc1bae.zip -d Notion_DB
```

Run the following code to ingest the data.

```
from langchain_community.document_loaders import NotionDirectoryLoader
```

```
loader = NotionDirectoryLoader("Notion_DB")
```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:20.516Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/notion/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/notion/", "description": "Notion is a collaboration platform with", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3458", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"notion\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:19 GMT", "etag": "W/\"066ff7e57d850955acc06d2d8b20adfd\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::qw5cn-1713753559913-74f8d998df42" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/notion/", "property": "og:url" }, { "content": "Notion DB 1/2 | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Notion is a collaboration platform with", "property": "og:description" } ], "title": "Notion DB 1/2 | 🦜️🔗 LangChain" }
Notion DB 1/2 Notion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management. This notebook covers how to load documents from a Notion database dump. In order to get this Notion dump, follow these instructions: 🧑 Instructions for ingesting your own dataset Export your dataset from Notion. You can do this by clicking on the three dots in the upper right-hand corner and then clicking Export. When exporting, make sure to select the Markdown & CSV format option. This will produce a .zip file in your Downloads folder. Move the .zip file into this repository. Run the following command to unzip the zip file (replace the Export... with your own file name as needed). unzip Export-d3adfe0f-3131-4bf3-8987-a52017fc1bae.zip -d Notion_DB Run the following code to ingest the data. from langchain_community.document_loaders import NotionDirectoryLoader loader = NotionDirectoryLoader("Notion_DB")
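Once the export has been unzipped into `Notion_DB`, a short sketch of the ingestion step might look like the following (the `source` metadata key is an assumption about how the loader labels each page, hence the defensive `.get`):

```
# Minimal sketch of loading the exported Notion pages from ./Notion_DB
from langchain_community.document_loaders import NotionDirectoryLoader

loader = NotionDirectoryLoader("Notion_DB")
docs = loader.load()

print(f"Loaded {len(docs)} Notion pages")
for doc in docs[:3]:
    # each Document corresponds to one exported Markdown page
    print(doc.metadata.get("source"), len(doc.page_content), "characters")
```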
https://python.langchain.com/docs/integrations/document_loaders/datadog_logs/
This loader fetches the logs from your applications in Datadog using the `datadog_api_client` Python package. You must initialize the loader with your `Datadog API key` and `APP key`, and you need to pass in the query to extract the desired logs. ``` [Document(page_content='message: grep: /etc/datadog-agent/system-probe.yaml: No such file or directory', metadata={'id': 'AgAAAYkwpLImvkjRpQAAAAAAAAAYAAAAAEFZa3dwTUFsQUFEWmZfLU5QdElnM3dBWQAAACQAAAAAMDE4OTMwYTQtYzk3OS00MmJjLTlhNDAtOTY4N2EwY2I5ZDdk', 'status': 'error', 'service': 'agent', 'tags': ['accessible-from-goog-gke-node', 'allow-external-ingress-high-ports', 'allow-external-ingress-http', 'allow-external-ingress-https', 'container_id:c7d8ecd27b5b3cfdf3b0df04b8965af6f233f56b7c3c2ffabfab5e3b6ccbd6a5', 'container_name:lab_datadog_1', 'datadog.pipelines:false', 'datadog.submission_auth:private_api_key', 'docker_image:datadog/agent:7.41.1', 'env:dd101-dev', 'hostname:lab-host', 'image_name:datadog/agent', 'image_tag:7.41.1', 'instance-id:7497601202021312403', 'instance-type:custom-1-4096', 'instruqt_aws_accounts:', 'instruqt_azure_subscriptions:', 'instruqt_gcp_projects:', 'internal-hostname:lab-host.d4rjybavkary.svc.cluster.local', 'numeric_project_id:3390740675', 'p-d4rjybavkary', 'project:instruqt-prod', 'service:agent', 'short_image:agent', 'source:agent', 'zone:europe-west1-b'], 'timestamp': datetime.datetime(2023, 7, 7, 13, 57, 27, 206000, tzinfo=tzutc())}), Document(page_content='message: grep: /etc/datadog-agent/system-probe.yaml: No such file or directory', metadata={'id': 'AgAAAYkwpLImvkjRpgAAAAAAAAAYAAAAAEFZa3dwTUFsQUFEWmZfLU5QdElnM3dBWgAAACQAAAAAMDE4OTMwYTQtYzk3OS00MmJjLTlhNDAtOTY4N2EwY2I5ZDdk', 'status': 'error', 'service': 'agent', 'tags': ['accessible-from-goog-gke-node', 'allow-external-ingress-high-ports', 'allow-external-ingress-http', 'allow-external-ingress-https', 'container_id:c7d8ecd27b5b3cfdf3b0df04b8965af6f233f56b7c3c2ffabfab5e3b6ccbd6a5', 'container_name:lab_datadog_1', 'datadog.pipelines:false', 'datadog.submission_auth:private_api_key', 'docker_image:datadog/agent:7.41.1', 'env:dd101-dev', 'hostname:lab-host', 'image_name:datadog/agent', 'image_tag:7.41.1', 'instance-id:7497601202021312403', 'instance-type:custom-1-4096', 'instruqt_aws_accounts:', 'instruqt_azure_subscriptions:', 'instruqt_gcp_projects:', 'internal-hostname:lab-host.d4rjybavkary.svc.cluster.local', 'numeric_project_id:3390740675', 'p-d4rjybavkary', 'project:instruqt-prod', 'service:agent', 'short_image:agent', 'source:agent', 'zone:europe-west1-b'], 'timestamp': datetime.datetime(2023, 7, 7, 13, 57, 27, 206000, tzinfo=tzutc())})] ```
https://python.langchain.com/docs/integrations/document_loaders/notiondb/
## Notion DB 2/2

> [Notion](https://www.notion.so/) is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.

`NotionDBLoader` is a Python class for loading content from a `Notion` database. It retrieves pages from the database, reads their content, and returns a list of Document objects.

## Requirements[​](#requirements "Direct link to Requirements")

* A `Notion` Database
* Notion Integration Token

## Setup[​](#setup "Direct link to Setup")

### 1\. Create a Notion Table Database[​](#create-a-notion-table-database "Direct link to 1. Create a Notion Table Database")

Create a new table database in Notion. Any columns you add to the database will be treated as metadata. For example, you can add the following columns:

* Title: set Title as the default property.
* Categories: A Multi-select property to store categories associated with the page.
* Keywords: A Multi-select property to store keywords associated with the page.

Add your content to the body of each page in the database. The NotionDBLoader will extract the content and metadata from these pages.

### 2\. Create a Notion Integration[​](#create-a-notion-integration "Direct link to 2. Create a Notion Integration")

To create a Notion Integration, follow these steps:

1. Visit the [Notion Developers](https://www.notion.com/my-integrations) page and log in with your Notion account.
2. Click on the “+ New integration” button.
3. Give your integration a name and choose the workspace where your database is located.
4. Select the required capabilities; this extension only needs the Read content capability.
5. Click the “Submit” button to create the integration.

Once the integration is created, you’ll be provided with an `Integration Token (API key)`. Copy this token and keep it safe, as you’ll need it to use the NotionDBLoader.

### 3\. Connect the Integration to the Database[​](#connect-the-integration-to-the-database "Direct link to 3. Connect the Integration to the Database")

To connect your integration to the database, follow these steps:

1. Open your database in Notion.
2. Click on the three-dot menu icon in the top right corner of the database view.
3. Click on the “+ New integration” button.
4. Find your integration; you may need to start typing its name in the search box.
5. Click on the “Connect” button to connect the integration to the database.

### 4\. Get the Database ID[​](#get-the-database-id "Direct link to 4. Get the Database ID")

To get the database ID, follow these steps:

1. Open your database in Notion.
2. Click on the three-dot menu icon in the top right corner of the database view.
3. Select “Copy link” from the menu to copy the database URL to your clipboard.
4. The database ID is the long string of alphanumeric characters found in the URL. It typically looks like this: [https://www.notion.so/username/8935f9d140a04f95a872520c4f123456?v=…](https://www.notion.so/username/8935f9d140a04f95a872520c4f123456?v=%E2%80%A6). In this example, the database ID is 8935f9d140a04f95a872520c4f123456.

With the database properly set up and the integration token and database ID in hand, you can now use the NotionDBLoader code to load content and metadata from your Notion database.

## Usage[​](#usage "Direct link to Usage")

NotionDBLoader is part of the `langchain_community` package’s document loaders.
You can use it as follows:

```
from getpass import getpass

NOTION_TOKEN = getpass()
DATABASE_ID = getpass()
```

```
from langchain_community.document_loaders import NotionDBLoader
```

```
loader = NotionDBLoader(
    integration_token=NOTION_TOKEN,
    database_id=DATABASE_ID,
    request_timeout_sec=30,  # optional, defaults to 10
)
```
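The snippet above only constructs the loader; a minimal sketch of actually loading the pages is a plain `load()` call, with the returned Documents carrying the page body as content and the database columns as metadata:

```
docs = loader.load()

# page_content is the page body; metadata comes from the database columns
# (e.g. the Title, Categories and Keywords properties described above).
print(docs[0].page_content[:100])
print(docs[0].metadata)
```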
https://python.langchain.com/docs/integrations/document_loaders/mongodb/
## MongoDB

[MongoDB](https://www.mongodb.com/) is a NoSQL, document-oriented database that supports JSON-like documents with a dynamic schema.

## Overview[​](#overview "Direct link to Overview")

The MongoDB Document Loader returns a list of Langchain Documents from a MongoDB database.

The Loader requires the following parameters:

* MongoDB connection string
* MongoDB database name
* MongoDB collection name
* (Optional) Content Filter dictionary
* (Optional) List of field names to include in the output

The output takes the following format:

* `page_content`: the Mongo document (rendered as a string)
* `metadata`: `{'database': '[database_name]', 'collection': '[collection_name]'}`

## Load the Document Loader[​](#load-the-document-loader "Direct link to Load the Document Loader")

```
# add this import for running in jupyter notebook
import nest_asyncio

nest_asyncio.apply()
```

```
from langchain_community.document_loaders.mongodb import MongodbLoader
```

```
loader = MongodbLoader(
    connection_string="mongodb://localhost:27017/",
    db_name="sample_restaurants",
    collection_name="restaurants",
    filter_criteria={"borough": "Bronx", "cuisine": "Bakery"},
    field_names=["name", "address"],
)
```

```
docs = loader.load()

len(docs)
```

```
Document(page_content="Morris Park Bake Shop {'building': '1007', 'coord': [-73.856077, 40.848447], 'street': 'Morris Park Ave', 'zipcode': '10462'}", metadata={'database': 'sample_restaurants', 'collection': 'restaurants'})
```
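A small follow-up sketch for inspecting the result; it uses nothing beyond the calls already shown above, with `docs` being the list returned by `loader.load()`:

```
# Look at the first loaded restaurant and its provenance metadata.
first = docs[0]
print(first.page_content)
print(first.metadata)  # e.g. {'database': 'sample_restaurants', 'collection': 'restaurants'}
```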
https://python.langchain.com/docs/integrations/document_loaders/diffbot/
Unlike traditional web scraping tools, [Diffbot](https://docs.diffbot.com/docs) doesn’t require any rules to read the content on a page. It starts with computer vision, which classifies a page into one of 20 possible types. Content is then interpreted by a machine learning model trained to identify the key attributes on a page based on its type. The result is a website transformed into clean structured data (like JSON or CSV), ready for your application. This covers how to extract HTML documents from a list of URLs using the [Diffbot extract API](https://www.diffbot.com/products/extract/), into a document format that we can use downstream. The Diffbot Extract API Requires an API token. Once you have it, you can extract the data. ``` [Document(page_content='LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:\nBe data-aware: connect a language model to other sources of data\nBe agentic: allow a language model to interact with its environment\nThe LangChain framework is designed with the above principles in mind.\nThis is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.\nGetting Started\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\nGetting Started Documentation\nModules\nThere are several main modules that LangChain provides support for. For each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides. These modules are, in increasing order of complexity:\nModels: The various model types and model integrations LangChain supports.\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\nUse Cases\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\nPersonal Assistants: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\nQuestion Answering: The second big LangChain use case. 
Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\nInteracting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.\nExtraction: Extract structured information from text.\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\nReference Docs\nAll of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\nReference Documentation\nLangChain Ecosystem\nGuides for how other companies/products can be used with LangChain\nLangChain Ecosystem\nAdditional Resources\nAdditional collection of resources we think may be useful as you develop your application!\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\nModel Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\nDiscord: Join us on our Discord to discuss all things LangChain!\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.', metadata={'source': 'https://python.langchain.com/en/latest/index.html'})] ```
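The page above jumps straight to the extracted output. For context, here is a minimal sketch of the loader call that would produce a result like this, assuming the `DiffbotLoader` class from `langchain_community` and a `DIFFBOT_API_TOKEN` environment variable holding your token:

```
import os

from langchain_community.document_loaders import DiffbotLoader

# List of URLs to run through the Diffbot Extract API.
urls = ["https://python.langchain.com/en/latest/index.html"]

loader = DiffbotLoader(urls=urls, api_token=os.environ.get("DIFFBOT_API_TOKEN"))
documents = loader.load()
```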
https://python.langchain.com/docs/integrations/document_loaders/news/
This covers how to load HTML news articles from a list of URLs into a document format that we can use downstream. ``` First article: page_content='In testimony to the congressional committee examining the 6 January riot, Mrs Powell said she did not review all of the many claims of election fraud she made, telling them that "no reasonable person" would view her claims as fact. Neither she nor her representatives have commented.' metadata={'title': 'Donald Trump indictment: What do we know about the six co-conspirators?', 'link': 'https://www.bbc.com/news/world-us-canada-66388172', 'authors': [], 'language': 'en', 'description': 'Six people accused of helping Mr Trump undermine the election have been described by prosecutors.', 'publish_date': None}Second article: page_content='Ms Williams added: "If there\'s anything that I can do in my power to ensure that dancers or singers or whoever decides to work with her don\'t have to go through that same experience, I\'m going to do that."' metadata={'title': "Lizzo dancers Arianna Davis and Crystal Williams: 'No one speaks out, they are scared'", 'link': 'https://www.bbc.com/news/entertainment-arts-66384971', 'authors': [], 'language': 'en', 'description': 'The US pop star is being sued for sexual harassment and fat-shaming but has yet to comment.', 'publish_date': None} ``` ``` First article: page_content='In testimony to the congressional committee examining the 6 January riot, Mrs Powell said she did not review all of the many claims of election fraud she made, telling them that "no reasonable person" would view her claims as fact. Neither she nor her representatives have commented.' metadata={'title': 'Donald Trump indictment: What do we know about the six co-conspirators?', 'link': 'https://www.bbc.com/news/world-us-canada-66388172', 'authors': [], 'language': 'en', 'description': 'Six people accused of helping Mr Trump undermine the election have been described by prosecutors.', 'publish_date': None, 'keywords': ['powell', 'know', 'donald', 'trump', 'review', 'indictment', 'telling', 'view', 'reasonable', 'person', 'testimony', 'coconspirators', 'riot', 'representatives', 'claims'], 'summary': 'In testimony to the congressional committee examining the 6 January riot, Mrs Powell said she did not review all of the many claims of election fraud she made, telling them that "no reasonable person" would view her claims as fact.\nNeither she nor her representatives have commented.'}Second article: page_content='Ms Williams added: "If there\'s anything that I can do in my power to ensure that dancers or singers or whoever decides to work with her don\'t have to go through that same experience, I\'m going to do that."' metadata={'title': "Lizzo dancers Arianna Davis and Crystal Williams: 'No one speaks out, they are scared'", 'link': 'https://www.bbc.com/news/entertainment-arts-66384971', 'authors': [], 'language': 'en', 'description': 'The US pop star is being sued for sexual harassment and fat-shaming but has yet to comment.', 'publish_date': None, 'keywords': ['davis', 'lizzo', 'singers', 'experience', 'crystal', 'ensure', 'arianna', 'theres', 'williams', 'power', 'going', 'dancers', 'im', 'speaks', 'work', 'ms', 'scared'], 'summary': 'Ms Williams added: "If there\'s anything that I can do in my power to ensure that dancers or singers or whoever decides to work with her don\'t have to go through that same experience, I\'m going to do that."'} ``` ``` ['powell', 'know', 'donald', 'trump', 'review', 'indictment', 'telling', 'view', 'reasonable', 
'person', 'testimony', 'coconspirators', 'riot', 'representatives', 'claims'] ``` ``` 'In testimony to the congressional committee examining the 6 January riot, Mrs Powell said she did not review all of the many claims of election fraud she made, telling them that "no reasonable person" would view her claims as fact.\nNeither she nor her representatives have commented.' ```
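The outputs above were produced without and with NLP post-processing. A minimal sketch of the corresponding loader calls, assuming `NewsURLLoader` from `langchain_community`; its `nlp=True` option is what adds the `keywords` and `summary` metadata shown in the second result:

```
from langchain_community.document_loaders import NewsURLLoader

urls = [
    "https://www.bbc.com/news/world-us-canada-66388172",
    "https://www.bbc.com/news/entertainment-arts-66384971",
]

# Basic loading: title, link, description, etc. in metadata.
loader = NewsURLLoader(urls=urls)
docs = loader.load()
print("First article:", docs[0])
print("Second article:", docs[1])

# With NLP enabled, keywords and a summary are added to the metadata.
loader_nlp = NewsURLLoader(urls=urls, nlp=True)
docs_nlp = loader_nlp.load()
print(docs_nlp[0].metadata["keywords"])
print(docs_nlp[0].metadata["summary"])
```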
https://python.langchain.com/docs/integrations/document_loaders/nuclia/
## Nuclia

> [Nuclia](https://nuclia.com/) automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing.

> The `Nuclia Understanding API` supports the processing of unstructured data, including text, web pages, documents, and audio/video contents. It extracts all texts wherever they are (using speech-to-text or OCR when needed); it also extracts metadata, embedded files (like images in a PDF), and web links. If machine learning is enabled, it identifies entities, provides a summary of the content and generates embeddings for all the sentences.

## Setup[​](#setup "Direct link to Setup")

To use the `Nuclia Understanding API`, you need to have a Nuclia account. You can create one for free at [https://nuclia.cloud](https://nuclia.cloud/), and then [create a NUA key](https://docs.nuclia.dev/docs/docs/using/understanding/intro).

```
%pip install --upgrade --quiet protobuf
%pip install --upgrade --quiet nucliadb-protos
```

```
import os

os.environ["NUCLIA_ZONE"] = "<YOUR_ZONE>"  # e.g. europe-1
os.environ["NUCLIA_NUA_KEY"] = "<YOUR_API_KEY>"
```

## Example[​](#example "Direct link to Example")

To use the Nuclia document loader, you need to instantiate a `NucliaUnderstandingAPI` tool:

```
from langchain_community.tools.nuclia import NucliaUnderstandingAPI

nua = NucliaUnderstandingAPI(enable_ml=False)
```

```
from langchain_community.document_loaders.nuclia import NucliaLoader

loader = NucliaLoader("./interview.mp4", nua)
```

You can now call `load` in a loop until you get the document.

```
import time

pending = True
while pending:
    time.sleep(15)
    docs = loader.load()
    if len(docs) > 0:
        print(docs[0].page_content)
        print(docs[0].metadata)
        pending = False
    else:
        print("waiting...")
```

## Retrieved information[​](#retrieved-information "Direct link to Retrieved information")

Nuclia returns the following information:

* file metadata
* extracted text
* nested text (like text in an embedded image)
* paragraph and sentence splitting (defined by the position of their first and last characters, plus start time and end time for a video or audio file)
* links
* a thumbnail
* embedded files

Note: Generated files (thumbnail, extracted embedded files, etc.) are provided as a token. You can download them with the [`/processing/download` endpoint](https://docs.nuclia.dev/docs/api#operation/Download_binary_file_processing_download_get).

Also, at any level, if an attribute exceeds a certain size, it will be put in a downloadable file and will be replaced in the document by a file pointer. This will consist of `{"file": {"uri": "JWT_TOKEN"}}`. The rule is that if the size of the message is greater than 1000000 characters, the biggest parts will be moved to downloadable files. First, the compression process will target vectors. If that is not enough, it will target large field metadata, and finally it will target extracted text.
https://python.langchain.com/docs/integrations/document_loaders/discord/
## Discord

> [Discord](https://discord.com/) is a VoIP and instant messaging social platform. Users have the ability to communicate with voice calls, video calls, text messaging, media and files in private chats or as part of communities called “servers”. A server is a collection of persistent chat rooms and voice channels which can be accessed via invite links.

Follow these steps to download your `Discord` data:

1. Go to your **User Settings**
2. Then go to **Privacy and Safety**
3. Head over to **Request all of my Data** and click on the **Request Data** button

It might take 30 days for you to receive your data. You’ll receive an email at the address which is registered with Discord. That email will have a download button that you can use to download your personal Discord data.

```
import os

import pandas as pd
```

```
path = input('Please enter the path to the contents of the Discord "messages" folder: ')
li = []
for f in os.listdir(path):
    expected_csv_path = os.path.join(path, f, "messages.csv")
    csv_exists = os.path.isfile(expected_csv_path)
    if csv_exists:
        df = pd.read_csv(expected_csv_path, index_col=None, header=0)
        li.append(df)

df = pd.concat(li, axis=0, ignore_index=True, sort=False)
```

```
from langchain_community.document_loaders.discord import DiscordChatLoader
```

```
loader = DiscordChatLoader(df, user_id_col="ID")
print(loader.load())
```
https://python.langchain.com/docs/integrations/document_loaders/duckdb/
## DuckDB

> [DuckDB](https://duckdb.org/) is an in-process SQL OLAP database management system.

Load a `DuckDB` query with one document per row.

```
%pip install --upgrade --quiet duckdb
```

```
from langchain_community.document_loaders import DuckDBLoader
```

```
%%file example.csv
Team,Payroll
Nationals,81.34
Reds,82.20
```

```
loader = DuckDBLoader("SELECT * FROM read_csv_auto('example.csv')")

data = loader.load()
```

```
[Document(page_content='Team: Nationals\nPayroll: 81.34', metadata={}),
 Document(page_content='Team: Reds\nPayroll: 82.2', metadata={})]
```

## Specifying Which Columns are Content vs Metadata[​](#specifying-which-columns-are-content-vs-metadata "Direct link to Specifying Which Columns are Content vs Metadata")

```
loader = DuckDBLoader(
    "SELECT * FROM read_csv_auto('example.csv')",
    page_content_columns=["Team"],
    metadata_columns=["Payroll"],
)

data = loader.load()
```

```
[Document(page_content='Team: Nationals', metadata={'Payroll': 81.34}),
 Document(page_content='Team: Reds', metadata={'Payroll': 82.2})]
```

```
loader = DuckDBLoader(
    "SELECT Team, Payroll, Team As source FROM read_csv_auto('example.csv')",
    metadata_columns=["source"],
)

data = loader.load()
```

```
[Document(page_content='Team: Nationals\nPayroll: 81.34\nsource: Nationals', metadata={'source': 'Nationals'}),
 Document(page_content='Team: Reds\nPayroll: 82.2\nsource: Reds', metadata={'source': 'Reds'})]
```
https://python.langchain.com/docs/integrations/document_loaders/dropbox/
## Dropbox

[Dropbox](https://en.wikipedia.org/wiki/Dropbox) is a file hosting service that brings everything together in one place: traditional files, cloud content, and web shortcuts.

This notebook covers how to load documents from _Dropbox_. In addition to common files such as text and PDF files, it also supports _Dropbox Paper_ files.

## Prerequisites[​](#prerequisites "Direct link to Prerequisites")

1. Create a Dropbox app.
2. Give the app these scope permissions: `files.metadata.read` and `files.content.read`.
3. Generate access token: [https://www.dropbox.com/developers/apps/create](https://www.dropbox.com/developers/apps/create).
4. `pip install dropbox` (requires `pip install "unstructured[pdf]"` for PDF filetype).

## Instructions[​](#instructions "Direct link to Instructions")

`DropboxLoader` requires you to create a Dropbox App and generate an access token. This can be done from [https://www.dropbox.com/developers/apps/create](https://www.dropbox.com/developers/apps/create). You also need to have the Dropbox Python SDK installed (`pip install dropbox`).

DropboxLoader can load data from a list of Dropbox file paths or a single Dropbox folder path. Both paths should be relative to the root directory of the Dropbox account linked to the access token.

```
Requirement already satisfied: dropbox in /Users/rbarragan/.local/share/virtualenvs/langchain-kv0dsrF5/lib/python3.11/site-packages (11.36.2)
Requirement already satisfied: requests>=2.16.2 in /Users/rbarragan/.local/share/virtualenvs/langchain-kv0dsrF5/lib/python3.11/site-packages (from dropbox) (2.31.0)
Requirement already satisfied: six>=1.12.0 in /Users/rbarragan/.local/share/virtualenvs/langchain-kv0dsrF5/lib/python3.11/site-packages (from dropbox) (1.16.0)
Requirement already satisfied: stone>=2 in /Users/rbarragan/.local/share/virtualenvs/langchain-kv0dsrF5/lib/python3.11/site-packages (from dropbox) (3.3.1)
Requirement already satisfied: charset-normalizer<4,>=2 in /Users/rbarragan/.local/share/virtualenvs/langchain-kv0dsrF5/lib/python3.11/site-packages (from requests>=2.16.2->dropbox) (3.2.0)
Requirement already satisfied: idna<4,>=2.5 in /Users/rbarragan/.local/share/virtualenvs/langchain-kv0dsrF5/lib/python3.11/site-packages (from requests>=2.16.2->dropbox) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in /Users/rbarragan/.local/share/virtualenvs/langchain-kv0dsrF5/lib/python3.11/site-packages (from requests>=2.16.2->dropbox) (2.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /Users/rbarragan/.local/share/virtualenvs/langchain-kv0dsrF5/lib/python3.11/site-packages (from requests>=2.16.2->dropbox) (2023.7.22)
Requirement already satisfied: ply>=3.4 in /Users/rbarragan/.local/share/virtualenvs/langchain-kv0dsrF5/lib/python3.11/site-packages (from stone>=2->dropbox) (3.11)
Note: you may need to restart the kernel to use updated packages.
```

```
from langchain_community.document_loaders import DropboxLoader
```

```
# Generate access token: https://www.dropbox.com/developers/apps/create.
dropbox_access_token = "<DROPBOX_ACCESS_TOKEN>"
# Dropbox root folder
dropbox_folder_path = ""
```

```
loader = DropboxLoader(
    dropbox_access_token=dropbox_access_token,
    dropbox_folder_path=dropbox_folder_path,
    recursive=False,
)
```

```
documents = loader.load()
```

```
File /JHSfLKn0.jpeg could not be decoded as text. Skipping.
File /A REPORT ON WILES’ CAMBRIDGE LECTURES.pdf could not be decoded as text. Skipping.
```

```
for document in documents:
    print(document)
```
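The example above loads a folder. The text also mentions loading a list of specific file paths; below is a hedged sketch of that variant. The `dropbox_file_paths` parameter name and the example paths are assumptions to verify against the `DropboxLoader` API reference:

```
loader = DropboxLoader(
    dropbox_access_token=dropbox_access_token,
    # Assumed parameter: a list of file paths relative to the Dropbox root.
    dropbox_file_paths=["/notes.txt", "/paper.pdf"],  # hypothetical example files
)

documents = loader.load()
for document in documents:
    print(document)
```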
https://python.langchain.com/docs/integrations/document_loaders/email/
## Email This notebook shows how to load email (`.eml`) or `Microsoft Outlook` (`.msg`) files. ## Using Unstructured[​](#using-unstructured "Direct link to Using Unstructured") ``` %pip install --upgrade --quiet unstructured ``` ``` from langchain_community.document_loaders import UnstructuredEmailLoader ``` ``` loader = UnstructuredEmailLoader("example_data/fake-email.eml") ``` ``` [Document(page_content='This is a test email to use for unit tests.\n\nImportant points:\n\nRoses are red\n\nViolets are blue', metadata={'source': 'example_data/fake-email.eml'})] ``` ### Retain Elements[​](#retain-elements "Direct link to Retain Elements") Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `mode="elements"`. ``` loader = UnstructuredEmailLoader("example_data/fake-email.eml", mode="elements") ``` ``` Document(page_content='This is a test email to use for unit tests.', metadata={'source': 'example_data/fake-email.eml', 'filename': 'fake-email.eml', 'file_directory': 'example_data', 'date': '2022-12-16T17:04:16-05:00', 'filetype': 'message/rfc822', 'sent_from': ['Matthew Robinson <mrobinson@unstructured.io>'], 'sent_to': ['Matthew Robinson <mrobinson@unstructured.io>'], 'subject': 'Test Email', 'category': 'NarrativeText'}) ``` ### Processing Attachments[​](#processing-attachments "Direct link to Processing Attachments") You can process attachments with `UnstructuredEmailLoader` by setting `process_attachments=True` in the constructor. By default, attachments will be partitioned using the `partition` function from `unstructured`. You can use a different partitioning function by passing the function to the `attachment_partitioner` kwarg. ``` loader = UnstructuredEmailLoader( "example_data/fake-email.eml", mode="elements", process_attachments=True,) ``` ``` Document(page_content='This is a test email to use for unit tests.', metadata={'source': 'example_data/fake-email.eml', 'filename': 'fake-email.eml', 'file_directory': 'example_data', 'date': '2022-12-16T17:04:16-05:00', 'filetype': 'message/rfc822', 'sent_from': ['Matthew Robinson <mrobinson@unstructured.io>'], 'sent_to': ['Matthew Robinson <mrobinson@unstructured.io>'], 'subject': 'Test Email', 'category': 'NarrativeText'}) ``` ## Using OutlookMessageLoader[​](#using-outlookmessageloader "Direct link to Using OutlookMessageLoader") ``` %pip install --upgrade --quiet extract_msg ``` ``` from langchain_community.document_loaders import OutlookMessageLoader ``` ``` loader = OutlookMessageLoader("example_data/fake-email.msg") ``` ``` Document(page_content='This is a test email to experiment with the MS Outlook MSG Extractor\r\n\r\n\r\n-- \r\n\r\n\r\nKind regards\r\n\r\n\r\n\r\n\r\nBrian Zhou\r\n\r\n', metadata={'subject': 'Test for TIF files', 'sender': 'Brian Zhou <brizhou@gmail.com>', 'date': 'Mon, 18 Nov 2013 16:26:24 +0800'}) ```
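If your emails carry PDF attachments, the `attachment_partitioner` hook described above lets you swap in a format-specific partitioner. A minimal sketch, assuming the attachments are PDFs and that `unstructured[pdf]` is installed so `partition_pdf` is importable:

```python
from unstructured.partition.pdf import partition_pdf

from langchain_community.document_loaders import UnstructuredEmailLoader

# Sketch: partition attachments with the PDF-specific partitioner instead of
# the generic `partition` function used by default.
loader = UnstructuredEmailLoader(
    "example_data/fake-email.eml",
    mode="elements",
    process_attachments=True,
    attachment_partitioner=partition_pdf,
)

docs = loader.load()
print(len(docs), "elements (email body plus attachment content)")
```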
https://python.langchain.com/docs/integrations/document_loaders/epub/
## EPub > [EPUB](https://en.wikipedia.org/wiki/EPUB) is an e-book file format that uses the “.epub” file extension. The term is short for electronic publication and is sometimes styled ePub. `EPUB` is supported by many e-readers, and compatible software is available for most smartphones, tablets, and computers. This covers how to load `.epub` documents into the Document format that we can use downstream. You’ll need to install the [`pandoc`](https://pandoc.org/installing.html) package for this loader to work. ``` %pip install --upgrade --quiet pandoc ``` ``` from langchain_community.document_loaders import UnstructuredEPubLoader ``` ``` loader = UnstructuredEPubLoader("winter-sports.epub") ``` ## Retain Elements[​](#retain-elements "Direct link to Retain Elements") Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `mode="elements"`. ``` loader = UnstructuredEPubLoader("winter-sports.epub", mode="elements") ``` ``` Document(page_content='The Project Gutenberg eBook of Winter Sports in\nSwitzerland, by E. F. Benson', lookup_str='', metadata={'source': 'winter-sports.epub', 'page_number': 1, 'category': 'Title'}, lookup_index=0) ```
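Because elements mode records metadata such as `page_number` and `category` (visible in the output above), you can post-process the elements, for example regrouping the text page by page. A minimal sketch, assuming `winter-sports.epub` is available locally and that each element carries a `page_number` key:

```python
from collections import defaultdict

from langchain_community.document_loaders import UnstructuredEPubLoader

loader = UnstructuredEPubLoader("winter-sports.epub", mode="elements")
elements = loader.load()

# Regroup element text by the page_number recorded in each element's metadata.
pages = defaultdict(list)
for el in elements:
    pages[el.metadata.get("page_number")].append(el.page_content)

print(f"{len(elements)} elements across {len(pages)} pages")
```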
https://python.langchain.com/docs/integrations/document_loaders/docugami/
## Docugami This notebook covers how to load documents from `Docugami`. It also describes the advantages of using this system over alternative data loaders. ## Prerequisites[​](#prerequisites "Direct link to Prerequisites") 1. Install the necessary Python packages. 2. Grab an access token for your workspace, and make sure it is set as the `DOCUGAMI_API_KEY` environment variable. 3. Grab some docset and document IDs for your processed documents, as described here: [https://help.docugami.com/home/docugami-api](https://help.docugami.com/home/docugami-api) ``` # You need the dgml-utils package to use the DocugamiLoader (run pip install directly without "poetry run" if you are not using poetry)!poetry run pip install docugami-langchain dgml-utils==0.3.0 --upgrade --quiet ``` ## Quick start[​](#quick-start "Direct link to Quick start") 1. Create a [Docugami workspace](http://www.docugami.com/) (free trials available) 2. Add your documents (PDF, DOCX or DOC) and allow Docugami to ingest and cluster them into sets of similar documents, e.g. NDAs, Lease Agreements, and Service Agreements. There is no fixed set of document types supported by the system; the clusters created depend on your particular documents, and you can [change the docset assignments](https://help.docugami.com/home/working-with-the-doc-sets-view) later. 3. Create an access token via the Developer Playground for your workspace. [Detailed instructions](https://help.docugami.com/home/docugami-api) 4. Explore the [Docugami API](https://api-docs.docugami.com/) to get a list of your processed docset IDs, or just the document IDs for a particular docset. 5. Use the `DocugamiLoader` as detailed below to get rich semantic chunks for your documents. 6. Optionally, build and publish one or more [reports or abstracts](https://help.docugami.com/home/reports). This helps Docugami improve the semantic XML with better tags based on your preferences, which are then added to the `DocugamiLoader` output as metadata. Use techniques like the [self-querying retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/) to do high-accuracy Document QA. ## Advantages vs Other Chunking Techniques[​](#advantages-vs-other-chunking-techniques "Direct link to Advantages vs Other Chunking Techniques") Appropriate chunking of your documents is critical for effective retrieval. Many chunking techniques exist, including simple ones that rely on whitespace and recursive chunk splitting based on character length. Docugami offers a different approach: 1. **Intelligent Chunking:** Docugami breaks down every document into a hierarchical semantic XML tree of chunks of varying sizes, from single words or numerical values to entire sections. These chunks follow the semantic contours of the document, providing a more meaningful representation than arbitrary-length or simple whitespace-based chunking. 2. **Semantic Annotations:** Chunks are annotated with semantic tags that are coherent across the document set, facilitating consistent hierarchical queries across multiple documents, even if they are written and formatted differently. For example, in a set of lease agreements, you can easily identify key provisions like the Landlord, Tenant, or Renewal Date, as well as more complex information such as the wording of any sub-lease provision or whether a specific jurisdiction has an exception section within a Termination Clause. 3. 
**Structured Representation:** In addition, the XML tree indicates the structural contours of every document, using attributes denoting headings, paragraphs, lists, tables, and other common elements, and does that consistently across all supported document formats, such as scanned PDFs or DOCX files. It appropriately handles long-form document characteristics like page headers/footers or multi-column flows for clean text extraction. 4. **Additional Metadata:** Chunks are also annotated with additional metadata if a user has been using Docugami. This additional metadata can be used for high-accuracy Document QA without context window restrictions. See the detailed code walk-through below. ``` import osfrom docugami_langchain.document_loaders import DocugamiLoader ``` ## Load Documents[​](#load-documents "Direct link to Load Documents") If the DOCUGAMI\_API\_KEY environment variable is set, there is no need to pass it in to the loader explicitly; otherwise, you can pass it in as the `access_token` parameter. ``` DOCUGAMI_API_KEY = os.environ.get("DOCUGAMI_API_KEY") ``` ``` docset_id = "26xpy3aes7xp"document_ids = ["d7jqdzcj50sj", "cgd1eacfkchw"]# To load all docs in the given docset ID, just don't provide document_idsloader = DocugamiLoader(docset_id=docset_id, document_ids=document_ids)chunks = loader.load()len(chunks) ``` The `metadata` for each `Document` (really, a chunk of an actual PDF, DOC or DOCX) contains some useful additional information: 1. **id and source:** ID and name of the file (PDF, DOC or DOCX) the chunk is sourced from within Docugami. 2. **xpath:** XPath inside the XML representation of the document, for the chunk. Useful for source citations directly to the actual chunk inside the document XML. 3. **structure:** Structural attributes of the chunk, e.g. h1, h2, div, table, td, etc. Useful to filter out certain kinds of chunks if needed by the caller. 4. **tag:** Semantic tag for the chunk, using various generative and extractive techniques. More details here: [https://github.com/docugami/DFM-benchmarks](https://github.com/docugami/DFM-benchmarks) You can control chunking behavior by setting the following properties on the `DocugamiLoader` instance: 1. You can set min and max chunk size, which the system tries to adhere to with minimal truncation. You can set `loader.min_text_length` and `loader.max_text_length` to control these. 2. By default, only the text for chunks is returned. However, Docugami’s XML knowledge graph has additional rich information including semantic tags for entities inside the chunk. Set `loader.include_xml_tags = True` if you want the additional XML metadata on the returned chunks. 3. In addition, you can set `loader.parent_hierarchy_levels` if you want Docugami to return parent chunks in the chunks it returns. The child chunks point to the parent chunks via the `loader.parent_id_key` value. This is useful e.g. with the [MultiVector Retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/multi_vector/) for [small-to-big](https://www.youtube.com/watch?v=ihSiRrOUwmg) retrieval. See the detailed example later in this notebook. ``` loader.min_text_length = 64loader.include_xml_tags = Truechunks = loader.load()for chunk in chunks[:5]: print(chunk) ``` ``` page_content='MASTER SERVICES AGREEMENT\n <ThisServicesAgreement> This Services Agreement (the “Agreement”) sets forth terms under which <Company>MagicSoft, Inc. 
</Company>a <Org><USState>Washington </USState>Corporation </Org>(“Company”) located at <CompanyAddress><CompanyStreetAddress><Company>600 </Company><Company>4th Ave</Company></CompanyStreetAddress>, <Company>Seattle</Company>, <Client>WA </Client><ProvideServices>98104 </ProvideServices></CompanyAddress>shall provide services to <Client>Daltech, Inc.</Client>, a <Company><USState>Washington </USState>Corporation </Company>(the “Client”) located at <ClientAddress><ClientStreetAddress><Client>701 </Client><Client>1st St</Client></ClientStreetAddress>, <Client>Kirkland</Client>, <State>WA </State><Client>98033</Client></ClientAddress>. This Agreement is effective as of <EffectiveDate>February 15, 2021 </EffectiveDate>(“Effective Date”). </ThisServicesAgreement>' metadata={'xpath': '/dg:chunk/docset:MASTERSERVICESAGREEMENT-section/dg:chunk', 'id': 'c28554d0af5114e2b102e6fc4dcbbde5', 'name': 'Master Services Agreement - Daltech.docx', 'source': 'Master Services Agreement - Daltech.docx', 'structure': 'h1 p', 'tag': 'chunk ThisServicesAgreement', 'Liability': '', 'Workers Compensation Insurance': '$1,000,000', 'Limit': '$1,000,000', 'Commercial General Liability Insurance': '$2,000,000', 'Technology Professional Liability Errors Omissions Policy': '$5,000,000', 'Excess Liability Umbrella Coverage': '$9,000,000', 'Client': 'Daltech, Inc.', 'Services Agreement Date': 'INITIAL STATEMENT OF WORK (SOW) The purpose of this SOW is to describe the Software and Services that Company will initially provide to Daltech, Inc. the “Client”) under the terms and conditions of the Services Agreement entered into between the parties on June 15, 2021', 'Completion of the Services by Company Date': 'February 15, 2022', 'Charge': 'one hundred percent (100%)', 'Company': 'MagicSoft, Inc.', 'Effective Date': 'February 15, 2021', 'Start Date': '03/15/2021', 'Scheduled Onsite Visits Are Cancelled': 'ten (10) working days', 'Limit on Liability': '', 'Liability Cap': '', 'Business Automobile Liability': 'Business Automobile Liability covering all vehicles that Company owns, hires or leases with a limit of no less than $1,000,000 (combined single limit for bodily injury and property damage) for each accident.', 'Contractual Liability Coverage': 'Commercial General Liability insurance including Contractual Liability Coverage , with coverage for products liability, completed operations, property damage and bodily injury, including death , with an aggregate limit of no less than $2,000,000 . This policy shall name Client as an additional insured with respect to the provision of services provided under this Agreement. This policy shall include a waiver of subrogation against Client.', 'Technology Professional Liability Errors Omissions': 'Technology Professional Liability Errors & Omissions policy (which includes Cyber Risk coverage and Computer Security and Privacy Liability coverage) with a limit of no less than $5,000,000 per occurrence and in the aggregate.'}page_content='A. STANDARD SOFTWARE AND SERVICES AGREEMENT\n 1. Deliverables.\n Company shall provide Client with software, technical support, product management, development, and <_testRef>testing </_testRef>services (“Services”) to the Client as described on one or more Statements of Work signed by Company and Client that reference this Agreement (“SOW” or “Statement of Work”). 
Company shall perform Services in a prompt manner and have the final product or service (“Deliverable”) ready for Client no later than the due date specified in the applicable SOW (“Completion Date”). This due date is subject to change in accordance with the Change Order process defined in the applicable SOW. Client shall assist Company by promptly providing all information requests known or available and relevant to the Services in a timely manner.' metadata={'xpath': '/dg:chunk/docset:MASTERSERVICESAGREEMENT-section/docset:MASTERSERVICESAGREEMENT/dg:chunk[1]/docset:Standard/dg:chunk[1]/dg:chunk[1]', 'id': 'de60160d328df10fa2637637c803d2d4', 'name': 'Master Services Agreement - Daltech.docx', 'source': 'Master Services Agreement - Daltech.docx', 'structure': 'lim h1 lim h1 div', 'tag': 'chunk', 'Liability': '', 'Workers Compensation Insurance': '$1,000,000', 'Limit': '$1,000,000', 'Commercial General Liability Insurance': '$2,000,000', 'Technology Professional Liability Errors Omissions Policy': '$5,000,000', 'Excess Liability Umbrella Coverage': '$9,000,000', 'Client': 'Daltech, Inc.', 'Services Agreement Date': 'INITIAL STATEMENT OF WORK (SOW) The purpose of this SOW is to describe the Software and Services that Company will initially provide to Daltech, Inc. the “Client”) under the terms and conditions of the Services Agreement entered into between the parties on June 15, 2021', 'Completion of the Services by Company Date': 'February 15, 2022', 'Charge': 'one hundred percent (100%)', 'Company': 'MagicSoft, Inc.', 'Effective Date': 'February 15, 2021', 'Start Date': '03/15/2021', 'Scheduled Onsite Visits Are Cancelled': 'ten (10) working days', 'Limit on Liability': '', 'Liability Cap': '', 'Business Automobile Liability': 'Business Automobile Liability covering all vehicles that Company owns, hires or leases with a limit of no less than $1,000,000 (combined single limit for bodily injury and property damage) for each accident.', 'Contractual Liability Coverage': 'Commercial General Liability insurance including Contractual Liability Coverage , with coverage for products liability, completed operations, property damage and bodily injury, including death , with an aggregate limit of no less than $2,000,000 . This policy shall name Client as an additional insured with respect to the provision of services provided under this Agreement. This policy shall include a waiver of subrogation against Client.', 'Technology Professional Liability Errors Omissions': 'Technology Professional Liability Errors & Omissions policy (which includes Cyber Risk coverage and Computer Security and Privacy Liability coverage) with a limit of no less than $5,000,000 per occurrence and in the aggregate.'}page_content='2. Onsite Services.\n 2.1 Onsite visits will be charged on a <Frequency>daily </Frequency>basis (minimum <OnsiteVisits>8 hours</OnsiteVisits>).' 
metadata={'xpath': '/dg:chunk/docset:MASTERSERVICESAGREEMENT-section/docset:MASTERSERVICESAGREEMENT/dg:chunk[1]/docset:Standard/dg:chunk[3]/dg:chunk[1]', 'id': 'db18315b437ac2de6b555d2d8ef8f893', 'name': 'Master Services Agreement - Daltech.docx', 'source': 'Master Services Agreement - Daltech.docx', 'structure': 'lim h1 lim p', 'tag': 'chunk', 'Liability': '', 'Workers Compensation Insurance': '$1,000,000', 'Limit': '$1,000,000', 'Commercial General Liability Insurance': '$2,000,000', 'Technology Professional Liability Errors Omissions Policy': '$5,000,000', 'Excess Liability Umbrella Coverage': '$9,000,000', 'Client': 'Daltech, Inc.', 'Services Agreement Date': 'INITIAL STATEMENT OF WORK (SOW) The purpose of this SOW is to describe the Software and Services that Company will initially provide to Daltech, Inc. the “Client”) under the terms and conditions of the Services Agreement entered into between the parties on June 15, 2021', 'Completion of the Services by Company Date': 'February 15, 2022', 'Charge': 'one hundred percent (100%)', 'Company': 'MagicSoft, Inc.', 'Effective Date': 'February 15, 2021', 'Start Date': '03/15/2021', 'Scheduled Onsite Visits Are Cancelled': 'ten (10) working days', 'Limit on Liability': '', 'Liability Cap': '', 'Business Automobile Liability': 'Business Automobile Liability covering all vehicles that Company owns, hires or leases with a limit of no less than $1,000,000 (combined single limit for bodily injury and property damage) for each accident.', 'Contractual Liability Coverage': 'Commercial General Liability insurance including Contractual Liability Coverage , with coverage for products liability, completed operations, property damage and bodily injury, including death , with an aggregate limit of no less than $2,000,000 . This policy shall name Client as an additional insured with respect to the provision of services provided under this Agreement. This policy shall include a waiver of subrogation against Client.', 'Technology Professional Liability Errors Omissions': 'Technology Professional Liability Errors & Omissions policy (which includes Cyber Risk coverage and Computer Security and Privacy Liability coverage) with a limit of no less than $5,000,000 per occurrence and in the aggregate.'}page_content='2.2 <Expenses>Time and expenses will be charged based on actuals unless otherwise described in an Order Form or accompanying SOW. </Expenses>' metadata={'xpath': '/dg:chunk/docset:MASTERSERVICESAGREEMENT-section/docset:MASTERSERVICESAGREEMENT/dg:chunk[1]/docset:Standard/dg:chunk[3]/dg:chunk[2]/docset:ADailyBasis/dg:chunk[2]/dg:chunk', 'id': '506220fa472d5c48c8ee3db78c1122c1', 'name': 'Master Services Agreement - Daltech.docx', 'source': 'Master Services Agreement - Daltech.docx', 'structure': 'lim p', 'tag': 'chunk Expenses', 'Liability': '', 'Workers Compensation Insurance': '$1,000,000', 'Limit': '$1,000,000', 'Commercial General Liability Insurance': '$2,000,000', 'Technology Professional Liability Errors Omissions Policy': '$5,000,000', 'Excess Liability Umbrella Coverage': '$9,000,000', 'Client': 'Daltech, Inc.', 'Services Agreement Date': 'INITIAL STATEMENT OF WORK (SOW) The purpose of this SOW is to describe the Software and Services that Company will initially provide to Daltech, Inc. 
the “Client”) under the terms and conditions of the Services Agreement entered into between the parties on June 15, 2021', 'Completion of the Services by Company Date': 'February 15, 2022', 'Charge': 'one hundred percent (100%)', 'Company': 'MagicSoft, Inc.', 'Effective Date': 'February 15, 2021', 'Start Date': '03/15/2021', 'Scheduled Onsite Visits Are Cancelled': 'ten (10) working days', 'Limit on Liability': '', 'Liability Cap': '', 'Business Automobile Liability': 'Business Automobile Liability covering all vehicles that Company owns, hires or leases with a limit of no less than $1,000,000 (combined single limit for bodily injury and property damage) for each accident.', 'Contractual Liability Coverage': 'Commercial General Liability insurance including Contractual Liability Coverage , with coverage for products liability, completed operations, property damage and bodily injury, including death , with an aggregate limit of no less than $2,000,000 . This policy shall name Client as an additional insured with respect to the provision of services provided under this Agreement. This policy shall include a waiver of subrogation against Client.', 'Technology Professional Liability Errors Omissions': 'Technology Professional Liability Errors & Omissions policy (which includes Cyber Risk coverage and Computer Security and Privacy Liability coverage) with a limit of no less than $5,000,000 per occurrence and in the aggregate.'}page_content='2.3 <RegularWorkingHours>All work will be executed during regular working hours <RegularWorkingHours>Monday</RegularWorkingHours>-<Weekday>Friday </Weekday><RegularWorkingHours><RegularWorkingHours>0800</RegularWorkingHours>-<Number>1900</Number></RegularWorkingHours>. For work outside of these hours on weekdays, Company will charge <Charge>one hundred percent (100%) </Charge>of the regular hourly rate and <Charge>two hundred percent (200%) </Charge>for Saturdays, Sundays and public holidays applicable to Company. </RegularWorkingHours>' metadata={'xpath': '/dg:chunk/docset:MASTERSERVICESAGREEMENT-section/docset:MASTERSERVICESAGREEMENT/dg:chunk[1]/docset:Standard/dg:chunk[3]/dg:chunk[2]/docset:ADailyBasis/dg:chunk[3]/dg:chunk', 'id': 'dac7a3ded61b5c4f3e59771243ea46c1', 'name': 'Master Services Agreement - Daltech.docx', 'source': 'Master Services Agreement - Daltech.docx', 'structure': 'lim p', 'tag': 'chunk RegularWorkingHours', 'Liability': '', 'Workers Compensation Insurance': '$1,000,000', 'Limit': '$1,000,000', 'Commercial General Liability Insurance': '$2,000,000', 'Technology Professional Liability Errors Omissions Policy': '$5,000,000', 'Excess Liability Umbrella Coverage': '$9,000,000', 'Client': 'Daltech, Inc.', 'Services Agreement Date': 'INITIAL STATEMENT OF WORK (SOW) The purpose of this SOW is to describe the Software and Services that Company will initially provide to Daltech, Inc. 
the “Client”) under the terms and conditions of the Services Agreement entered into between the parties on June 15, 2021', 'Completion of the Services by Company Date': 'February 15, 2022', 'Charge': 'one hundred percent (100%)', 'Company': 'MagicSoft, Inc.', 'Effective Date': 'February 15, 2021', 'Start Date': '03/15/2021', 'Scheduled Onsite Visits Are Cancelled': 'ten (10) working days', 'Limit on Liability': '', 'Liability Cap': '', 'Business Automobile Liability': 'Business Automobile Liability covering all vehicles that Company owns, hires or leases with a limit of no less than $1,000,000 (combined single limit for bodily injury and property damage) for each accident.', 'Contractual Liability Coverage': 'Commercial General Liability insurance including Contractual Liability Coverage , with coverage for products liability, completed operations, property damage and bodily injury, including death , with an aggregate limit of no less than $2,000,000 . This policy shall name Client as an additional insured with respect to the provision of services provided under this Agreement. This policy shall include a waiver of subrogation against Client.', 'Technology Professional Liability Errors Omissions': 'Technology Professional Liability Errors & Omissions policy (which includes Cyber Risk coverage and Computer Security and Privacy Liability coverage) with a limit of no less than $5,000,000 per occurrence and in the aggregate.'} ``` ## Basic Use: Docugami Loader for Document QA[​](#basic-use-docugami-loader-for-document-qa "Direct link to Basic Use: Docugami Loader for Document QA") You can use the Docugami Loader like a standard loader for Document QA over multiple docs, albeit with much better chunks that follow the natural contours of the document. There are many great tutorials on how to do this, e.g. [this one](https://www.youtube.com/watch?v=3yPBVii7Ct0). We can just use the same code, but use the `DocugamiLoader` for better chunking, instead of loading text or PDF files directly with basic splitting techniques. ``` !poetry run pip install --upgrade langchain-openai tiktoken chromadb hnswlib ``` ``` # For this example, we already have a processed docset for a set of lease documentsloader = DocugamiLoader(docset_id="zo954yqy53wp")chunks = loader.load()# strip semantic metadata intentionally, to test how things work without semantic metadatafor chunk in chunks: stripped_metadata = chunk.metadata.copy() for key in chunk.metadata: if key not in ["name", "xpath", "id", "structure"]: # remove semantic metadata del stripped_metadata[key] chunk.metadata = stripped_metadataprint(len(chunks)) ``` The documents returned by the loader are already split, so we don’t need to use a text splitter. Optionally, we can use the metadata on each document, for example the structure or tag attributes, to do any post-processing we want. We will just use the output of the `DocugamiLoader` as-is to set up a retrieval QA chain the usual way. 
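As an aside, the optional post-processing mentioned above can be as simple as filtering on the `structure` attribute. A minimal sketch (not used in the chain below) that drops table chunks before indexing, assuming `chunks` is the list loaded above:

```python
# Sketch: use the `structure` metadata (space-separated values such as "h1",
# "div", or "table td") to drop table chunks before indexing.
non_table_chunks = [
    chunk
    for chunk in chunks
    if "table" not in chunk.metadata.get("structure", "").split()
]

print(f"kept {len(non_table_chunks)} of {len(chunks)} chunks")
```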
``` from langchain.chains import RetrievalQAfrom langchain_community.vectorstores.chroma import Chromafrom langchain_openai import OpenAI, OpenAIEmbeddingsembedding = OpenAIEmbeddings()vectordb = Chroma.from_documents(documents=chunks, embedding=embedding)retriever = vectordb.as_retriever()qa_chain = RetrievalQA.from_chain_type( llm=OpenAI(), chain_type="stuff", retriever=retriever, return_source_documents=True) ``` ``` # Try out the retriever with an example queryqa_chain("What can tenants do with signage on their properties?") ``` ``` {'query': 'What can tenants do with signage on their properties?', 'result': ' Tenants can place or attach signage (digital or otherwise) to their property after receiving written permission from the landlord, which permission shall not be unreasonably withheld. The signage must conform to all applicable laws, ordinances, etc. governing the same, and tenants must remove all such signs by the termination of the lease.', 'source_documents': [Document(page_content='6.01 Signage. Tenant may place or attach to the Premises signs (digital or otherwise) or other such identification as needed after receiving written permission from the Landlord, which permission shall not be unreasonably withheld. Any damage caused to the Premises by the Tenant’s erecting or removing such signs shall be repaired promptly by the Tenant at the Tenant’s expense. Any signs or other form of identification allowed must conform to all applicable laws, ordinances, etc. governing the same. Tenant also agrees to have any window or glass identification completely removed and cleaned at its expense promptly upon vacating the Premises. ARTICLE VII UTILITIES', metadata={'id': '1c290eea05915ba0f24c4a1ffc05d6f3', 'name': 'Sample Commercial Leases/TruTone Lane 6.pdf', 'structure': 'lim h1', 'xpath': '/dg:chunk/dg:chunk/dg:chunk[2]/dg:chunk[1]/docset:TheApprovedUse/dg:chunk[12]/dg:chunk[1]'}), Document(page_content='6.01 Signage. Tenant may place or attach to the Premises signs (digital or otherwise) or other such identification as needed after receiving written permission from the Landlord, which permission shall not be unreasonably withheld. Any damage caused to the Premises by the Tenant’s erecting or removing such signs shall be repaired promptly by the Tenant at the Tenant’s expense. Any signs or other form of identification allowed must conform to all applicable laws, ordinances, etc. governing the same. Tenant also agrees to have any window or glass identification completely removed and cleaned at its expense promptly upon vacating the Premises. ARTICLE VII UTILITIES', metadata={'id': '1c290eea05915ba0f24c4a1ffc05d6f3', 'name': 'Sample Commercial Leases/TruTone Lane 2.pdf', 'structure': 'lim h1', 'xpath': '/dg:chunk/dg:chunk/dg:chunk[2]/dg:chunk[1]/docset:TheApprovedUse/dg:chunk[12]/dg:chunk[1]'}), Document(page_content='Tenant may place or attach to the Premises signs (digital or otherwise) or other such identification as needed after receiving written permission from the Landlord, which permission shall not be unreasonably withheld. Any damage caused to the Premises by the Tenant’s erecting or removing such signs shall be repaired promptly by the Tenant at the Tenant’s expense. Any signs or other form of identification allowed must conform to all applicable laws, ordinances, etc. governing the same. 
Tenant also agrees to have any window or glass identification completely removed and cleaned at its expense promptly upon vacating the Premises.', metadata={'id': '58d268162ecc36d8633b7bc364afcb8c', 'name': 'Sample Commercial Leases/TruTone Lane 2.docx', 'structure': 'div', 'xpath': '/docset:OFFICELEASEAGREEMENT-section/docset:OFFICELEASEAGREEMENT/dg:chunk/docset:ARTICLEVISIGNAGE-section/docset:ARTICLEVISIGNAGE/docset:_601Signage'}), Document(page_content='8. SIGNS:\n Tenant shall not install signs upon the Premises without Landlord’s prior written approval, which approval shall not be unreasonably withheld or delayed, and any such signage shall be subject to any applicable governmental laws, ordinances, regulations, and other requirements. Tenant shall remove all such signs by the terminations of this Lease. Such installations and removals shall be made in such a manner as to avoid injury or defacement of the Building and other improvements, and Tenant shall repair any injury or defacement, including without limitation discoloration caused by such installations and/or removal.', metadata={'id': '6b7d88f0c979c65d5db088fc177fa81f', 'name': 'Lease Agreements/Bioplex, Inc.pdf', 'structure': 'lim h1 div', 'xpath': '/dg:chunk/docset:WITNESSETH-section/docset:WITNESSETH/dg:chunk/docset:TheObligation/dg:chunk[8]/dg:chunk'})]} ``` ## Using Docugami Knowledge Graph for High Accuracy Document QA[​](#using-docugami-knowledge-graph-for-high-accuracy-document-qa "Direct link to Using Docugami Knowledge Graph for High Accuracy Document QA") One issue with large documents is that the correct answer to your question may depend on chunks that are far apart in the document. Typical chunking techniques, even with overlap, will struggle with providing the LLM sufficient context to answer such questions. With upcoming very large context LLMs, it may be possible to stuff a lot of tokens, perhaps even entire documents, inside the context, but this will still hit limits at some point with very long documents, or a lot of documents. For example, if we ask a more complex question that requires the LLM to draw on chunks from different parts of the document, even OpenAI’s powerful LLM is unable to answer correctly. 
``` chain_response = qa_chain("What is rentable area for the property owned by DHA Group?")chain_response["result"] # correct answer should be 13,500 sq ft ``` ``` chain_response["source_documents"] ``` ``` [Document(page_content='1.6 Rentable Area of the Premises.', metadata={'id': '5b39a1ae84d51682328dca1467be211f', 'name': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'structure': 'lim h1', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/dg:chunk/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:CatalystGroup/dg:chunk[6]/dg:chunk'}), Document(page_content='1.6 Rentable Area of the Premises.', metadata={'id': '5b39a1ae84d51682328dca1467be211f', 'name': 'Sample Commercial Leases/Shorebucks LLC_AZ.pdf', 'structure': 'lim h1', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE-section/docset:WITNESSETH-section/docset:WITNESSETH/dg:chunk/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:MenloGroup/dg:chunk[6]/dg:chunk'}), Document(page_content='1.6 Rentable Area of the Premises.', metadata={'id': '5b39a1ae84d51682328dca1467be211f', 'name': 'Sample Commercial Leases/Shorebucks LLC_FL.pdf', 'structure': 'lim h1', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:Florida-section/docset:Florida/docset:Shorebucks/dg:chunk[5]/dg:chunk'}), Document(page_content='1.6 Rentable Area of the Premises.', metadata={'id': '5b39a1ae84d51682328dca1467be211f', 'name': 'Sample Commercial Leases/Shorebucks LLC_TX.pdf', 'structure': 'lim h1', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/dg:chunk/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:LandmarkLlc/dg:chunk[6]/dg:chunk'})] ``` At first glance the answer may seem reasonable, but it is incorrect. If you review the source chunks carefully for this answer, you will see that the chunking of the document did not end up putting the Landlord name and the rentable area in the same context; it produced irrelevant chunks, and therefore the answer is incorrect (it should be **13,500 sq ft**). Docugami can help here. Chunks are annotated with additional metadata created using different techniques if a user has been [using Docugami](https://help.docugami.com/home/reports). More technical approaches will be added later. 
Specifically, let’s ask Docugami to return XML tags on its output, as well as additional metadata: ``` loader = DocugamiLoader(docset_id="zo954yqy53wp")loader.include_xml_tags = ( True # for additional semantics from the Docugami knowledge graph)chunks = loader.load()print(chunks[0].metadata) ``` ``` {'xpath': '/docset:OFFICELEASE-section/dg:chunk', 'id': '47297e277e556f3ce8b570047304560b', 'name': 'Sample Commercial Leases/Shorebucks LLC_AZ.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_AZ.pdf', 'structure': 'h1 h1 p', 'tag': 'chunk Lease', 'Lease Date': 'March 29th , 2019', 'Landlord': 'Menlo Group', 'Tenant': 'Shorebucks LLC', 'Premises Address': '1564 E Broadway Rd , Tempe , Arizona 85282', 'Term of Lease': '96 full calendar months', 'Square Feet': '16,159'} ``` We can use a [self-querying retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/) to improve our query accuracy, using this additional metadata: ``` !poetry run pip install --upgrade lark --quiet ``` ``` from langchain.chains.query_constructor.schema import AttributeInfofrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain_community.vectorstores.chroma import ChromaEXCLUDE_KEYS = ["id", "xpath", "structure"]metadata_field_info = [ AttributeInfo( name=key, description=f"The {key} for this chunk", type="string", ) for key in chunks[0].metadata if key.lower() not in EXCLUDE_KEYS]document_content_description = "Contents of this chunk"llm = OpenAI(temperature=0)vectordb = Chroma.from_documents(documents=chunks, embedding=embedding)retriever = SelfQueryRetriever.from_llm( llm, vectordb, document_content_description, metadata_field_info, verbose=True)qa_chain = RetrievalQA.from_chain_type( llm=OpenAI(), chain_type="stuff", retriever=retriever, return_source_documents=True, verbose=True,) ``` Let’s run the same question again. It returns the correct result since all the chunks have metadata key/value pairs on them carrying key information about the document even if this information is physically very far away from the source chunk used to generate the answer. ``` qa_chain( "What is rentable area for the property owned by DHA Group?") # correct answer should be 13,500 sq ft ``` ``` > Entering new RetrievalQA chain...> Finished chain. ``` ``` {'query': 'What is rentable area for the property owned by DHA Group?', 'result': ' The rentable area of the property owned by DHA Group is 13,500 square feet.', 'source_documents': [Document(page_content='1.6 Rentable Area of the Premises.', metadata={'Landlord': 'DHA Group', 'Lease Date': 'March 29th , 2019', 'Premises Address': '111 Bauer Dr , Oakland , New Jersey , 07436', 'Square Feet': '13,500', 'Tenant': 'Shorebucks LLC', 'Term of Lease': '84 full calendar months', 'id': '5b39a1ae84d51682328dca1467be211f', 'name': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'structure': 'lim h1', 'tag': 'chunk', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/dg:chunk/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/dg:chunk[6]/dg:chunk'}), Document(page_content='<RentableAreaofthePremises><SquareFeet>13,500 </SquareFeet>square feet. 
This square footage figure includes an add-on factor for Common Areas in the Building and has been agreed upon by the parties as final and correct and is not subject to challenge or dispute by either party. </RentableAreaofthePremises>', metadata={'Landlord': 'DHA Group', 'Lease Date': 'March 29th , 2019', 'Premises Address': '111 Bauer Dr , Oakland , New Jersey , 07436', 'Square Feet': '13,500', 'Tenant': 'Shorebucks LLC', 'Term of Lease': '84 full calendar months', 'id': '4c06903d087f5a83e486ee42cd702d31', 'name': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'RentableAreaofthePremises', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/dg:chunk/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/dg:chunk[6]/docset:RentableAreaofthePremises-section/docset:RentableAreaofthePremises'}), Document(page_content='<TheTermAnnualMarketRent>shall mean (i) for the initial Lease Year (“Year 1”) <Money>$2,239,748.00 </Money>per year (i.e., the product of the Rentable Area of the Premises multiplied by <Money>$82.00</Money>) (the “Year 1 Market Rent Hurdle”); (ii) for the Lease Year thereafter, <Percent>one hundred three percent (103%) </Percent>of the Year 1 Market Rent Hurdle, and (iii) for each Lease Year thereafter until the termination or expiration of this Lease, the Annual Market Rent Threshold shall be <AnnualMarketRentThreshold>one hundred three percent (103%) </AnnualMarketRentThreshold>of the Annual Market Rent Threshold for the immediately prior Lease Year. </TheTermAnnualMarketRent>', metadata={'Landlord': 'DHA Group', 'Lease Date': 'March 29th , 2019', 'Premises Address': '111 Bauer Dr , Oakland , New Jersey , 07436', 'Square Feet': '13,500', 'Tenant': 'Shorebucks LLC', 'Term of Lease': '84 full calendar months', 'id': '6b90beeadace5d4d12b25706fb48e631', 'name': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'TheTermAnnualMarketRent', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCredit-section/docset:GrossRentCredit/dg:chunk/dg:chunk/dg:chunk/dg:chunk[2]/docset:PercentageRent/dg:chunk[2]/dg:chunk[2]/docset:TenantSRevenue/dg:chunk[2]/docset:TenantSRevenue/dg:chunk[3]/docset:TheTermAnnualMarketRent-section/docset:TheTermAnnualMarketRent'}), Document(page_content='1.11 Percentage Rent.\n (a) <GrossRevenue><Percent>55% </Percent>of Gross Revenue to Landlord until Landlord receives Percentage Rent in an amount equal to the Annual Market Rent Hurdle (as escalated); and </GrossRevenue>', metadata={'Landlord': 'DHA Group', 'Lease Date': 'March 29th , 2019', 'Premises Address': '111 Bauer Dr , Oakland , New Jersey , 07436', 'Square Feet': '13,500', 'Tenant': 'Shorebucks LLC', 'Term of Lease': '84 full calendar months', 'id': 'c8bb9cbedf65a578d9db3f25f519dd3d', 'name': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'structure': 'lim h1 lim p', 'tag': 'chunk GrossRevenue', 'xpath': 
'/docset:OFFICELEASE-section/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCredit-section/docset:GrossRentCredit/dg:chunk/dg:chunk/dg:chunk/docset:PercentageRent/dg:chunk[1]/dg:chunk[1]'})]} ``` This time the answer is correct, since the self-querying retriever created a filter on the landlord attribute of the metadata, correctly filtering to document that specifically is about the DHA Group landlord. The resulting source chunks are all relevant to this landlord, and this improves answer accuracy even though the landlord is not directly mentioned in the specific chunk that contains the correct answer. ## Advanced Topic: Small-to-Big Retrieval with Document Knowledge Graph Hierarchy Documents are inherently semi-structured and the DocugamiLoader is able to navigate the semantic and structural contours of the document to provide parent chunk references on the chunks it returns. This is useful e.g. with the [MultiVector Retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/multi_vector/) for [small-to-big](https://www.youtube.com/watch?v=ihSiRrOUwmg) retrieval. To get parent chunk references, you can set `loader.parent_hierarchy_levels` to a non-zero value. ``` from typing import Dict, Listfrom docugami_langchain.document_loaders import DocugamiLoaderfrom langchain_core.documents import Documentloader = DocugamiLoader(docset_id="zo954yqy53wp")loader.include_xml_tags = ( True # for additional semantics from the Docugami knowledge graph)loader.parent_hierarchy_levels = 3 # for expanded contextloader.max_text_length = ( 1024 * 8) # 8K chars are roughly 2K tokens (ref: https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them)loader.include_project_metadata_in_doc_metadata = ( False # Not filtering on vector metadata, so remove to lighten the vectors)chunks: List[Document] = loader.load()# build separate maps of parent and child chunksparents_by_id: Dict[str, Document] = {}children_by_id: Dict[str, Document] = {}for chunk in chunks: chunk_id = chunk.metadata.get("id") parent_chunk_id = chunk.metadata.get(loader.parent_id_key) if not parent_chunk_id: # parent chunk parents_by_id[chunk_id] = chunk else: # child chunk children_by_id[chunk_id] = chunk ``` ``` # Explore some of the parent chunk relationshipsfor id, chunk in list(children_by_id.items())[:5]: parent_chunk_id = chunk.metadata.get(loader.parent_id_key) if parent_chunk_id: # child chunks have the parent chunk id set print(f"PARENT CHUNK {parent_chunk_id}: {parents_by_id[parent_chunk_id]}") print(f"CHUNK {id}: {chunk}") ``` ``` PARENT CHUNK 7df09fbfc65bb8377054808aac2d16fd: page_content='OFFICE LEASE\n THIS OFFICE LEASE\n <Lease>(the "Lease") is made and entered into as of <LeaseDate>March 29th, 2019</LeaseDate>, by and between Landlord and Tenant. "Date of this Lease" shall mean the date on which the last one of the Landlord and Tenant has signed this Lease. </Lease>\nW I T N E S S E T H\n <TheTerms> Subject to and on the terms and conditions of this Lease, Landlord leases to Tenant and Tenant hires from Landlord the Premises. </TheTerms>\n1. 
BASIC LEASE INFORMATION AND DEFINED TERMS.\nThe key business terms of this Lease and the defined terms used in this Lease are as follows:' metadata={'xpath': '/docset:OFFICELEASE-section/dg:chunk', 'id': '7df09fbfc65bb8377054808aac2d16fd', 'name': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'structure': 'h1 h1 p h1 p lim h1 p', 'tag': 'chunk Lease chunk TheTerms'}CHUNK 47297e277e556f3ce8b570047304560b: page_content='OFFICE LEASE\n THIS OFFICE LEASE\n <Lease>(the "Lease") is made and entered into as of <LeaseDate>March 29th, 2019</LeaseDate>, by and between Landlord and Tenant. "Date of this Lease" shall mean the date on which the last one of the Landlord and Tenant has signed this Lease. </Lease>' metadata={'xpath': '/docset:OFFICELEASE-section/dg:chunk', 'id': '47297e277e556f3ce8b570047304560b', 'name': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'structure': 'h1 h1 p', 'tag': 'chunk Lease', 'doc_id': '7df09fbfc65bb8377054808aac2d16fd'}PARENT CHUNK bb84925da3bed22c30ea1bdc173ff54f: page_content='OFFICE LEASE\n THIS OFFICE LEASE\n <Lease>(the "Lease") is made and entered into as of <LeaseDate>January 8th, 2018</LeaseDate>, by and between Landlord and Tenant. "Date of this Lease" shall mean the date on which the last one of the Landlord and Tenant has signed this Lease. </Lease>\nW I T N E S S E T H\n <TheTerms> Subject to and on the terms and conditions of this Lease, Landlord leases to Tenant and Tenant hires from Landlord the Premises. </TheTerms>\n1. BASIC LEASE INFORMATION AND DEFINED TERMS.\nThe key business terms of this Lease and the defined terms used in this Lease are as follows:\n1.1 Landlord.\n <Landlord>Catalyst Group LLC </Landlord>' metadata={'xpath': '/docset:OFFICELEASE-section/dg:chunk', 'id': 'bb84925da3bed22c30ea1bdc173ff54f', 'name': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'structure': 'h1 h1 p h1 p lim h1 p lim h1 div', 'tag': 'chunk Lease chunk TheTerms chunk Landlord'}CHUNK 2f1746cbd546d1d61a9250c50de7a7fa: page_content='W I T N E S S E T H\n <TheTerms> Subject to and on the terms and conditions of this Lease, Landlord leases to Tenant and Tenant hires from Landlord the Premises. </TheTerms>' metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/dg:chunk', 'id': '2f1746cbd546d1d61a9250c50de7a7fa', 'name': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'structure': 'h1 p', 'tag': 'chunk TheTerms', 'doc_id': 'bb84925da3bed22c30ea1bdc173ff54f'}PARENT CHUNK 0b0d765b6e504a6ba54fa76b203e62ec: page_content='OFFICE LEASE\n THIS OFFICE LEASE\n <Lease>(the "Lease") is made and entered into as of <LeaseDate>January 8th, 2018</LeaseDate>, by and between Landlord and Tenant. "Date of this Lease" shall mean the date on which the last one of the Landlord and Tenant has signed this Lease. </Lease>\nW I T N E S S E T H\n <TheTerms> Subject to and on the terms and conditions of this Lease, Landlord leases to Tenant and Tenant hires from Landlord the Premises. </TheTerms>\n1. 
BASIC LEASE INFORMATION AND DEFINED TERMS.\nThe key business terms of this Lease and the defined terms used in this Lease are as follows:\n1.1 Landlord.\n <Landlord>Catalyst Group LLC </Landlord>\n1.2 Tenant.\n <Tenant>Shorebucks LLC </Tenant>' metadata={'xpath': '/docset:OFFICELEASE-section/dg:chunk', 'id': '0b0d765b6e504a6ba54fa76b203e62ec', 'name': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'structure': 'h1 h1 p h1 p lim h1 p lim h1 div lim h1 div', 'tag': 'chunk Lease chunk TheTerms chunk Landlord chunk Tenant'}CHUNK b362dfe776ec5a7a66451a8c7c220b59: page_content='1. BASIC LEASE INFORMATION AND DEFINED TERMS.' metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/dg:chunk/dg:chunk/docset:BasicLeaseInformation/dg:chunk', 'id': 'b362dfe776ec5a7a66451a8c7c220b59', 'name': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'structure': 'lim h1', 'tag': 'chunk', 'doc_id': '0b0d765b6e504a6ba54fa76b203e62ec'}PARENT CHUNK c942010baaf76aa4d4657769492f6edb: page_content='OFFICE LEASE\n THIS OFFICE LEASE\n <Lease>(the "Lease") is made and entered into as of <LeaseDate>January 8th, 2018</LeaseDate>, by and between Landlord and Tenant. "Date of this Lease" shall mean the date on which the last one of the Landlord and Tenant has signed this Lease. </Lease>\nW I T N E S S E T H\n <TheTerms> Subject to and on the terms and conditions of this Lease, Landlord leases to Tenant and Tenant hires from Landlord the Premises. </TheTerms>\n1. BASIC LEASE INFORMATION AND DEFINED TERMS.\nThe key business terms of this Lease and the defined terms used in this Lease are as follows:\n1.1 Landlord.\n <Landlord>Catalyst Group LLC </Landlord>\n1.2 Tenant.\n <Tenant>Shorebucks LLC </Tenant>\n1.3 Building.\n <Building>The building containing the Premises located at <PremisesAddress><PremisesStreetAddress><MainStreet>600 </MainStreet><StreetName>Main Street</StreetName></PremisesStreetAddress>, <City>Bellevue</City>, <State>WA</State>, <Premises>98004</Premises></PremisesAddress>. The Building is located within the Project. </Building>' metadata={'xpath': '/docset:OFFICELEASE-section/dg:chunk', 'id': 'c942010baaf76aa4d4657769492f6edb', 'name': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'structure': 'h1 h1 p h1 p lim h1 p lim h1 div lim h1 div lim h1 div', 'tag': 'chunk Lease chunk TheTerms chunk Landlord chunk Tenant chunk Building'}CHUNK a95971d693b7aa0f6640df1fbd18c2ba: page_content='The key business terms of this Lease and the defined terms used in this Lease are as follows:' metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/dg:chunk/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/dg:chunk', 'id': 'a95971d693b7aa0f6640df1fbd18c2ba', 'name': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'structure': 'p', 'tag': 'chunk', 'doc_id': 'c942010baaf76aa4d4657769492f6edb'}PARENT CHUNK f34b649cde7fc4ae156849a56d690495: page_content='W I T N E S S E T H\n <TheTerms> Subject to and on the terms and conditions of this Lease, Landlord leases to Tenant and Tenant hires from Landlord the Premises. </TheTerms>\n1. 
BASIC LEASE INFORMATION AND DEFINED TERMS.\n<BASICLEASEINFORMATIONANDDEFINEDTERMS>The key business terms of this Lease and the defined terms used in this Lease are as follows: </BASICLEASEINFORMATIONANDDEFINEDTERMS>\n1.1 Landlord.\n <Landlord><Landlord>Menlo Group</Landlord>, a <USState>Delaware </USState>limited liability company authorized to transact business in <USState>Arizona</USState>. </Landlord>\n1.2 Tenant.\n <Tenant>Shorebucks LLC </Tenant>\n1.3 Building.\n <Building>The building containing the Premises located at <PremisesAddress><PremisesStreetAddress><Premises>1564 </Premises><Premises>E Broadway Rd</Premises></PremisesStreetAddress>, <City>Tempe</City>, <USState>Arizona </USState><Premises>85282</Premises></PremisesAddress>. The Building is located within the Project. </Building>\n1.4 Project.\n <Project>The parcel of land and the buildings and improvements located on such land known as Shorebucks Office <ShorebucksOfficeAddress><ShorebucksOfficeStreetAddress><ShorebucksOffice>6 </ShorebucksOffice><ShorebucksOffice6>located at <Number>1564 </Number>E Broadway Rd</ShorebucksOffice6></ShorebucksOfficeStreetAddress>, <City>Tempe</City>, <USState>Arizona </USState><Number>85282</Number></ShorebucksOfficeAddress>. The Project is legally described in EXHIBIT "A" to this Lease. </Project>' metadata={'xpath': '/dg:chunk/docset:WITNESSETH-section/dg:chunk', 'id': 'f34b649cde7fc4ae156849a56d690495', 'name': 'Sample Commercial Leases/Shorebucks LLC_AZ.docx', 'source': 'Sample Commercial Leases/Shorebucks LLC_AZ.docx', 'structure': 'h1 p lim h1 div lim h1 div lim h1 div lim h1 div lim h1 div', 'tag': 'chunk TheTerms BASICLEASEINFORMATIONANDDEFINEDTERMS chunk Landlord chunk Tenant chunk Building chunk Project'}CHUNK 21b4d9517f7ccdc0e3a028ce5043a2a0: page_content='1.1 Landlord.\n <Landlord><Landlord>Menlo Group</Landlord>, a <USState>Delaware </USState>limited liability company authorized to transact business in <USState>Arizona</USState>. </Landlord>' metadata={'xpath': '/dg:chunk/docset:WITNESSETH-section/docset:WITNESSETH/dg:chunk[1]/dg:chunk[1]/dg:chunk/dg:chunk[2]/dg:chunk', 'id': '21b4d9517f7ccdc0e3a028ce5043a2a0', 'name': 'Sample Commercial Leases/Shorebucks LLC_AZ.docx', 'source': 'Sample Commercial Leases/Shorebucks LLC_AZ.docx', 'structure': 'lim h1 div', 'tag': 'chunk Landlord', 'doc_id': 'f34b649cde7fc4ae156849a56d690495'}
```

```
from langchain.retrievers.multi_vector import MultiVectorRetriever, SearchType
from langchain.storage import InMemoryStore
from langchain_community.vectorstores.chroma import Chroma
from langchain_openai import OpenAIEmbeddings

# The vectorstore to use to index the child chunks
vectorstore = Chroma(collection_name="big2small", embedding_function=OpenAIEmbeddings())

# The storage layer for the parent documents
store = InMemoryStore()

# The retriever (empty to start)
retriever = MultiVectorRetriever(
    vectorstore=vectorstore,
    docstore=store,
    search_type=SearchType.mmr,  # use max marginal relevance search
    search_kwargs={"k": 2},
)

# Add child chunks to vector store
retriever.vectorstore.add_documents(list(children_by_id.values()))

# Add parent chunks to docstore
retriever.docstore.mset(parents_by_id.items())
```

```
# Query vector store directly, should return chunks
found_chunks = vectorstore.similarity_search(
    "what signs does Birch Street allow on their property?", k=2
)

for chunk in found_chunks:
    print(chunk.page_content)
    print(chunk.metadata[loader.parent_id_key])
```

```
24. SIGNS. <SIGNS>No signage shall be placed by Tenant on any portion of the Project. 
However, Tenant shall be permitted to place a sign bearing its name in a location approved by Landlord near the entrance to the Premises (at Tenant's cost) and will be furnished a single listing of its name in the Building's directory (at Landlord's cost), all in accordance with the criteria adopted <Frequency>from time to time </Frequency>by Landlord for the Project. Any changes or additional listings in the directory shall be furnished (subject to availability of space) for the then Building Standard charge. </SIGNS>43090337ed2409e0da24ee07e2adbe94<TheExterior> Tenant agrees that all signs, awnings, protective gates, security devices and other installations visible from the exterior of the Premises shall be subject to Landlord's prior written approval, shall be subject to the prior approval of the <Org>Landmarks </Org><Landmarks>Preservation Commission </Landmarks>of the City of <USState>New <Org>York</Org></USState>, if required, and shall not interfere with or block either of the adjacent stores, provided, however, that Landlord shall not unreasonably withhold consent for signs that Tenant desires to install. Tenant agrees that any permitted signs, awnings, protective gates, security devices, and other installations shall be installed at Tenant’s sole cost and expense professionally prepared and dignified and subject to Landlord's prior written approval, which shall not be unreasonably withheld, delayed or conditioned, and subject to such reasonable rules and restrictions as Landlord <Frequency>from time to time </Frequency>may impose. Tenant shall submit to Landlord drawings of the proposed signs and other installations, showing the size, color, illumination and general appearance thereof, together with a statement of the manner in which the same are to be affixed to the Premises. Tenant shall not commence the installation of the proposed signs and other installations unless and until Landlord shall have approved the same in writing. . Tenant shall not install any neon sign. The aforesaid signs shall be used solely for the purpose of identifying Tenant's business. No changes shall be made in the signs and other installations without first obtaining Landlord's prior written consent thereto, which consent shall not be unreasonably withheld, delayed or conditioned. Tenant shall, at its own cost and expense, obtain and exhibit to Landlord such permits or certificates of approval as Tenant may be required to obtain from any and all City, State and other authorities having jurisdiction covering the erection, installation, maintenance or use of said signs or other installations, and Tenant shall maintain the said signs and other installations together with any appurtenances thereto in good order and condition and to the satisfaction of the Landlord and in accordance with any and all orders, regulations, requirements and rules of any public authorities having jurisdiction thereover. Landlord consents to Tenant’s Initial Signage described in annexed Exhibit D. </TheExterior>54ddfc3e47f41af7e747b2bc439ea96b
```

```
# Query retriever, should return parents (using MMR since that was set as search_type above)
retrieved_parent_docs = retriever.get_relevant_documents(
    "what signs does Birch Street allow on their property?"
)

for chunk in retrieved_parent_docs:
    print(chunk.page_content)
    print(chunk.metadata["id"])
```

```
21. SERVICES AND UTILITIES. 
<SERVICESANDUTILITIES>Landlord shall have no obligation to provide any utilities or services to the Premises other than passenger elevator service to the Premises. Tenant shall be solely responsible for and shall promptly pay all charges for water, electricity, or any other utility used or consumed in the Premises, including all costs associated with separately metering for the Premises. Tenant shall be responsible for repairs and maintenance to exit lighting, emergency lighting, and fire extinguishers for the Premises. Tenant is responsible for interior janitorial, pest control, and waste removal services. Landlord may at any time change the electrical utility provider for the Building. Tenant’s use of electrical, HVAC, or other services furnished by Landlord shall not exceed, either in voltage, rated capacity, use, or overall load, that which Landlord deems to be standard for the Building. In no event shall Landlord be liable for damages resulting from the failure to furnish any service, and any interruption or failure shall in no manner entitle Tenant to any remedies including abatement of Rent. If at any time during the Lease Term the Project has any type of card access system for the Parking Areas or the Building, Tenant shall purchase access cards for all occupants of the Premises from Landlord at a Building Standard charge and shall comply with Building Standard terms relating to access to the Parking Areas and the Building. </SERVICESANDUTILITIES>22. SECURITY DEPOSIT. <SECURITYDEPOSIT>The Security Deposit shall be held by Landlord as security for Tenant's full and faithful performance of this Lease including the payment of Rent. Tenant grants Landlord a security interest in the Security Deposit. The Security Deposit may be commingled with other funds of Landlord and Landlord shall have no liability for payment of any interest on the Security Deposit. Landlord may apply the Security Deposit to the extent required to cure any default by Tenant. If Landlord so applies the Security Deposit, Tenant shall deliver to Landlord the amount necessary to replenish the Security Deposit to its original sum within <Deliver>five days </Deliver>after notice from Landlord. The Security Deposit shall not be deemed an advance payment of Rent or a measure of damages for any default by Tenant, nor shall it be a defense to any action that Landlord may bring against Tenant. </SECURITYDEPOSIT>23. GOVERNMENTAL REGULATIONS. <GOVERNMENTALREGULATIONS>Tenant, at Tenant's sole cost and expense, shall promptly comply (and shall cause all subtenants and licensees to comply) with all laws, codes, and ordinances of governmental authorities, including the Americans with Disabilities Act of <AmericanswithDisabilitiesActDate>1990 </AmericanswithDisabilitiesActDate>as amended (the "ADA"), and all recorded covenants and restrictions affecting the Project, pertaining to Tenant, its conduct of business, and its use and occupancy of the Premises, including the performance of any work to the Common Areas required because of Tenant's specific use (as opposed to general office use) of the Premises or Alterations to the Premises made by Tenant. </GOVERNMENTALREGULATIONS>24. SIGNS. <SIGNS>No signage shall be placed by Tenant on any portion of the Project. 
However, Tenant shall be permitted to place a sign bearing its name in a location approved by Landlord near the entrance to the Premises (at Tenant's cost) and will be furnished a single listing of its name in the Building's directory (at Landlord's cost), all in accordance with the criteria adopted <Frequency>from time to time </Frequency>by Landlord for the Project. Any changes or additional listings in the directory shall be furnished (subject to availability of space) for the then Building Standard charge. </SIGNS>25. BROKER. <BROKER>Landlord and Tenant each represent and warrant that they have neither consulted nor negotiated with any broker or finder regarding the Premises, except the Landlord's Broker and Tenant's Broker. Tenant shall indemnify, defend, and hold Landlord harmless from and against any claims for commissions from any real estate broker other than Landlord's Broker and Tenant's Broker with whom Tenant has dealt in connection with this Lease. Landlord shall indemnify, defend, and hold Tenant harmless from and against payment of any leasing commission due Landlord's Broker and Tenant's Broker in connection with this Lease and any claims for commissions from any real estate broker other than Landlord's Broker and Tenant's Broker with whom Landlord has dealt in connection with this Lease. The terms of this article shall survive the expiration or earlier termination of this Lease. </BROKER>26. END OF TERM. <ENDOFTERM>Tenant shall surrender the Premises to Landlord at the expiration or sooner termination of this Lease or Tenant's right of possession in good order and condition, broom-clean, except for reasonable wear and tear. All Alterations made by Landlord or Tenant to the Premises shall become Landlord's property on the expiration or sooner termination of the Lease Term. On the expiration or sooner termination of the Lease Term, Tenant, at its expense, shall remove from the Premises all of Tenant's personal property, all computer and telecommunications wiring, and all Alterations that Landlord designates by notice to Tenant. Tenant shall also repair any damage to the Premises caused by the removal. Any items of Tenant's property that shall remain in the Premises after the expiration or sooner termination of the Lease Term, may, at the option of Landlord and without notice, be deemed to have been abandoned, and in that case, those items may be retained by Landlord as its property to be disposed of by Landlord, without accountability or notice to Tenant or any other party, in the manner Landlord shall determine, at Tenant's expense. </ENDOFTERM>27. ATTORNEYS' FEES. <ATTORNEYSFEES>Except as otherwise provided in this Lease, the prevailing party in any litigation or other dispute resolution proceeding, including arbitration, arising out of or in any manner based on or relating to this Lease, including tort actions and actions for injunctive, declaratory, and provisional relief, shall be entitled to recover from the losing party actual attorneys' fees and costs, including fees for litigating the entitlement to or amount of fees or costs owed under this provision, and fees in connection with bankruptcy, appellate, or collection proceedings. No person or entity other than Landlord or Tenant has any right to recover fees under this paragraph. 
In addition, if Landlord becomes a party to any suit or proceeding affecting the Premises or involving this Lease or Tenant's interest under this Lease, other than a suit between Landlord and Tenant, or if Landlord engages counsel to collect any of the amounts owed under this Lease, or to enforce performance of any of the agreements, conditions, covenants, provisions, or stipulations of this Lease, without commencing litigation, then the costs, expenses, and reasonable attorneys' fees and disbursements incurred by Landlord shall be paid to Landlord by Tenant. </ATTORNEYSFEES>43090337ed2409e0da24ee07e2adbe94<TenantsSoleCost> Tenant, at Tenant's sole cost and expense, shall be responsible for the removal and disposal of all of garbage, waste, and refuse from the Premises on a <Frequency>daily </Frequency>basis. Tenant shall cause all garbage, waste and refuse to be stored within the Premises until <Stored>thirty (30) minutes </Stored>before closing, except that Tenant shall be permitted, to the extent permitted by law, to place garbage outside the Premises after the time specified in the immediately preceding sentence for pick up prior to <PickUp>6:00 A.M. </PickUp>next following. Garbage shall be placed at the edge of the sidewalk in front of the Premises at the location furthest from he main entrance to the Building or such other location in front of the Building as may be specified by Landlord. </TenantsSoleCost><ItsSoleCost> Tenant, at its sole cost and expense, agrees to use all reasonable diligence in accordance with the best prevailing methods for the prevention and extermination of vermin, rats, and mice, mold, fungus, allergens, <Bacterium>bacteria </Bacterium>and all other similar conditions in the Premises. Tenant, at Tenant's expense, shall cause the Premises to be exterminated <Exterminated>from time to time </Exterminated>to the reasonable satisfaction of Landlord and shall employ licensed exterminating companies. Landlord shall not be responsible for any cleaning, waste removal, janitorial, or similar services for the Premises, and Tenant sha ll not be entitled to seek any abatement, setoff or credit from the Landlord in the event any conditions described in this Article are found to exist in the Premises. </ItsSoleCost>42B. Sidewalk Use and Maintenance<TheSidewalk> Tenant shall, at its sole cost and expense, keep the sidewalk in front of the Premises 18 inches into the street from the curb clean free of garbage, waste, refuse, excess water, snow, and ice and Tenant shall pay, as additional rent, any fine, cost, or expense caused by Tenant's failure to do so. In the event Tenant operates a sidewalk café, Tenant shall, at its sole cost and expense, maintain, repair, and replace as necessary, the sidewalk in front of the Premises and the metal trapdoor leading to the basement of the Premises, if any. Tenant shall post warning signs and cones on all sides of any side door when in use and attach a safety bar across any such door at all times when open. 
</TheSidewalk><Display> In no event shall Tenant use, or permit to be used, the space adjacent to or any other space outside of the Premises, for display, sale or any other similar undertaking; except [1] in the event of a legal and licensed “street fair” type program or [<Number>2</Number>] if the local zoning, Community Board [if applicable] and other municipal laws, rules and regulations, allow for sidewalk café use and, if such I s the case, said operation shall be in strict accordance with all of the aforesaid requirements and conditions. . In no event shall Tenant use, or permit to be used, any advertising medium and/or loud speaker and/or sound amplifier and/or radio or television broadcast which may be heard outside of the Premises or which does not comply with the reasonable rules and regulations of Landlord which then will be in effect. </Display>42C. Store Front Maintenance <TheBulkheadAndSecurityGate> Tenant agrees to wash the storefront, including the bulkhead and security gate, from the top to the ground, monthly or more often as Landlord reasonably requests and make all repairs and replacements as and when deemed necessary by Landlord, to all windows and plate and ot her glass in or about the Premises and the security gate, if any. In case of any default by Tenant in maintaining the storefront as herein provided, Landlord may do so at its own expense and bill the cost thereof to Tenant as additional rent. </TheBulkheadAndSecurityGate>42D. Music, Noise, and Vibration4474c92ae7ccec9184ed2fef9f072734 ```
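
To close the loop on small-to-big retrieval, you could feed this `MultiVectorRetriever` into the same kind of `RetrievalQA` chain used earlier in this notebook, so that similarity search runs over the small child chunks while the LLM answers from the expanded parent chunks. The snippet below is a minimal sketch, assuming the `retriever` built above and an OpenAI API key in the environment; the question string is just an illustrative example.

```
from langchain.chains import RetrievalQA
from langchain_openai import OpenAI

# Minimal sketch: questions are matched against the small child chunks,
# but the chain "stuffs" the larger parent chunks returned by the
# MultiVectorRetriever above into the LLM prompt.
small_to_big_qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=retriever,  # the MultiVectorRetriever built above
    return_source_documents=True,
)

response = small_to_big_qa("what signs does Birch Street allow on their property?")
print(response["result"])
```

Because the retriever returns parent chunks, the answer is grounded in much more surrounding context than the individual child chunk that matched the query.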
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:24.057Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/docugami/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/docugami/", "description": "This notebook covers how to load documents from Docugami. It provides", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4397", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"docugami\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:22 GMT", "etag": "W/\"01358f94d9f3e9349155547a5ee39572\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::rn94v-1713753562911-3d99128c6609" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/docugami/", "property": "og:url" }, { "content": "Docugami | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "This notebook covers how to load documents from Docugami. It provides", "property": "og:description" } ], "title": "Docugami | 🦜️🔗 LangChain" }
Docugami This notebook covers how to load documents from Docugami. It provides the advantages of using this system over alternative data loaders. Prerequisites​ Install necessary python packages. Grab an access token for your workspace, and make sure it is set as the DOCUGAMI_API_KEY environment variable. Grab some docset and document IDs for your processed documents, as described here: https://help.docugami.com/home/docugami-api # You need the dgml-utils package to use the DocugamiLoader (run pip install directly without "poetry run" if you are not using poetry) !poetry run pip install docugami-langchain dgml-utils==0.3.0 --upgrade --quiet Quick start​ Create a Docugami workspace (free trials available) Add your documents (PDF, DOCX or DOC) and allow Docugami to ingest and cluster them into sets of similar documents, e.g. NDAs, Lease Agreements, and Service Agreements. There is no fixed set of document types supported by the system, the clusters created depend on your particular documents, and you can change the docset assignments later. Create an access token via the Developer Playground for your workspace. Detailed instructions Explore the Docugami API to get a list of your processed docset IDs, or just the document IDs for a particular docset. Use the DocugamiLoader as detailed below, to get rich semantic chunks for your documents. Optionally, build and publish one or more reports or abstracts. This helps Docugami improve the semantic XML with better tags based on your preferences, which are then added to the DocugamiLoader output as metadata. Use techniques like self-querying retriever to do high accuracy Document QA. Advantages vs Other Chunking Techniques​ Appropriate chunking of your documents is critical for retrieval from documents. Many chunking techniques exist, including simple ones that rely on whitespace and recursive chunk splitting based on character length. Docugami offers a different approach: Intelligent Chunking: Docugami breaks down every document into a hierarchical semantic XML tree of chunks of varying sizes, from single words or numerical values to entire sections. These chunks follow the semantic contours of the document, providing a more meaningful representation than arbitrary length or simple whitespace-based chunking. Semantic Annotations: Chunks are annotated with semantic tags that are coherent across the document set, facilitating consistent hierarchical queries across multiple documents, even if they are written and formatted differently. For example, in set of lease agreements, you can easily identify key provisions like the Landlord, Tenant, or Renewal Date, as well as more complex information such as the wording of any sub-lease provision or whether a specific jurisdiction has an exception section within a Termination Clause. Structured Representation: In addition, the XML tree indicates the structural contours of every document, using attributes denoting headings, paragraphs, lists, tables, and other common elements, and does that consistently across all supported document formats, such as scanned PDFs or DOCX files. It appropriately handles long-form document characteristics like page headers/footers or multi-column flows for clean text extraction. Additional Metadata: Chunks are also annotated with additional metadata, if a user has been using Docugami. This additional metadata can be used for high-accuracy Document QA without context window restrictions. See detailed code walk-through below. 
import os from docugami_langchain.document_loaders import DocugamiLoader Load Documents​ If the DOCUGAMI_API_KEY environment variable is set, there is no need to pass it in to the loader explicitly otherwise you can pass it in as the access_token parameter. DOCUGAMI_API_KEY = os.environ.get("DOCUGAMI_API_KEY") docset_id = "26xpy3aes7xp" document_ids = ["d7jqdzcj50sj", "cgd1eacfkchw"] # To load all docs in the given docset ID, just don't provide document_ids loader = DocugamiLoader(docset_id=docset_id, document_ids=document_ids) chunks = loader.load() len(chunks) The metadata for each Document (really, a chunk of an actual PDF, DOC or DOCX) contains some useful additional information: id and source: ID and Name of the file (PDF, DOC or DOCX) the chunk is sourced from within Docugami. xpath: XPath inside the XML representation of the document, for the chunk. Useful for source citations directly to the actual chunk inside the document XML. structure: Structural attributes of the chunk, e.g. h1, h2, div, table, td, etc. Useful to filter out certain kinds of chunks if needed by the caller. tag: Semantic tag for the chunk, using various generative and extractive techniques. More details here: https://github.com/docugami/DFM-benchmarks You can control chunking behavior by setting the following properties on the DocugamiLoader instance: You can set min and max chunk size, which the system tries to adhere to with minimal truncation. You can set loader.min_text_length and loader.max_text_length to control these. By default, only the text for chunks is returned. However, Docugami’s XML knowledge graph has additional rich information including semantic tags for entities inside the chunk. Set loader.include_xml_tags = True if you want the additional xml metadata on the returned chunks. In addition, you can set loader.parent_hierarchy_levels if you want Docugami to return parent chunks in the chunks it returns. The child chunks point to the parent chunks via the loader.parent_id_key value. This is useful e.g. with the MultiVector Retriever for small-to-big retrieval. See detailed example later in this notebook. loader.min_text_length = 64 loader.include_xml_tags = True chunks = loader.load() for chunk in chunks[:5]: print(chunk) page_content='MASTER SERVICES AGREEMENT\n <ThisServicesAgreement> This Services Agreement (the “Agreement”) sets forth terms under which <Company>MagicSoft, Inc. </Company>a <Org><USState>Washington </USState>Corporation </Org>(“Company”) located at <CompanyAddress><CompanyStreetAddress><Company>600 </Company><Company>4th Ave</Company></CompanyStreetAddress>, <Company>Seattle</Company>, <Client>WA </Client><ProvideServices>98104 </ProvideServices></CompanyAddress>shall provide services to <Client>Daltech, Inc.</Client>, a <Company><USState>Washington </USState>Corporation </Company>(the “Client”) located at <ClientAddress><ClientStreetAddress><Client>701 </Client><Client>1st St</Client></ClientStreetAddress>, <Client>Kirkland</Client>, <State>WA </State><Client>98033</Client></ClientAddress>. This Agreement is effective as of <EffectiveDate>February 15, 2021 </EffectiveDate>(“Effective Date”). 
</ThisServicesAgreement>' metadata={'xpath': '/dg:chunk/docset:MASTERSERVICESAGREEMENT-section/dg:chunk', 'id': 'c28554d0af5114e2b102e6fc4dcbbde5', 'name': 'Master Services Agreement - Daltech.docx', 'source': 'Master Services Agreement - Daltech.docx', 'structure': 'h1 p', 'tag': 'chunk ThisServicesAgreement', 'Liability': '', 'Workers Compensation Insurance': '$1,000,000', 'Limit': '$1,000,000', 'Commercial General Liability Insurance': '$2,000,000', 'Technology Professional Liability Errors Omissions Policy': '$5,000,000', 'Excess Liability Umbrella Coverage': '$9,000,000', 'Client': 'Daltech, Inc.', 'Services Agreement Date': 'INITIAL STATEMENT OF WORK (SOW) The purpose of this SOW is to describe the Software and Services that Company will initially provide to Daltech, Inc. the “Client”) under the terms and conditions of the Services Agreement entered into between the parties on June 15, 2021', 'Completion of the Services by Company Date': 'February 15, 2022', 'Charge': 'one hundred percent (100%)', 'Company': 'MagicSoft, Inc.', 'Effective Date': 'February 15, 2021', 'Start Date': '03/15/2021', 'Scheduled Onsite Visits Are Cancelled': 'ten (10) working days', 'Limit on Liability': '', 'Liability Cap': '', 'Business Automobile Liability': 'Business Automobile Liability covering all vehicles that Company owns, hires or leases with a limit of no less than $1,000,000 (combined single limit for bodily injury and property damage) for each accident.', 'Contractual Liability Coverage': 'Commercial General Liability insurance including Contractual Liability Coverage , with coverage for products liability, completed operations, property damage and bodily injury, including death , with an aggregate limit of no less than $2,000,000 . This policy shall name Client as an additional insured with respect to the provision of services provided under this Agreement. This policy shall include a waiver of subrogation against Client.', 'Technology Professional Liability Errors Omissions': 'Technology Professional Liability Errors & Omissions policy (which includes Cyber Risk coverage and Computer Security and Privacy Liability coverage) with a limit of no less than $5,000,000 per occurrence and in the aggregate.'} page_content='A. STANDARD SOFTWARE AND SERVICES AGREEMENT\n 1. Deliverables.\n Company shall provide Client with software, technical support, product management, development, and <_testRef>testing </_testRef>services (“Services”) to the Client as described on one or more Statements of Work signed by Company and Client that reference this Agreement (“SOW” or “Statement of Work”). Company shall perform Services in a prompt manner and have the final product or service (“Deliverable”) ready for Client no later than the due date specified in the applicable SOW (“Completion Date”). This due date is subject to change in accordance with the Change Order process defined in the applicable SOW. Client shall assist Company by promptly providing all information requests known or available and relevant to the Services in a timely manner.' 
metadata={'xpath': '/dg:chunk/docset:MASTERSERVICESAGREEMENT-section/docset:MASTERSERVICESAGREEMENT/dg:chunk[1]/docset:Standard/dg:chunk[1]/dg:chunk[1]', 'id': 'de60160d328df10fa2637637c803d2d4', 'name': 'Master Services Agreement - Daltech.docx', 'source': 'Master Services Agreement - Daltech.docx', 'structure': 'lim h1 lim h1 div', 'tag': 'chunk', 'Liability': '', 'Workers Compensation Insurance': '$1,000,000', 'Limit': '$1,000,000', 'Commercial General Liability Insurance': '$2,000,000', 'Technology Professional Liability Errors Omissions Policy': '$5,000,000', 'Excess Liability Umbrella Coverage': '$9,000,000', 'Client': 'Daltech, Inc.', 'Services Agreement Date': 'INITIAL STATEMENT OF WORK (SOW) The purpose of this SOW is to describe the Software and Services that Company will initially provide to Daltech, Inc. the “Client”) under the terms and conditions of the Services Agreement entered into between the parties on June 15, 2021', 'Completion of the Services by Company Date': 'February 15, 2022', 'Charge': 'one hundred percent (100%)', 'Company': 'MagicSoft, Inc.', 'Effective Date': 'February 15, 2021', 'Start Date': '03/15/2021', 'Scheduled Onsite Visits Are Cancelled': 'ten (10) working days', 'Limit on Liability': '', 'Liability Cap': '', 'Business Automobile Liability': 'Business Automobile Liability covering all vehicles that Company owns, hires or leases with a limit of no less than $1,000,000 (combined single limit for bodily injury and property damage) for each accident.', 'Contractual Liability Coverage': 'Commercial General Liability insurance including Contractual Liability Coverage , with coverage for products liability, completed operations, property damage and bodily injury, including death , with an aggregate limit of no less than $2,000,000 . This policy shall name Client as an additional insured with respect to the provision of services provided under this Agreement. This policy shall include a waiver of subrogation against Client.', 'Technology Professional Liability Errors Omissions': 'Technology Professional Liability Errors & Omissions policy (which includes Cyber Risk coverage and Computer Security and Privacy Liability coverage) with a limit of no less than $5,000,000 per occurrence and in the aggregate.'} page_content='2. Onsite Services.\n 2.1 Onsite visits will be charged on a <Frequency>daily </Frequency>basis (minimum <OnsiteVisits>8 hours</OnsiteVisits>).' metadata={'xpath': '/dg:chunk/docset:MASTERSERVICESAGREEMENT-section/docset:MASTERSERVICESAGREEMENT/dg:chunk[1]/docset:Standard/dg:chunk[3]/dg:chunk[1]', 'id': 'db18315b437ac2de6b555d2d8ef8f893', 'name': 'Master Services Agreement - Daltech.docx', 'source': 'Master Services Agreement - Daltech.docx', 'structure': 'lim h1 lim p', 'tag': 'chunk', 'Liability': '', 'Workers Compensation Insurance': '$1,000,000', 'Limit': '$1,000,000', 'Commercial General Liability Insurance': '$2,000,000', 'Technology Professional Liability Errors Omissions Policy': '$5,000,000', 'Excess Liability Umbrella Coverage': '$9,000,000', 'Client': 'Daltech, Inc.', 'Services Agreement Date': 'INITIAL STATEMENT OF WORK (SOW) The purpose of this SOW is to describe the Software and Services that Company will initially provide to Daltech, Inc. 
the “Client”) under the terms and conditions of the Services Agreement entered into between the parties on June 15, 2021', 'Completion of the Services by Company Date': 'February 15, 2022', 'Charge': 'one hundred percent (100%)', 'Company': 'MagicSoft, Inc.', 'Effective Date': 'February 15, 2021', 'Start Date': '03/15/2021', 'Scheduled Onsite Visits Are Cancelled': 'ten (10) working days', 'Limit on Liability': '', 'Liability Cap': '', 'Business Automobile Liability': 'Business Automobile Liability covering all vehicles that Company owns, hires or leases with a limit of no less than $1,000,000 (combined single limit for bodily injury and property damage) for each accident.', 'Contractual Liability Coverage': 'Commercial General Liability insurance including Contractual Liability Coverage , with coverage for products liability, completed operations, property damage and bodily injury, including death , with an aggregate limit of no less than $2,000,000 . This policy shall name Client as an additional insured with respect to the provision of services provided under this Agreement. This policy shall include a waiver of subrogation against Client.', 'Technology Professional Liability Errors Omissions': 'Technology Professional Liability Errors & Omissions policy (which includes Cyber Risk coverage and Computer Security and Privacy Liability coverage) with a limit of no less than $5,000,000 per occurrence and in the aggregate.'} page_content='2.2 <Expenses>Time and expenses will be charged based on actuals unless otherwise described in an Order Form or accompanying SOW. </Expenses>' metadata={'xpath': '/dg:chunk/docset:MASTERSERVICESAGREEMENT-section/docset:MASTERSERVICESAGREEMENT/dg:chunk[1]/docset:Standard/dg:chunk[3]/dg:chunk[2]/docset:ADailyBasis/dg:chunk[2]/dg:chunk', 'id': '506220fa472d5c48c8ee3db78c1122c1', 'name': 'Master Services Agreement - Daltech.docx', 'source': 'Master Services Agreement - Daltech.docx', 'structure': 'lim p', 'tag': 'chunk Expenses', 'Liability': '', 'Workers Compensation Insurance': '$1,000,000', 'Limit': '$1,000,000', 'Commercial General Liability Insurance': '$2,000,000', 'Technology Professional Liability Errors Omissions Policy': '$5,000,000', 'Excess Liability Umbrella Coverage': '$9,000,000', 'Client': 'Daltech, Inc.', 'Services Agreement Date': 'INITIAL STATEMENT OF WORK (SOW) The purpose of this SOW is to describe the Software and Services that Company will initially provide to Daltech, Inc. the “Client”) under the terms and conditions of the Services Agreement entered into between the parties on June 15, 2021', 'Completion of the Services by Company Date': 'February 15, 2022', 'Charge': 'one hundred percent (100%)', 'Company': 'MagicSoft, Inc.', 'Effective Date': 'February 15, 2021', 'Start Date': '03/15/2021', 'Scheduled Onsite Visits Are Cancelled': 'ten (10) working days', 'Limit on Liability': '', 'Liability Cap': '', 'Business Automobile Liability': 'Business Automobile Liability covering all vehicles that Company owns, hires or leases with a limit of no less than $1,000,000 (combined single limit for bodily injury and property damage) for each accident.', 'Contractual Liability Coverage': 'Commercial General Liability insurance including Contractual Liability Coverage , with coverage for products liability, completed operations, property damage and bodily injury, including death , with an aggregate limit of no less than $2,000,000 . 
This policy shall name Client as an additional insured with respect to the provision of services provided under this Agreement. This policy shall include a waiver of subrogation against Client.', 'Technology Professional Liability Errors Omissions': 'Technology Professional Liability Errors & Omissions policy (which includes Cyber Risk coverage and Computer Security and Privacy Liability coverage) with a limit of no less than $5,000,000 per occurrence and in the aggregate.'} page_content='2.3 <RegularWorkingHours>All work will be executed during regular working hours <RegularWorkingHours>Monday</RegularWorkingHours>-<Weekday>Friday </Weekday><RegularWorkingHours><RegularWorkingHours>0800</RegularWorkingHours>-<Number>1900</Number></RegularWorkingHours>. For work outside of these hours on weekdays, Company will charge <Charge>one hundred percent (100%) </Charge>of the regular hourly rate and <Charge>two hundred percent (200%) </Charge>for Saturdays, Sundays and public holidays applicable to Company. </RegularWorkingHours>' metadata={'xpath': '/dg:chunk/docset:MASTERSERVICESAGREEMENT-section/docset:MASTERSERVICESAGREEMENT/dg:chunk[1]/docset:Standard/dg:chunk[3]/dg:chunk[2]/docset:ADailyBasis/dg:chunk[3]/dg:chunk', 'id': 'dac7a3ded61b5c4f3e59771243ea46c1', 'name': 'Master Services Agreement - Daltech.docx', 'source': 'Master Services Agreement - Daltech.docx', 'structure': 'lim p', 'tag': 'chunk RegularWorkingHours', 'Liability': '', 'Workers Compensation Insurance': '$1,000,000', 'Limit': '$1,000,000', 'Commercial General Liability Insurance': '$2,000,000', 'Technology Professional Liability Errors Omissions Policy': '$5,000,000', 'Excess Liability Umbrella Coverage': '$9,000,000', 'Client': 'Daltech, Inc.', 'Services Agreement Date': 'INITIAL STATEMENT OF WORK (SOW) The purpose of this SOW is to describe the Software and Services that Company will initially provide to Daltech, Inc. the “Client”) under the terms and conditions of the Services Agreement entered into between the parties on June 15, 2021', 'Completion of the Services by Company Date': 'February 15, 2022', 'Charge': 'one hundred percent (100%)', 'Company': 'MagicSoft, Inc.', 'Effective Date': 'February 15, 2021', 'Start Date': '03/15/2021', 'Scheduled Onsite Visits Are Cancelled': 'ten (10) working days', 'Limit on Liability': '', 'Liability Cap': '', 'Business Automobile Liability': 'Business Automobile Liability covering all vehicles that Company owns, hires or leases with a limit of no less than $1,000,000 (combined single limit for bodily injury and property damage) for each accident.', 'Contractual Liability Coverage': 'Commercial General Liability insurance including Contractual Liability Coverage , with coverage for products liability, completed operations, property damage and bodily injury, including death , with an aggregate limit of no less than $2,000,000 . This policy shall name Client as an additional insured with respect to the provision of services provided under this Agreement. 
This policy shall include a waiver of subrogation against Client.', 'Technology Professional Liability Errors Omissions': 'Technology Professional Liability Errors & Omissions policy (which includes Cyber Risk coverage and Computer Security and Privacy Liability coverage) with a limit of no less than $5,000,000 per occurrence and in the aggregate.'} Basic Use: Docugami Loader for Document QA​ You can use the Docugami Loader like a standard loader for Document QA over multiple docs, albeit with much better chunks that follow the natural contours of the document. There are many great tutorials on how to do this, e.g. this one. We can just use the same code, but use the DocugamiLoader for better chunking, instead of loading text or PDF files directly with basic splitting techniques. !poetry run pip install --upgrade langchain-openai tiktoken chromadb hnswlib # For this example, we already have a processed docset for a set of lease documents loader = DocugamiLoader(docset_id="zo954yqy53wp") chunks = loader.load() # strip semantic metadata intentionally, to test how things work without semantic metadata for chunk in chunks: stripped_metadata = chunk.metadata.copy() for key in chunk.metadata: if key not in ["name", "xpath", "id", "structure"]: # remove semantic metadata del stripped_metadata[key] chunk.metadata = stripped_metadata print(len(chunks)) The documents returned by the loader are already split, so we don’t need to use a text splitter. Optionally, we can use the metadata on each document, for example the structure or tag attributes, to do any post-processing we want. We will just use the output of the DocugamiLoader as-is to set up a retrieval QA chain the usual way. from langchain.chains import RetrievalQA from langchain_community.vectorstores.chroma import Chroma from langchain_openai import OpenAI, OpenAIEmbeddings embedding = OpenAIEmbeddings() vectordb = Chroma.from_documents(documents=chunks, embedding=embedding) retriever = vectordb.as_retriever() qa_chain = RetrievalQA.from_chain_type( llm=OpenAI(), chain_type="stuff", retriever=retriever, return_source_documents=True ) # Try out the retriever with an example query qa_chain("What can tenants do with signage on their properties?") {'query': 'What can tenants do with signage on their properties?', 'result': ' Tenants can place or attach signage (digital or otherwise) to their property after receiving written permission from the landlord, which permission shall not be unreasonably withheld. The signage must conform to all applicable laws, ordinances, etc. governing the same, and tenants must remove all such signs by the termination of the lease.', 'source_documents': [Document(page_content='6.01 Signage. Tenant may place or attach to the Premises signs (digital or otherwise) or other such identification as needed after receiving written permission from the Landlord, which permission shall not be unreasonably withheld. Any damage caused to the Premises by the Tenant’s erecting or removing such signs shall be repaired promptly by the Tenant at the Tenant’s expense. Any signs or other form of identification allowed must conform to all applicable laws, ordinances, etc. governing the same. Tenant also agrees to have any window or glass identification completely removed and cleaned at its expense promptly upon vacating the Premises. 
ARTICLE VII UTILITIES', metadata={'id': '1c290eea05915ba0f24c4a1ffc05d6f3', 'name': 'Sample Commercial Leases/TruTone Lane 6.pdf', 'structure': 'lim h1', 'xpath': '/dg:chunk/dg:chunk/dg:chunk[2]/dg:chunk[1]/docset:TheApprovedUse/dg:chunk[12]/dg:chunk[1]'}), Document(page_content='6.01 Signage. Tenant may place or attach to the Premises signs (digital or otherwise) or other such identification as needed after receiving written permission from the Landlord, which permission shall not be unreasonably withheld. Any damage caused to the Premises by the Tenant’s erecting or removing such signs shall be repaired promptly by the Tenant at the Tenant’s expense. Any signs or other form of identification allowed must conform to all applicable laws, ordinances, etc. governing the same. Tenant also agrees to have any window or glass identification completely removed and cleaned at its expense promptly upon vacating the Premises. ARTICLE VII UTILITIES', metadata={'id': '1c290eea05915ba0f24c4a1ffc05d6f3', 'name': 'Sample Commercial Leases/TruTone Lane 2.pdf', 'structure': 'lim h1', 'xpath': '/dg:chunk/dg:chunk/dg:chunk[2]/dg:chunk[1]/docset:TheApprovedUse/dg:chunk[12]/dg:chunk[1]'}), Document(page_content='Tenant may place or attach to the Premises signs (digital or otherwise) or other such identification as needed after receiving written permission from the Landlord, which permission shall not be unreasonably withheld. Any damage caused to the Premises by the Tenant’s erecting or removing such signs shall be repaired promptly by the Tenant at the Tenant’s expense. Any signs or other form of identification allowed must conform to all applicable laws, ordinances, etc. governing the same. Tenant also agrees to have any window or glass identification completely removed and cleaned at its expense promptly upon vacating the Premises.', metadata={'id': '58d268162ecc36d8633b7bc364afcb8c', 'name': 'Sample Commercial Leases/TruTone Lane 2.docx', 'structure': 'div', 'xpath': '/docset:OFFICELEASEAGREEMENT-section/docset:OFFICELEASEAGREEMENT/dg:chunk/docset:ARTICLEVISIGNAGE-section/docset:ARTICLEVISIGNAGE/docset:_601Signage'}), Document(page_content='8. SIGNS:\n Tenant shall not install signs upon the Premises without Landlord’s prior written approval, which approval shall not be unreasonably withheld or delayed, and any such signage shall be subject to any applicable governmental laws, ordinances, regulations, and other requirements. Tenant shall remove all such signs by the terminations of this Lease. Such installations and removals shall be made in such a manner as to avoid injury or defacement of the Building and other improvements, and Tenant shall repair any injury or defacement, including without limitation discoloration caused by such installations and/or removal.', metadata={'id': '6b7d88f0c979c65d5db088fc177fa81f', 'name': 'Lease Agreements/Bioplex, Inc.pdf', 'structure': 'lim h1 div', 'xpath': '/dg:chunk/docset:WITNESSETH-section/docset:WITNESSETH/dg:chunk/docset:TheObligation/dg:chunk[8]/dg:chunk'})]} Using Docugami Knowledge Graph for High Accuracy Document QA​ One issue with large documents is that the correct answer to your question may depend on chunks that are far apart in the document. Typical chunking techniques, even with overlap, will struggle with providing the LLM sufficent context to answer such questions. 
With upcoming very large context LLMs, it may be possible to stuff a lot of tokens, perhaps even entire documents, inside the context but this will still hit limits at some point with very long documents, or a lot of documents. For example, if we ask a more complex question that requires the LLM to draw on chunks from different parts of the document, even OpenAI’s powerful LLM is unable to answer correctly. chain_response = qa_chain("What is rentable area for the property owned by DHA Group?") chain_response["result"] # correct answer should be 13,500 sq ft chain_response["source_documents"] [Document(page_content='1.6 Rentable Area of the Premises.', metadata={'id': '5b39a1ae84d51682328dca1467be211f', 'name': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'structure': 'lim h1', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/dg:chunk/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:CatalystGroup/dg:chunk[6]/dg:chunk'}), Document(page_content='1.6 Rentable Area of the Premises.', metadata={'id': '5b39a1ae84d51682328dca1467be211f', 'name': 'Sample Commercial Leases/Shorebucks LLC_AZ.pdf', 'structure': 'lim h1', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE-section/docset:WITNESSETH-section/docset:WITNESSETH/dg:chunk/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:MenloGroup/dg:chunk[6]/dg:chunk'}), Document(page_content='1.6 Rentable Area of the Premises.', metadata={'id': '5b39a1ae84d51682328dca1467be211f', 'name': 'Sample Commercial Leases/Shorebucks LLC_FL.pdf', 'structure': 'lim h1', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:Florida-section/docset:Florida/docset:Shorebucks/dg:chunk[5]/dg:chunk'}), Document(page_content='1.6 Rentable Area of the Premises.', metadata={'id': '5b39a1ae84d51682328dca1467be211f', 'name': 'Sample Commercial Leases/Shorebucks LLC_TX.pdf', 'structure': 'lim h1', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/dg:chunk/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:LandmarkLlc/dg:chunk[6]/dg:chunk'})] At first glance the answer may seem reasonable, but it is incorrect. If you review the source chunks carefully for this answer, you will see that the chunking of the document did not end up putting the Landlord name and the rentable area in the same context, and produced irrelevant chunks therefore the answer is incorrect (should be 13,500 sq ft) Docugami can help here. Chunks are annotated with additional metadata created using different techniques if a user has been using Docugami. More technical approaches will be added later. 
Specifically, let’s ask Docugami to return XML tags on its output, as well as additional metadata: loader = DocugamiLoader(docset_id="zo954yqy53wp") loader.include_xml_tags = ( True # for additional semantics from the Docugami knowledge graph ) chunks = loader.load() print(chunks[0].metadata) {'xpath': '/docset:OFFICELEASE-section/dg:chunk', 'id': '47297e277e556f3ce8b570047304560b', 'name': 'Sample Commercial Leases/Shorebucks LLC_AZ.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_AZ.pdf', 'structure': 'h1 h1 p', 'tag': 'chunk Lease', 'Lease Date': 'March 29th , 2019', 'Landlord': 'Menlo Group', 'Tenant': 'Shorebucks LLC', 'Premises Address': '1564 E Broadway Rd , Tempe , Arizona 85282', 'Term of Lease': '96 full calendar months', 'Square Feet': '16,159'} We can use a self-querying retriever to improve our query accuracy, using this additional metadata: !poetry run pip install --upgrade lark --quiet from langchain.chains.query_constructor.schema import AttributeInfo from langchain.retrievers.self_query.base import SelfQueryRetriever from langchain_community.vectorstores.chroma import Chroma EXCLUDE_KEYS = ["id", "xpath", "structure"] metadata_field_info = [ AttributeInfo( name=key, description=f"The {key} for this chunk", type="string", ) for key in chunks[0].metadata if key.lower() not in EXCLUDE_KEYS ] document_content_description = "Contents of this chunk" llm = OpenAI(temperature=0) vectordb = Chroma.from_documents(documents=chunks, embedding=embedding) retriever = SelfQueryRetriever.from_llm( llm, vectordb, document_content_description, metadata_field_info, verbose=True ) qa_chain = RetrievalQA.from_chain_type( llm=OpenAI(), chain_type="stuff", retriever=retriever, return_source_documents=True, verbose=True, ) Let’s run the same question again. It returns the correct result since all the chunks have metadata key/value pairs on them carrying key information about the document even if this information is physically very far away from the source chunk used to generate the answer. qa_chain( "What is rentable area for the property owned by DHA Group?" ) # correct answer should be 13,500 sq ft > Entering new RetrievalQA chain... > Finished chain. {'query': 'What is rentable area for the property owned by DHA Group?', 'result': ' The rentable area of the property owned by DHA Group is 13,500 square feet.', 'source_documents': [Document(page_content='1.6 Rentable Area of the Premises.', metadata={'Landlord': 'DHA Group', 'Lease Date': 'March 29th , 2019', 'Premises Address': '111 Bauer Dr , Oakland , New Jersey , 07436', 'Square Feet': '13,500', 'Tenant': 'Shorebucks LLC', 'Term of Lease': '84 full calendar months', 'id': '5b39a1ae84d51682328dca1467be211f', 'name': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'structure': 'lim h1', 'tag': 'chunk', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/dg:chunk/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/dg:chunk[6]/dg:chunk'}), Document(page_content='<RentableAreaofthePremises><SquareFeet>13,500 </SquareFeet>square feet. This square footage figure includes an add-on factor for Common Areas in the Building and has been agreed upon by the parties as final and correct and is not subject to challenge or dispute by either party. 
</RentableAreaofthePremises>', metadata={'Landlord': 'DHA Group', 'Lease Date': 'March 29th , 2019', 'Premises Address': '111 Bauer Dr , Oakland , New Jersey , 07436', 'Square Feet': '13,500', 'Tenant': 'Shorebucks LLC', 'Term of Lease': '84 full calendar months', 'id': '4c06903d087f5a83e486ee42cd702d31', 'name': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'RentableAreaofthePremises', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/dg:chunk/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/dg:chunk[6]/docset:RentableAreaofthePremises-section/docset:RentableAreaofthePremises'}), Document(page_content='<TheTermAnnualMarketRent>shall mean (i) for the initial Lease Year (“Year 1”) <Money>$2,239,748.00 </Money>per year (i.e., the product of the Rentable Area of the Premises multiplied by <Money>$82.00</Money>) (the “Year 1 Market Rent Hurdle”); (ii) for the Lease Year thereafter, <Percent>one hundred three percent (103%) </Percent>of the Year 1 Market Rent Hurdle, and (iii) for each Lease Year thereafter until the termination or expiration of this Lease, the Annual Market Rent Threshold shall be <AnnualMarketRentThreshold>one hundred three percent (103%) </AnnualMarketRentThreshold>of the Annual Market Rent Threshold for the immediately prior Lease Year. </TheTermAnnualMarketRent>', metadata={'Landlord': 'DHA Group', 'Lease Date': 'March 29th , 2019', 'Premises Address': '111 Bauer Dr , Oakland , New Jersey , 07436', 'Square Feet': '13,500', 'Tenant': 'Shorebucks LLC', 'Term of Lease': '84 full calendar months', 'id': '6b90beeadace5d4d12b25706fb48e631', 'name': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'TheTermAnnualMarketRent', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCredit-section/docset:GrossRentCredit/dg:chunk/dg:chunk/dg:chunk/dg:chunk[2]/docset:PercentageRent/dg:chunk[2]/dg:chunk[2]/docset:TenantSRevenue/dg:chunk[2]/docset:TenantSRevenue/dg:chunk[3]/docset:TheTermAnnualMarketRent-section/docset:TheTermAnnualMarketRent'}), Document(page_content='1.11 Percentage Rent.\n (a) <GrossRevenue><Percent>55% </Percent>of Gross Revenue to Landlord until Landlord receives Percentage Rent in an amount equal to the Annual Market Rent Hurdle (as escalated); and </GrossRevenue>', metadata={'Landlord': 'DHA Group', 'Lease Date': 'March 29th , 2019', 'Premises Address': '111 Bauer Dr , Oakland , New Jersey , 07436', 'Square Feet': '13,500', 'Tenant': 'Shorebucks LLC', 'Term of Lease': '84 full calendar months', 'id': 'c8bb9cbedf65a578d9db3f25f519dd3d', 'name': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'structure': 'lim h1 lim p', 'tag': 'chunk GrossRevenue', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCredit-section/docset:GrossRentCredit/dg:chunk/dg:chunk/dg:chunk/docset:PercentageRent/dg:chunk[1]/dg:chunk[1]'})]} This time the answer is correct, since the self-querying retriever created a filter on the landlord attribute of the metadata, correctly filtering to document that 
is specifically about the DHA Group landlord. The resulting source chunks are all relevant to this landlord, and this improves answer accuracy even though the landlord is not directly mentioned in the specific chunk that contains the correct answer.

## Advanced Topic: Small-to-Big Retrieval with Document Knowledge Graph Hierarchy

Documents are inherently semi-structured, and the DocugamiLoader is able to navigate the semantic and structural contours of the document to provide parent chunk references on the chunks it returns. This is useful e.g. with the MultiVector Retriever for small-to-big retrieval. To get parent chunk references, you can set `loader.parent_hierarchy_levels` to a non-zero value.

```
from typing import Dict, List

from docugami_langchain.document_loaders import DocugamiLoader
from langchain_core.documents import Document

loader = DocugamiLoader(docset_id="zo954yqy53wp")
loader.include_xml_tags = (
    True  # for additional semantics from the Docugami knowledge graph
)
loader.parent_hierarchy_levels = 3  # for expanded context
loader.max_text_length = (
    1024 * 8
)  # 8K chars are roughly 2K tokens (ref: https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them)
loader.include_project_metadata_in_doc_metadata = (
    False  # Not filtering on vector metadata, so remove to lighten the vectors
)

chunks: List[Document] = loader.load()

# build separate maps of parent and child chunks
parents_by_id: Dict[str, Document] = {}
children_by_id: Dict[str, Document] = {}
for chunk in chunks:
    chunk_id = chunk.metadata.get("id")
    parent_chunk_id = chunk.metadata.get(loader.parent_id_key)
    if not parent_chunk_id:
        # parent chunk
        parents_by_id[chunk_id] = chunk
    else:
        # child chunk
        children_by_id[chunk_id] = chunk

# Explore some of the parent chunk relationships
for id, chunk in list(children_by_id.items())[:5]:
    parent_chunk_id = chunk.metadata.get(loader.parent_id_key)
    if parent_chunk_id:
        # child chunks have the parent chunk id set
        print(f"PARENT CHUNK {parent_chunk_id}: {parents_by_id[parent_chunk_id]}")
        print(f"CHUNK {id}: {chunk}")
```

PARENT CHUNK 7df09fbfc65bb8377054808aac2d16fd: page_content='OFFICE LEASE\n THIS OFFICE LEASE\n <Lease>(the "Lease") is made and entered into as of <LeaseDate>March 29th, 2019</LeaseDate>, by and between Landlord and Tenant. "Date of this Lease" shall mean the date on which the last one of the Landlord and Tenant has signed this Lease. </Lease>\nW I T N E S S E T H\n <TheTerms> Subject to and on the terms and conditions of this Lease, Landlord leases to Tenant and Tenant hires from Landlord the Premises. </TheTerms>\n1. BASIC LEASE INFORMATION AND DEFINED TERMS.\nThe key business terms of this Lease and the defined terms used in this Lease are as follows:' metadata={'xpath': '/docset:OFFICELEASE-section/dg:chunk', 'id': '7df09fbfc65bb8377054808aac2d16fd', 'name': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'structure': 'h1 h1 p h1 p lim h1 p', 'tag': 'chunk Lease chunk TheTerms'} CHUNK 47297e277e556f3ce8b570047304560b: page_content='OFFICE LEASE\n THIS OFFICE LEASE\n <Lease>(the "Lease") is made and entered into as of <LeaseDate>March 29th, 2019</LeaseDate>, by and between Landlord and Tenant. "Date of this Lease" shall mean the date on which the last one of the Landlord and Tenant has signed this Lease.
</Lease>' metadata={'xpath': '/docset:OFFICELEASE-section/dg:chunk', 'id': '47297e277e556f3ce8b570047304560b', 'name': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_NJ.pdf', 'structure': 'h1 h1 p', 'tag': 'chunk Lease', 'doc_id': '7df09fbfc65bb8377054808aac2d16fd'} PARENT CHUNK bb84925da3bed22c30ea1bdc173ff54f: page_content='OFFICE LEASE\n THIS OFFICE LEASE\n <Lease>(the "Lease") is made and entered into as of <LeaseDate>January 8th, 2018</LeaseDate>, by and between Landlord and Tenant. "Date of this Lease" shall mean the date on which the last one of the Landlord and Tenant has signed this Lease. </Lease>\nW I T N E S S E T H\n <TheTerms> Subject to and on the terms and conditions of this Lease, Landlord leases to Tenant and Tenant hires from Landlord the Premises. </TheTerms>\n1. BASIC LEASE INFORMATION AND DEFINED TERMS.\nThe key business terms of this Lease and the defined terms used in this Lease are as follows:\n1.1 Landlord.\n <Landlord>Catalyst Group LLC </Landlord>' metadata={'xpath': '/docset:OFFICELEASE-section/dg:chunk', 'id': 'bb84925da3bed22c30ea1bdc173ff54f', 'name': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'structure': 'h1 h1 p h1 p lim h1 p lim h1 div', 'tag': 'chunk Lease chunk TheTerms chunk Landlord'} CHUNK 2f1746cbd546d1d61a9250c50de7a7fa: page_content='W I T N E S S E T H\n <TheTerms> Subject to and on the terms and conditions of this Lease, Landlord leases to Tenant and Tenant hires from Landlord the Premises. </TheTerms>' metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/dg:chunk', 'id': '2f1746cbd546d1d61a9250c50de7a7fa', 'name': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'structure': 'h1 p', 'tag': 'chunk TheTerms', 'doc_id': 'bb84925da3bed22c30ea1bdc173ff54f'} PARENT CHUNK 0b0d765b6e504a6ba54fa76b203e62ec: page_content='OFFICE LEASE\n THIS OFFICE LEASE\n <Lease>(the "Lease") is made and entered into as of <LeaseDate>January 8th, 2018</LeaseDate>, by and between Landlord and Tenant. "Date of this Lease" shall mean the date on which the last one of the Landlord and Tenant has signed this Lease. </Lease>\nW I T N E S S E T H\n <TheTerms> Subject to and on the terms and conditions of this Lease, Landlord leases to Tenant and Tenant hires from Landlord the Premises. </TheTerms>\n1. BASIC LEASE INFORMATION AND DEFINED TERMS.\nThe key business terms of this Lease and the defined terms used in this Lease are as follows:\n1.1 Landlord.\n <Landlord>Catalyst Group LLC </Landlord>\n1.2 Tenant.\n <Tenant>Shorebucks LLC </Tenant>' metadata={'xpath': '/docset:OFFICELEASE-section/dg:chunk', 'id': '0b0d765b6e504a6ba54fa76b203e62ec', 'name': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'structure': 'h1 h1 p h1 p lim h1 p lim h1 div lim h1 div', 'tag': 'chunk Lease chunk TheTerms chunk Landlord chunk Tenant'} CHUNK b362dfe776ec5a7a66451a8c7c220b59: page_content='1. BASIC LEASE INFORMATION AND DEFINED TERMS.' 
metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/dg:chunk/dg:chunk/docset:BasicLeaseInformation/dg:chunk', 'id': 'b362dfe776ec5a7a66451a8c7c220b59', 'name': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'structure': 'lim h1', 'tag': 'chunk', 'doc_id': '0b0d765b6e504a6ba54fa76b203e62ec'} PARENT CHUNK c942010baaf76aa4d4657769492f6edb: page_content='OFFICE LEASE\n THIS OFFICE LEASE\n <Lease>(the "Lease") is made and entered into as of <LeaseDate>January 8th, 2018</LeaseDate>, by and between Landlord and Tenant. "Date of this Lease" shall mean the date on which the last one of the Landlord and Tenant has signed this Lease. </Lease>\nW I T N E S S E T H\n <TheTerms> Subject to and on the terms and conditions of this Lease, Landlord leases to Tenant and Tenant hires from Landlord the Premises. </TheTerms>\n1. BASIC LEASE INFORMATION AND DEFINED TERMS.\nThe key business terms of this Lease and the defined terms used in this Lease are as follows:\n1.1 Landlord.\n <Landlord>Catalyst Group LLC </Landlord>\n1.2 Tenant.\n <Tenant>Shorebucks LLC </Tenant>\n1.3 Building.\n <Building>The building containing the Premises located at <PremisesAddress><PremisesStreetAddress><MainStreet>600 </MainStreet><StreetName>Main Street</StreetName></PremisesStreetAddress>, <City>Bellevue</City>, <State>WA</State>, <Premises>98004</Premises></PremisesAddress>. The Building is located within the Project. </Building>' metadata={'xpath': '/docset:OFFICELEASE-section/dg:chunk', 'id': 'c942010baaf76aa4d4657769492f6edb', 'name': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'structure': 'h1 h1 p h1 p lim h1 p lim h1 div lim h1 div lim h1 div', 'tag': 'chunk Lease chunk TheTerms chunk Landlord chunk Tenant chunk Building'} CHUNK a95971d693b7aa0f6640df1fbd18c2ba: page_content='The key business terms of this Lease and the defined terms used in this Lease are as follows:' metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/dg:chunk/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/dg:chunk', 'id': 'a95971d693b7aa0f6640df1fbd18c2ba', 'name': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'source': 'Sample Commercial Leases/Shorebucks LLC_WA.pdf', 'structure': 'p', 'tag': 'chunk', 'doc_id': 'c942010baaf76aa4d4657769492f6edb'} PARENT CHUNK f34b649cde7fc4ae156849a56d690495: page_content='W I T N E S S E T H\n <TheTerms> Subject to and on the terms and conditions of this Lease, Landlord leases to Tenant and Tenant hires from Landlord the Premises. </TheTerms>\n1. BASIC LEASE INFORMATION AND DEFINED TERMS.\n<BASICLEASEINFORMATIONANDDEFINEDTERMS>The key business terms of this Lease and the defined terms used in this Lease are as follows: </BASICLEASEINFORMATIONANDDEFINEDTERMS>\n1.1 Landlord.\n <Landlord><Landlord>Menlo Group</Landlord>, a <USState>Delaware </USState>limited liability company authorized to transact business in <USState>Arizona</USState>. 
</Landlord>\n1.2 Tenant.\n <Tenant>Shorebucks LLC </Tenant>\n1.3 Building.\n <Building>The building containing the Premises located at <PremisesAddress><PremisesStreetAddress><Premises>1564 </Premises><Premises>E Broadway Rd</Premises></PremisesStreetAddress>, <City>Tempe</City>, <USState>Arizona </USState><Premises>85282</Premises></PremisesAddress>. The Building is located within the Project. </Building>\n1.4 Project.\n <Project>The parcel of land and the buildings and improvements located on such land known as Shorebucks Office <ShorebucksOfficeAddress><ShorebucksOfficeStreetAddress><ShorebucksOffice>6 </ShorebucksOffice><ShorebucksOffice6>located at <Number>1564 </Number>E Broadway Rd</ShorebucksOffice6></ShorebucksOfficeStreetAddress>, <City>Tempe</City>, <USState>Arizona </USState><Number>85282</Number></ShorebucksOfficeAddress>. The Project is legally described in EXHIBIT "A" to this Lease. </Project>' metadata={'xpath': '/dg:chunk/docset:WITNESSETH-section/dg:chunk', 'id': 'f34b649cde7fc4ae156849a56d690495', 'name': 'Sample Commercial Leases/Shorebucks LLC_AZ.docx', 'source': 'Sample Commercial Leases/Shorebucks LLC_AZ.docx', 'structure': 'h1 p lim h1 div lim h1 div lim h1 div lim h1 div lim h1 div', 'tag': 'chunk TheTerms BASICLEASEINFORMATIONANDDEFINEDTERMS chunk Landlord chunk Tenant chunk Building chunk Project'} CHUNK 21b4d9517f7ccdc0e3a028ce5043a2a0: page_content='1.1 Landlord.\n <Landlord><Landlord>Menlo Group</Landlord>, a <USState>Delaware </USState>limited liability company authorized to transact business in <USState>Arizona</USState>. </Landlord>' metadata={'xpath': '/dg:chunk/docset:WITNESSETH-section/docset:WITNESSETH/dg:chunk[1]/dg:chunk[1]/dg:chunk/dg:chunk[2]/dg:chunk', 'id': '21b4d9517f7ccdc0e3a028ce5043a2a0', 'name': 'Sample Commercial Leases/Shorebucks LLC_AZ.docx', 'source': 'Sample Commercial Leases/Shorebucks LLC_AZ.docx', 'structure': 'lim h1 div', 'tag': 'chunk Landlord', 'doc_id': 'f34b649cde7fc4ae156849a56d690495'}

```
from langchain.retrievers.multi_vector import MultiVectorRetriever, SearchType
from langchain.storage import InMemoryStore
from langchain_community.vectorstores.chroma import Chroma
from langchain_openai import OpenAIEmbeddings

# The vectorstore to use to index the child chunks
vectorstore = Chroma(collection_name="big2small", embedding_function=OpenAIEmbeddings())

# The storage layer for the parent documents
store = InMemoryStore()

# The retriever (empty to start)
retriever = MultiVectorRetriever(
    vectorstore=vectorstore,
    docstore=store,
    search_type=SearchType.mmr,  # use max marginal relevance search
    search_kwargs={"k": 2},
)

# Add child chunks to vector store
retriever.vectorstore.add_documents(list(children_by_id.values()))

# Add parent chunks to docstore
retriever.docstore.mset(parents_by_id.items())
```

```
# Query vector store directly, should return chunks
found_chunks = vectorstore.similarity_search(
    "what signs does Birch Street allow on their property?", k=2
)

for chunk in found_chunks:
    print(chunk.page_content)
    print(chunk.metadata[loader.parent_id_key])
```

24. SIGNS. <SIGNS>No signage shall be placed by Tenant on any portion of the Project. However, Tenant shall be permitted to place a sign bearing its name in a location approved by Landlord near the entrance to the Premises (at Tenant's cost) and will be furnished a single listing of its name in the Building's directory (at Landlord's cost), all in accordance with the criteria adopted <Frequency>from time to time </Frequency>by Landlord for the Project.
Any changes or additional listings in the directory shall be furnished (subject to availability of space) for the then Building Standard charge. </SIGNS> 43090337ed2409e0da24ee07e2adbe94 <TheExterior> Tenant agrees that all signs, awnings, protective gates, security devices and other installations visible from the exterior of the Premises shall be subject to Landlord's prior written approval, shall be subject to the prior approval of the <Org>Landmarks </Org><Landmarks>Preservation Commission </Landmarks>of the City of <USState>New <Org>York</Org></USState>, if required, and shall not interfere with or block either of the adjacent stores, provided, however, that Landlord shall not unreasonably withhold consent for signs that Tenant desires to install. Tenant agrees that any permitted signs, awnings, protective gates, security devices, and other installations shall be installed at Tenant’s sole cost and expense professionally prepared and dignified and subject to Landlord's prior written approval, which shall not be unreasonably withheld, delayed or conditioned, and subject to such reasonable rules and restrictions as Landlord <Frequency>from time to time </Frequency>may impose. Tenant shall submit to Landlord drawings of the proposed signs and other installations, showing the size, color, illumination and general appearance thereof, together with a statement of the manner in which the same are to be affixed to the Premises. Tenant shall not commence the installation of the proposed signs and other installations unless and until Landlord shall have approved the same in writing. . Tenant shall not install any neon sign. The aforesaid signs shall be used solely for the purpose of identifying Tenant's business. No changes shall be made in the signs and other installations without first obtaining Landlord's prior written consent thereto, which consent shall not be unreasonably withheld, delayed or conditioned. Tenant shall, at its own cost and expense, obtain and exhibit to Landlord such permits or certificates of approval as Tenant may be required to obtain from any and all City, State and other authorities having jurisdiction covering the erection, installation, maintenance or use of said signs or other installations, and Tenant shall maintain the said signs and other installations together with any appurtenances thereto in good order and condition and to the satisfaction of the Landlord and in accordance with any and all orders, regulations, requirements and rules of any public authorities having jurisdiction thereover. Landlord consents to Tenant’s Initial Signage described in annexed Exhibit D. </TheExterior> 54ddfc3e47f41af7e747b2bc439ea96b # Query retriever, should return parents (using MMR since that was set as search_type above) retrieved_parent_docs = retriever.get_relevant_documents( "what signs does Birch Street allow on their property?" ) for chunk in retrieved_parent_docs: print(chunk.page_content) print(chunk.metadata["id"]) 21. SERVICES AND UTILITIES. <SERVICESANDUTILITIES>Landlord shall have no obligation to provide any utilities or services to the Premises other than passenger elevator service to the Premises. Tenant shall be solely responsible for and shall promptly pay all charges for water, electricity, or any other utility used or consumed in the Premises, including all costs associated with separately metering for the Premises. Tenant shall be responsible for repairs and maintenance to exit lighting, emergency lighting, and fire extinguishers for the Premises. 
Tenant is responsible for interior janitorial, pest control, and waste removal services. Landlord may at any time change the electrical utility provider for the Building. Tenant’s use of electrical, HVAC, or other services furnished by Landlord shall not exceed, either in voltage, rated capacity, use, or overall load, that which Landlord deems to be standard for the Building. In no event shall Landlord be liable for damages resulting from the failure to furnish any service, and any interruption or failure shall in no manner entitle Tenant to any remedies including abatement of Rent. If at any time during the Lease Term the Project has any type of card access system for the Parking Areas or the Building, Tenant shall purchase access cards for all occupants of the Premises from Landlord at a Building Standard charge and shall comply with Building Standard terms relating to access to the Parking Areas and the Building. </SERVICESANDUTILITIES> 22. SECURITY DEPOSIT. <SECURITYDEPOSIT>The Security Deposit shall be held by Landlord as security for Tenant's full and faithful performance of this Lease including the payment of Rent. Tenant grants Landlord a security interest in the Security Deposit. The Security Deposit may be commingled with other funds of Landlord and Landlord shall have no liability for payment of any interest on the Security Deposit. Landlord may apply the Security Deposit to the extent required to cure any default by Tenant. If Landlord so applies the Security Deposit, Tenant shall deliver to Landlord the amount necessary to replenish the Security Deposit to its original sum within <Deliver>five days </Deliver>after notice from Landlord. The Security Deposit shall not be deemed an advance payment of Rent or a measure of damages for any default by Tenant, nor shall it be a defense to any action that Landlord may bring against Tenant. </SECURITYDEPOSIT> 23. GOVERNMENTAL REGULATIONS. <GOVERNMENTALREGULATIONS>Tenant, at Tenant's sole cost and expense, shall promptly comply (and shall cause all subtenants and licensees to comply) with all laws, codes, and ordinances of governmental authorities, including the Americans with Disabilities Act of <AmericanswithDisabilitiesActDate>1990 </AmericanswithDisabilitiesActDate>as amended (the "ADA"), and all recorded covenants and restrictions affecting the Project, pertaining to Tenant, its conduct of business, and its use and occupancy of the Premises, including the performance of any work to the Common Areas required because of Tenant's specific use (as opposed to general office use) of the Premises or Alterations to the Premises made by Tenant. </GOVERNMENTALREGULATIONS> 24. SIGNS. <SIGNS>No signage shall be placed by Tenant on any portion of the Project. However, Tenant shall be permitted to place a sign bearing its name in a location approved by Landlord near the entrance to the Premises (at Tenant's cost) and will be furnished a single listing of its name in the Building's directory (at Landlord's cost), all in accordance with the criteria adopted <Frequency>from time to time </Frequency>by Landlord for the Project. Any changes or additional listings in the directory shall be furnished (subject to availability of space) for the then Building Standard charge. </SIGNS> 25. BROKER. <BROKER>Landlord and Tenant each represent and warrant that they have neither consulted nor negotiated with any broker or finder regarding the Premises, except the Landlord's Broker and Tenant's Broker. 
Tenant shall indemnify, defend, and hold Landlord harmless from and against any claims for commissions from any real estate broker other than Landlord's Broker and Tenant's Broker with whom Tenant has dealt in connection with this Lease. Landlord shall indemnify, defend, and hold Tenant harmless from and against payment of any leasing commission due Landlord's Broker and Tenant's Broker in connection with this Lease and any claims for commissions from any real estate broker other than Landlord's Broker and Tenant's Broker with whom Landlord has dealt in connection with this Lease. The terms of this article shall survive the expiration or earlier termination of this Lease. </BROKER> 26. END OF TERM. <ENDOFTERM>Tenant shall surrender the Premises to Landlord at the expiration or sooner termination of this Lease or Tenant's right of possession in good order and condition, broom-clean, except for reasonable wear and tear. All Alterations made by Landlord or Tenant to the Premises shall become Landlord's property on the expiration or sooner termination of the Lease Term. On the expiration or sooner termination of the Lease Term, Tenant, at its expense, shall remove from the Premises all of Tenant's personal property, all computer and telecommunications wiring, and all Alterations that Landlord designates by notice to Tenant. Tenant shall also repair any damage to the Premises caused by the removal. Any items of Tenant's property that shall remain in the Premises after the expiration or sooner termination of the Lease Term, may, at the option of Landlord and without notice, be deemed to have been abandoned, and in that case, those items may be retained by Landlord as its property to be disposed of by Landlord, without accountability or notice to Tenant or any other party, in the manner Landlord shall determine, at Tenant's expense. </ENDOFTERM> 27. ATTORNEYS' FEES. <ATTORNEYSFEES>Except as otherwise provided in this Lease, the prevailing party in any litigation or other dispute resolution proceeding, including arbitration, arising out of or in any manner based on or relating to this Lease, including tort actions and actions for injunctive, declaratory, and provisional relief, shall be entitled to recover from the losing party actual attorneys' fees and costs, including fees for litigating the entitlement to or amount of fees or costs owed under this provision, and fees in connection with bankruptcy, appellate, or collection proceedings. No person or entity other than Landlord or Tenant has any right to recover fees under this paragraph. In addition, if Landlord becomes a party to any suit or proceeding affecting the Premises or involving this Lease or Tenant's interest under this Lease, other than a suit between Landlord and Tenant, or if Landlord engages counsel to collect any of the amounts owed under this Lease, or to enforce performance of any of the agreements, conditions, covenants, provisions, or stipulations of this Lease, without commencing litigation, then the costs, expenses, and reasonable attorneys' fees and disbursements incurred by Landlord shall be paid to Landlord by Tenant. </ATTORNEYSFEES> 43090337ed2409e0da24ee07e2adbe94 <TenantsSoleCost> Tenant, at Tenant's sole cost and expense, shall be responsible for the removal and disposal of all of garbage, waste, and refuse from the Premises on a <Frequency>daily </Frequency>basis. 
Tenant shall cause all garbage, waste and refuse to be stored within the Premises until <Stored>thirty (30) minutes </Stored>before closing, except that Tenant shall be permitted, to the extent permitted by law, to place garbage outside the Premises after the time specified in the immediately preceding sentence for pick up prior to <PickUp>6:00 A.M. </PickUp>next following. Garbage shall be placed at the edge of the sidewalk in front of the Premises at the location furthest from he main entrance to the Building or such other location in front of the Building as may be specified by Landlord. </TenantsSoleCost> <ItsSoleCost> Tenant, at its sole cost and expense, agrees to use all reasonable diligence in accordance with the best prevailing methods for the prevention and extermination of vermin, rats, and mice, mold, fungus, allergens, <Bacterium>bacteria </Bacterium>and all other similar conditions in the Premises. Tenant, at Tenant's expense, shall cause the Premises to be exterminated <Exterminated>from time to time </Exterminated>to the reasonable satisfaction of Landlord and shall employ licensed exterminating companies. Landlord shall not be responsible for any cleaning, waste removal, janitorial, or similar services for the Premises, and Tenant sha ll not be entitled to seek any abatement, setoff or credit from the Landlord in the event any conditions described in this Article are found to exist in the Premises. </ItsSoleCost> 42B. Sidewalk Use and Maintenance <TheSidewalk> Tenant shall, at its sole cost and expense, keep the sidewalk in front of the Premises 18 inches into the street from the curb clean free of garbage, waste, refuse, excess water, snow, and ice and Tenant shall pay, as additional rent, any fine, cost, or expense caused by Tenant's failure to do so. In the event Tenant operates a sidewalk café, Tenant shall, at its sole cost and expense, maintain, repair, and replace as necessary, the sidewalk in front of the Premises and the metal trapdoor leading to the basement of the Premises, if any. Tenant shall post warning signs and cones on all sides of any side door when in use and attach a safety bar across any such door at all times when open. </TheSidewalk> <Display> In no event shall Tenant use, or permit to be used, the space adjacent to or any other space outside of the Premises, for display, sale or any other similar undertaking; except [1] in the event of a legal and licensed “street fair” type program or [<Number>2</Number>] if the local zoning, Community Board [if applicable] and other municipal laws, rules and regulations, allow for sidewalk café use and, if such I s the case, said operation shall be in strict accordance with all of the aforesaid requirements and conditions. . In no event shall Tenant use, or permit to be used, any advertising medium and/or loud speaker and/or sound amplifier and/or radio or television broadcast which may be heard outside of the Premises or which does not comply with the reasonable rules and regulations of Landlord which then will be in effect. </Display> 42C. Store Front Maintenance <TheBulkheadAndSecurityGate> Tenant agrees to wash the storefront, including the bulkhead and security gate, from the top to the ground, monthly or more often as Landlord reasonably requests and make all repairs and replacements as and when deemed necessary by Landlord, to all windows and plate and ot her glass in or about the Premises and the security gate, if any. 
In case of any default by Tenant in maintaining the storefront as herein provided, Landlord may do so at its own expense and bill the cost thereof to Tenant as additional rent. </TheBulkheadAndSecurityGate> 42D. Music, Noise, and Vibration 4474c92ae7ccec9184ed2fef9f072734
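To close the loop on small-to-big retrieval, the parent-returning retriever can be plugged into a QA chain, so answers are generated over the expanded parent context rather than the small child chunks. Below is a minimal sketch, assuming the `retriever` built above and the same `RetrievalQA`/`OpenAI` components used earlier in this walkthrough:

```
from langchain.chains import RetrievalQA
from langchain_openai import OpenAI

# The MultiVectorRetriever searches the small child chunks but returns their
# larger parent chunks, so the LLM answers over expanded context.
qa_chain = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,
)

result = qa_chain.invoke(
    {"query": "what signs does Birch Street allow on their property?"}
)
print(result["result"])
```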
## Docusaurus

https://python.langchain.com/docs/integrations/document_loaders/docusaurus/
By utilizing the existing `SitemapLoader`, this loader scans and loads all pages from a given Docusaurus application and returns the main documentation content of each page as a Document. ``` Fetching pages: 100%|##########| 939/939 [01:19<00:00, 11.85it/s] ``` ``` Document(page_content="\n\n\n\n\nCookbook | 🦜️🔗 Langchain\n\n\n\n\n\n\nSkip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKCookbookThe page you're looking for has been moved to the cookbook section of the repo as a notebook.CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.\n\n\n\n", metadata={'source': 'https://python.langchain.com/cookbook', 'loc': 'https://python.langchain.com/cookbook', 'changefreq': 'weekly', 'priority': '0.5'}) ``` Sitemaps can contain thousands of URLs and ften you don’t need every single one of them. You can filter the URLs by passing a list of strings or regex patterns to the `url_filter` parameter. Only URLs that match one of the patterns will be loaded. ``` Fetching pages: 100%|##########| 1/1 [00:00<00:00, 5.21it/s] ``` ``` Document(page_content='\n\n\n\n\nSitemap | 🦜️🔗 Langchain\n\n\n\n\n\n\nSkip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte TypeformAirbyte Zendesk SupportAirtableAlibaba Cloud MaxComputeApify DatasetArcGISArxivAssemblyAI Audio TranscriptsAsync ChromiumAsyncHtmlAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileAzure Document IntelligenceBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlessChatGPT DataCollege ConfidentialConcurrent LoaderConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDropboxDuckDBEmailEmbaasEPubEtherscanEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuawei OBS DirectoryHuawei OBS FileHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWiki DumpMerge Documents LoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft SharePointMicrosoft WordModern TreasuryMongoDBNews URLNotion DB 1/2Notion DB 2/2NucliaObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFrameAmazon TextractPolars DataFramePsychicPubMedPySparkReadTheDocs DocumentationRecursive URLRedditRoamRocksetrspaceRSS FeedsRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS FileTensorFlow Datasets2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameYouTube audioYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument loadersSitemapOn this pageSitemapExtends from the WebBaseLoader, SitemapLoader loads a sitemap from a given URL, and then scrape and load all pages in the sitemap, returning each page as a Document.The scraping is done concurrently. There are reasonable limits to concurrent requests, defaulting to 2 per second. If you aren\'t concerned about being a good citizen, or you control the scrapped server, or don\'t care about load. 
Note, while this will speed up the scraping process, but it may cause the server to block you. Be careful!pip install nest_asyncio Requirement already satisfied: nest_asyncio in /Users/tasp/Code/projects/langchain/.venv/lib/python3.10/site-packages (1.5.6) [notice] A new release of pip available: 22.3.1 -> 23.0.1 [notice] To update, run: pip install --upgrade pip# fixes a bug with asyncio and jupyterimport nest_asyncionest_asyncio.apply()from langchain_community.document_loaders.sitemap import SitemapLoadersitemap_loader = SitemapLoader(web_path="https://langchain.readthedocs.io/sitemap.xml")docs = sitemap_loader.load()You can change the requests_per_second parameter to increase the max concurrent requests. and use requests_kwargs to pass kwargs when send requests.sitemap_loader.requests_per_second = 2# Optional: avoid `[SSL: CERTIFICATE_VERIFY_FAILED]` issuesitemap_loader.requests_kwargs = {"verify": False}docs[0] Document(page_content=\'\\n\\n\\n\\n\\n\\nWelcome to LangChain — 🦜🔗 LangChain 0.0.123\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nSkip to main content\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nCtrl+K\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n🦜🔗 LangChain 0.0.123\\n\\n\\n\\nGetting Started\\n\\nQuickstart Guide\\n\\nModules\\n\\nPrompt Templates\\nGetting Started\\nKey Concepts\\nHow-To Guides\\nCreate a custom prompt template\\nCreate a custom example selector\\nProvide few shot examples to a prompt\\nPrompt Serialization\\nExample Selectors\\nOutput Parsers\\n\\n\\nReference\\nPromptTemplates\\nExample Selector\\n\\n\\n\\n\\nLLMs\\nGetting Started\\nKey Concepts\\nHow-To Guides\\nGeneric Functionality\\nCustom LLM\\nFake LLM\\nLLM Caching\\nLLM Serialization\\nToken Usage Tracking\\n\\n\\nIntegrations\\nAI21\\nAleph Alpha\\nAnthropic\\nAzure OpenAI LLM Example\\nBanana\\nCerebriumAI LLM Example\\nCohere\\nDeepInfra LLM Example\\nForefrontAI LLM Example\\nGooseAI LLM Example\\nHugging Face Hub\\nManifest\\nModal\\nOpenAI\\nPetals LLM Example\\nPromptLayer OpenAI\\nSageMakerEndpoint\\nSelf-Hosted Models via Runhouse\\nStochasticAI\\nWriter\\n\\n\\nAsync API for LLM\\nStreaming with LLMs\\n\\n\\nReference\\n\\n\\nDocument Loaders\\nKey Concepts\\nHow To Guides\\nCoNLL-U\\nAirbyte JSON\\nAZLyrics\\nBlackboard\\nCollege Confidential\\nCopy Paste\\nCSV Loader\\nDirectory Loader\\nEmail\\nEverNote\\nFacebook Chat\\nFigma\\nGCS Directory\\nGCS File Storage\\nGitBook\\nGoogle Drive\\nGutenberg\\nHacker News\\nHTML\\niFixit\\nImages\\nIMSDb\\nMarkdown\\nNotebook\\nNotion\\nObsidian\\nPDF\\nPowerPoint\\nReadTheDocs Documentation\\nRoam\\ns3 Directory\\ns3 File\\nSubtitle Files\\nTelegram\\nUnstructured File Loader\\nURL\\nWeb Base\\nWord Documents\\nYouTube\\n\\n\\n\\n\\nUtils\\nKey Concepts\\nGeneric Utilities\\nBash\\nBing Search\\nGoogle Search\\nGoogle Serper API\\nIFTTT WebHooks\\nPython REPL\\nRequests\\nSearxNG Search API\\nSerpAPI\\nWolfram Alpha\\nZapier Natural Language Actions API\\n\\n\\nReference\\nPython REPL\\nSerpAPI\\nSearxNG Search\\nDocstore\\nText Splitter\\nEmbeddings\\nVectorStores\\n\\n\\n\\n\\nIndexes\\nGetting Started\\nKey Concepts\\nHow To Guides\\nEmbeddings\\nHypothetical Document Embeddings\\nText Splitter\\nVectorStores\\nAtlasDB\\nChroma\\nDeep Lake\\nElasticSearch\\nFAISS\\nMilvus\\nOpenSearch\\nPGVector\\nPinecone\\nQdrant\\nRedis\\nWeaviate\\nChatGPT Plugin Retriever\\nVectorStore Retriever\\nAnalyze Document\\nChat Index\\nGraph QA\\nQuestion Answering with 
Sources\\nQuestion Answering\\nSummarization\\nRetrieval Question/Answering\\nRetrieval Question Answering with Sources\\nVector DB Text Generation\\n\\n\\n\\n\\nChains\\nGetting Started\\nHow-To Guides\\nGeneric Chains\\nLoading from LangChainHub\\nLLM Chain\\nSequential Chains\\nSerialization\\nTransformation Chain\\n\\n\\nUtility Chains\\nAPI Chains\\nSelf-Critique Chain with Constitutional AI\\nBashChain\\nLLMCheckerChain\\nLLM Math\\nLLMRequestsChain\\nLLMSummarizationCheckerChain\\nModeration\\nPAL\\nSQLite example\\n\\n\\nAsync API for Chain\\n\\n\\nKey Concepts\\nReference\\n\\n\\nAgents\\nGetting Started\\nKey Concepts\\nHow-To Guides\\nAgents and Vectorstores\\nAsync API for Agent\\nConversation Agent (for Chat Models)\\nChatGPT Plugins\\nCustom Agent\\nDefining Custom Tools\\nHuman as a tool\\nIntermediate Steps\\nLoading from LangChainHub\\nMax Iterations\\nMulti Input Tools\\nSearch Tools\\nSerialization\\nAdding SharedMemory to an Agent and its Tools\\nCSV Agent\\nJSON Agent\\nOpenAPI Agent\\nPandas Dataframe Agent\\nPython Agent\\nSQL Database Agent\\nVectorstore Agent\\nMRKL\\nMRKL Chat\\nReAct\\nSelf Ask With Search\\n\\n\\nReference\\n\\n\\nMemory\\nGetting Started\\nKey Concepts\\nHow-To Guides\\nConversationBufferMemory\\nConversationBufferWindowMemory\\nEntity Memory\\nConversation Knowledge Graph Memory\\nConversationSummaryMemory\\nConversationSummaryBufferMemory\\nConversationTokenBufferMemory\\nAdding Memory To an LLMChain\\nAdding Memory to a Multi-Input Chain\\nAdding Memory to an Agent\\nChatGPT Clone\\nConversation Agent\\nConversational Memory Customization\\nCustom Memory\\nMultiple Memory\\n\\n\\n\\n\\nChat\\nGetting Started\\nKey Concepts\\nHow-To Guides\\nAgent\\nChat Vector DB\\nFew Shot Examples\\nMemory\\nPromptLayer ChatOpenAI\\nStreaming\\nRetrieval Question/Answering\\nRetrieval Question Answering with Sources\\n\\n\\n\\n\\n\\nUse Cases\\n\\nAgents\\nChatbots\\nGenerate Examples\\nData Augmented Generation\\nQuestion Answering\\nSummarization\\nQuerying Tabular Data\\nExtraction\\nEvaluation\\nAgent Benchmarking: Search + Calculator\\nAgent VectorDB Question Answering Benchmarking\\nBenchmarking Template\\nData Augmented Question Answering\\nUsing Hugging Face Datasets\\nLLM Math\\nQuestion Answering Benchmarking: Paul Graham Essay\\nQuestion Answering Benchmarking: State of the Union Address\\nQA Generation\\nQuestion Answering\\nSQL Question Answering Benchmarking: Chinook\\n\\n\\nModel Comparison\\n\\nReference\\n\\nInstallation\\nIntegrations\\nAPI References\\nPrompts\\nPromptTemplates\\nExample Selector\\n\\n\\nUtilities\\nPython REPL\\nSerpAPI\\nSearxNG Search\\nDocstore\\nText Splitter\\nEmbeddings\\nVectorStores\\n\\n\\nChains\\nAgents\\n\\n\\n\\nEcosystem\\n\\nLangChain Ecosystem\\nAI21 Labs\\nAtlasDB\\nBanana\\nCerebriumAI\\nChroma\\nCohere\\nDeepInfra\\nDeep Lake\\nForefrontAI\\nGoogle Search Wrapper\\nGoogle Serper Wrapper\\nGooseAI\\nGraphsignal\\nHazy Research\\nHelicone\\nHugging Face\\nMilvus\\nModal\\nNLPCloud\\nOpenAI\\nOpenSearch\\nPetals\\nPGVector\\nPinecone\\nPromptLayer\\nQdrant\\nRunhouse\\nSearxNG Search API\\nSerpAPI\\nStochasticAI\\nUnstructured\\nWeights & Biases\\nWeaviate\\nWolfram Alpha Wrapper\\nWriter\\n\\n\\n\\nAdditional Resources\\n\\nLangChainHub\\nGlossary\\nLangChain Gallery\\nDeployments\\nTracing\\nDiscord\\nProduction 
Support\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n.rst\\n\\n\\n\\n\\n\\n\\n\\n.pdf\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nWelcome to LangChain\\n\\n\\n\\n\\n Contents \\n\\n\\n\\nGetting Started\\nModules\\nUse Cases\\nReference Docs\\nLangChain Ecosystem\\nAdditional Resources\\n\\n\\n\\n\\n\\n\\n\\n\\nWelcome to LangChain#\\nLarge language models (LLMs) are emerging as a transformative technology, enabling\\ndevelopers to build applications that they previously could not.\\nBut using these LLMs in isolation is often not enough to\\ncreate a truly powerful app - the real power comes when you are able to\\ncombine them with other sources of computation or knowledge.\\nThis library is aimed at assisting in the development of those types of applications. Common examples of these types of applications include:\\n❓ Question Answering over specific documents\\n\\nDocumentation\\nEnd-to-end Example: Question Answering over Notion Database\\n\\n💬 Chatbots\\n\\nDocumentation\\nEnd-to-end Example: Chat-LangChain\\n\\n🤖 Agents\\n\\nDocumentation\\nEnd-to-end Example: GPT+WolframAlpha\\n\\n\\nGetting Started#\\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\\n\\nGetting Started Documentation\\n\\n\\n\\n\\n\\nModules#\\nThere are several main modules that LangChain provides support for.\\nFor each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.\\nThese modules are, in increasing order of complexity:\\n\\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\\nLLMs: This includes a generic interface for all LLMs, and common utilities for working with LLMs.\\nDocument Loaders: This includes a standard interface for loading documents, as well as specific integrations to all types of text data sources.\\nUtils: Language models are often more powerful when interacting with other sources of knowledge or computation. This can include Python REPLs, embeddings, search engines, and more. LangChain provides a large collection of common utils to use in your application.\\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\\nChat: Chat models are a variation on Language Models that expose a different API - rather than working with raw text, they work with messages. LangChain provides a standard interface for working with them and doing all the same things as above.\\n\\n\\n\\n\\n\\nUse Cases#\\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. 
Below are some of the common use cases LangChain supports.\\n\\nAgents: Agents are systems that use a language model to interact with other tools. These can be used to do more grounded question/answering, interact with APIs, or even take actions.\\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\\nData Augmented Generation: Data Augmented Generation involves specific types of chains that first interact with an external datasource to fetch data to use in the generation step. Examples of this include summarization of long pieces of text and question/answering over specific data sources.\\nQuestion Answering: Answering questions over specific documents, only utilizing the information in those documents to construct an answer. A type of Data Augmented Generation.\\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\\nGenerate similar examples: Generating similar examples to a given input. This is a common use case for many applications, and LangChain provides some prompts/chains for assisting in this.\\nCompare models: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\\n\\n\\n\\n\\n\\nReference Docs#\\nAll of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\\n\\nReference Documentation\\n\\n\\n\\n\\n\\nLangChain Ecosystem#\\nGuides for how other companies/products can be used with LangChain\\n\\nLangChain Ecosystem\\n\\n\\n\\n\\n\\nAdditional Resources#\\nAdditional collection of resources we think may be useful as you develop your application!\\n\\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\\nDiscord: Join us on our Discord to discuss all things LangChain!\\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. 
Please fill out this form and we’ll set up a dedicated support Slack channel.\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nnext\\nQuickstart Guide\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n Contents\\n \\n\\n\\nGetting Started\\nModules\\nUse Cases\\nReference Docs\\nLangChain Ecosystem\\nAdditional Resources\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nBy Harrison Chase\\n\\n\\n\\n\\n \\n © Copyright 2023, Harrison Chase.\\n \\n\\n\\n\\n\\n Last updated on Mar 24, 2023.\\n \\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\', lookup_str=\'\', metadata={\'source\': \'https://python.langchain.com/en/stable/\', \'loc\': \'https://python.langchain.com/en/stable/\', \'lastmod\': \'2023-03-24T19:30:54.647430+00:00\', \'changefreq\': \'weekly\', \'priority\': \'1\'}, lookup_index=0)Filtering sitemap URLs\u200bSitemaps can be massive files, with thousands of URLs. Often you don\'t need every single one of them. You can filter the URLs by passing a list of strings or regex patterns to the url_filter parameter. Only URLs that match one of the patterns will be loaded.loader = SitemapLoader( "https://langchain.readthedocs.io/sitemap.xml", filter_urls=["https://python.langchain.com/en/latest/"],)documents = loader.load()documents[0] Document(page_content=\'\\n\\n\\n\\n\\n\\nWelcome to LangChain — 🦜🔗 LangChain 0.0.123\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nSkip to main content\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nCtrl+K\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n🦜🔗 LangChain 0.0.123\\n\\n\\n\\nGetting Started\\n\\nQuickstart Guide\\n\\nModules\\n\\nModels\\nLLMs\\nGetting Started\\nGeneric Functionality\\nHow to use the async API for LLMs\\nHow to write a custom LLM wrapper\\nHow (and why) to use the fake LLM\\nHow to cache LLM calls\\nHow to serialize LLM classes\\nHow to stream LLM responses\\nHow to track token usage\\n\\n\\nIntegrations\\nAI21\\nAleph Alpha\\nAnthropic\\nAzure OpenAI LLM Example\\nBanana\\nCerebriumAI LLM Example\\nCohere\\nDeepInfra LLM Example\\nForefrontAI LLM Example\\nGooseAI LLM Example\\nHugging Face Hub\\nManifest\\nModal\\nOpenAI\\nPetals LLM Example\\nPromptLayer OpenAI\\nSageMakerEndpoint\\nSelf-Hosted Models via Runhouse\\nStochasticAI\\nWriter\\n\\n\\nReference\\n\\n\\nChat Models\\nGetting Started\\nHow-To Guides\\nHow to use few shot examples\\nHow to stream responses\\n\\n\\nIntegrations\\nAzure\\nOpenAI\\nPromptLayer ChatOpenAI\\n\\n\\n\\n\\nText Embedding Models\\nAzureOpenAI\\nCohere\\nFake Embeddings\\nHugging Face Hub\\nInstructEmbeddings\\nOpenAI\\nSageMaker Endpoint Embeddings\\nSelf Hosted Embeddings\\nTensorflowHub\\n\\n\\n\\n\\nPrompts\\nPrompt Templates\\nGetting Started\\nHow-To Guides\\nHow to create a custom prompt template\\nHow to create a prompt template that uses few shot examples\\nHow to work with partial Prompt Templates\\nHow to serialize prompts\\n\\n\\nReference\\nPromptTemplates\\nExample Selector\\n\\n\\n\\n\\nChat Prompt Template\\nExample Selectors\\nHow to create a custom example selector\\nLengthBased ExampleSelector\\nMaximal Marginal Relevance ExampleSelector\\nNGram Overlap ExampleSelector\\nSimilarity ExampleSelector\\n\\n\\nOutput Parsers\\nOutput Parsers\\nCommaSeparatedListOutputParser\\nOutputFixingParser\\nPydanticOutputParser\\nRetryOutputParser\\nStructured Output Parser\\n\\n\\n\\n\\nIndexes\\nGetting Started\\nDocument Loaders\\nCoNLL-U\\nAirbyte JSON\\nAZLyrics\\nBlackboard\\nCollege Confidential\\nCopy Paste\\nCSV Loader\\nDirectory Loader\\nEmail\\nEverNote\\nFacebook 
Chat\\nFigma\\nGCS Directory\\nGCS File Storage\\nGitBook\\nGoogle Drive\\nGutenberg\\nHacker News\\nHTML\\niFixit\\nImages\\nIMSDb\\nMarkdown\\nNotebook\\nNotion\\nObsidian\\nPDF\\nPowerPoint\\nReadTheDocs Documentation\\nRoam\\ns3 Directory\\ns3 File\\nSubtitle Files\\nTelegram\\nUnstructured File Loader\\nURL\\nWeb Base\\nWord Documents\\nYouTube\\n\\n\\nText Splitters\\nGetting Started\\nCharacter Text Splitter\\nHuggingFace Length Function\\nLatex Text Splitter\\nMarkdown Text Splitter\\nNLTK Text Splitter\\nPython Code Text Splitter\\nRecursiveCharacterTextSplitter\\nSpacy Text Splitter\\ntiktoken (OpenAI) Length Function\\nTiktokenText Splitter\\n\\n\\nVectorstores\\nGetting Started\\nAtlasDB\\nChroma\\nDeep Lake\\nElasticSearch\\nFAISS\\nMilvus\\nOpenSearch\\nPGVector\\nPinecone\\nQdrant\\nRedis\\nWeaviate\\n\\n\\nRetrievers\\nChatGPT Plugin Retriever\\nVectorStore Retriever\\n\\n\\n\\n\\nMemory\\nGetting Started\\nHow-To Guides\\nConversationBufferMemory\\nConversationBufferWindowMemory\\nEntity Memory\\nConversation Knowledge Graph Memory\\nConversationSummaryMemory\\nConversationSummaryBufferMemory\\nConversationTokenBufferMemory\\nHow to add Memory to an LLMChain\\nHow to add memory to a Multi-Input Chain\\nHow to add Memory to an Agent\\nHow to customize conversational memory\\nHow to create a custom Memory class\\nHow to use multiple memroy classes in the same chain\\n\\n\\n\\n\\nChains\\nGetting Started\\nHow-To Guides\\nAsync API for Chain\\nLoading from LangChainHub\\nLLM Chain\\nSequential Chains\\nSerialization\\nTransformation Chain\\nAnalyze Document\\nChat Index\\nGraph QA\\nHypothetical Document Embeddings\\nQuestion Answering with Sources\\nQuestion Answering\\nSummarization\\nRetrieval Question/Answering\\nRetrieval Question Answering with Sources\\nVector DB Text Generation\\nAPI Chains\\nSelf-Critique Chain with Constitutional AI\\nBashChain\\nLLMCheckerChain\\nLLM Math\\nLLMRequestsChain\\nLLMSummarizationCheckerChain\\nModeration\\nPAL\\nSQLite example\\n\\n\\nReference\\n\\n\\nAgents\\nGetting Started\\nTools\\nGetting Started\\nDefining Custom Tools\\nMulti Input Tools\\nBash\\nBing Search\\nChatGPT Plugins\\nGoogle Search\\nGoogle Serper API\\nHuman as a tool\\nIFTTT WebHooks\\nPython REPL\\nRequests\\nSearch Tools\\nSearxNG Search API\\nSerpAPI\\nWolfram Alpha\\nZapier Natural Language Actions API\\n\\n\\nAgents\\nAgent Types\\nCustom Agent\\nConversation Agent (for Chat Models)\\nConversation Agent\\nMRKL\\nMRKL Chat\\nReAct\\nSelf Ask With Search\\n\\n\\nToolkits\\nCSV Agent\\nJSON Agent\\nOpenAPI Agent\\nPandas Dataframe Agent\\nPython Agent\\nSQL Database Agent\\nVectorstore Agent\\n\\n\\nAgent Executors\\nHow to combine agents and vectorstores\\nHow to use the async API for Agents\\nHow to create ChatGPT Clone\\nHow to access intermediate steps\\nHow to cap the max number of iterations\\nHow to add SharedMemory to an Agent and its Tools\\n\\n\\n\\n\\n\\nUse Cases\\n\\nPersonal Assistants\\nQuestion Answering over Docs\\nChatbots\\nQuerying Tabular Data\\nInteracting with APIs\\nSummarization\\nExtraction\\nEvaluation\\nAgent Benchmarking: Search + Calculator\\nAgent VectorDB Question Answering Benchmarking\\nBenchmarking Template\\nData Augmented Question Answering\\nUsing Hugging Face Datasets\\nLLM Math\\nQuestion Answering Benchmarking: Paul Graham Essay\\nQuestion Answering Benchmarking: State of the Union Address\\nQA Generation\\nQuestion Answering\\nSQL Question Answering Benchmarking: 
Chinook\\n\\n\\n\\nReference\\n\\nInstallation\\nIntegrations\\nAPI References\\nPrompts\\nPromptTemplates\\nExample Selector\\n\\n\\nUtilities\\nPython REPL\\nSerpAPI\\nSearxNG Search\\nDocstore\\nText Splitter\\nEmbeddings\\nVectorStores\\n\\n\\nChains\\nAgents\\n\\n\\n\\nEcosystem\\n\\nLangChain Ecosystem\\nAI21 Labs\\nAtlasDB\\nBanana\\nCerebriumAI\\nChroma\\nCohere\\nDeepInfra\\nDeep Lake\\nForefrontAI\\nGoogle Search Wrapper\\nGoogle Serper Wrapper\\nGooseAI\\nGraphsignal\\nHazy Research\\nHelicone\\nHugging Face\\nMilvus\\nModal\\nNLPCloud\\nOpenAI\\nOpenSearch\\nPetals\\nPGVector\\nPinecone\\nPromptLayer\\nQdrant\\nRunhouse\\nSearxNG Search API\\nSerpAPI\\nStochasticAI\\nUnstructured\\nWeights & Biases\\nWeaviate\\nWolfram Alpha Wrapper\\nWriter\\n\\n\\n\\nAdditional Resources\\n\\nLangChainHub\\nGlossary\\nLangChain Gallery\\nDeployments\\nTracing\\nDiscord\\nProduction Support\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n.rst\\n\\n\\n\\n\\n\\n\\n\\n.pdf\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nWelcome to LangChain\\n\\n\\n\\n\\n Contents \\n\\n\\n\\nGetting Started\\nModules\\nUse Cases\\nReference Docs\\nLangChain Ecosystem\\nAdditional Resources\\n\\n\\n\\n\\n\\n\\n\\n\\nWelcome to LangChain#\\nLangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:\\n\\nBe data-aware: connect a language model to other sources of data\\nBe agentic: allow a language model to interact with its environment\\n\\nThe LangChain framework is designed with the above principles in mind.\\nThis is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.\\n\\nGetting Started#\\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\\n\\nGetting Started Documentation\\n\\n\\n\\n\\n\\nModules#\\nThere are several main modules that LangChain provides support for.\\nFor each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.\\nThese modules are, in increasing order of complexity:\\n\\nModels: The various model types and model integrations LangChain supports.\\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\\n\\n\\n\\n\\n\\nUse Cases#\\nThe above modules can be used in a variety of ways. 
LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\\n\\nPersonal Assistants: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\\nQuestion Answering: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\\nInteracting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.\\nExtraction: Extract structured information from text.\\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\\n\\n\\n\\n\\n\\nReference Docs#\\nAll of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\\n\\nReference Documentation\\n\\n\\n\\n\\n\\nLangChain Ecosystem#\\nGuides for how other companies/products can be used with LangChain\\n\\nLangChain Ecosystem\\n\\n\\n\\n\\n\\nAdditional Resources#\\nAdditional collection of resources we think may be useful as you develop your application!\\n\\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\\nModel Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\\nDiscord: Join us on our Discord to discuss all things LangChain!\\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. 
Please fill out this form and we’ll set up a dedicated support Slack channel.\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nnext\\nQuickstart Guide\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n Contents\\n \\n\\n\\nGetting Started\\nModules\\nUse Cases\\nReference Docs\\nLangChain Ecosystem\\nAdditional Resources\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nBy Harrison Chase\\n\\n\\n\\n\\n \\n © Copyright 2023, Harrison Chase.\\n \\n\\n\\n\\n\\n Last updated on Mar 27, 2023.\\n \\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\', lookup_str=\'\', metadata={\'source\': \'https://python.langchain.com/en/latest/\', \'loc\': \'https://python.langchain.com/en/latest/\', \'lastmod\': \'2023-03-27T22:50:49.790324+00:00\', \'changefreq\': \'daily\', \'priority\': \'0.9\'}, lookup_index=0)Add custom scraping rules\u200bThe SitemapLoader uses beautifulsoup4 for the scraping process, and it scrapes every element on the page by default. The SitemapLoader constructor accepts a custom scraping function. This feature can be helpful to tailor the scraping process to your specific needs; for example, you might want to avoid scraping headers or navigation elements. The following example shows how to develop and use a custom function to avoid navigation and header elements.Import the beautifulsoup4 library and define the custom function.pip install beautifulsoup4from bs4 import BeautifulSoupdef remove_nav_and_header_elements(content: BeautifulSoup) -> str: # Find all \'nav\' and \'header\' elements in the BeautifulSoup object nav_elements = content.find_all("nav") header_elements = content.find_all("header") # Remove each \'nav\' and \'header\' element from the BeautifulSoup object for element in nav_elements + header_elements: element.decompose() return str(content.get_text())Add your custom function to the SitemapLoader object.loader = SitemapLoader( "https://langchain.readthedocs.io/sitemap.xml", filter_urls=["https://python.langchain.com/en/latest/"], parsing_function=remove_nav_and_header_elements,)Local Sitemap\u200bThe sitemap loader can also be used to load local files.sitemap_loader = SitemapLoader(web_path="example_data/sitemap.xml", is_local=True)docs = sitemap_loader.load() Fetching pages: 100%|####################################################################################################################################| 3/3 [00:00<00:00, 3.91it/s]PreviousRSTNextSlackFiltering sitemap URLsAdd custom scraping rulesLocal SitemapCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.\n\n\n\n', metadata={'source': 'https://python.langchain.com/docs/integrations/document_loaders/sitemap', 'loc': 'https://python.langchain.com/docs/integrations/document_loaders/sitemap', 'changefreq': 'weekly', 'priority': '0.5'}) ``` By default, the parser **removes** all but the main content of the docusaurus page, which is normally the `<article>` tag. You also have the option to define an **inclusive** list HTML tags by providing them as a list utilizing the `custom_html_tags` parameter. For example: You can also define an entirely custom parsing function if you need finer-grained control over the returned content for each page. The following example shows how to develop and use a custom function to avoid navigation and header elements. Add your custom function to the `DocusaurusLoader` object.
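The code cells that originally accompanied this paragraph did not survive extraction, so here is a minimal sketch of both options. It assumes `DocusaurusLoader` (from `langchain_community.document_loaders`) accepts a `custom_html_tags` parameter plus the `filter_urls` and `parsing_function` keyword arguments it inherits from `SitemapLoader`; the URL, selectors, and filter values are illustrative only.

```python
from bs4 import BeautifulSoup

from langchain_community.document_loaders import DocusaurusLoader

# Option 1: keep only the HTML elements matching an inclusive list of tags/selectors.
loader = DocusaurusLoader(
    "https://python.langchain.com",
    filter_urls=["https://python.langchain.com/docs/integrations/document_loaders/sitemap"],
    custom_html_tags=["#content", ".main"],  # illustrative selectors
)
docs = loader.load()


# Option 2: take full control with a custom parsing function,
# here dropping navigation and header elements before extracting text.
def remove_nav_and_header_elements(content: BeautifulSoup) -> str:
    # Remove every <nav> and <header> element from the parsed page.
    for element in content.find_all("nav") + content.find_all("header"):
        element.decompose()
    return str(content.get_text())


loader = DocusaurusLoader(
    "https://python.langchain.com",
    filter_urls=["https://python.langchain.com/docs/integrations/document_loaders/sitemap"],
    parsing_function=remove_nav_and_header_elements,
)
docs = loader.load()
```

As in the sitemap example above, `filter_urls` keeps the crawl small while you iterate on the parsing logic.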
## Etherscan
The `Etherscan` loader uses the `Etherscan API` to load transaction histories for a specific account on `Ethereum Mainnet`. You will need an `Etherscan API key` to proceed; the free API key has a quota of 5 calls per second. If the account has no matching transactions, the loader returns a list containing a single document whose page content is an empty string.

You can pass different filters to the loader to access the different functionalities of the Etherscan API. All functions related to transaction histories are restricted to a maximum of 1000 records because of an Etherscan limit. Additional parameters (for example, paging and block-range options) let you narrow down the transaction histories you need; see the usage sketch after the example output below.

``` {'blockNumber': '13242975', 'timeStamp': '1631878751', 'hash': '0x366dda325b1a6570928873665b6b418874a7dedf7fee9426158fa3536b621788', 'nonce': '28', 'blockHash': '0x5469dba1b1e1372962cf2be27ab2640701f88c00640c4d26b8cc2ae9ac256fb6', 'from': '0x2ceee24f8d03fc25648c68c8e6569aa0512f6ac3', 'contractAddress': '0x2ceee24f8d03fc25648c68c8e6569aa0512f6ac3', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '298131000000000', 'tokenName': 'ABCHANGE.io', 'tokenSymbol': 'XCH', 'tokenDecimal': '9', 'transactionIndex': '71', 'gas': '15000000', 'gasPrice': '48614996176', 'gasUsed': '5712724', 'cumulativeGasUsed': '11507920', 'input': 'deprecated', 'confirmations': '4492277'} ``` ``` [Document(page_content="{'blockNumber': '1723771', 'timeStamp': '1466213371', 'hash': '0xe00abf5fa83a4b23ee1cc7f07f9dda04ab5fa5efe358b315df8b76699a83efc4', 'nonce': '3155', 'blockHash': '0xc2c2207bcaf341eed07f984c9a90b3f8e8bdbdbd2ac6562f8c2f5bfa4b51299d', 'transactionIndex': '5', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '13149213761000000000', 'gas': '90000', 'gasPrice': '22655598156', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '126000', 'gasUsed': '21000', 'confirmations': '16011481', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0xe00abf5fa83a4b23ee1cc7f07f9dda04ab5fa5efe358b315df8b76699a83efc4', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1727090', 'timeStamp': '1466262018', 'hash': '0xd5a779346d499aa722f72ffe7cd3c8594a9ddd91eb7e439e8ba92ceb7bc86928', 'nonce': '3267', 'blockHash': '0xc0cff378c3446b9b22d217c2c5f54b1c85b89a632c69c55b76cdffe88d2b9f4d', 'transactionIndex': '20', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11521979886000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '3806725', 'gasUsed': '21000', 'confirmations': '16008162', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0xd5a779346d499aa722f72ffe7cd3c8594a9ddd91eb7e439e8ba92ceb7bc86928', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1730337', 'timeStamp': '1466308222', 'hash': '0xceaffdb3766d2741057d402738eb41e1d1941939d9d438c102fb981fd47a87a4', 'nonce': '3344', 'blockHash': '0x3a52d28b8587d55c621144a161a0ad5c37dd9f7d63b629ab31da04fa410b2cfa', 'transactionIndex': '1', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9783400526000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '',
'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '60788', 'gasUsed': '21000', 'confirmations': '16004915', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0xceaffdb3766d2741057d402738eb41e1d1941939d9d438c102fb981fd47a87a4', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1733479', 'timeStamp': '1466352351', 'hash': '0x720d79bf78775f82b40280aae5abfc347643c5f6708d4bf4ec24d65cd01c7121', 'nonce': '3367', 'blockHash': '0x9928661e7ae125b3ae0bcf5e076555a3ee44c52ae31bd6864c9c93a6ebb3f43e', 'transactionIndex': '0', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '1570706444000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '16001773', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0x720d79bf78775f82b40280aae5abfc347643c5f6708d4bf4ec24d65cd01c7121', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1734172', 'timeStamp': '1466362463', 'hash': '0x7a062d25b83bafc9fe6b22bc6f5718bca333908b148676e1ac66c0adeccef647', 'nonce': '1016', 'blockHash': '0x8a8afe2b446713db88218553cfb5dd202422928e5e0bc00475ed2f37d95649de', 'transactionIndex': '4', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '6322276709000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '105333', 'gasUsed': '21000', 'confirmations': '16001080', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x7a062d25b83bafc9fe6b22bc6f5718bca333908b148676e1ac66c0adeccef647', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1737276', 'timeStamp': '1466406037', 'hash': '0xa4e89bfaf075abbf48f96700979e6c7e11a776b9040113ba64ef9c29ac62b19b', 'nonce': '1024', 'blockHash': '0xe117cad73752bb485c3bef24556e45b7766b283229180fcabc9711f3524b9f79', 'transactionIndex': '35', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9976891868000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '3187163', 'gasUsed': '21000', 'confirmations': '15997976', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xa4e89bfaf075abbf48f96700979e6c7e11a776b9040113ba64ef9c29ac62b19b', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1740314', 'timeStamp': '1466450262', 'hash': '0x6e1a22dcc6e2c77a9451426fb49e765c3c459dae88350e3ca504f4831ec20e8a', 'nonce': '1051', 'blockHash': '0x588d17842819a81afae3ac6644d8005c12ce55ddb66c8d4c202caa91d4e8fdbe', 'transactionIndex': '6', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '8060633765000000000', 'gas': '90000', 'gasPrice': '22926905859', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '153077', 'gasUsed': '21000', 'confirmations': '15994938', 
'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x6e1a22dcc6e2c77a9451426fb49e765c3c459dae88350e3ca504f4831ec20e8a', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1743384', 'timeStamp': '1466494099', 'hash': '0xdbfcc15f02269fc3ae27f69e344a1ac4e08948b12b76ebdd78a64d8cafd511ef', 'nonce': '1068', 'blockHash': '0x997245108c84250057fda27306b53f9438ad40978a95ca51d8fd7477e73fbaa7', 'transactionIndex': '2', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9541921352000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '119650', 'gasUsed': '21000', 'confirmations': '15991868', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xdbfcc15f02269fc3ae27f69e344a1ac4e08948b12b76ebdd78a64d8cafd511ef', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1746405', 'timeStamp': '1466538123', 'hash': '0xbd4f9602f7fff4b8cc2ab6286efdb85f97fa114a43f6df4e6abc88e85b89e97b', 'nonce': '1092', 'blockHash': '0x3af3966cdaf22e8b112792ee2e0edd21ceb5a0e7bf9d8c168a40cf22deb3690c', 'transactionIndex': '0', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '8433783799000000000', 'gas': '90000', 'gasPrice': '25689279306', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '15988847', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xbd4f9602f7fff4b8cc2ab6286efdb85f97fa114a43f6df4e6abc88e85b89e97b', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1749459', 'timeStamp': '1466582044', 'hash': '0x28c327f462cc5013d81c8682c032f014083c6891938a7bdeee85a1c02c3e9ed4', 'nonce': '1096', 'blockHash': '0x5fc5d2a903977b35ce1239975ae23f9157d45d7bd8a8f6205e8ce270000797f9', 'transactionIndex': '1', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '10269065805000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '42000', 'gasUsed': '21000', 'confirmations': '15985793', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x28c327f462cc5013d81c8682c032f014083c6891938a7bdeee85a1c02c3e9ed4', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1752614', 'timeStamp': '1466626168', 'hash': '0xc3849e550ca5276d7b3c51fa95ad3ae62c1c164799d33f4388fe60c4e1d4f7d8', 'nonce': '1118', 'blockHash': '0x88ef054b98e47504332609394e15c0a4467f84042396717af6483f0bcd916127', 'transactionIndex': '11', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11325836780000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '252000', 'gasUsed': '21000', 'confirmations': '15982638', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': 
'0xc3849e550ca5276d7b3c51fa95ad3ae62c1c164799d33f4388fe60c4e1d4f7d8', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1755659', 'timeStamp': '1466669931', 'hash': '0xb9f891b7c3d00fcd64483189890591d2b7b910eda6172e3bf3973c5fd3d5a5ae', 'nonce': '1133', 'blockHash': '0x2983972217a91343860415d1744c2a55246a297c4810908bbd3184785bc9b0c2', 'transactionIndex': '14', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '13226475343000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '2674679', 'gasUsed': '21000', 'confirmations': '15979593', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xb9f891b7c3d00fcd64483189890591d2b7b910eda6172e3bf3973c5fd3d5a5ae', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1758709', 'timeStamp': '1466713652', 'hash': '0xd6cce5b184dc7fce85f305ee832df647a9c4640b68e9b79b6f74dc38336d5622', 'nonce': '1147', 'blockHash': '0x1660de1e73067251be0109d267a21ffc7d5bde21719a3664c7045c32e771ecf9', 'transactionIndex': '1', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9758447294000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '42000', 'gasUsed': '21000', 'confirmations': '15976543', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xd6cce5b184dc7fce85f305ee832df647a9c4640b68e9b79b6f74dc38336d5622', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1761783', 'timeStamp': '1466757809', 'hash': '0xd01545872629956867cbd65fdf5e97d0dde1a112c12e76a1bfc92048d37f650f', 'nonce': '1169', 'blockHash': '0x7576961afa4218a3264addd37a41f55c444dd534e9410dbd6f93f7fe20e0363e', 'transactionIndex': '2', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '10197126683000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '63000', 'gasUsed': '21000', 'confirmations': '15973469', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xd01545872629956867cbd65fdf5e97d0dde1a112c12e76a1bfc92048d37f650f', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1764895', 'timeStamp': '1466801683', 'hash': '0x620b91b12af7aac75553b47f15742e2825ea38919cfc8082c0666f404a0db28b', 'nonce': '1186', 'blockHash': '0x2e687643becd3c36e0c396a02af0842775e17ccefa0904de5aeca0a9a1aa795e', 'transactionIndex': '7', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '8690241462000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '168000', 'gasUsed': '21000', 'confirmations': '15970357', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x620b91b12af7aac75553b47f15742e2825ea38919cfc8082c0666f404a0db28b', 'to': 
'0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1767936', 'timeStamp': '1466845682', 'hash': '0x758efa27576cd17ebe7b842db4892eac6609e3962a4f9f57b7c84b7b1909512f', 'nonce': '1211', 'blockHash': '0xb01d8fd47b3554a99352ac3e5baf5524f314cfbc4262afcfbea1467b2d682898', 'transactionIndex': '0', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11914401843000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '15967316', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x758efa27576cd17ebe7b842db4892eac6609e3962a4f9f57b7c84b7b1909512f', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1770911', 'timeStamp': '1466888890', 'hash': '0x9d84470b54ab44b9074b108a0e506cd8badf30457d221e595bb68d63e926b865', 'nonce': '1212', 'blockHash': '0x79a9de39276132dab8bf00dc3e060f0e8a14f5e16a0ee4e9cc491da31b25fe58', 'transactionIndex': '0', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '10918214730000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '15964341', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x9d84470b54ab44b9074b108a0e506cd8badf30457d221e595bb68d63e926b865', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1774044', 'timeStamp': '1466932983', 'hash': '0x958d85270b58b80f1ad228f716bbac8dd9da7c5f239e9f30d8edeb5bb9301d20', 'nonce': '1240', 'blockHash': '0x69cee390378c3b886f9543fb3a1cb2fc97621ec155f7884564d4c866348ce539', 'transactionIndex': '2', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9979637283000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '63000', 'gasUsed': '21000', 'confirmations': '15961208', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x958d85270b58b80f1ad228f716bbac8dd9da7c5f239e9f30d8edeb5bb9301d20', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1777057', 'timeStamp': '1466976422', 'hash': '0xe76ca3603d2f4e7134bdd7a1c3fd553025fc0b793f3fd2a75cd206b8049e74ab', 'nonce': '1248', 'blockHash': '0xc7cacda0ac38c99f1b9bccbeee1562a41781d2cfaa357e8c7b4af6a49584b968', 'transactionIndex': '7', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '4556173496000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '168000', 'gasUsed': '21000', 'confirmations': '15958195', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xe76ca3603d2f4e7134bdd7a1c3fd553025fc0b793f3fd2a75cd206b8049e74ab', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1780120', 'timeStamp': 
'1467020353', 'hash': '0xc5ec8cecdc9f5ed55a5b8b0ad79c964fb5c49dc1136b6a49e981616c3e70bbe6', 'nonce': '1266', 'blockHash': '0xfc0e066e5b613239e1a01e6d582e7ab162ceb3ca4f719dfbd1a0c965adcfe1c5', 'transactionIndex': '1', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11890330240000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '42000', 'gasUsed': '21000', 'confirmations': '15955132', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xc5ec8cecdc9f5ed55a5b8b0ad79c964fb5c49dc1136b6a49e981616c3e70bbe6', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'})] ```
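The setup and loader-construction cells for this page were also lost in extraction. Below is a minimal sketch, assuming `EtherscanLoader` (from `langchain_community.document_loaders`) reads the API key from the `ETHERSCAN_API_KEY` environment variable and accepts `filter`, `page`, `offset`, `start_block`, `end_block`, and `sort` parameters; treat the exact parameter names and filter values as assumptions rather than a definitive reference.

```python
import os

from langchain_community.document_loaders import EtherscanLoader

# Assumption: the loader picks up the API key from this environment variable.
os.environ["ETHERSCAN_API_KEY"] = "<your-etherscan-api-key>"

account_address = "0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b"

# Normal transaction history for the account (assumed default filter).
loader = EtherscanLoader(account_address, filter="normal_transaction")
docs = loader.load()

# Paging and block-range options to narrow the (at most 1000) returned histories.
filtered_loader = EtherscanLoader(
    account_address,
    filter="erc20_transaction",  # token-transfer history, like the first output above
    page=1,                      # page number of the results
    offset=20,                   # number of histories per page
    start_block=0,               # earliest block to include
    end_block=99999999,          # latest block to include
    sort="desc",                 # newest first
)
docs = filtered_loader.load()
```

Each returned `Document` stores one transaction dictionary as its page content, with the `from`, `to`, and `tx_hash` fields in its metadata, as shown in the example output above.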
https://python.langchain.com/docs/integrations/document_loaders/fauna/
```
from langchain_community.document_loaders.fauna import FaunaLoader

secret = "<enter-valid-fauna-secret>"
query = "Item.all()"  # Fauna query. Assumes that the collection is called "Item"
field = "text"  # The field that contains the page content. Assumes that the field is called "text"

loader = FaunaLoader(query, field, secret)
docs = loader.lazy_load()
for value in docs:
    print(value)
```

You get an `after` value if there is more data. You can retrieve the values after the cursor by passing the `after` string in the query.
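One way to continue from that cursor is to splice it into the FQL query itself. Below is a minimal sketch, assuming FQL v10's `Set.paginate()` and that the cursor returned with the previous page is available as `after_cursor` (a name used here only for illustration, not part of the loader's API):

```
from langchain_community.document_loaders.fauna import FaunaLoader

secret = "<enter-valid-fauna-secret>"
field = "text"

# Hypothetical cursor string captured from the previous page of results
after_cursor = "<after-cursor-from-previous-page>"

# Set.paginate(<cursor>) resumes a paginated FQL v10 query from that cursor
# (an assumption about the FQL API; verify against the Fauna docs)
next_page_query = f'Set.paginate("{after_cursor}")'

loader = FaunaLoader(next_page_query, field, secret)
for doc in loader.lazy_load():
    print(doc)
```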
https://python.langchain.com/docs/integrations/document_loaders/facebook_chat/
## Facebook Chat

> [Messenger](https://en.wikipedia.org/wiki/Messenger_(software)) is an American proprietary instant messaging app and platform developed by `Meta Platforms`. Originally developed as `Facebook Chat` in 2008, the company revamped its messaging service in 2010.

This notebook covers how to load data from the [Facebook Chats](https://www.facebook.com/business/help/1646890868956360) into a format that can be ingested into LangChain.

```
from langchain_community.document_loaders import FacebookChatLoader
```

```
loader = FacebookChatLoader("example_data/facebook_chat.json")
```

```
[Document(page_content='User 2 on 2023-02-05 03:46:11: Bye!\n\nUser 1 on 2023-02-05 03:43:55: Oh no worries! Bye\n\nUser 2 on 2023-02-05 03:24:37: No Im sorry it was my mistake, the blue one is not for sale\n\nUser 1 on 2023-02-05 03:05:40: I thought you were selling the blue one!\n\nUser 1 on 2023-02-05 03:05:09: Im not interested in this bag. Im interested in the blue one!\n\nUser 2 on 2023-02-05 03:04:28: Here is $129\n\nUser 2 on 2023-02-05 03:04:05: Online is at least $100\n\nUser 1 on 2023-02-05 02:59:59: How much do you want?\n\nUser 2 on 2023-02-04 22:17:56: Goodmorning! $50 is too low.\n\nUser 1 on 2023-02-04 14:17:02: Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!\n\n', metadata={'source': 'example_data/facebook_chat.json'})]
```
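For reference, the document list above is what the loader's standard `load()` method returns:

```
docs = loader.load()
docs
```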
https://python.langchain.com/docs/integrations/document_loaders/evernote/
[EverNote](https://evernote.com/) is intended for archiving and creating notes in which photos, audio and saved web content can be embedded. Notes are stored in virtual “notebooks” and can be tagged, annotated, edited, searched, and exported.

A document will be created for each note in the export.

```
[Document(page_content='testing this\n\nwhat happens?\n\nto the world?**Jan - March 2022**', metadata={'source': 'example_data/testing.enex'})]
```

```
[Document(page_content='testing this\n\nwhat happens?\n\nto the world?', metadata={'title': 'testing', 'created': time.struct_time(tm_year=2023, tm_mon=2, tm_mday=9, tm_hour=3, tm_min=47, tm_sec=46, tm_wday=3, tm_yday=40, tm_isdst=-1), 'updated': time.struct_time(tm_year=2023, tm_mon=2, tm_mday=9, tm_hour=3, tm_min=53, tm_sec=28, tm_wday=3, tm_yday=40, tm_isdst=-1), 'note-attributes.author': 'Harrison Chase', 'source': 'example_data/testing.enex'}), Document(page_content='**Jan - March 2022**', metadata={'title': 'Summer Training Program', 'created': time.struct_time(tm_year=2022, tm_mon=12, tm_mday=27, tm_hour=1, tm_min=59, tm_sec=48, tm_wday=1, tm_yday=361, tm_isdst=-1), 'note-attributes.author': 'Mike McGarry', 'note-attributes.source': 'mobile.iphone', 'source': 'example_data/testing.enex'})]
```
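The two outputs above correspond to loading the whole export as a single document and to loading one document per note. A minimal sketch of the calls that would produce them, assuming the standard `EverNoteLoader` arguments (the `load_single_document` flag in particular):

```
from langchain_community.document_loaders import EverNoteLoader

# Whole .enex export as a single document (note contents concatenated)
loader = EverNoteLoader("example_data/testing.enex")
loader.load()

# One document per note, with the note attributes carried into metadata
loader = EverNoteLoader("example_data/testing.enex", load_single_document=False)
loader.load()
```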
https://python.langchain.com/docs/integrations/document_loaders/geopandas/
[Geopandas](https://geopandas.org/en/stable/index.html) is an open-source project to make working with geospatial data in Python easier.

GeoPandas extends the datatypes used by pandas to allow spatial operations on geometric types. Geometric operations are performed by shapely. Geopandas further depends on fiona for file access and matplotlib for plotting.

LLM applications (chat, QA) that utilize geospatial data are an interesting area for exploration.

Visualization of the sample of SF crime data.

```
import matplotlib.pyplot as plt

# Load San Francisco map data
sf = gpd.read_file("https://data.sfgov.org/resource/3psu-pn9h.geojson")

# Plot the San Francisco map and the points
fig, ax = plt.subplots(figsize=(10, 10))
sf.plot(ax=ax, color="white", edgecolor="black")
gdf.plot(ax=ax, color="red", markersize=5)
plt.show()
```

Load the GeoPandas dataframe as a `Document` for downstream processing (embedding, chat, etc.). The `geometry` column will be the default `page_content`, and all other columns are placed in `metadata`. But we can specify the `page_content_column`.

```
Document(page_content='POINT (-122.420084075249 37.7083109744362)', metadata={'pdid': '4133422003074', 'incidntnum': '041334220', 'incident_code': '03074', 'category': 'ROBBERY', 'descript': 'ROBBERY, BODILY FORCE', 'dayofweek': 'Monday', 'date': '2004-11-22T00:00:00.000', 'time': '17:50', 'pddistrict': 'INGLESIDE', 'resolution': 'NONE', 'address': 'GENEVA AV / SANTOS ST', 'x': '-122.420084075249', 'y': '37.7083109744362', 'location': {'type': 'Point', 'coordinates': [-122.420084075249, 37.7083109744362]}, ':@computed_region_26cr_cadq': '9', ':@computed_region_rxqg_mtj9': '8', ':@computed_region_bh8s_q3mv': '309', ':@computed_region_6qbp_sg9q': nan, ':@computed_region_qgnn_b9vv': nan, ':@computed_region_ajp5_b2md': nan, ':@computed_region_yftq_j783': nan, ':@computed_region_p5aj_wyqh': nan, ':@computed_region_fyvs_ahh9': nan, ':@computed_region_6pnf_4xz7': nan, ':@computed_region_jwn9_ihcz': nan, ':@computed_region_9dfj_4gjx': nan, ':@computed_region_4isq_27mq': nan, ':@computed_region_pigm_ib2e': nan, ':@computed_region_9jxd_iqea': nan, ':@computed_region_6ezc_tdp2': nan, ':@computed_region_h4ep_8xdi': nan, ':@computed_region_n4xg_c4py': nan, ':@computed_region_fcz8_est8': nan, ':@computed_region_nqbw_i6c3': nan, ':@computed_region_2dwj_jsy4': nan, 'Latitude': 37.7083109744362, 'Longitude': -122.420084075249})
```
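A minimal sketch of turning the dataframe into LangChain documents, assuming the `GeoDataFrameLoader` from `langchain_community` and that `gdf` is the GeoDataFrame of SF crime records plotted above:

```
from langchain_community.document_loaders import GeoDataFrameLoader

# Use the geometry column as page_content; every other column lands in metadata
loader = GeoDataFrameLoader(data_frame=gdf, page_content_column="geometry")
docs = loader.load()
docs[0]
```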
https://python.langchain.com/docs/integrations/document_loaders/figma/
## Figma

> [Figma](https://www.figma.com/) is a collaborative web application for interface design.

This notebook covers how to load data from the `Figma` REST API into a format that can be ingested into LangChain, along with example usage for code generation.

```
import os

from langchain.indexes import VectorstoreIndexCreator
from langchain_community.document_loaders.figma import FigmaFileLoader
from langchain_core.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
from langchain_openai import ChatOpenAI
```

The Figma API requires an access token, node\_ids, and a file key.

The file key can be pulled from the URL. [https://www.figma.com/file/{filekey}/sampleFilename](https://www.figma.com/file/%7Bfilekey%7D/sampleFilename)

Node IDs are also available in the URL. Click on anything and look for the ‘?node-id={node\_id}’ param.

Access token instructions are in the Figma help center article: [https://help.figma.com/hc/en-us/articles/8085703771159-Manage-personal-access-tokens](https://help.figma.com/hc/en-us/articles/8085703771159-Manage-personal-access-tokens)

```
figma_loader = FigmaFileLoader(
    os.environ.get("ACCESS_TOKEN"),
    os.environ.get("NODE_IDS"),
    os.environ.get("FILE_KEY"),
)
```

```
# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more details
index = VectorstoreIndexCreator().from_loaders([figma_loader])
figma_doc_retriever = index.vectorstore.as_retriever()
```

```
def generate_code(human_input):
    # I have no idea if the Jon Carmack thing makes for better code. YMMV.
    # See https://python.langchain.com/en/latest/modules/models/chat/getting_started.html for chat info
    system_prompt_template = """You are expert coder Jon Carmack. Use the provided design context to create idiomatic HTML/CSS code as possible based on the user request.
    Everything must be inline in one file and your response must be directly renderable by the browser.
    Figma file nodes and metadata: {context}"""

    human_prompt_template = "Code the {text}. Ensure it's mobile responsive"
    system_message_prompt = SystemMessagePromptTemplate.from_template(
        system_prompt_template
    )
    human_message_prompt = HumanMessagePromptTemplate.from_template(
        human_prompt_template
    )
    # delete the gpt-4 model_name to use the default gpt-3.5 turbo for faster results
    gpt_4 = ChatOpenAI(temperature=0.02, model_name="gpt-4")
    # Use the retriever's 'get_relevant_documents' method if needed to filter down longer docs
    relevant_nodes = figma_doc_retriever.get_relevant_documents(human_input)
    conversation = [system_message_prompt, human_message_prompt]
    chat_prompt = ChatPromptTemplate.from_messages(conversation)
    response = gpt_4(
        chat_prompt.format_prompt(
            context=relevant_nodes, text=human_input
        ).to_messages()
    )
    return response
```

```
response = generate_code("page top header")
```

Returns the following in `response.content`:

```
<!DOCTYPE html>\n<html lang="en">\n<head>\n <meta charset="UTF-8">\n <meta name="viewport" content="width=device-width, initial-scale=1.0">\n <style>\n @import url(\'https://fonts.googleapis.com/css2?family=DM+Sans:wght@500;700&family=Inter:wght@600&display=swap\');\n\n body {\n margin: 0;\n font-family: \'DM Sans\', sans-serif;\n }\n\n .header {\n display: flex;\n justify-content: space-between;\n align-items: center;\n padding: 20px;\n background-color: #fff;\n box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);\n }\n\n .header h1 {\n font-size: 16px;\n font-weight: 700;\n margin: 0;\n }\n\n .header nav {\n display: flex;\n align-items: center;\n }\n\n .header nav a {\n font-size: 14px;\n font-weight: 500;\n text-decoration: none;\n color: #000;\n margin-left: 20px;\n }\n\n @media (max-width: 768px) {\n .header nav {\n display: none;\n }\n }\n </style>\n</head>\n<body>\n <header class="header">\n <h1>Company Contact</h1>\n <nav>\n <a href="#">Lorem Ipsum</a>\n <a href="#">Lorem Ipsum</a>\n <a href="#">Lorem Ipsum</a>\n </nav>\n </header>\n</body>\n</html>
```
https://python.langchain.com/docs/integrations/document_loaders/obsidian/
This notebook covers how to load documents from an `Obsidian` database. Since `Obsidian` is just stored on disk as a folder of Markdown files, the loader just takes a path to this directory. `Obsidian` files also sometimes contain [metadata](https://help.obsidian.md/Editing+and+formatting/Metadata) which is a YAML block at the top of the file. These values will be added to the document’s metadata. (`ObsidianLoader` can also be passed a `collect_metadata=False` argument to disable this behavior.) ``` from langchain_community.document_loaders import ObsidianLoader ```
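A minimal sketch of loading a vault, assuming the directory path below is replaced with a real Obsidian vault path (the placeholder is illustrative only):

```
from langchain_community.document_loaders import ObsidianLoader

# Point the loader at the vault directory; pass collect_metadata=False to skip the YAML front matter
loader = ObsidianLoader("<path-to-obsidian-vault>")
docs = loader.load()
```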
https://python.langchain.com/docs/integrations/document_loaders/git/
## Git

> [Git](https://en.wikipedia.org/wiki/Git) is a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers collaboratively developing source code during software development.

This notebook shows how to load text files from a `Git` repository.

## Load existing repository from disk[​](#load-existing-repository-from-disk "Direct link to Load existing repository from disk")

```
%pip install --upgrade --quiet GitPython
```

```
from git import Repo

repo = Repo.clone_from(
    "https://github.com/langchain-ai/langchain", to_path="./example_data/test_repo1"
)
branch = repo.head.reference
```

```
from langchain_community.document_loaders import GitLoader
```

```
loader = GitLoader(repo_path="./example_data/test_repo1/", branch=branch)
```

```
page_content='.venv\n.github\n.git\n.mypy_cache\n.pytest_cache\nDockerfile' metadata={'file_path': '.dockerignore', 'file_name': '.dockerignore', 'file_type': ''}
```

## Clone repository from url[​](#clone-repository-from-url "Direct link to Clone repository from url")

```
from langchain_community.document_loaders import GitLoader
```

```
loader = GitLoader(
    clone_url="https://github.com/langchain-ai/langchain",
    repo_path="./example_data/test_repo2/",
    branch="master",
)
```

## Filtering files to load[​](#filtering-files-to-load "Direct link to Filtering files to load")

```
from langchain_community.document_loaders import GitLoader

# e.g. loading only python files
loader = GitLoader(
    repo_path="./example_data/test_repo1/",
    file_filter=lambda file_path: file_path.endswith(".py"),
)
```
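Whichever configuration is used, the documents are materialized with `load()`; output like the `page_content`/`metadata` pair shown earlier comes from inspecting one of the loaded documents:

```
# Materialize the documents from the loader configured above
data = loader.load()
print(len(data))
print(data[0])
```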
https://python.langchain.com/docs/integrations/document_loaders/firecrawl/
[FireCrawl](https://firecrawl.dev/?ref=langchain) crawls and converts any website into LLM-ready data. It crawls all accessible subpages and gives you clean markdown and metadata for each. No sitemap required. FireCrawl handles complex tasks such as reverse proxies, caching, rate limits, and content blocked by JavaScript. Built by the [mendable.ai](https://mendable.ai/) team. ``` Requirement already satisfied: firecrawl-py in /Users/nicolascamara/anaconda3/envs/langchain/lib/python3.9/site-packages (0.0.5)Requirement already satisfied: requests in /Users/nicolascamara/anaconda3/envs/langchain/lib/python3.9/site-packages (from firecrawl-py) (2.31.0)Requirement already satisfied: charset-normalizer<4,>=2 in /Users/nicolascamara/anaconda3/envs/langchain/lib/python3.9/site-packages (from requests->firecrawl-py) (3.3.2)Requirement already satisfied: idna<4,>=2.5 in /Users/nicolascamara/anaconda3/envs/langchain/lib/python3.9/site-packages (from requests->firecrawl-py) (3.6)Requirement already satisfied: urllib3<3,>=1.21.1 in /Users/nicolascamara/anaconda3/envs/langchain/lib/python3.9/site-packages (from requests->firecrawl-py) (2.0.7)Requirement already satisfied: certifi>=2017.4.17 in /Users/nicolascamara/anaconda3/envs/langchain/lib/python3.9/site-packages (from requests->firecrawl-py) (2024.2.2)Note: you may need to restart the kernel to use updated packages. ``` ``` [Document(page_content='[Skip to content](#skip)\n\n[🔥 FireCrawl](/)\n\n[Playground](/playground)\n[Pricing](/pricing)\n\n[Log In](/signin)\n[Log In](/signin)\n[Sign Up](/signin/signup)\n\n![Slack Logo](/images/slack_logo_icon.png)\n\nNew message in: #coach-gtm\n==========================\n\n@CoachGTM: Your meeting prep for Pied Piper < > WindFlow Dynamics is ready! Meeting starts in 30 minutes\n\nTurn websites into \n_LLM-ready_ data\n=====================================\n\nCrawl and convert any website into clean markdown\n\nTry now (100 free credits)No credit card required\n\nA product by\n\n[![Mendable Logo](/images/mendable_logo_transparent.png)Mendable](https://mendable.ai)\n\n![Mendable Website Image](/mendable-hero-8.png)\n\nCrawl, Capture, Clean\n---------------------\n\nWe crawl all accessible subpages and give you clean markdown for each. 
No sitemap required.\n\n \n [\\\n {\\\n "url": "https://www.mendable.ai/",\\\n "markdown": "## Welcome to Mendable\\\n Mendable empowers teams with AI-driven solutions - \\\n streamlining sales and support."\\\n },\\\n {\\\n "url": "https://www.mendable.ai/features",\\\n "markdown": "## Features\\\n Discover how Mendable\'s cutting-edge features can \\\n transform your business operations."\\\n },\\\n {\\\n "url": "https://www.mendable.ai/pricing",\\\n "markdown": "## Pricing Plans\\\n Choose the perfect plan that fits your business needs."\\\n },\\\n {\\\n "url": "https://www.mendable.ai/about",\\\n "markdown": "## About Us\\\n \\\n Learn more about Mendable\'s mission and the \\\n team behind our innovative platform."\\\n },\\\n {\\\n "url": "https://www.mendable.ai/contact",\\\n "markdown": "## Contact Us\\\n Get in touch with us for any queries or support."\\\n },\\\n {\\\n "url": "https://www.mendable.ai/blog",\\\n "markdown": "## Blog\\\n Stay updated with the latest news and insights from Mendable."\\\n }\\\n ]\n \n\nNote: The markdown has been edited for display purposes.\n\nWe handle the hard stuff\n------------------------\n\nReverse proxyies, caching, rate limits, js-blocked content and more...\n\n#### Crawling\n\nFireCrawl crawls all accessible subpages, even without a sitemap.\n\n#### Dynamic content\n\nFireCrawl gathers data even if a website uses javascript to render content.\n\n#### To Markdown\n\nFireCrawl returns clean, well formatted markdown - ready for use in LLM applications\n\n#### Continuous updates\n\nSchedule syncs with FireCrawl. No cron jobs or orchestration required.\n\n#### Caching\n\nFireCrawl caches content, so you don\'t have to wait for a full scrape unless new content exists.\n\n#### Built for AI\n\nBuilt by LLM engineers, for LLM engineers. Giving you clean data the way you want it.\n\nPricing Plans\n=============\n\nStarter\n-------\n\n50k credits ($1.00/1k)\n\n$50/month\n\n* Scrape 50,000 pages\n* Credits valid for 6 months\n* 2 simultaneous scrapers\\*\n\nSubscribe\n\nStandard\n--------\n\n500k credits ($0.75/1k)\n\n$375/month\n\n* Scrape 500,000 pages\n* Credits valid for 6 months\n* 4 simultaneous scrapers\\*\n\nSubscribe\n\nScale\n-----\n\n12.5M credits ($0.30/1k)\n\n$1,250/month\n\n* Scrape 2,500,000 pages\n* Credits valid for 6 months\n* 10 simultaneous scrapes\\*\n\nSubscribe\n\n\\* a "scraper" refers to how many scraper jobs you can simultaneously submit.\n\nWhat sites work?\n----------------\n\nFirecrawl is best suited for business websites, docs and help centers.\n\nBuisness websites\n\nGathering business intelligence or connecting company data to your AI\n\nBlogs, Documentation and Help centers\n\nGather content from documentation and other textual sources\n\nSocial Media\n\nComing soon\n\n![Feature 01](/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fexample-business-2.b6c6b56a.png&w=1920&q=75)\n\n![Feature 02](/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fexample-docs-sites.11eef02d.png&w=1920&q=75)\n\nComing Soon\n-----------\n\n[But I want it now!](https://calendly.com/d/cp3d-rvx-58g/mendable-meeting)\n\\* Schedule a meeting\n\n![Feature 04](/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fexample-business-2.b6c6b56a.png&w=1920&q=75)\n\n![Slack Logo](/images/slack_logo_icon.png)\n\nNew message in: #coach-gtm\n==========================\n\n@CoachGTM: Your meeting prep for Pied Piper < > WindFlow Dynamics is ready! 
Meeting starts in 30 minutes\n\n[🔥](/)\n\nReady to _Build?_\n-----------------\n\n[Meet with us](https://calendly.com/d/cp3d-rvx-58g/mendable-meeting)\n\n[Try 100 queries free](/signin)\n\n[Discord](https://discord.gg/gSmWdAkdwd)\n\nFAQ\n---\n\nFrequently asked questions about FireCrawl\n\nWhat is FireCrawl?\n\nFireCrawl is an advanced web crawling and data conversion tool designed to transform any website into clean, LLM-ready markdown. Ideal for AI developers and data scientists, it automates the collection, cleaning, and formatting of web data, streamlining the preparation process for Large Language Model (LLM) applications.\n\nHow does FireCrawl handle dynamic content on websites?\n\nUnlike traditional web scrapers, FireCrawl is equipped to handle dynamic content rendered with JavaScript. It ensures comprehensive data collection from all accessible subpages, making it a reliable tool for scraping websites that rely heavily on JS for content delivery.\n\nCan FireCrawl crawl websites without a sitemap?\n\nYes, FireCrawl can access and crawl all accessible subpages of a website, even in the absence of a sitemap. This feature enables users to gather data from a wide array of web sources with minimal setup.\n\nWhat formats can FireCrawl convert web data into?\n\nFireCrawl specializes in converting web data into clean, well-formatted markdown. This format is particularly suited for LLM applications, offering a structured yet flexible way to represent web content.\n\nHow does FireCrawl ensure the cleanliness of the data?\n\nFireCrawl employs advanced algorithms to clean and structure the scraped data, removing unnecessary elements and formatting the content into readable markdown. This process ensures that the data is ready for use in LLM applications without further preprocessing.\n\nIs FireCrawl suitable for large-scale data scraping projects?\n\nAbsolutely. FireCrawl offers various pricing plans, including a Scale plan that supports scraping of millions of pages. With features like caching and scheduled syncs, it\'s designed to efficiently handle large-scale data scraping and continuous updates, making it ideal for enterprises and large projects.\n\nWhat measures does FireCrawl take to handle web scraping challenges like rate limits and caching?\n\nFireCrawl is built to navigate common web scraping challenges, including reverse proxies, rate limits, and caching. It smartly manages requests and employs caching techniques to minimize bandwidth usage and avoid triggering anti-scraping mechanisms, ensuring reliable data collection.\n\nHow can I try FireCrawl?\n\nYou can start with FireCrawl by trying our free trial, which includes 100 pages. This trial allows you to experience firsthand how FireCrawl can streamline your data collection and conversion processes. Sign up and begin transforming web content into LLM-ready data today!\n\nWho can benefit from using FireCrawl?\n\nFireCrawl is tailored for LLM engineers, data scientists, AI researchers, and developers looking to harness web data for training machine learning models, market research, content aggregation, and more. 
It simplifies the data preparation process, allowing professionals to focus on insights and model development.\n\n[🔥](/)\n\n© A product by Mendable.ai - All rights reserved.\n\n[Twitter](https://twitter.com/mendableai)\n[GitHub](https://github.com/sideguide)\n[Discord](https://discord.gg/gSmWdAkdwd)\n\nBacked by![Y Combinator Logo](/images/yc.svg)\n\n![SOC 2 Type II](/soc2type2badge.png)\n\n###### Company\n\n* [About us](#0)\n \n* [Diversity & Inclusion](#0)\n \n* [Blog](#0)\n \n* [Careers](#0)\n \n* [Financial statements](#0)\n \n\n###### Resources\n\n* [Community](#0)\n \n* [Terms of service](#0)\n \n* [Collaboration features](#0)\n \n\n###### Legals\n\n* [Refund policy](#0)\n \n* [Terms & Conditions](#0)\n \n* [Privacy policy](#0)\n \n* [Brand Kit](#0)', metadata={'title': 'Home - FireCrawl', 'description': 'FireCrawl crawls and converts any website into clean markdown.', 'language': None, 'sourceURL': 'https://firecrawl.dev/'}), Document(page_content='[Skip to content](#skip)\n\n[🔥 FireCrawl](/)\n\n[Playground](/playground)\n[Pricing](/pricing)\n\n[Log In](/signin)\n[Log In](/signin)\n[Sign Up](/signin/signup)\n\nPricing Plans\n=============\n\nStarter\n-------\n\n50k credits ($1.00/1k)\n\n$50/month\n\n* Scrape 50,000 pages\n* Credits valid for 6 months\n* 2 simultaneous scrapers\\*\n\nSubscribe\n\nStandard\n--------\n\n500k credits ($0.75/1k)\n\n$375/month\n\n* Scrape 500,000 pages\n* Credits valid for 6 months\n* 4 simultaneous scrapers\\*\n\nSubscribe\n\nScale\n-----\n\n12.5M credits ($0.30/1k)\n\n$1,250/month\n\n* Scrape 2,500,000 pages\n* Credits valid for 6 months\n* 10 simultaneous scrapes\\*\n\nSubscribe\n\n\\* a "scraper" refers to how many scraper jobs you can simultaneously submit.\n\n[🔥](/)\n\n© A product by Mendable.ai - All rights reserved.\n\n[Twitter](https://twitter.com/mendableai)\n[GitHub](https://github.com/sideguide)\n[Discord](https://discord.gg/gSmWdAkdwd)\n\nBacked by![Y Combinator Logo](/images/yc.svg)\n\n![SOC 2 Type II](/soc2type2badge.png)\n\n###### Company\n\n* [About us](#0)\n \n* [Diversity & Inclusion](#0)\n \n* [Blog](#0)\n \n* [Careers](#0)\n \n* [Financial statements](#0)\n \n\n###### Resources\n\n* [Community](#0)\n \n* [Terms of service](#0)\n \n* [Collaboration features](#0)\n \n\n###### Legals\n\n* [Refund policy](#0)\n \n* [Terms & Conditions](#0)\n \n* [Privacy policy](#0)\n \n* [Brand Kit](#0)', metadata={'title': 'FireCrawl', 'description': 'Turn any website into LLM-ready data.', 'language': None, 'sourceURL': 'https://firecrawl.dev/pricing'})] ``` ``` [Document(page_content='[Skip to content](#skip)\n\n[🔥 FireCrawl](/)\n\n[Playground](/playground)\n[Pricing](/pricing)\n\n[Log In](/signin)\n[Log In](/signin)\n[Sign Up](/signin/signup)\n\n![Slack Logo](/images/slack_logo_icon.png)\n\nNew message in: #coach-gtm\n==========================\n\n@CoachGTM: Your meeting prep for Pied Piper < > WindFlow Dynamics is ready! Meeting starts in 30 minutes\n\nTurn websites into \n_LLM-ready_ data\n=====================================\n\nCrawl and convert any website into clean markdown\n\nTry now (100 free credits)No credit card required\n\nA product by\n\n[![Mendable Logo](/images/mendable_logo_transparent.png)Mendable](https://mendable.ai)\n\n![Mendable Website Image](/mendable-hero-8.png)\n\nCrawl, Capture, Clean\n---------------------\n\nWe crawl all accessible subpages and give you clean markdown for each. 
No sitemap required.\n\n \n [\\\n {\\\n "url": "https://www.mendable.ai/",\\\n "markdown": "## Welcome to Mendable\\\n Mendable empowers teams with AI-driven solutions - \\\n streamlining sales and support."\\\n },\\\n {\\\n "url": "https://www.mendable.ai/features",\\\n "markdown": "## Features\\\n Discover how Mendable\'s cutting-edge features can \\\n transform your business operations."\\\n },\\\n {\\\n "url": "https://www.mendable.ai/pricing",\\\n "markdown": "## Pricing Plans\\\n Choose the perfect plan that fits your business needs."\\\n },\\\n {\\\n "url": "https://www.mendable.ai/about",\\\n "markdown": "## About Us\\\n \\\n Learn more about Mendable\'s mission and the \\\n team behind our innovative platform."\\\n },\\\n {\\\n "url": "https://www.mendable.ai/contact",\\\n "markdown": "## Contact Us\\\n Get in touch with us for any queries or support."\\\n },\\\n {\\\n "url": "https://www.mendable.ai/blog",\\\n "markdown": "## Blog\\\n Stay updated with the latest news and insights from Mendable."\\\n }\\\n ]\n \n\nNote: The markdown has been edited for display purposes.\n\nWe handle the hard stuff\n------------------------\n\nReverse proxyies, caching, rate limits, js-blocked content and more...\n\n#### Crawling\n\nFireCrawl crawls all accessible subpages, even without a sitemap.\n\n#### Dynamic content\n\nFireCrawl gathers data even if a website uses javascript to render content.\n\n#### To Markdown\n\nFireCrawl returns clean, well formatted markdown - ready for use in LLM applications\n\n#### Continuous updates\n\nSchedule syncs with FireCrawl. No cron jobs or orchestration required.\n\n#### Caching\n\nFireCrawl caches content, so you don\'t have to wait for a full scrape unless new content exists.\n\n#### Built for AI\n\nBuilt by LLM engineers, for LLM engineers. Giving you clean data the way you want it.\n\nPricing Plans\n=============\n\nStarter\n-------\n\n50k credits ($1.00/1k)\n\n$50/month\n\n* Scrape 50,000 pages\n* Credits valid for 6 months\n* 2 simultaneous scrapers\\*\n\nSubscribe\n\nStandard\n--------\n\n500k credits ($0.75/1k)\n\n$375/month\n\n* Scrape 500,000 pages\n* Credits valid for 6 months\n* 4 simultaneous scrapers\\*\n\nSubscribe\n\nScale\n-----\n\n12.5M credits ($0.30/1k)\n\n$1,250/month\n\n* Scrape 2,500,000 pages\n* Credits valid for 6 months\n* 10 simultaneous scrapes\\*\n\nSubscribe\n\n\\* a "scraper" refers to how many scraper jobs you can simultaneously submit.\n\nWhat sites work?\n----------------\n\nFirecrawl is best suited for business websites, docs and help centers.\n\nBuisness websites\n\nGathering business intelligence or connecting company data to your AI\n\nBlogs, Documentation and Help centers\n\nGather content from documentation and other textual sources\n\nSocial Media\n\nComing soon\n\n![Feature 01](/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fexample-business-2.b6c6b56a.png&w=1920&q=75)\n\n![Feature 02](/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fexample-docs-sites.11eef02d.png&w=1920&q=75)\n\nComing Soon\n-----------\n\n[But I want it now!](https://calendly.com/d/cp3d-rvx-58g/mendable-meeting)\n\\* Schedule a meeting\n\n![Feature 04](/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fexample-business-2.b6c6b56a.png&w=1920&q=75)\n\n![Slack Logo](/images/slack_logo_icon.png)\n\nNew message in: #coach-gtm\n==========================\n\n@CoachGTM: Your meeting prep for Pied Piper < > WindFlow Dynamics is ready! 
Meeting starts in 30 minutes\n\n[🔥](/)\n\nReady to _Build?_\n-----------------\n\n[Meet with us](https://calendly.com/d/cp3d-rvx-58g/mendable-meeting)\n\n[Try 100 queries free](/signin)\n\n[Discord](https://discord.gg/gSmWdAkdwd)\n\nFAQ\n---\n\nFrequently asked questions about FireCrawl\n\nWhat is FireCrawl?\n\nFireCrawl is an advanced web crawling and data conversion tool designed to transform any website into clean, LLM-ready markdown. Ideal for AI developers and data scientists, it automates the collection, cleaning, and formatting of web data, streamlining the preparation process for Large Language Model (LLM) applications.\n\nHow does FireCrawl handle dynamic content on websites?\n\nUnlike traditional web scrapers, FireCrawl is equipped to handle dynamic content rendered with JavaScript. It ensures comprehensive data collection from all accessible subpages, making it a reliable tool for scraping websites that rely heavily on JS for content delivery.\n\nCan FireCrawl crawl websites without a sitemap?\n\nYes, FireCrawl can access and crawl all accessible subpages of a website, even in the absence of a sitemap. This feature enables users to gather data from a wide array of web sources with minimal setup.\n\nWhat formats can FireCrawl convert web data into?\n\nFireCrawl specializes in converting web data into clean, well-formatted markdown. This format is particularly suited for LLM applications, offering a structured yet flexible way to represent web content.\n\nHow does FireCrawl ensure the cleanliness of the data?\n\nFireCrawl employs advanced algorithms to clean and structure the scraped data, removing unnecessary elements and formatting the content into readable markdown. This process ensures that the data is ready for use in LLM applications without further preprocessing.\n\nIs FireCrawl suitable for large-scale data scraping projects?\n\nAbsolutely. FireCrawl offers various pricing plans, including a Scale plan that supports scraping of millions of pages. With features like caching and scheduled syncs, it\'s designed to efficiently handle large-scale data scraping and continuous updates, making it ideal for enterprises and large projects.\n\nWhat measures does FireCrawl take to handle web scraping challenges like rate limits and caching?\n\nFireCrawl is built to navigate common web scraping challenges, including reverse proxies, rate limits, and caching. It smartly manages requests and employs caching techniques to minimize bandwidth usage and avoid triggering anti-scraping mechanisms, ensuring reliable data collection.\n\nHow can I try FireCrawl?\n\nYou can start with FireCrawl by trying our free trial, which includes 100 pages. This trial allows you to experience firsthand how FireCrawl can streamline your data collection and conversion processes. Sign up and begin transforming web content into LLM-ready data today!\n\nWho can benefit from using FireCrawl?\n\nFireCrawl is tailored for LLM engineers, data scientists, AI researchers, and developers looking to harness web data for training machine learning models, market research, content aggregation, and more. 
It simplifies the data preparation process, allowing professionals to focus on insights and model development.\n\n[🔥](/)\n\n© A product by Mendable.ai - All rights reserved.\n\n[Twitter](https://twitter.com/mendableai)\n[GitHub](https://github.com/sideguide)\n[Discord](https://discord.gg/gSmWdAkdwd)\n\nBacked by![Y Combinator Logo](/images/yc.svg)\n\n![SOC 2 Type II](/soc2type2badge.png)\n\n###### Company\n\n* [About us](#0)\n \n* [Diversity & Inclusion](#0)\n \n* [Blog](#0)\n \n* [Careers](#0)\n \n* [Financial statements](#0)\n \n\n###### Resources\n\n* [Community](#0)\n \n* [Terms of service](#0)\n \n* [Collaboration features](#0)\n \n\n###### Legals\n\n* [Refund policy](#0)\n \n* [Terms & Conditions](#0)\n \n* [Privacy policy](#0)\n \n* [Brand Kit](#0)', metadata={'title': 'Home - FireCrawl', 'description': 'FireCrawl crawls and converts any website into clean markdown.', 'language': None, 'sourceURL': 'https://firecrawl.dev'})] ``` You can also pass `params` to the loader. This is a dictionary of options to pass to the crawler. See the [FireCrawl API documentation](https://github.com/mendableai/firecrawl-py) for more information.
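As a minimal sketch of what such a crawl might look like end to end (assuming a `FireCrawlLoader` importable from `langchain_community.document_loaders` that accepts an `api_key`, a `url`, a `mode`, and a `params` dictionary, as the outputs above suggest; the API key and crawler options below are placeholders):

```
from langchain_community.document_loaders import FireCrawlLoader

# Placeholder API key and crawler options; `params` is passed through to the
# FireCrawl crawler (see the FireCrawl API documentation linked above).
loader = FireCrawlLoader(
    api_key="YOUR_FIRECRAWL_API_KEY",
    url="https://firecrawl.dev",
    mode="crawl",  # "crawl" follows accessible subpages; "scrape" fetches a single URL
    params={"crawlerOptions": {"limit": 5}},
)

docs = loader.load()
print(docs[0].metadata["sourceURL"])
```

The returned Documents carry each crawled page's markdown in `page_content` and fields such as `title`, `description`, and `sourceURL` in `metadata`, as in the outputs shown above.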
https://python.langchain.com/docs/integrations/document_loaders/odt/
## Open Document Format (ODT)

> The [Open Document Format for Office Applications (ODF)](https://en.wikipedia.org/wiki/OpenDocument), also known as `OpenDocument`, is an open file format for word processing documents, spreadsheets, presentations and graphics, using ZIP-compressed XML files. It was developed with the aim of providing an open, XML-based file format specification for office applications.

> The standard is developed and maintained by a technical committee in the Organization for the Advancement of Structured Information Standards (`OASIS`) consortium. It was based on the Sun Microsystems specification for OpenOffice.org XML, the default format for `OpenOffice.org` and `LibreOffice`. It was originally developed for `StarOffice` “to provide an open standard for office documents.”

The `UnstructuredODTLoader` is used to load `Open Office ODT` files.

```
from langchain_community.document_loaders import UnstructuredODTLoader
```

```
loader = UnstructuredODTLoader("example_data/fake.odt", mode="elements")
docs = loader.load()
docs[0]
```

```
Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.odt', 'filename': 'example_data/fake.odt', 'category': 'Title'})
```
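If you prefer one `Document` per file instead of per-element output, a minimal sketch (assuming `UnstructuredODTLoader` follows the usual Unstructured-loader convention of defaulting to `mode="single"`) would be:

```
from langchain_community.document_loaders import UnstructuredODTLoader

# Assumption: mode="single" (the usual default for Unstructured-based loaders)
# returns the whole file as one Document rather than one Document per element.
loader = UnstructuredODTLoader("example_data/fake.odt", mode="single")
docs = loader.load()

print(len(docs))                  # expected: 1
print(docs[0].page_content[:80])  # start of the combined document text
```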
https://python.langchain.com/docs/integrations/document_loaders/org_mode/
## Org-mode

> An [Org Mode document](https://en.wikipedia.org/wiki/Org-mode) is a document editing, formatting, and organizing mode, designed for notes, planning, and authoring within the free software text editor Emacs.

## `UnstructuredOrgModeLoader`

You can load data from Org-mode files with `UnstructuredOrgModeLoader` using the following workflow.

```
from langchain_community.document_loaders import UnstructuredOrgModeLoader
```

```
loader = UnstructuredOrgModeLoader(file_path="example_data/README.org", mode="elements")
docs = loader.load()
```

```
page_content='Example Docs' metadata={'source': 'example_data/README.org', 'filename': 'README.org', 'file_directory': 'example_data', 'filetype': 'text/org', 'page_number': 1, 'category': 'Title'}
```
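Since `mode="elements"` yields one `Document` per structural element, each tagged with a `category` in its metadata (as in the output above), a small follow-up sketch for pulling out just the titles might look like this:

```
from langchain_community.document_loaders import UnstructuredOrgModeLoader

loader = UnstructuredOrgModeLoader(file_path="example_data/README.org", mode="elements")
docs = loader.load()

# Keep only the elements tagged as titles (see the 'category' metadata key above).
titles = [doc for doc in docs if doc.metadata.get("category") == "Title"]
for doc in titles:
    print(doc.page_content)
```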
https://python.langchain.com/docs/integrations/document_loaders/oracleadb_loader/
Oracle Autonomous Database is a cloud database that uses machine learning to automate database tuning, security, backups, updates, and other routine management tasks traditionally performed by DBAs. This notebook covers how to load documents from Oracle Autonomous Database; the loader supports connecting with either a connection string or a TNS configuration.

With mutual TLS authentication (mTLS), wallet\_location and wallet\_password are required to create the connection. The connection can be created by providing either a connection string or TNS configuration details.

```
from langchain_community.document_loaders import OracleAutonomousDatabaseLoader

# `s` is assumed to be a user-defined settings module holding the credentials,
# wallet paths, and connection details used below.
SQL_QUERY = "select prod_id, time_id from sh.costs fetch first 5 rows only"

# mTLS connection via TNS name
doc_loader_1 = OracleAutonomousDatabaseLoader(
    query=SQL_QUERY,
    user=s.USERNAME,
    password=s.PASSWORD,
    schema=s.SCHEMA,
    config_dir=s.CONFIG_DIR,
    wallet_location=s.WALLET_LOCATION,
    wallet_password=s.PASSWORD,
    tns_name=s.TNS_NAME,
)
doc_1 = doc_loader_1.load()

# mTLS connection via connection string
doc_loader_2 = OracleAutonomousDatabaseLoader(
    query=SQL_QUERY,
    user=s.USERNAME,
    password=s.PASSWORD,
    schema=s.SCHEMA,
    connection_string=s.CONNECTION_STRING,
    wallet_location=s.WALLET_LOCATION,
    wallet_password=s.PASSWORD,
)
doc_2 = doc_loader_2.load()
```

With TLS authentication, wallet\_location and wallet\_password are not required.

```
# TLS connection via TNS name
doc_loader_3 = OracleAutonomousDatabaseLoader(
    query=SQL_QUERY,
    user=s.USERNAME,
    password=s.PASSWORD,
    schema=s.SCHEMA,
    config_dir=s.CONFIG_DIR,
    tns_name=s.TNS_NAME,
)
doc_3 = doc_loader_3.load()

# TLS connection via connection string
doc_loader_4 = OracleAutonomousDatabaseLoader(
    query=SQL_QUERY,
    user=s.USERNAME,
    password=s.PASSWORD,
    schema=s.SCHEMA,
    connection_string=s.CONNECTION_STRING,
)
doc_4 = doc_loader_4.load()
```
https://python.langchain.com/docs/integrations/document_loaders/open_city_data/
That provides you with the `dataset identifier`. Use the dataset identifier to grab specific tables for a given city\_id (`data.sfgov.org`) - ``` WARNING:root:Requests made without an app_token will be subject to strict throttling limits. ``` ``` {'pdid': '4133422003074', 'incidntnum': '041334220', 'incident_code': '03074', 'category': 'ROBBERY', 'descript': 'ROBBERY, BODILY FORCE', 'dayofweek': 'Monday', 'date': '2004-11-22T00:00:00.000', 'time': '17:50', 'pddistrict': 'INGLESIDE', 'resolution': 'NONE', 'address': 'GENEVA AV / SANTOS ST', 'x': '-122.420084075249', 'y': '37.7083109744362', 'location': {'type': 'Point', 'coordinates': [-122.420084075249, 37.7083109744362]}, ':@computed_region_26cr_cadq': '9', ':@computed_region_rxqg_mtj9': '8', ':@computed_region_bh8s_q3mv': '309'} ```
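For reference, here is a minimal sketch of the kind of loader call that produces records like the one above (assuming the `OpenCityDataLoader` from `langchain_community.document_loaders` takes a `city_id`, a `dataset_id`, and a `limit`, and that each record is stored as the string form of a Python dict in `page_content`, as the output suggests; the dataset identifier below is a placeholder for the one you looked up):

```
from ast import literal_eval

from langchain_community.document_loaders import OpenCityDataLoader

# "data.sfgov.org" is the city_id mentioned above; the dataset_id is a
# placeholder Socrata dataset identifier.
loader = OpenCityDataLoader(city_id="data.sfgov.org", dataset_id="tmnf-yvry", limit=100)

docs = loader.load()
# Parse one record back into a Python dict for inspection.
record = literal_eval(docs[0].page_content)
print(record["category"], record["date"])
```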
https://python.langchain.com/docs/integrations/document_loaders/pandas_dataframe/
``` [Document(page_content='Nationals', metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}), Document(page_content='Reds', metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}), Document(page_content='Yankees', metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}), Document(page_content='Giants', metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}), Document(page_content='Braves', metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}), Document(page_content='Athletics', metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94}), Document(page_content='Rangers', metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93}), Document(page_content='Orioles', metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93}), Document(page_content='Rays', metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90}), Document(page_content='Angels', metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89}), Document(page_content='Tigers', metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88}), Document(page_content='Cardinals', metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88}), Document(page_content='Dodgers', metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86}), Document(page_content='White Sox', metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85}), Document(page_content='Brewers', metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83}), Document(page_content='Phillies', metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81}), Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}), Document(page_content='Pirates', metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}), Document(page_content='Padres', metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}), Document(page_content='Mariners', metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}), Document(page_content='Mets', metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}), Document(page_content='Blue Jays', metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}), Document(page_content='Royals', metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72}), Document(page_content='Marlins', metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69}), Document(page_content='Red Sox', metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69}), Document(page_content='Indians', metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68}), Document(page_content='Twins', metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66}), Document(page_content='Rockies', metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64}), Document(page_content='Cubs', metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61}), Document(page_content='Astros', metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55})] ``` ``` page_content='Nationals' metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}page_content='Reds' metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}page_content='Yankees' metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}page_content='Giants' metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}page_content='Braves' metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}page_content='Athletics' metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94}page_content='Rangers' metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93}page_content='Orioles' metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93}page_content='Rays' metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90}page_content='Angels' metadata={' "Payroll (millions)"': 154.49, ' 
"Wins"': 89}page_content='Tigers' metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88}page_content='Cardinals' metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88}page_content='Dodgers' metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86}page_content='White Sox' metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85}page_content='Brewers' metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83}page_content='Phillies' metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81}page_content='Diamondbacks' metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}page_content='Pirates' metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}page_content='Padres' metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}page_content='Mariners' metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}page_content='Mets' metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}page_content='Blue Jays' metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}page_content='Royals' metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72}page_content='Marlins' metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69}page_content='Red Sox' metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69}page_content='Indians' metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68}page_content='Twins' metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66}page_content='Rockies' metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64}page_content='Cubs' metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61}page_content='Astros' metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55} ```
[Document(page_content='Nationals', metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}), Document(page_content='Reds', metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}), Document(page_content='Yankees', metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}), Document(page_content='Giants', metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}), Document(page_content='Braves', metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}), Document(page_content='Athletics', metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94}), Document(page_content='Rangers', metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93}), Document(page_content='Orioles', metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93}), Document(page_content='Rays', metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90}), Document(page_content='Angels', metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89}), Document(page_content='Tigers', metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88}), Document(page_content='Cardinals', metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88}), Document(page_content='Dodgers', metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86}), Document(page_content='White Sox', metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85}), Document(page_content='Brewers', metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83}), Document(page_content='Phillies', metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81}), Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}), Document(page_content='Pirates', metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}), Document(page_content='Padres', metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}), Document(page_content='Mariners', metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}), Document(page_content='Mets', metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}), Document(page_content='Blue Jays', metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}), Document(page_content='Royals', metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72}), Document(page_content='Marlins', metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69}), Document(page_content='Red Sox', metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69}), Document(page_content='Indians', metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68}), Document(page_content='Twins', metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66}), Document(page_content='Rockies', metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64}), Document(page_content='Cubs', metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61}), Document(page_content='Astros', metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55})] page_content='Nationals' metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98} page_content='Reds' metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97} page_content='Yankees' metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95} page_content='Giants' metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94} page_content='Braves' metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94} page_content='Athletics' metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94} page_content='Rangers' metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93} page_content='Orioles' metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93} page_content='Rays' metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90} page_content='Angels' metadata={' "Payroll (millions)"': 154.49, ' 
"Wins"': 89} page_content='Tigers' metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88} page_content='Cardinals' metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88} page_content='Dodgers' metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86} page_content='White Sox' metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85} page_content='Brewers' metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83} page_content='Phillies' metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81} page_content='Diamondbacks' metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81} page_content='Pirates' metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79} page_content='Padres' metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76} page_content='Mariners' metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75} page_content='Mets' metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74} page_content='Blue Jays' metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73} page_content='Royals' metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72} page_content='Marlins' metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69} page_content='Red Sox' metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69} page_content='Indians' metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68} page_content='Twins' metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66} page_content='Rockies' metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64} page_content='Cubs' metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61} page_content='Astros' metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55}
https://python.langchain.com/docs/integrations/document_loaders/polars_dataframe/
``` [Document(page_content='Nationals', metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}), Document(page_content='Reds', metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}), Document(page_content='Yankees', metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}), Document(page_content='Giants', metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}), Document(page_content='Braves', metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}), Document(page_content='Athletics', metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94}), Document(page_content='Rangers', metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93}), Document(page_content='Orioles', metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93}), Document(page_content='Rays', metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90}), Document(page_content='Angels', metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89}), Document(page_content='Tigers', metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88}), Document(page_content='Cardinals', metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88}), Document(page_content='Dodgers', metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86}), Document(page_content='White Sox', metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85}), Document(page_content='Brewers', metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83}), Document(page_content='Phillies', metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81}), Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}), Document(page_content='Pirates', metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}), Document(page_content='Padres', metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}), Document(page_content='Mariners', metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}), Document(page_content='Mets', metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}), Document(page_content='Blue Jays', metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}), Document(page_content='Royals', metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72}), Document(page_content='Marlins', metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69}), Document(page_content='Red Sox', metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69}), Document(page_content='Indians', metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68}), Document(page_content='Twins', metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66}), Document(page_content='Rockies', metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64}), Document(page_content='Cubs', metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61}), Document(page_content='Astros', metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55})] ``` ``` page_content='Nationals' metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}page_content='Reds' metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}page_content='Yankees' metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}page_content='Giants' metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}page_content='Braves' metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}page_content='Athletics' metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94}page_content='Rangers' metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93}page_content='Orioles' metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93}page_content='Rays' metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90}page_content='Angels' metadata={' "Payroll (millions)"': 154.49, ' 
"Wins"': 89}page_content='Tigers' metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88}page_content='Cardinals' metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88}page_content='Dodgers' metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86}page_content='White Sox' metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85}page_content='Brewers' metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83}page_content='Phillies' metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81}page_content='Diamondbacks' metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}page_content='Pirates' metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}page_content='Padres' metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}page_content='Mariners' metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}page_content='Mets' metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}page_content='Blue Jays' metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}page_content='Royals' metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72}page_content='Marlins' metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69}page_content='Red Sox' metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69}page_content='Indians' metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68}page_content='Twins' metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66}page_content='Rockies' metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64}page_content='Cubs' metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61}page_content='Astros' metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55} ```
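The Polars page shows the same result. A minimal sketch of the analogous call, assuming a `PolarsDataFrameLoader` and the same illustrative CSV as in the pandas sketch above:

```
import polars as pl

from langchain_community.document_loaders import PolarsDataFrameLoader

df = pl.read_csv("mlb_teams_2012.csv")  # assumed sample file

# The chosen column becomes page_content; the remaining columns become metadata
loader = PolarsDataFrameLoader(df, page_content_column="Team")
docs = loader.load()
```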
https://python.langchain.com/docs/integrations/document_loaders/pubmed/
[PubMed®](https://pubmed.ncbi.nlm.nih.gov/) by `The National Center for Biotechnology Information, National Library of Medicine` comprises more than 35 million citations for biomedical literature from `MEDLINE`, life science journals, and online books. Citations may include links to full text content from `PubMed Central` and publisher web sites. ``` {'uid': '37548997', 'Title': 'Performance of ChatGPT on the Situational Judgement Test-A Professional Dilemmas-Based Examination for Doctors in the United Kingdom.', 'Published': '2023-08-07', 'Copyright Information': '©Robin J Borchert, Charlotte R Hickman, Jack Pepys, Timothy J Sadler. Originally published in JMIR Medical Education (https://mededu.jmir.org), 07.08.2023.'} ``` ``` "BACKGROUND: ChatGPT is a large language model that has performed well on professional examinations in the fields of medicine, law, and business. However, it is unclear how ChatGPT would perform on an examination assessing professionalism and situational judgement for doctors.\nOBJECTIVE: We evaluated the performance of ChatGPT on the Situational Judgement Test (SJT): a national examination taken by all final-year medical students in the United Kingdom. This examination is designed to assess attributes such as communication, teamwork, patient safety, prioritization skills, professionalism, and ethics.\nMETHODS: All questions from the UK Foundation Programme Office's (UKFPO's) 2023 SJT practice examination were inputted into ChatGPT. For each question, ChatGPT's answers and rationales were recorded and assessed on the basis of the official UK Foundation Programme Office scoring template. Questions were categorized into domains of Good Medical Practice on the basis of the domains referenced in the rationales provided in the scoring sheet. Questions without clear domain links were screened by reviewers and assigned one or multiple domains. ChatGPT's overall performance, as well as its performance across the domains of Good Medical Practice, was evaluated.\nRESULTS: Overall, ChatGPT performed well, scoring 76% on the SJT but scoring full marks on only a few questions (9%), which may reflect possible flaws in ChatGPT's situational judgement or inconsistencies in the reasoning across questions (or both) in the examination itself. ChatGPT demonstrated consistent performance across the 4 outlined domains in Good Medical Practice for doctors.\nCONCLUSIONS: Further research is needed to understand the potential applications of large language models, such as ChatGPT, in medical education for standardizing questions and providing consistent rationales for examinations assessing professionalism and ethics." ```
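The metadata and abstract shown above come from loading PubMed citations. A minimal sketch of such a call, assuming the `PubMedLoader` class from `langchain_community`; the query string and `load_max_docs` value are placeholders:

```
from langchain_community.document_loaders import PubMedLoader

# Each returned Document carries the abstract as page_content and the citation
# fields (uid, Title, Published, Copyright Information) as metadata.
loader = PubMedLoader("chatgpt", load_max_docs=3)
docs = loader.load()

print(docs[0].metadata)
print(docs[0].page_content)
```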
https://python.langchain.com/docs/integrations/document_loaders/psychic/
## Psychic

This notebook covers how to load documents from `Psychic`. See [here](https://python.langchain.com/docs/integrations/providers/psychic/) for more details.

## Prerequisites[​](#prerequisites "Direct link to Prerequisites")

1. Follow the Quick Start section in [this document](https://python.langchain.com/docs/integrations/providers/psychic/)
2. Log into the [Psychic dashboard](https://dashboard.psychic.dev/) and get your secret key
3. Install the frontend React library into your web app and have a user authenticate a connection. The connection will be created using the connection id that you specify.

## Loading documents[​](#loading-documents "Direct link to Loading documents")

Use the `PsychicLoader` class to load documents from a connection. Each connection has a connector id (corresponding to the SaaS app that was connected) and a connection id (which you passed in to the frontend library).

```
# Uncomment this to install psychicapi if you don't already have it installed
!poetry run pip -q install psychicapi langchain-chroma
```

```
[notice] A new release of pip is available: 23.0.1 -> 23.1.2
[notice] To update, run: pip install --upgrade pip
```

```
from langchain_community.document_loaders import PsychicLoader
from psychicapi import ConnectorId

# Create a document loader for google drive. We can also load from other connectors by setting the connector_id to the appropriate value e.g. ConnectorId.notion.value
# This loader uses our test credentials
google_drive_loader = PsychicLoader(
    api_key="7ddb61c1-8b6a-4d31-a58e-30d1c9ea480e",
    connector_id=ConnectorId.gdrive.value,
    connection_id="google-test",
)
documents = google_drive_loader.load()
```

## Converting the docs to embeddings[​](#converting-the-docs-to-embeddings "Direct link to Converting the docs to embeddings")

We can now convert these documents into embeddings and store them in a vector database like Chroma.

```
from langchain.chains import RetrievalQAWithSourcesChain
from langchain_chroma import Chroma
from langchain_openai import OpenAI, OpenAIEmbeddings
from langchain_text_splitters import CharacterTextSplitter
```

```
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(texts, embeddings)
chain = RetrievalQAWithSourcesChain.from_chain_type(
    OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever()
)
chain({"question": "what is psychic?"}, return_only_outputs=True)
```
https://python.langchain.com/docs/integrations/document_loaders/pyspark_dataframe/
``` Setting default log level to "WARN".To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).23/05/31 14:08:33 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable ``` ``` [Document(page_content='Nationals', metadata={' "Payroll (millions)"': ' 81.34', ' "Wins"': ' 98'}), Document(page_content='Reds', metadata={' "Payroll (millions)"': ' 82.20', ' "Wins"': ' 97'}), Document(page_content='Yankees', metadata={' "Payroll (millions)"': ' 197.96', ' "Wins"': ' 95'}), Document(page_content='Giants', metadata={' "Payroll (millions)"': ' 117.62', ' "Wins"': ' 94'}), Document(page_content='Braves', metadata={' "Payroll (millions)"': ' 83.31', ' "Wins"': ' 94'}), Document(page_content='Athletics', metadata={' "Payroll (millions)"': ' 55.37', ' "Wins"': ' 94'}), Document(page_content='Rangers', metadata={' "Payroll (millions)"': ' 120.51', ' "Wins"': ' 93'}), Document(page_content='Orioles', metadata={' "Payroll (millions)"': ' 81.43', ' "Wins"': ' 93'}), Document(page_content='Rays', metadata={' "Payroll (millions)"': ' 64.17', ' "Wins"': ' 90'}), Document(page_content='Angels', metadata={' "Payroll (millions)"': ' 154.49', ' "Wins"': ' 89'}), Document(page_content='Tigers', metadata={' "Payroll (millions)"': ' 132.30', ' "Wins"': ' 88'}), Document(page_content='Cardinals', metadata={' "Payroll (millions)"': ' 110.30', ' "Wins"': ' 88'}), Document(page_content='Dodgers', metadata={' "Payroll (millions)"': ' 95.14', ' "Wins"': ' 86'}), Document(page_content='White Sox', metadata={' "Payroll (millions)"': ' 96.92', ' "Wins"': ' 85'}), Document(page_content='Brewers', metadata={' "Payroll (millions)"': ' 97.65', ' "Wins"': ' 83'}), Document(page_content='Phillies', metadata={' "Payroll (millions)"': ' 174.54', ' "Wins"': ' 81'}), Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': ' 74.28', ' "Wins"': ' 81'}), Document(page_content='Pirates', metadata={' "Payroll (millions)"': ' 63.43', ' "Wins"': ' 79'}), Document(page_content='Padres', metadata={' "Payroll (millions)"': ' 55.24', ' "Wins"': ' 76'}), Document(page_content='Mariners', metadata={' "Payroll (millions)"': ' 81.97', ' "Wins"': ' 75'}), Document(page_content='Mets', metadata={' "Payroll (millions)"': ' 93.35', ' "Wins"': ' 74'}), Document(page_content='Blue Jays', metadata={' "Payroll (millions)"': ' 75.48', ' "Wins"': ' 73'}), Document(page_content='Royals', metadata={' "Payroll (millions)"': ' 60.91', ' "Wins"': ' 72'}), Document(page_content='Marlins', metadata={' "Payroll (millions)"': ' 118.07', ' "Wins"': ' 69'}), Document(page_content='Red Sox', metadata={' "Payroll (millions)"': ' 173.18', ' "Wins"': ' 69'}), Document(page_content='Indians', metadata={' "Payroll (millions)"': ' 78.43', ' "Wins"': ' 68'}), Document(page_content='Twins', metadata={' "Payroll (millions)"': ' 94.08', ' "Wins"': ' 66'}), Document(page_content='Rockies', metadata={' "Payroll (millions)"': ' 78.06', ' "Wins"': ' 64'}), Document(page_content='Cubs', metadata={' "Payroll (millions)"': ' 88.19', ' "Wins"': ' 61'}), Document(page_content='Astros', metadata={' "Payroll (millions)"': ' 60.65', ' "Wins"': ' 55'})] ```
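The document list above is the result of wrapping a Spark DataFrame. A minimal sketch, assuming `PySparkDataFrameLoader` and the same illustrative CSV as in the earlier DataFrame examples; note that with Spark the metadata values come back as strings, as in the output above:

```
from pyspark.sql import SparkSession

from langchain_community.document_loaders import PySparkDataFrameLoader

spark = SparkSession.builder.getOrCreate()
df = spark.read.csv("mlb_teams_2012.csv", header=True)  # assumed sample file

# The chosen column becomes page_content; the remaining columns become metadata
loader = PySparkDataFrameLoader(spark, df, page_content_column="Team")
docs = loader.load()
```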
https://python.langchain.com/docs/integrations/document_loaders/pebblo/
## Pebblo Safe DocumentLoader

> [Pebblo](https://github.com/daxa-ai/pebblo) enables developers to safely load data and promote their Gen AI app to deployment without worrying about the organization’s compliance and security requirements. The project identifies semantic topics and entities found in the loaded data and summarizes them on the UI or a PDF report.

Pebblo has two components.

1. Pebblo Safe DocumentLoader for Langchain
2. Pebblo Daemon

This document describes how to augment your existing Langchain DocumentLoader with the Pebblo Safe DocumentLoader to get deep data visibility on the types of Topics and Entities ingested into the Gen-AI Langchain application. For details on `Pebblo Daemon` see this [pebblo daemon](https://daxa-ai.github.io/pebblo-docs/daemon.html) document.

The Pebblo SafeLoader enables safe data ingestion for a Langchain `DocumentLoader`. This is done by wrapping the document loader call with `Pebblo Safe DocumentLoader`.

#### How to Pebblo enable Document Loading?[​](#how-to-pebblo-enable-document-loading "Direct link to How to Pebblo enable Document Loading?")

Assume a Langchain RAG application snippet using `CSVLoader` to read a CSV document for inference. Here is the snippet of document loading using `CSVLoader`.

```
from langchain.document_loaders.csv_loader import CSVLoader

loader = CSVLoader("data/corp_sens_data.csv")
documents = loader.load()
print(documents)
```

The Pebblo SafeLoader can be enabled with a few lines of code changed in the above snippet.

```
from langchain.document_loaders.csv_loader import CSVLoader
from langchain_community.document_loaders import PebbloSafeLoader

loader = PebbloSafeLoader(
    CSVLoader("data/corp_sens_data.csv"),
    name="acme-corp-rag-1",  # App name (Mandatory)
    owner="Joe Smith",  # Owner (Optional)
    description="Support productivity RAG application",  # Description (Optional)
)
documents = loader.load()
print(documents)
```

### Send semantic topics and identities to Pebblo cloud server[​](#send-semantic-topics-and-identities-to-pebblo-cloud-server "Direct link to Send semantic topics and identities to Pebblo cloud server")

To send semantic data to pebblo-cloud, pass the API key to `PebbloSafeLoader` as an argument or, alternatively, put the API key in the `PEBBLO_API_KEY` environment variable.

```
from langchain.document_loaders.csv_loader import CSVLoader
from langchain_community.document_loaders import PebbloSafeLoader

loader = PebbloSafeLoader(
    CSVLoader("data/corp_sens_data.csv"),
    name="acme-corp-rag-1",  # App name (Mandatory)
    owner="Joe Smith",  # Owner (Optional)
    description="Support productivity RAG application",  # Description (Optional)
    api_key="my-api-key",  # API key (Optional, can be set in the environment variable PEBBLO_API_KEY)
)
documents = loader.load()
print(documents)
```
https://python.langchain.com/docs/integrations/document_loaders/quip/
A loader for `Quip` docs.

Specify a list of `folder_ids` and/or `thread_ids` to load the corresponding docs into Document objects. If both are specified, the loader gets all `thread_ids` belonging to the folders in `folder_ids`, combines them with the passed `thread_ids`, and returns the union of both sets. You can also set `include_all_folders` to `True`; this will fetch the group folder ids as well.

You can also specify a boolean `include_attachments` to include attachments. This is set to `False` by default; if set to `True`, all attachments will be downloaded and QuipLoader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: `PDF`, `PNG`, `JPEG/JPG`, `SVG`, `Word` and `Excel`. You can also specify a boolean `include_comments` to include comments in the document. This is set to `False` by default; if set to `True`, all comments in the document will be fetched and QuipLoader will add them to the Document object.

Before using QuipLoader, make sure you have the latest version of the quip-api package installed (an example install command is sketched after the snippet below):

```
from langchain_community.document_loaders.quip import QuipLoader

loader = QuipLoader(
    api_url="https://platform.quip.com", access_token="change_me", request_timeout=60
)
documents = loader.load(
    folder_ids={"123", "456"},
    thread_ids={"abc", "efg"},
    include_attachments=False,
    include_comments=False,
)
```
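As referenced above, the client package can be installed first. The package name is taken from the prose and the version pin is left to you; a minimal sketch for a notebook environment:

```
%pip install --upgrade --quiet quip-api
```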
https://python.langchain.com/docs/integrations/document_loaders/readthedocs_documentation/
## ReadTheDocs Documentation

> [Read the Docs](https://readthedocs.org/) is an open-sourced free software documentation hosting platform. It generates documentation written with the `Sphinx` documentation generator.

This notebook covers how to load content from HTML that was generated as part of a `Read-The-Docs` build. For an example of this in the wild, see [here](https://github.com/langchain-ai/chat-langchain).

This assumes that the HTML has already been scraped into a folder. This can be done by uncommenting and running the following command:

```
%pip install --upgrade --quiet beautifulsoup4
```

```
#!wget -r -A.html -P rtdocs https://python.langchain.com/en/latest/
```

```
from langchain_community.document_loaders import ReadTheDocsLoader
```

```
loader = ReadTheDocsLoader("rtdocs", features="html.parser")
```
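With the loader pointed at the scraped folder, building the documents is a single `load()` call; a minimal sketch (the `rtdocs` path matches the `wget` target above):

```
docs = loader.load()
len(docs)
```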
https://python.langchain.com/docs/integrations/document_loaders/recursive_url/
## Recursive URL

We may want to load all URLs under a root directory.

For example, let’s look at the [Python 3.9 Document](https://docs.python.org/3.9/). This has many interesting child pages that we may want to read in bulk. Of course, the `WebBaseLoader` can load a list of pages. But, the challenge is traversing the tree of child pages and actually assembling that list!

We do this using the `RecursiveUrlLoader`. This also gives us the flexibility to exclude some children, customize the extractor, and more.

## Parameters

* url: str, the target url to crawl.
* exclude\_dirs: Optional\[str\], webpage directories to exclude.
* use\_async: Optional\[bool\], whether to use async requests; async requests are usually faster for large tasks. However, async disables the lazy loading feature (the function still works, but it is not lazy). By default, it is set to False.
* extractor: Optional\[Callable\[\[str\], str\]\], a function to extract the text of the document from the webpage. By default, it returns the page as it is; it is recommended to use tools like goose3 and beautifulsoup to extract the text.
* max\_depth: Optional\[int\] = None, the maximum depth to crawl. By default, it is set to 2. If you need to crawl the whole website, set it to a number that is large enough.
* timeout: Optional\[int\] = None, the timeout for each request, in seconds. By default, it is set to 10.
* prevent\_outside: Optional\[bool\] = None, whether to prevent crawling outside the root url. By default, it is set to True.

A short sketch using these parameters follows at the end of this section.

```
from langchain_community.document_loaders.recursive_url_loader import RecursiveUrlLoader
```

Let’s try a simple example.

```
from bs4 import BeautifulSoup as Soup

url = "https://docs.python.org/3.9/"
loader = RecursiveUrlLoader(
    url=url, max_depth=2, extractor=lambda x: Soup(x, "html.parser").text
)
docs = loader.load()
```

```
docs[0].page_content[:50]
```

```
'\n\n\n\n\nPython Frequently Asked Questions — Python 3.'
```

```
{'source': 'https://docs.python.org/3.9/library/index.html', 'title': 'The Python Standard Library — Python 3.9.17 documentation', 'language': None}
```

However, since it’s hard to perform a perfect filter, you may still see some irrelevant documents in the results. You can filter the returned documents yourself if needed; most of the time, the returned results are good enough.

Testing on LangChain docs.

```
url = "https://js.langchain.com/docs/modules/memory/integrations/"
loader = RecursiveUrlLoader(url=url)
docs = loader.load()
len(docs)
```
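As promised in the parameter list above, here is a short, hedged sketch that exercises the remaining options; the excluded path and numeric values are placeholders, and `exclude_dirs` is treated here as a sequence of URL prefixes to skip:

```
from langchain_community.document_loaders.recursive_url_loader import RecursiveUrlLoader

loader = RecursiveUrlLoader(
    url="https://docs.python.org/3.9/",
    max_depth=2,  # stop two levels below the root
    exclude_dirs=["https://docs.python.org/3.9/whatsnew/"],  # skip a subtree (illustrative)
    timeout=30,  # per-request timeout, in seconds
    prevent_outside=True,  # do not follow links outside the root url
    use_async=False,  # keep lazy loading available
)
docs = loader.load()
```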
https://python.langchain.com/docs/integrations/document_loaders/reddit/
## Reddit

This loader fetches the text from the Posts of Subreddits or Reddit users, using the `praw` Python package. Make a [Reddit Application](https://www.reddit.com/prefs/apps/) and initialize the loader with your Reddit API credentials.

```
from langchain_community.document_loaders import RedditPostsLoader

# load using 'subreddit' mode
loader = RedditPostsLoader(
    client_id="YOUR CLIENT ID",
    client_secret="YOUR CLIENT SECRET",
    user_agent="extractor by u/Master_Ocelot8179",
    categories=["new", "hot"],  # List of categories to load posts from
    mode="subreddit",
    search_queries=[
        "investing",
        "wallstreetbets",
    ],  # List of subreddits to load posts from
    number_posts=20,  # Default value is 10
)

# # or load using 'username' mode
# loader = RedditPostsLoader(
#     client_id="YOUR CLIENT ID",
#     client_secret="YOUR CLIENT SECRET",
#     user_agent="extractor by u/Master_Ocelot8179",
#     categories=['new', 'hot'],
#     mode = 'username',
#     search_queries=['ga3far', 'Master_Ocelot8179'],  # List of usernames to load posts from
#     number_posts=20
# )

# Note: Categories can be only of following value - "controversial" "hot" "new" "rising" "top"
```

```
[Document(page_content='Hello, I am not looking for investment advice. I will apply my own due diligence. However, I am interested if anyone knows as a UK resident how fees and exchange rate differences would impact performance?\n\nI am planning to create a pie of index funds (perhaps UK, US, europe) or find a fund with a good track record of long term growth at low rates. \n\nDoes anyone have any ideas?', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Long term retirement funds fees/exchange rate query', 'post_score': 1, 'post_id': '130pa6m', 'post_url': 'https://www.reddit.com/r/investing/comments/130pa6m/long_term_retirement_funds_feesexchange_rate_query/', 'post_author': Redditor(name='Badmanshiz')}), Document(page_content='I much prefer the Roth IRA and would rather rollover my 401k to that every year instead of keeping it in the limited 401k options. But if I rollover, will I be able to continue contributing to my 401k? Or will that close my account? I realize that there are tax implications of doing this but I still think it is the better option.', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Is it possible to rollover my 401k every year?', 'post_score': 3, 'post_id': '130ja0h', 'post_url': 'https://www.reddit.com/r/investing/comments/130ja0h/is_it_possible_to_rollover_my_401k_every_year/', 'post_author': Redditor(name='AnCap_Catholic')}), Document(page_content='Have a general question? Want to offer some commentary on markets? Maybe you would just like to throw out a neat fact that doesn\'t warrant a self post? Feel free to post here! \n\nIf your question is "I have $10,000, what do I do?" or other "advice for my personal situation" questions, you should include relevant information, such as the following:\n\n* How old are you? What country do you live in? \n* Are you employed/making income? How much? \n* What are your objectives with this money? (Buy a house? Retirement savings?) \n* What is your time horizon? Do you need this money next month? Next 20yrs? \n* What is your risk tolerance? (Do you mind risking it at blackjack or do you need to know its 100% safe?) \n* What are you current holdings? (Do you already have exposure to specific funds and sectors? Any other assets?) \n* Any big debts (include interest rate) or expenses? \n* And any other relevant financial information will be useful to give you a proper answer. 
\n\nPlease consider consulting our FAQ first - https://www.reddit.com/r/investing/wiki/faq\nAnd our [side bar](https://www.reddit.com/r/investing/about/sidebar) also has useful resources. \n\nIf you are new to investing - please refer to Wiki - [Getting Started](https://www.reddit.com/r/investing/wiki/index/gettingstarted/)\n\nThe reading list in the wiki has a list of books ranging from light reading to advanced topics depending on your knowledge level. Link here - [Reading List](https://www.reddit.com/r/investing/wiki/readinglist)\n\nCheck the resources in the sidebar.\n\nBe aware that these answers are just opinions of Redditors and should be used as a starting point for your research. You should strongly consider seeing a registered investment adviser if you need professional support before making any financial decisions!', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Daily General Discussion and Advice Thread - April 27, 2023', 'post_score': 5, 'post_id': '130eszz', 'post_url': 'https://www.reddit.com/r/investing/comments/130eszz/daily_general_discussion_and_advice_thread_april/', 'post_author': Redditor(name='AutoModerator')}), Document(page_content="Based on recent news about salt battery advancements and the overall issues of lithium, I was wondering what would be feasible ways to invest into non-lithium based battery technologies? CATL is of course a choice, but the selection of brokers I currently have in my disposal don't provide HK stocks at all.", metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Investing in non-lithium battery technologies?', 'post_score': 2, 'post_id': '130d6qp', 'post_url': 'https://www.reddit.com/r/investing/comments/130d6qp/investing_in_nonlithium_battery_technologies/', 'post_author': Redditor(name='-manabreak')}), Document(page_content='Hello everyone,\n\nI would really like to invest in an ETF that follows spy or another big index, as I think this form of investment suits me best. \n\nThe problem is, that I live in Denmark where ETFs and funds are taxed annually on unrealised gains at quite a steep rate. This means that an ETF growing say 10% per year will only grow about 6%, which really ruins the long term effects of compounding interest.\n\nHowever stocks are only taxed on realised gains which is why they look more interesting to hold long term.\n\nI do not like the lack of diversification this brings, as I am looking to spend tonnes of time picking the right long term stocks.\n\nIt would be ideal to find a few stocks that over the long term somewhat follows the indexes. Does anyone have suggestions?\n\nI have looked at Nasdaq Inc. which quite closely follows Nasdaq 100. \n\nI really appreciate any help.', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Stocks that track an index', 'post_score': 7, 'post_id': '130auvj', 'post_url': 'https://www.reddit.com/r/investing/comments/130auvj/stocks_that_track_an_index/', 'post_author': Redditor(name='LeAlbertP')})] ```
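The posts above are returned by calling the loader. A minimal usage sketch (assuming the `loader` configured above) that fetches the posts and inspects their metadata:

```
# Fetch the posts as LangChain Documents (assumes `loader` from the snippet above)
docs = loader.load()

# Each Document carries the post body in `page_content` and post details in `metadata`
for doc in docs:
    print(doc.metadata["post_title"], doc.metadata["post_url"])
```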
https://python.langchain.com/docs/integrations/document_loaders/roam/
## Roam

This notebook covers how to load documents from a Roam database. This takes a lot of inspiration from the example repo [here](https://github.com/JimmyLv/roam-qa). Export your dataset from Roam Research. You can do this by clicking on the three dots in the upper right hand corner and then clicking `Export`. When exporting, make sure to select the `Markdown & CSV` format option. This will produce a `.zip` file in your Downloads folder. Move the `.zip` file into this repository. Run the following command to unzip the zip file (replace the `Export...` with your own file name as needed). ``` unzip Roam-Export-1675782732639.zip -d Roam_DB ```
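Once unzipped, the export directory can be loaded with the Roam loader. A minimal sketch, assuming the `Roam_DB` target directory from the command above:

```
from langchain_community.document_loaders import RoamLoader

# Point the loader at the unzipped Roam export directory
loader = RoamLoader("Roam_DB")
docs = loader.load()
```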
https://python.langchain.com/docs/integrations/document_loaders/rockset/
## Rockset

> Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving high concurrency applications in the sub-100TB range (or larger than 100s of TBs with rollups).

This notebook demonstrates how to use Rockset as a document loader in langchain. To get started, make sure you have a Rockset account and an API key available.

## Setting up the environment[​](#setting-up-the-environment "Direct link to Setting up the environment")

1. Go to the [Rockset console](https://console.rockset.com/apikeys) and get an API key. Find your API region from the [API reference](https://rockset.com/docs/rest-api/#introduction). For the purpose of this notebook, we will assume you’re using Rockset from `Oregon(us-west-2)`.
2. Set the environment variable `ROCKSET_API_KEY`.
3. Install the Rockset python client, which will be used by langchain to interact with the Rockset database.

``` %pip install --upgrade --quiet rockset ```

## Loading Documents

The Rockset integration with LangChain allows you to load documents from Rockset collections with SQL queries. In order to do this you must construct a `RocksetLoader` object. Here is an example snippet that initializes a `RocksetLoader`.

```
from langchain_community.document_loaders import RocksetLoader
from rockset import Regions, RocksetClient, models

loader = RocksetLoader(
    RocksetClient(Regions.usw2a1, "<api key>"),
    models.QueryRequestSql(query="SELECT * FROM langchain_demo LIMIT 3"),  # SQL query
    ["text"],  # content columns
    metadata_keys=["id", "date"],  # metadata columns
)
```

Here, you can see that the following query is run:

``` SELECT * FROM langchain_demo LIMIT 3 ```

The `text` column in the collection is used as the page content, and the record’s `id` and `date` columns are used as metadata (if you do not pass anything into `metadata_keys`, the whole Rockset document will be used as metadata).

To execute the query, you can either iterate over the resulting `Document`s lazily or load them all at once (a minimal sketch of both calls appears at the end of this page). Here is an example response of `loader.load()`:

```
[
    Document(
        page_content="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas a libero porta, dictum ipsum eget, hendrerit neque. Morbi blandit, ex ut suscipit viverra, enim velit tincidunt tellus, a tempor velit nunc et ex. Proin hendrerit odio nec convallis lobortis. Aenean in purus dolor. Vestibulum orci orci, laoreet eget magna in, commodo euismod justo.",
        metadata={"id": 83209, "date": "2022-11-13T18:26:45.000000Z"}
    ),
    Document(
        page_content="Integer at finibus odio. Nam sit amet enim cursus lacus gravida feugiat vestibulum sed libero. Aenean eleifend est quis elementum tincidunt. Curabitur sit amet ornare erat. Nulla id dolor ut magna volutpat sodales fringilla vel ipsum. Donec ultricies, lacus sed fermentum dignissim, lorem elit aliquam ligula, sed suscipit sapien purus nec ligula.",
        metadata={"id": 89313, "date": "2022-11-13T18:28:53.000000Z"}
    ),
    Document(
        page_content="Morbi tortor enim, commodo id efficitur vitae, fringilla nec mi. Nullam molestie faucibus aliquet. Praesent a est facilisis, condimentum justo sit amet, viverra erat. Fusce volutpat nisi vel purus blandit, et facilisis felis accumsan. Phasellus luctus ligula ultrices tellus tempor hendrerit. Donec at ultricies leo.",
        metadata={"id": 87732, "date": "2022-11-13T18:49:04.000000Z"}
    )
]
```

## Using multiple columns as content[​](#using-multiple-columns-as-content "Direct link to Using multiple columns as content")

You can choose to use multiple columns as content:

```
from langchain_community.document_loaders import RocksetLoader
from rockset import Regions, RocksetClient, models

loader = RocksetLoader(
    RocksetClient(Regions.usw2a1, "<api key>"),
    models.QueryRequestSql(query="SELECT * FROM langchain_demo WHERE id=38 LIMIT 1"),
    ["sentence1", "sentence2"],  # TWO content columns
)
```

Assuming the “sentence1” field is `"This is the first sentence."` and the “sentence2” field is `"This is the second sentence."`, the `page_content` of the resulting `Document` would be:

```
This is the first sentence.
This is the second sentence.
```

You can define your own function to join content columns by setting the `content_columns_joiner` argument in the `RocksetLoader` constructor. `content_columns_joiner` is a method that takes in a `List[Tuple[str, Any]]` as an argument, representing a list of tuples of (column name, column value). By default, this is a method that joins each column value with a new line. For example, if you wanted to join sentence1 and sentence2 with a space instead of a new line, you could set `content_columns_joiner` like so:

```
RocksetLoader(
    RocksetClient(Regions.usw2a1, "<api key>"),
    models.QueryRequestSql(query="SELECT * FROM langchain_demo WHERE id=38 LIMIT 1"),
    ["sentence1", "sentence2"],
    content_columns_joiner=lambda docs: " ".join(
        [doc[1] for doc in docs]
    ),  # join with a space instead of "\n"
)
```

The `page_content` of the resulting `Document` would be:

```
This is the first sentence. This is the second sentence.
```

Oftentimes you want to include the column name in the `page_content`. You can do that like this:

```
RocksetLoader(
    RocksetClient(Regions.usw2a1, "<api key>"),
    models.QueryRequestSql(query="SELECT * FROM langchain_demo WHERE id=38 LIMIT 1"),
    ["sentence1", "sentence2"],
    content_columns_joiner=lambda docs: "\n".join(
        [f"{doc[0]}: {doc[1]}" for doc in docs]
    ),
)
```

This would result in the following `page_content`:

```
sentence1: This is the first sentence.
sentence2: This is the second sentence.
```
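The page above mentions executing the query either as an iterator or all at once; a minimal sketch of both calls, assuming the standard `lazy_load()` / `load()` loader interface and the `loader` configured earlier:

```
# Stream Documents one at a time (iterator)
for doc in loader.lazy_load():
    print(doc.metadata)

# Or fetch all Documents at once as a list
docs = loader.load()
```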
https://python.langchain.com/docs/integrations/document_loaders/rspace/
## RSpace

This notebook shows how to use the RSpace document loader to import research notes and documents from RSpace Electronic Lab Notebook into Langchain pipelines. To start you’ll need an RSpace account and an API key. You can set up a free account at [https://community.researchspace.com](https://community.researchspace.com/) or use your institutional RSpace. You can get an RSpace API token from your account’s profile page.

``` %pip install --upgrade --quiet rspace_client ```

It’s best to store your RSpace API key as an environment variable.

``` RSPACE_API_KEY=<YOUR_KEY> ```

You’ll also need to set the URL of your RSpace installation, e.g.

``` RSPACE_URL=https://community.researchspace.com ```

If you use these exact environment variable names, they will be detected automatically.

``` from langchain_community.document_loaders.rspace import RSpaceLoader ```

You can import various items from RSpace:

* A single RSpace structured or basic document. This will map 1-1 to a Langchain document.
* A folder or notebook. All documents inside the notebook or folder are imported as Langchain documents.
* If you have PDF files in the RSpace Gallery, these can be imported individually as well. Under the hood, Langchain’s PDF loader will be used and this creates one Langchain document per PDF page.

```
## replace these ids with some from your own research notes.
## Make sure to use global ids (with the 2 character prefix). This helps the loader know which API calls to make
## to RSpace API.
rspace_ids = ["NB1932027", "FL1921314", "SD1932029", "GL1932384"]
for rs_id in rspace_ids:
    loader = RSpaceLoader(global_id=rs_id)
    docs = loader.load()
    for doc in docs:
        ## the name and ID are added to the 'source' metadata property.
        print(doc.metadata)
        print(doc.page_content[:500])
```

If you don’t want to use the environment variables as above, you can pass these into the RSpaceLoader:

```
loader = RSpaceLoader(
    global_id=rs_id, api_key="MY_API_KEY", url="https://my.researchspace.com"
)
```
https://python.langchain.com/docs/integrations/document_loaders/snowflake/
## Snowflake

This notebook goes over how to load documents from Snowflake.

```
import settings as s
from langchain_community.document_loaders import SnowflakeLoader

QUERY = "select text, survey_id from CLOUD_DATA_SOLUTIONS.HAPPY_OR_NOT.OPEN_FEEDBACK limit 10"
snowflake_loader = SnowflakeLoader(
    query=QUERY,
    user=s.SNOWFLAKE_USER,
    password=s.SNOWFLAKE_PASS,
    account=s.SNOWFLAKE_ACCOUNT,
    warehouse=s.SNOWFLAKE_WAREHOUSE,
    role=s.SNOWFLAKE_ROLE,
    database=s.SNOWFLAKE_DATABASE,
    schema=s.SNOWFLAKE_SCHEMA,
)
snowflake_documents = snowflake_loader.load()
print(snowflake_documents)
```

```
import settings as s
from langchain_community.document_loaders import SnowflakeLoader

QUERY = "select text, survey_id as source from CLOUD_DATA_SOLUTIONS.HAPPY_OR_NOT.OPEN_FEEDBACK limit 10"
snowflake_loader = SnowflakeLoader(
    query=QUERY,
    user=s.SNOWFLAKE_USER,
    password=s.SNOWFLAKE_PASS,
    account=s.SNOWFLAKE_ACCOUNT,
    warehouse=s.SNOWFLAKE_WAREHOUSE,
    role=s.SNOWFLAKE_ROLE,
    database=s.SNOWFLAKE_DATABASE,
    schema=s.SNOWFLAKE_SCHEMA,
    metadata_columns=["source"],
)
snowflake_documents = snowflake_loader.load()
print(snowflake_documents)
```
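Both snippets read credentials from a local settings module (`import settings as s`). A minimal, hypothetical `settings.py` illustrating the names they assume (all values are placeholders, not from the original page):

```
# settings.py -- hypothetical credentials module used as `import settings as s`
SNOWFLAKE_USER = "your_user"
SNOWFLAKE_PASS = "your_password"
SNOWFLAKE_ACCOUNT = "your_account_identifier"
SNOWFLAKE_WAREHOUSE = "your_warehouse"
SNOWFLAKE_ROLE = "your_role"
SNOWFLAKE_DATABASE = "CLOUD_DATA_SOLUTIONS"
SNOWFLAKE_SCHEMA = "HAPPY_OR_NOT"
```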
https://python.langchain.com/docs/integrations/document_loaders/subtitle/
## Subtitle

> [The SubRip file format](https://en.wikipedia.org/wiki/SubRip#SubRip_file_format) is described on the `Matroska` multimedia container format website as “perhaps the most basic of all subtitle formats.” `SubRip (SubRip Text)` files are named with the extension `.srt`, and contain formatted lines of plain text in groups separated by a blank line. Subtitles are numbered sequentially, starting at 1. The timecode format used is hours:minutes:seconds,milliseconds with time units fixed to two zero-padded digits and fractions fixed to three zero-padded digits (00:00:00,000). The fractional separator used is the comma, since the program was written in France.

This notebook covers how to load data from subtitle (`.srt`) files. Please download the [example .srt file from here](https://www.opensubtitles.org/en/subtitles/5575150/star-wars-the-clone-wars-crisis-at-the-heart-en).

``` %pip install --upgrade --quiet pysrt ```

``` from langchain_community.document_loaders import SRTLoader ```

```
loader = SRTLoader(
    "example_data/Star_Wars_The_Clone_Wars_S06E07_Crisis_at_the_Heart.srt"
)
docs = loader.load()
```

``` docs[0].page_content[:100] ```

``` '<i>Corruption discovered\nat the core of the Banking Clan!</i> <i>Reunited, Rush Clovis\nand Senator A' ```
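A quick sanity check on the load, assuming the `docs` list produced above:

```
# Inspect how many Documents were produced and the source metadata of the first one
print(len(docs), docs[0].metadata)
```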
https://python.langchain.com/docs/integrations/document_loaders/sitemap/
## Sitemap

Extending `WebBaseLoader`, `SitemapLoader` loads a sitemap from a given URL, and then scrapes and loads all pages in the sitemap, returning each page as a Document.

The scraping is done concurrently. There are reasonable limits to concurrent requests, defaulting to 2 per second. If you aren’t concerned about being a good citizen, control the server you’re scraping, or don’t care about the extra load, you can raise this limit. Note that while this will speed up the scraping process, it may cause the server to block you. Be careful!

``` %pip install --upgrade --quiet nest_asyncio ```

```
# fixes a bug with asyncio and jupyter
import nest_asyncio

nest_asyncio.apply()
```

``` from langchain_community.document_loaders.sitemap import SitemapLoader ```

```
sitemap_loader = SitemapLoader(web_path="https://api.python.langchain.com/sitemap.xml")
docs = sitemap_loader.load()
```

You can change the `requests_per_second` parameter to increase the max concurrent requests, and use `requests_kwargs` to pass keyword arguments when sending requests.

```
sitemap_loader.requests_per_second = 2
# Optional: avoid `[SSL: CERTIFICATE_VERIFY_FAILED]` issue
sitemap_loader.requests_kwargs = {"verify": False}
```

``` Document(page_content='\n\n\n\n\n\n\n\n\n\nLangChain Python API Reference Documentation.\n\n\nYou will be automatically redirected to the new location of this page.\n\n', metadata={'source': 'https://api.python.langchain.com/en/stable/', 'loc': 'https://api.python.langchain.com/en/stable/', 'lastmod': '2024-02-09T01:10:49.422114+00:00', 'changefreq': 'weekly', 'priority': '1'}) ```

## Filtering sitemap URLs[​](#filtering-sitemap-urls "Direct link to Filtering sitemap URLs")

Sitemaps can be massive files, with thousands of URLs. Often you don’t need every single one of them. You can filter the URLs by passing a list of strings or regex patterns to the `filter_urls` parameter. Only URLs that match one of the patterns will be loaded.

```
loader = SitemapLoader(
    web_path="https://api.python.langchain.com/sitemap.xml",
    filter_urls=["https://api.python.langchain.com/en/latest"],
)
documents = loader.load()
```

``` Document(page_content='\n\n\n\n\n\n\n\n\n\nLangChain Python API Reference Documentation.\n\n\nYou will be automatically redirected to the new location of this page.\n\n', metadata={'source': 'https://api.python.langchain.com/en/latest/', 'loc': 'https://api.python.langchain.com/en/latest/', 'lastmod': '2024-02-12T05:26:10.971077+00:00', 'changefreq': 'daily', 'priority': '0.9'}) ```

## Add custom scraping rules[​](#add-custom-scraping-rules "Direct link to Add custom scraping rules")

The `SitemapLoader` uses `beautifulsoup4` for the scraping process, and it scrapes every element on the page by default. The `SitemapLoader` constructor accepts a custom scraping function. This feature can be helpful to tailor the scraping process to your specific needs; for example, you might want to avoid scraping headers or navigation elements.

The following example shows how to develop and use a custom function to avoid navigation and header elements. Import the `beautifulsoup4` library and define the custom function.
``` pip install beautifulsoup4 ```

```
from bs4 import BeautifulSoup


def remove_nav_and_header_elements(content: BeautifulSoup) -> str:
    # Find all 'nav' and 'header' elements in the BeautifulSoup object
    nav_elements = content.find_all("nav")
    header_elements = content.find_all("header")

    # Remove each 'nav' and 'header' element from the BeautifulSoup object
    for element in nav_elements + header_elements:
        element.decompose()

    return str(content.get_text())
```

Add your custom function to the `SitemapLoader` object.

```
loader = SitemapLoader(
    "https://api.python.langchain.com/sitemap.xml",
    filter_urls=["https://api.python.langchain.com/en/latest/"],
    parsing_function=remove_nav_and_header_elements,
)
```

## Local Sitemap[​](#local-sitemap "Direct link to Local Sitemap")

The sitemap loader can also be used to load local files.

```
sitemap_loader = SitemapLoader(web_path="example_data/sitemap.xml", is_local=True)
docs = sitemap_loader.load()
```
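The filtering section above notes that `filter_urls` also accepts regex patterns. A minimal sketch (the pattern here is illustrative, not from the original page):

```
loader = SitemapLoader(
    web_path="https://api.python.langchain.com/sitemap.xml",
    # Keep only pages whose URL matches this regex (illustrative pattern)
    filter_urls=[r"https://api\.python\.langchain\.com/en/latest/.*"],
)
documents = loader.load()
```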
https://python.langchain.com/docs/integrations/document_loaders/stripe/
## Stripe

This notebook covers how to load data from the `Stripe REST API` into a format that can be ingested into LangChain, along with example usage for vectorization. The Stripe API requires an access token, which can be found inside of the Stripe dashboard. This document loader also requires a `resource` option which defines what data you want to load.

```
# Create a vectorstore retriever from the loader
# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more details
index = VectorstoreIndexCreator().from_loaders([stripe_loader])
stripe_doc_retriever = index.vectorstore.as_retriever()
```
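The snippet above assumes a `stripe_loader` has already been constructed and that `VectorstoreIndexCreator` has been imported. A minimal sketch of creating the loader (the `"charge"` resource and the environment-variable token are illustrative assumptions, not from the original page):

```
import os

from langchain_community.document_loaders import StripeLoader

# Resource name and token handling are illustrative assumptions
stripe_loader = StripeLoader("charge", access_token=os.environ["STRIPE_ACCESS_TOKEN"])
```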
https://python.langchain.com/docs/integrations/document_loaders/slack/
## Slack

This notebook covers how to load documents from a Zipfile generated from a `Slack` export. Export your Slack data. You can do this by going to your Workspace Management page and clicking the Import/Export option ({your\_slack\_domain}.slack.com/services/export). Then, choose the right date range and click `Start export`. Slack will send you an email and a DM when the export is ready. The download will produce a `.zip` file in your Downloads folder (or wherever your downloads can be found, depending on your OS configuration). Copy the path to the `.zip` file, and assign it as `LOCAL_ZIPFILE` below.

```
from langchain_community.document_loaders import SlackDirectoryLoader

# Optionally set your Slack URL. This will give you proper URLs in the docs sources.
SLACK_WORKSPACE_URL = "https://xxx.slack.com"
LOCAL_ZIPFILE = ""  # Paste the local path to your Slack zip file here.

loader = SlackDirectoryLoader(LOCAL_ZIPFILE, SLACK_WORKSPACE_URL)
```
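A minimal sketch of actually parsing the export, assuming the `loader` configured above:

```
# Parse the zip export into LangChain Documents
docs = loader.load()
print(len(docs))
```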
## RSS Feeds
This covers how to load HTML news articles from a list of RSS feed URLs into a document format that we can use downstream.

```
(next Rich)
04 August 2023
Rich Hickey
It is with a mixture of heartache and optimism that I announce today my (long planned) retirement from commercial software development, and my employment at Nubank. It’s been thrilling to see Clojure and Datomic successfully applied at scale.
I look forward to continuing to lead ongoing work maintaining and enhancing Clojure with Alex, Stu, Fogus and many others, as an independent developer once again. We have many useful things planned for 1.12 and beyond. The community remains friendly, mature and productive, and is taking Clojure into many interesting new domains.
I want to highlight and thank Nubank for their ongoing sponsorship of Alex, Fogus and the core team, as well as the Clojure community at large.
Stu will continue to lead the development of Datomic at Nubank, where the Datomic team grows and thrives. I’m particularly excited to see where the new free availability of Datomic will lead.
My time with Cognitect remains the highlight of my career. I have learned from absolutely everyone on our team, and am forever grateful to all for our interactions. There are too many people to thank here, but I must extend my sincerest appreciation and love to Stu and Justin for (repeatedly) taking a risk on me and my ideas, and for being the best of partners and friends, at all times fully embodying the notion of integrity. And of course to Alex Miller - who possesses in abundance many skills I lack, and without whose indomitable spirit, positivity and friendship Clojure would not have become what it did.
I have made many friends through Clojure and Cognitect, and I hope to nurture those friendships moving forward.
Retirement returns me to the freedom and independence I had when originally developing Clojure. The journey continues!
```

You can pass arguments to the `NewsURLLoader`, which it uses to load articles.

```
Error fetching or processing https://twitter.com/andrewmccalip/status/1687405505604734978, exception: You must `parse()` an article first!
Error processing entry https://twitter.com/andrewmccalip/status/1687405505604734978, exception: list index out of range
```

```
['nubank', 'alex', 'stu', 'taking', 'team', 'remains', 'rich', 'clojure', 'thank', 'planned', 'datomic']
```

```
'It’s been thrilling to see Clojure and Datomic successfully applied at scale.\nI look forward to continuing to lead ongoing work maintaining and enhancing Clojure with Alex, Stu, Fogus and many others, as an independent developer once again.\nThe community remains friendly, mature and productive, and is taking Clojure into many interesting new domains.\nI want to highlight and thank Nubank for their ongoing sponsorship of Alex, Fogus and the core team, as well as the Clojure community at large.\nStu will continue to lead the development of Datomic at Nubank, where the Datomic team grows and thrives.'
```

You can also use an OPML file such as a Feedly export. Pass in either a URL or the OPML contents.

```
Error fetching http://www.engadget.com/rss-full.xml, exception: Error fetching http://www.engadget.com/rss-full.xml, exception: document declared as us-ascii, but parsed as utf-8
```

```
'The electric vehicle startup Fisker made a splash in Huntington Beach last night, showing off a range of new EVs it plans to build alongside the Fisker Ocean, which is slowly beginning deliveries in Europe and the US.
With shades of Lotus circa 2010, it seems there\'s something for most tastes, with a powerful four-door GT, a versatile pickup truck, and an affordable electric city car.\n\n"We want the world to know that we have big plans and intend to move into several different segments, redefining each with our unique blend of design, innovation, and sustainability," said CEO Henrik Fisker.\n\nStarting with the cheapest, the Fisker PEAR—a cutesy acronym for "Personal Electric Automotive Revolution"—is said to use 35 percent fewer parts than other small EVs. Although it\'s a smaller car, the PEAR seats six thanks to front and rear bench seats. Oh, and it has a frunk, which the company is calling the "froot," something that will satisfy some British English speakers like Ars\' friend and motoring journalist Jonny Smith.\n\nBut most exciting is the price—starting at $29,900 and scheduled for 2025. Fisker plans to contract with Foxconn to build the PEAR in Lordstown, Ohio, meaning it would be eligible for federal tax incentives.\n\nAdvertisement\n\nThe Fisker Alaska is the company\'s pickup truck, built on a modified version of the platform used by the Ocean. It has an extendable cargo bed, which can be as little as 4.5 feet (1,371 mm) or as much as 9.2 feet (2,804 mm) long. Fisker claims it will be both the lightest EV pickup on sale and the most sustainable pickup truck in the world. Range will be an estimated 230–240 miles (370–386 km).\n\nThis, too, is slated for 2025, and also at a relatively affordable price, starting at $45,400. Fisker hopes to build this car in North America as well, although it isn\'t saying where that might take place.\n\nFinally, there\'s the Ronin, a four-door GT that bears more than a passing resemblance to the Fisker Karma, Henrik Fisker\'s 2012 creation. There\'s no price for this one, but Fisker says its all-wheel drive powertrain will boast 1,000 hp (745 kW) and will hit 60 mph from a standing start in two seconds—just about as fast as modern tires will allow. Expect a massive battery in this one, as Fisker says it\'s targeting a 600-mile (956 km) range.\n\n"Innovation and sustainability, along with design, are our three brand values. By 2027, we intend to produce the world’s first climate-neutral vehicle, and as our customers reinvent their relationships with mobility, we want to be a leader in software-defined transportation," Fisker said.'
```
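The code cells that produced the outputs above were dropped during extraction. A minimal sketch of the usual pattern follows; the feed URL is a placeholder, and passing `nlp=True` (which asks the underlying `NewsURLLoader` to extract keywords and a summary) is inferred from the outputs shown rather than copied from the original notebook:

```python
from langchain_community.document_loaders import RSSFeedLoader

urls = ["https://www.example.com/feed.xml"]  # placeholder feed URL

# nlp=True is forwarded to the underlying NewsURLLoader so that keywords and a
# summary end up in each document's metadata.
loader = RSSFeedLoader(urls=urls, nlp=True)
docs = loader.load()

print(docs[0].metadata.get("keywords"))
print(docs[0].metadata.get("summary"))
```

For the OPML variant mentioned above, the loader can be constructed with the OPML contents (or a URL to them) via an `opml=` argument in place of `urls=`; verify against the current API before relying on this.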
## Source Code

This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document.

This approach can potentially improve the accuracy of QA models over source code.

The supported languages for code parsing are:

* C (\*)
* C++ (\*)
* C# (\*)
* COBOL
* Go (\*)
* Java (\*)
* JavaScript (requires package `esprima`)
* Kotlin (\*)
* Lua (\*)
* Perl (\*)
* Python
* Ruby (\*)
* Rust (\*)
* Scala (\*)
* TypeScript (\*)

Items marked with (\*) require the packages `tree_sitter` and `tree_sitter_languages`. It is straightforward to add support for additional languages using `tree_sitter`, although this currently requires modifying LangChain.

The language used for parsing can be configured, along with the minimum number of lines required to activate syntax-based splitting.

If a language is not explicitly specified, `LanguageParser` will infer one from filename extensions, if present.

```
%pip install -qU esprima tree_sitter tree_sitter_languages
```

```
import warnings

warnings.filterwarnings("ignore")
from pprint import pprint

from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import LanguageParser
from langchain_text_splitters import Language
```

```
loader = GenericLoader.from_filesystem(
    "./example_data/source_code",
    glob="*",
    suffixes=[".py", ".js"],
    parser=LanguageParser(),
)
docs = loader.load()
```

```
for document in docs:
    pprint(document.metadata)
```

```
{'content_type': 'functions_classes', 'language': <Language.PYTHON: 'python'>, 'source': 'example_data/source_code/example.py'}
{'content_type': 'functions_classes', 'language': <Language.PYTHON: 'python'>, 'source': 'example_data/source_code/example.py'}
{'content_type': 'simplified_code', 'language': <Language.PYTHON: 'python'>, 'source': 'example_data/source_code/example.py'}
{'content_type': 'functions_classes', 'language': <Language.JS: 'js'>, 'source': 'example_data/source_code/example.js'}
{'content_type': 'functions_classes', 'language': <Language.JS: 'js'>, 'source': 'example_data/source_code/example.js'}
{'content_type': 'simplified_code', 'language': <Language.JS: 'js'>, 'source': 'example_data/source_code/example.js'}
```

```
print("\n\n--8<--\n\n".join([document.page_content for document in docs]))
```

```
class MyClass:
    def __init__(self, name):
        self.name = name

    def greet(self):
        print(f"Hello, {self.name}!")

--8<--

def main():
    name = input("Enter your name: ")
    obj = MyClass(name)
    obj.greet()

--8<--

# Code for: class MyClass:

# Code for: def main():

if __name__ == "__main__":
    main()

--8<--

class MyClass {
  constructor(name) {
    this.name = name;
  }

  greet() {
    console.log(`Hello, ${this.name}!`);
  }
}

--8<--

function main() {
  const name = prompt("Enter your name:");
  const obj = new MyClass(name);
  obj.greet();
}

--8<--

// Code for: class MyClass {

// Code for: function main() {

main();
```

The parser can be disabled for small files. The parameter `parser_threshold` indicates the minimum number of lines that the source code file must have to be segmented using the parser.
```
loader = GenericLoader.from_filesystem(
    "./example_data/source_code",
    glob="*",
    suffixes=[".py"],
    parser=LanguageParser(language=Language.PYTHON, parser_threshold=1000),
)
docs = loader.load()
```

```
print(docs[0].page_content)
```

```
class MyClass:
    def __init__(self, name):
        self.name = name

    def greet(self):
        print(f"Hello, {self.name}!")


def main():
    name = input("Enter your name: ")
    obj = MyClass(name)
    obj.greet()


if __name__ == "__main__":
    main()
```

## Splitting[​](#splitting "Direct link to Splitting")

Additional splitting could be needed for those functions, classes, or scripts that are too big.

```
loader = GenericLoader.from_filesystem(
    "./example_data/source_code",
    glob="*",
    suffixes=[".js"],
    parser=LanguageParser(language=Language.JS),
)
docs = loader.load()
```

```
from langchain_text_splitters import (
    Language,
    RecursiveCharacterTextSplitter,
)
```

```
js_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.JS, chunk_size=60, chunk_overlap=0
)
```

```
result = js_splitter.split_documents(docs)
```

```
print("\n\n--8<--\n\n".join([document.page_content for document in result]))
```

```
class MyClass {
  constructor(name) {
    this.name = name;

--8<--

}

--8<--

greet() {
    console.log(`Hello, ${this.name}!`);
  }
}

--8<--

function main() {
  const name = prompt("Enter your name:");

--8<--

const obj = new MyClass(name);
  obj.greet();
}

--8<--

// Code for: class MyClass {
// Code for: function main() {

--8<--

main();
```

## Adding Languages using Tree-sitter Template[​](#adding-languages-using-tree-sitter-template "Direct link to Adding Languages using Tree-sitter Template")

Expanding language support using the Tree-Sitter template involves a few essential steps:

1. **Creating a New Language File**:
   * Begin by creating a new file in the designated directory (langchain/libs/community/langchain_community/document_loaders/parsers/language).
   * Model this file based on the structure and parsing logic of existing language files like **`cpp.py`**.
   * You will also need to create a file in the langchain directory (langchain/libs/langchain/langchain/document_loaders/parsers/language).
2. **Parsing Language Specifics**:
   * Mimic the structure used in the **`cpp.py`** file, adapting it to suit the language you are incorporating.
   * The primary alteration involves adjusting the chunk query array to suit the syntax and structure of the language you are parsing.
3. **Testing the Language Parser**:
   * For thorough validation, generate a test file specific to the new language. Create **`test_language.py`** in the designated directory (langchain/libs/community/tests/unit_tests/document_loaders/parsers/language).
   * Follow the example set by **`test_cpp.py`** to establish fundamental tests for the parsed elements in the new language.
4. **Integration into the Parser and Text Splitter**:
   * Incorporate your new language within the **`language_parser.py`** file. Be sure to update LANGUAGE_EXTENSIONS and LANGUAGE_SEGMENTERS, along with the docstring for LanguageParser, so that the added language is recognized and handled.
   * Also, confirm that your language is included in **`text_splitter.py`** in class Language for proper parsing.

By following these steps and ensuring comprehensive testing and integration, you'll successfully extend language support using the Tree-Sitter template (a rough sketch of what such a language file might look like follows below). Best of luck!
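To make the steps above more concrete, here is a rough sketch of what a new language file might look like. It assumes the existing segmenters (such as `cpp.py`) subclass a `TreeSitterSegmenter` base class and override a small set of methods; the exact base-class import path, the method names, and the grammar node names in the chunk query are assumptions based on that pattern, so check the current source of `cpp.py` before copying anything.

```python
# Hypothetical sketch of a new segmenter file (e.g. elixir.py), modeled on the
# structure described for cpp.py. The tree-sitter node name in CHUNK_QUERY is an
# illustrative guess, not a verified grammar query.
from typing import TYPE_CHECKING

from langchain_community.document_loaders.parsers.language.tree_sitter_segmenter import (
    TreeSitterSegmenter,
)

if TYPE_CHECKING:
    from tree_sitter import Language

CHUNK_QUERY = """
    [
        (call) @chunk
    ]
""".strip()


class ElixirSegmenter(TreeSitterSegmenter):
    """Code segmenter for a hypothetical newly added language."""

    def get_language(self) -> "Language":
        # tree_sitter_languages ships prebuilt grammars keyed by name.
        from tree_sitter_languages import get_language

        return get_language("elixir")

    def get_chunk_query(self) -> str:
        return CHUNK_QUERY

    def make_line_comment(self, text: str) -> str:
        # Used when emitting the "# Code for: ..." placeholders in simplified code.
        return f"# {text}"
```

The other half of the work is step 4 above: registering the new segmenter in LANGUAGE_SEGMENTERS and the extension map so `LanguageParser` can actually pick it up.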