StableLM, by Stability AI#
See Stability AI’s organization page for a list of available models.
from langchain import HuggingFaceHub
repo_id = "stabilityai/stablelm-tuned-alpha-3b"
# Others include stabilityai/stablelm-base-alpha-3b
# as well as 7B parameter versions
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
# Reuse the prompt and question from above.
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
Dolly, by Databricks#
See Databricks’ organization page for a list of available models.
from langchain import HuggingFaceHub
repo_id = "databricks/dolly-v2-3b"
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
# Reuse the prompt and question from above.
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
Camel, by Writer#
See Writer’s organization page for a list of available models.
from langchain import HuggingFaceHub
repo_id = "Writer/camel-5b-hf" # See https://huggingface.co/Writer for other options
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
# Reuse the prompt and question from above.
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
And many more!
Petals#
Petals runs 100B+ language models at home, BitTorrent-style.
This notebook goes over how to use LangChain with Petals.
Install petals#
The petals package is required to use the Petals API. Install petals using pip3 install petals.
!pip3 install petals
Imports#
import os
from langchain.llms import Petals
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from Huggingface.
from getpass import getpass
HUGGINGFACE_API_KEY = getpass()
os.environ["HUGGINGFACE_API_KEY"] = HUGGINGFACE_API_KEY
Create the Petals instance#
You can specify different parameters such as the model name, max new tokens, temperature, etc.
# this can take several minutes to download big files!
llm = Petals(model_name="bigscience/bloom-petals")
Downloading: 1%|▏ | 40.8M/7.19G [00:24<15:44, 7.57MB/s]
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
SageMakerEndpoint#
Amazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.
This notebook goes over how to use an LLM hosted on a SageMaker endpoint.
!pip3 install langchain boto3
Set up#
You have to set up the following required parameters of the SagemakerEndpoint call:
endpoint_name: The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region.
credentials_profile_name: The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
Example#
from langchain.docstore.document import Document
example_doc_1 = """
Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital.
Since she was diagnosed with a brain injury, the doctor told Peter to stay beside her until she gets well.
Therefore, Peter stayed with her at the hospital for 3 days without leaving.
"""
docs = [
Document(
page_content=example_doc_1,
)
]
from typing import Dict
from langchain import PromptTemplate, SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import ContentHandlerBase
from langchain.chains.question_answering import load_qa_chain
import json
query = """How long was Elizabeth hospitalized?
""" | https://python.langchain.com/en/latest/modules/models/llms/integrations/sagemaker.html |
99b2b8322c5a-1 | import json
query = """How long was Elizabeth hospitalized?
"""
prompt_template = """Use the following pieces of context to answer the question at the end.
{context}
Question: {question}
Answer:"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
class ContentHandler(ContentHandlerBase):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        # Serialize the prompt and generation kwargs into the JSON payload the endpoint expects;
        # the key must be the string literal "prompt"
        input_str = json.dumps({"prompt": prompt, **model_kwargs})
        return input_str.encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json[0]["generated_text"]
content_handler = ContentHandler()
chain = load_qa_chain(
llm=SagemakerEndpoint(
endpoint_name="endpoint-name",
credentials_profile_name="credentials-profile-name",
region_name="us-west-2",
model_kwargs={"temperature":1e-10},
content_handler=content_handler
),
prompt=PROMPT
)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
OpenAI#
OpenAI offers a spectrum of models with different levels of power suitable for different tasks.
This example goes over how to use LangChain to interact with OpenAI models.
# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass
OPENAI_API_KEY = getpass()
import os
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
from langchain.llms import OpenAI
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = OpenAI()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
' Justin Bieber was born in 1994, so we are looking for the Super Bowl winner from that year. The Super Bowl in 1994 was Super Bowl XXVIII, and the winner was the Dallas Cowboys.'
AI21#
AI21 Studio provides API access to Jurassic-2 large language models.
This example goes over how to use LangChain to interact with AI21 models.
# install the package:
!pip install ai21
# get AI21_API_KEY. Use https://studio.ai21.com/account/account
from getpass import getpass
AI21_API_KEY = getpass()
from langchain.llms import AI21
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = AI21(ai21_api_key=AI21_API_KEY)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
'\n1. What year was Justin Bieber born?\nJustin Bieber was born in 1994.\n2. What team won the Super Bowl in 1994?\nThe Dallas Cowboys won the Super Bowl in 1994.'
PredictionGuard#
This notebook goes over how to use the PredictionGuard wrapper.
! pip install predictionguard langchain
import predictionguard as pg
from langchain.llms import PredictionGuard
Basic LLM usage#
pgllm = PredictionGuard(name="default-text-gen", token="<your access token>")
pgllm("Tell me a joke")
Chaining#
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.predict(question=question)
template = """Write a {adjective} poem about {subject}."""
prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])
llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)
llm_chain.predict(adjective="sad", subject="ducks")
Manifest#
This notebook goes over how to use Manifest and LangChain.
For more detailed information on Manifest, and how to use it with local Hugging Face models as in this example, see https://github.com/HazyResearch/manifest
Another example of using Manifest with LangChain.
!pip install manifest-ml
from manifest import Manifest
from langchain.llms.manifest import ManifestWrapper
manifest = Manifest(
client_name = "huggingface",
client_connection = "http://127.0.0.1:5000"
)
print(manifest.client.get_model_params())
llm = ManifestWrapper(client=manifest, llm_kwargs={"temperature": 0.001, "max_tokens": 256})
# Map reduce example
from langchain import PromptTemplate
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
_prompt = """Write a concise summary of the following:
{text}
CONCISE SUMMARY:"""
prompt = PromptTemplate(template=_prompt, input_variables=["text"])
text_splitter = CharacterTextSplitter()
mp_chain = MapReduceChain.from_params(llm, prompt, text_splitter)
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
mp_chain.run(state_of_the_union)
'President Obama delivered his annual State of the Union address on Tuesday night, laying out his priorities for the coming year. Obama said the government will provide free flu vaccines to all Americans, ending the government shutdown and allowing businesses to reopen. The president also said that the government will continue to send vaccines to 112 countries, more than any other nation. "We have lost so much to COVID-19," Trump said. "Time with one another. And worst of all, so much loss of life." He said the CDC is working on a vaccine for kids under 5, and that the government will be ready with plenty of vaccines when they are available. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government is launching a "Test to Treat" initiative that will allow people to get tested at a pharmacy and get antiviral pills on the spot at no cost. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government will continue to send vaccines to 112 countries, more than any other nation. "We are coming for your'
Compare HF Models#
from langchain.model_laboratory import ModelLaboratory
manifest1 = ManifestWrapper(
client=Manifest(
client_name="huggingface",
client_connection="http://127.0.0.1:5000"
),
llm_kwargs={"temperature": 0.01}
)
manifest2 = ManifestWrapper(
client=Manifest(
client_name="huggingface",
client_connection="http://127.0.0.1:5001"
),
llm_kwargs={"temperature": 0.01}
)
manifest3 = ManifestWrapper(
client=Manifest(
client_name="huggingface",
client_connection="http://127.0.0.1:5002"
),
llm_kwargs={"temperature": 0.01}
)
llms = [manifest1, manifest2, manifest3]
model_lab = ModelLaboratory(llms)
model_lab.compare("What color is a flamingo?")
Input:
What color is a flamingo?
ManifestWrapper
Params: {'model_name': 'bigscience/T0_3B', 'model_path': 'bigscience/T0_3B', 'temperature': 0.01}
pink
ManifestWrapper
Params: {'model_name': 'EleutherAI/gpt-neo-125M', 'model_path': 'EleutherAI/gpt-neo-125M', 'temperature': 0.01}
A flamingo is a small, round
ManifestWrapper
Params: {'model_name': 'google/flan-t5-xl', 'model_path': 'google/flan-t5-xl', 'temperature': 0.01}
pink
DeepInfra#
DeepInfra provides several LLMs.
This notebook goes over how to use LangChain with DeepInfra.
Imports#
import os
from langchain.llms import DeepInfra
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from DeepInfra. You have to log in and get a new token.
You are given 1 hour of free serverless GPU compute to test different models (see here).
You can print your token with deepctl auth token
# get a new token: https://deepinfra.com/login?from=%2Fdash
from getpass import getpass
DEEPINFRA_API_TOKEN = getpass()
os.environ["DEEPINFRA_API_TOKEN"] = DEEPINFRA_API_TOKEN
Create the DeepInfra instance#
Make sure to deploy your model first via deepctl deploy create -m google/flan-t5-xl (see here)
llm = DeepInfra(model_id="DEPLOYED MODEL ID")
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in 2015?"
llm_chain.run(question)
GooseAI#
GooseAI is a fully managed NLP-as-a-Service, delivered via API. GooseAI provides access to these models.
This notebook goes over how to use LangChain with GooseAI.
Install openai#
The openai package is required to use the GooseAI API. Install openai using pip3 install openai.
$ pip3 install openai
Imports#
import os
from langchain.llms import GooseAI
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from GooseAI. You are given $10 in free credits to test different models.
from getpass import getpass
GOOSEAI_API_KEY = getpass()
os.environ["GOOSEAI_API_KEY"] = GOOSEAI_API_KEY
Create the GooseAI instance#
You can specify different parameters such as the model name, max tokens generated, temperature, etc.
llm = GooseAI()
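These parameters can also be passed explicitly at construction; a short hedged sketch (the parameter names below are assumptions based on the wrapper's defaults, not confirmed by this page):
llm = GooseAI(model_name="gpt-neo-20b", temperature=0.7, max_tokens=64)  # names assumed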
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
Hugging Face Local Pipelines#
Hugging Face models can be run locally through the HuggingFacePipeline class.
The Hugging Face Model Hub hosts over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.
These can be called from LangChain either through this local pipeline wrapper or by calling their hosted inference endpoints through the HuggingFaceHub class. For more information on the hosted pipelines, see the HuggingFaceHub notebook.
To use, you should have the transformers python package installed.
!pip install transformers > /dev/null
Load the model#
from langchain import HuggingFacePipeline
llm = HuggingFacePipeline.from_model_id(model_id="bigscience/bloom-1b7", task="text-generation", model_kwargs={"temperature":0, "max_length":64})
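Alternatively, an already-constructed transformers pipeline can be wrapped directly; a minimal sketch, assuming the HuggingFacePipeline constructor accepts a pipeline argument:
from transformers import pipeline
# Build the pipeline yourself, then hand it to the wrapper
pipe = pipeline("text-generation", model="bigscience/bloom-1b7", max_new_tokens=64)
llm = HuggingFacePipeline(pipeline=pipe)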
Integrate the model in an LLMChain#
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What is electroencephalography?"
print(llm_chain.run(question))
/Users/wfh/code/lc/lckg/.venv/lib/python3.11/site-packages/transformers/generation/utils.py:1288: UserWarning: Using `max_length`'s default (64) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
warnings.warn(
First, we need to understand what is an electroencephalogram. An electroencephalogram is a recording of brain activity. It is a recording of brain activity that is made by placing electrodes on the scalp. The electrodes are placed
Banana#
Banana is focused on building machine learning infrastructure.
This example goes over how to use LangChain to interact with Banana models.
# Install the package https://docs.banana.dev/banana-docs/core-concepts/sdks/python
!pip install banana-dev
# get new tokens: https://app.banana.dev/
# We need two tokens, not just an `api_key`: `BANANA_API_KEY` and `YOUR_MODEL_KEY`
import os
from getpass import getpass
os.environ["BANANA_API_KEY"] = "YOUR_API_KEY"
# OR
# BANANA_API_KEY = getpass()
from langchain.llms import Banana
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Banana(model_key="YOUR_MODEL_KEY")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
GPT4All#
GitHub: nomic-ai/gpt4all, an ecosystem of open-source chatbots trained on massive collections of clean assistant data, including code, stories, and dialogue.
This example goes over how to use LangChain to interact with GPT4All models.
%pip install pygpt4all > /dev/null
Note: you may need to restart the kernel to use updated packages.
from langchain import PromptTemplate, LLMChain
from langchain.llms import GPT4All
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Specify Model#
To run locally, download a compatible ggml-formatted model. For more info, visit https://github.com/nomic-ai/pygpt4all
For full installation instructions go here.
The GPT4All Chat installer needs to decompress a 3GB LLM model during the installation process!
Note that new models are uploaded regularly - check the link above for the most recent .bin URL
local_path = './models/ggml-gpt4all-l13b-snoozy.bin' # replace with your desired local file path
Uncomment the below block to download a model. You may want to update url to a new version.
# import requests
# from pathlib import Path
# from tqdm import tqdm
# Path(local_path).parent.mkdir(parents=True, exist_ok=True)
# # Example model. Check https://github.com/nomic-ai/pygpt4all for the latest models.
# url = 'http://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin'
# # send a GET request to the URL to download the file. Stream since it's large
# response = requests.get(url, stream=True)
# # open the file in binary mode and write the contents of the response to it in chunks
# # This is a large file, so be prepared to wait.
# with open(local_path, 'wb') as f:
#     for chunk in tqdm(response.iter_content(chunk_size=8192)):
#         if chunk:
#             f.write(chunk)
# Callbacks support token-wise streaming
callbacks = [StreamingStdOutCallbackHandler()]
# Verbose is required to pass to the callback manager
llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
StochasticAI#
The Stochastic Acceleration Platform aims to simplify the life cycle of a deep learning model: from uploading and versioning the model, through training, compression, and acceleration, to putting it into production.
This example goes over how to use LangChain to interact with StochasticAI models.
You have to get the API_KEY and the API_URL here.
from getpass import getpass
STOCHASTICAI_API_KEY = getpass()
import os
os.environ["STOCHASTICAI_API_KEY"] = STOCHASTICAI_API_KEY
YOUR_API_URL = getpass()
from langchain.llms import StochasticAI
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = StochasticAI(api_url=YOUR_API_URL)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
"\n\nStep 1: In 1999, the St. Louis Rams won the Super Bowl.\n\nStep 2: In 1999, Beiber was born.\n\nStep 3: The Rams were in Los Angeles at the time.\n\nStep 4: So they didn't play in the Super Bowl that year.\n"
Runhouse#
Runhouse allows remote compute and data across environments and users. See the Runhouse docs.
This example goes over how to use LangChain and Runhouse to interact with models hosted on your own GPU, or on-demand GPUs on AWS, GCP, Azure, or Lambda.
Note: the code below uses the SelfHosted class names rather than Runhouse.
!pip install runhouse
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
from langchain import PromptTemplate, LLMChain
import runhouse as rh
INFO | 2023-04-17 16:47:36,173 | No auth token provided, so not using RNS API to save and load configs
# For an on-demand A100 with GCP, Azure, or Lambda
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)
# For an on-demand A10G with AWS (no single A100s on AWS)
# gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws')
# For an existing cluster
# gpu = rh.cluster(ips=['<ip of the cluster>'],
# ssh_creds={'ssh_user': '...', 'ssh_private_key':'<path_to_key>'},
# name='rh-a10x')
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = SelfHostedHuggingFaceLLM(model_id="gpt2", hardware=gpu, model_reqs=["pip:./", "transformers", "torch"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
INFO | 2023-02-17 05:42:23,537 | Running _generate_text via gRPC
INFO | 2023-02-17 05:42:24,016 | Time to send message: 0.48 seconds
"\n\nLet's say we're talking sports teams who won the Super Bowl in the year Justin Beiber"
You can also load more custom models through the SelfHostedHuggingFaceLLM interface:
llm = SelfHostedHuggingFaceLLM(
model_id="google/flan-t5-small",
task="text2text-generation",
hardware=gpu,
)
llm("What is the capital of Germany?")
INFO | 2023-02-17 05:54:21,681 | Running _generate_text via gRPC
INFO | 2023-02-17 05:54:21,937 | Time to send message: 0.25 seconds
'berlin'
Using a custom load function, we can load a custom pipeline directly on the remote hardware:
def load_pipeline():
    from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline  # Need to be inside the fn in notebooks
    model_id = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    pipe = pipeline(
        "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
    )
    return pipe
def inference_fn(pipeline, prompt, stop=None):
    return pipeline(prompt)[0]["generated_text"][len(prompt):]
llm = SelfHostedHuggingFaceLLM(model_load_fn=load_pipeline, hardware=gpu, inference_fn=inference_fn)
llm("Who is the current US president?")
INFO | 2023-02-17 05:42:59,219 | Running _generate_text via gRPC
INFO | 2023-02-17 05:42:59,522 | Time to send message: 0.3 seconds
'john w. bush'
You can send your pipeline directly over the wire to your model, but this will only work for small models (<2 Gb), and will be pretty slow:
pipeline = load_pipeline()
llm = SelfHostedPipeline.from_pipeline(
pipeline=pipeline, hardware=gpu, model_reqs=["pip:./", "transformers", "torch"]  # packages required on the remote hardware
)
Instead, we can also send it to the hardware’s filesystem, which will be much faster.
import pickle
rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to(gpu, path="models")
llm = SelfHostedPipeline.from_pipeline(pipeline="models/pipeline.pkl", hardware=gpu)
Cohere#
Let’s load the Cohere Embedding class.
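The cohere_api_key used below comes from your Cohere account; one way to supply it interactively, mirroring the pattern used elsewhere in these docs:
from getpass import getpass
cohere_api_key = getpass()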
from langchain.embeddings import CohereEmbeddings
embeddings = CohereEmbeddings(cohere_api_key=cohere_api_key)
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
Aleph Alpha#
There are two possible ways to use Aleph Alpha’s semantic embeddings. If you have texts with a dissimilar structure (e.g. a Document and a Query) you would want to use asymmetric embeddings. Conversely, for texts with comparable structures, symmetric embeddings are the suggested approach.
Asymmetric#
from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding
document = "This is the content of the document"
query = "What is the content of the document?"
embeddings = AlephAlphaAsymmetricSemanticEmbedding()
doc_result = embeddings.embed_documents([document])
query_result = embeddings.embed_query(query)
Symmetric#
from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding
text = "This is a test text"
embeddings = AlephAlphaSymmetricSemanticEmbedding()
doc_result = embeddings.embed_documents([text])
query_result = embeddings.embed_query(text)
Llama-cpp#
This notebook goes over how to use Llama-cpp embeddings within LangChain.
!pip install llama-cpp-python
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/path/to/model/ggml-model-q4_0.bin")
text = "This is a test document."
query_result = llama.embed_query(text)
doc_result = llama.embed_documents([text])
Fake Embeddings#
LangChain also provides a fake embedding class. You can use this to test your pipelines.
from langchain.embeddings import FakeEmbeddings
embeddings = FakeEmbeddings(size=1352)
query_result = embeddings.embed_query("foo")
doc_results = embeddings.embed_documents(["foo"])
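For instance, a fake embedder can stand in for a real one when unit-testing a retrieval pipeline without making API calls. A minimal sketch, assuming FAISS is installed (pip install faiss-cpu); the sample texts are arbitrary:
from langchain.embeddings import FakeEmbeddings
from langchain.vectorstores import FAISS

embeddings = FakeEmbeddings(size=1352)
db = FAISS.from_texts(["foo", "bar"], embeddings)
# The vectors are random, so the results are arbitrary; only the plumbing is exercised
docs = db.similarity_search("foo", k=1)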
Sentence Transformers Embeddings#
SentenceTransformers embeddings are called using the HuggingFaceEmbeddings integration. We have also added an alias for SentenceTransformerEmbeddings for users who are more familiar with directly using that package.
SentenceTransformers is a Python package that can generate text and image embeddings, originating from Sentence-BERT.
!pip install sentence_transformers > /dev/null
from langchain.embeddings import HuggingFaceEmbeddings, SentenceTransformerEmbeddings
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
# Equivalent to SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text, "This is not a test document."])
SageMaker Endpoint Embeddings#
Let’s load the SageMaker Endpoints Embeddings class. The class can be used if you host, e.g. your own Hugging Face model on SageMaker.
For instructions on how to do this, please see here. Note: In order to handle batched requests, you will need to adjust the return line in the predict_fn() function within the custom inference.py script:
Change from
return {"vectors": sentence_embeddings[0].tolist()}
to:
return {"vectors": sentence_embeddings.tolist()}.
!pip3 install langchain boto3
from typing import Dict, List
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.llms.sagemaker_endpoint import ContentHandlerBase
import json
class ContentHandler(ContentHandlerBase):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, inputs: List[str], model_kwargs: Dict) -> bytes:
        input_str = json.dumps({"inputs": inputs, **model_kwargs})
        return input_str.encode("utf-8")

    def transform_output(self, output: bytes) -> List[List[float]]:
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["vectors"]
content_handler = ContentHandler()
embeddings = SagemakerEndpointEmbeddings(
# endpoint_name="endpoint-name",
# credentials_profile_name="credentials-profile-name",
endpoint_name="huggingface-pytorch-inference-2023-03-21-16-14-03-834",
region_name="us-east-1",
content_handler=content_handler
)
query_result = embeddings.embed_query("foo")
doc_results = embeddings.embed_documents(["foo"]) | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/sagemaker-endpoint.html |
979dd632a2df-1 | query_result = embeddings.embed_query("foo")
doc_results = embeddings.embed_documents(["foo"])
doc_results
OpenAI#
Let’s load the OpenAI Embedding class.
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
Let’s load the OpenAI Embedding class with first-generation models (e.g. text-search-ada-doc-001/text-search-ada-query-001). Note: these are not recommended models; see here.
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(model_name="ada")
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
AzureOpenAI#
Let’s load the OpenAI Embedding class with environment variables set to indicate to use Azure endpoints.
# set the environment variables needed for openai package to know to reach out to azure
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(model="your-embeddings-deployment-name")
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
Jina#
Let’s load the Jina Embedding class.
from langchain.embeddings import JinaEmbeddings
embeddings = JinaEmbeddings(jina_auth_token=jina_auth_token, model_name="ViT-B-32::openai")
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
In the above example, ViT-B-32::openai, OpenAI’s pretrained ViT-B-32 model, is used. For a full list of models, see here.
TensorflowHub#
Let’s load the TensorflowHub Embedding class.
from langchain.embeddings import TensorflowHubEmbeddings
embeddings = TensorflowHubEmbeddings()
2023-01-30 23:53:01.652176: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-01-30 23:53:34.362802: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_results = embeddings.embed_documents(["foo"])
doc_results
Self Hosted Embeddings#
Let’s load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes.
from langchain.embeddings import (
SelfHostedEmbeddings,
SelfHostedHuggingFaceEmbeddings,
SelfHostedHuggingFaceInstructEmbeddings,
)
import runhouse as rh
# For an on-demand A100 with GCP, Azure, or Lambda
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)
# For an on-demand A10G with AWS (no single A100s on AWS)
# gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws')
# For an existing cluster
# gpu = rh.cluster(ips=['<ip of the cluster>'],
# ssh_creds={'ssh_user': '...', 'ssh_private_key':'<path_to_key>'},
# name='my-cluster')
embeddings = SelfHostedHuggingFaceEmbeddings(hardware=gpu)
text = "This is a test document."
query_result = embeddings.embed_query(text)
And similarly for SelfHostedHuggingFaceInstructEmbeddings:
embeddings = SelfHostedHuggingFaceInstructEmbeddings(hardware=gpu)
Now let’s load an embedding model with a custom load function:
def get_pipeline():
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        pipeline,
    )  # Must be inside the function in notebooks
    model_id = "facebook/bart-base"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return pipeline("feature-extraction", model=model, tokenizer=tokenizer)
def inference_fn(pipeline, prompt):
    # Return last hidden state of the model
    if isinstance(prompt, list):
        return [emb[0][-1] for emb in pipeline(prompt)]
    return pipeline(prompt)[0][-1]
embeddings = SelfHostedEmbeddings(
model_load_fn=get_pipeline,
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
inference_fn=inference_fn,
)
query_result = embeddings.embed_query(text)
Hugging Face Hub#
Let’s load the Hugging Face Embedding class.
from langchain.embeddings import HuggingFaceEmbeddings
embeddings = HuggingFaceEmbeddings()
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
InstructEmbeddings#
Let’s load the HuggingFace instruct Embeddings class.
from langchain.embeddings import HuggingFaceInstructEmbeddings
embeddings = HuggingFaceInstructEmbeddings(
query_instruction="Represent the query for retrieval: "
)
load INSTRUCTOR_Transformer
max_seq_length 512
text = "This is a test document."
query_result = embeddings.embed_query(text)
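Document embeddings can carry their own instruction as well; a short sketch (the embed_instruction parameter name is an assumption about the wrapper, and the instruction string is illustrative):
embeddings = HuggingFaceInstructEmbeddings(
    embed_instruction="Represent the document for retrieval: ",  # parameter name assumed
    query_instruction="Represent the query for retrieval: "
)
doc_result = embeddings.embed_documents([text])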
Document Loaders#
Note
Conceptual Guide
Combining language models with your own text data is a powerful way to differentiate them.
The first step in doing this is to load the data into “documents” - a fancy way of saying pieces of text.
This module is aimed at making this easy.
A primary driver of a lot of this is the Unstructured python package.
This package is a great way to transform all types of files - text, powerpoint, images, html, pdf, etc - into text data.
For detailed instructions on how to get set up with Unstructured, see installation guidelines here.
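As a minimal sketch of what loading looks like in practice, here is the plain-text loader (the file path is hypothetical):
from langchain.document_loaders import TextLoader

loader = TextLoader("./my_notes.txt", encoding="utf8")
docs = loader.load()  # a list of Document objects with page_content and metadata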
The following document loaders are provided:
CoNLL-U
Airbyte JSON
Apify Dataset
Arxiv
AZLyrics
Azure Blob Storage Container
Azure Blob Storage File
BigQuery Loader
Bilibili
Blackboard
Blockchain
ChatGPT Data Loader
College Confidential
Confluence
Copy Paste
CSV Loader
DataFrame Loader
Diffbot
Directory Loader
Discord
DuckDB Loader
Email
EPubs
EverNote
Facebook Chat
Figma
GCS Directory
GCS File Storage
Git
GitBook
Google Drive
Gutenberg
Hacker News
HTML
HuggingFace dataset loader
iFixit
Images
Image captions
IMSDb
Markdown
Modern Treasury
Notebook
Notion
Notion DB Loader
Obsidian
PDF
PowerPoint
ReadTheDocs Documentation
Reddit
Roam
s3 Directory
s3 File
Sitemap Loader
Slack (Local Exported Zipfile)
Spreedly
Subtitle Files
Stripe
Telegram
Twitter
Unstructured File Loader
URL
Selenium URL Loader
Playwright URL Loader
Web Base
WhatsApp Chat
Word Documents
YouTube
Vectorstores#
Note
Conceptual Guide
Vectorstores are one of the most important components of building indexes.
For an introduction to vectorstores and generic functionality see:
Getting Started
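All of the vectorstores below share a common surface; a minimal sketch with FAISS (the sample text and query are arbitrary, and faiss-cpu must be installed):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Embed and index the texts, then retrieve the most similar ones for a query
db = FAISS.from_texts(["harrison worked at kensho"], OpenAIEmbeddings())
docs = db.similarity_search("Where did harrison work?")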
We also have documentation for all the types of vectorstores that are supported.
Please see below for that list.
AnalyticDB
Annoy
AtlasDB
Chroma
Deep Lake
ElasticSearch
FAISS
LanceDB
Milvus
MyScale
OpenSearch
PGVector
Pinecone
Qdrant
Redis
SupabaseVectorStore
Tair
Weaviate
Zilliz
Text Splitters#
Note
Conceptual Guide
When you want to deal with long pieces of text, it is necessary to split up that text into chunks.
As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What “semantically related” means could depend on the type of text.
This notebook showcases several ways to do that.
At a high level, text splitters work as follows:
Split the text up into small, semantically meaningful chunks (often sentences).
Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).
Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).
That means there are two different axes along which you can customize your text splitter:
How the text is split
How the chunk size is measured
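Both axes map directly onto constructor arguments; a minimal sketch (measuring chunk size in whitespace-separated words is purely an illustrative choice):
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=50,
    chunk_overlap=5,
    length_function=lambda text: len(text.split()),  # measure chunks in words instead of characters
)
chunks = splitter.split_text("Some long document text ...")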
For an introduction to the default text splitter and generic functionality see:
Getting Started
We also have documentation for all the types of text splitters that are supported.
Please see below for that list.
Character Text Splitter
Hugging Face Length Function
Latex Text Splitter
Markdown Text Splitter
NLTK Text Splitter
Python Code Text Splitter
RecursiveCharacterTextSplitter
Spacy Text Splitter
tiktoken (OpenAI) Length Function
TiktokenText Splitter
Retrievers#
Note
Conceptual Guide
The retriever interface is a generic interface that makes it easy to combine documents with language models. This interface exposes a get_relevant_documents method, which takes in a query (a string) and returns a list of documents.
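Because the interface is this small, any search backend can be wrapped; a minimal custom retriever sketch (the keyword-overlap logic is purely illustrative, and BaseRetriever is assumed importable from langchain.schema, as in the Getting Started guide):
from typing import List
from langchain.schema import BaseRetriever, Document

class KeywordRetriever(BaseRetriever):
    def __init__(self, docs: List[Document]):
        self.docs = docs

    def get_relevant_documents(self, query: str) -> List[Document]:
        # Return every document that shares at least one word with the query
        words = set(query.lower().split())
        return [doc for doc in self.docs if words & set(doc.page_content.lower().split())]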
Please see below for a list of all the retrievers supported.
ChatGPT Plugin Retriever
Cohere Reranker
Contextual Compression Retriever
Stringing compressors and document transformers together
Databerry
ElasticSearch BM25
Metal
Pinecone Hybrid Search
Self-querying retriever
Creating our self-querying retriever
Testing it out
SVM Retriever
TF-IDF Retriever
Time Weighted VectorStore Retriever
VectorStore Retriever
Vespa retriever
Weaviate Hybrid Search
Getting Started#
LangChain primarily focuses on constructing indexes with the goal of using them as a Retriever. In order to best understand what this means, it’s worth highlighting what the base Retriever interface is. The BaseRetriever class in LangChain is as follows:
from abc import ABC, abstractmethod
from typing import List
from langchain.schema import Document
class BaseRetriever(ABC):
    @abstractmethod
    def get_relevant_documents(self, query: str) -> List[Document]:
        """Get texts relevant for a query.

        Args:
            query: string to find relevant texts for

        Returns:
            List of relevant documents
        """
It’s that simple! The get_relevant_documents method can be implemented however you see fit.
Of course, we also help construct what we think useful Retrievers are. The main type of Retriever that we focus on is a Vectorstore retriever. We will focus on that for the rest of this guide.
In order to understand what a vectorstore retriever is, it’s important to understand what a Vectorstore is. So let’s look at that.
By default, LangChain uses Chroma as the vectorstore to index and search embeddings. To walk through this tutorial, we’ll first need to install chromadb.
pip install chromadb
This example showcases question answering over documents.
We have chosen this as the example for getting started because it nicely combines a lot of different elements (Text splitters, embeddings, vectorstores) and then also shows how to use them in a chain.
Question answering over documents consists of four steps:
Create an index
Create a Retriever from that index
Create a question answering chain
Ask questions!
Each of the steps has multiple sub steps and potential configurations. In this notebook we will primarily focus on (1). We will start by showing the one-liner for doing so, but then break down what is actually going on.
First, let’s import some common classes we’ll use no matter what.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
Next in the generic setup, let’s specify the document loader we want to use. You can download the state_of_the_union.txt file here
from langchain.document_loaders import TextLoader
loader = TextLoader('../state_of_the_union.txt', encoding='utf8')
One Line Index Creation#
To get started as quickly as possible, we can use the VectorstoreIndexCreator.
from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator().from_loaders([loader])
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
Now that the index is created, we can use it to ask questions of the data! Note that under the hood this is actually doing a few steps as well, which we will cover later in this guide.
query = "What did the president say about Ketanji Brown Jackson"
index.query(query)
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
query = "What did the president say about Ketanji Brown Jackson"
index.query_with_sources(query)
{'question': 'What did the president say about Ketanji Brown Jackson',
'answer': " The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, one of the nation's top legal minds, to continue Justice Breyer's legacy of excellence, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\n",
'sources': '../state_of_the_union.txt'}
What is returned from the VectorstoreIndexCreator is VectorStoreIndexWrapper, which provides these nice query and query_with_sources functionality. If we just wanted to access the vectorstore directly, we can also do that.
index.vectorstore
<langchain.vectorstores.chroma.Chroma at 0x119aa5940>
If we then want to access the VectorstoreRetriever, we can do that with:
index.vectorstore.as_retriever()
VectorStoreRetriever(vectorstore=<langchain.vectorstores.chroma.Chroma object at 0x119aa5940>, search_kwargs={})
Walkthrough#
Okay, so what’s actually going on? How is this index getting created?
A lot of the magic is being hid in this VectorstoreIndexCreator. What is this doing?
There are three main steps going on after the documents are loaded:
Splitting documents into chunks
Creating embeddings for each document
Storing documents and embeddings in a vectorstore
Let’s walk through this in code
documents = loader.load()
Next, we will split the documents into chunks.
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
We will then select which embeddings we want to use.
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
We now create the vectorstore to use as the index.
from langchain.vectorstores import Chroma
db = Chroma.from_documents(texts, embeddings)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
So that’s creating the index. Then, we expose this index in a retriever interface.
retriever = db.as_retriever()
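The retriever can also be configured, for example to control how many documents it returns per query; a short sketch (search_kwargs appears in the VectorStoreRetriever shown above, and the value 4 is arbitrary):
retriever = db.as_retriever(search_kwargs={"k": 4})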
Then, as before, we create a chain and use it to answer questions!
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever)
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
" The President said that Judge Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He said she is a consensus builder and has received a broad range of support from organizations such as the Fraternal Order of Police and former judges appointed by Democrats and Republicans."
VectorstoreIndexCreator is just a wrapper around all this logic. It is configurable in the text splitter it uses, the embeddings it uses, and the vectorstore it uses. For example, you can configure it as below:
index_creator = VectorstoreIndexCreator(
vectorstore_cls=Chroma,
embedding=OpenAIEmbeddings(),
text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
)
Hopefully this highlights what is going on under the hood of VectorstoreIndexCreator. While we think it’s important to have a simple way to create indexes, we also think it’s important to understand what’s going on under the hood.
Getting Started#
The default recommended text splitter is the RecursiveCharacterTextSplitter. This text splitter takes a list of characters. It tries to create chunks based on splitting on the first character, but if any chunks are too large it then moves onto the next character, and so forth. By default the characters it tries to split on are ["\n\n", "\n", " ", ""]
In addition to controlling which characters you can split on, you can also control a few other things:
length_function: how the length of chunks is calculated. Defaults to just counting the number of characters, but it’s pretty common to pass a token counter here (a token-based sketch follows the example below).
chunk_size: the maximum size of your chunks (as measured by the length function).
chunk_overlap: the maximum overlap between chunks. It can be nice to have some overlap to maintain continuity between chunks (e.g. a sliding window).
# This is a long document we can split up.
with open('../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
# Set a really small chunk size, just to show.
chunk_size = 100,
chunk_overlap = 20,
length_function = len,
)
texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
print(texts[1])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' lookup_str='' metadata={} lookup_index=0
page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' lookup_str='' metadata={} lookup_index=0
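If you would rather measure chunk size in tokens, you can pass a token counter as the length_function. A minimal sketch, assuming the tiktoken package is installed:
import tiktoken
enc = tiktoken.get_encoding("gpt2")
def tiktoken_len(text):
    # Count tokens instead of characters
    return len(enc.encode(text))
token_splitter = RecursiveCharacterTextSplitter(
    chunk_size = 100,   # now measured in tokens rather than characters
    chunk_overlap = 20,
    length_function = tiktoken_len,
)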
previous
Text Splitters
next
Character Text Splitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
Latex Text Splitter
Latex Text Splitter#
LatexTextSplitter splits text along LaTeX headings, sections, enumerations, and more. It’s implemented as a simple subclass of RecursiveCharacterTextSplitter with LaTeX-specific separators. See the source code for the LaTeX syntax expected by default.
How the text is split: by a list of LaTeX-specific separators
How the chunk size is measured: by length function passed in (defaults to number of characters)
from langchain.text_splitter import LatexTextSplitter
# Use a raw string so LaTeX commands like \begin aren't interpreted as Python escape sequences
latex_text = r"""
\documentclass{article}
\begin{document}
\maketitle
\section{Introduction}
Large language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.
\subsection{History of LLMs}
The earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.
\subsection{Applications of LLMs}
LLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics.
\end{document}
"""
latex_splitter = LatexTextSplitter(chunk_size=400, chunk_overlap=0)
docs = latex_splitter.create_documents([latex_text])
docs
[Document(page_content='\\documentclass{article}\n\n\\begin{document}\n\n\\maketitle', lookup_str='', metadata={}, lookup_index=0),
Document(page_content='Introduction}\nLarge language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.', lookup_str='', metadata={}, lookup_index=0),
Document(page_content='History of LLMs}\nThe earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.', lookup_str='', metadata={}, lookup_index=0),
Document(page_content='Applications of LLMs}\nLLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics.\n\n\\end{document}', lookup_str='', metadata={}, lookup_index=0)]
previous
Hugging Face Length Function
next
Markdown Text Splitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
RecursiveCharacterTextSplitter
RecursiveCharacterTextSplitter#
This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""]. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.
How the text is split: by list of characters
How the chunk size is measured: by length function passed in (defaults to number of characters)
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
# Set a really small chunk size, just to show.
chunk_size = 100,
chunk_overlap = 20,
length_function = len,
)
texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
print(texts[1])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' lookup_str='' metadata={} lookup_index=0
page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' lookup_str='' metadata={} lookup_index=0
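The separator list itself can also be overridden, for example to try sentence boundaries before falling back to words. A hedged sketch, assuming the separators keyword argument in your version:
sentence_splitter = RecursiveCharacterTextSplitter(
    separators = ["\n\n", "\n", ". ", " ", ""],  # paragraphs, lines, sentences, then words
    chunk_size = 100,
    chunk_overlap = 20,
)
sentence_texts = sentence_splitter.create_documents([state_of_the_union])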
previous
Python Code Text Splitter
next
Spacy Text Splitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
Python Code Text Splitter
Python Code Text Splitter#
PythonCodeTextSplitter splits text along Python class and function definitions. It’s implemented as a simple subclass of RecursiveCharacterTextSplitter with Python-specific separators. See the source code for the Python syntax expected by default.
How the text is split: by a list of Python-specific separators
How the chunk size is measured: by length function passed in (defaults to number of characters)
from langchain.text_splitter import PythonCodeTextSplitter
python_text = """
class Foo:

    def bar():


def foo():

def testing_func():

def bar():
"""
python_splitter = PythonCodeTextSplitter(chunk_size=30, chunk_overlap=0)
docs = python_splitter.create_documents([python_text])
docs
[Document(page_content='Foo:\n\n    def bar():', lookup_str='', metadata={}, lookup_index=0),
Document(page_content='foo():\n\ndef testing_func():', lookup_str='', metadata={}, lookup_index=0),
Document(page_content='bar():', lookup_str='', metadata={}, lookup_index=0)]
previous
NLTK Text Splitter
next
RecursiveCharacterTextSplitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
NLTK Text Splitter
NLTK Text Splitter#
Rather than just splitting on “\n\n”, we can use NLTK to split based on its sentence tokenizer.
How the text is split: by NLTK
How the chunk size is measured: by length function passed in (defaults to number of characters)
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import NLTKTextSplitter
text_splitter = NLTKTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.
Members of Congress and the Cabinet.
Justices of the Supreme Court.
My fellow Americans.
Last year COVID-19 kept us apart.
This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents.
But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
And with an unwavering resolve that freedom will always triumph over tyranny.
Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways.
But he badly miscalculated.
He thought he could roll into Ukraine and the world would roll over.
Instead he met a wall of strength he never imagined.
He met the Ukrainian people.
From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.
Groups of citizens blocking tanks with their bodies.
previous
Markdown Text Splitter
next
Python Code Text Splitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
Markdown Text Splitter
Markdown Text Splitter#
MarkdownTextSplitter splits text along Markdown headings, code blocks, and horizontal rules. It’s implemented as a simple subclass of RecursiveCharacterTextSplitter with Markdown-specific separators. See the source code for the Markdown syntax expected by default.
How the text is split: by list of markdown specific characters
How the chunk size is measured: by length function passed in (defaults to number of characters)
from langchain.text_splitter import MarkdownTextSplitter
markdown_text = """
# 🦜️🔗 LangChain
⚡ Building applications with LLMs through composability ⚡
## Quick Install
```bash
# Hopefully this code block isn't split
pip install langchain
```
As an open source project in a rapidly developing field, we are extremely open to contributions.
"""
markdown_splitter = MarkdownTextSplitter(chunk_size=100, chunk_overlap=0)
docs = markdown_splitter.create_documents([markdown_text])
docs
[Document(page_content='# 🦜️🔗 LangChain\n\n⚡ Building applications with LLMs through composability ⚡', lookup_str='', metadata={}, lookup_index=0),
Document(page_content="Quick Install\n\n```bash\n# Hopefully this code block isn't split\npip install langchain", lookup_str='', metadata={}, lookup_index=0),
Document(page_content='As an open source project in a rapidly developing field, we are extremely open to contributions.', lookup_str='', metadata={}, lookup_index=0)]
previous
Latex Text Splitter
next
NLTK Text Splitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
tiktoken (OpenAI) Length Function
tiktoken (OpenAI) Length Function#
You can also use tiktoken, an open-source tokenizer package from OpenAI, to estimate the number of tokens used. This will probably be more accurate for OpenAI’s models.
How the text is split: by character passed in
How the chunk size is measured: by tiktoken tokenizer
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.
Last year COVID-19 kept us apart. This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
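If you know which model you will be calling, you can also build the splitter from a model name instead of an encoding name. A hedged sketch, assuming your tiktoken version recognizes the model:
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(model_name="gpt-3.5-turbo", chunk_size=100, chunk_overlap=0)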
previous
Spacy Text Splitter
next
Tiktoken Text Splitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
Tiktoken Text Splitter
Tiktoken Text Splitter#
How the text is split: by tiktoken tokens
How the chunk size is measured: by tiktoken tokens
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import TokenTextSplitter
text_splitter = TokenTextSplitter(chunk_size=10, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our
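TokenTextSplitter defaults to the gpt2 encoding. A hedged sketch of choosing a different encoding, assuming cl100k_base is available in your tiktoken version:
text_splitter = TokenTextSplitter(encoding_name="cl100k_base", chunk_size=10, chunk_overlap=0)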
previous
tiktoken (OpenAI) Length Function
next
Vectorstores
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
Spacy Text Splitter
Spacy Text Splitter#
Another alternative to NLTK is to use Spacy.
How the text is split: by Spacy
How the chunk size is measured: by length function passed in (defaults to number of characters)
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import SpacyTextSplitter
text_splitter = SpacyTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.
Members of Congress and the Cabinet.
Justices of the Supreme Court.
My fellow Americans.
Last year COVID-19 kept us apart.
This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents.
But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
And with an unwavering resolve that freedom will always triumph over tyranny.
Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways.
But he badly miscalculated.
He thought he could roll into Ukraine and the world would roll over.
Instead he met a wall of strength he never imagined.
He met the Ukrainian people.
From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.
previous
RecursiveCharacterTextSplitter
next
tiktoken (OpenAI) Length Function
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
Character Text Splitter
Character Text Splitter#
This is a simpler method. It splits based on a single character (by default “\n\n”) and measures chunk length by number of characters.
How the text is split: by single character
How the chunk size is measured: by length function passed in (defaults to number of characters)
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(
separator = "\n\n",
chunk_size = 1000,
chunk_overlap = 200,
length_function = len,
)
texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={} lookup_index=0
Here’s an example of passing metadata along with the documents; notice that it is split along with the documents.
metadatas = [{"document": 1}, {"document": 2}]
documents = text_splitter.create_documents([state_of_the_union, state_of_the_union], metadatas=metadatas)
print(documents[0])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={'document': 1} lookup_index=0
previous
Getting Started
next
Hugging Face Length Function
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
Hugging Face Length Function
Hugging Face Length Function#
Most LLMs are constrained by the number of tokens that you can pass in, which is not the same as the number of characters. To get a more accurate estimate, we can use a Hugging Face tokenizer to count the text length.
How the text is split: by character passed in
How the chunk size is measured: by Hugging Face tokenizer
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(tokenizer, chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.
Last year COVID-19 kept us apart. This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
previous
Character Text Splitter
next
Latex Text Splitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
HTML
Contents
Loading HTML with BeautifulSoup4
HTML#
This covers how to load HTML documents into a document format that we can use downstream.
from langchain.document_loaders import UnstructuredHTMLLoader
loader = UnstructuredHTMLLoader("example_data/fake-content.html")
data = loader.load()
data
[Document(page_content='My First Heading\n\nMy first paragraph.', lookup_str='', metadata={'source': 'example_data/fake-content.html'}, lookup_index=0)]
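UnstructuredHTMLLoader can also return the individual page elements rather than one combined document. A hedged sketch, assuming the "elements" mode in your version of the unstructured package:
loader = UnstructuredHTMLLoader("example_data/fake-content.html", mode="elements")
data = loader.load()
data[0]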
Loading HTML with BeautifulSoup4#
We can also use BeautifulSoup4 to load HTML documents using the BSHTMLLoader. This extracts the text from the HTML into page_content, and the page title as title into metadata.
from langchain.document_loaders import BSHTMLLoader
loader = BSHTMLLoader("example_data/fake-content.html")
data = loader.load()
data
[Document(page_content='\n\nTest Title\n\n\nMy First Heading\nMy first paragraph.\n\n\n', lookup_str='', metadata={'source': 'example_data/fake-content.html', 'title': 'Test Title'}, lookup_index=0)]
previous
Hacker News
next
HuggingFace dataset loader
Contents
Loading HTML with BeautifulSoup4
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
Copy Paste
Contents
Metadata
Copy Paste#
This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don’t even need to use a DocumentLoader, but rather can just construct the Document directly.
from langchain.docstore.document import Document
text = "..... put the text you copy pasted here......"
doc = Document(page_content=text)
Metadata#
If you want to add metadata about where you got this piece of text, you can easily do so with the metadata key.
metadata = {"source": "internet", "date": "Friday"}
doc = Document(page_content=text, metadata=metadata)
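A Document built this way plugs into the rest of the pipeline like any loaded document. A minimal sketch, reusing a text splitter from the text splitter docs:
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=20)
docs = text_splitter.split_documents([doc])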
previous
Confluence
next
CSV Loader
Contents
Metadata
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
Spreedly
Spreedly#
This notebook covers how to load data from the Spreedly REST API into a format that can be ingested into LangChain, along with example usage for vectorization.
Note: this notebook assumes the following packages are installed: openai, chromadb, and tiktoken.
import os
from langchain.document_loaders import SpreedlyLoader
from langchain.indexes import VectorstoreIndexCreator
The Spreedly API requires an access token, which can be found inside the Spreedly Admin Console.
This document loader does not currently support pagination, nor access to more complex objects which require additional parameters. It also requires a resource option that defines which objects you want to load.
The following resources are available:
gateways_options: Documentation
gateways: Documentation
receivers_options: Documentation
receivers: Documentation
payment_methods: Documentation
certificates: Documentation
transactions: Documentation
environments: Documentation
spreedly_loader = SpreedlyLoader(os.environ["SPREEDLY_ACCESS_TOKEN"], "gateways_options")
# Create a vectorstore retriever from the loader
# see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details
index = VectorstoreIndexCreator().from_loaders([spreedly_loader])
spreedly_doc_retriever = index.vectorstore.as_retriever()
Using embedded DuckDB without persistence: data will be transient
# Test the retriever
spreedly_doc_retriever.get_relevant_documents("CRC")
[Document(page_content='installment_grace_period_duration\nreference_data_code\ninvoice_number\ntax_management_indicator\noriginal_amount\ninvoice_amount\nvat_tax_rate\nmobile_remote_payment_type\ngratuity_amount\nmdd_field_1\nmdd_field_2\nmdd_field_3\nmdd_field_4\nmdd_field_5\nmdd_field_6\nmdd_field_7\nmdd_field_8\nmdd_field_9\nmdd_field_10\nmdd_field_11\nmdd_field_12\nmdd_field_13\nmdd_field_14\nmdd_field_15\nmdd_field_16\nmdd_field_17\nmdd_field_18\nmdd_field_19\nmdd_field_20\nsupported_countries: US\nAE\nBR\nCA\nCN\nDK\nFI\nFR\nDE\nIN\nJP\nMX\nNO\nSE\nGB\nSG\nLB\nPK\nsupported_cardtypes: visa\nmaster\namerican_express\ndiscover\ndiners_club\njcb\ndankort\nmaestro\nelo\nregions: asia_pacific\neurope\nlatin_america\nnorth_america\nhomepage: http://www.cybersource.com\ndisplay_api_url: https://ics2wsa.ic3.com/commerce/1.x/transactionProcessor\ncompany_name: CyberSource', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}),
Document(page_content='BG\nBH\nBI\nBJ\nBM\nBN\nBO\nBR\nBS\nBT\nBW\nBY\nBZ\nCA\nCC\nCF\nCH\nCK\nCL\nCM\nCN\nCO\nCR\nCV\nCX\nCY\nCZ\nDE\nDJ\nDK\nDO\nDZ\nEC\nEE\nEG\nEH\nES\nET\nFI\nFJ\nFK\nFM\nFO\nFR\nGA\nGB\nGD\nGE\nGF\nGG\nGH\nGI\nGL\nGM\nGN\nGP\nGQ\nGR\nGT\nGU\nGW\nGY\nHK\nHM\nHN\nHR\nHT\nHU\nID\nIE\nIL\nIM\nIN\nIO\nIS\nIT\nJE\nJM\nJO\nJP\nKE\nKG\nKH\nKI\nKM\nKN\nKR\nKW\nKY\nKZ\nLA\nLC\nLI\nLK\nLS\nLT\nLU\nLV\nMA\nMC\nMD\nME\nMG\nMH\nMK\nML\nMN\nMO\nMP\nMQ\nMR\nMS\nMT\nMU\nMV\nMW\nMX\nMY\nMZ\nNA\nNC\nNE\nNF\nNG\nNI\nNL\nNO\nNP\nNR\nNU\nNZ\nOM\nPA\nPE\nPF\nPH\nPK\nPL\nPN\nPR\nPT\nPW\nPY\nQA\nRE\nRO\nRS\nRU\nRW\nSA\nSB\nSC\nSE\nSG\nSI\nSK\nSL\nSM\nSN\nST\nSV\nSZ\nTC\nTD\nTF\nTG\nTH\nTJ\nTK\nTM\nTO\nTR\nTT\nTV\nTW\nTZ\nUA\nUG\nUS\nUY\nUZ\nVA\nVC\nVE\nVI\nVN\nVU\nWF\nWS\nYE\nYT\nZA\nZM\nsupported_cardtypes: visa\nmaster\namerican_express\ndiscover\njcb\nmaestro\nelo\nnaranja\ncabal\nunionpay\nregions: asia_pacific\neurope\nmiddle_east\nnorth_america\nhomepage: http://worldpay.com\ndisplay_api_url: https://secure.worldpay.com/jsp/merchant/xml/paymentService.jsp\ncompany_name: WorldPay', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}),
Document(page_content='gateway_specific_fields: receipt_email\nradar_session_id\nskip_radar_rules\napplication_fee\nstripe_account\nmetadata\nidempotency_key\nreason\nrefund_application_fee\nrefund_fee_amount\nreverse_transfer\naccount_id\ncustomer_id\nvalidate\nmake_default\ncancellation_reason\ncapture_method\nconfirm\nconfirmation_method\ncustomer\ndescription\nmoto\noff_session\non_behalf_of\npayment_method_types\nreturn_email\nreturn_url\nsave_payment_method\nsetup_future_usage\nstatement_descriptor\nstatement_descriptor_suffix\ntransfer_amount\ntransfer_destination\ntransfer_group\napplication_fee_amount\nrequest_three_d_secure\nerror_on_requires_action\nnetwork_transaction_id\nclaim_without_transaction_id\nfulfillment_date\nevent_type\nmodal_challenge\nidempotent_request\nmerchant_reference\ncustomer_reference\nshipping_address_zip\nshipping_from_zip\nshipping_amount\nline_items\nsupported_countries: AE\nAT\nAU\nBE\nBG\nBR\nCA\nCH\nCY\nCZ\nDE\nDK\nEE\nES\nFI\nFR\nGB\nGR\nHK\nHU\nIE\nIN\nIT\nJP\nLT\nLU\nLV\nMT\nMX\nMY\nNL\nNO\nNZ\nPL\nPT\nRO\nSE\nSG\nSI\nSK\nUS\nsupported_cardtypes: visa', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}),
Document(page_content='mdd_field_57\nmdd_field_58\nmdd_field_59\nmdd_field_60\nmdd_field_61\nmdd_field_62\nmdd_field_63\nmdd_field_64\nmdd_field_65\nmdd_field_66\nmdd_field_67\nmdd_field_68\nmdd_field_69\nmdd_field_70\nmdd_field_71\nmdd_field_72\nmdd_field_73\nmdd_field_74\nmdd_field_75\nmdd_field_76\nmdd_field_77\nmdd_field_78\nmdd_field_79\nmdd_field_80\nmdd_field_81\nmdd_field_82\nmdd_field_83\nmdd_field_84\nmdd_field_85\nmdd_field_86\nmdd_field_87\nmdd_field_88\nmdd_field_89\nmdd_field_90\nmdd_field_91\nmdd_field_92\nmdd_field_93\nmdd_field_94\nmdd_field_95\nmdd_field_96\nmdd_field_97\nmdd_field_98\nmdd_field_99\nmdd_field_100\nsupported_countries: US\nAE\nBR\nCA\nCN\nDK\nFI\nFR\nDE\nIN\nJP\nMX\nNO\nSE\nGB\nSG\nLB\nPK\nsupported_cardtypes: visa\nmaster\namerican_express\ndiscover\ndiners_club\njcb\nmaestro\nelo\nunion_pay\ncartes_bancaires\nmada\nregions: asia_pacific\neurope\nlatin_america\nnorth_america\nhomepage: http://www.cybersource.com\ndisplay_api_url: https://api.cybersource.com\ncompany_name: CyberSource REST', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'})]
previous
Slack (Local Exported Zipfile)
next
Subtitle Files
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
AZLyrics
AZLyrics#
This covers how to load AZLyrics webpages into a document format that we can use downstream.
from langchain.document_loaders import AZLyricsLoader
loader = AZLyricsLoader("https://www.azlyrics.com/lyrics/mileycyrus/flowers.html")
data = loader.load()
data
[Document(page_content="Miley Cyrus - Flowers Lyrics | AZLyrics.com\n\r\nWe were good, we were gold\nKinda dream that can't be sold\nWe were right till we weren't\nBuilt a home and watched it burn\n\nI didn't wanna leave you\nI didn't wanna lie\nStarted to cry but then remembered I\n\nI can buy myself flowers\n...\n", lookup_str='', metadata={'source': 'https://www.azlyrics.com/lyrics/mileycyrus/flowers.html'}, lookup_index=0)]
previous
Arxiv
next
Azure Blob Storage Container
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
Airbyte JSON
Airbyte JSON#
This covers how to load any source from Airbyte into a local JSON file that can be read in as a document.
Prereqs:
Have Docker Desktop installed
Steps:
Clone Airbyte from GitHub - git clone https://github.com/airbytehq/airbyte.git
Switch into Airbyte directory - cd airbyte
Start Airbyte - docker compose up
In your browser, just visit http://localhost:8000. You will be asked for a username and password. By default, that’s username airbyte and password password.
Setup any source you wish.
Set destination as Local JSON, with specified destination path - let’s say /json_data. Set up manual sync.
Run the connection!
To see what files are created, you can navigate to: file:///tmp/airbyte_local
Find your data and copy the path. That path should be passed to the loader below. It should start with /tmp/airbyte_local
from langchain.document_loaders import AirbyteJSONLoader
!ls /tmp/airbyte_local/json_data/
_airbyte_raw_pokemon.jsonl
loader = AirbyteJSONLoader('/tmp/airbyte_local/json_data/_airbyte_raw_pokemon.jsonl')
data = loader.load()
print(data[0].page_content[:500])
abilities:
ability:
name: blaze
url: https://pokeapi.co/api/v2/ability/66/
is_hidden: False
slot: 1
ability:
name: solar-power
url: https://pokeapi.co/api/v2/ability/94/
is_hidden: True
slot: 3
base_experience: 267
forms:
name: charizard
url: https://pokeapi.co/api/v2/pokemon-form/6/
game_indices:
game_index: 180
version:
name: red
url: https://pokeapi.co/api/v2/version/1/
game_index: 180
version:
name: blue
url: https://pokeapi.co/api/v2/version/2/
game_index: 180
version:
n
previous
CoNLL-U
next
Apify Dataset
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
Azure Blob Storage Container
Contents
Specifying a prefix
Azure Blob Storage Container#
This covers how to load document objects from a container on Azure Blob Storage.
from langchain.document_loaders import AzureBlobStorageContainerLoader
#!pip install azure-storage-blob
loader = AzureBlobStorageContainerLoader(conn_str="<conn_str>", container="<container>")
loader.load()
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpaa9xl6ch/fake.docx'}, lookup_index=0)]
Specifying a prefix#
You can also specify a prefix for more fine-grained control over what files to load.
loader = AzureBlobStorageContainerLoader(conn_str="<conn_str>", container="<container>", prefix="<prefix>")
loader.load()
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)]
previous
AZLyrics
next
Azure Blob Storage File
Contents
Specifying a prefix
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
Arxiv
Contents
Installation
Examples
Arxiv#
arXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.
This notebook shows how to load scientific articles from Arxiv.org into a document format that we can use downstream.
Installation#
First, you need to install the arxiv python package.
!pip install arxiv
Second, you need to install the PyMuPDF python package, which transforms PDF files downloaded from the arxiv.org site into text.
!pip install pymupdf
Examples#
ArxivLoader has these arguments:
query: free text used to find documents on arXiv
optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments.
optional load_all_available_meta: default=False. By default, only the most important fields are downloaded: Published (the date the document was published/last updated), Title, Authors, Summary. If True, other fields are also downloaded (see the sketch after the examples below).
from langchain.document_loaders.base import Document
from langchain.document_loaders import ArxivLoader
docs = ArxivLoader(query="1605.08386", load_max_docs=2).load()
len(docs)
docs[0].metadata # meta-information of the Document
{'Published': '2016-05-26',
'Title': 'Heat-bath random walks with Markov bases',
'Authors': 'Caprice Stanley, Tobias Windisch',
'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'}
docs[0].page_content[:400] # all pages of the Document content
'arXiv:1605.08386v1 [math.CO] 26 May 2016\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\nCAPRICE STANLEY AND TOBIAS WINDISCH\nAbstract. Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on fibers of a\nfixed integer matrix can be bounded from above by a constant. We then study the mixing\nbehaviour of heat-b'
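To pull the full set of fields as well, turn on the load_all_available_meta flag described above. A minimal sketch:
docs = ArxivLoader(query="1605.08386", load_max_docs=2, load_all_available_meta=True).load()
docs[0].metadata.keys()  # now includes fields beyond the four defaults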
previous
Apify Dataset
next
AZLyrics
Contents
Installation
Examples
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
Confluence
Confluence#
A loader for Confluence pages.
This currently supports both username/api_key and OAuth2 login.
Specify a list of page_ids and/or a space_key to load the corresponding pages into Document objects; if both are specified, the union of both sets will be returned.
You can also specify a boolean include_attachments to include attachments; this is set to False by default. If set to True, all attachments will be downloaded and ConfluenceReader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG, SVG, Word and Excel.
Hint: space_key and page_id can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
url="https://yoursite.atlassian.com/wiki",
username="me",
api_key="12345"
)
documents = loader.load(space_key="SPACE", include_attachments=True, limit=50)
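To load specific pages by ID instead of (or in addition to) a whole space, pass page_ids. A hedged sketch with placeholder page IDs:
documents = loader.load(page_ids=["123456", "7891011"], include_attachments=False)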
previous
College Confidential
next
Copy Paste
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
Apify Dataset
Contents
Prerequisites
An example with question answering
Apify Dataset#
This notebook shows how to load Apify datasets to LangChain.
Apify Dataset is a scalable append-only storage with sequential access, built for storing structured web scraping results, such as a list of products or Google SERPs, which can then be exported to various formats like JSON, CSV, or Excel. Datasets are mainly used to save results of Apify Actors—serverless cloud programs for various web scraping, crawling, and data extraction use cases.
Prerequisites#
You need to have an existing dataset on the Apify platform. If you don’t have one, please first check out this notebook on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs.
First, import ApifyDatasetLoader into your source code:
from langchain.document_loaders import ApifyDatasetLoader
from langchain.document_loaders.base import Document
Then provide a function that maps Apify dataset record fields to LangChain Document format.
For example, if your dataset items are structured like this:
{
"url": "https://apify.com",
"text": "Apify is the best web scraping and automation platform."
}
The mapping function in the code below will convert them to LangChain Document format, so that you can use them further with any LLM (e.g. for question answering).
loader = ApifyDatasetLoader(
dataset_id="your-dataset-id",
dataset_mapping_function=lambda dataset_item: Document(
page_content=dataset_item["text"], metadata={"source": dataset_item["url"]}
),
)
data = loader.load()
An example with question answering#
In this example, we use data from a dataset to answer a question.
from langchain.docstore.document import Document
from langchain.document_loaders import ApifyDatasetLoader
from langchain.indexes import VectorstoreIndexCreator
loader = ApifyDatasetLoader(
dataset_id="your-dataset-id",
dataset_mapping_function=lambda item: Document(
page_content=item["text"] or "", metadata={"source": item["url"]}
),
)
index = VectorstoreIndexCreator().from_loaders([loader])
query = "What is Apify?"
result = index.query_with_sources(query)
print(result["answer"])
print(result["sources"])
Apify is a platform for developing, running, and sharing serverless cloud programs. It enables users to create web scraping and automation tools and publish them on the Apify platform.
https://docs.apify.com/platform/actors, https://docs.apify.com/platform/actors/running/actors-in-store, https://docs.apify.com/platform/security, https://docs.apify.com/platform/actors/examples
previous
Airbyte JSON
next
Arxiv
Contents
Prerequisites
An example with question answering
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 02, 2023.
.ipynb
.pdf
HuggingFace dataset loader
Contents
Example
HuggingFace dataset loader#
This notebook shows how to load Hugging Face Hub datasets to LangChain.
The Hugging Face Hub hosts a large number of community-curated datasets for a diverse range of tasks such as translation, automatic speech recognition, and image classification.
from langchain.document_loaders import HuggingFaceDatasetLoader
dataset_name="imdb"
page_content_column="text"
loader=HuggingFaceDatasetLoader(dataset_name,page_content_column)
data = loader.load()
data[:15]
[Document(page_content='I rented I AM CURIOUS-YELLOW from my video store because of all the controversy that surrounded it when it was first released in 1967. I also heard that at first it was seized by U.S. customs if it ever tried to enter this country, therefore being a fan of films considered "controversial" I really had to see this for myself.<br /><br />The plot is centered around a young Swedish drama student named Lena who wants to learn everything she can about life. In particular she wants to focus her attentions to making some sort of documentary on what the average Swede thought about certain political issues such as the Vietnam War and race issues in the United States. In between asking politicians and ordinary denizens of Stockholm about their opinions on politics, she has sex with her drama teacher, classmates, and married men.<br /><br />What kills me about I AM CURIOUS-YELLOW is that 40 years ago, this was considered pornographic. Really, the sex and nudity scenes are few and far between, even then it\'s not shot like some cheaply made porno. While my countrymen mind find it shocking, in reality sex and nudity are a major staple in Swedish cinema. Even Ingmar Bergman, arguably their answer to good old boy John Ford, had sex scenes in his films.<br /><br />I do commend the filmmakers for the fact that any sex shown in the film is shown for artistic purposes rather than just to shock people and make money to be shown in pornographic theaters in America. I AM CURIOUS-YELLOW is a good film for anyone wanting to study the meat and potatoes (no pun intended) of Swedish cinema. But really, this film doesn\'t have much of a plot.', metadata={'label': 0}),
Document(page_content='"I Am Curious: Yellow" is a risible and pretentious steaming pile. It doesn\'t matter what one\'s political views are because this film can hardly be taken seriously on any level. As for the claim that frontal male nudity is an automatic NC-17, that isn\'t true. I\'ve seen R-rated films with male nudity. Granted, they only offer some fleeting views, but where are the R-rated films with gaping vulvas and flapping labia? Nowhere, because they don\'t exist. The same goes for those crappy cable shows: schlongs swinging in the breeze but not a clitoris in sight. And those pretentious indie movies like The Brown Bunny, in which we\'re treated to the site of Vincent Gallo\'s throbbing johnson, but not a trace of pink visible on Chloe Sevigny. Before crying (or implying) "double-standard" in matters of nudity, the mentally obtuse should take into account one unavoidably obvious anatomical difference between men and women: there are no genitals on display when actresses appears nude, and the same cannot be said for a man. In fact, you generally won\'t see female genitals in an American film in anything short of porn or explicit erotica. This alleged double-standard is less a double standard than an admittedly depressing ability to come to terms culturally with the insides of women\'s bodies.', metadata={'label': 0}),
Document(page_content="If only to avoid making this type of film in the future. This film is interesting as an experiment but tells no cogent story.<br /><br />One might feel virtuous for sitting thru it because it touches on so many IMPORTANT issues but it does so without any discernable motive. The viewer comes away with no new perspectives (unless one comes up with one while one's mind wanders, as it will invariably do during this pointless film).<br /><br />One might better spend one's time staring out a window at a tree growing.<br /><br />", metadata={'label': 0}),
Document(page_content="This film was probably inspired by Godard's Masculin, féminin and I urge you to see that film instead.<br /><br />The film has two strong elements and those are, (1) the realistic acting (2) the impressive, undeservedly good, photo. Apart from that, what strikes me most is the endless stream of silliness. Lena Nyman has to be most annoying actress in the world. She acts so stupid and with all the nudity in this film,...it's unattractive. Comparing to Godard's film, intellectuality has been replaced with stupidity. Without going too far on this subject, I would say that follows from the difference in ideals between the French and the Swedish society.<br /><br />A movie of its time, and place. 2/10.", metadata={'label': 0}),
Document(page_content='Oh, brother...after hearing about this ridiculous film for umpteen years all I can think of is that old Peggy Lee song..<br /><br />"Is that all there is??" ...I was just an early teen when this smoked fish hit the U.S. I was too young to get in the theater (although I did manage to sneak into "Goodbye Columbus"). Then a screening at a local film museum beckoned - Finally I could see this film, except now I was as old as my parents were when they schlepped to see it!!<br /><br />The ONLY reason this film was not condemned to the anonymous sands of time was because of the obscenity case sparked by its U.S. release. MILLIONS of people flocked to this stinker, thinking they were going to see a sex film...Instead, they got lots of closeups of gnarly, repulsive Swedes, on-street interviews in bland shopping malls, asinie political pretension...and feeble who-cares simulated sex scenes with saggy, pale actors.<br /><br />Cultural icon, holy grail, historic artifact..whatever this thing was, shred it, burn it, then stuff the ashes in a lead box!<br /><br />Elite esthetes still scrape to find value in its boring pseudo revolutionary political spewings..But if it weren\'t for the censorship scandal, it would have been ignored, then forgotten.<br /><br />Instead, the "I Am Blank, Blank" rhythymed title was repeated endlessly for years as a titilation for porno films (I am Curious, Lavender - for gay films, I Am Curious, Black - for blaxploitation films, etc..) and every ten years or so the thing rises from the dead, to be viewed by a new generation of suckers who want to see that "naughty sex film" that "revolutionized the film industry"...<br /><br />Yeesh, avoid like the plague..Or if you MUST see it - rent the video and fast forward to the "dirty" parts, just to get it over with.<br /><br />', metadata={'label': 0}),
Document(page_content="I would put this at the top of my list of films in the category of unwatchable trash! There are films that are bad, but the worst kind are the ones that are unwatchable but you are suppose to like them because they are supposed to be good for you! The sex sequences, so shocking in its day, couldn't even arouse a rabbit. The so called controversial politics is strictly high school sophomore amateur night Marxism. The film is self-consciously arty in the worst sense of the term. The photography is in a harsh grainy black and white. Some scenes are out of focus or taken from the wrong angle. Even the sound is bad! And some people call this art?<br /><br />", metadata={'label': 0}),
Document(page_content="Whoever wrote the screenplay for this movie obviously never consulted any books about Lucille Ball, especially her autobiography. I've never seen so many mistakes in a biopic, ranging from her early years in Celoron and Jamestown to her later years with Desi. I could write a whole list of factual errors, but it would go on for pages. In all, I believe that Lucille Ball is one of those inimitable people who simply cannot be portrayed by anyone other than themselves. If I were Lucie Arnaz and Desi, Jr., I would be irate at how many mistakes were made in this film. The filmmakers tried hard, but the movie seems awfully sloppy to me.", metadata={'label': 0}),