sum(correct) / len(correct)
1.0
for i, example in enumerate(dataset):
    print("input: ", example["question"])
    print("expected output :", example["answer"])
    print("prediction: ", numeric_output[i])
input: 5
expected output : 5.0
prediction: 5.0
input: 5 + 3
expected output : 8.0
prediction: 8.0
input: 2^3.171
expected output : 9.006708689094099
prediction: 9.006708689094099
input: 2 ^3.171
expected output : 9.006708689094099
prediction: 9.006708689094099
input: two to the power of three point one hundred seventy one
expected output : 9.006708689094099
prediction: 9.006708689094099
input: five + three squared minus 1
expected output : 13.0
prediction: 13.0
input: 2097 times 27.31
expected output : 57269.07
prediction: 57269.07
input: two thousand ninety seven times twenty seven point thirty one
expected output : 57269.07
prediction: 57269.07
input: 209758 / 2714
expected output : 77.28739867354459
prediction: 77.28739867354459
input: 209758.857 divided by 2714.31
expected output : 77.27888745205964
prediction: 77.27888745205964
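For context, a minimal sketch of how numeric_output and the correct list used above might be assembled from the chain's predictions. The "answer" key and the "Answer:" prefix are assumptions about the chain's output format, not this page's exact code:
# Assumed format: each prediction dict holds text like "Answer: 8.0" under "answer".
numeric_output = [float(p["answer"].split("Answer:")[-1].strip()) for p in predictions]
correct = [example["answer"] == numeric_output[i] for i, example in enumerate(dataset)]
print(sum(correct) / len(correct))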
https://python.langchain.com/en/latest/use_cases/evaluation/llm_math.html
Question Answering Benchmarking: State of the Union Address#
Here we go over how to benchmark performance on a question answering task over a state of the union address.
It is highly recommended that you do any evaluation/benchmarking with tracing enabled. See here for an explanation of what tracing is and how to set it up.
# Comment this out if you are NOT using tracing
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
Loading the data#
First, let’s load the data.
from langchain.evaluation.loading import load_dataset
dataset = load_dataset("question-answering-state-of-the-union")
Found cached dataset json (/Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--question-answering-state-of-the-union-a7e5a3b2db4f440d/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51)
Setting up a chain#
Now we need to create some pipelines for doing question answering. The first step is creating an index over the data in question.
from langchain.document_loaders import TextLoader
loader = TextLoader("../../modules/state_of_the_union.txt")
from langchain.indexes import VectorstoreIndexCreator
vectorstore = VectorstoreIndexCreator().from_loaders([loader]).vectorstore
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
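If you want more control over how the index is built, VectorstoreIndexCreator can also be given the embedding model and text splitter to use; a minimal sketch, where the specific parameter values are only illustrative:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
# Illustrative settings; the defaults are used when these are omitted.
vectorstore = VectorstoreIndexCreator(
    embedding=OpenAIEmbeddings(),
    text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0),
).from_loaders([loader]).vectorstore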
Now we can create a question answering chain.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
chain = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=vectorstore.as_retriever(), input_key="question")
Make a prediction#
First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows us to explore the outputs in detail, and it is also a lot cheaper than running over multiple datapoints.
chain(dataset[0])
{'question': 'What is the purpose of the NATO Alliance?',
'answer': 'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.',
'result': ' The NATO Alliance was created to secure peace and stability in Europe after World War 2.'}
Make many predictions#
Now we can make predictions
predictions = chain.apply(dataset)
Evaluate performance#
Now we can evaluate the predictions. The first thing we can do is look at them by eye.
predictions[0]
{'question': 'What is the purpose of the NATO Alliance?',
'answer': 'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.',
'result': ' The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.'}
Next, we can use a language model to score them programmatically
from langchain.evaluation.qa import QAEvalChain
llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(dataset, predictions, question_key="question", prediction_key="result")
We can add in the graded output to the predictions dict and then get a count of the grades.
for i, prediction in enumerate(predictions):
    prediction['grade'] = graded_outputs[i]['text']
from collections import Counter
Counter([pred['grade'] for pred in predictions])
Counter({' CORRECT': 7, ' INCORRECT': 4})
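To turn those grade counts into a single accuracy number, a small follow-on sketch:
counts = Counter([pred['grade'] for pred in predictions])
accuracy = counts[' CORRECT'] / sum(counts.values())
print(f"Accuracy: {accuracy:.2%}")  # 7 of 11 correct, roughly 64%, for the counts above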
We can also filter the datapoints to the incorrect examples and look at them.
incorrect = [pred for pred in predictions if pred['grade'] == " INCORRECT"]
incorrect[0]
{'question': 'What is the U.S. Department of Justice doing to combat the crimes of Russian oligarchs?',
'answer': 'The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs.',
'result': ' The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs and is naming a chief prosecutor for pandemic fraud.',
'grade': ' INCORRECT'}
https://python.langchain.com/en/latest/use_cases/evaluation/qa_benchmarking_sota.html
QA Generation#
This notebook shows how to use the QAGenerationChain to come up with question-answer pairs over a specific document.
This is useful because you often may not have data to evaluate your question-answering system on, so this is a cheap and lightweight way to generate it!
from langchain.document_loaders import TextLoader
loader = TextLoader("../../modules/state_of_the_union.txt")
doc = loader.load()[0]
from langchain.chat_models import ChatOpenAI
from langchain.chains import QAGenerationChain
chain = QAGenerationChain.from_llm(ChatOpenAI(temperature = 0))
qa = chain.run(doc.page_content)
qa[1]
{'question': 'What is the U.S. Department of Justice doing to combat the crimes of Russian oligarchs?',
'answer': 'The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs.'}
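The generated pairs can then serve as the evaluation dataset for the benchmarking flow shown earlier; a minimal sketch, assuming qa_chain is a RetrievalQA chain over the same document, built with input_key="question" as in the previous section:
from langchain.evaluation.qa import QAEvalChain
from langchain.llms import OpenAI
# qa_chain is assumed to exist (see above); each generated pair supplies the "question" key.
predictions = [qa_chain(pair) for pair in qa]
eval_chain = QAEvalChain.from_llm(OpenAI(temperature=0))
graded = eval_chain.evaluate(qa, predictions, question_key="question", prediction_key="result")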
https://python.langchain.com/en/latest/use_cases/evaluation/qa_generation.html
SQL Question Answering Benchmarking: Chinook#
Here we go over how to benchmark performance on a question answering task over a SQL database.
It is highly recommended that you do any evaluation/benchmarking with tracing enabled. See here for an explanation of what tracing is and how to set it up.
# Comment this out if you are NOT using tracing
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
Loading the data#
First, let’s load the data.
from langchain.evaluation.loading import load_dataset
dataset = load_dataset("sql-qa-chinook")
Downloading and preparing dataset json/LangChainDatasets--sql-qa-chinook to /Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--sql-qa-chinook-7528565d2d992b47/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Dataset json downloaded and prepared to /Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--sql-qa-chinook-7528565d2d992b47/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51. Subsequent calls will reuse this data.
dataset[0]
{'question': 'How many employees are there?', 'answer': '8'}
Setting up a chain#
This uses the example Chinook database.
To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file in a notebooks folder at the root of this repository.
Note that here we load a simple chain. If you want to experiment with more complex chains, or an agent, just create the chain object in a different way.
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
db = SQLDatabase.from_uri("sqlite:///../../../notebooks/Chinook.db")
llm = OpenAI(temperature=0)
Now we can create a SQL database chain.
chain = SQLDatabaseChain(llm=llm, database=db, input_key="question")
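As the note above suggests, a more involved chain can be dropped in without changing the rest of the benchmark; a minimal sketch using SQLDatabaseSequentialChain (choosing this particular alternative, and passing input_key through from_llm, are assumptions):
from langchain.chains import SQLDatabaseSequentialChain
# Alternative chain; the prediction and evaluation steps below stay the same.
chain = SQLDatabaseSequentialChain.from_llm(llm, db, input_key="question")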
Make a prediction#
First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows us to explore the outputs in detail, and it is also a lot cheaper than running over multiple datapoints.
chain(dataset[0])
{'question': 'How many employees are there?',
'answer': '8',
'result': ' There are 8 employees.'}
Make many predictions#
Now we can make predictions. Note that we add a try-except here because this chain can sometimes error (if the SQL is written incorrectly, etc.).
predictions = []
predicted_dataset = []
error_dataset = []
for data in dataset:
    try:
        predictions.append(chain(data))
        predicted_dataset.append(data)
    except:
        error_dataset.append(data)
Evaluate performance#
Now we can evaluate the predictions. We can use a language model to score them programmatically
from langchain.evaluation.qa import QAEvalChain
llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(predicted_dataset, predictions, question_key="question", prediction_key="result")
We can add in the graded output to the predictions dict and then get a count of the grades.
for i, prediction in enumerate(predictions):
    prediction['grade'] = graded_outputs[i]['text']
from collections import Counter
Counter([pred['grade'] for pred in predictions])
Counter({' CORRECT': 3, ' INCORRECT': 4})
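Because examples that raised an exception were set aside in error_dataset, they should count as failures when computing overall accuracy; a small sketch:
counts = Counter([pred['grade'] for pred in predictions])
total = len(predicted_dataset) + len(error_dataset)
print(f"{counts[' CORRECT']}/{total} correct ({counts[' CORRECT'] / total:.1%})")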
We can also filter the datapoints to the incorrect examples and look at them.
incorrect = [pred for pred in predictions if pred['grade'] == " INCORRECT"]
incorrect[0]
{'question': 'How many employees are also customers?',
'answer': 'None',
'result': ' 59 employees are also customers.',
'grade': ' INCORRECT'}
https://python.langchain.com/en/latest/use_cases/evaluation/sql_qa_benchmarking_chinook.html
Voice Assistant#
This chain creates a clone of ChatGPT with a few modifications to make it a voice assistant.
It uses the pyttsx3 and speech_recognition libraries to convert text to speech and speech to text, respectively. The prompt template is also changed to make it more suitable for voice assistant use.
from langchain import OpenAI, ConversationChain, LLMChain, PromptTemplate
from langchain.memory import ConversationBufferWindowMemory
template = """Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
|
https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html
|
392312a03b69-1
|
{history}
Human: {human_input}
Assistant:"""
prompt = PromptTemplate(
    input_variables=["history", "human_input"],
    template=template
)
chatgpt_chain = LLMChain(
    llm=OpenAI(temperature=0),
    prompt=prompt,
    verbose=True,
    memory=ConversationBufferWindowMemory(k=2),
)
import speech_recognition as sr
import pyttsx3
engine = pyttsx3.init()
def listen(command_queue=None):  # command_queue is accepted but not used here
    r = sr.Recognizer()
    with sr.Microphone() as source:
        print('Calibrating...')
        r.adjust_for_ambient_noise(source, duration=5)
        # optional parameters to adjust microphone sensitivity
        # r.energy_threshold = 200
        # r.pause_threshold = 0.5
        print('Okay, go!')
        while True:
            text = ''
            print('listening now...')
            try:
                audio = r.listen(source, timeout=5, phrase_time_limit=30)
                print('Recognizing...')
                # whisper model options are found here: https://github.com/openai/whisper#available-models-and-languages
                # other speech recognition models are also available.
                text = r.recognize_whisper(audio, model='medium.en', show_dict=True)['text']
            except Exception as e:
                unrecognized_speech_text = f"Sorry, I didn't catch that. Exception was: {e}"
                text = unrecognized_speech_text
            print(text)
            response_text = chatgpt_chain.predict(human_input=text)
            print(response_text)
            engine.say(response_text)
            engine.runAndWait()
listen(None)
Calibrating...
Okay, go!
listening now...
Recognizing...
C:\Users\jaden\AppData\Roaming\Python\Python310\site-packages\tqdm\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
Hello, Assistant. What's going on?
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Hello, Assistant. What's going on?
Assistant:
> Finished chain.
Hi there! It's great to hear from you. I'm doing well. How can I help you today?
listening now...
Recognizing...
That's cool. Isn't that neat? Yeah, I'm doing great.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Hello, Assistant. What's going on?
AI: Hi there! It's great to hear from you. I'm doing well. How can I help you today?
Human: That's cool. Isn't that neat? Yeah, I'm doing great.
Assistant:
> Finished chain.
That's great to hear! What can I do for you today?
listening now...
Recognizing...
Thank you.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Hello, Assistant. What's going on?
AI: Hi there! It's great to hear from you. I'm doing well. How can I help you today?
Human: That's cool. Isn't that neat? Yeah, I'm doing great.
AI: That's great to hear! What can I do for you today?
Human: Thank you.
Assistant:
> Finished chain.
You're welcome! Is there anything else I can help you with?
listening now...
Recognizing...
I'd like to learn more about neural networks.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: That's cool. Isn't that neat? Yeah, I'm doing great.
AI: That's great to hear! What can I do for you today?
Human: Thank you.
AI: You're welcome! Is there anything else I can help you with?
Human: I'd like to learn more about neural networks.
Assistant:
> Finished chain.
Sure! Neural networks are a type of artificial intelligence that use a network of interconnected nodes to process data and make decisions. They are used in a variety of applications, from image recognition to natural language processing. Neural networks are often used to solve complex problems that are too difficult for traditional algorithms.
listening now...
Recognizing...
Tell me a fun fact about neural networks.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Thank you.
AI: You're welcome! Is there anything else I can help you with?
Human: I'd like to learn more about neural networks.
AI: Sure! Neural networks are a type of artificial intelligence that use a network of interconnected nodes to process data and make decisions. They are used in a variety of applications, from image recognition to natural language processing. Neural networks are often used to solve complex problems that are too difficult for traditional algorithms.
Human: Tell me a fun fact about neural networks.
Assistant:
> Finished chain.
Neural networks are inspired by the way the human brain works. They are composed of interconnected nodes that process data and make decisions, just like neurons in the brain. Neural networks can learn from their mistakes and improve their performance over time, just like humans do.
listening now...
Recognizing...
Tell me about a brand new discovered bird species.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: I'd like to learn more about neural networks.
AI: Sure! Neural networks are a type of artificial intelligence that use a network of interconnected nodes to process data and make decisions. They are used in a variety of applications, from image recognition to natural language processing. Neural networks are often used to solve complex problems that are too difficult for traditional algorithms.
Human: Tell me a fun fact about neural networks.
AI: Neural networks are inspired by the way the human brain works. They are composed of interconnected nodes that process data and make decisions, just like neurons in the brain. Neural networks can learn from their mistakes and improve their performance over time, just like humans do.
Human: Tell me about a brand new discovered bird species.
Assistant:
> Finished chain.
A new species of bird was recently discovered in the Amazon rainforest. The species, called the Spix's Macaw, is a small, blue parrot that is believed to be extinct in the wild. It is the first new species of bird to be discovered in the Amazon in over 100 years.
listening now...
Recognizing...
Tell me a children's story about the importance of honesty and trust.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Tell me a fun fact about neural networks.
AI: Neural networks are inspired by the way the human brain works. They are composed of interconnected nodes that process data and make decisions, just like neurons in the brain. Neural networks can learn from their mistakes and improve their performance over time, just like humans do.
Human: Tell me about a brand new discovered bird species.
AI: A new species of bird was recently discovered in the Amazon rainforest. The species, called the Spix's Macaw, is a small, blue parrot that is believed to be extinct in the wild. It is the first new species of bird to be discovered in the Amazon in over 100 years.
Human: Tell me a children's story about the importance of honesty and trust.
Assistant:
> Finished chain.
Once upon a time, there was a young boy named Jack who lived in a small village. Jack was always honest and trustworthy, and his friends and family knew they could always count on him. One day, Jack was walking through the forest when he stumbled upon a magical tree. The tree told Jack that if he was honest and trustworthy, he would be rewarded with a special gift. Jack was so excited, and he promised to always be honest and trustworthy. Sure enough, the tree rewarded Jack with a beautiful golden apple. From that day forward, Jack was always honest and trustworthy, and he was rewarded with many more magical gifts. The moral of the story is that honesty and trust are the most important things in life.
listening now...
Recognizing...
Wow, Assistant, that was a really good story. Congratulations!
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Tell me about a brand new discovered bird species.
AI: A new species of bird was recently discovered in the Amazon rainforest. The species, called the Spix's Macaw, is a small, blue parrot that is believed to be extinct in the wild. It is the first new species of bird to be discovered in the Amazon in over 100 years.
Human: Tell me a children's story about the importance of honesty and trust.
AI: Once upon a time, there was a young boy named Jack who lived in a small village. Jack was always honest and trustworthy, and his friends and family knew they could always count on him. One day, Jack was walking through the forest when he stumbled upon a magical tree. The tree told Jack that if he was honest and trustworthy, he would be rewarded with a special gift. Jack was so excited, and he promised to always be honest and trustworthy. Sure enough, the tree rewarded Jack with a beautiful golden apple. From that day forward, Jack was always honest and trustworthy, and he was rewarded with many more magical gifts. The moral of the story is that honesty and trust are the most important things in life.
Human: Wow, Assistant, that was a really good story. Congratulations!
Assistant:
> Finished chain.
Thank you! I'm glad you enjoyed it.
listening now...
Recognizing...
Thank you.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Tell me a children's story about the importance of honesty and trust.
AI: Once upon a time, there was a young boy named Jack who lived in a small village. Jack was always honest and trustworthy, and his friends and family knew they could always count on him. One day, Jack was walking through the forest when he stumbled upon a magical tree. The tree told Jack that if he was honest and trustworthy, he would be rewarded with a special gift. Jack was so excited, and he promised to always be honest and trustworthy. Sure enough, the tree rewarded Jack with a beautiful golden apple. From that day forward, Jack was always honest and trustworthy, and he was rewarded with many more magical gifts. The moral of the story is that honesty and trust are the most important things in life.
Human: Wow, Assistant, that was a really good story. Congratulations!
AI: Thank you! I'm glad you enjoyed it.
Human: Thank you.
Assistant:
> Finished chain.
You're welcome!
listening now...
Recognizing...
Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way?
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Wow, Assistant, that was a really good story. Congratulations!
AI: Thank you! I'm glad you enjoyed it.
Human: Thank you.
AI: You're welcome!
Human: Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way?
Assistant:
> Finished chain.
Yes, there are several online brands that offer photo editing and other creative tools without the need to download any software. Adobe Photoshop Express, Pixlr, and Fotor are some of the most popular online photo editing tools. Freq is an online music production platform that allows users to create and share music without downloading any software.
listening now...
Recognizing...
Our whole process of awesome is free.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Thank you.
AI: You're welcome!
Human: Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way?
AI: Yes, there are several online brands that offer photo editing and other creative tools without the need to download any software. Adobe Photoshop Express, Pixlr, and Fotor are some of the most popular online photo editing tools. Freq is an online music production platform that allows users to create and share music without downloading any software.
Human: Our whole process of awesome is free.
Assistant:
> Finished chain.
That's great! It's always nice to have access to free tools and resources.
listening now...
Recognizing...
No, I meant to ask, are those options that you mentioned free? No, I meant to ask, are those options that you mentioned free?
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way?
AI: Yes, there are several online brands that offer photo editing and other creative tools without the need to download any software. Adobe Photoshop Express, Pixlr, and Fotor are some of the most popular online photo editing tools. Freq is an online music production platform that allows users to create and share music without downloading any software.
Human: Our whole process of awesome is free.
AI: That's great! It's always nice to have access to free tools and resources.
Human: No, I meant to ask, are those options that you mentioned free? No, I meant to ask, are those options that you mentioned free?
Assistant:
> Finished chain.
Yes, the online brands I mentioned are all free to use. Adobe Photoshop Express, Pixlr, and Fotor are all free to use, and Freq is a free music production platform.
listening now...
---------------------------------------------------------------------------
KeyboardInterrupt Traceback (most recent call last)
Cell In[6], line 1
----> 1 listen(None)
Cell In[5], line 20, in listen(command_queue)
18 print('listening now...')
19 try:
---> 20 audio = r.listen(source, timeout=5, phrase_time_limit=30)
21 # audio = r.record(source,duration = 5)
22 print('Recognizing...')
File c:\ProgramData\miniconda3\envs\lang\lib\site-packages\speech_recognition\__init__.py:523, in Recognizer.listen(self, source, timeout, phrase_time_limit, snowboy_configuration)
520 if phrase_time_limit and elapsed_time - phrase_start_time > phrase_time_limit:
521 break
--> 523 buffer = source.stream.read(source.CHUNK)
524 if len(buffer) == 0: break # reached end of the stream
525 frames.append(buffer)
File c:\ProgramData\miniconda3\envs\lang\lib\site-packages\speech_recognition\__init__.py:199, in Microphone.MicrophoneStream.read(self, size)
198 def read(self, size):
--> 199 return self.pyaudio_stream.read(size, exception_on_overflow=False)
File c:\ProgramData\miniconda3\envs\lang\lib\site-packages\pyaudio\__init__.py:570, in PyAudio.Stream.read(self, num_frames, exception_on_overflow)
567 if not self._is_input:
568 raise IOError("Not input stream",
569 paCanNotReadFromAnOutputOnlyStream)
--> 570 return pa.read_stream(self._stream, num_frames,
571 exception_on_overflow)
KeyboardInterrupt:
https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html
Question answering over group chat messages#
In this tutorial, we are going to use LangChain + Deep Lake with GPT-4 to semantically search and ask questions over a group chat.
View a working demo here
1. Install required packages#
!python3 -m pip install --upgrade langchain deeplake openai tiktoken
2. Add API keys#
import os
import getpass
from langchain.document_loaders import PyPDFLoader, TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter, CharacterTextSplitter
from langchain.vectorstores import DeepLake
from langchain.chains import ConversationalRetrievalChain, RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
os.environ['ACTIVELOOP_TOKEN'] = getpass.getpass('Activeloop Token:')
os.environ['ACTIVELOOP_ORG'] = getpass.getpass('Activeloop Org:')
org = os.environ['ACTIVELOOP_ORG']
embeddings = OpenAIEmbeddings()
dataset_path = 'hub://' + org + '/data'
3. Create sample data#
You can generate a sample group chat conversation using ChatGPT with this prompt:
Generate a group chat conversation with three friends talking about their day, referencing real places and fictional names. Make it funny and as detailed as possible.
I’ve already generated such a chat in messages.txt. We can keep it simple and use this for our example.
4. Ingest chat embeddings#
We load the messages from the text file, chunk them, and upload them to the Activeloop vector store.
with open("messages.txt") as f:
state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
pages = text_splitter.split_text(state_of_the_union)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
texts = text_splitter.create_documents(pages)
print(texts)
dataset_path = 'hub://'+org+'/data'
embeddings = OpenAIEmbeddings()
db = DeepLake.from_documents(texts, embeddings, dataset_path=dataset_path, overwrite=True)
5. Ask questions#
Now we can ask a question and get an answer back with a semantic search:
db = DeepLake(dataset_path=dataset_path, read_only=True, embedding_function=embeddings)
retriever = db.as_retriever()
retriever.search_kwargs['distance_metric'] = 'cos'
retriever.search_kwargs['k'] = 4
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever, return_source_documents=False)
# What was the restaurant the group was talking about called?
query = input("Enter query:")
# The Hungry Lobster
ans = qa({"query": query})
print(ans)
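ConversationalRetrievalChain is imported above but not used; a minimal sketch of how it could replace RetrievalQA when you want follow-up questions to carry chat history (the follow-up question is only an illustration):
chat_qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(temperature=0), retriever)
chat_history = []
query = "What was the restaurant the group was talking about called?"
result = chat_qa({"question": query, "chat_history": chat_history})
chat_history.append((query, result["answer"]))
follow_up = chat_qa({"question": "Who suggested going there?", "chat_history": chat_history})
print(follow_up["answer"])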
https://python.langchain.com/en/latest/use_cases/question_answering/semantic-search-over-chat.html
Quickstart Guide#
This tutorial gives you a quick walkthrough of building an end-to-end language model application with LangChain.
Installation#
To get started, install LangChain with the following command:
pip install langchain
# or
conda install langchain -c conda-forge
Environment Setup#
Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc.
For this example, we will be using OpenAI’s APIs, so we will first need to install their SDK:
pip install openai
We will then need to set the environment variable in the terminal.
export OPENAI_API_KEY="..."
Alternatively, you could do this from inside the Jupyter notebook (or Python script):
import os
os.environ["OPENAI_API_KEY"] = "..."
Building a Language Model Application: LLMs#
Now that we have installed LangChain and set up our environment, we can start building our language model application.
LangChain provides many modules that can be used to build language model applications. Modules can be combined to create more complex applications, or be used individually for simple applications.
LLMs: Get predictions from a language model#
The most basic building block of LangChain is calling an LLM on some input.
Let’s walk through a simple example of how to do this.
For this purpose, let’s pretend we are building a service that generates a company name based on what the company makes.
In order to do this, we first need to import the LLM wrapper.
from langchain.llms import OpenAI
We can then initialize the wrapper with any arguments.
In this example, we probably want the outputs to be MORE random, so we’ll initialize it with a HIGH temperature.
llm = OpenAI(temperature=0.9)
We can now call it on some input!
text = "What would be a good company name for a company that makes colorful socks?"
print(llm(text))
Feetful of Fun
For more details on how to use LLMs within LangChain, see the LLM getting started guide.
Prompt Templates: Manage prompts for LLMs#
Calling an LLM is a great first step, but it’s just the beginning.
Normally when you use an LLM in an application, you are not sending user input directly to the LLM.
Instead, you are probably taking user input and constructing a prompt, and then sending that to the LLM.
For example, in the previous example, the text we passed in was hardcoded to ask for a name for a company that made colorful socks.
In this imaginary service, what we would want to do is take only the user input describing what the company does, and then format the prompt with that information.
This is easy to do with LangChain!
First let's define the prompt template:
from langchain.prompts import PromptTemplate
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
Let’s now see how this works! We can call the .format method to format it.
print(prompt.format(product="colorful socks"))
What is a good name for a company that makes colorful socks?
For more details, check out the getting started guide for prompts.
Chains: Combine LLMs and prompts in multi-step workflows#
Up until now, we’ve worked with the PromptTemplate and LLM primitives by themselves. But of course, a real application is not just one primitive, but rather a combination of them.
A chain in LangChain is made up of links, which can be either primitives like LLMs or other chains.
The most core type of chain is an LLMChain, which consists of a PromptTemplate and an LLM.
Extending the previous example, we can construct an LLMChain which takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM.
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM:
from langchain.chains import LLMChain
chain = LLMChain(llm=llm, prompt=prompt)
Now we can run that chain only specifying the product!
chain.run("colorful socks")
# -> '\n\nSocktastic!'
There we go! There’s the first chain - an LLM Chain.
|
https://python.langchain.com/en/latest/getting_started/getting_started.html
|
e9ee51076378-3
|
This is one of the simpler types of chains, but understanding how it works will set you up well for working with more complex chains.
For more details, check out the getting started guide for chains.
Agents: Dynamically Call Chains Based on User Input#
So far the chains we’ve looked at run in a predetermined order.
Agents no longer do: they use an LLM to determine which actions to take and in what order. An action can either be using a tool and observing its output, or returning to the user.
When used correctly agents can be extremely powerful. In this tutorial, we show you how to easily use agents through the simplest, highest level API.
In order to load agents, you should understand the following concepts:
Tool: A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. The interface for a tool is currently a function that is expected to have a string as an input, with a string as an output.
LLM: The language model powering the agent.
Agent: The agent to use. This should be a string that references a supported agent class. Because this notebook focuses on the simplest, highest level API, this only covers using the standard supported agents. If you want to implement a custom agent, see the documentation for custom agents (coming soon).
Agents: For a list of supported agents and their specifications, see here.
Tools: For a list of predefined tools and their specifications, see here.
For this example, you will also need to install the SerpAPI Python package.
pip install google-search-results
And set the appropriate environment variables.
import os
os.environ["SERPAPI_API_KEY"] = "..."
Now we can get started!
from langchain.agents import load_tools
|
https://python.langchain.com/en/latest/getting_started/getting_started.html
|
e9ee51076378-4
|
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
# First, let's load the language model we're going to use to control the agent.
llm = OpenAI(temperature=0)
# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Now let's test it out!
agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")
> Entering new AgentExecutor chain...
I need to find the temperature first, then use the calculator to raise it to the .023 power.
Action: Search
Action Input: "High temperature in SF yesterday"
Observation: San Francisco Temperature Yesterday. Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...
Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.
Action: Calculator
Action Input: 57^.023
Observation: Answer: 1.0974509573251117
Thought: I now know the final answer
Final Answer: The high temperature in SF yesterday in Fahrenheit raised to the .023 power is 1.0974509573251117.
|
https://python.langchain.com/en/latest/getting_started/getting_started.html
|
e9ee51076378-5
|
> Finished chain.
Memory: Add State to Chains and Agents#
So far, all the chains and agents we’ve gone through have been stateless. But often, you may want a chain or agent to have some concept of “memory” so that it may remember information about its previous interactions. The clearest and simplest example of this is when designing a chatbot - you want it to remember previous messages so it can use context from that to have a better conversation. This would be a type of “short-term memory”. On the more complex side, you could imagine a chain/agent remembering key pieces of information over time - this would be a form of “long-term memory”. For more concrete ideas on the latter, see this awesome paper.
LangChain provides several specially created chains just for this purpose. This notebook walks through using one of those chains (the ConversationChain) with two different types of memory.
By default, the ConversationChain has a simple type of memory that remembers all previous inputs/outputs and adds them to the context that is passed. Let’s take a look at using this chain (setting verbose=True so we can see the prompt).
from langchain import OpenAI, ConversationChain
llm = OpenAI(temperature=0)
conversation = ConversationChain(llm=llm, verbose=True)
output = conversation.predict(input="Hi there!")
print(output)
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there!
AI:
> Finished chain.
' Hello! How are you today?'
output = conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
print(output)
|
https://python.langchain.com/en/latest/getting_started/getting_started.html
|
e9ee51076378-6
|
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there!
AI: Hello! How are you today?
Human: I'm doing well! Just having a conversation with an AI.
AI:
> Finished chain.
" That's great! What would you like to talk about?"
Building a Language Model Application: Chat Models#
Similarly, you can use chat models instead of LLMs. Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than expose a “text in, text out” API, they expose an interface where “chat messages” are the inputs and outputs.
Chat model APIs are fairly new, so we are still figuring out the correct abstractions.
Get Message Completions from a Chat Model#
You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage – ChatMessage takes in an arbitrary role parameter. Most of the time, you’ll just be dealing with HumanMessage, AIMessage, and SystemMessage.
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
chat = ChatOpenAI(temperature=0)
You can get completions by passing in a single message.
chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
|
https://python.langchain.com/en/latest/getting_started/getting_started.html
|
e9ee51076378-7
|
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
You can also pass in multiple messages for OpenAI’s gpt-3.5-turbo and gpt-4 models.
messages = [
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
You can go one step further and generate completions for multiple sets of messages using generate. This returns an LLMResult with an additional message parameter:
batch_messages = [
[
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="Translate this sentence from English to French. I love programming.")
],
[
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="Translate this sentence from English to French. I love artificial intelligence.")
],
]
result = chat.generate(batch_messages)
result
# -> LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}})
You can recover things like token usage from this LLMResult:
result.llm_output['token_usage']
|
https://python.langchain.com/en/latest/getting_started/getting_started.html
|
e9ee51076378-8
|
# -> {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}
Chat Prompt Templates#
Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt – this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an LLM or chat model.
For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
chat = ChatOpenAI(temperature=0)
template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
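The format_prompt call above returns a PromptValue; here is a small sketch of its two conversions (reusing the chat_prompt defined above):
prompt_value = chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.")
prompt_value.to_string()    # a single formatted string, suitable as input to an LLM
prompt_value.to_messages()  # a list of Message objects, suitable as input to a chat model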
Chains with Chat Models#
The LLMChain discussed in the above section can be used with chat models as well:
from langchain.chat_models import ChatOpenAI
from langchain import LLMChain
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
|
https://python.langchain.com/en/latest/getting_started/getting_started.html
|
e9ee51076378-9
|
)
chat = ChatOpenAI(temperature=0)
template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
# -> "J'aime programmer."
Agents with Chat Models#
Agents can also be used with chat models, you can initialize one using AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION as the agent type.
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
# First, let's load the language model we're going to use to control the agent.
chat = ChatOpenAI(temperature=0)
# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Now let's test it out!
|
https://python.langchain.com/en/latest/getting_started/getting_started.html
|
e9ee51076378-10
|
agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
> Entering new AgentExecutor chain...
Thought: I need to use a search engine to find Olivia Wilde's boyfriend and a calculator to raise his age to the 0.23 power.
Action:
{
"action": "Search",
"action_input": "Olivia Wilde boyfriend"
}
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought:I need to use a search engine to find Harry Styles' current age.
Action:
{
"action": "Search",
"action_input": "Harry Styles age"
}
Observation: 29 years
Thought:Now I need to calculate 29 raised to the 0.23 power.
Action:
{
"action": "Calculator",
"action_input": "29^0.23"
}
Observation: Answer: 2.169459462491557
Thought:I now know the final answer.
Final Answer: 2.169459462491557
> Finished chain.
'2.169459462491557'
Memory: Add State to Chains and Agents#
You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.
from langchain.prompts import (
ChatPromptTemplate,
MessagesPlaceholder,
|
https://python.langchain.com/en/latest/getting_started/getting_started.html
|
e9ee51076378-11
|
SystemMessagePromptTemplate,
HumanMessagePromptTemplate
)
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
prompt = ChatPromptTemplate.from_messages([
SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."),
MessagesPlaceholder(variable_name="history"),
HumanMessagePromptTemplate.from_template("{input}")
])
llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)
conversation.predict(input="Hi there!")
# -> 'Hello! How can I assist you today?'
conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
# -> "That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?"
conversation.predict(input="Tell me about yourself.")
# -> "Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?"
previous
Welcome to LangChain
next
Models
Contents
Installation
Environment Setup
Building a Language Model Application: LLMs
LLMs: Get predictions from a language model
Prompt Templates: Manage prompts for LLMs
|
https://python.langchain.com/en/latest/getting_started/getting_started.html
|
e9ee51076378-12
|
Chains: Combine LLMs and prompts in multi-step workflows
Agents: Dynamically Call Chains Based on User Input
Memory: Add State to Chains and Agents
Building a Language Model Application: Chat Models
Get Message Completions from a Chat Model
Chat Prompt Templates
Chains with Chat Models
Agents with Chat Models
Memory: Add State to Chains and Agents
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/getting_started/getting_started.html
|
d2efdf129f69-0
|
.rst
.pdf
Memory
Memory#
Note
Conceptual Guide
By default, Chains and Agents are stateless,
meaning that they treat each incoming query independently (as are the underlying LLMs and chat models).
In some applications (chatbots being a great example) it is highly important
to remember previous interactions, both at a short-term and at a long-term level.
The concept of “Memory” exists to do exactly that.
LangChain provides memory components in two forms.
First, LangChain provides helper utilities for managing and manipulating previous chat messages.
These are designed to be modular and useful regardless of how they are used.
Secondly, LangChain provides easy ways to incorporate these utilities into chains.
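As a minimal sketch of the first form, the ChatMessageHistory utility simply records messages so they can be replayed later (the two-turn exchange below is just an illustration):
from langchain.memory import ChatMessageHistory
history = ChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("Hello! How can I help you?")
history.messages
# -> [HumanMessage(content='hi!', additional_kwargs={}), AIMessage(content='Hello! How can I help you?', additional_kwargs={})]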
The following sections of documentation are provided:
Getting Started: An overview of how to get started with different types of memory.
How-To Guides: A collection of how-to guides. These highlight different types of memory, as well as how to use memory in chains.
Memory
Getting Started
How-To Guides
previous
Weaviate Hybrid Search
next
Getting Started
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/memory.html
|
27acb259cd3b-0
|
.rst
.pdf
Chains
Chains#
Note
Conceptual Guide
Using an LLM in isolation is fine for some simple applications,
but many more complex ones require chaining LLMs - either with each other or with other experts.
LangChain provides a standard interface for Chains, as well as some common implementations of chains for ease of use.
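For example, one LLMChain can feed its output into another; the sketch below (with illustrative prompts, assuming an OpenAI API key is set) composes two chains with SimpleSequentialChain:
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain
llm = OpenAI(temperature=0.7)
name_chain = LLMChain(llm=llm, prompt=PromptTemplate(input_variables=["product"], template="What is a good name for a company that makes {product}?"))
slogan_chain = LLMChain(llm=llm, prompt=PromptTemplate(input_variables=["company_name"], template="Write a catchy slogan for the company {company_name}."))
overall_chain = SimpleSequentialChain(chains=[name_chain, slogan_chain])
overall_chain.run("colorful socks")  # the company name from the first chain becomes the input to the second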
The following sections of documentation are provided:
Getting Started: A getting started guide for chains, to get you up and running quickly.
How-To Guides: A collection of how-to guides. These highlight how to use various types of chains.
Reference: API reference documentation for all Chain classes.
previous
Redis Chat Message History
next
Getting Started
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/chains.html
|
a80be5c64876-0
|
.rst
.pdf
Indexes
Contents
Go Deeper
Indexes#
Note
Conceptual Guide
Indexes refer to ways to structure documents so that LLMs can best interact with them.
This module contains utility functions for working with documents, different types of indexes, and then examples for using those indexes in chains.
The most common way that indexes are used in chains is in a “retrieval” step.
This step refers to taking a user’s query and returning the most relevant documents.
We draw this distinction because (1) an index can be used for other things besides retrieval, and (2) retrieval can use other logic besides an index to find relevant documents.
We therefore have a concept of a “Retriever” interface - this is the interface that most chains work with.
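A minimal sketch of that interface (assuming a vectorstore has already been built over your documents, as in the indexing examples elsewhere in these docs):
retriever = vectorstore.as_retriever()
relevant_docs = retriever.get_relevant_documents("What did the president say about the economy?")  # query in, list of Documents out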
Most of the time when we talk about indexes and retrieval we are talking about indexing and retrieving unstructured data (like text documents).
For interacting with structured data (SQL tables, etc) or APIs, please see the corresponding use case sections for links to relevant functionality.
The primary index and retrieval types supported by LangChain are currently centered around vector databases, and therefore
a lot of the functionality dives deep on those topics.
For an overview of everything related to this, please see the below notebook for getting started:
Getting Started
We then provide a deep dive on the four main components.
Document Loaders
How to load documents from a variety of sources.
Text Splitters
An overview of the abstractions and implementations around splitting text.
VectorStores
An overview of VectorStores and the many integrations LangChain provides.
Retrievers
An overview of Retrievers and the implementations LangChain provides.
Go Deeper#
Document Loaders
Text Splitters
Vectorstores
Retrievers
previous
Structured Output Parser
next
Getting Started
Contents
Go Deeper
|
https://python.langchain.com/en/latest/modules/indexes.html
|
a80be5c64876-1
|
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/indexes.html
|
117f4b38a7ea-0
|
.rst
.pdf
Agents
Contents
Go Deeper
Agents#
Note
Conceptual Guide
Some applications will require not just a predetermined chain of calls to LLMs/other tools,
but potentially an unknown chain that depends on the user’s input.
In these types of chains, there is an “agent” which has access to a suite of tools.
Depending on the user input, the agent can then decide which, if any, of these tools to call.
In this section of documentation, we first start with a Getting Started notebook to cover how to use all things related to agents in an end-to-end manner.
We then split the documentation into the following sections:
Tools
An overview of the various tools LangChain supports.
Agents
An overview of the different agent types.
Toolkits
An overview of toolkits, and examples of the different ones LangChain supports.
Agent Executor
An overview of the Agent Executor class and examples of how to use it.
Go Deeper#
Tools
Agents
Toolkits
Agent Executors
previous
Chains
next
Getting Started
Contents
Go Deeper
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/agents.html
|
aa0ecb6efb68-0
|
.rst
.pdf
Models
Contents
Go Deeper
Models#
Note
Conceptual Guide
This section of the documentation deals with different types of models that are used in LangChain.
On this page we will go over the model types at a high level,
but we have individual pages for each model type.
The pages contain more detailed “how-to” guides for working with that model,
as well as a list of different model providers.
LLMs
Large Language Models (LLMs) are the first type of models we cover.
These models take a text string as input, and return a text string as output.
Chat Models
Chat Models are the second type of models we cover.
These models are usually backed by a language model, but their APIs are more structured.
Specifically, these models take a list of Chat Messages as input, and return a Chat Message.
Text Embedding Models
The third type of models we cover are text embedding models.
These models take text as input and return a list of floats.
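A rough side-by-side sketch of the three interfaces (assuming an OpenAI API key is set; any other supported provider would look the same):
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import HumanMessage
llm = OpenAI(temperature=0)
chat = ChatOpenAI(temperature=0)
embeddings = OpenAIEmbeddings()
llm("Tell me a joke")                           # text in -> text out
chat([HumanMessage(content="Tell me a joke")])  # list of chat messages in -> chat message out
embeddings.embed_query("Tell me a joke")        # text in -> list of floats out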
Go Deeper#
LLMs
Chat Models
Text Embedding Models
previous
Quickstart Guide
next
LLMs
Contents
Go Deeper
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/models.html
|
96f7988a8da7-0
|
.rst
.pdf
Prompts
Contents
Go Deeper
Prompts#
Note
Conceptual Guide
The new way of programming models is through prompts.
A “prompt” refers to the input to the model.
This input is rarely hard coded, but rather is often constructed from multiple components.
A PromptTemplate is responsible for the construction of this input.
LangChain provides several classes and functions to make constructing and working with prompts easy.
This section of documentation is split into four sections:
LLM Prompt Templates
How to use PromptTemplates to prompt Language Models.
Chat Prompt Templates
How to use PromptTemplates to prompt Chat Models.
Example Selectors
It is often useful to include examples in prompts.
These examples can be hardcoded, but it is often more powerful if they are dynamically selected.
This section goes over example selection.
Output Parsers
Language models (and Chat Models) output text.
But many times you may want to get more structured information than just text back.
This is where output parsers come in.
Output Parsers are responsible for (1) instructing the model how output should be formatted,
(2) parsing output into the desired format (including retrying if necessary).
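As a small sketch of the idea, the built-in comma-separated-list parser supplies format instructions for the prompt and then parses the model's text output:
from langchain.output_parsers import CommaSeparatedListOutputParser
output_parser = CommaSeparatedListOutputParser()
format_instructions = output_parser.get_format_instructions()  # text to append to your prompt
output_parser.parse("red, green, blue")
# -> ['red', 'green', 'blue']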
Go Deeper#
Prompt Templates
Chat Prompt Template
Example Selectors
Output Parsers
previous
TensorflowHub
next
Prompt Templates
Contents
Go Deeper
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/prompts.html
|
d34f5fd0e3fe-0
|
.rst
.pdf
LLMs
LLMs#
Note
Conceptual Guide
Large Language Models (LLMs) are a core component of LangChain.
LangChain is not a provider of LLMs, but rather provides a standard interface through which
you can interact with a variety of LLMs.
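For example, swapping providers is usually just a matter of constructing a different wrapper; a minimal sketch (assuming the relevant provider packages and API keys are set up):
from langchain.llms import OpenAI, Cohere
llm = OpenAI(temperature=0)  # or Cohere(temperature=0), or any other supported provider
llm("Say hello in five words.")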
The following sections of documentation are provided:
Getting Started: An overview of all the functionality the LangChain LLM class provides.
How-To Guides: A collection of how-to guides. These highlight how to accomplish various objectives with our LLM class (streaming, async, etc).
Integrations: A collection of examples on how to integrate different LLM providers with LangChain (OpenAI, Hugging Face, etc).
Reference: API reference documentation for all LLM classes.
previous
Models
next
Getting Started
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/models/llms.html
|
fc1e36053fd5-0
|
.rst
.pdf
Text Embedding Models
Text Embedding Models#
Note
Conceptual Guide
This documentation goes over how to use the Embedding class in LangChain.
The Embedding class is a class designed for interfacing with embeddings. There are lots of Embedding providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them.
Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
The base Embedding class in LangChain exposes two methods: embed_documents and embed_query. The largest difference is that these two methods have different interfaces: one works over multiple documents, while the other works over a single document. Besides this, another reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
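A short sketch of the two methods, using the OpenAI wrapper as an example (any other embedding provider exposes the same interface):
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
query_vector = embeddings.embed_query("What is LangChain?")            # one query -> one vector
doc_vectors = embeddings.embed_documents(["first doc", "second doc"])  # many documents -> list of vectors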
The following integrations exist for text embeddings.
Aleph Alpha
AzureOpenAI
Cohere
Fake Embeddings
Hugging Face Hub
InstructEmbeddings
Jina
Llama-cpp
OpenAI
SageMaker Endpoint Embeddings
Self Hosted Embeddings
Sentence Transformers Embeddings
TensorflowHub
previous
PromptLayer ChatOpenAI
next
Aleph Alpha
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/models/text_embedding.html
|
3e91112c74f6-0
|
.rst
.pdf
Chat Models
Chat Models#
Note
Conceptual Guide
Chat models are a variation on language models.
While chat models use language models under the hood, the interface they expose is a bit different.
Rather than expose a “text in, text out” API, they expose an interface where “chat messages” are the inputs and outputs.
Chat model APIs are fairly new, so we are still figuring out the correct abstractions.
The following sections of documentation are provided:
Getting Started: An overview of all the functionality the LangChain LLM class provides.
How-To Guides: A collection of how-to guides. These highlight how to accomplish various objectives with our LLM class (streaming, async, etc).
Integrations: A collection of examples on how to integrate different LLM providers with LangChain (OpenAI, Hugging Face, etc).
previous
LLMs
next
Getting Started
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/models/chat.html
|
57c029a81b74-0
|
.rst
.pdf
Generic Functionality
Generic Functionality#
The examples here all address certain “how-to” guides for working with LLMs.
How to use the async API for LLMs
How to write a custom LLM wrapper
How (and why) to use the fake LLM
How to cache LLM calls
How to serialize LLM classes
How to stream LLM and Chat Model responses
How to track token usage
previous
Getting Started
next
How to use the async API for LLMs
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/models/llms/how_to_guides.html
|
b473c7531319-0
|
.ipynb
.pdf
Getting Started
Getting Started#
This notebook goes over how to use the LLM class in LangChain.
The LLM class is a class designed for interfacing with LLMs. There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them. In this part of the documentation, we will focus on generic LLM functionality. For details on working with a specific LLM wrapper, please see the examples in the How-To section.
For this notebook, we will work with an OpenAI LLM wrapper, although the functionalities highlighted are generic for all LLM types.
from langchain.llms import OpenAI
llm = OpenAI(model_name="text-ada-001", n=2, best_of=2)
Generate Text: The most basic functionality an LLM has is just the ability to call it, passing in a string and getting back a string.
llm("Tell me a joke")
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
Generate: More broadly, you can call it with a list of inputs, getting back a more complete response than just the text. This complete response includes things like multiple top responses, as well as LLM provider-specific information.
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"]*15)
len(llm_result.generations)
30
llm_result.generations[0]
[Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'),
Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side.')]
llm_result.generations[-1]
|
https://python.langchain.com/en/latest/modules/models/llms/getting_started.html
|
b473c7531319-1
|
[Generation(text="\n\nWhat if love neverspeech\n\nWhat if love never ended\n\nWhat if love was only a feeling\n\nI'll never know this love\n\nIt's not a feeling\n\nBut it's what we have for each other\n\nWe just know that love is something strong\n\nAnd we can't help but be happy\n\nWe just feel what love is for us\n\nAnd we love each other with all our heart\n\nWe just don't know how\n\nHow it will go\n\nBut we know that love is something strong\n\nAnd we'll always have each other\n\nIn our lives."),
Generation(text='\n\nOnce upon a time\n\nThere was a love so pure and true\n\nIt lasted for centuries\n\nAnd never became stale or dry\n\nIt was moving and alive\n\nAnd the heart of the love-ick\n\nIs still beating strong and true.')]
You can also access provider-specific information that is returned. This information is NOT standardized across providers.
llm_result.llm_output
{'token_usage': {'completion_tokens': 3903,
'total_tokens': 4023,
'prompt_tokens': 120}}
Number of Tokens: You can also estimate how many tokens a piece of text will be in that model. This is useful because models have a context length (and cost more for more tokens), which means you need to be aware of how long the text you are passing in is.
Notice that by default the tokens are estimated using tiktoken (except for legacy Python versions <3.8, where a Hugging Face tokenizer is used).
llm.get_num_tokens("what a joke")
3
previous
LLMs
next
Generic Functionality
By Harrison Chase
© Copyright 2023, Harrison Chase.
|
https://python.langchain.com/en/latest/modules/models/llms/getting_started.html
|
b473c7531319-2
|
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/models/llms/getting_started.html
|
ba4f5d3e3dea-0
|
.rst
.pdf
Integrations
Integrations#
The examples here are all “how-to” guides for how to integrate with various LLM providers.
AI21
Aleph Alpha
Anthropic
Azure OpenAI
Banana
CerebriumAI
Cohere
DeepInfra
ForefrontAI
GooseAI
GPT4All
Hugging Face Hub
Hugging Face Local Pipelines
Llama-cpp
Manifest
Modal
NLP Cloud
OpenAI
Petals
PredictionGuard
PromptLayer OpenAI
Replicate
Runhouse
SageMakerEndpoint
StochasticAI
Writer
previous
How to track token usage
next
AI21
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/models/llms/integrations.html
|
1d8afc7bac0d-0
|
.ipynb
.pdf
How to cache LLM calls
Contents
In Memory Cache
SQLite Cache
Redis Cache
GPTCache
SQLAlchemy Cache
Custom SQLAlchemy Schemas
Optional Caching
Optional Caching in Chains
How to cache LLM calls#
This notebook covers how to cache results of individual LLM calls.
from langchain.llms import OpenAI
In Memory Cache#
import langchain
from langchain.cache import InMemoryCache
langchain.llm_cache = InMemoryCache()
# To make the caching really obvious, lets use a slower model.
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 14.2 ms, sys: 4.9 ms, total: 19.1 ms
Wall time: 1.1 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 162 µs, sys: 7 µs, total: 169 µs
Wall time: 175 µs
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
SQLite Cache#
!rm .langchain.db
# We can do the same thing with a SQLite cache
from langchain.cache import SQLiteCache
langchain.llm_cache = SQLiteCache(database_path=".langchain.db")
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
|
1d8afc7bac0d-1
|
llm("Tell me a joke")
CPU times: user 17 ms, sys: 9.76 ms, total: 26.7 ms
Wall time: 825 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 2.46 ms, sys: 1.23 ms, total: 3.7 ms
Wall time: 2.67 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
Redis Cache#
# We can do the same thing with a Redis cache
# (make sure your local Redis instance is running first before running this example)
from redis import Redis
from langchain.cache import RedisCache
langchain.llm_cache = RedisCache(redis_=Redis())
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
GPTCache#
We can use GPTCache for exact match caching or to cache results based on semantic similarity.
Let’s first start with an example of exact match
import gptcache
from gptcache.processor.pre import get_prompt
from gptcache.manager.factory import get_data_manager
from langchain.cache import GPTCache
# Avoid multiple caches using the same file, causing different llm model caches to affect each other
i = 0
file_prefix = "data_map"
def init_gptcache_map(cache_obj: gptcache.Cache):
global i
cache_path = f'{file_prefix}_{i}.txt'
cache_obj.init(
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
|
1d8afc7bac0d-2
|
pre_embedding_func=get_prompt,
data_manager=get_data_manager(data_path=cache_path),
)
i += 1
langchain.llm_cache = GPTCache(init_gptcache_map)
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 8.6 ms, sys: 3.82 ms, total: 12.4 ms
Wall time: 881 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 286 µs, sys: 21 µs, total: 307 µs
Wall time: 316 µs
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
Let’s now show an example of similarity caching
import gptcache
from gptcache.processor.pre import get_prompt
from gptcache.manager.factory import get_data_manager
from langchain.cache import GPTCache
from gptcache.manager import get_data_manager, CacheBase, VectorBase
from gptcache import Cache
from gptcache.embedding import Onnx
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation
# Avoid multiple caches using the same file, causing different llm model caches to affect each other
i = 0
file_prefix = "data_map"
llm_cache = Cache()
def init_gptcache_map(cache_obj: gptcache.Cache):
global i
cache_path = f'{file_prefix}_{i}.txt'
onnx = Onnx()
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
|
1d8afc7bac0d-3
|
cache_base = CacheBase('sqlite')
vector_base = VectorBase('faiss', dimension=onnx.dimension)
data_manager = get_data_manager(cache_base, vector_base, max_size=10, clean_size=2)
cache_obj.init(
pre_embedding_func=get_prompt,
embedding_func=onnx.to_embeddings,
data_manager=data_manager,
similarity_evaluation=SearchDistanceEvaluation(),
)
i += 1
langchain.llm_cache = GPTCache(init_gptcache_map)
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 1.01 s, sys: 153 ms, total: 1.16 s
Wall time: 2.49 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# This is an exact match, so it finds it in the cache
llm("Tell me a joke")
CPU times: user 745 ms, sys: 13.2 ms, total: 758 ms
Wall time: 136 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# This is not an exact match, but semantically within distance so it hits!
llm("Tell me joke")
CPU times: user 737 ms, sys: 7.79 ms, total: 745 ms
Wall time: 135 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
SQLAlchemy Cache#
# You can use SQLAlchemyCache to cache with any SQL database supported by SQLAlchemy.
# from langchain.cache import SQLAlchemyCache
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
|
1d8afc7bac0d-4
|
# from sqlalchemy import create_engine
# engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")
# langchain.llm_cache = SQLAlchemyCache(engine)
Custom SQLAlchemy Schemas#
# You can define your own declarative SQLAlchemyCache child class to customize the schema used for caching. For example, to support high-speed fulltext prompt indexing with Postgres, use:
from sqlalchemy import Column, Integer, String, Computed, Index, Sequence
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy_utils import TSVectorType
from langchain.cache import SQLAlchemyCache
Base = declarative_base()
class FulltextLLMCache(Base): # type: ignore
"""Postgres table for fulltext-indexed LLM Cache"""
__tablename__ = "llm_cache_fulltext"
id = Column(Integer, Sequence('cache_id'), primary_key=True)
prompt = Column(String, nullable=False)
llm = Column(String, nullable=False)
idx = Column(Integer)
response = Column(String)
prompt_tsv = Column(TSVectorType(), Computed("to_tsvector('english', llm || ' ' || prompt)", persisted=True))
__table_args__ = (
Index("idx_fulltext_prompt_tsv", prompt_tsv, postgresql_using="gin"),
)
engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")
langchain.llm_cache = SQLAlchemyCache(engine, FulltextLLMCache)
Optional Caching#
You can also turn off caching for specific LLMs should you choose. In the example below, even though global caching is enabled, we turn it off for a specific LLM
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
|
1d8afc7bac0d-5
|
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2, cache=False)
%%time
llm("Tell me a joke")
CPU times: user 5.8 ms, sys: 2.71 ms, total: 8.51 ms
Wall time: 745 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
%%time
llm("Tell me a joke")
CPU times: user 4.91 ms, sys: 2.64 ms, total: 7.55 ms
Wall time: 623 ms
'\n\nTwo guys stole a calendar. They got six months each.'
Optional Caching in Chains#
You can also turn off caching for particular nodes in chains. Note that because of certain interfaces, it's often easier to construct the chain first, and then edit the LLM afterwards.
As an example, we will load a summarizer map-reduce chain. We will cache results for the map step, but not for the combine (reduce) step.
llm = OpenAI(model_name="text-davinci-002")
no_cache_llm = OpenAI(model_name="text-davinci-002", cache=False)
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
text_splitter = CharacterTextSplitter()
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
texts = text_splitter.split_text(state_of_the_union)
from langchain.docstore.document import Document
docs = [Document(page_content=t) for t in texts[:3]]
from langchain.chains.summarize import load_summarize_chain
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
|
1d8afc7bac0d-6
|
chain = load_summarize_chain(llm, chain_type="map_reduce", reduce_llm=no_cache_llm)
%%time
chain.run(docs)
CPU times: user 452 ms, sys: 60.3 ms, total: 512 ms
Wall time: 5.09 s
'\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure. In response to Russian aggression in Ukraine, the United States is joining with European allies to impose sanctions and isolate Russia. American forces are being mobilized to protect NATO countries in the event that Putin decides to keep moving west. The Ukrainians are bravely fighting back, but the next few weeks will be hard for them. Putin will pay a high price for his actions in the long run. Americans should not be alarmed, as the United States is taking action to protect its interests and allies.'
When we run it again, we see that it runs substantially faster but the final answer is different. This is due to caching at the map steps, but not at the reduce step.
%%time
chain.run(docs)
CPU times: user 11.5 ms, sys: 4.33 ms, total: 15.8 ms
Wall time: 1.04 s
'\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure.'
previous
How (and why) to use the fake LLM
next
How to serialize LLM classes
Contents
In Memory Cache
SQLite Cache
Redis Cache
GPTCache
SQLAlchemy Cache
Custom SQLAlchemy Schemas
Optional Caching
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
|
1d8afc7bac0d-7
|
Optional Caching in Chains
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
|
8aacc9e79fa0-0
|
.ipynb
.pdf
How to serialize LLM classes
Contents
Loading
Saving
How to serialize LLM classes#
This notebook walks through how to write and read an LLM configuration to and from disk. This is useful if you want to save the configuration for a given LLM (e.g., the provider, the temperature, etc.).
from langchain.llms import OpenAI
from langchain.llms.loading import load_llm
Loading#
First, let's go over loading an LLM from disk. LLMs can be saved on disk in two formats: json or yaml. No matter the extension, they are loaded in the same way.
!cat llm.json
{
"model_name": "text-davinci-003",
"temperature": 0.7,
"max_tokens": 256,
"top_p": 1.0,
"frequency_penalty": 0.0,
"presence_penalty": 0.0,
"n": 1,
"best_of": 1,
"request_timeout": null,
"_type": "openai"
}
llm = load_llm("llm.json")
!cat llm.yaml
_type: openai
best_of: 1
frequency_penalty: 0.0
max_tokens: 256
model_name: text-davinci-003
n: 1
presence_penalty: 0.0
request_timeout: null
temperature: 0.7
top_p: 1.0
llm = load_llm("llm.yaml")
Saving#
If you want to go from an LLM in memory to a serialized version of it, you can do so easily by calling the .save method. Again, this supports both json and yaml.
llm.save("llm.json")
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_serialization.html
|
8aacc9e79fa0-1
|
llm.save("llm.json")
llm.save("llm.yaml")
previous
How to cache LLM calls
next
How to stream LLM and Chat Model responses
Contents
Loading
Saving
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_serialization.html
|
5e24d1211d8b-0
|
.ipynb
.pdf
How to track token usage
How to track token usage#
This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API.
Let’s first look at an extremely simple example of tracking token usage for a single LLM call.
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)
with get_openai_callback() as cb:
result = llm("Tell me a joke")
print(cb)
Tokens Used: 42
Prompt Tokens: 4
Completion Tokens: 38
Successful Requests: 1
Total Cost (USD): $0.00084
Anything inside the context manager will get tracked. Here’s an example of using it to track multiple calls in sequence.
with get_openai_callback() as cb:
result = llm("Tell me a joke")
result2 = llm("Tell me a joke")
print(cb.total_tokens)
91
If a chain or agent with multiple steps in it is used, it will track all those steps.
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
with get_openai_callback() as cb:
response = agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
|
https://python.langchain.com/en/latest/modules/models/llms/examples/token_usage_tracking.html
|
5e24d1211d8b-1
|
print(f"Total Tokens: {cb.total_tokens}")
print(f"Prompt Tokens: {cb.prompt_tokens}")
print(f"Completion Tokens: {cb.completion_tokens}")
print(f"Total Cost (USD): ${cb.total_cost}")
> Entering new AgentExecutor chain...
I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.
Action: Search
Action Input: "Olivia Wilde boyfriend"
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought: I need to find out Harry Styles' age.
Action: Search
Action Input: "Harry Styles age"
Observation: 29 years
Thought: I need to calculate 29 raised to the 0.23 power.
Action: Calculator
Action Input: 29^0.23
Observation: Answer: 2.169459462491557
Thought: I now know the final answer.
Final Answer: Harry Styles, Olivia Wilde's boyfriend, is 29 years old and his age raised to the 0.23 power is 2.169459462491557.
> Finished chain.
Total Tokens: 1506
Prompt Tokens: 1350
Completion Tokens: 156
Total Cost (USD): $0.03012
previous
How to stream LLM and Chat Model responses
next
Integrations
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/models/llms/examples/token_usage_tracking.html
|
ab86cf4942d7-0
|
.ipynb
.pdf
How (and why) to use the fake LLM
How (and why) to use the fake LLM#
We expose a fake LLM class that can be used for testing. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way.
In this notebook we go over how to use this.
We start this with using the FakeLLM in an agent.
from langchain.llms.fake import FakeListLLM
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
tools = load_tools(["python_repl"])
responses=[
"Action: Python REPL\nAction Input: print(2 + 2)",
"Final Answer: 4"
]
llm = FakeListLLM(responses=responses)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("whats 2 + 2")
> Entering new AgentExecutor chain...
Action: Python REPL
Action Input: print(2 + 2)
Observation: 4
Thought:Final Answer: 4
> Finished chain.
'4'
previous
How to write a custom LLM wrapper
next
How to cache LLM calls
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/models/llms/examples/fake_llm.html
|
46ef74edf356-0
|
.ipynb
.pdf
How to write a custom LLM wrapper
How to write a custom LLM wrapper#
This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is supported in LangChain.
There is only one required thing that a custom LLM needs to implement:
A _call method that takes in a string, some optional stop words, and returns a string
There is a second optional thing it can implement:
An _identifying_params property that is used to help with printing of this class. Should return a dictionary.
Let’s implement a very simple custom LLM that just returns the first N characters of the input.
from langchain.llms.base import LLM
from typing import Optional, List, Mapping, Any
class CustomLLM(LLM):
n: int
@property
def _llm_type(self) -> str:
return "custom"
def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
if stop is not None:
raise ValueError("stop kwargs are not permitted.")
return prompt[:self.n]
@property
def _identifying_params(self) -> Mapping[str, Any]:
"""Get the identifying parameters."""
return {"n": self.n}
We can now use this like any other LLM.
llm = CustomLLM(n=10)
llm("This is a foobar thing")
'This is a '
We can also print the LLM and see its custom print.
print(llm)
CustomLLM
Params: {'n': 10}
previous
How to use the async API for LLMs
next
|
https://python.langchain.com/en/latest/modules/models/llms/examples/custom_llm.html
|
46ef74edf356-1
|
How (and why) to use the fake LLM
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/models/llms/examples/custom_llm.html
|
8eabfe444a55-0
|
.ipynb
.pdf
How to use the async API for LLMs
How to use the async API for LLMs#
LangChain provides async support for LLMs by leveraging the asyncio library.
Async support is particularly useful for calling multiple LLMs concurrently, as these calls are network-bound. Currently, OpenAI, PromptLayerOpenAI, ChatOpenAI and Anthropic are supported, but async support for other LLMs is on the roadmap.
You can use the agenerate method to call an OpenAI LLM asynchronously.
import time
import asyncio
from langchain.llms import OpenAI
def generate_serially():
llm = OpenAI(temperature=0.9)
for _ in range(10):
resp = llm.generate(["Hello, how are you?"])
print(resp.generations[0][0].text)
async def async_generate(llm):
resp = await llm.agenerate(["Hello, how are you?"])
print(resp.generations[0][0].text)
async def generate_concurrently():
llm = OpenAI(temperature=0.9)
tasks = [async_generate(llm) for _ in range(10)]
await asyncio.gather(*tasks)
s = time.perf_counter()
# If running this outside of Jupyter, use asyncio.run(generate_concurrently())
await generate_concurrently()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Concurrent executed in {elapsed:0.2f} seconds." + '\033[0m')
s = time.perf_counter()
generate_serially()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Serial executed in {elapsed:0.2f} seconds." + '\033[0m')
|
https://python.langchain.com/en/latest/modules/models/llms/examples/async_llm.html
|
8eabfe444a55-1
|
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, how about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thank you! How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you! How about you?
I'm doing well, thank you. How about you?
Concurrent executed in 1.39 seconds.
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thanks for asking. How about you?
I'm doing well, thanks! How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thanks for asking. How about you?
Serial executed in 5.77 seconds.
previous
Generic Functionality
next
How to write a custom LLM wrapper
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/models/llms/examples/async_llm.html
|
ae65522222f5-0
|
.ipynb
.pdf
How to stream LLM and Chat Model responses
How to stream LLM and Chat Model responses#
LangChain provides streaming support for LLMs. Currently, we support streaming for the OpenAI, ChatOpenAI, and Anthropic implementations, but streaming support for other LLM implementations is on the roadmap. To utilize streaming, use a CallbackHandler that implements on_llm_new_token. In this example, we are using StreamingStdOutCallbackHandler.
from langchain.llms import OpenAI, Anthropic
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import HumanMessage
llm = OpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
resp = llm("Write me a song about sparkling water.")
Verse 1
I'm sippin' on sparkling water,
It's so refreshing and light,
It's the perfect way to quench my thirst
On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
Verse 2
I'm sippin' on sparkling water,
It's so bubbly and bright,
It's the perfect way to cool me down
On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
Verse 3
I'm sippin' on sparkling water,
It's so light and so clear,
It's the perfect way to keep me cool
On a hot summer night.
Chorus
|
https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html
|
ae65522222f5-1
|
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
We still have access to the end LLMResult if using generate. However, token_usage is not currently supported for streaming.
llm.generate(["Tell me a joke."])
Q: What did the fish say when it hit the wall?
A: Dam!
LLMResult(generations=[[Generation(text='\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', generation_info={'finish_reason': None, 'logprobs': None})]], llm_output={'token_usage': {}, 'model_name': 'text-davinci-003'})
Here’s an example with the ChatOpenAI chat model implementation:
chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])
Verse 1:
Bubbles rising to the top
A refreshing drink that never stops
Clear and crisp, it's oh so pure
Sparkling water, I can't ignore
Chorus:
Sparkling water, oh how you shine
A taste so clean, it's simply divine
You quench my thirst, you make me feel alive
Sparkling water, you're my favorite vibe
Verse 2:
No sugar, no calories, just H2O
A drink that's good for me, don't you know
With lemon or lime, you're even better
Sparkling water, you're my forever
Chorus:
Sparkling water, oh how you shine
A taste so clean, it's simply divine
|
https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html
|
ae65522222f5-2
|
You quench my thirst, you make me feel alive
Sparkling water, you're my favorite vibe
Bridge:
You're my go-to drink, day or night
You make me feel so light
I'll never give you up, you're my true love
Sparkling water, you're sent from above
Chorus:
Sparkling water, oh how you shine
A taste so clean, it's simply divine
You quench my thirst, you make me feel alive
Sparkling water, you're my favorite vibe
Outro:
Sparkling water, you're the one for me
I'll never let you go, can't you see
You're my drink of choice, forevermore
Sparkling water, I adore.
Here is an example with the Anthropic LLM implementation, which uses their claude model.
llm = Anthropic(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
llm("Write me a song about sparkling water.")
Sparkling water, bubbles so bright,
Fizzing and popping in the light.
No sugar or calories, a healthy delight,
Sparkling water, refreshing and light.
Carbonation that tickles the tongue,
In flavors of lemon and lime unsung.
Sparkling water, a drink quite all right,
Bubbles sparkling in the light.
'\nSparkling water, bubbles so bright,\n\nFizzing and popping in the light.\n\nNo sugar or calories, a healthy delight,\n\nSparkling water, refreshing and light.\n\nCarbonation that tickles the tongue,\n\nIn flavors of lemon and lime unsung.\n\nSparkling water, a drink quite all right,\n\nBubbles sparkling in the light.'
|
https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html
|
2e0974ec6d41-0
|
Llama-cpp
Llama-cpp#
llama-cpp-python is a Python binding for llama.cpp.
It supports inference for several LLMs.
This notebook goes over how to run llama-cpp-python within LangChain.
!pip install llama-cpp-python
Make sure you follow all the instructions to install the necessary model files.
You don’t need an API_TOKEN!
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
# Verbose is required to pass to the callback manager
# Make sure the model path is correct for your system!
llm = LlamaCpp(
model_path="./ggml-model-q4_0.bin", callback_manager=callback_manager, verbose=True
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
First we need to identify what year Justin Beiber was born in. A quick google search reveals that he was born on March 1st, 1994. Now we know when the Super Bowl was played in, so we can look up which NFL team won it. The NFL Superbowl of the year 1994 was won by the San Francisco 49ers against the San Diego Chargers.
|
https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html
|
2e0974ec6d41-1
|
' First we need to identify what year Justin Beiber was born in. A quick google search reveals that he was born on March 1st, 1994. Now we know when the Super Bowl was played in, so we can look up which NFL team won it. The NFL Superbowl of the year 1994 was won by the San Francisco 49ers against the San Diego Chargers.'
|
https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html
|
c38c2418258b-0
|
Aleph Alpha
Aleph Alpha#
The Luminous series is a family of large language models developed by Aleph Alpha.
This example goes over how to use LangChain to interact with Aleph Alpha models.
# Install the package
!pip install aleph-alpha-client
# create a new token: https://docs.aleph-alpha.com/docs/account/#create-a-new-token
from getpass import getpass
ALEPH_ALPHA_API_KEY = getpass()
from langchain.llms import AlephAlpha
from langchain import PromptTemplate, LLMChain
template = """Q: {question}
A:"""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = AlephAlpha(model="luminous-extended", maximum_tokens=20, stop_sequences=["Q:"], aleph_alpha_api_key=ALEPH_ALPHA_API_KEY)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What is AI?"
llm_chain.run(question)
' Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems.\n'
|
https://python.langchain.com/en/latest/modules/models/llms/integrations/aleph_alpha.html
|
61294cabcb08-0
|
Azure OpenAI
Contents
API configuration
Deployments
Azure OpenAI#
This notebook goes over how to use LangChain with Azure OpenAI.
The Azure OpenAI API is compatible with OpenAI’s API. The openai Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you call OpenAI with the exceptions noted below.
API configuration#
You can configure the openai package to use Azure OpenAI using environment variables. The following is for bash:
# Set this to `azure`
export OPENAI_API_TYPE=azure
# The API version you want to use: set this to `2022-12-01` for the released version.
export OPENAI_API_VERSION=2022-12-01
# The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
export OPENAI_API_BASE=https://your-resource-name.openai.azure.com
# The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
export OPENAI_API_KEY=<your Azure OpenAI API key>
Alternatively, you can configure the API right within your running Python environment:
import os
os.environ["OPENAI_API_TYPE"] = "azure"
...
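The remaining variables from the bash example above can be set the same way (the values here are placeholders for your own resource):
os.environ["OPENAI_API_VERSION"] = "2022-12-01"
os.environ["OPENAI_API_BASE"] = "https://your-resource-name.openai.azure.com"
os.environ["OPENAI_API_KEY"] = "<your Azure OpenAI API key>"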
Deployments#
With Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use.
Let’s say your deployment name is text-davinci-002-prod. In the openai Python API, you can specify this deployment with the engine parameter. For example:
import openai
response = openai.Completion.create(
|
https://python.langchain.com/en/latest/modules/models/llms/integrations/azure_openai_example.html
|
61294cabcb08-1
|
engine="text-davinci-002-prod",
prompt="This is a test",
max_tokens=5
)
!pip install openai
# Import Azure OpenAI
from langchain.llms import AzureOpenAI
# Create an instance of Azure OpenAI
# Replace the deployment name with your own
llm = AzureOpenAI(deployment_name="text-davinci-002-prod", model_name="text-davinci-002")
# Run the LLM
llm("Tell me a joke")
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
We can also print the LLM and see its custom print.
print(llm)
AzureOpenAI
Params: {'deployment_name': 'text-davinci-002', 'model_name': 'text-davinci-002', 'temperature': 0.7, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
|
https://python.langchain.com/en/latest/modules/models/llms/integrations/azure_openai_example.html
|
bc3620b4ac72-0
|
PredictionGuard
Contents
Basic LLM usage
Chaining
PredictionGuard#
This notebook goes over how to use the PredictionGuard wrapper with LangChain.
! pip install predictionguard langchain
import predictionguard as pg
from langchain.llms import PredictionGuard
Basic LLM usage#
pgllm = PredictionGuard(name="default-text-gen", token="<your access token>")
pgllm("Tell me a joke")
Chaining#
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.predict(question=question)
template = """Write a {adjective} poem about {subject}."""
prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])
llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)
llm_chain.predict(adjective="sad", subject="ducks")
|
https://python.langchain.com/en/latest/modules/models/llms/integrations/predictionguard.html
|
a91bd92220fe-0
|
Hugging Face Hub
Contents
Examples
StableLM, by Stability AI
Dolly, by DataBricks
Camel, by Writer
Hugging Face Hub#
The Hugging Face Hub is an online platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, where people can easily collaborate and build ML together.
This example showcases how to connect to the Hugging Face Hub.
To use, you should have the huggingface_hub python package installed.
!pip install huggingface_hub > /dev/null
# get a token: https://huggingface.co/docs/api-inference/quicktour#get-your-api-token
from getpass import getpass
HUGGINGFACEHUB_API_TOKEN = getpass()
import os
os.environ["HUGGINGFACEHUB_API_TOKEN"] = HUGGINGFACEHUB_API_TOKEN
Select a Model
from langchain import HuggingFaceHub
repo_id = "google/flan-t5-xl" # See https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads for some other options
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "Who won the FIFA World Cup in the year 1994? "
print(llm_chain.run(question))
Examples#
Below are some examples of models you can access through the Hugging Face Hub integration.
StableLM, by Stability AI#
|
https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_hub.html
|
a91bd92220fe-1
|
See Stability AI’s organization page for a list of available models.
repo_id = "stabilityai/stablelm-tuned-alpha-3b"
# Others include stabilityai/stablelm-base-alpha-3b
# as well as 7B parameter versions
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
# Reuse the prompt and question from above.
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
Dolly, by DataBricks#
See DataBricks organization page for a list of available models.
from langchain import HuggingFaceHub
repo_id = "databricks/dolly-v2-3b"
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
# Reuse the prompt and question from above.
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
Camel, by Writer#
See Writer’s organization page for a list of available models.
from langchain import HuggingFaceHub
repo_id = "Writer/camel-5b-hf" # See https://huggingface.co/Writer for other options
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
# Reuse the prompt and question from above.
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
And many more!
|
https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_hub.html
|
0a9e9b4b9290-0
|
Writer
Writer#
Writer is a platform for generating different kinds of language content.
This example goes over how to use LangChain to interact with Writer models.
You have to get the WRITER_API_KEY here.
from getpass import getpass
WRITER_API_KEY = getpass()
import os
os.environ["WRITER_API_KEY"] = WRITER_API_KEY
from langchain.llms import Writer
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
# If you get an error, you probably need to set the "base_url" parameter, which can be taken from the error log.
llm = Writer()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
|
https://python.langchain.com/en/latest/modules/models/llms/integrations/writer.html
|
a1ffcd557c8b-0
|
Manifest
Contents
Compare HF Models
Manifest#
This notebook goes over how to use Manifest and LangChain.
For more detailed information on Manifest, and how to use it with local Hugging Face models as in this example, see https://github.com/HazyResearch/manifest
Another example of using Manifest with LangChain.
!pip install manifest-ml
from manifest import Manifest
from langchain.llms.manifest import ManifestWrapper
manifest = Manifest(
client_name = "huggingface",
client_connection = "http://127.0.0.1:5000"
)
print(manifest.client.get_model_params())
llm = ManifestWrapper(client=manifest, llm_kwargs={"temperature": 0.001, "max_tokens": 256})
# Map reduce example
from langchain import PromptTemplate
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
_prompt = """Write a concise summary of the following:
{text}
CONCISE SUMMARY:"""
prompt = PromptTemplate(template=_prompt, input_variables=["text"])
text_splitter = CharacterTextSplitter()
mp_chain = MapReduceChain.from_params(llm, prompt, text_splitter)
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
mp_chain.run(state_of_the_union)
|
https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html
|
a1ffcd557c8b-1
|
'President Obama delivered his annual State of the Union address on Tuesday night, laying out his priorities for the coming year. Obama said the government will provide free flu vaccines to all Americans, ending the government shutdown and allowing businesses to reopen. The president also said that the government will continue to send vaccines to 112 countries, more than any other nation. "We have lost so much to COVID-19," Trump said. "Time with one another. And worst of all, so much loss of life." He said the CDC is working on a vaccine for kids under 5, and that the government will be ready with plenty of vaccines when they are available. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government is launching a "Test to Treat" initiative that will allow people to get tested at a pharmacy and get antiviral pills on the spot at no cost. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government will continue to send vaccines to 112 countries, more than any other nation. "We are coming for your'
Compare HF Models#
from langchain.model_laboratory import ModelLaboratory
manifest1 = ManifestWrapper(
client=Manifest(
client_name="huggingface",
client_connection="http://127.0.0.1:5000"
),
llm_kwargs={"temperature": 0.01}
)
manifest2 = ManifestWrapper(
client=Manifest(
client_name="huggingface",
client_connection="http://127.0.0.1:5001"
),
llm_kwargs={"temperature": 0.01}
)
manifest3 = ManifestWrapper(
|
https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html
|
a1ffcd557c8b-2
|
client=Manifest(
client_name="huggingface",
client_connection="http://127.0.0.1:5002"
),
llm_kwargs={"temperature": 0.01}
)
llms = [manifest1, manifest2, manifest3]
model_lab = ModelLaboratory(llms)
model_lab.compare("What color is a flamingo?")
Input:
What color is a flamingo?
ManifestWrapper
Params: {'model_name': 'bigscience/T0_3B', 'model_path': 'bigscience/T0_3B', 'temperature': 0.01}
pink
ManifestWrapper
Params: {'model_name': 'EleutherAI/gpt-neo-125M', 'model_path': 'EleutherAI/gpt-neo-125M', 'temperature': 0.01}
A flamingo is a small, round
ManifestWrapper
Params: {'model_name': 'google/flan-t5-xl', 'model_path': 'google/flan-t5-xl', 'temperature': 0.01}
pink
|
https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html
|
55c8534923f4-0
|
CerebriumAI
Contents
Install cerebrium
Imports
Set the Environment API Key
Create the CerebriumAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
CerebriumAI#
Cerebrium is an AWS Sagemaker alternative. It also provides API access to several LLMs.
This notebook goes over how to use LangChain with CerebriumAI.
Install cerebrium#
The cerebrium package is required to use the CerebriumAI API. Install cerebrium using pip3 install cerebrium.
# Install the package
!pip3 install cerebrium
Imports#
import os
from langchain.llms import CerebriumAI
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from CerebriumAI. See here. You are given 1 hour of free serverless GPU compute to test different models.
os.environ["CEREBRIUMAI_API_KEY"] = "YOUR_KEY_HERE"
Create the CerebriumAI instance#
You can specify different parameters such as the model endpoint url, max length, temperature, etc. You must provide an endpoint url.
llm = CerebriumAI(endpoint_url="YOUR ENDPOINT URL HERE")
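As an illustrative sketch only (the extra keyword arguments below are assumptions and depend on what your deployed model accepts), generation parameters can be passed alongside the endpoint url:
llm = CerebriumAI(
    endpoint_url="YOUR ENDPOINT URL HERE",
    max_length=100,   # assumed parameter, forwarded to the deployed model
    temperature=0.5,  # assumed parameter, forwarded to the deployed model
)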
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
|
https://python.langchain.com/en/latest/modules/models/llms/integrations/cerebriumai_example.html
|
55c8534923f4-1
|
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
|
https://python.langchain.com/en/latest/modules/models/llms/integrations/cerebriumai_example.html
|
f3aef77f3d35-0
|
Hugging Face Local Pipelines
Contents
Load the model
Integrate the model in an LLMChain
Hugging Face Local Pipelines#
Hugging Face models can be run locally through the HuggingFacePipeline class.
The Hugging Face Model Hub is an online platform hosting over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, where people can easily collaborate and build ML together.
These can be called from LangChain either through this local pipeline wrapper or by calling their hosted inference endpoints through the HuggingFaceHub class. For more information on the hosted pipelines, see the HuggingFaceHub notebook.
To use, you should have the transformers python package installed.
!pip install transformers > /dev/null
Load the model#
from langchain import HuggingFacePipeline
llm = HuggingFacePipeline.from_model_id(model_id="bigscience/bloom-1b7", task="text-generation", model_kwargs={"temperature":0, "max_length":64})
WARNING:root:Failed to default session, using empty session: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /sessions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x1117f9790>: Failed to establish a new connection: [Errno 61] Connection refused'))
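As an alternative sketch (assuming the wrapper also accepts a pre-built transformers pipeline object), you can construct the pipeline yourself and hand it to HuggingFacePipeline:
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "bigscience/bloom-1b7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_length=64)
llm = HuggingFacePipeline(pipeline=pipe)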
Integrate the model in an LLMChain#
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What is electroencephalography?"
print(llm_chain.run(question))
|
https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_pipelines.html
|
f3aef77f3d35-1
|
question = "What is electroencephalography?"
print(llm_chain.run(question))
/Users/wfh/code/lc/lckg/.venv/lib/python3.11/site-packages/transformers/generation/utils.py:1288: UserWarning: Using `max_length`'s default (64) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
warnings.warn(
WARNING:root:Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x144d06910>: Failed to establish a new connection: [Errno 61] Connection refused'))
First, we need to understand what is an electroencephalogram. An electroencephalogram is a recording of brain activity. It is a recording of brain activity that is made by placing electrodes on the scalp. The electrodes are placed
|
https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_pipelines.html
|
4e1c6890c55c-0
|
AI21
AI21#
AI21 Studio provides API access to Jurassic-2 large language models.
This example goes over how to use LangChain to interact with AI21 models.
# install the package:
!pip install ai21
# get AI21_API_KEY. Use https://studio.ai21.com/account/account
from getpass import getpass
AI21_API_KEY = getpass()
from langchain.llms import AI21
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = AI21(ai21_api_key=AI21_API_KEY)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
'\n1. What year was Justin Bieber born?\nJustin Bieber was born in 1994.\n2. What team won the Super Bowl in 1994?\nThe Dallas Cowboys won the Super Bowl in 1994.'
|
https://python.langchain.com/en/latest/modules/models/llms/integrations/ai21.html
|
fcec652f7622-0
|
GooseAI
Contents
Install openai
Imports
Set the Environment API Key
Create the GooseAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
GooseAI#
GooseAI is a fully managed NLP-as-a-Service, delivered via API. GooseAI provides access to a range of open-source language models.
This notebook goes over how to use LangChain with GooseAI.
Install openai#
The openai package is required to use the GooseAI API. Install openai using pip3 install openai.
$ pip3 install openai
Imports#
import os
from langchain.llms import GooseAI
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from GooseAI. You are given $10 in free credits to test different models.
from getpass import getpass
GOOSEAI_API_KEY = getpass()
os.environ["GOOSEAI_API_KEY"] = GOOSEAI_API_KEY
Create the GooseAI instance#
You can specify different parameters such as the model name, max tokens generated, temperature, etc.
llm = GooseAI()
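As a sketch (the parameter names are assumed to mirror the OpenAI-style wrapper and may differ), an explicitly configured instance could look like:
llm = GooseAI(model_name="gpt-neo-20b", temperature=0.7, max_tokens=64)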
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
|
https://python.langchain.com/en/latest/modules/models/llms/integrations/gooseai_example.html
|