Note that the input to this prompt template is just driver_id, since that is the only user defined piece (all other variables are looked up inside the prompt template).
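This section assumes a Feast feature store has already been loaded (the "Load Feast Store" step of this guide, not shown here); the store object used below is that Feast FeatureStore. A minimal sketch of that setup, with a hypothetical repository path:
import feast
# Hypothetical path to a local Feast feature repository (e.g. the one from Feast's quickstart).
feast_repo_path = "."
store = feast.FeatureStore(repo_path=feast_repo_path)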
from langchain.prompts import PromptTemplate, StringPromptTemplate
template = """Given the driver's up to date stats, write them note relaying those stats to them.
If they have a conversation rate above .5, give them a compliment. Otherwise, make a silly joke about chickens at the end to make them feel better
Here are the driver's stats:
Conversation rate: {conv_rate}
Acceptance rate: {acc_rate}
Average Daily Trips: {avg_daily_trips}
Your response:"""
prompt = PromptTemplate.from_template(template)
class FeastPromptTemplate(StringPromptTemplate):
def format(self, **kwargs) -> str:
driver_id = kwargs.pop("driver_id")
feature_vector = store.get_online_features(
features=[
'driver_hourly_stats:conv_rate',
'driver_hourly_stats:acc_rate',
'driver_hourly_stats:avg_daily_trips'
],
entity_rows=[{"driver_id": driver_id}]
).to_dict()
kwargs["conv_rate"] = feature_vector["conv_rate"][0]
kwargs["acc_rate"] = feature_vector["acc_rate"][0]
kwargs["avg_daily_trips"] = feature_vector["avg_daily_trips"][0]
return prompt.format(**kwargs)
prompt_template = FeastPromptTemplate(input_variables=["driver_id"])
print(prompt_template.format(driver_id=1001))
Given the driver's up to date stats, write them a note relaying those stats to them.
If they have a conversation rate above .5, give them a compliment. Otherwise, make a silly joke about chickens at the end to make them feel better
Here are the driver's stats:
Conversation rate: 0.4745151400566101
Acceptance rate: 0.055561766028404236
Average Daily Trips: 936
Your response:
Use in a chain#
We can now use this in a chain, successfully creating a chain that achieves personalization backed by a feature store
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
chain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)
chain.run(1001)
"Hi there! I wanted to update you on your current stats. Your acceptance rate is 0.055561766028404236 and your average daily trips are 936. While your conversation rate is currently 0.4745151400566101, I have no doubt that with a little extra effort, you'll be able to exceed that .5 mark! Keep up the great work! And remember, even chickens can't always cross the road, but they still give it their best shot."
Tecton#
Above, we showed how you could use Feast, a popular open source and self-managed feature store, with LangChain. Our examples below will show a similar integration using Tecton. Tecton is a fully managed feature platform built to orchestrate the complete ML feature lifecycle, from transformation to online serving, with enterprise-grade SLAs.
Prerequisites#
Tecton Deployment (sign up at https://tecton.ai)
TECTON_API_KEY environment variable set to a valid Service Account key
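For example, the key can be supplied from within the notebook before the Tecton SDK is used (the value below is a placeholder, not a real key):
import os
# Placeholder only - substitute your own Service Account key.
os.environ["TECTON_API_KEY"] = "my-tecton-service-account-key"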
Define and Load Features#
We will use the user_transaction_counts Feature View from the Tecton tutorial as part of a Feature Service. For simplicity, we are only using a single Feature View; however, more sophisticated applications may require more feature views to retrieve the features needed for their prompts.
# This definition lives in a Tecton feature repository and is applied to the workspace
# with the Tecton CLI (`tecton apply`); user_transaction_counts comes from the Tecton tutorial.
from tecton import FeatureService

user_transaction_metrics = FeatureService(
    name="user_transaction_metrics",
    features=[user_transaction_counts]
)
The above Feature Service is expected to be applied to a live workspace. For this example, we will be using the “prod” workspace.
import tecton
workspace = tecton.get_workspace("prod")
feature_service = workspace.get_feature_service("user_transaction_metrics")
Prompts#
Here we will set up a custom TectonPromptTemplate. This prompt template will take in a user_id, look up their stats, and format those stats into a prompt.
Note that the input to this prompt template is just user_id, since that is the only user defined piece (all other variables are looked up inside the prompt template).
from langchain.prompts import PromptTemplate, StringPromptTemplate
template = """Given the vendor's up to date transaction stats, write them a note based on the following rules:
1. If they had a transaction in the last day, write a short congratulations message on their recent sales
2. If no transaction in the last day, but they had a transaction in the last 30 days, playfully encourage them to sell more.
3. Always add a silly joke about chickens at the end
Here are the vendor's stats:
Number of Transactions Last Day: {transaction_count_1d}
Number of Transactions Last 30 Days: {transaction_count_30d}
Your response:"""
prompt = PromptTemplate.from_template(template)
class TectonPromptTemplate(StringPromptTemplate):
def format(self, **kwargs) -> str:
user_id = kwargs.pop("user_id")
feature_vector = feature_service.get_online_features(join_keys={"user_id": user_id}).to_dict()
kwargs["transaction_count_1d"] = feature_vector["user_transaction_counts.transaction_count_1d_1d"] | https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html |
7cd9c88f0159-4 | kwargs["transaction_count_30d"] = feature_vector["user_transaction_counts.transaction_count_30d_1d"]
return prompt.format(**kwargs)
prompt_template = TectonPromptTemplate(input_variables=["user_id"])
print(prompt_template.format(user_id="user_469998441571"))
Given the vendor's up to date transaction stats, write them a note based on the following rules:
1. If they had a transaction in the last day, write a short congratulations message on their recent sales
2. If no transaction in the last day, but they had a transaction in the last 30 days, playfully encourage them to sell more.
3. Always add a silly joke about chickens at the end
Here are the vendor's stats:
Number of Transactions Last Day: 657
Number of Transactions Last 30 Days: 20326
Your response:
Use in a chain#
We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Tecton Feature Platform
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
chain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)
chain.run("user_469998441571")
'Wow, congratulations on your recent sales! Your business is really soaring like a chicken on a hot air balloon! Keep up the great work!'
Featureform#
Finally, we will use Featureform, an open-source and enterprise-grade feature store, to run the same example. Featureform allows you to work with your infrastructure like Spark or locally to define your feature transformations.
Initialize Featureform#
You can follow the instructions in the README to initialize your transformations and features in Featureform.
import featureform as ff
client = ff.Client(host="demo.featureform.com")
Prompts#
Here we will set up a custom FeatureformPromptTemplate. This prompt template will take in a user_id, look up the average amount that user pays per transaction, and format it into a prompt.
Note that the input to this prompt template is just user_id, since that is the only user defined piece (all other variables are looked up inside the prompt template).
from langchain.prompts import PromptTemplate, StringPromptTemplate
template = """Given the amount a user spends on average per transaction, let them know if they are a high roller. Otherwise, make a silly joke about chickens at the end to make them feel better
Here are the user's stats:
Average Amount per Transaction: ${avg_transaction}
Your response:"""
prompt = PromptTemplate.from_template(template)
class FeatureformPromptTemplate(StringPromptTemplate):
    def format(self, **kwargs) -> str:
        user_id = kwargs.pop("user_id")
        # Look up the user's avg_transactions feature from the Featureform serving client.
        # This assumes the client returns the requested feature values as a list.
        fpf = client.features([("avg_transactions", "quickstart")], {"user": user_id})
        kwargs["avg_transaction"] = fpf[0]
        return prompt.format(**kwargs)
prompt_template = FeatureformPromptTemplate(input_variables=["user_id"])
print(prompt_template.format(user_id="C1410926"))
Use in a chain#
We can now use this in a chain, successfully creating a chain that achieves personalization backed by the Featureform Feature Platform
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
chain = LLMChain(llm=ChatOpenAI(), prompt=prompt_template)
chain.run("C1410926")
How to create a prompt template that uses few shot examples#
In this tutorial, we’ll learn how to create a prompt template that uses few shot examples.
We’ll use the FewShotPromptTemplate class to create a prompt template that uses few shot examples. This class either takes in a set of examples, or an ExampleSelector object. In this tutorial, we’ll go over both options.
Use Case#
In this tutorial, we’ll configure few shot examples for self-ask with search.
Using an example set#
Create the example set#
To get started, create a list of few shot examples. Each example should be a dictionary with the keys being the input variables and the values being the values for those input variables.
from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.prompts.prompt import PromptTemplate
examples = [
{
"question": "Who lived longer, Muhammad Ali or Alan Turing?",
"answer":
"""
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
"""
},
{
"question": "When was the founder of craigslist born?",
"answer":
"""
Are follow up questions needed here: Yes.
Follow up: Who was the founder of craigslist?
Intermediate answer: Craigslist was founded by Craig Newmark.
Follow up: When was Craig Newmark born?
Intermediate answer: Craig Newmark was born on December 6, 1952.
So the final answer is: December 6, 1952
"""
},
{
"question": "Who was the maternal grandfather of George Washington?",
"answer":
"""
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
"""
},
{
"question": "Are both the directors of Jaws and Casino Royale from the same country?",
"answer":
"""
Are follow up questions needed here: Yes.
Follow up: Who is the director of Jaws?
Intermediate Answer: The director of Jaws is Steven Spielberg.
Follow up: Where is Steven Spielberg from?
Intermediate Answer: The United States.
Follow up: Who is the director of Casino Royale?
Intermediate Answer: The director of Casino Royale is Martin Campbell.
Follow up: Where is Martin Campbell from?
Intermediate Answer: New Zealand.
So the final answer is: No
"""
}
]
Create a formatter for the few shot examples#
Configure a formatter that will format the few shot examples into a string. This formatter should be a PromptTemplate object.
example_prompt = PromptTemplate(input_variables=["question", "answer"], template="Question: {question}\n{answer}")
print(example_prompt.format(**examples[0]))
Question: Who lived longer, Muhammad Ali or Alan Turing?
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
Feed examples and formatter to FewShotPromptTemplate#
Finally, create a FewShotPromptTemplate object. This object takes in the few shot examples and the formatter for the few shot examples.
prompt = FewShotPromptTemplate(
examples=examples,
example_prompt=example_prompt,
suffix="Question: {input}",
input_variables=["input"]
)
print(prompt.format(input="Who was the father of Mary Ball Washington?"))
Question: Who lived longer, Muhammad Ali or Alan Turing?
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
Question: When was the founder of craigslist born?
Are follow up questions needed here: Yes.
Follow up: Who was the founder of craigslist?
Intermediate answer: Craigslist was founded by Craig Newmark.
Follow up: When was Craig Newmark born?
Intermediate answer: Craig Newmark was born on December 6, 1952.
So the final answer is: December 6, 1952
Question: Who was the maternal grandfather of George Washington?
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
Question: Are both the directors of Jaws and Casino Royale from the same country?
Are follow up questions needed here: Yes.
Follow up: Who is the director of Jaws?
Intermediate Answer: The director of Jaws is Steven Spielberg.
Follow up: Where is Steven Spielberg from?
Intermediate Answer: The United States.
Follow up: Who is the director of Casino Royale?
Intermediate Answer: The director of Casino Royale is Martin Campbell.
Follow up: Where is Martin Campbell from?
Intermediate Answer: New Zealand.
So the final answer is: No
Question: Who was the father of Mary Ball Washington?
Using an example selector#
Feed examples into ExampleSelector#
We will reuse the example set and the formatter from the previous section. However, instead of feeding the examples directly into the FewShotPromptTemplate object, we will feed them into an ExampleSelector object.
In this tutorial, we will use the SemanticSimilarityExampleSelector class. This class selects few shot examples based on their similarity to the input. It uses an embedding model to compute the similarity between the input and the few shot examples, as well as a vector store to perform the nearest neighbor search.
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
example_selector = SemanticSimilarityExampleSelector.from_examples(
# This is the list of examples available to select from.
examples,
# This is the embedding class used to produce embeddings which are used to measure semantic similarity.
OpenAIEmbeddings(),
# This is the VectorStore class that is used to store the embeddings and do a similarity search over.
Chroma,
# This is the number of examples to produce.
k=1
)
# Select the most similar example to the input.
question = "Who was the father of Mary Ball Washington?"
selected_examples = example_selector.select_examples({"question": question})
print(f"Examples most similar to the input: {question}")
for example in selected_examples:
print("\n")
for k, v in example.items():
print(f"{k}: {v}")
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
Examples most similar to the input: Who was the father of Mary Ball Washington?
question: Who was the maternal grandfather of George Washington?
answer:
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
Feed example selector into FewShotPromptTemplate#
Finally, create a FewShotPromptTemplate object. This object takes in the example selector and the formatter for the few shot examples.
prompt = FewShotPromptTemplate(
example_selector=example_selector,
example_prompt=example_prompt,
suffix="Question: {input}",
input_variables=["input"]
)
print(prompt.format(input="Who was the father of Mary Ball Washington?"))
Question: Who was the maternal grandfather of George Washington?
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
Question: Who was the father of Mary Ball Washington?
How to serialize prompts#
It is often preferable to store prompts not as Python code but as files. This can make it easy to share, store, and version prompts. This notebook covers how to do that in LangChain, walking through all the different types of prompts and the different serialization options.
At a high level, the following design principles are applied to serialization:
Both JSON and YAML are supported. We want to support serialization methods that are human readable on disk, and YAML and JSON are two of the most popular methods for that. Note that this rule applies to prompts. For other assets, like Examples, different serialization methods may be supported.
We support specifying everything in one file, or storing different components (templates, examples, etc) in different files and referencing them. For some cases, storing everything in one file makes the most sense, but for others it is preferable to split up some of the assets (long templates, large examples, reusable components). LangChain supports both.
There is also a single entry point to load prompts from disk, making it easy to load any type of prompt.
# All prompts are loaded through the `load_prompt` function.
from langchain.prompts import load_prompt
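Conversely, a prompt built in code can be written to disk with its save method, producing a file that load_prompt can read back. A minimal sketch (the file name is arbitrary):
from langchain.prompts import PromptTemplate
prompt_to_save = PromptTemplate(
    input_variables=["adjective", "content"],
    template="Tell me a {adjective} joke about {content}.",
)
# Writes JSON for a .json path (a .yaml path should produce YAML instead).
prompt_to_save.save("simple_prompt_saved.json")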
PromptTemplate#
This section covers examples for loading a PromptTemplate.
Loading from YAML#
This shows an example of loading a PromptTemplate from YAML.
!cat simple_prompt.yaml
_type: prompt
input_variables:
["adjective", "content"]
template:
Tell me a {adjective} joke about {content}.
prompt = load_prompt("simple_prompt.yaml") | https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html |
6d039cb11352-1 | prompt = load_prompt("simple_prompt.yaml")
print(prompt.format(adjective="funny", content="chickens"))
Tell me a funny joke about chickens.
Loading from JSON#
This shows an example of loading a PromptTemplate from JSON.
!cat simple_prompt.json
{
"_type": "prompt",
"input_variables": ["adjective", "content"],
"template": "Tell me a {adjective} joke about {content}."
}
prompt = load_prompt("simple_prompt.json")
print(prompt.format(adjective="funny", content="chickens"))
Tell me a funny joke about chickens.
Loading Template from a File#
This shows an example of storing the template in a separate file and then referencing it in the config. Notice that the key changes from template to template_path.
!cat simple_template.txt
Tell me a {adjective} joke about {content}.
!cat simple_prompt_with_template_file.json
{
"_type": "prompt",
"input_variables": ["adjective", "content"],
"template_path": "simple_template.txt"
}
prompt = load_prompt("simple_prompt_with_template_file.json")
print(prompt.format(adjective="funny", content="chickens"))
Tell me a funny joke about chickens.
FewShotPromptTemplate#
This section covers examples for loading few shot prompt templates.
Examples#
This shows an example of what examples stored as json might look like.
!cat examples.json
[
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"}
]
And here is what the same examples stored as yaml might look like.
!cat examples.yaml
- input: happy
output: sad
- input: tall
output: short
Loading from YAML#
This shows an example of loading a few shot example from YAML.
!cat few_shot_prompt.yaml
_type: few_shot
input_variables:
["adjective"]
prefix:
Write antonyms for the following words.
example_prompt:
_type: prompt
input_variables:
["input", "output"]
template:
"Input: {input}\nOutput: {output}"
examples:
examples.json
suffix:
"Input: {adjective}\nOutput:"
prompt = load_prompt("few_shot_prompt.yaml")
print(prompt.format(adjective="funny"))
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
The same would work if you loaded examples from the yaml file.
!cat few_shot_prompt_yaml_examples.yaml
_type: few_shot
input_variables:
["adjective"]
prefix:
Write antonyms for the following words.
example_prompt:
_type: prompt
input_variables:
["input", "output"]
template:
"Input: {input}\nOutput: {output}"
examples:
examples.yaml
suffix:
"Input: {adjective}\nOutput:"
prompt = load_prompt("few_shot_prompt_yaml_examples.yaml")
print(prompt.format(adjective="funny"))
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
Loading from JSON#
This shows an example of loading a few shot example from JSON.
!cat few_shot_prompt.json
{
"_type": "few_shot", | https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html |
6d039cb11352-3 | !cat few_shot_prompt.json
{
"_type": "few_shot",
"input_variables": ["adjective"],
"prefix": "Write antonyms for the following words.",
"example_prompt": {
"_type": "prompt",
"input_variables": ["input", "output"],
"template": "Input: {input}\nOutput: {output}"
},
"examples": "examples.json",
"suffix": "Input: {adjective}\nOutput:"
}
prompt = load_prompt("few_shot_prompt.json")
print(prompt.format(adjective="funny"))
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
Examples in the Config#
This shows an example of referencing the examples directly in the config.
!cat few_shot_prompt_examples_in.json
{
"_type": "few_shot",
"input_variables": ["adjective"],
"prefix": "Write antonyms for the following words.",
"example_prompt": {
"_type": "prompt",
"input_variables": ["input", "output"],
"template": "Input: {input}\nOutput: {output}"
},
"examples": [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"}
],
"suffix": "Input: {adjective}\nOutput:"
}
prompt = load_prompt("few_shot_prompt_examples_in.json")
print(prompt.format(adjective="funny"))
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
Example Prompt from a File#
This shows an example of loading the PromptTemplate that is used to format the examples from a separate file. Note that the key changes from example_prompt to example_prompt_path.
!cat example_prompt.json
{
"_type": "prompt",
"input_variables": ["input", "output"],
"template": "Input: {input}\nOutput: {output}"
}
!cat few_shot_prompt_example_prompt.json
{
"_type": "few_shot",
"input_variables": ["adjective"],
"prefix": "Write antonyms for the following words.",
"example_prompt_path": "example_prompt.json",
"examples": "examples.json",
"suffix": "Input: {adjective}\nOutput:"
}
prompt = load_prompt("few_shot_prompt_example_prompt.json")
print(prompt.format(adjective="funny"))
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
PromptTemplate with OutputParser#
This shows an example of loading a prompt along with an OutputParser from a file.
! cat prompt_with_output_parser.json
{
"input_variables": [
"question",
"student_answer"
],
"output_parser": {
"regex": "(.*?)\\nScore: (.*)",
"output_keys": [
"answer",
"score"
],
"default_output_key": null,
"_type": "regex_parser"
},
"partial_variables": {}, | https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html |
6d039cb11352-5 | "_type": "regex_parser"
},
"partial_variables": {},
"template": "Given the following question and student answer, provide a correct answer and score the student answer.\nQuestion: {question}\nStudent Answer: {student_answer}\nCorrect Answer:",
"template_format": "f-string",
"validate_template": true,
"_type": "prompt"
}
prompt = load_prompt("prompt_with_output_parser.json")
prompt.output_parser.parse("George Washington was born in 1732 and died in 1799.\nScore: 1/2")
{'answer': 'George Washington was born in 1732 and died in 1799.',
'score': '1/2'}
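For completeness, the loaded prompt can be formatted like any other PromptTemplate; the question and student answer below are made up purely for illustration:
print(prompt.format(
    question="Who was the first president of the USA?",
    student_answer="George Washington"
))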
How to create a custom prompt template#
Let’s suppose we want the LLM to generate English language explanations of a function given its name. To achieve this task, we will create a custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function.
Why are custom prompt templates needed?#
LangChain provides a set of default prompt templates that can be used to generate prompts for a variety of tasks. However, there may be cases where the default prompt templates do not meet your needs. For example, you may want to create a prompt template with specific dynamic instructions for your language model. In such cases, you can create a custom prompt template.
Take a look at the current set of default prompt templates here.
Creating a Custom Prompt Template#
There are essentially two distinct prompt templates available - string prompt templates and chat prompt templates. String prompt templates provide a simple prompt in string format, while chat prompt templates produce a more structured prompt to be used with a chat API.
In this guide, we will create a custom prompt using a string prompt template.
To create a custom string prompt template, there are two requirements:
It has an input_variables attribute that exposes what input variables the prompt template expects.
It exposes a format method that takes in keyword arguments corresponding to the expected input_variables and returns the formatted prompt.
We will create a custom prompt template that takes in the function name as input and formats the prompt to provide the source code of the function. To achieve this, let’s first create a function that will return the source code of a function given its name.
import inspect
def get_source_code(function_name):
# Get the source code of the function
return inspect.getsource(function_name)
Next, we’ll create a custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function.
from langchain.prompts import StringPromptTemplate
from pydantic import BaseModel, validator
class FunctionExplainerPromptTemplate(StringPromptTemplate, BaseModel):
""" A custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function. """
@validator("input_variables")
def validate_input_variables(cls, v):
""" Validate that the input variables are correct. """
if len(v) != 1 or "function_name" not in v:
raise ValueError("function_name must be the only input_variable.")
return v
def format(self, **kwargs) -> str:
# Get the source code of the function
source_code = get_source_code(kwargs["function_name"])
# Generate the prompt to be sent to the language model
prompt = f"""
Given the function name and source code, generate an English language explanation of the function.
Function Name: {kwargs["function_name"].__name__}
Source Code:
{source_code}
Explanation:
"""
return prompt
def _prompt_type(self):
return "function-explainer"
Use the custom prompt template#
Now that we have created a custom prompt template, we can use it to generate prompts for our task.
fn_explainer = FunctionExplainerPromptTemplate(input_variables=["function_name"])
# Generate a prompt for the function "get_source_code"
prompt = fn_explainer.format(function_name=get_source_code)
print(prompt)
Given the function name and source code, generate an English language explanation of the function.
Function Name: get_source_code
Source Code:
def get_source_code(function_name):
# Get the source code of the function
return inspect.getsource(function_name)
Explanation:
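As with the feature store templates earlier in this section, the custom template can also be dropped into an LLMChain; a minimal sketch, assuming OpenAI credentials are already configured:
from langchain.llms import OpenAI
from langchain.chains import LLMChain
chain = LLMChain(llm=OpenAI(), prompt=fn_explainer)
# The chain formats the prompt with the function's source code and sends it to the LLM.
print(chain.run(function_name=get_source_code))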
How to work with partial Prompt Templates#
A prompt template is a class with a .format method which takes in a key-value map and returns a string (a prompt) to pass to the language model. Like other methods, it can make sense to “partial” a prompt template - e.g. pass in a subset of the required values, so as to create a new prompt template which expects only the remaining subset of values.
LangChain supports this in two ways: we allow for partially formatted prompts (1) with string values, (2) with functions that return string values. These two different ways support different use cases. In the documentation below we go over the motivations for both use cases as well as how to do it in LangChain.
Partial With Strings#
One common use case for wanting to partial a prompt template is if you get some of the variables before others. For example, suppose you have a prompt template that requires two variables, foo and baz. If you get the foo value early on in the chain, but the baz value later, it can be annoying to wait until you have both variables in the same place to pass them to the prompt template. Instead, you can partial the prompt template with the foo value, and then pass the partialed prompt template along and just use that. Below is an example of doing this:
from langchain.prompts import PromptTemplate
prompt = PromptTemplate(template="{foo}{bar}", input_variables=["foo", "bar"])
partial_prompt = prompt.partial(foo="foo");
print(partial_prompt.format(bar="baz"))
foobaz
You can also just initialize the prompt with the partialed variables.
prompt = PromptTemplate(template="{foo}{bar}", input_variables=["bar"], partial_variables={"foo": "foo"})
print(prompt.format(bar="baz"))
foobaz
Partial With Functions#
The other common use is to partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to have the current date. You can’t hard code it in the prompt, and passing it along with the other input variables is a bit annoying. In this case, it’s very handy to be able to partial the prompt with a function that always returns the current date.
from datetime import datetime
def _get_datetime():
now = datetime.now()
return now.strftime("%m/%d/%Y, %H:%M:%S")
prompt = PromptTemplate(
template="Tell me a {adjective} joke about the day {date}",
input_variables=["adjective", "date"]
);
partial_prompt = prompt.partial(date=_get_datetime)
print(partial_prompt.format(adjective="funny"))
Tell me a funny joke about the day 02/27/2023, 22:15:16
You can also just initialize the prompt with the partialed variables, which often makes more sense in this workflow.
prompt = PromptTemplate(
template="Tell me a {adjective} joke about the day {date}",
input_variables=["adjective"],
partial_variables={"date": _get_datetime}
);
print(prompt.format(adjective="funny"))
Tell me a funny joke about the day 02/27/2023, 22:15:16
Output Parsers#
Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.
Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:
get_format_instructions() -> str: A method which returns a string containing instructions for how the output of a language model should be formatted.
parse(str) -> Any: A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.
And then one optional one:
parse_with_prompt(str, PromptValue) -> Any: A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
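To make the interface concrete, here is a minimal sketch of a custom parser implementing the two required methods (the class and its format are invented for illustration, not a built-in parser):
from langchain.schema import BaseOutputParser

class BulletListOutputParser(BaseOutputParser):
    """Parses a newline-separated bulleted list into a list of strings."""

    def get_format_instructions(self) -> str:
        return "Return the answer as a bulleted list, one item per line, starting each line with '- '."

    def parse(self, text: str):
        # Strip the leading bullet from each non-empty line.
        return [line.lstrip("- ").strip() for line in text.strip().splitlines() if line.strip()]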
Below we go over the main type of output parser, the PydanticOutputParser. See the examples folder for other options.
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import List
model_name = 'text-davinci-003'
temperature = 0.0
model = OpenAI(model_name=model_name, temperature=temperature)
# Define your desired data structure.
class Joke(BaseModel):
setup: str = Field(description="question to set up a joke")
punchline: str = Field(description="answer to resolve the joke")
# You can add custom validation logic easily with Pydantic.
@validator('setup')
def question_ends_with_question_mark(cls, field):
if field[-1] != '?':
raise ValueError("Badly formed question!")
return field
# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
# And a query intended to prompt a language model to populate the data structure.
joke_query = "Tell me a joke."
_input = prompt.format_prompt(query=joke_query)
output = model(_input.to_string())
parser.parse(output)
Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')
RetryOutputParser#
While in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it is not. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example.
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser, OutputFixingParser, RetryOutputParser
from pydantic import BaseModel, Field, validator
from typing import List
template = """Based on the user question, provide an Action and Action Input for what step should be taken.
{format_instructions}
Question: {query}
Response:"""
class Action(BaseModel):
action: str = Field(description="action to take")
action_input: str = Field(description="input to the action")
parser = PydanticOutputParser(pydantic_object=Action)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
prompt_value = prompt.format_prompt(query="who is leo di caprios gf?")
bad_response = '{"action": "search"}'
If we try to parse this response as is, we will get an error
parser.parse(bad_response)
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:24, in PydanticOutputParser.parse(self, text)
23 json_object = json.loads(json_str)
---> 24 return self.pydantic_object.parse_obj(json_object)
26 except (json.JSONDecodeError, ValidationError) as e:
File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:527, in pydantic.main.BaseModel.parse_obj()
File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:342, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for Action
action_input
field required (type=value_error.missing)
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
Cell In[6], line 1
----> 1 parser.parse(bad_response)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text)
27 name = self.pydantic_object.__name__
28 msg = f"Failed to parse {name} from completion {text}. Got: {e}"
---> 29 raise OutputParserException(msg)
OutputParserException: Failed to parse Action from completion {"action": "search"}. Got: 1 validation error for Action
action_input
field required (type=value_error.missing)
If we try to use the OutputFixingParser to fix this error, it will be confused - namely, it doesn’t know what to actually put for action input.
fix_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())
fix_parser.parse(bad_response)
Action(action='search', action_input='')
Instead, we can use the RetryOutputParser, which passes in the prompt (as well as the original output) to try again to get a better response.
from langchain.output_parsers import RetryWithErrorOutputParser
retry_parser = RetryWithErrorOutputParser.from_llm(parser=parser, llm=OpenAI(temperature=0))
retry_parser.parse_with_prompt(bad_response, prompt_value)
Action(action='search', action_input='who is leo di caprios gf?')
Enum Output Parser#
This notebook shows how to use an Enum output parser.
from langchain.output_parsers.enum import EnumOutputParser
from enum import Enum
class Colors(Enum):
RED = "red"
GREEN = "green"
BLUE = "blue"
parser = EnumOutputParser(enum=Colors)
parser.parse("red")
<Colors.RED: 'red'>
# Can handle spaces
parser.parse(" green")
<Colors.GREEN: 'green'>
# And new lines
parser.parse("blue\n")
<Colors.BLUE: 'blue'>
# And raises errors when appropriate
parser.parse("yellow")
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File ~/workplace/langchain/langchain/output_parsers/enum.py:25, in EnumOutputParser.parse(self, response)
24 try:
---> 25 return self.enum(response.strip())
26 except ValueError:
File ~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:315, in EnumMeta.__call__(cls, value, names, module, qualname, type, start)
314 if names is None: # simple value lookup
--> 315 return cls.__new__(cls, value)
316 # otherwise, functional API: we're creating a new Enum type
File ~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:611, in Enum.__new__(cls, value)
610 if result is None and exc is None:
--> 611 raise ve_exc
612 elif exc is None:
ValueError: 'yellow' is not a valid Colors
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
Cell In[8], line 2
1 # And raises errors when appropriate
----> 2 parser.parse("yellow")
File ~/workplace/langchain/langchain/output_parsers/enum.py:27, in EnumOutputParser.parse(self, response)
25 return self.enum(response.strip())
26 except ValueError:
---> 27 raise OutputParserException(
28 f"Response '{response}' is not one of the "
29 f"expected values: {self._valid_values}"
30 )
OutputParserException: Response 'yellow' is not one of the expected values: ['red', 'green', 'blue']
OutputFixingParser#
This output parser wraps another output parser and tries to fix any mistakes
The Pydantic guardrail simply tries to parse the LLM response. If it does not parse correctly, then it errors.
But we can do other things besides throw errors. Specifically, we can pass the misformatted output, along with the formatted instructions, to the model and ask it to fix it.
For this example, we’ll use the above OutputParser. Here’s what happens if we pass it a result that does not comply with the schema:
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import List
class Actor(BaseModel):
name: str = Field(description="name of an actor")
film_names: List[str] = Field(description="list of names of films they starred in")
actor_query = "Generate the filmography for a random actor."
parser = PydanticOutputParser(pydantic_object=Actor)
misformatted = "{'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}"
parser.parse(misformatted)
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:23, in PydanticOutputParser.parse(self, text)
22 json_str = match.group()
---> 23 json_object = json.loads(json_str)
24 return self.pydantic_object.parse_obj(json_object)
File ~/.pyenv/versions/3.9.1/lib/python3.9/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
343 if (cls is None and object_hook is None and
344 parse_int is None and parse_float is None and
345 parse_constant is None and object_pairs_hook is None and not kw):
--> 346 return _default_decoder.decode(s)
347 if cls is None:
File ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
333 """Return the Python representation of ``s`` (a ``str`` instance
334 containing a JSON document).
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
File ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:353, in JSONDecoder.raw_decode(self, s, idx)
352 try:
--> 353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
Cell In[6], line 1
----> 1 parser.parse(misformatted)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text)
27 name = self.pydantic_object.__name__
28 msg = f"Failed to parse {name} from completion {text}. Got: {e}"
---> 29 raise OutputParserException(msg)
OutputParserException: Failed to parse Actor from completion {'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}. Got: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
Now we can construct and use a OutputFixingParser. This output parser takes as an argument another output parser but also an LLM with which to try to correct any formatting mistakes.
from langchain.output_parsers import OutputFixingParser
new_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())
new_parser.parse(misformatted)
Actor(name='Tom Hanks', film_names=['Forrest Gump'])
PydanticOutputParser#
This output parser allows users to specify an arbitrary JSON schema and query LLMs for JSON outputs that conform to that schema.
Keep in mind that large language models are leaky abstractions! You'll have to use an LLM with sufficient capacity to generate well-formed JSON. In the OpenAI family, DaVinci can do this reliably, but Curie's ability already drops off dramatically.
Use Pydantic to declare your data model. Pydantic's BaseModel is like a Python dataclass, but with actual type checking + coercion.
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import List
model_name = 'text-davinci-003'
temperature = 0.0
model = OpenAI(model_name=model_name, temperature=temperature)
# Define your desired data structure.
class Joke(BaseModel):
setup: str = Field(description="question to set up a joke")
punchline: str = Field(description="answer to resolve the joke")
# You can add custom validation logic easily with Pydantic.
@validator('setup')
def question_ends_with_question_mark(cls, field):
if field[-1] != '?':
raise ValueError("Badly formed question!")
return field
# And a query intended to prompt a language model to populate the data structure.
joke_query = "Tell me a joke."
# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
_input = prompt.format_prompt(query=joke_query)
output = model(_input.to_string())
parser.parse(output)
Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')
# Here's another example, but with a compound typed field.
class Actor(BaseModel):
name: str = Field(description="name of an actor")
film_names: List[str] = Field(description="list of names of films they starred in")
actor_query = "Generate the filmography for a random actor."
parser = PydanticOutputParser(pydantic_object=Actor)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
_input = prompt.format_prompt(query=actor_query)
output = model(_input.to_string())
parser.parse(output)
Actor(name='Tom Hanks', film_names=['Forrest Gump', 'Saving Private Ryan', 'The Green Mile', 'Cast Away', 'Toy Story'])
Structured Output Parser#
While the Pydantic/JSON parser is more powerful, we initially experimented with data structures having text fields only.
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
Here we define the response schema we want to receive.
response_schemas = [
ResponseSchema(name="answer", description="answer to the user's question"),
ResponseSchema(name="source", description="source used to answer the user's question, should be a website.")
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
We now get a string that contains instructions for how the response should be formatted, and we then insert that into our prompt.
format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(
template="answer the users question as best as possible.\n{format_instructions}\n{question}",
input_variables=["question"],
partial_variables={"format_instructions": format_instructions}
)
We can now use this to format a prompt to send to the language model, and then parse the returned result.
model = OpenAI(temperature=0)
_input = prompt.format_prompt(question="what's the capital of france?")
output = model(_input.to_string())
output_parser.parse(output)
{'answer': 'Paris',
'source': 'https://www.worldatlas.com/articles/what-is-the-capital-of-france.html'}
And here’s an example of using this in a chat model
chat_model = ChatOpenAI(temperature=0)
prompt = ChatPromptTemplate(
messages=[
HumanMessagePromptTemplate.from_template("answer the users question as best as possible.\n{format_instructions}\n{question}")
],
input_variables=["question"],
partial_variables={"format_instructions": format_instructions}
)
_input = prompt.format_prompt(question="what's the capital of france?")
output = chat_model(_input.to_messages())
output_parser.parse(output.content)
{'answer': 'Paris', 'source': 'https://en.wikipedia.org/wiki/Paris'}
Datetime#
This OutputParser shows how to parse LLM output into datetime format.
from langchain.prompts import PromptTemplate
from langchain.output_parsers import DatetimeOutputParser
from langchain.chains import LLMChain
from langchain.llms import OpenAI
output_parser = DatetimeOutputParser()
template = """Answer the users question:
{question}
{format_instructions}"""
prompt = PromptTemplate.from_template(template, partial_variables={"format_instructions": output_parser.get_format_instructions()})
chain = LLMChain(prompt=prompt, llm=OpenAI())
output = chain.run("around when was bitcoin founded?")
output
'\n\n2008-01-03T18:15:05.000000Z'
output_parser.parse(output)
datetime.datetime(2008, 1, 3, 18, 15, 5)
previous
CommaSeparatedListOutputParser
next
Enum Output Parser
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/datetime.html |
ac26bb1dc437-0 | .ipynb
.pdf
CommaSeparatedListOutputParser
CommaSeparatedListOutputParser#
Here’s another parser strictly less powerful than Pydantic/JSON parsing.
from langchain.output_parsers import CommaSeparatedListOutputParser
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
output_parser = CommaSeparatedListOutputParser()
format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(
template="List five {subject}.\n{format_instructions}",
input_variables=["subject"],
partial_variables={"format_instructions": format_instructions}
)
model = OpenAI(temperature=0)
_input = prompt.format(subject="ice cream flavors")
output = model(_input)
output_parser.parse(output)
['Vanilla',
'Chocolate',
'Strawberry',
'Mint Chocolate Chip',
'Cookies and Cream']
previous
Output Parsers
next
Datetime
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/comma_separated.html |
040695d7d131-0 | .ipynb
.pdf
Maximal Marginal Relevance ExampleSelector
Maximal Marginal Relevance ExampleSelector#
The MaxMarginalRelevanceExampleSelector selects examples based on a combination of which examples are most similar to the inputs, while also optimizing for diversity. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs, and then iteratively adding them while penalizing them for closeness to already selected examples.
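As a rough illustration of the idea only (not LangChain's exact implementation), the following hypothetical helper greedily trades off relevance to the query against redundancy with already-selected examples:
import numpy as np
# Hypothetical sketch of the MMR selection loop over precomputed embedding vectors.
def mmr_select(query_vec, example_vecs, k=2, lambda_mult=0.5):
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    selected, candidates = [], list(range(len(example_vecs)))
    while candidates and len(selected) < k:
        def score(i):
            relevance = cos(query_vec, example_vecs[i])
            redundancy = max((cos(example_vecs[i], example_vecs[j]) for j in selected), default=0.0)
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected  # indices of the chosen examples, relevant-but-diverse first
Here lambda_mult controls the trade-off: values near 1.0 favor pure similarity to the input, while values near 0.0 favor diversity among the selected examples.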
from langchain.prompts.example_selector import MaxMarginalRelevanceExampleSelector, SemanticSimilarityExampleSelector
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
# These are a lot of examples of a pretend task of creating antonyms.
examples = [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
{"input": "energetic", "output": "lethargic"},
{"input": "sunny", "output": "gloomy"},
{"input": "windy", "output": "calm"},
]
example_selector = MaxMarginalRelevanceExampleSelector.from_examples(
# This is the list of examples available to select from.
examples,
# This is the embedding class used to produce embeddings which are used to measure semantic similarity.
OpenAIEmbeddings(),
# This is the VectorStore class that is used to store the embeddings and do a similarity search over.
FAISS,
# This is the number of examples to produce.
k=2
) | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/mmr.html |
040695d7d131-1 | # This is the number of examples to produce.
k=2
)
mmr_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
# Input is a feeling, so should select the happy/sad example as the first one
print(mmr_prompt.format(adjective="worried"))
Give the antonym of every input
Input: happy
Output: sad
Input: windy
Output: calm
Input: worried
Output:
# Let's compare this to what we would just get if we went solely off of similarity,
# by using SemanticSimilarityExampleSelector instead of MaxMarginalRelevanceExampleSelector.
example_selector = SemanticSimilarityExampleSelector.from_examples(
# This is the list of examples available to select from.
examples,
# This is the embedding class used to produce embeddings which are used to measure semantic similarity.
OpenAIEmbeddings(),
# This is the VectorStore class that is used to store the embeddings and do a similarity search over.
FAISS,
# This is the number of examples to produce.
k=2
)
similar_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
print(similar_prompt.format(adjective="worried"))
Give the antonym of every input
Input: happy | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/mmr.html |
040695d7d131-2 | Give the antonym of every input
Input: happy
Output: sad
Input: sunny
Output: gloomy
Input: worried
Output:
previous
LengthBased ExampleSelector
next
NGram Overlap ExampleSelector
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/mmr.html |
df338a369dbf-0 | .md
.pdf
How to create a custom example selector
Contents
Implement custom example selector
Use custom example selector
How to create a custom example selector#
In this tutorial, we’ll create a custom example selector that selects two examples at random from a given list of examples.
An ExampleSelector must implement two methods:
An add_example method which takes in an example and adds it into the ExampleSelector
A select_examples method which takes in input variables (which are meant to be user input) and returns a list of examples to use in the few shot prompt.
Let’s implement a custom ExampleSelector that just selects two examples at random.
Note
Take a look at the current set of example selector implementations supported in LangChain here.
Implement custom example selector#
from langchain.prompts.example_selector.base import BaseExampleSelector
from typing import Dict, List
import numpy as np
class CustomExampleSelector(BaseExampleSelector):
def __init__(self, examples: List[Dict[str, str]]):
self.examples = examples
def add_example(self, example: Dict[str, str]) -> None:
"""Add new example to store for a key."""
self.examples.append(example)
def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
"""Select which examples to use based on the inputs."""
return np.random.choice(self.examples, size=2, replace=False)
Use custom example selector#
examples = [
{"foo": "1"},
{"foo": "2"},
{"foo": "3"}
]
# Initialize example selector.
example_selector = CustomExampleSelector(examples)
# Select examples
example_selector.select_examples({"foo": "foo"})
# -> array([{'foo': '2'}, {'foo': '3'}], dtype=object) | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/custom_example_selector.html |
df338a369dbf-1 | # Add new example to the set of examples
example_selector.add_example({"foo": "4"})
example_selector.examples
# -> [{'foo': '1'}, {'foo': '2'}, {'foo': '3'}, {'foo': '4'}]
# Select examples
example_selector.select_examples({"foo": "foo"})
# -> array([{'foo': '1'}, {'foo': '4'}], dtype=object)
previous
Example Selectors
next
LengthBased ExampleSelector
Contents
Implement custom example selector
Use custom example selector
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/custom_example_selector.html |
36e97fbbb6f3-0 | .ipynb
.pdf
NGram Overlap ExampleSelector
NGram Overlap ExampleSelector#
The NGramOverlapExampleSelector selects and orders examples based on which examples are most similar to the input, according to an ngram overlap score. The ngram overlap score is a float between 0.0 and 1.0, inclusive.
The selector allows for a threshold score to be set. Examples with an ngram overlap score less than or equal to the threshold are excluded. The threshold is set to -1.0, by default, so will not exclude any examples, only reorder them. Setting the threshold to 0.0 will exclude examples that have no ngram overlaps with the input.
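As a rough, hypothetical illustration of an ngram overlap score between 0.0 and 1.0 (the selector's actual scoring may differ), consider the following helper:
# Illustrative only: share of word n-grams two strings have in common.
def ngram_overlap(candidate: str, reference: str, n: int = 1) -> float:
    def ngrams(text):
        words = text.split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    cand, ref = ngrams(candidate), ngrams(reference)
    if not cand or not ref:
        return 0.0
    return len(cand & ref) / max(len(cand), len(ref))
print(ngram_overlap("Spot can run fast.", "Spot can run.", n=2))  # nonzero overlap
print(ngram_overlap("Spot can run fast.", "My dog barks.", n=2))  # 0.0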
from langchain.prompts import PromptTemplate
from langchain.prompts.example_selector.ngram_overlap import NGramOverlapExampleSelector
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
# These are a lot of examples of a pretend task of creating antonyms.
examples = [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
{"input": "energetic", "output": "lethargic"},
{"input": "sunny", "output": "gloomy"},
{"input": "windy", "output": "calm"},
]
# These are examples of a fictional translation task.
examples = [
{"input": "See Spot run.", "output": "Ver correr a Spot."},
{"input": "My dog barks.", "output": "Mi perro ladra."},
{"input": "Spot can run.", "output": "Spot puede correr."}, | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html |
36e97fbbb6f3-1 | {"input": "Spot can run.", "output": "Spot puede correr."},
]
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
example_selector = NGramOverlapExampleSelector(
# These are the examples it has available to choose from.
examples=examples,
# This is the PromptTemplate being used to format the examples.
example_prompt=example_prompt,
# This is the threshold at or below which examples are excluded.
# It is set to -1.0 by default.
threshold=-1.0,
# For negative threshold:
# Selector sorts examples by ngram overlap score, and excludes none.
# For threshold greater than 1.0:
# Selector excludes all examples, and returns an empty list.
# For threshold equal to 0.0:
# Selector sorts examples by ngram overlap score,
# and excludes those with no ngram overlap with input.
)
dynamic_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the Spanish translation of every input",
suffix="Input: {sentence}\nOutput:",
input_variables=["sentence"],
)
# An example input with large ngram overlap with "Spot can run."
# and no overlap with "My dog barks."
print(dynamic_prompt.format(sentence="Spot can run fast."))
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: See Spot run.
Output: Ver correr a Spot.
Input: My dog barks. | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html |
36e97fbbb6f3-2 | Output: Ver correr a Spot.
Input: My dog barks.
Output: Mi perro ladra.
Input: Spot can run fast.
Output:
# You can add examples to NGramOverlapExampleSelector as well.
new_example = {"input": "Spot plays fetch.", "output": "Spot juega a buscar."}
example_selector.add_example(new_example)
print(dynamic_prompt.format(sentence="Spot can run fast."))
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: See Spot run.
Output: Ver correr a Spot.
Input: Spot plays fetch.
Output: Spot juega a buscar.
Input: My dog barks.
Output: Mi perro ladra.
Input: Spot can run fast.
Output:
# You can set a threshold at which examples are excluded.
# For example, setting threshold equal to 0.0
# excludes examples with no ngram overlaps with input.
# Since "My dog barks." has no ngram overlaps with "Spot can run fast."
# it is excluded.
example_selector.threshold=0.0
print(dynamic_prompt.format(sentence="Spot can run fast."))
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: See Spot run.
Output: Ver correr a Spot.
Input: Spot plays fetch.
Output: Spot juega a buscar.
Input: Spot can run fast.
Output:
# Setting small nonzero threshold
example_selector.threshold=0.09
print(dynamic_prompt.format(sentence="Spot can play fetch."))
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: Spot plays fetch.
Output: Spot juega a buscar. | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html |
36e97fbbb6f3-3 | Input: Spot plays fetch.
Output: Spot juega a buscar.
Input: Spot can play fetch.
Output:
# Setting threshold greater than 1.0
example_selector.threshold=1.0+1e-9
print(dynamic_prompt.format(sentence="Spot can play fetch."))
Give the Spanish translation of every input
Input: Spot can play fetch.
Output:
previous
Maximal Marginal Relevance ExampleSelector
next
Similarity ExampleSelector
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html |
d776f7426b1e-0 | .ipynb
.pdf
Similarity ExampleSelector
Similarity ExampleSelector#
The SemanticSimilarityExampleSelector selects examples based on which examples are most similar to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs.
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
# These are a lot of examples of a pretend task of creating antonyms.
examples = [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
{"input": "energetic", "output": "lethargic"},
{"input": "sunny", "output": "gloomy"},
{"input": "windy", "output": "calm"},
]
example_selector = SemanticSimilarityExampleSelector.from_examples(
# This is the list of examples available to select from.
examples,
# This is the embedding class used to produce embeddings which are used to measure semantic similarity.
OpenAIEmbeddings(),
# This is the VectorStore class that is used to store the embeddings and do a similarity search over.
Chroma,
# This is the number of examples to produce.
k=1
)
similar_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input", | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/similarity.html |
d776f7426b1e-1 | example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
# Input is a feeling, so should select the happy/sad example
print(similar_prompt.format(adjective="worried"))
Give the antonym of every input
Input: happy
Output: sad
Input: worried
Output:
# Input is a measurement, so should select the tall/short example
print(similar_prompt.format(adjective="fat"))
Give the antonym of every input
Input: happy
Output: sad
Input: fat
Output:
# You can add new examples to the SemanticSimilarityExampleSelector as well
similar_prompt.example_selector.add_example({"input": "enthusiastic", "output": "apathetic"})
print(similar_prompt.format(adjective="joyful"))
Give the antonym of every input
Input: happy
Output: sad
Input: joyful
Output:
previous
NGram Overlap ExampleSelector
next
Output Parsers
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/similarity.html |
b7dddab2fbf8-0 | .ipynb
.pdf
LengthBased ExampleSelector
LengthBased ExampleSelector#
This ExampleSelector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more.
from langchain.prompts import PromptTemplate
from langchain.prompts import FewShotPromptTemplate
from langchain.prompts.example_selector import LengthBasedExampleSelector
# These are a lot of examples of a pretend task of creating antonyms.
examples = [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
{"input": "energetic", "output": "lethargic"},
{"input": "sunny", "output": "gloomy"},
{"input": "windy", "output": "calm"},
]
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
example_selector = LengthBasedExampleSelector(
# These are the examples it has available to choose from.
examples=examples,
# This is the PromptTemplate being used to format the examples.
example_prompt=example_prompt,
# This is the maximum length that the formatted examples should be.
# Length is measured by the get_text_length function below.
max_length=25,
# This is the function used to get the length of a string, which is used
# to determine which examples to include. It is commented out because
# it is provided as a default value if none is specified. | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/length_based.html |
b7dddab2fbf8-1 | # it is provided as a default value if none is specified.
# get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x))
)
dynamic_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
# An example with small input, so it selects all examples.
print(dynamic_prompt.format(adjective="big"))
Give the antonym of every input
Input: happy
Output: sad
Input: tall
Output: short
Input: energetic
Output: lethargic
Input: sunny
Output: gloomy
Input: windy
Output: calm
Input: big
Output:
# An example with long input, so it selects only one example.
long_string = "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else"
print(dynamic_prompt.format(adjective=long_string))
Give the antonym of every input
Input: happy
Output: sad
Input: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else
Output:
# You can add an example to an example selector as well.
new_example = {"input": "big", "output": "small"}
dynamic_prompt.example_selector.add_example(new_example)
print(dynamic_prompt.format(adjective="enthusiastic"))
Give the antonym of every input
Input: happy
Output: sad
Input: tall
Output: short
Input: energetic
Output: lethargic
Input: sunny
Output: gloomy
Input: windy
Output: calm | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/length_based.html |
b7dddab2fbf8-2 | Input: sunny
Output: gloomy
Input: windy
Output: calm
Input: big
Output: small
Input: enthusiastic
Output:
previous
How to create a custom example selector
next
Maximal Marginal Relevance ExampleSelector
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/length_based.html |
2fef064145cc-0 | .rst
.pdf
Chat Models
Chat Models#
Note
Conceptual Guide
Chat models are a variation on language models.
While chat models use language models under the hood, the interface they expose is a bit different.
Rather than expose a “text in, text out” API, they expose an interface where “chat messages” are the inputs and outputs.
Chat model APIs are fairly new, so we are still figuring out the correct abstractions.
The following sections of documentation are provided:
Getting Started: An overview of all the functionality the LangChain chat model class provides.
How-To Guides: A collection of how-to guides. These highlight how to accomplish various objectives with our chat model class (streaming, async, etc).
Integrations: A collection of examples on how to integrate different chat model providers with LangChain (OpenAI, Hugging Face, etc).
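As a minimal sketch of the message-based interface described above (assuming the OpenAI integration and an OPENAI_API_KEY in the environment):
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage
# Chat messages in, chat message out.
chat = ChatOpenAI(temperature=0)
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Translate 'hello' to French."),
]
response = chat(messages)  # returns an AIMessage
print(response.content)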
previous
LLMs
next
Getting Started
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/models/chat.html |
40921b52a82e-0 | .rst
.pdf
Text Embedding Models
Text Embedding Models#
Note
Conceptual Guide
This documentation goes over how to use the Embedding class in LangChain.
The Embedding class is a class designed for interfacing with embeddings. There are lots of Embedding providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them.
Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
The base Embedding class in LangChain exposes two methods: embed_documents and embed_query. The largest difference is that these two methods have different interfaces: one works over multiple documents, while the other works over a single document. Besides this, another reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
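As a minimal sketch of the two methods (assuming the OpenAI integration and an OPENAI_API_KEY in the environment):
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
# embed_documents: multiple texts that will be searched over
doc_vectors = embeddings.embed_documents(["Hello world", "Goodbye world"])
# embed_query: a single search query
query_vector = embeddings.embed_query("Hi there")
print(len(doc_vectors), len(doc_vectors[0]), len(query_vector))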
The following integrations exist for text embeddings.
Aleph Alpha
AzureOpenAI
Bedrock Embeddings
Cohere
Elasticsearch
Fake Embeddings
Google Cloud Platform Vertex AI PaLM
Hugging Face Hub
InstructEmbeddings
Jina
Llama-cpp
MiniMax
ModelScope
MosaicML embeddings
OpenAI
SageMaker Endpoint Embeddings
Self Hosted Embeddings
Sentence Transformers Embeddings
TensorflowHub
previous
PromptLayer ChatOpenAI
next
Aleph Alpha
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/models/text_embedding.html |
efdff5a448f3-0 | .ipynb
.pdf
Getting Started
Contents
Language Models
text -> text interface
messages -> message interface
Getting Started#
One of the core value props of LangChain is that it provides a standard interface to models. This allows you to swap easily between models. At a high level, there are two main types of models:
Language Models: good for text generation
Text Embedding Models: good for turning text into a numerical representation
Language Models#
There are two different sub-types of Language Models:
LLMs: these wrap APIs which take text in and return text
ChatModels: these wrap models which take chat messages in and return a chat message
This is a subtle difference, but a value prop of LangChain is that we provide a unified interface across these. This is nice because although the underlying APIs are actually quite different, you often want to use them interchangeably.
To see this, let’s look at OpenAI (a wrapper around OpenAI’s LLM) vs ChatOpenAI (a wrapper around OpenAI’s ChatModel).
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
llm = OpenAI()
chat_model = ChatOpenAI()
text -> text interface#
llm.predict("say hi!")
'\n\nHi there!'
chat_model.predict("say hi!")
'Hello there!'
messages -> message interface#
from langchain.schema import HumanMessage
llm.predict_messages([HumanMessage(content="say hi!")])
AIMessage(content='\n\nHello! Nice to meet you!', additional_kwargs={}, example=False)
chat_model.predict_messages([HumanMessage(content="say hi!")])
AIMessage(content='Hello! How can I assist you today?', additional_kwargs={}, example=False)
previous
Models
next
LLMs
Contents
Language Models
text -> text interface
messages -> message interface
By Harrison Chase | https://python.langchain.com/en/latest/modules/models/getting_started.html |
efdff5a448f3-1 | Language Models
text -> text interface
messages -> message interface
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/models/getting_started.html |
5df9f3e7a291-0 | .rst
.pdf
LLMs
LLMs#
Note
Conceptual Guide
Large Language Models (LLMs) are a core component of LangChain.
LangChain is not a provider of LLMs, but rather provides a standard interface through which
you can interact with a variety of LLMs.
The following sections of documentation are provided:
Getting Started: An overview of all the functionality the LangChain LLM class provides.
How-To Guides: A collection of how-to guides. These highlight how to accomplish various objectives with our LLM class (streaming, async, etc).
Integrations: A collection of examples on how to integrate different LLM providers with LangChain (OpenAI, Hugging Face, etc).
Reference: API reference documentation for all LLM classes.
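As a minimal sketch of that standard interface (assuming the relevant provider API keys are set in the environment), two different providers can be called the same way:
from langchain.llms import OpenAI, Cohere
# Both wrappers expose the same "text in, text out" interface.
for llm in [OpenAI(temperature=0), Cohere(temperature=0)]:
    print(llm("Say hello in one short sentence."))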
previous
Getting Started
next
Getting Started
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/models/llms.html |
6e7168377ec2-0 | .rst
.pdf
Integrations
Integrations#
The examples here are all “how-to” guides for how to integrate with various LLM providers.
AI21
Aleph Alpha
Anyscale
Azure OpenAI
Banana
Beam integration for langchain
Amazon Bedrock
CerebriumAI
Cohere
C Transformers
Databricks
DeepInfra
ForefrontAI
Google Cloud Platform Vertex AI PaLM
GooseAI
GPT4All
Hugging Face Hub
Hugging Face Local Pipelines
Huggingface TextGen Inference
Structured Decoding with JSONFormer
Llama-cpp
Manifest
Modal
MosaicML
NLP Cloud
OpenAI
OpenLM
Petals
PipelineAI
Basic LLM usage
Control the output structure/ type of LLMs
Chaining
PromptLayer OpenAI
Structured Decoding with RELLM
Replicate
Runhouse
SageMakerEndpoint
StochasticAI
Writer
previous
How to track token usage
next
AI21
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/models/llms/integrations.html |
1a206d5936d7-0 | .rst
.pdf
Generic Functionality
Generic Functionality#
The examples here are all “how-to” guides for working with LLMs.
How to use the async API for LLMs
How to write a custom LLM wrapper
How (and why) to use the fake LLM
How (and why) to use the human input LLM
How to cache LLM calls
How to serialize LLM classes
How to stream LLM and Chat Model responses
How to track token usage
previous
Getting Started
next
How to use the async API for LLMs
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/models/llms/how_to_guides.html |
ce120e6d0a36-0 | .ipynb
.pdf
Getting Started
Getting Started#
This notebook goes over how to use the LLM class in LangChain.
The LLM class is a class designed for interfacing with LLMs. There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them. In this part of the documentation, we will focus on generic LLM functionality. For details on working with a specific LLM wrapper, please see the examples in the How-To section.
For this notebook, we will work with an OpenAI LLM wrapper, although the functionalities highlighted are generic for all LLM types.
from langchain.llms import OpenAI
llm = OpenAI(model_name="text-ada-001", n=2, best_of=2)
Generate Text: The most basic functionality an LLM has is just the ability to call it, passing in a string and getting back a string.
llm("Tell me a joke")
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
Generate: More broadly, you can call it with a list of inputs, getting back a more complete response than just the text. This complete response includes things like multiple top responses, as well as LLM provider specific information
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"]*15)
len(llm_result.generations)
30
llm_result.generations[0]
[Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'),
Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side.')]
llm_result.generations[-1] | https://python.langchain.com/en/latest/modules/models/llms/getting_started.html |
ce120e6d0a36-1 | llm_result.generations[-1]
[Generation(text="\n\nWhat if love neverspeech\n\nWhat if love never ended\n\nWhat if love was only a feeling\n\nI'll never know this love\n\nIt's not a feeling\n\nBut it's what we have for each other\n\nWe just know that love is something strong\n\nAnd we can't help but be happy\n\nWe just feel what love is for us\n\nAnd we love each other with all our heart\n\nWe just don't know how\n\nHow it will go\n\nBut we know that love is something strong\n\nAnd we'll always have each other\n\nIn our lives."),
Generation(text='\n\nOnce upon a time\n\nThere was a love so pure and true\n\nIt lasted for centuries\n\nAnd never became stale or dry\n\nIt was moving and alive\n\nAnd the heart of the love-ick\n\nIs still beating strong and true.')]
You can also access provider specific information that is returned. This information is NOT standardized across providers.
llm_result.llm_output
{'token_usage': {'completion_tokens': 3903,
'total_tokens': 4023,
'prompt_tokens': 120}}
Number of Tokens: You can also estimate how many tokens a piece of text will be in that model. This is useful because models have a context length (and cost more for more tokens), which means you need to be aware of how long the text you are passing in is.
Notice that by default the tokens are estimated using tiktoken (except for legacy Python versions below 3.8, where a Hugging Face tokenizer is used)
llm.get_num_tokens("what a joke")
3
previous
LLMs
next
Generic Functionality
By Harrison Chase
© Copyright 2023, Harrison Chase. | https://python.langchain.com/en/latest/modules/models/llms/getting_started.html |
ce120e6d0a36-2 | By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/models/llms/getting_started.html |
055918386f2d-0 | .ipynb
.pdf
Beam integration for langchain
Beam integration for langchain#
This example calls the Beam API wrapper to deploy and query an instance of the gpt2 LLM in a cloud deployment. It requires the Beam library to be installed and a Beam Client ID and Client Secret to be registered. Calling the wrapper creates and runs an instance of the model and returns text related to the prompt. Additional calls can then be made directly against the Beam API.
Create an account, if you don’t have one already. Grab your API keys from the dashboard.
Install the Beam CLI
!curl https://raw.githubusercontent.com/slai-labs/get-beam/main/get-beam.sh -sSfL | sh
Register API Keys and set your beam client id and secret environment variables:
import os
import subprocess
beam_client_id = "<Your beam client id>"
beam_client_secret = "<Your beam client secret>"
# Set the environment variables
os.environ['BEAM_CLIENT_ID'] = beam_client_id
os.environ['BEAM_CLIENT_SECRET'] = beam_client_secret
# Run the beam configure command
!beam configure --clientId={beam_client_id} --clientSecret={beam_client_secret}
Install the Beam SDK:
!pip install beam-sdk
Deploy and call Beam directly from langchain!
Note that a cold start might take a couple of minutes to return the response, but subsequent calls will be faster!
from langchain.llms.beam import Beam
llm = Beam(model_name="gpt2",
name="langchain-gpt2-test",
cpu=8,
memory="32Gi",
gpu="A10G",
python_version="python3.8",
python_packages=[
"diffusers[torch]>=0.10",
"transformers",
"torch",
"pillow",
"accelerate", | https://python.langchain.com/en/latest/modules/models/llms/integrations/beam.html |
055918386f2d-1 | "torch",
"pillow",
"accelerate",
"safetensors",
"xformers",],
max_length="50",
verbose=False)
llm._deploy()
response = llm._call("Running machine learning on a remote GPU")
print(response)
previous
Banana
next
Amazon Bedrock
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/models/llms/integrations/beam.html |
ec23e5c7b605-0 | .ipynb
.pdf
OpenAI
OpenAI#
OpenAI offers a spectrum of models with different levels of power suitable for different tasks.
This example goes over how to use LangChain to interact with OpenAI models
# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass
OPENAI_API_KEY = getpass()
········
import os
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
from langchain.llms import OpenAI
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = OpenAI()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
' Justin Bieber was born in 1994, so we are looking for the Super Bowl winner from that year. The Super Bowl in 1994 was Super Bowl XXVIII, and the winner was the Dallas Cowboys.'
If you are behind an explicit proxy, you can use the OPENAI_PROXY environment variable to pass through
os.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080"
previous
NLP Cloud
next
OpenLM
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/models/llms/integrations/openai.html |
f26bb7448e78-0 | .ipynb
.pdf
Azure OpenAI
Contents
API configuration
Deployments
Azure OpenAI#
This notebook goes over how to use Langchain with Azure OpenAI.
The Azure OpenAI API is compatible with OpenAI’s API. The openai Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you call OpenAI with the exceptions noted below.
API configuration#
You can configure the openai package to use Azure OpenAI using environment variables. The following is for bash:
# Set this to `azure`
export OPENAI_API_TYPE=azure
# The API version you want to use: set this to `2022-12-01` for the released version.
export OPENAI_API_VERSION=2022-12-01
# The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
export OPENAI_API_BASE=https://your-resource-name.openai.azure.com
# The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
export OPENAI_API_KEY=<your Azure OpenAI API key>
Alternatively, you can configure the API right within your running Python environment:
import os
os.environ["OPENAI_API_TYPE"] = "azure"
...
Deployments#
With Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use.
Let’s say your deployment name is text-davinci-002-prod. In the openai Python API, you can specify this deployment with the engine parameter. For example:
import openai
response = openai.Completion.create( | https://python.langchain.com/en/latest/modules/models/llms/integrations/azure_openai_example.html |
f26bb7448e78-1 | import openai
response = openai.Completion.create(
engine="text-davinci-002-prod",
prompt="This is a test",
max_tokens=5
)
!pip install openai
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2022-12-01"
os.environ["OPENAI_API_BASE"] = "..."
os.environ["OPENAI_API_KEY"] = "..."
# Import Azure OpenAI
from langchain.llms import AzureOpenAI
# Create an instance of Azure OpenAI
# Replace the deployment name with your own
llm = AzureOpenAI(
deployment_name="td2",
model_name="text-davinci-002",
)
# Run the LLM
llm("Tell me a joke")
"\n\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!"
We can also print the LLM and see its custom print.
print(llm)
AzureOpenAI
Params: {'deployment_name': 'text-davinci-002', 'model_name': 'text-davinci-002', 'temperature': 0.7, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
previous
Anyscale
next
Banana
Contents
API configuration
Deployments
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/models/llms/integrations/azure_openai_example.html |
0f02eed8b210-0 | .ipynb
.pdf
Basic LLM usage
Contents
Basic LLM usage
Control the output structure/ type of LLMs
Chaining
! pip install predictionguard langchain
import os
import predictionguard as pg
from langchain.llms import PredictionGuard
from langchain import PromptTemplate, LLMChain
Basic LLM usage#
# Optionally, add your OpenAI API Key. Prediction Guard also allows
# you to access all the latest open access models (see https://docs.predictionguard.com)
os.environ["OPENAI_API_KEY"] = "<your OpenAI api key>"
# Your Prediction Guard API key. Get one at predictionguard.com
os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"
pgllm = PredictionGuard(model="OpenAI-text-davinci-003")
pgllm("Tell me a joke")
Control the output structure/ type of LLMs#
template = """Respond to the following query based on the context.
Context: EVERY comment, DM + email suggestion has led us to this EXCITING announcement! 🎉 We have officially added TWO new candle subscription box options! 📦
Exclusive Candle Box - $80
Monthly Candle Box - $45 (NEW!)
Scent of The Month Box - $28 (NEW!)
Head to stories to get ALLL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉
Query: {query}
Result: """
prompt = PromptTemplate(template=template, input_variables=["query"])
# Without "guarding" or controlling the output of the LLM.
pgllm(prompt.format(query="What kind of post is this?")) | https://python.langchain.com/en/latest/modules/models/llms/integrations/predictionguard.html |
0f02eed8b210-1 | pgllm(prompt.format(query="What kind of post is this?"))
# With "guarding" or controlling the output of the LLM. See the
# Prediction Guard docs (https://docs.predictionguard.com) to learn how to
# control the output with integer, float, boolean, JSON, and other types and
# structures.
pgllm = PredictionGuard(model="OpenAI-text-davinci-003",
output={
"type": "categorical",
"categories": [
"product announcement",
"apology",
"relational"
]
})
pgllm(prompt.format(query="What kind of post is this?"))
Chaining#
pgllm = PredictionGuard(model="OpenAI-text-davinci-003")
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.predict(question=question)
template = """Write a {adjective} poem about {subject}."""
prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])
llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)
llm_chain.predict(adjective="sad", subject="ducks")
previous
PipelineAI
next
PromptLayer OpenAI
Contents
Basic LLM usage
Control the output structure/ type of LLMs
Chaining
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/models/llms/integrations/predictionguard.html |
0f02eed8b210-2 | Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/models/llms/integrations/predictionguard.html |
25dd0210e2f5-0 | .ipynb
.pdf
ForefrontAI
Contents
Imports
Set the Environment API Key
Create the ForefrontAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
ForefrontAI#
The Forefront platform gives you the ability to fine-tune and use open source large language models.
This notebook goes over how to use Langchain with ForefrontAI.
Imports#
import os
from langchain.llms import ForefrontAI
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from ForefrontAI. You are given a 5 day free trial to test different models.
# get a new token: https://docs.forefront.ai/forefront/api-reference/authentication
from getpass import getpass
FOREFRONTAI_API_KEY = getpass()
os.environ["FOREFRONTAI_API_KEY"] = FOREFRONTAI_API_KEY
Create the ForefrontAI instance#
You can specify different parameters such as the model endpoint url, length, temperature, etc. You must provide an endpoint url.
llm = ForefrontAI(endpoint_url="YOUR ENDPOINT URL HERE")
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
previous
DeepInfra
next
Google Cloud Platform Vertex AI PaLM
Contents
Imports | https://python.langchain.com/en/latest/modules/models/llms/integrations/forefrontai_example.html |
25dd0210e2f5-1 | DeepInfra
next
Google Cloud Platform Vertex AI PaLM
Contents
Imports
Set the Environment API Key
Create the ForefrontAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/models/llms/integrations/forefrontai_example.html |
ddec6b3f4625-0 | .ipynb
.pdf
Petals
Contents
Install petals
Imports
Set the Environment API Key
Create the Petals instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
Petals#
Petals runs 100B+ language models at home, BitTorrent-style.
This notebook goes over how to use Langchain with Petals.
Install petals#
The petals package is required to use the Petals API. Install petals using pip3 install petals.
!pip3 install petals
Imports#
import os
from langchain.llms import Petals
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from Huggingface.
from getpass import getpass
HUGGINGFACE_API_KEY = getpass()
os.environ["HUGGINGFACE_API_KEY"] = HUGGINGFACE_API_KEY
Create the Petals instance#
You can specify different parameters such as the model name, max new tokens, temperature, etc.
# this can take several minutes to download big files!
llm = Petals(model_name="bigscience/bloom-petals")
Downloading: 1%|▏ | 40.8M/7.19G [00:24<15:44, 7.57MB/s]
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain. | https://python.langchain.com/en/latest/modules/models/llms/integrations/petals_example.html |
ddec6b3f4625-1 | Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
previous
OpenLM
next
PipelineAI
Contents
Install petals
Imports
Set the Environment API Key
Create the Petals instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/models/llms/integrations/petals_example.html |
d0d3b387fe10-0 | .ipynb
.pdf
Databricks
Contents
Wrapping a serving endpoint
Wrapping a cluster driver proxy app
Databricks#
The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.
This example notebook shows how to wrap Databricks endpoints as LLMs in LangChain.
It supports two endpoint types:
Serving endpoint, recommended for production and development,
Cluster driver proxy app, recommended for interactive development.
from langchain.llms import Databricks
Wrapping a serving endpoint#
Prerequisites:
An LLM was registered and deployed to a Databricks serving endpoint.
You have “Can Query” permission to the endpoint.
The expected MLflow model signature is:
inputs: [{"name": "prompt", "type": "string"}, {"name": "stop", "type": "list[string]"}]
outputs: [{"type": "string"}]
If the model signature is incompatible or you want to insert extra configs, you can set transform_input_fn and transform_output_fn accordingly.
# If running a Databricks notebook attached to an interactive cluster in "single user"
# or "no isolation shared" mode, you only need to specify the endpoint name to create
# a `Databricks` instance to query a serving endpoint in the same workspace.
llm = Databricks(endpoint_name="dolly")
llm("How are you?")
'I am happy to hear that you are in good health and as always, you are appreciated.'
llm("How are you?", stop=["."])
'Good'
# Otherwise, you can manually specify the Databricks workspace hostname and personal access token
# or set `DATABRICKS_HOST` and `DATABRICKS_API_TOKEN` environment variables, respectively. | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html |
d0d3b387fe10-1 | # See https://docs.databricks.com/dev-tools/auth.html#databricks-personal-access-tokens
# We strongly recommend not exposing the API token explicitly inside a notebook.
# You can use Databricks secret manager to store your API token securely.
# See https://docs.databricks.com/dev-tools/databricks-utils.html#secrets-utility-dbutilssecrets
import os
os.environ["DATABRICKS_API_TOKEN"] = dbutils.secrets.get("myworkspace", "api_token")
llm = Databricks(host="myworkspace.cloud.databricks.com", endpoint_name="dolly")
llm("How are you?")
'I am fine. Thank you!'
# If the serving endpoint accepts extra parameters like `temperature`,
# you can set them in `model_kwargs`.
llm = Databricks(endpoint_name="dolly", model_kwargs={"temperature": 0.1})
llm("How are you?")
'I am fine.'
# Use `transform_input_fn` and `transform_output_fn` if the serving endpoint
# expects a different input schema and does not return a JSON string,
# respectively, or you want to apply a prompt template on top.
def transform_input(**request):
full_prompt = f"""{request["prompt"]}
Be Concise.
"""
request["prompt"] = full_prompt
return request
llm = Databricks(endpoint_name="dolly", transform_input_fn=transform_input)
llm("How are you?")
'I’m Excellent. You?'
Wrapping a cluster driver proxy app#
Prerequisites:
An LLM loaded on a Databricks interactive cluster in “single user” or “no isolation shared” mode.
A local HTTP server running on the driver node to serve the model at "/" using HTTP POST with JSON input/output. | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html |
d0d3b387fe10-2 | It uses a port number between [3000, 8000] and listens on the driver IP address (or simply 0.0.0.0) instead of localhost only.
You have “Can Attach To” permission to the cluster.
The expected server schema (using JSON schema) is:
inputs:
{"type": "object",
"properties": {
"prompt": {"type": "string"},
"stop": {"type": "array", "items": {"type": "string"}}},
"required": ["prompt"]}
outputs: {"type": "string"}
If the server schema is incompatible or you want to insert extra configs, you can use transform_input_fn and transform_output_fn accordingly.
The following is a minimal example for running a driver proxy app to serve an LLM:
from flask import Flask, request, jsonify
import torch
from transformers import pipeline, AutoTokenizer, StoppingCriteria
model = "databricks/dolly-v2-3b"
tokenizer = AutoTokenizer.from_pretrained(model, padding_side="left")
dolly = pipeline(model=model, tokenizer=tokenizer, trust_remote_code=True, device_map="auto")
device = dolly.device
class CheckStop(StoppingCriteria):
def __init__(self, stop=None):
super().__init__()
self.stop = stop or []
self.matched = ""
self.stop_ids = [tokenizer.encode(s, return_tensors='pt').to(device) for s in self.stop]
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs):
for i, s in enumerate(self.stop_ids):
if torch.all((s == input_ids[0][-s.shape[1]:])).item():
self.matched = self.stop[i]
return True
return False | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html |
d0d3b387fe10-3 | self.matched = self.stop[i]
return True
return False
def llm(prompt, stop=None, **kwargs):
check_stop = CheckStop(stop)
result = dolly(prompt, stopping_criteria=[check_stop], **kwargs)
return result[0]["generated_text"].rstrip(check_stop.matched)
app = Flask("dolly")
@app.route('/', methods=['POST'])
def serve_llm():
resp = llm(**request.json)
return jsonify(resp)
app.run(host="0.0.0.0", port="7777")
Once the server is running, you can create a Databricks instance to wrap it as an LLM.
# If running a Databricks notebook attached to the same cluster that runs the app,
# you only need to specify the driver port to create a `Databricks` instance.
llm = Databricks(cluster_driver_port="7777")
llm("How are you?")
'Hello, thank you for asking. It is wonderful to hear that you are well.'
# Otherwise, you can manually specify the cluster ID to use,
# as well as Databricks workspace hostname and personal access token.
llm = Databricks(cluster_id="0000-000000-xxxxxxxx", cluster_driver_port="7777")
llm("How are you?")
'I am well. You?'
# If the app accepts extra parameters like `temperature`,
# you can set them in `model_kwargs`.
llm = Databricks(cluster_driver_port="7777", model_kwargs={"temperature": 0.1})
llm("How are you?")
'I am very well. It is a pleasure to meet you.'
# Use `transform_input_fn` and `transform_output_fn` if the app | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html |
d0d3b387fe10-4 | # Use `transform_input_fn` and `transform_output_fn` if the app
# expects a different input schema and does not return a JSON string,
# respectively, or you want to apply a prompt template on top.
def transform_input(**request):
full_prompt = f"""{request["prompt"]}
Be Concise.
"""
request["prompt"] = full_prompt
return request
def transform_output(response):
return response.upper()
llm = Databricks(
cluster_driver_port="7777",
transform_input_fn=transform_input,
transform_output_fn=transform_output)
llm("How are you?")
'I AM DOING GREAT THANK YOU.'
previous
C Transformers
next
DeepInfra
Contents
Wrapping a serving endpoint
Wrapping a cluster driver proxy app
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html |
81797160b7ad-0 | .ipynb
.pdf
Huggingface TextGen Inference
Huggingface TextGen Inference#
Text Generation Inference is a Rust, Python and gRPC server for text generation inference. Used in production at HuggingFace to power the LLM api-inference widgets.
This notebook goes over how to use a self-hosted LLM using Text Generation Inference.
To use, you should have the text_generation python package installed.
# !pip3 install text_generation
from langchain.llms import HuggingFaceTextGenInference
llm = HuggingFaceTextGenInference(
inference_server_url='http://localhost:8010/',
max_new_tokens=512,
top_k=10,
top_p=0.95,
typical_p=0.95,
temperature=0.01,
repetition_penalty=1.03,
)
llm("What did foo say about bar?")
previous
Hugging Face Local Pipelines
next
Structured Decoding with JSONFormer
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_textgen_inference.html |
daddbb5a14b3-0 | .ipynb
.pdf
Cohere
Cohere#
Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.
This example goes over how to use LangChain to interact with Cohere models.
# Install the package
!pip install cohere
# get a new token: https://dashboard.cohere.ai/
from getpass import getpass
COHERE_API_KEY = getpass()
from langchain.llms import Cohere
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Cohere(cohere_api_key=COHERE_API_KEY)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question) | https://python.langchain.com/en/latest/modules/models/llms/integrations/cohere.html |
daddbb5a14b3-1 | llm_chain.run(question)
" Let's start with the year that Justin Beiber was born. You know that he was born in 1994. We have to go back one year. 1993.\n\n1993 was the year that the Dallas Cowboys won the Super Bowl. They won over the Buffalo Bills in Super Bowl 26.\n\nNow, let's do it backwards. According to our information, the Green Bay Packers last won the Super Bowl in the 2010-2011 season. Now, we can't go back in time, so let's go from 2011 when the Packers won the Super Bowl, back to 1984. That is the year that the Packers won the Super Bowl over the Raiders.\n\nSo, we have the year that Justin Beiber was born, 1994, and the year that the Packers last won the Super Bowl, 2011, and now we have to go in the middle, 1986. That is the year that the New York Giants won the Super Bowl over the Denver Broncos. The Giants won Super Bowl 21.\n\nThe New York Giants won the Super Bowl in 1986. This means that the Green Bay Packers won the Super Bowl in 2011.\n\nDid you get it right? If you are still a bit confused, just try to go back to the question again and review the answer"
previous
CerebriumAI
next
C Transformers
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/models/llms/integrations/cohere.html |
8bfdcf41eb30-0 | .ipynb
.pdf
NLP Cloud
NLP Cloud#
The NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, grammar and spelling correction, keywords and keyphrases extraction, chatbot, product description and ad generation, intent classification, text generation, image generation, blog post generation, code generation, question answering, automatic speech recognition, machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.
This example goes over how to use LangChain to interact with NLP Cloud models.
!pip install nlpcloud
# get a token: https://docs.nlpcloud.com/#authentication
from getpass import getpass
NLPCLOUD_API_KEY = getpass()
import os
os.environ["NLPCLOUD_API_KEY"] = NLPCLOUD_API_KEY
from langchain.llms import NLPCloud
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = NLPCloud()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
' Justin Bieber was born in 1994, so the team that won the Super Bowl that year was the San Francisco 49ers.'
previous
MosaicML
next
OpenAI
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 02, 2023. | https://python.langchain.com/en/latest/modules/models/llms/integrations/nlpcloud.html |
83e5719a69ec-0 | .ipynb
.pdf
Hugging Face Hub
Contents
Examples
StableLM, by Stability AI
Dolly, by DataBricks
Camel, by Writer
Hugging Face Hub#
The Hugging Face Hub is an online platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, where people can easily collaborate and build ML together.
This example showcases how to connect to the Hugging Face Hub.
To use, you should have the huggingface_hub python package installed.
!pip install huggingface_hub > /dev/null
# get a token: https://huggingface.co/docs/api-inference/quicktour#get-your-api-token
from getpass import getpass
HUGGINGFACEHUB_API_TOKEN = getpass()
import os
os.environ["HUGGINGFACEHUB_API_TOKEN"] = HUGGINGFACEHUB_API_TOKEN
Select a Model
from langchain import HuggingFaceHub
repo_id = "google/flan-t5-xl" # See https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads for some other options
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "Who won the FIFA World Cup in the year 1994? "
print(llm_chain.run(question))
Examples#
Below are some examples of models you can access through the Hugging Face Hub integration.
StableLM, by Stability AI# | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_hub.html |
See Stability AI’s organization page for a list of available models.
repo_id = "stabilityai/stablelm-tuned-alpha-3b"
# Others include stabilityai/stablelm-base-alpha-3b
# as well as 7B parameter versions
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
# Reuse the prompt and question from above.
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
Dolly, by DataBricks#
See DataBricks organization page for a list of available models.
from langchain import HuggingFaceHub
repo_id = "databricks/dolly-v2-3b"
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
# Reuse the prompt and question from above.
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
Camel, by Writer#
See Writer’s organization page for a list of available models.
from langchain import HuggingFaceHub
repo_id = "Writer/camel-5b-hf" # See https://huggingface.co/Writer for other options
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
# Reuse the prompt and question from above.
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
And many more!
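Any public text-generation model on the Hub can be tried the same way by swapping the repo_id; the snippet below reuses the prompt and question from above and uses gpt2 purely as an illustration.
repo_id = "gpt2"  # illustrative choice; substitute any text-generation repo on the Hub
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature": 0.7, "max_length": 64})
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))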
AI21
AI21#
AI21 Studio provides API access to Jurassic-2 large language models.
This example goes over how to use LangChain to interact with AI21 models.
# install the package:
!pip install ai21
# get AI21_API_KEY. Use https://studio.ai21.com/account/account
from getpass import getpass
AI21_API_KEY = getpass()
from langchain.llms import AI21
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = AI21(ai21_api_key=AI21_API_KEY)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
'\n1. What year was Justin Bieber born?\nJustin Bieber was born in 1994.\n2. What team won the Super Bowl in 1994?\nThe Dallas Cowboys won the Super Bowl in 1994.'
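AI21 offers several Jurassic-2 model sizes. The sketch below selects one explicitly; both the model parameter name and the "j2-jumbo-instruct" identifier are assumptions, so confirm them against the LangChain AI21 wrapper and AI21 Studio's documentation before use.
# Hypothetical example: verify the parameter and model names for your versions
llm = AI21(model="j2-jumbo-instruct", ai21_api_key=AI21_API_KEY)
llm_chain = LLMChain(prompt=prompt, llm=llm)
llm_chain.run(question)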
Llama-cpp
Contents
Installation
CPU only installation
Installation with OpenBLAS / cuBLAS / CLBlast
Usage
CPU
GPU
Llama-cpp#
llama-cpp-python is a Python binding for llama.cpp.
It supports inference for many open LLMs distributed in the GGML format.
This notebook goes over how to run llama-cpp-python within LangChain.
Installation#
There are several options for installing the llama-cpp-python package:
only CPU usage
CPU + GPU (using one of many BLAS backends)
CPU only installation#
!pip install llama-cpp-python
Installation with OpenBLAS / cuBLAS / CLBlast#
llama.cpp supports multiple BLAS backends for faster processing. Use the FORCE_CMAKE=1 environment variable to force the use of cmake and install the pip package for the desired BLAS backend (source).
Example installation with cuBLAS backend:
!CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
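The other backends follow the same pattern. The commands below are sketches for OpenBLAS and CLBlast; the exact CMake flag names are assumptions that can change between llama.cpp releases, so check the llama-cpp-python README if a build fails.
# OpenBLAS backend (flag name may differ between versions)
!CMAKE_ARGS="-DLLAMA_OPENBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
# CLBlast backend (flag name may differ between versions)
!CMAKE_ARGS="-DLLAMA_CLBLAST=on" FORCE_CMAKE=1 pip install llama-cpp-python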
IMPORTANT: If you have already installed the CPU-only version of the package, you need to reinstall it from scratch. Consider the following command:
!CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python
Usage#
Make sure you are following all instructions to install all necessary model files.
You don’t need an API_TOKEN!
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
Consider using a template that suits your model! Check the models page on HuggingFace etc. to get a correct prompting template.
template = """Question: {question} | https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html |
Answer: Let's work this out in a step by step way to be sure we have the right answer."""
prompt = PromptTemplate(template=template, input_variables=["question"])
# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
# Verbose is required to pass to the callback manager
CPU#
# Make sure the model path is correct for your system!
llm = LlamaCpp(
model_path="./ggml-model-q4_0.bin",
callback_manager=callback_manager,
verbose=True
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
1. First, find out when Justin Bieber was born.
2. We know that Justin Bieber was born on March 1, 1994.
3. Next, we need to look up when the Super Bowl was played in that year.
4. The Super Bowl was played on January 28, 1995.
5. Finally, we can use this information to answer the question. The NFL team that won the Super Bowl in the year Justin Bieber was born is the San Francisco 49ers.
llama_print_timings: load time = 434.15 ms
llama_print_timings: sample time = 41.81 ms / 121 runs ( 0.35 ms per token)
llama_print_timings: prompt eval time = 2523.78 ms / 48 tokens ( 52.58 ms per token) | https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html |
llama_print_timings: eval time = 23971.57 ms / 121 runs ( 198.11 ms per token)
llama_print_timings: total time = 28945.95 ms
'\n\n1. First, find out when Justin Bieber was born.\n2. We know that Justin Bieber was born on March 1, 1994.\n3. Next, we need to look up when the Super Bowl was played in that year.\n4. The Super Bowl was played on January 28, 1995.\n5. Finally, we can use this information to answer the question. The NFL team that won the Super Bowl in the year Justin Bieber was born is the San Francisco 49ers.'
GPU#
If the installation with a BLAS backend was correct, you will see a BLAS = 1 indicator in the model properties.
Two of the most important parameters for use with GPU are:
n_gpu_layers - determines how many layers of the model are offloaded to your GPU.
n_batch - how many tokens are processed in parallel.
Setting these parameters correctly will dramatically improve the evaluation speed (see wrapper code for more details).
n_gpu_layers = 40 # Change this value based on your model and your GPU VRAM pool.
n_batch = 512 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
# Make sure the model path is correct for your system!
llm = LlamaCpp(
model_path="./ggml-model-q4_0.bin",
n_gpu_layers=n_gpu_layers, n_batch=n_batch,
callback_manager=callback_manager,
verbose=True
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" | https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html |
llm_chain.run(question)
We are looking for an NFL team that won the Super Bowl when Justin Bieber (born March 1, 1994) was born.
First, let's look up which year is closest to when Justin Bieber was born:
* The year before he was born: 1993
* The year of his birth: 1994
* The year after he was born: 1995
We want to know what NFL team won the Super Bowl in the year that is closest to when Justin Bieber was born. Therefore, we should look up the NFL team that won the Super Bowl in either 1993 or 1994.
Now let's find out which NFL team did win the Super Bowl in either of those years:
* In 1993, the San Francisco 49ers won the Super Bowl against the Dallas Cowboys by a score of 20-16.
* In 1994, the San Francisco 49ers won the Super Bowl again, this time against the San Diego Chargers by a score of 49-26.
llama_print_timings: load time = 238.10 ms
llama_print_timings: sample time = 84.23 ms / 256 runs ( 0.33 ms per token)
llama_print_timings: prompt eval time = 238.04 ms / 49 tokens ( 4.86 ms per token)
llama_print_timings: eval time = 10391.96 ms / 255 runs ( 40.75 ms per token)
llama_print_timings: total time = 15664.80 ms | https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html |
" We are looking for an NFL team that won the Super Bowl when Justin Bieber (born March 1, 1994) was born. \n\nFirst, let's look up which year is closest to when Justin Bieber was born:\n\n* The year before he was born: 1993\n* The year of his birth: 1994\n* The year after he was born: 1995\n\nWe want to know what NFL team won the Super Bowl in the year that is closest to when Justin Bieber was born. Therefore, we should look up the NFL team that won the Super Bowl in either 1993 or 1994.\n\nNow let's find out which NFL team did win the Super Bowl in either of those years:\n\n* In 1993, the San Francisco 49ers won the Super Bowl against the Dallas Cowboys by a score of 20-16.\n* In 1994, the San Francisco 49ers won the Super Bowl again, this time against the San Diego Chargers by a score of 49-26.\n"
Modal
Modal#
The Modal Python Library provides convenient, on-demand access to serverless cloud compute from Python scripts on your local computer.
Modal itself does not provide any LLMs, only the infrastructure.
This example goes over how to use LangChain to interact with Modal.
Here is another example of how to use LangChain to interact with Modal.
!pip install modal-client
# register and get a new token
!modal token new
Launching login page in your browser window...
If this is not showing up, please copy this URL into your web browser manually:
https://modal.com/token-flow/tf-ptEuGecm7T1T5YQe42kwM1
Waiting for authentication in the web browser...
^C
Aborted.
Follow these instructions to deal with secrets.
from langchain.llms import Modal
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Modal(endpoint_url="YOUR_ENDPOINT_URL")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
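Because Modal only supplies the infrastructure, the web endpoint you deploy must accept the prompt and return the generated text itself. It can help to sanity-check the endpoint directly before wiring it into the chain; the JSON payload and response keys below are assumptions, so confirm the exact request/response contract against the Modal wrapper in your LangChain version.
import requests
# Hypothetical smoke test of the deployed endpoint; payload/response keys are assumptions
response = requests.post("YOUR_ENDPOINT_URL", json={"prompt": "Say hello"})
print(response.status_code, response.json())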
Aleph Alpha
Aleph Alpha#
The Luminous series is a family of large language models developed by Aleph Alpha.
This example goes over how to use LangChain to interact with Aleph Alpha models.
# Install the package
!pip install aleph-alpha-client
# create a new token: https://docs.aleph-alpha.com/docs/account/#create-a-new-token
from getpass import getpass
ALEPH_ALPHA_API_KEY = getpass()
from langchain.llms import AlephAlpha
from langchain import PromptTemplate, LLMChain
template = """Q: {question}
A:"""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = AlephAlpha(model="luminous-extended", maximum_tokens=20, stop_sequences=["Q:"], aleph_alpha_api_key=ALEPH_ALPHA_API_KEY)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What is AI?"
llm_chain.run(question)
' Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems.\n'
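Aleph Alpha exposes several Luminous sizes (for example luminous-base, luminous-extended, and luminous-supreme). The sketch below simply swaps the model name; the identifiers are assumptions, so check Aleph Alpha's model documentation for the current list.
# Hypothetical example: verify the model name before use
llm = AlephAlpha(model="luminous-supreme", maximum_tokens=20, stop_sequences=["Q:"], aleph_alpha_api_key=ALEPH_ALPHA_API_KEY)
llm_chain = LLMChain(prompt=prompt, llm=llm)
llm_chain.run(question)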
Structured Decoding with JSONFormer
Contents
HuggingFace Baseline
JSONFormer LLM Wrapper
Structured Decoding with JSONFormer#
JSONFormer is a library that wraps local HuggingFace pipeline models for structured decoding of a subset of the JSON Schema.
It works by filling in the structure tokens and then sampling the content tokens from the model.
Warning - this module is still experimental
!pip install --upgrade jsonformer > /dev/null
HuggingFace Baseline#
First, let’s establish a qualitative baseline by checking the output of the model without structured decoding.
import logging
logging.basicConfig(level=logging.ERROR)
from typing import Optional
from langchain.tools import tool
import os
import json
import requests
HF_TOKEN = os.environ.get("HUGGINGFACE_API_KEY")
@tool
def ask_star_coder(query: str,
temperature: float = 1.0,
max_new_tokens: float = 250):
"""Query the BigCode StarCoder model about coding questions."""
url = "https://api-inference.huggingface.co/models/bigcode/starcoder"
headers = {
"Authorization": f"Bearer {HF_TOKEN}",
"content-type": "application/json"
}
payload = {
"inputs": f"{query}\n\nAnswer:",
"temperature": temperature,
"max_new_tokens": int(max_new_tokens),
}
response = requests.post(url, headers=headers, data=json.dumps(payload))
response.raise_for_status()
return json.loads(response.content.decode("utf-8"))
prompt = """You must respond using JSON format, with a single action and single action input.
You may 'ask_star_coder' for help on coding problems.
{arg_schema}
EXAMPLES
---- | https://python.langchain.com/en/latest/modules/models/llms/integrations/jsonformer_experimental.html |
Human: "So what's all this about a GIL?"
AI Assistant:{{
"action": "ask_star_coder",
"action_input": {{"query": "What is a GIL?", "temperature": 0.0, "max_new_tokens": 100}}"
}}
Observation: "The GIL is python's Global Interpreter Lock"
Human: "Could you please write a calculator program in LISP?"
AI Assistant:{{
"action": "ask_star_coder",
"action_input": {{"query": "Write a calculator program in LISP", "temperature": 0.0, "max_new_tokens": 250}}
}}
Observation: "(defun add (x y) (+ x y))\n(defun sub (x y) (- x y ))"
Human: "What's the difference between an SVM and an LLM?"
AI Assistant:{{
"action": "ask_star_coder",
"action_input": {{"query": "What's the difference between SGD and an SVM?", "temperature": 1.0, "max_new_tokens": 250}}
}}
Observation: "SGD stands for stochastic gradient descent, while an SVM is a Support Vector Machine."
BEGIN! Answer the Human's question as best as you are able.
------
Human: 'What's the difference between an iterator and an iterable?'
AI Assistant:""".format(arg_schema=ask_star_coder.args)
from transformers import pipeline
from langchain.llms import HuggingFacePipeline
hf_model = pipeline("text-generation", model="cerebras/Cerebras-GPT-590M", max_new_tokens=200)
original_model = HuggingFacePipeline(pipeline=hf_model) | https://python.langchain.com/en/latest/modules/models/llms/integrations/jsonformer_experimental.html |
generated = original_model.predict(prompt, stop=["Observation:", "Human:"])
print(generated)
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
'What's the difference between an iterator and an iterable?'
That’s not so impressive, is it? It didn’t follow the JSON format at all! Let’s try with the structured decoder.
JSONFormer LLM Wrapper#
Let’s try that again, now providing the Action input’s JSON Schema to the model.
decoder_schema = {
"title": "Decoding Schema",
"type": "object",
"properties": {
"action": {"type": "string", "default": ask_star_coder.name},
"action_input": {
"type": "object",
"properties": ask_star_coder.args,
}
}
}
from langchain.experimental.llms import JsonFormer
json_former = JsonFormer(json_schema=decoder_schema, pipeline=hf_model)
results = json_former.predict(prompt, stop=["Observation:", "Human:"])
print(results)
{"action": "ask_star_coder", "action_input": {"query": "What's the difference between an iterator and an iter", "temperature": 0.0, "max_new_tokens": 50.0}}
Voila! Free of parsing errors.
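Under the hood, the JsonFormer wrapper delegates to the jsonformer library, which emits the structural JSON tokens itself and only samples the field values from the model. A minimal sketch of using the library directly is shown below; the API is assumed from jsonformer's README, so double-check it against the installed version.
from transformers import AutoModelForCausalLM, AutoTokenizer
from jsonformer import Jsonformer
model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-590M")
tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-590M")
schema = {"type": "object", "properties": {"action": {"type": "string"}}}
# Jsonformer fills in the JSON structure and samples only the value tokens
builder = Jsonformer(model, tokenizer, schema, "Pick an action:")
print(builder())  # returns a Python dict matching the schema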