id (string, lengths 14-16) | text (string, lengths 31-2.73k) | metadata (dict) |
---|---|---|
ad4caad821cf-0 | .ipynb
.pdf
tiktoken (OpenAI) Length Function
tiktoken (OpenAI) Length Function#
You can also use tiktoken, an open source tokenizer package from OpenAI, to estimate the number of tokens used. It will probably be more accurate for OpenAI's own models.
How the text is split: by character passed in
How the chunk size is measured: by tiktoken tokenizer
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.
Last year COVID-19 kept us apart. This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
previous
Spacy Text Splitter
next
TiktokenText Splitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/tiktoken.html"
} |
a0f888bbbf95-0 | .ipynb
.pdf
Character Text Splitter
Character Text Splitter#
This is a simpler method. It splits based on characters (by default “\n\n”) and measures chunk length by number of characters.
How the text is split: by single character
How the chunk size is measured: by length function passed in (defaults to number of characters)
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(
separator = "\n\n",
chunk_size = 1000,
chunk_overlap = 200,
length_function = len,
)
texts = text_splitter.create_documents([state_of_the_union])
print(texts[0]) | {
"url": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/character_text_splitter.html"
} |
a0f888bbbf95-1 | texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={} lookup_index=0
Here’s an example of passing metadata along with the documents; notice that it is split along with the documents.
metadatas = [{"document": 1}, {"document": 2}]
documents = text_splitter.create_documents([state_of_the_union, state_of_the_union], metadatas=metadatas)
print(documents[0]) | {
"url": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/character_text_splitter.html"
} |
a0f888bbbf95-2 | print(documents[0])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={'document': 1} lookup_index=0
previous
Getting Started
next
Hugging Face Length Function
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/character_text_splitter.html"
} |
51a7174b4495-0 | .ipynb
.pdf
NLTK Text Splitter
NLTK Text Splitter#
Rather than just splitting on “\n\n”, we can use NLTK to split based on tokenizers.
How the text is split: by NLTK
How the chunk size is measured: by length function passed in (defaults to number of characters)
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import NLTKTextSplitter
text_splitter = NLTKTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.
Members of Congress and the Cabinet.
Justices of the Supreme Court.
My fellow Americans.
Last year COVID-19 kept us apart.
This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents.
But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
And with an unwavering resolve that freedom will always triumph over tyranny.
Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways.
But he badly miscalculated.
He thought he could roll into Ukraine and the world would roll over.
Instead he met a wall of strength he never imagined.
He met the Ukrainian people.
From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.
Groups of citizens blocking tanks with their bodies.
previous
Markdown Text Splitter
next
Python Code Text Splitter
By Harrison Chase | {
"url": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/nltk.html"
} |
51a7174b4495-1 | previous
Markdown Text Splitter
next
Python Code Text Splitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/nltk.html"
} |
84e4428c346c-0 | .ipynb
.pdf
Spacy Text Splitter
Spacy Text Splitter#
An alternative to NLTK is to use Spacy.
How the text is split: by Spacy
How the chunk size is measured: by length function passed in (defaults to number of characters)
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import SpacyTextSplitter
text_splitter = SpacyTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.
Members of Congress and the Cabinet.
Justices of the Supreme Court.
My fellow Americans.
Last year COVID-19 kept us apart.
This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents.
But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
And with an unwavering resolve that freedom will always triumph over tyranny.
Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways.
But he badly miscalculated.
He thought he could roll into Ukraine and the world would roll over.
Instead he met a wall of strength he never imagined.
He met the Ukrainian people.
From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.
previous
RecursiveCharacterTextSplitter
next
tiktoken (OpenAI) Length Function
By Harrison Chase
© Copyright 2023, Harrison Chase. | {
"url": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/spacy.html"
} |
84e4428c346c-1 | By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/spacy.html"
} |
ae220ebd18c0-0 | .rst
.pdf
LLMs
LLMs#
Note
Conceptual Guide
Large Language Models (LLMs) are a core component of LangChain.
LangChain is not a provider of LLMs, but rather provides a standard interface through which
you can interact with a variety of LLMs.
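For illustration, here is a minimal sketch of what that standard interface looks like in practice, assuming the OpenAI wrapper and an OPENAI_API_KEY environment variable are available (any other supported provider is called the same way):
from langchain.llms import OpenAI
# Any provider wrapper exposes the same call style; OpenAI is used here only as an example.
llm = OpenAI(temperature=0.9)
print(llm("Say hello in one short sentence."))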
The following sections of documentation are provided:
Getting Started: An overview of all the functionality the LangChain LLM class provides.
How-To Guides: A collection of how-to guides. These highlight how to accomplish various objectives with our LLM class (streaming, async, etc).
Integrations: A collection of examples on how to integrate different LLM providers with LangChain (OpenAI, Hugging Face, etc).
Reference: API reference documentation for all LLM classes.
previous
Models
next
Getting Started
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms.html"
} |
ed11bdd5fd26-0 | .rst
.pdf
Text Embedding Models
Text Embedding Models#
Note
Conceptual Guide
This documentation goes over how to use the Embedding class in LangChain.
The Embedding class is designed for interfacing with embeddings. There are lots of embedding providers (OpenAI, Cohere, Hugging Face, etc.); this class provides a standard interface for all of them.
Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
The base Embedding class in LangChain exposes two methods: embed_documents and embed_query. The largest difference is that these two methods have different interfaces: one works over multiple documents, while the other works over a single document. Besides this, another reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
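For illustration, a minimal sketch of these two methods, assuming the OpenAIEmbeddings wrapper and an OPENAI_API_KEY environment variable are available (any other embedding provider would look the same):
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
# embed_query works over a single piece of text (the search query itself)
query_vector = embeddings.embed_query("What did the president say about the economy?")
# embed_documents works over multiple documents (the texts to be searched over)
doc_vectors = embeddings.embed_documents(["First document.", "Second document."])
print(len(query_vector), len(doc_vectors))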
The following integrations exist for text embeddings.
Aleph Alpha
AzureOpenAI
Cohere
Fake Embeddings
Hugging Face Hub
InstructEmbeddings
Jina
Llama-cpp
OpenAI
SageMaker Endpoint Embeddings
Self Hosted Embeddings
TensorflowHub
previous
PromptLayer ChatOpenAI
next
Aleph Alpha
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/text_embedding.html"
} |
a1b5816605ab-0 | .rst
.pdf
Chat Models
Chat Models#
Note
Conceptual Guide
Chat models are a variation on language models.
While chat models use language models under the hood, the interface they expose is a bit different.
Rather than expose a “text in, text out” API, they expose an interface where “chat messages” are the inputs and outputs.
Chat model APIs are fairly new, so we are still figuring out the correct abstractions.
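For illustration, a minimal sketch of the “chat messages in, chat messages out” interface, assuming the ChatOpenAI wrapper and an OPENAI_API_KEY environment variable are available:
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage
chat = ChatOpenAI(temperature=0)
# The input is a list of chat messages and the output is a single AI message, not a plain string.
response = chat([
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Translate 'hello' to French."),
])
print(response.content)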
The following sections of documentation are provided:
Getting Started: An overview of all the functionality the LangChain chat model class provides.
How-To Guides: A collection of how-to guides. These highlight how to accomplish various objectives with our chat model class (streaming, async, etc).
Integrations: A collection of examples on how to integrate different chat model providers with LangChain (OpenAI, etc).
previous
LLMs
next
Getting Started
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/chat.html"
} |
187e488b14f3-0 | .ipynb
.pdf
Getting Started
Getting Started#
This notebook goes over how to use the LLM class in LangChain.
The LLM class is designed for interfacing with LLMs. There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc.); this class provides a standard interface for all of them. In this part of the documentation, we will focus on generic LLM functionality. For details on working with a specific LLM wrapper, please see the examples in the How-To section.
For this notebook, we will work with an OpenAI LLM wrapper, although the functionalities highlighted are generic for all LLM types.
from langchain.llms import OpenAI
llm = OpenAI(model_name="text-ada-001", n=2, best_of=2)
Generate Text: The most basic functionality of an LLM is calling it with a string and getting back a string.
llm("Tell me a joke")
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
Generate: More broadly, you can call it with a list of inputs, getting back a more complete response than just the text. This complete response includes things like multiple top responses, as well as LLM provider-specific information.
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"]*15)
len(llm_result.generations)
30
llm_result.generations[0]
[Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'),
Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side.')]
llm_result.generations[-1] | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/getting_started.html"
} |
187e488b14f3-1 | llm_result.generations[-1]
[Generation(text="\n\nWhat if love neverspeech\n\nWhat if love never ended\n\nWhat if love was only a feeling\n\nI'll never know this love\n\nIt's not a feeling\n\nBut it's what we have for each other\n\nWe just know that love is something strong\n\nAnd we can't help but be happy\n\nWe just feel what love is for us\n\nAnd we love each other with all our heart\n\nWe just don't know how\n\nHow it will go\n\nBut we know that love is something strong\n\nAnd we'll always have each other\n\nIn our lives."),
Generation(text='\n\nOnce upon a time\n\nThere was a love so pure and true\n\nIt lasted for centuries\n\nAnd never became stale or dry\n\nIt was moving and alive\n\nAnd the heart of the love-ick\n\nIs still beating strong and true.')]
You can also access provider-specific information that is returned. This information is NOT standardized across providers.
llm_result.llm_output
{'token_usage': {'completion_tokens': 3903,
'total_tokens': 4023,
'prompt_tokens': 120}}
Number of Tokens: You can also estimate how many tokens a piece of text will be in that model. This is useful because models have a context length (and cost more for more tokens), which means you need to be aware of how long the text you are passing in is.
Notice that by default the tokens are estimated using a HuggingFace tokenizer.
llm.get_num_tokens("what a joke")
3
previous
LLMs
next
Generic Functionality
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/getting_started.html"
} |
234b5136a4d5-0 | .rst
.pdf
Generic Functionality
Generic Functionality#
The examples here are all “how-to” guides for working with LLMs.
How to use the async API for LLMs
How to write a custom LLM wrapper
How (and why) to use the fake LLM
How to cache LLM calls
How to serialize LLM classes
How to stream LLM and Chat Model responses
How to track token usage
previous
Getting Started
next
How to use the async API for LLMs
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/how_to_guides.html"
} |
4a0527af3f95-0 | .rst
.pdf
Integrations
Integrations#
The examples here are all “how-to” guides for integrating with various LLM providers.
AI21
Aleph Alpha
Anthropic
Azure OpenAI LLM Example
Banana
CerebriumAI LLM Example
Cohere
DeepInfra LLM Example
ForefrontAI LLM Example
GooseAI LLM Example
GPT4all
Hugging Face Hub
Llama-cpp
Manifest
Modal
OpenAI
Petals LLM Example
PromptLayer OpenAI
Replicate
SageMakerEndpoint
Self-Hosted Models via Runhouse
StochasticAI
Writer
previous
How to track token usage
next
AI21
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations.html"
} |
8a76598c9e31-0 | .ipynb
.pdf
How to track token usage
How to track token usage#
This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API.
Let’s first look at an extremely simple example of tracking token usage for a single LLM call.
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)
with get_openai_callback() as cb:
result = llm("Tell me a joke")
print(f"Total Tokens: {cb.total_tokens}")
print(f"Prompt Tokens: {cb.prompt_tokens}")
print(f"Completion Tokens: {cb.completion_tokens}")
print(f"Successful Requests: {cb.successful_requests}")
print(f"Total Cost (USD): ${cb.total_cost}")
Total Tokens: 39
Prompt Tokens: 4
Completion Tokens: 35
Successful Requests: 1
Total Cost (USD): $0.0007800000000000001
Anything inside the context manager will get tracked. Here’s an example of using it to track multiple calls in sequence.
with get_openai_callback() as cb:
result = llm("Tell me a joke")
result2 = llm("Tell me a joke")
print(cb.total_tokens)
91
If a chain or agent with multiple steps in it is used, it will track all those steps.
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm) | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/examples/token_usage_tracking.html"
} |
8a76598c9e31-1 | agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
with get_openai_callback() as cb:
response = agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
print(f"Total Tokens: {cb.total_tokens}")
print(f"Prompt Tokens: {cb.prompt_tokens}")
print(f"Completion Tokens: {cb.completion_tokens}")
print(f"Total Cost (USD): ${cb.total_cost}")
> Entering new AgentExecutor chain...
I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.
Action: Search
Action Input: "Olivia Wilde boyfriend"
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought: I need to find out Harry Styles' age.
Action: Search
Action Input: "Harry Styles age"
Observation: 29 years
Thought: I need to calculate 29 raised to the 0.23 power.
Action: Calculator
Action Input: 29^0.23
Observation: Answer: 2.169459462491557
Thought: I now know the final answer.
Final Answer: Harry Styles, Olivia Wilde's boyfriend, is 29 years old and his age raised to the 0.23 power is 2.169459462491557.
> Finished chain.
Total Tokens: 1506
Prompt Tokens: 1350
Completion Tokens: 156
Total Cost (USD): $0.03012 | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/examples/token_usage_tracking.html"
} |
8a76598c9e31-2 | Completion Tokens: 156
Total Cost (USD): $0.03012
previous
How to stream LLM and Chat Model responses
next
Integrations
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/examples/token_usage_tracking.html"
} |
78632b376330-0 | .ipynb
.pdf
How (and why) to use the fake LLM
How (and why) to use the fake LLM#
We expose a fake LLM class that can be used for testing. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way.
In this notebook we go over how to use this.
We start by using the FakeLLM in an agent.
from langchain.llms.fake import FakeListLLM
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
tools = load_tools(["python_repl"])
responses=[
"Action: Python REPL\nAction Input: print(2 + 2)",
"Final Answer: 4"
]
llm = FakeListLLM(responses=responses)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("whats 2 + 2")
> Entering new AgentExecutor chain...
Action: Python REPL
Action Input: print(2 + 2)
Observation: 4
Thought:Final Answer: 4
> Finished chain.
'4'
previous
How to write a custom LLM wrapper
next
How to cache LLM calls
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/examples/fake_llm.html"
} |
b6c3d158ecd5-0 | .ipynb
.pdf
How to cache LLM calls
Contents
In Memory Cache
SQLite Cache
Redis Cache
SQLAlchemy Cache
Custom SQLAlchemy Schemas
Optional Caching
Optional Caching in Chains
How to cache LLM calls#
This notebook covers how to cache results of individual LLM calls.
from langchain.llms import OpenAI
In Memory Cache#
import langchain
from langchain.cache import InMemoryCache
langchain.llm_cache = InMemoryCache()
# To make the caching really obvious, let's use a slower model.
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 30.7 ms, sys: 18.6 ms, total: 49.3 ms
Wall time: 791 ms
"\n\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!"
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 80 µs, sys: 0 ns, total: 80 µs
Wall time: 83.9 µs
"\n\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!"
SQLite Cache#
!rm .langchain.db
# We can do the same thing with a SQLite cache
from langchain.cache import SQLiteCache
langchain.llm_cache = SQLiteCache(database_path=".langchain.db")
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke") | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html"
} |
b6c3d158ecd5-1 | llm("Tell me a joke")
CPU times: user 17 ms, sys: 9.76 ms, total: 26.7 ms
Wall time: 825 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 2.46 ms, sys: 1.23 ms, total: 3.7 ms
Wall time: 2.67 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
Redis Cache#
# We can do the same thing with a Redis cache
# (make sure your local Redis instance is running first before running this example)
from redis import Redis
from langchain.cache import RedisCache
langchain.llm_cache = RedisCache(redis_=Redis())
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
SQLAlchemy Cache#
# You can use SQLAlchemyCache to cache with any SQL database supported by SQLAlchemy.
# from langchain.cache import SQLAlchemyCache
# from sqlalchemy import create_engine
# engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")
# langchain.llm_cache = SQLAlchemyCache(engine)
Custom SQLAlchemy Schemas#
# You can define your own declarative SQLAlchemyCache child class to customize the schema used for caching. For example, to support high-speed fulltext prompt indexing with Postgres, use:
from sqlalchemy import Column, Integer, String, Computed, Index, Sequence
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html"
} |
b6c3d158ecd5-2 | from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy_utils import TSVectorType
from langchain.cache import SQLAlchemyCache
Base = declarative_base()
class FulltextLLMCache(Base): # type: ignore
"""Postgres table for fulltext-indexed LLM Cache"""
__tablename__ = "llm_cache_fulltext"
id = Column(Integer, Sequence('cache_id'), primary_key=True)
prompt = Column(String, nullable=False)
llm = Column(String, nullable=False)
idx = Column(Integer)
response = Column(String)
prompt_tsv = Column(TSVectorType(), Computed("to_tsvector('english', llm || ' ' || prompt)", persisted=True))
__table_args__ = (
Index("idx_fulltext_prompt_tsv", prompt_tsv, postgresql_using="gin"),
)
engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")
langchain.llm_cache = SQLAlchemyCache(engine, FulltextLLMCache)
Optional Caching#
You can also turn off caching for specific LLMs should you choose. In the example below, even though global caching is enabled, we turn it off for a specific LLM.
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2, cache=False)
%%time
llm("Tell me a joke")
CPU times: user 5.8 ms, sys: 2.71 ms, total: 8.51 ms
Wall time: 745 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
%%time
llm("Tell me a joke") | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html"
} |
b6c3d158ecd5-3 | %%time
llm("Tell me a joke")
CPU times: user 4.91 ms, sys: 2.64 ms, total: 7.55 ms
Wall time: 623 ms
'\n\nTwo guys stole a calendar. They got six months each.'
Optional Caching in Chains#
You can also turn off caching for particular nodes in chains. Note that because of certain interfaces, it's often easier to construct the chain first, and then edit the LLM afterwards.
As an example, we will load a summarizer map-reduce chain. We will cache results for the map step, but not for the combine step.
llm = OpenAI(model_name="text-davinci-002")
no_cache_llm = OpenAI(model_name="text-davinci-002", cache=False)
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
text_splitter = CharacterTextSplitter()
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
texts = text_splitter.split_text(state_of_the_union)
from langchain.docstore.document import Document
docs = [Document(page_content=t) for t in texts[:3]]
from langchain.chains.summarize import load_summarize_chain
chain = load_summarize_chain(llm, chain_type="map_reduce", reduce_llm=no_cache_llm)
%%time
chain.run(docs)
CPU times: user 452 ms, sys: 60.3 ms, total: 512 ms
Wall time: 5.09 s | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html"
} |
b6c3d158ecd5-4 | Wall time: 5.09 s
'\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure. In response to Russian aggression in Ukraine, the United States is joining with European allies to impose sanctions and isolate Russia. American forces are being mobilized to protect NATO countries in the event that Putin decides to keep moving west. The Ukrainians are bravely fighting back, but the next few weeks will be hard for them. Putin will pay a high price for his actions in the long run. Americans should not be alarmed, as the United States is taking action to protect its interests and allies.'
When we run it again, we see that it runs substantially faster but the final answer is different. This is due to caching at the map steps, but not at the reduce step.
%%time
chain.run(docs)
CPU times: user 11.5 ms, sys: 4.33 ms, total: 15.8 ms
Wall time: 1.04 s
'\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure.'
previous
How (and why) to use the fake LLM
next
How to serialize LLM classes
Contents
In Memory Cache
SQLite Cache
Redis Cache
SQLAlchemy Cache
Custom SQLAlchemy Schemas
Optional Caching
Optional Caching in Chains
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html"
} |
a3f3556c7c83-0 | .ipynb
.pdf
How to write a custom LLM wrapper
How to write a custom LLM wrapper#
This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is supported in LangChain.
There is only one required thing that a custom LLM needs to implement:
A _call method that takes in a string, some optional stop words, and returns a string
There is a second optional thing it can implement:
An _identifying_params property that is used to help with printing of this class. It should return a dictionary.
Let’s implement a very simple custom LLM that just returns the first N characters of the input.
from langchain.llms.base import LLM
from typing import Optional, List, Mapping, Any
class CustomLLM(LLM):
n: int
@property
def _llm_type(self) -> str:
return "custom"
def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
if stop is not None:
raise ValueError("stop kwargs are not permitted.")
return prompt[:self.n]
@property
def _identifying_params(self) -> Mapping[str, Any]:
"""Get the identifying parameters."""
return {"n": self.n}
We can now use this like any other LLM.
llm = CustomLLM(n=10)
llm("This is a foobar thing")
'This is a '
We can also print the LLM and see its custom printed representation.
print(llm)
CustomLLM
Params: {'n': 10}
previous
How to use the async API for LLMs
next | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/examples/custom_llm.html"
} |
a3f3556c7c83-1 | previous
How to use the async API for LLMs
next
How (and why) to use the fake LLM
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/examples/custom_llm.html"
} |
42fbb192b2f7-0 | .ipynb
.pdf
How to serialize LLM classes
Contents
Loading
Saving
How to serialize LLM classes#
This notebook walks through how to write and read an LLM Configuration to and from disk. This is useful if you want to save the configuration for a given LLM (e.g., the provider, the temperature, etc).
from langchain.llms import OpenAI
from langchain.llms.loading import load_llm
Loading#
First, let's go over loading an LLM from disk. LLMs can be saved on disk in two formats: json or yaml. No matter the extension, they are loaded in the same way.
!cat llm.json
{
"model_name": "text-davinci-003",
"temperature": 0.7,
"max_tokens": 256,
"top_p": 1.0,
"frequency_penalty": 0.0,
"presence_penalty": 0.0,
"n": 1,
"best_of": 1,
"request_timeout": null,
"_type": "openai"
}
llm = load_llm("llm.json")
!cat llm.yaml
_type: openai
best_of: 1
frequency_penalty: 0.0
max_tokens: 256
model_name: text-davinci-003
n: 1
presence_penalty: 0.0
request_timeout: null
temperature: 0.7
top_p: 1.0
llm = load_llm("llm.yaml")
Saving#
If you want to go from an LLM in memory to a serialized version of it, you can do so easily by calling the .save method. Again, this supports both json and yaml.
llm.save("llm.json") | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/examples/llm_serialization.html"
} |
42fbb192b2f7-1 | llm.save("llm.json")
llm.save("llm.yaml")
previous
How to cache LLM calls
next
How to stream LLM and Chat Model responses
Contents
Loading
Saving
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/examples/llm_serialization.html"
} |
14b3a2032e60-0 | .ipynb
.pdf
How to stream LLM and Chat Model responses
How to stream LLM and Chat Model responses#
LangChain provides streaming support for LLMs. Currently, we support streaming for the OpenAI, ChatOpenAI, and Anthropic implementations, but streaming support for other LLM implementations is on the roadmap. To utilize streaming, use a CallbackHandler that implements on_llm_new_token. In this example, we are using StreamingStdOutCallbackHandler.
from langchain.llms import OpenAI, Anthropic
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import HumanMessage
llm = OpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
resp = llm("Write me a song about sparkling water.")
Verse 1
I'm sippin' on sparkling water,
It's so refreshing and light,
It's the perfect way to quench my thirst
On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
Verse 2
I'm sippin' on sparkling water,
It's so bubbly and bright,
It's the perfect way to cool me down
On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
Verse 3
I'm sippin' on sparkling water,
It's so light and so clear,
It's the perfect way to keep me cool
On a hot summer night.
Chorus | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html"
} |
14b3a2032e60-1 | It's the perfect way to keep me cool
On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
We still have access to the end LLMResult if using generate. However, token_usage is not currently supported for streaming.
llm.generate(["Tell me a joke."])
Q: What did the fish say when it hit the wall?
A: Dam!
LLMResult(generations=[[Generation(text='\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', generation_info={'finish_reason': None, 'logprobs': None})]], llm_output={'token_usage': {}, 'model_name': 'text-davinci-003'})
Here’s an example with the ChatOpenAI chat model implementation:
chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])
Verse 1:
Bubbles rising to the top
A refreshing drink that never stops
Clear and crisp, it's oh so pure
Sparkling water, I can't ignore
Chorus:
Sparkling water, oh how you shine
A taste so clean, it's simply divine
You quench my thirst, you make me feel alive
Sparkling water, you're my favorite vibe
Verse 2:
No sugar, no calories, just H2O
A drink that's good for me, don't you know
With lemon or lime, you're even better
Sparkling water, you're my forever
Chorus:
Sparkling water, oh how you shine
A taste so clean, it's simply divine | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html"
} |
14b3a2032e60-2 | Sparkling water, oh how you shine
A taste so clean, it's simply divine
You quench my thirst, you make me feel alive
Sparkling water, you're my favorite vibe
Bridge:
You're my go-to drink, day or night
You make me feel so light
I'll never give you up, you're my true love
Sparkling water, you're sent from above
Chorus:
Sparkling water, oh how you shine
A taste so clean, it's simply divine
You quench my thirst, you make me feel alive
Sparkling water, you're my favorite vibe
Outro:
Sparkling water, you're the one for me
I'll never let you go, can't you see
You're my drink of choice, forevermore
Sparkling water, I adore.
Here is an example with the Anthropic LLM implementation, which uses their Claude model.
llm = Anthropic(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
llm("Write me a song about sparkling water.")
Sparkling water, bubbles so bright,
Fizzing and popping in the light.
No sugar or calories, a healthy delight,
Sparkling water, refreshing and light.
Carbonation that tickles the tongue,
In flavors of lemon and lime unsung.
Sparkling water, a drink quite all right,
Bubbles sparkling in the light.
'\nSparkling water, bubbles so bright,\n\nFizzing and popping in the light.\n\nNo sugar or calories, a healthy delight,\n\nSparkling water, refreshing and light.\n\nCarbonation that tickles the tongue,\n\nIn flavors of lemon and lime unsung.\n\nSparkling water, a drink quite all right,\n\nBubbles sparkling in the light.'
previous | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html"
} |
14b3a2032e60-3 | previous
How to serialize LLM classes
next
How to track token usage
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html"
} |
ebf73090b31e-0 | .ipynb
.pdf
How to use the async API for LLMs
How to use the async API for LLMs#
LangChain provides async support for LLMs by leveraging the asyncio library.
Async support is particularly useful for calling multiple LLMs concurrently, as these calls are network-bound. Currently, OpenAI, PromptLayerOpenAI, ChatOpenAI and Anthropic are supported, but async support for other LLMs is on the roadmap.
You can use the agenerate method to call an OpenAI LLM asynchronously.
import time
import asyncio
from langchain.llms import OpenAI
def generate_serially():
llm = OpenAI(temperature=0.9)
for _ in range(10):
resp = llm.generate(["Hello, how are you?"])
print(resp.generations[0][0].text)
async def async_generate(llm):
resp = await llm.agenerate(["Hello, how are you?"])
print(resp.generations[0][0].text)
async def generate_concurrently():
llm = OpenAI(temperature=0.9)
tasks = [async_generate(llm) for _ in range(10)]
await asyncio.gather(*tasks)
s = time.perf_counter()
# If running this outside of Jupyter, use asyncio.run(generate_concurrently())
await generate_concurrently()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Concurrent executed in {elapsed:0.2f} seconds." + '\033[0m')
s = time.perf_counter()
generate_serially()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Serial executed in {elapsed:0.2f} seconds." + '\033[0m') | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/examples/async_llm.html"
} |
ebf73090b31e-1 | I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, how about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thank you! How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you! How about you?
I'm doing well, thank you. How about you?
Concurrent executed in 1.39 seconds.
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thanks for asking. How about you?
I'm doing well, thanks! How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thanks for asking. How about you?
Serial executed in 5.77 seconds.
previous
Generic Functionality
next
How to write a custom LLM wrapper
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/examples/async_llm.html"
} |
da2ed86ce4f5-0 | .ipynb
.pdf
Cohere
Cohere#
This example goes over how to use LangChain to interact with Cohere models.
from langchain.llms import Cohere
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Cohere()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question) | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/cohere.html"
} |
da2ed86ce4f5-1 | llm_chain.run(question)
" Let's start with the year that Justin Beiber was born. You know that he was born in 1994. We have to go back one year. 1993.\n\n1993 was the year that the Dallas Cowboys won the Super Bowl. They won over the Buffalo Bills in Super Bowl 26.\n\nNow, let's do it backwards. According to our information, the Green Bay Packers last won the Super Bowl in the 2010-2011 season. Now, we can't go back in time, so let's go from 2011 when the Packers won the Super Bowl, back to 1984. That is the year that the Packers won the Super Bowl over the Raiders.\n\nSo, we have the year that Justin Beiber was born, 1994, and the year that the Packers last won the Super Bowl, 2011, and now we have to go in the middle, 1986. That is the year that the New York Giants won the Super Bowl over the Denver Broncos. The Giants won Super Bowl 21.\n\nThe New York Giants won the Super Bowl in 1986. This means that the Green Bay Packers won the Super Bowl in 2011.\n\nDid you get it right? If you are still a bit confused, just try to go back to the question again and review the answer"
previous
CerebriumAI LLM Example
next
DeepInfra LLM Example
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/cohere.html"
} |
2c8675721a39-0 | .ipynb
.pdf
Azure OpenAI LLM Example
Contents
API configuration
Deployments
Azure OpenAI LLM Example#
This notebook goes over how to use LangChain with Azure OpenAI.
The Azure OpenAI API is compatible with OpenAI’s API. The openai Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you call OpenAI with the exceptions noted below.
API configuration#
You can configure the openai package to use Azure OpenAI using environment variables. The following is for bash:
# Set this to `azure`
export OPENAI_API_TYPE=azure
# The API version you want to use: set this to `2022-12-01` for the released version.
export OPENAI_API_VERSION=2022-12-01
# The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
export OPENAI_API_BASE=https://your-resource-name.openai.azure.com
# The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
export OPENAI_API_KEY=<your Azure OpenAI API key>
Alternatively, you can configure the API right within your running Python environment:
import os
os.environ["OPENAI_API_TYPE"] = "azure"
...
Deployments#
With Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use.
Let’s say your deployment name is text-davinci-002-prod. In the openai Python API, you can specify this deployment with the engine parameter. For example:
import openai
response = openai.Completion.create( | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/azure_openai_example.html"
} |
2c8675721a39-1 | import openai
response = openai.Completion.create(
engine="text-davinci-002-prod",
prompt="This is a test",
max_tokens=5
)
# Import Azure OpenAI
from langchain.llms import AzureOpenAI
# Create an instance of Azure OpenAI
# Replace the deployment name with your own
llm = AzureOpenAI(deployment_name="text-davinci-002-prod", model_name="text-davinci-002")
# Run the LLM
llm("Tell me a joke")
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
We can also print the LLM and see its custom printed representation.
print(llm)
AzureOpenAI
Params: {'deployment_name': 'text-davinci-002', 'model_name': 'text-davinci-002', 'temperature': 0.7, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
previous
Anthropic
next
Banana
Contents
API configuration
Deployments
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/azure_openai_example.html"
} |
5a24cf3d69d8-0 | .ipynb
.pdf
PromptLayer OpenAI
Contents
Install PromptLayer
Imports
Set the Environment API Key
Use the PromptLayerOpenAI LLM like normal
Using PromptLayer Track
PromptLayer OpenAI#
This example showcases how to connect to PromptLayer to start recording your OpenAI requests.
Install PromptLayer#
The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip.
pip install promptlayer
Imports#
import os
from langchain.llms import PromptLayerOpenAI
import promptlayer
Set the Environment API Key#
You can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the navbar.
Set it as an environment variable called PROMPTLAYER_API_KEY.
os.environ["PROMPTLAYER_API_KEY"] = "********"
Use the PromptLayerOpenAI LLM like normal#
You can optionally pass in pl_tags to track your requests with PromptLayer’s tagging feature.
llm = PromptLayerOpenAI(pl_tags=["langchain"])
llm("I am a cat and I want")
' to go outside\n\nUnfortunately, cats cannot go outside without being supervised by a human. Going outside can be dangerous for cats, as they may come into contact with cars, other animals, or other dangers. If you want to go outside, ask your human to take you on a supervised walk or to a safe, enclosed outdoor space.'
The above request should now appear on your PromptLayer dashboard.
Using PromptLayer Track#
If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id.
llm = PromptLayerOpenAI(return_pl_id=True)
llm_results = llm.generate(["Tell me a joke"])
for res in llm_results.generations: | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/promptlayer_openai.html"
} |
5a24cf3d69d8-1 | for res in llm_results.generations:
pl_request_id = res[0].generation_info["pl_request_id"]
promptlayer.track.score(request_id=pl_request_id, score=100)
Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well.
Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.
previous
Petals LLM Example
next
Replicate
Contents
Install PromptLayer
Imports
Set the Environment API Key
Use the PromptLayerOpenAI LLM like normal
Using PromptLayer Track
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/promptlayer_openai.html"
} |
5ff29923530f-0 | .ipynb
.pdf
Replicate
Contents
Setup
Calling a model
Chaining Calls
Replicate#
This example goes over how to use LangChain to interact with Replicate models.
import os
from langchain.llms import Replicate
from langchain import PromptTemplate, LLMChain
os.environ["REPLICATE_API_TOKEN"] = "YOUR REPLICATE API TOKEN"
Setup#
To run this notebook, you'll need to create a Replicate account and install the replicate Python client.
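For example, the client can typically be installed with pip (package name assumed to be replicate; check Replicate's documentation if it has changed):
pip install replicate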
Calling a model#
Find a model on the replicate explore page, and then paste in the model name and version in this format: model_name:version
For example, for this flan-t5 model, click on the API tab. The model name/version would be: daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8
Only the model param is required, but we can add other model params when initializing.
For example, if we were running stable diffusion and wanted to change the image dimensions:
Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'})
Note that only the first output of a model will be returned.
llm = Replicate(model="daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8")
prompt = """
Answer the following yes/no question by reasoning step by step. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html"
} |
5ff29923530f-1 | prompt = """
Answer the following yes/no question by reasoning step by step.
Can a dog drive a car?
"""
llm(prompt)
'The legal driving age of dogs is 2. Cars are designed for humans to drive. Therefore, the final answer is yes.'
We can call any replicate model using this syntax. For example, we can call stable diffusion.
text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf",
input={'image_dimensions': '512x512'})
image_output = text2image("A cat riding a motorcycle by Picasso")
image_output
'https://replicate.delivery/pbxt/Cf07B1zqzFQLOSBQcKG7m9beE74wf7kuip5W9VxHJFembefKE/out-0.png'
The model spits out a URL. Let’s render it.
from PIL import Image
import requests
from io import BytesIO
response = requests.get(image_output)
img = Image.open(BytesIO(response.content))
img
Chaining Calls#
The whole point of LangChain is to… chain! Here’s an example of how to do that.
from langchain.chains import SimpleSequentialChain
First, let’s define the LLM for this model as a flan-t5 model, and text2image as a stable diffusion model.
llm = Replicate(model="daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8") | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html"
} |
5ff29923530f-2 | text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf")
First prompt in the chain
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
Second prompt to get the logo for company description
second_prompt = PromptTemplate(
input_variables=["company_name"],
template="Write a description of a logo for this company: {company_name}",
)
chain_two = LLMChain(llm=llm, prompt=second_prompt)
Third prompt, let’s create the image based on the description output from prompt 2
third_prompt = PromptTemplate(
input_variables=["company_logo_description"],
template="{company_logo_description}",
)
chain_three = LLMChain(llm=text2image, prompt=third_prompt)
Now let’s run it!
# Run the chain specifying only the input variable for the first chain.
overall_chain = SimpleSequentialChain(chains=[chain, chain_two, chain_three], verbose=True)
catchphrase = overall_chain.run("colorful socks")
print(catchphrase)
> Entering new SimpleSequentialChain chain...
novelty socks
todd & co.
https://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwUdHM4tcQfvCB/out-0.png
> Finished chain. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html"
} |
5ff29923530f-3 | > Finished chain.
https://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwUdHM4tcQfvCB/out-0.png
response = requests.get("https://replicate.delivery/pbxt/eq6foRJngThCAEBqse3nL3Km2MBfLnWQNd0Hy2SQRo2LuprCB/out-0.png")
img = Image.open(BytesIO(response.content))
img
previous
PromptLayer OpenAI
next
SageMakerEndpoint
Contents
Setup
Calling a model
Chaining Calls
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html"
} |
737112843e20-0 | .ipynb
.pdf
Writer
Writer#
This example goes over how to use LangChain to interact with Writer models.
from langchain.llms import Writer
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Writer()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
previous
StochasticAI
next
LLMs
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/writer.html"
} |
20cbe13acd02-0 | .ipynb
.pdf
Banana
Banana#
This example goes over how to use LangChain to interact with Banana models.
import os
from langchain.llms import Banana
from langchain import PromptTemplate, LLMChain
os.environ["BANANA_API_KEY"] = "YOUR_API_KEY"
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Banana(model_key="YOUR_MODEL_KEY")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
previous
Azure OpenAI LLM Example
next
CerebriumAI LLM Example
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/banana.html"
} |
fc3e2fa1f7f8-0 | .ipynb
.pdf
Anthropic
Anthropic#
This example goes over how to use LangChain to interact with Anthropic models.
from langchain.llms import Anthropic
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Anthropic()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
" Step 1: Justin Beiber was born on March 1, 1994\nStep 2: The NFL season ends with the Super Bowl in January/February\nStep 3: Therefore, the Super Bowl that occurred closest to Justin Beiber's birth would be Super Bowl XXIX in 1995\nStep 4: The San Francisco 49ers won Super Bowl XXIX in 1995\n\nTherefore, the answer is the San Francisco 49ers won the Super Bowl in the year Justin Beiber was born."
previous
Aleph Alpha
next
Azure OpenAI LLM Example
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/anthropic_example.html"
} |
deafa53ade31-0 | .ipynb
.pdf
DeepInfra LLM Example
Contents
Imports
Set the Environment API Key
Create the DeepInfra instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
DeepInfra LLM Example#
This notebook goes over how to use LangChain with DeepInfra.
Imports#
import os
from langchain.llms import DeepInfra
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from DeepInfra. You are given 1 hour of free serverless GPU compute to test different models.
You can print your token with deepctl auth token
os.environ["DEEPINFRA_API_TOKEN"] = "YOUR_KEY_HERE"
Create the DeepInfra instance#
Make sure to deploy your model first via deepctl deploy create -m google/flan-t5-xl (for example)
llm = DeepInfra(model_id="DEPLOYED MODEL ID")
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in 2015?"
llm_chain.run(question)
previous
Cohere
next
ForefrontAI LLM Example
Contents
Imports
Set the Environment API Key
Create the DeepInfra instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
By Harrison Chase
© Copyright 2023, Harrison Chase. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/deepinfra_example.html"
} |
deafa53ade31-1 | By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/deepinfra_example.html"
} |
ef58bc9eb733-0 | .ipynb
.pdf
Hugging Face Hub
Hugging Face Hub#
This example showcases how to connect to the Hugging Face Hub.
from langchain import PromptTemplate, HuggingFaceHub, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature":0, "max_length":64}))
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
print(llm_chain.run(question))
The Seattle Seahawks won the Super Bowl in 2010. Justin Beiber was born in 2010. The final answer: Seattle Seahawks.
previous
GPT4all
next
Llama-cpp
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_hub.html"
} |
64c53d878c55-0 | .ipynb
.pdf
ForefrontAI LLM Example
Contents
Imports
Set the Environment API Key
Create the ForefrontAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
ForefrontAI LLM Example#
This notebook goes over how to use Langchain with ForefrontAI.
Imports#
import os
from langchain.llms import ForefrontAI
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from ForefrontAI. You are given a 5-day free trial to test different models.
os.environ["FOREFRONTAI_API_KEY"] = "YOUR_KEY_HERE"
Create the ForefrontAI instance#
You can specify different parameters such as the model endpoint url, length, temperature, etc. You must provide an endpoint url.
llm = ForefrontAI(endpoint_url="YOUR ENDPOINT URL HERE")
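For example, a hedged sketch of passing extra parameters; the parameter names length and temperature here are assumptions, so verify them against the ForefrontAI class before relying on them.
llm = ForefrontAI(
    endpoint_url="YOUR ENDPOINT URL HERE",  # required
    length=256,        # assumed name for the maximum generated length
    temperature=0.5,   # assumed name for the sampling temperature
)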
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
previous
DeepInfra LLM Example
next
GooseAI LLM Example
Contents
Imports
Set the Environment API Key
Create the ForefrontAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
By Harrison Chase
© Copyright 2023, Harrison Chase. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/forefrontai_example.html"
} |
64c53d878c55-1 | By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/forefrontai_example.html"
} |
0393939a9dd2-0 | .ipynb
.pdf
Petals LLM Example
Contents
Install petals
Imports
Set the Environment API Key
Create the Petals instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
Petals LLM Example#
This notebook goes over how to use Langchain with Petals.
Install petals#
The petals package is required to use the Petals API. Install petals using pip3 install petals.
$ pip3 install petals
Imports#
import os
from langchain.llms import Petals
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from Huggingface.
os.environ["HUGGINGFACE_API_KEY"] = "YOUR_KEY_HERE"
Create the Petals instance#
You can specify different parameters such as the model name, max new tokens, temperature, etc.
llm = Petals(model_name="bigscience/bloom-petals")
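For example, a hedged sketch of passing extra parameters; max_new_tokens and temperature are assumed field names, so verify them against the Petals class.
llm = Petals(
    model_name="bigscience/bloom-petals",
    max_new_tokens=64,   # assumed name for the generation length cap
    temperature=0.7,     # assumed name for the sampling temperature
)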
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
previous
OpenAI
next
PromptLayer OpenAI
Contents
Install petals
Imports
Set the Environment API Key
Create the Petals instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
By Harrison Chase
© Copyright 2023, Harrison Chase. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/petals_example.html"
} |
0393939a9dd2-1 | By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/petals_example.html"
} |
464113dbf3bc-0 | .ipynb
.pdf
CerebriumAI LLM Example
Contents
Install cerebrium
Imports
Set the Environment API Key
Create the CerebriumAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
CerebriumAI LLM Example#
This notebook goes over how to use Langchain with CerebriumAI.
Install cerebrium#
The cerebrium package is required to use the CerebriumAI API. Install cerebrium using pip3 install cerebrium.
$ pip3 install cerebrium
Imports#
import os
from langchain.llms import CerebriumAI
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from CerebriumAI. You are given 1 hour of free serverless GPU compute to test different models.
os.environ["CEREBRIUMAI_API_KEY"] = "YOUR_KEY_HERE"
Create the CerebriumAI instance#
You can specify different parameters such as the model endpoint url, max length, temperature, etc. You must provide an endpoint url.
llm = CerebriumAI(endpoint_url="YOUR ENDPOINT URL HERE")
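For example, a hedged sketch of passing extra parameters; max_length and temperature are assumed parameter names forwarded to the endpoint, so verify them against the CerebriumAI class.
llm = CerebriumAI(
    endpoint_url="YOUR ENDPOINT URL HERE",  # required
    max_length=100,    # assumed name for the maximum generated length
    temperature=0.5,   # assumed name for the sampling temperature
)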
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
previous
Banana
next | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/cerebriumai_example.html"
} |
464113dbf3bc-1 | llm_chain.run(question)
previous
Banana
next
Cohere
Contents
Install cerebrium
Imports
Set the Environment API Key
Create the CerebriumAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/cerebriumai_example.html"
} |
4f9cbc593c43-0 | .ipynb
.pdf
OpenAI
OpenAI#
This example goes over how to use LangChain to interact with OpenAI models
from langchain.llms import OpenAI
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = OpenAI()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
' Justin Bieber was born in 1994, so the NFL team that won the Super Bowl in that year was the Dallas Cowboys.'
previous
Modal
next
Petals LLM Example
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/openai.html"
} |
fc3691f31eb9-0 | .ipynb
.pdf
Modal
Modal#
This example goes over how to use LangChain to interact with Modal models
from langchain.llms import Modal
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Modal(endpoint_url="YOUR_ENDPOINT_URL")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
previous
Manifest
next
OpenAI
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/modal.html"
} |
1f3d6d3ff19c-0 | .ipynb
.pdf
StochasticAI
StochasticAI#
This example goes over how to use LangChain to interact with StochasticAI models
from langchain.llms import StochasticAI
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = StochasticAI(api_url="YOUR_API_URL")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
previous
Self-Hosted Models via Runhouse
next
Writer
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/stochasticai.html"
} |
ee62ab80373f-0 | .ipynb
.pdf
GooseAI LLM Example
Contents
Install openai
Imports
Set the Environment API Key
Create the GooseAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
GooseAI LLM Example#
This notebook goes over how to use Langchain with GooseAI.
Install openai#
The openai package is required to use the GooseAI API. Install openai using pip3 install openai.
$ pip3 install openai
Imports#
import os
from langchain.llms import GooseAI
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from GooseAI. You are given $10 in free credits to test different models.
os.environ["GOOSEAI_API_KEY"] = "YOUR_KEY_HERE"
Create the GooseAI instance#
You can specify different parameters such as the model name, max tokens generated, temperature, etc.
llm = GooseAI()
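For example, a hedged sketch of passing extra parameters; model_name, max_tokens, and temperature are assumed field names and the model id is illustrative, so verify them against the GooseAI class.
llm = GooseAI(
    model_name="gpt-neo-20b",  # illustrative model id
    max_tokens=64,             # assumed name for the maximum generated tokens
    temperature=0.7,           # assumed name for the sampling temperature
)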
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
previous
ForefrontAI LLM Example
next
GPT4all
Contents
Install openai
Imports
Set the Environment API Key
Create the GooseAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
By Harrison Chase | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/gooseai_example.html"
} |
ee62ab80373f-1 | Initiate the LLMChain
Run the LLMChain
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/gooseai_example.html"
} |
3d2d27553666-0 | .ipynb
.pdf
AI21
AI21#
This example goes over how to use LangChain to interact with AI21 models
from langchain.llms import AI21
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = AI21()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
previous
Integrations
next
Aleph Alpha
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/ai21.html"
} |
174efd0e7069-0 | .ipynb
.pdf
Llama-cpp
Llama-cpp#
This notebook goes over how to run llama-cpp within LangChain
!pip install llama-cpp-python
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = LlamaCpp(model_path="./ggml-model-q4_0.bin")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
'\n\nWe know that Justin Bieber is currently 25 years old and that he was born on March 1st, 1994 and that he is a singer and he has an album called Purpose, so we know that he was born when Super Bowl XXXVIII was played between Dallas and Seattle and that it took place February 1st, 2004 and that the Seattle Seahawks won 24-21, so Seattle is our answer!'
previous
Hugging Face Hub
next
Manifest
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html"
} |
846c589a0f14-0 | .ipynb
.pdf
Aleph Alpha
Aleph Alpha#
This example goes over how to use LangChain to interact with Aleph Alpha models
from langchain.llms import AlephAlpha
from langchain import PromptTemplate, LLMChain
template = """Q: {question}
A:"""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = AlephAlpha(model="luminous-extended", maximum_tokens=20, stop_sequences=["Q:"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What is AI?"
llm_chain.run(question)
' Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems.\n'
previous
AI21
next
Anthropic
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/aleph_alpha.html"
} |
46fee0761a7b-0 | .ipynb
.pdf
SageMakerEndpoint
SageMakerEndpoint#
This notebook goes over how to use an LLM hosted on a SageMaker endpoint.
!pip3 install langchain boto3
from langchain.docstore.document import Document
example_doc_1 = """
Peter and Elizabeth took a taxi to attend the night party in the city. While at the party, Elizabeth collapsed and was rushed to the hospital.
Since she was diagnosed with a brain injury, the doctor told Peter to stay beside her until she got well.
Therefore, Peter stayed with her at the hospital for 3 days without leaving.
"""
docs = [
Document(
page_content=example_doc_1,
)
]
from typing import Dict
from langchain import PromptTemplate, SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import ContentHandlerBase
from langchain.chains.question_answering import load_qa_chain
import json
query = """How long was Elizabeth hospitalized?
"""
prompt_template = """Use the following pieces of context to answer the question at the end.
{context}
Question: {question}
Answer:"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
class ContentHandler(ContentHandlerBase):
content_type = "application/json"
accepts = "application/json"
def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
input_str = json.dumps({"inputs": prompt, **model_kwargs})
return input_str.encode('utf-8')
def transform_output(self, output: bytes) -> str:
response_json = json.loads(output.read().decode("utf-8"))
return response_json[0]["generated_text"]
content_handler = ContentHandler() | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/sagemaker.html"
} |
46fee0761a7b-1 | return response_json[0]["generated_text"]
content_handler = ContentHandler()
chain = load_qa_chain(
llm=SagemakerEndpoint(
endpoint_name="endpoint-name",
credentials_profile_name="credentials-profile-name",
region_name="us-west-2",
model_kwargs={"temperature":1e-10},
content_handler=content_handler
),
prompt=PROMPT
)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
previous
Replicate
next
Self-Hosted Models via Runhouse
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/sagemaker.html"
} |
c1b6d32d0b7e-0 | .ipynb
.pdf
Manifest
Contents
Compare HF Models
Manifest#
This notebook goes over how to use Manifest and LangChain.
For more detailed information on Manifest, and how to use it with local Hugging Face models as in this example, see https://github.com/HazyResearch/manifest
from manifest import Manifest
from langchain.llms.manifest import ManifestWrapper
manifest = Manifest(
client_name = "huggingface",
client_connection = "http://127.0.0.1:5000"
)
print(manifest.client.get_model_params())
{'model_name': 'bigscience/T0_3B', 'model_path': 'bigscience/T0_3B'}
llm = ManifestWrapper(client=manifest, llm_kwargs={"temperature": 0.001, "max_tokens": 256})
# Map reduce example
from langchain import PromptTemplate
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
_prompt = """Write a concise summary of the following:
{text}
CONCISE SUMMARY:"""
prompt = PromptTemplate(template=_prompt, input_variables=["text"])
text_splitter = CharacterTextSplitter()
mp_chain = MapReduceChain.from_params(llm, prompt, text_splitter)
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
mp_chain.run(state_of_the_union) | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html"
} |
c1b6d32d0b7e-1 | state_of_the_union = f.read()
mp_chain.run(state_of_the_union)
'President Obama delivered his annual State of the Union address on Tuesday night, laying out his priorities for the coming year. Obama said the government will provide free flu vaccines to all Americans, ending the government shutdown and allowing businesses to reopen. The president also said that the government will continue to send vaccines to 112 countries, more than any other nation. "We have lost so much to COVID-19," Trump said. "Time with one another. And worst of all, so much loss of life." He said the CDC is working on a vaccine for kids under 5, and that the government will be ready with plenty of vaccines when they are available. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government is launching a "Test to Treat" initiative that will allow people to get tested at a pharmacy and get antiviral pills on the spot at no cost. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government will continue to send vaccines to 112 countries, more than any other nation. "We are coming for your'
Compare HF Models#
from langchain.model_laboratory import ModelLaboratory
manifest1 = ManifestWrapper(
client=Manifest(
client_name="huggingface",
client_connection="http://127.0.0.1:5000"
),
llm_kwargs={"temperature": 0.01}
)
manifest2 = ManifestWrapper(
client=Manifest(
client_name="huggingface",
client_connection="http://127.0.0.1:5001"
),
llm_kwargs={"temperature": 0.01}
)
manifest3 = ManifestWrapper( | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html"
} |
c1b6d32d0b7e-2 | )
manifest3 = ManifestWrapper(
client=Manifest(
client_name="huggingface",
client_connection="http://127.0.0.1:5002"
),
llm_kwargs={"temperature": 0.01}
)
llms = [manifest1, manifest2, manifest3]
model_lab = ModelLaboratory(llms)
model_lab.compare("What color is a flamingo?")
Input:
What color is a flamingo?
ManifestWrapper
Params: {'model_name': 'bigscience/T0_3B', 'model_path': 'bigscience/T0_3B', 'temperature': 0.01}
pink
ManifestWrapper
Params: {'model_name': 'EleutherAI/gpt-neo-125M', 'model_path': 'EleutherAI/gpt-neo-125M', 'temperature': 0.01}
A flamingo is a small, round
ManifestWrapper
Params: {'model_name': 'google/flan-t5-xl', 'model_path': 'google/flan-t5-xl', 'temperature': 0.01}
pink
previous
Llama-cpp
next
Modal
Contents
Compare HF Models
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html"
} |
429dbfd51afa-0 | .ipynb
.pdf
GPT4all
GPT4all#
This example goes over how to use LangChain to interact with GPT4All models
!pip install pyllamacpp
from langchain.llms import GPT4All
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
# You'll need to download a compatible model and convert it to ggml.
# See: https://github.com/nomic-ai/gpt4all for more information.
llm = GPT4All(model="./models/gpt4all-model.bin")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
previous
GooseAI LLM Example
next
Hugging Face Hub
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html"
} |
b1664a7bf60d-0 | .ipynb
.pdf
Self-Hosted Models via Runhouse
Self-Hosted Models via Runhouse#
This example goes over how to use LangChain and Runhouse to interact with models hosted on your own GPU, or on-demand GPUs on AWS, GCP, Azure, or Lambda.
For more information, see Runhouse or the Runhouse docs.
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
from langchain import PromptTemplate, LLMChain
import runhouse as rh
# For an on-demand A100 with GCP, Azure, or Lambda
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)
# For an on-demand A10G with AWS (no single A100s on AWS)
# gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws')
# For an existing cluster
# gpu = rh.cluster(ips=['<ip of the cluster>'],
# ssh_creds={'ssh_user': '...', 'ssh_private_key':'<path_to_key>'},
# name='rh-a10x')
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = SelfHostedHuggingFaceLLM(model_id="gpt2", hardware=gpu, model_reqs=["pip:./", "transformers", "torch"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
INFO | 2023-02-17 05:42:23,537 | Running _generate_text via gRPC | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/self_hosted_examples.html"
} |
b1664a7bf60d-1 | INFO | 2023-02-17 05:42:24,016 | Time to send message: 0.48 seconds
"\n\nLet's say we're talking sports teams who won the Super Bowl in the year Justin Beiber"
You can also load more custom models through the SelfHostedHuggingFaceLLM interface:
llm = SelfHostedHuggingFaceLLM(
model_id="google/flan-t5-small",
task="text2text-generation",
hardware=gpu,
)
llm("What is the capital of Germany?")
INFO | 2023-02-17 05:54:21,681 | Running _generate_text via gRPC
INFO | 2023-02-17 05:54:21,937 | Time to send message: 0.25 seconds
'berlin'
Using a custom load function, we can load a custom pipeline directly on the remote hardware:
def load_pipeline():
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline # Need to be inside the fn in notebooks
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
"text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
)
return pipe
def inference_fn(pipeline, prompt, stop = None):
return pipeline(prompt)[0]["generated_text"][len(prompt):]
llm = SelfHostedHuggingFaceLLM(model_load_fn=load_pipeline, hardware=gpu, inference_fn=inference_fn)
llm("Who is the current US president?")
INFO | 2023-02-17 05:42:59,219 | Running _generate_text via gRPC | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/self_hosted_examples.html"
} |
b1664a7bf60d-2 | INFO | 2023-02-17 05:42:59,522 | Time to send message: 0.3 seconds
'john w. bush'
You can send your pipeline directly over the wire to the remote hardware, but this will only work for small models (<2 GB) and will be pretty slow:
pipeline = load_pipeline()
llm = SelfHostedPipeline.from_pipeline(
pipeline=pipeline, hardware=gpu, model_reqs=["pip:./", "transformers", "torch"]
)
Instead, we can also send it to the hardware’s filesystem, which will be much faster.
import pickle
rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to(gpu, path="models")
llm = SelfHostedPipeline.from_pipeline(pipeline="models/pipeline.pkl", hardware=gpu)
previous
SageMakerEndpoint
next
StochasticAI
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/llms/integrations/self_hosted_examples.html"
} |
f2e0914784fd-0 | .ipynb
.pdf
Getting Started
Contents
PromptTemplates
LLMChain
Streaming
Getting Started#
This notebook covers how to get started with chat models. The interface is based around messages rather than raw text.
from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate, LLMChain
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
AIMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
chat = ChatOpenAI(temperature=0)
You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage – ChatMessage takes in an arbitrary role parameter. Most of the time, you’ll just be dealing with HumanMessage, AIMessage, and SystemMessage
chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
AIMessage(content="J'aime programmer.", additional_kwargs={})
OpenAI’s chat model supports multiple messages as input. See here for more information. Here is an example of sending a system and user message to the chat model:
messages = [
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)
AIMessage(content="J'aime programmer.", additional_kwargs={})
You can go one step further and generate completions for multiple sets of messages using generate. This returns an LLMResult with an additional message parameter.
batch_messages = [
[
SystemMessage(content="You are a helpful assistant that translates English to French."), | {
"url": "https://python.langchain.com/en/latest/modules/models/chat/getting_started.html"
} |
f2e0914784fd-1 | [
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="Translate this sentence from English to French. I love programming.")
],
[
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="Translate this sentence from English to French. I love artificial intelligence.")
],
]
result = chat.generate(batch_messages)
result
LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}})
You can recover things like token usage from this LLMResult
result.llm_output
{'token_usage': {'prompt_tokens': 71,
'completion_tokens': 18,
'total_tokens': 89}}
PromptTemplates#
You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt – this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.
For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:
template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template) | {
"url": "https://python.langchain.com/en/latest/modules/models/chat/getting_started.html"
} |
f2e0914784fd-2 | system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
AIMessage(content="J'adore la programmation.", additional_kwargs={})
If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate outside and then pass it in, e.g.:
prompt=PromptTemplate(
template="You are a helpful assistant that translates {input_language} to {output_language}.",
input_variables=["input_language", "output_language"],
)
system_message_prompt = SystemMessagePromptTemplate(prompt=prompt)
LLMChain#
You can use the existing LLMChain in a very similar way to before - provide a prompt and a model.
chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
"J'adore la programmation."
Streaming#
Streaming is supported for ChatOpenAI through callback handling.
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])
Verse 1:
Bubbles rising to the top
A refreshing drink that never stops
Clear and crisp, it's pure delight
A taste that's sure to excite
Chorus:
Sparkling water, oh so fine | {
"url": "https://python.langchain.com/en/latest/modules/models/chat/getting_started.html"
} |
f2e0914784fd-3 | A taste that's sure to excite
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Verse 2:
No sugar, no calories, just pure bliss
A drink that's hard to resist
It's the perfect way to quench my thirst
A drink that always comes first
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Bridge:
From the mountains to the sea
Sparkling water, you're the key
To a healthy life, a happy soul
A drink that makes me feel whole
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Outro:
Sparkling water, you're the one
A drink that's always so much fun
I'll never let you go, my friend
Sparkling
previous
Chat Models
next
How-To Guides
Contents
PromptTemplates
LLMChain
Streaming
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/chat/getting_started.html"
} |
2f3472760408-0 | .rst
.pdf
How-To Guides
How-To Guides#
The examples here all address certain “how-to” guides for working with chat models.
How to use few shot examples
How to stream responses
previous
Getting Started
next
How to use few shot examples
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/chat/how_to_guides.html"
} |
d1d2e2b9c0f5-0 | .rst
.pdf
Integrations
Integrations#
The examples here all highlight how to integrate with different chat models.
Azure
OpenAI
PromptLayer ChatOpenAI
previous
How to stream responses
next
Azure
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/chat/integrations.html"
} |
09f604f90200-0 | .ipynb
.pdf
How to use few shot examples
Contents
Alternating Human/AI messages
System Messages
How to use few shot examples#
This notebook covers how to use few shot examples in chat models.
There does not appear to be solid consensus on how best to do few shot prompting. As a result, we are not solidifying any abstractions around this yet but rather using existing abstractions.
Alternating Human/AI messages#
The first way of doing few shot prompting relies on using alternating human/ai messages. See an example of this below.
from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate, LLMChain
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
AIMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
chat = ChatOpenAI(temperature=0)
template="You are a helpful assistant that translates english to pirate."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
example_human = HumanMessagePromptTemplate.from_template("Hi")
example_ai = AIMessagePromptTemplate.from_template("Argh me mateys")
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, example_human, example_ai, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
# get a chat completion from the formatted messages
chain.run("I love programming.")
"I be lovin' programmin', me hearty!"
System Messages#
OpenAI provides an optional name parameter that they also recommend using in conjunction with system messages to do few shot prompting. Here is an example of how to do that below. | {
"url": "https://python.langchain.com/en/latest/modules/models/chat/examples/few_shot_examples.html"
} |
09f604f90200-1 | template="You are a helpful assistant that translates english to pirate."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
example_human = SystemMessagePromptTemplate.from_template("Hi", additional_kwargs={"name": "example_user"})
example_ai = SystemMessagePromptTemplate.from_template("Argh me mateys", additional_kwargs={"name": "example_assistant"})
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, example_human, example_ai, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
# get a chat completion from the formatted messages
chain.run("I love programming.")
"I be lovin' programmin', me hearty."
previous
How-To Guides
next
How to stream responses
Contents
Alternating Human/AI messages
System Messages
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/chat/examples/few_shot_examples.html"
} |
d0cbb52d17ce-0 | .ipynb
.pdf
How to stream responses
How to stream responses#
This notebook goes over how to use streaming with a chat model.
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
HumanMessage,
)
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])
Verse 1:
Bubbles rising to the top
A refreshing drink that never stops
Clear and crisp, it's pure delight
A taste that's sure to excite
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Verse 2:
No sugar, no calories, just pure bliss
A drink that's hard to resist
It's the perfect way to quench my thirst
A drink that always comes first
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Bridge:
From the mountains to the sea
Sparkling water, you're the key
To a healthy life, a happy soul
A drink that makes me feel whole
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Outro:
Sparkling water, you're the one
A drink that's always so much fun
I'll never let you go, my friend
Sparkling
previous
How to use few shot examples
next | {
"url": "https://python.langchain.com/en/latest/modules/models/chat/examples/streaming.html"
} |
d0cbb52d17ce-1 | Sparkling
previous
How to use few shot examples
next
Integrations
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/chat/examples/streaming.html"
} |
ed2f1fd7dd65-0 | .ipynb
.pdf
Azure
Azure#
This notebook goes over how to connect to an Azure hosted OpenAI endpoint
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage
BASE_URL = "https://${TODO}.openai.azure.com"
API_KEY = "..."
DEPLOYMENT_NAME = "chat"
model = AzureChatOpenAI(
openai_api_base=BASE_URL,
openai_api_version="2023-03-15-preview",
deployment_name=DEPLOYMENT_NAME,
openai_api_key=API_KEY,
openai_api_type = "azure",
)
model([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
AIMessage(content="\n\nJ'aime programmer.", additional_kwargs={})
previous
Integrations
next
OpenAI
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/chat/integrations/azure_chat_openai.html"
} |
9bafd8f3f283-0 | .ipynb
.pdf
PromptLayer ChatOpenAI
Contents
Install PromptLayer
Imports
Set the Environment API Key
Use the PromptLayerOpenAI LLM like normal
Using PromptLayer Track
PromptLayer ChatOpenAI#
This example showcases how to connect to PromptLayer to start recording your ChatOpenAI requests.
Install PromptLayer#
The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip.
pip install promptlayer
Imports#
import os
from langchain.chat_models import PromptLayerChatOpenAI
from langchain.schema import HumanMessage
Set the Environment API Key#
You can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the navbar.
Set it as an environment variable called PROMPTLAYER_API_KEY.
os.environ["PROMPTLAYER_API_KEY"] = "**********"
Use the PromptLayerOpenAI LLM like normal#
You can optionally pass in pl_tags to track your requests with PromptLayer’s tagging feature.
chat = PromptLayerChatOpenAI(pl_tags=["langchain"])
chat([HumanMessage(content="I am a cat and I want")])
AIMessage(content='to take a nap in a cozy spot. I search around for a suitable place and finally settle on a soft cushion on the window sill. I curl up into a ball and close my eyes, relishing the warmth of the sun on my fur. As I drift off to sleep, I can hear the birds chirping outside and feel the gentle breeze blowing through the window. This is the life of a contented cat.', additional_kwargs={})
The above request should now appear on your PromptLayer dashboard.
Using PromptLayer Track#
If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id. | {
"url": "https://python.langchain.com/en/latest/modules/models/chat/integrations/promptlayer_chatopenai.html"
} |
9bafd8f3f283-1 | import promptlayer
chat = PromptLayerChatOpenAI(return_pl_id=True)
chat_results = chat.generate([[HumanMessage(content="I am a cat and I want")]])
for res in chat_results.generations:
pl_request_id = res[0].generation_info["pl_request_id"]
promptlayer.track.score(request_id=pl_request_id, score=100)
Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well.
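As a hedged sketch of attaching a template to a request, assuming a prompt saved in your PromptLayer registry under the illustrative name "translation_template" and that promptlayer.track.prompt accepts these keyword arguments (check the PromptLayer docs for the exact signature):
promptlayer.track.prompt(
    request_id=pl_request_id,                       # id obtained via return_pl_id=True above
    prompt_name="translation_template",             # assumed: a prompt registered in your PromptLayer account
    prompt_input_variables={"text": "I am a cat and I want"},
)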
Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.
previous
OpenAI
next
Text Embedding Models
Contents
Install PromptLayer
Imports
Set the Environment API Key
Use the PromptLayerOpenAI LLM like normal
Using PromptLayer Track
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/chat/integrations/promptlayer_chatopenai.html"
} |
dcb942a1eb20-0 | .ipynb
.pdf
OpenAI
OpenAI#
This notebook covers how to get started with OpenAI chat models.
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
AIMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
chat = ChatOpenAI(temperature=0)
messages = [
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)
AIMessage(content="J'aime programmer.", additional_kwargs={})
You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt – this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.
For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:
template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
AIMessage(content="J'adore la programmation.", additional_kwargs={}) | {
"url": "https://python.langchain.com/en/latest/modules/models/chat/integrations/openai.html"
} |
dcb942a1eb20-1 | AIMessage(content="J'adore la programmation.", additional_kwargs={})
previous
Azure
next
PromptLayer ChatOpenAI
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/chat/integrations/openai.html"
} |
63314fa31ca2-0 | .ipynb
.pdf
Cohere
Cohere#
Let’s load the Cohere Embedding class.
from langchain.embeddings import CohereEmbeddings
embeddings = CohereEmbeddings(cohere_api_key=cohere_api_key)
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
previous
AzureOpenAI
next
Fake Embeddings
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/text_embedding/examples/cohere.html"
} |
cc18af5e74f6-0 | .ipynb
.pdf
Hugging Face Hub
Hugging Face Hub#
Let’s load the Hugging Face Embedding class.
from langchain.embeddings import HuggingFaceEmbeddings
embeddings = HuggingFaceEmbeddings()
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
previous
Fake Embeddings
next
InstructEmbeddings
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/text_embedding/examples/huggingfacehub.html"
} |
7454dc1048ab-0 | .ipynb
.pdf
Self Hosted Embeddings
Self Hosted Embeddings#
Let’s load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes.
from langchain.embeddings import (
SelfHostedEmbeddings,
SelfHostedHuggingFaceEmbeddings,
SelfHostedHuggingFaceInstructEmbeddings,
)
import runhouse as rh
# For an on-demand A100 with GCP, Azure, or Lambda
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)
# For an on-demand A10G with AWS (no single A100s on AWS)
# gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws')
# For an existing cluster
# gpu = rh.cluster(ips=['<ip of the cluster>'],
# ssh_creds={'ssh_user': '...', 'ssh_private_key':'<path_to_key>'},
# name='my-cluster')
embeddings = SelfHostedHuggingFaceEmbeddings(hardware=gpu)
text = "This is a test document."
query_result = embeddings.embed_query(text)
And similarly for SelfHostedHuggingFaceInstructEmbeddings:
embeddings = SelfHostedHuggingFaceInstructEmbeddings(hardware=gpu)
Now let’s load an embedding model with a custom load function:
def get_pipeline():
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
pipeline,
) # Must be inside the function in notebooks
model_id = "facebook/bart-base"
tokenizer = AutoTokenizer.from_pretrained(model_id) | {
"url": "https://python.langchain.com/en/latest/modules/models/text_embedding/examples/self-hosted.html"
} |
7454dc1048ab-1 | tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
return pipeline("feature-extraction", model=model, tokenizer=tokenizer)
def inference_fn(pipeline, prompt):
# Return last hidden state of the model
if isinstance(prompt, list):
return [emb[0][-1] for emb in pipeline(prompt)]
return pipeline(prompt)[0][-1]
embeddings = SelfHostedEmbeddings(
model_load_fn=get_pipeline,
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
inference_fn=inference_fn,
)
query_result = embeddings.embed_query(text)
previous
SageMaker Endpoint Embeddings
next
TensorflowHub
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/text_embedding/examples/self-hosted.html"
} |
f9daecdbb2bc-0 | .ipynb
.pdf
SageMaker Endpoint Embeddings
SageMaker Endpoint Embeddings#
Let’s load the SageMaker Endpoint Embeddings class. The class can be used if you host, for example, your own Hugging Face model on SageMaker.
For instructions on how to do this, please see here
!pip3 install langchain boto3
from typing import Dict
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.llms.sagemaker_endpoint import ContentHandlerBase
import json
class ContentHandler(ContentHandlerBase):
content_type = "application/json"
accepts = "application/json"
def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
input_str = json.dumps({"inputs": prompt, **model_kwargs})
return input_str.encode('utf-8')
def transform_output(self, output: bytes) -> str:
response_json = json.loads(output.read().decode("utf-8"))
return response_json["embeddings"]
content_handler = ContentHandler()
embeddings = SagemakerEndpointEmbeddings(
# endpoint_name="endpoint-name",
# credentials_profile_name="credentials-profile-name",
endpoint_name="huggingface-pytorch-inference-2023-03-21-16-14-03-834",
region_name="us-east-1",
content_handler=content_handler
)
query_result = embeddings.embed_query("foo")
doc_results = embeddings.embed_documents(["foo"])
doc_results
previous
OpenAI
next
Self Hosted Embeddings
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/text_embedding/examples/sagemaker-endpoint.html"
} |
d949a80ead3c-0 | .ipynb
.pdf
Jina
Jina#
Let’s load the Jina Embedding class.
from langchain.embeddings import JinaEmbeddings
embeddings = JinaEmbeddings(jina_auth_token=jina_auth_token, model_name="ViT-B-32::openai")
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
In the above example, ViT-B-32::openai, OpenAI’s pretrained ViT-B-32 model, is used. For a full list of models, see here.
previous
InstructEmbeddings
next
Llama-cpp
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/text_embedding/examples/jina.html"
} |
da8299c44940-0 | .ipynb
.pdf
Fake Embeddings
Fake Embeddings#
LangChain also provides a fake embedding class. You can use this to test your pipelines.
from langchain.embeddings import FakeEmbeddings
embeddings = FakeEmbeddings(size=1352)
query_result = embeddings.embed_query("foo")
doc_results = embeddings.embed_documents(["foo"])
previous
Cohere
next
Hugging Face Hub
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/text_embedding/examples/fake.html"
} |
6fc88060b489-0 | .ipynb
.pdf
OpenAI
OpenAI#
Let’s load the OpenAI Embedding class.
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
Let’s load the OpenAI Embedding class with first generation models (e.g. text-search-ada-doc-001/text-search-ada-query-001). Note: These are not recommended models - see here
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(model_name="ada")
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
previous
Llama-cpp
next
SageMaker Endpoint Embeddings
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/text_embedding/examples/openai.html"
} |
a8bd18f042a1-0 | .ipynb
.pdf
Llama-cpp
Llama-cpp#
This notebook goes over how to use Llama-cpp embeddings within LangChain
!pip install llama-cpp-python
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/path/to/model/ggml-model-q4_0.bin")
text = "This is a test document."
query_result = llama.embed_query(text)
doc_result = llama.embed_documents([text])
previous
Jina
next
OpenAI
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/text_embedding/examples/llamacpp.html"
} |
341bbbd45b21-0 | .ipynb
.pdf
Aleph Alpha
Contents
Asymmetric
Symmetric
Aleph Alpha#
There are two possible ways to use Aleph Alpha’s semantic embeddings. If you have texts with a dissimilar structure (e.g. a Document and a Query) you would want to use asymmetric embeddings. Conversely, for texts with comparable structures, symmetric embeddings are the suggested approach.
Asymmetric#
from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding
document = "This is a content of the document"
query = "What is the contnt of the document?"
embeddings = AlephAlphaAsymmetricSemanticEmbedding()
doc_result = embeddings.embed_documents([document])
query_result = embeddings.embed_query(query)
Symmetric#
from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding
text = "This is a test text"
embeddings = AlephAlphaSymmetricSemanticEmbedding()
doc_result = embeddings.embed_documents([text])
query_result = embeddings.embed_query(text)
previous
Text Embedding Models
next
AzureOpenAI
Contents
Asymmetric
Symmetric
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/text_embedding/examples/aleph_alpha.html"
} |
e109f997b3a7-0 | .ipynb
.pdf
InstructEmbeddings
InstructEmbeddings#
Let’s load the HuggingFace instruct Embeddings class.
from langchain.embeddings import HuggingFaceInstructEmbeddings
embeddings = HuggingFaceInstructEmbeddings(
query_instruction="Represent the query for retrieval: "
)
load INSTRUCTOR_Transformer
max_seq_length 512
text = "This is a test document."
query_result = embeddings.embed_query(text)
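For embedding documents with their own instruction, a hedged sketch; embed_instruction is an assumed parameter name for the document-side instruction, so verify it against the HuggingFaceInstructEmbeddings class.
embeddings = HuggingFaceInstructEmbeddings(
    embed_instruction="Represent the document for retrieval: ",  # assumed parameter name
    query_instruction="Represent the query for retrieval: ",
)
doc_result = embeddings.embed_documents([text])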
previous
Hugging Face Hub
next
Jina
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/text_embedding/examples/instruct_embeddings.html"
} |
b87363b887a5-0 | .ipynb
.pdf
AzureOpenAI
AzureOpenAI#
Let’s load the OpenAI Embedding class with environment variables set to indicate to use Azure endpoints.
# set the environment variables needed for openai package to know to reach out to azure
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(model="your-embeddings-deployment-name")
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
previous
Aleph Alpha
next
Cohere
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 08, 2023. | {
"url": "https://python.langchain.com/en/latest/modules/models/text_embedding/examples/azureopenai.html"
} |