| id | text | source |
|---|---|---|
5a390d76240c-0 |
Structured Decoding with RELLM
Contents
Hugging Face Baseline
RELLM LLM Wrapper
Structured Decoding with RELLM#
RELLM is a library that wraps local Hugging Face pipeline models for structured decoding.
It works by generating tokens one at a time. At each step, it masks tokens that don’t conform to the pro... | https://python.langchain.com/en/latest/modules/models/llms/integrations/rellm_experimental.html |
5a390d76240c-1 | Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
generations=[[Generation(text=' "What\'s the capital of Maryland?"\n', generation_info=None)]] llm_output=None
That’s not so impressive, is it? It didn’t answer the question and it didn’t follow the JSON format at all! Let’s try with the structured... | https://python.langchain.com/en/latest/modules/models/llms/integrations/rellm_experimental.html |
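The token-masking idea described above can be sketched without a real model. The snippet below is a toy illustration (not the RELLM API): a stand-in `score_fn` plays the role of model logits, and candidate tokens that would break the target regex are masked out at each step.

```python
import re

def constrained_generate(score_fn, vocab, pattern, max_tokens=8):
    """Greedy decoding that masks tokens violating `pattern`.

    A toy sketch of structured decoding; `score_fn(prefix, token)`
    stands in for real model logits."""
    out = ""
    pat = re.compile(pattern)
    for _ in range(max_tokens):
        # Mask step: keep only tokens whose addition still matches the pattern.
        allowed = [t for t in vocab if pat.fullmatch(out + t)]
        if not allowed:
            break
        out += max(allowed, key=lambda t: score_fn(out, t))
    return out

# The unconstrained "model" prefers higher codepoints, but the mask forces digits.
print(constrained_generate(lambda prefix, tok: ord(tok), list("ab12"), r"[0-9]{0,3}"))  # → '222'
```

A real implementation applies the same mask to the logits of a Hugging Face pipeline before sampling each token, which is why the constrained run conforms to the format while the baseline above does not.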
4999ccf108e1-0 |
Runhouse
Runhouse#
Runhouse allows remote compute and data across environments and users. See the Runhouse docs.
This example goes over how to use LangChain and Runhouse to interact with models hosted on your own GPU, or on-demand GPUs on AWS, GCP, or Lambda.
Note: Code uses SelfHosted name instead... | https://python.langchain.com/en/latest/modules/models/llms/integrations/runhouse.html |
4999ccf108e1-1 | llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
INFO | 2023-02-17 05:42:23,537 | Running _generate_text via gRPC
INFO | 2023-02-17 05:42:24,016 | Time to send message: 0.48 seconds
"\n\nLet's say we're talking sports ... | https://python.langchain.com/en/latest/modules/models/llms/integrations/runhouse.html |
4999ccf108e1-2 | )
return pipe
def inference_fn(pipeline, prompt, stop=None):
return pipeline(prompt)[0]["generated_text"][len(prompt):]
llm = SelfHostedHuggingFaceLLM(model_load_fn=load_pipeline, hardware=gpu, inference_fn=inference_fn)
llm("Who is the current US president?")
INFO | 2023-02-17 05:42:59,219 | Running _generat... | https://python.langchain.com/en/latest/modules/models/llms/integrations/runhouse.html |
9fc646b4880c-0 |
Modal
Modal#
The Modal Python Library provides convenient, on-demand access to serverless cloud compute from Python scripts on your local computer.
Modal itself does not provide any LLMs; it provides only the infrastructure.
This example goes over how to use LangChain to interact with Modal.
Here is another exam... | https://python.langchain.com/en/latest/modules/models/llms/integrations/modal.html |
9fc646b4880c-1 | llm_chain.run(question)
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/modules/models/llms/integrations/modal.html |
ea5b0bae433b-0 |
PromptLayer OpenAI
Contents
Install PromptLayer
Imports
Set the Environment API Key
Use the PromptLayerOpenAI LLM like normal
Using PromptLayer Track
PromptLayer OpenAI#
PromptLayer is the first platform that allows you to track, manage, and share your GPT prompt engineering. PromptLayer acts as middleware... | https://python.langchain.com/en/latest/modules/models/llms/integrations/promptlayer_openai.html |
ea5b0bae433b-1 | The above request should now appear on your PromptLayer dashboard.
Using PromptLayer Track#
If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id.
llm = PromptLayerOpenAI(return_pl_id=True)
llm_results... | https://python.langchain.com/en/latest/modules/models/llms/integrations/promptlayer_openai.html |
506ade5a016b-0 |
DeepInfra
Contents
Imports
Set the Environment API Key
Create the DeepInfra instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
DeepInfra#
DeepInfra provides several LLMs.
This notebook goes over how to use LangChain with DeepInfra.
Imports#
import os
from langchain.llms import DeepIn... | https://python.langchain.com/en/latest/modules/models/llms/integrations/deepinfra_example.html |
506ade5a016b-1 | llm_chain.run(question)
previous
Databricks
next
ForefrontAI
Contents
Imports
Set the Environment API Key
Create the DeepInfra instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/modules/models/llms/integrations/deepinfra_example.html |
a2ea6d75e744-0 |
Banana
Banana#
Banana is focused on building machine learning infrastructure.
This example goes over how to use LangChain to interact with Banana models.
# Install the package https://docs.banana.dev/banana-docs/core-concepts/sdks/python
!pip install banana-dev
# get new tokens: https://app.banana.dev/
... | https://python.langchain.com/en/latest/modules/models/llms/integrations/banana.html |
fbf534b00e2a-0 |
C Transformers
C Transformers#
The C Transformers library provides Python bindings for GGML models.
This example goes over how to use LangChain to interact with C Transformers models.
Install
%pip install ctransformers
Load Model
from langchain.llms import CTransformers
llm = CTransformers(model='marella/gp... | https://python.langchain.com/en/latest/modules/models/llms/integrations/ctransformers.html |
8a7e442bed9e-0 |
PredictionGuard
Contents
Basic LLM usage
Chaining
PredictionGuard#
How to use the PredictionGuard wrapper
! pip install predictionguard langchain
import predictionguard as pg
from langchain.llms import PredictionGuard
Basic LLM usage#
pgllm = PredictionGuard(name="default-text-gen", token="<your access token>... | https://python.langchain.com/en/latest/modules/models/llms/integrations/predictionguard.html |
c8d4cdc6e27a-0 |
PipelineAI
Contents
Install pipeline-ai
Imports
Set the Environment API Key
Create the PipelineAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
PipelineAI#
PipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models.
This ... | https://python.langchain.com/en/latest/modules/models/llms/integrations/pipelineai_example.html |
c8d4cdc6e27a-1 | Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question) | https://python.langchain.com/en/latest/modules/models/llms/integrations/pipelineai_example.html |
888deda77342-0 |
OpenAI
Contents
OpenAI
If you are behind an explicit proxy, you can use the OPENAI_PROXY environment variable to pass through.
OpenAI#
OpenAI offers a spectrum of models with different levels of power suitable for different tasks.
This example goes over how to use LangChain to interact with OpenAI models
#... | https://python.langchain.com/en/latest/modules/models/llms/integrations/openai.html |
d7992abeceae-0 |
CerebriumAI
Contents
Install cerebrium
Imports
Set the Environment API Key
Create the CerebriumAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
CerebriumAI#
Cerebrium is an AWS SageMaker alternative. It also provides API access to several LLM models.
This notebook goes over how ... | https://python.langchain.com/en/latest/modules/models/llms/integrations/cerebriumai_example.html |
d7992abeceae-1 | Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question) | https://python.langchain.com/en/latest/modules/models/llms/integrations/cerebriumai_example.html |
617890675057-0 |
Petals
Contents
Install petals
Imports
Set the Environment API Key
Create the Petals instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
Petals#
Petals runs 100B+ language models at home, BitTorrent-style.
This notebook goes over how to use LangChain with Petals.
Install petals#
The p... | https://python.langchain.com/en/latest/modules/models/llms/integrations/petals_example.html |
617890675057-1 | Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question) | https://python.langchain.com/en/latest/modules/models/llms/integrations/petals_example.html |
7c41ac1b5adb-0 |
OpenLM
Contents
Setup
Using LangChain with OpenLM
OpenLM#
OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP.
It implements the OpenAI Completion class so that it can be used as a drop-in replacement for the OpenAI API. This changeset u... | https://python.langchain.com/en/latest/modules/models/llms/integrations/openlm.html |
7c41ac1b5adb-1 | for model in ["text-davinci-003", "huggingface.co/gpt2"]:
llm = OpenLM(model=model)
llm_chain = LLMChain(prompt=prompt, llm=llm)
result = llm_chain.run(question)
print("""Model: {}
Result: {}""".format(model, result))
Model: text-davinci-003
Result: France is a country in Europe. The capital of France ... | https://python.langchain.com/en/latest/modules/models/llms/integrations/openlm.html |
1863da3dac17-0 |
Azure OpenAI
Contents
API configuration
Deployments
Azure OpenAI#
This notebook goes over how to use LangChain with Azure OpenAI.
The Azure OpenAI API is compatible with OpenAI’s API. The openai Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you ... | https://python.langchain.com/en/latest/modules/models/llms/integrations/azure_openai_example.html |
1863da3dac17-1 | import openai
response = openai.Completion.create(
engine="text-davinci-002-prod",
prompt="This is a test",
max_tokens=5
)
!pip install openai
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2022-12-01"
os.environ["OPENAI_API_BASE"] = "..."
os.environ["OPENAI_API_KEY"] ... | https://python.langchain.com/en/latest/modules/models/llms/integrations/azure_openai_example.html |
30f17816b1c6-0 |
Replicate
Contents
Setup
Calling a model
Chaining Calls
Replicate#
Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you’re building your own machine learning models, Replicate makes it easy to deploy them at scale.
T... | https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html |
30f17816b1c6-1 | Note that only the first output of a model will be returned.
llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")
prompt = """
Answer the following yes/no question by reasoning step by step.
Can a dog drive a car?
"""
llm(prompt)
'The legal driving age of dog... | https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html |
30f17816b1c6-2 | from langchain.chains import SimpleSequentialChain
First, let’s define the LLM for this chain as a Dolly model, and text2image as a Stable Diffusion model.
dolly_llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")
text2image = Replicate(model="stability-ai/stable-... | https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html |
30f17816b1c6-3 | catchphrase = overall_chain.run("colorful socks")
print(catchphrase)
> Entering new SimpleSequentialChain chain...
novelty socks
todd & co.
https://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwUdHM4tcQfvCB/out-0.png
> Finished chain.
https://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwU... | https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html |
200568fb2a57-0 |
Google Cloud Platform Vertex AI PaLM
Google Cloud Platform Vertex AI PaLM#
Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there.
PaLM API on Vertex AI is a Preview offering, su... | https://python.langchain.com/en/latest/modules/models/llms/integrations/google_vertex_ai_palm.html |
200568fb2a57-1 | prompt = PromptTemplate(template=template, input_variables=["question"])
llm = VertexAI()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
'Justin Bieber was born on March 1, 1994. The Super Bowl in 1994 was won by the... | https://python.langchain.com/en/latest/modules/models/llms/integrations/google_vertex_ai_palm.html |
ab0d0d51952b-0 |
How to use the async API for LLMs
How to use the async API for LLMs#
LangChain provides async support for LLMs by leveraging the asyncio library.
Async support is particularly useful for calling multiple LLMs concurrently, as these calls are network-bound. Currently, OpenAI, PromptLayerOpenAI, ChatOpenAI an... | https://python.langchain.com/en/latest/modules/models/llms/examples/async_llm.html |
ab0d0d51952b-1 | I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, how about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thank you! How about you?
I'm doing well, thank you. How a... | https://python.langchain.com/en/latest/modules/models/llms/examples/async_llm.html |
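The concurrency win comes from awaiting many network-bound calls at once with `asyncio.gather`. A minimal sketch, using a stand-in coroutine in place of a real LLM call (the sleep models network latency; the real LangChain entry point would be the LLM's async generate method):

```python
import asyncio

async def fake_llm_call(prompt: str) -> str:
    # Stand-in for a network-bound LLM call.
    await asyncio.sleep(0.1)
    return f"echo: {prompt}"

async def generate_concurrently(prompts):
    # All calls run concurrently, so ten 0.1 s calls take ~0.1 s, not ~1 s.
    return await asyncio.gather(*(fake_llm_call(p) for p in prompts))

results = asyncio.run(generate_concurrently([f"q{i}" for i in range(10)]))
print(results[0])  # → 'echo: q0'
```

`gather` preserves input order in its results, which is why each of the ten near-identical responses above can be matched back to its prompt.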
d4679ffc8456-0 |
How (and why) to use the human input LLM
How (and why) to use the human input LLM#
Similar to the fake LLM, LangChain provides a pseudo LLM class that can be used for testing, debugging, or educational purposes. This allows you to mock out calls to the LLM and simulate how a human would respond if they rece... | https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html |
d4679ffc8456-1 | Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: What is 'Bocchi the Rock!'?
Thought:
=====END OF PROMPT===... | https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html |
d4679ffc8456-2 | Page: Manga Time Kirara Max
Summary: Manga Time Kirara Max (まんがタイムきららMAX) is a Japanese four-panel seinen manga magazine published by Houbunsha. It is the third magazine of the "Kirara" series, after "Manga Time Kirara" and "Manga Time Kirara Carat". The first issue was released on September 29, 2004. Currently the mag... | https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html |
d4679ffc8456-3 | Observation: Page: Bocchi the Rock!
Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōb... | https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html |
d4679ffc8456-4 | Observation: Page: Bocchi the Rock!
Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōb... | https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html |
d4679ffc8456-5 | Observation: Page: Bocchi the Rock!
Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōb... | https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html |
d4679ffc8456-6 | Observation: Page: Bocchi the Rock!
Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōb... | https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html |
bc4c5b7b9bf7-0 |
How to write a custom LLM wrapper
How to write a custom LLM wrapper#
This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is supported in LangChain.
There is only one required thing that a custom LLM needs to implement:
A _call... | https://python.langchain.com/en/latest/modules/models/llms/examples/custom_llm.html |
bc4c5b7b9bf7-1 | 'This is a '
We can also print the LLM and see its custom print.
print(llm)
CustomLLM
Params: {'n': 10} | https://python.langchain.com/en/latest/modules/models/llms/examples/custom_llm.html |
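The pattern can be mimicked without LangChain installed. This stand-alone class mirrors the notebook's example, which returns the first n characters of the prompt; a real wrapper would subclass LangChain's base `LLM` class instead of defining `__call__` itself.

```python
# Minimal stand-in for the custom-LLM pattern: the only required piece is a
# _call method mapping a prompt string to a completion string.
class CustomLLM:
    def __init__(self, n: int):
        self.n = n

    def _call(self, prompt: str, stop=None) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        return prompt[: self.n]

    def __call__(self, prompt: str) -> str:
        return self._call(prompt)

llm = CustomLLM(n=10)
print(llm("This is a foobar thing"))  # → 'This is a '
```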
8d0ff0f863b8-0 |
How to stream LLM and Chat Model responses
How to stream LLM and Chat Model responses#
LangChain provides streaming support for LLMs. Currently, we support streaming for the OpenAI, ChatOpenAI, and ChatAnthropic implementations, but streaming support for other LLM implementations is on the roadmap. To utili... | https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html |
8d0ff0f863b8-1 | On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
We still have access to the end LLMResult if using generate. However, token_usage is not currently supported for streaming.
llm.generate(["Tell me a jok... | https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html |
8d0ff0f863b8-2 | Sparkling water, you're my favorite vibe
Bridge:
You're my go-to drink, day or night
You make me feel so light
I'll never give you up, you're my true love
Sparkling water, you're sent from above
Chorus:
Sparkling water, oh how you shine
A taste so clean, it's simply divine
You quench my thirst, you make me feel alive
S... | https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html |
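The streaming hook can be sketched with a plain callback object. The method name mirrors LangChain's `on_llm_new_token`, but the "model" here is just a word-splitting stand-in for illustration:

```python
class StdOutCallback:
    """Collects tokens and echoes them as they arrive."""
    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token: str) -> None:
        self.tokens.append(token)
        print(token, end="", flush=True)

def stream_completion(text: str, callback) -> str:
    # Stand-in model: emit the completion word by word instead of all at once.
    for token in text.split(" "):
        callback.on_llm_new_token(token + " ")
    return text

cb = StdOutCallback()
stream_completion("Sparkling water is refreshing", cb)
```

Because each token is handed to the callback the moment it exists, nothing is buffered until the end, which is also why aggregate fields like `token_usage` are unavailable mid-stream.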
4d60813c337a-0 |
How to serialize LLM classes
Contents
Loading
Saving
How to serialize LLM classes#
This notebook walks through how to write and read an LLM Configuration to and from disk. This is useful if you want to save the configuration for a given LLM (e.g., the provider, the temperature, etc).
from langchain.llms i... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_serialization.html |
4d60813c337a-1 | llm.save("llm.json")
llm.save("llm.yaml") | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_serialization.html |
c5259b6ba69a-0 |
How (and why) to use the fake LLM
How (and why) to use the fake LLM#
We expose a fake LLM class that can be used for testing. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way.
In this notebook we go over how to use this.
We start this with usi... | https://python.langchain.com/en/latest/modules/models/llms/examples/fake_llm.html |
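A minimal version of such a fake LLM just replays a queue of canned responses — this is a simplified stand-in written for illustration, not LangChain's actual class:

```python
class FakeListLLM:
    """Returns canned responses in order, cycling when exhausted."""
    def __init__(self, responses):
        self.responses = list(responses)
        self.i = 0

    def __call__(self, prompt: str) -> str:
        response = self.responses[self.i % len(self.responses)]
        self.i += 1
        return response

llm = FakeListLLM(["Action: python_repl\nAction Input: print(2 + 2)",
                   "Final Answer: 4"])
print(llm("what is 2 + 2?"))  # first canned response
print(llm("..."))             # second canned response
```

Because the responses are deterministic, an agent or chain built on top of this can be unit-tested without network access or API keys.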
104caba1bf35-0 |
How to cache LLM calls
Contents
In Memory Cache
SQLite Cache
Redis Cache
Standard Cache
Semantic Cache
GPTCache
Momento Cache
SQLAlchemy Cache
Custom SQLAlchemy Schemas
Optional Caching
Optional Caching in Chains
How to cache LLM calls#
This notebook covers how to cache results of individual LLM calls.
im... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
104caba1bf35-1 | llm("Tell me a joke")
CPU times: user 17 ms, sys: 9.76 ms, total: 26.7 ms
Wall time: 825 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# The second time it is in the cache, so it goes faster
llm("Tell me a joke")
CPU times: user 2.46 ms, sys: 1.23 ms, total: 3.7 ms
Wall time: 2.67 ms
'\n\nWhy did ... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
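The speedup comes from a simple idea: key the cache on the exact prompt string, so an identical prompt skips the expensive call. A minimal in-memory sketch (a stand-in lambda plays the model):

```python
class CachedLLM:
    """Wraps an LLM callable with an exact-match in-memory cache."""
    def __init__(self, llm):
        self.llm = llm
        self.cache = {}
        self.calls = 0  # count of real (uncached) model calls

    def __call__(self, prompt: str) -> str:
        if prompt not in self.cache:
            self.calls += 1
            self.cache[prompt] = self.llm(prompt)
        return self.cache[prompt]

slow_llm = lambda prompt: "Why did the chicken cross the road?"  # stand-in model
llm = CachedLLM(slow_llm)
llm("Tell me a joke")  # miss: calls the underlying model
llm("Tell me a joke")  # hit: served from the cache
print(llm.calls)  # → 1
```

The SQLite, Redis, and SQLAlchemy caches below follow the same lookup shape; only the storage backend changes, while the semantic variants relax exact-match to similarity-based lookup.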
104caba1bf35-2 | Semantic Cache#
Use Redis to cache prompts and responses and evaluate hits based on semantic similarity.
from langchain.embeddings import OpenAIEmbeddings
from langchain.cache import RedisSemanticCache
langchain.llm_cache = RedisSemanticCache(
redis_url="redis://localhost:6379",
embedding=OpenAIEmbeddings()
)
%... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
104caba1bf35-3 | cache_obj.init(
pre_embedding_func=get_prompt,
data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"),
)
langchain.llm_cache = GPTCache(init_gptcache)
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 21.5 ms, sys... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
104caba1bf35-4 | Wall time: 8.44 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# This is an exact match, so it finds it in the cache
llm("Tell me a joke")
CPU times: user 866 ms, sys: 20 ms, total: 886 ms
Wall time: 226 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# ... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
104caba1bf35-5 | Wall time: 1.73 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
%%time
# The second time it is in the cache, so it goes faster
# When run in the same region as the cache, latencies are single digit ms
llm("Tell me a joke")
CPU times: user 3.16 ms, sys: 2.98 ms, total: 6.14 ms
Wall time: 57.9 ms
'\n\nWhy did... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
104caba1bf35-6 | idx = Column(Integer)
response = Column(String)
prompt_tsv = Column(TSVectorType(), Computed("to_tsvector('english', llm || ' ' || prompt)", persisted=True))
__table_args__ = (
Index("idx_fulltext_prompt_tsv", prompt_tsv, postgresql_using="gin"),
)
engine = create_engine("postgresql://postgres:p... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
104caba1bf35-7 | llm = OpenAI(model_name="text-davinci-002")
no_cache_llm = OpenAI(model_name="text-davinci-002", cache=False)
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
text_splitter = CharacterTextSplitter()
with open('../../../state_of_the_union.txt') as f:
sta... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
104caba1bf35-8 | %%time
chain.run(docs)
CPU times: user 11.5 ms, sys: 4.33 ms, total: 15.8 ms
Wall time: 1.04 s
'\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education a... | https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html |
6f99445a1f6a-0 |
How to track token usage
How to track token usage#
This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API.
Let’s first look at an extremely simple example of tracking token usage for a single LLM call.
from langchain.llms import OpenAI
f... | https://python.langchain.com/en/latest/modules/models/llms/examples/token_usage_tracking.html |
6f99445a1f6a-1 | print(f"Total Tokens: {cb.total_tokens}")
print(f"Prompt Tokens: {cb.prompt_tokens}")
print(f"Completion Tokens: {cb.completion_tokens}")
print(f"Total Cost (USD): ${cb.total_cost}")
> Entering new AgentExecutor chain...
I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised t... | https://python.langchain.com/en/latest/modules/models/llms/examples/token_usage_tracking.html |
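The callback-based accounting can be sketched as follows. A toy one-token-per-word counter stands in for the real tokenizer-based counts, so the numbers here are illustrative only:

```python
class TokenCallback:
    """Accumulates token counts across tracked calls."""
    def __init__(self):
        self.prompt_tokens = 0
        self.completion_tokens = 0

    @property
    def total_tokens(self):
        return self.prompt_tokens + self.completion_tokens

def tracked_call(llm, prompt, cb):
    # Run the call, then credit its usage to the callback.
    result = llm(prompt)
    cb.prompt_tokens += len(prompt.split())       # toy tokenizer: whitespace
    cb.completion_tokens += len(result.split())
    return result

cb = TokenCallback()
tracked_call(lambda p: "a canned answer here", "Tell me a joke", cb)
print(cb.total_tokens)  # 4 prompt words + 4 completion words → 8
```

The real callback additionally multiplies counts by per-model pricing to produce the `total_cost` figure shown above.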
3eecfe59647d-0 |
Example Selectors
Example Selectors#
Note
Conceptual Guide
If you have a large number of examples, you may need to select which ones to include in the prompt. The ExampleSelector is the class responsible for doing so.
The base interface is defined as below:
class BaseExampleSelector(ABC):
"""Interface for... | https://python.langchain.com/en/latest/modules/prompts/example_selectors.html |
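A minimal concrete implementation of that interface might look like this. The `ShortestExampleSelector` and its pick-the-k-shortest heuristic are illustrative inventions, not LangChain classes:

```python
from abc import ABC, abstractmethod

class BaseExampleSelector(ABC):
    """Interface for selecting examples to include in prompts."""

    @abstractmethod
    def select_examples(self, input_variables: dict) -> list:
        """Select which examples to use based on the inputs."""

class ShortestExampleSelector(BaseExampleSelector):
    """Toy selector: keep the k shortest examples to save prompt budget."""
    def __init__(self, examples, k=2):
        self.examples = examples
        self.k = k

    def select_examples(self, input_variables: dict) -> list:
        return sorted(self.examples, key=lambda e: len(str(e)))[: self.k]

selector = ShortestExampleSelector([{"in": "happy", "out": "sad"},
                                    {"in": "enthusiastic", "out": "apathetic"},
                                    {"in": "tall", "out": "short"}])
print(selector.select_examples({"adjective": "big"}))
```

The built-in selectors described later (length-based, similarity, MMR, ngram overlap) all plug into prompts the same way: only the selection rule inside `select_examples` changes.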
ce9e92caab35-0 |
Getting Started
Contents
PromptTemplates
to_string
to_messages
Getting Started#
This section contains everything related to prompts. A prompt is the value passed into the Language Model. This value can either be a string (for LLMs) or a list of messages (for Chat Models).
The data types of these prompts a... | https://python.langchain.com/en/latest/modules/prompts/getting_started.html |
ce9e92caab35-1 | string_prompt_value.to_string()
'tell me a joke about soccer'
chat_prompt_value.to_string()
'Human: tell me a joke about soccer'
to_messages#
This is what is called when passing to a ChatModel (which expects a list of messages)
string_prompt_value.to_messages()
[HumanMessage(content='tell me a joke about soccer', additio... | https://python.langchain.com/en/latest/modules/prompts/getting_started.html |
ae00a66b5b60-0 |
Output Parsers
Output Parsers#
Note
Conceptual Guide
Language models output text. But often you want more structured information back than just text. This is where output parsers come in.
Output parsers are classes that help structure language model responses. There are two main methods an out... | https://python.langchain.com/en/latest/modules/prompts/output_parsers.html |
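Those two methods — format instructions for the model, and a parse step for its reply — can be shown with the classic comma-separated-list case. This is a simplified sketch of the idea, not LangChain's exact class:

```python
class CommaSeparatedListOutputParser:
    """Asks the model for comma-separated values, then splits its reply."""

    def get_format_instructions(self) -> str:
        # Appended to the prompt so the model knows how to format its answer.
        return "Your response should be a list of comma separated values."

    def parse(self, text: str) -> list:
        # Turn the raw completion back into a structured Python value.
        return [item.strip() for item in text.strip().split(",")]

parser = CommaSeparatedListOutputParser()
print(parser.parse("red, orange, yellow"))  # → ['red', 'orange', 'yellow']
```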
b9ff34cb0b7b-0 |
Prompt Templates
Prompt Templates#
Note
Conceptual Guide
Language models take text as input - that text is commonly referred to as a prompt.
Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input.
LangChain provides several classes and functions t... | https://python.langchain.com/en/latest/modules/prompts/prompt_templates.html |
399938b8e5d3-0 |
Chat Prompt Template
Contents
Format output
Different types of MessagePromptTemplate
Chat Prompt Template#
Chat Models take a list of chat messages as input; this list is commonly referred to as a prompt.
These chat messages differ from a raw string (which you would pass into an LLM) in that every messa... | https://python.langchain.com/en/latest/modules/prompts/chat_prompt_template.html |
399938b8e5d3-1 | input_variables=["input_language", "output_language"],
)
system_message_prompt_2 = SystemMessagePromptTemplate(prompt=prompt)
assert system_message_prompt == system_message_prompt_2
After that, you can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt – t... | https://python.langchain.com/en/latest/modules/prompts/chat_prompt_template.html |
399938b8e5d3-2 | [SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}),
HumanMessage(content='I love programming.', additional_kwargs={})]
Different types of MessagePromptTemplate#
LangChain provides different types of MessagePromptTemplate. The most commonly used are AIMessageP... | https://python.langchain.com/en/latest/modules/prompts/chat_prompt_template.html |
399938b8e5d3-3 | 3. Practice, practice, practice: The best way to learn programming is through hands-on experience\
""")
chat_prompt.format_prompt(conversation=[human_message, ai_message], word_count="10").to_messages()
[HumanMessage(content='What is the best way to learn programming?', additional_kwargs={}),
AIMessage(content='1. Cho... | https://python.langchain.com/en/latest/modules/prompts/chat_prompt_template.html |
9996c947b3bb-0 |
Similarity ExampleSelector
Similarity ExampleSelector#
The SemanticSimilarityExampleSelector selects examples based on which examples are most similar to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs.
from langchain.prompts.exam... | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/similarity.html |
9996c947b3bb-1 | example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
# Input is a feeling, so should select the happy/sad example
pr... | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/similarity.html |
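The selection rule reduces to ranking by cosine similarity. A sketch with hand-written toy embeddings in place of a real embedding model and vector store:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def select_most_similar(query_vec, examples, k=1):
    # examples: list of (embedding, example) pairs.
    ranked = sorted(examples, key=lambda e: cosine(query_vec, e[0]), reverse=True)
    return [ex for _, ex in ranked[:k]]

examples = [
    ([1.0, 0.0], {"input": "happy", "output": "sad"}),   # "feeling" axis
    ([0.0, 1.0], {"input": "tall", "output": "short"}),  # "size" axis
]
# A query vector near the "feeling" axis selects the happy/sad example.
print(select_most_similar([0.9, 0.1], examples))
```

In the real selector, Chroma (or another vector store) does this ranking over embeddings produced by a model such as OpenAIEmbeddings.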
832ff859a765-0 |
Maximal Marginal Relevance ExampleSelector
Maximal Marginal Relevance ExampleSelector#
The MaxMarginalRelevanceExampleSelector selects examples based on a combination of which examples are most similar to the inputs, while also optimizing for diversity. It does this by finding the examples with the embeddin... | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/mmr.html |
832ff859a765-1 | k=2
)
mmr_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
# Input is a feeling,... | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/mmr.html |
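The diversity trade-off is the maximal-marginal-relevance score: each pick maximizes similarity to the query minus weighted similarity to what is already selected. A sketch with scalar stand-in "embeddings" (the `sim` function here is an illustrative toy):

```python
def mmr_select(query, candidates, sim, k=2, lam=0.5):
    """Greedy MMR: balance relevance to `query` against redundancy."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def score(c):
            redundancy = max((sim(c, s) for s in selected), default=0.0)
            return lam * sim(c, query) - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

sim = lambda a, b: 1.0 / (1.0 + abs(a - b))  # toy similarity on numbers
# The near-duplicate 0.12 is skipped in favor of the more diverse 1.0.
print(mmr_select(0.0, [0.1, 0.12, 1.0], sim, k=2))  # → [0.1, 1.0]
```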
faf06e2c3024-0 |
LengthBased ExampleSelector
LengthBased ExampleSelector#
This ExampleSelector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while ... | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/length_based.html |
faf06e2c3024-1 | # it is provided as a default value if none is specified.
# get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x))
)
dynamic_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
pref... | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/length_based.html |
faf06e2c3024-2 | Input: sunny
Output: gloomy
Input: windy
Output: calm
Input: big
Output: small
Input: enthusiastic
Output: | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/length_based.html |
3ce232196a5b-0 |
NGram Overlap ExampleSelector
NGram Overlap ExampleSelector#
The NGramOverlapExampleSelector selects and orders examples based on which examples are most similar to the input, according to an ngram overlap score. The ngram overlap score is a float between 0.0 and 1.0, inclusive.
The selector allows for a th... | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html |
3ce232196a5b-1 | {"input": "Spot can run.", "output": "Spot puede correr."},
]
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
example_selector = NGramOverlapExampleSelector(
# These are the examples it has available to choose from.
examples=examples, ... | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html |
3ce232196a5b-2 | Output: Ver correr a Spot.
Input: My dog barks.
Output: Mi perro ladra.
Input: Spot can run fast.
Output:
# You can add examples to NGramOverlapExampleSelector as well.
new_example = {"input": "Spot plays fetch.", "output": "Spot juega a buscar."}
example_selector.add_example(new_example)
print(dynamic_prompt.format(se... | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html |
3ce232196a5b-3 | Input: Spot plays fetch.
Output: Spot juega a buscar.
Input: Spot can play fetch.
Output:
# Setting threshold greater than 1.0
example_selector.threshold = 1.0 + 1e-9
print(dynamic_prompt.format(sentence="Spot can play fetch."))
Give the Spanish translation of every input
Input: Spot can play fetch.
Output:
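The scoring and thresholding idea can be sketched with a toy unigram-overlap score. This is not the exact formula LangChain uses (its selector uses a sentence-BLEU-style score); the names here are illustrative:

```python
import re

def ngrams(text, n=1):
    words = re.findall(r"\w+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(candidate, query, n=1):
    # Fraction of the candidate's n-grams that also occur in the query (0.0-1.0).
    gc, gq = ngrams(candidate, n), ngrams(query, n)
    return len(gc & gq) / len(gc) if gc else 0.0

query = "Spot can run fast."
candidates = ["Spot can run.", "My dog barks.", "See Spot run."]
scores = {c: overlap_score(c, query) for c in candidates}
# Rank by descending overlap and drop anything at or below the threshold;
# a threshold of 0.0 excludes examples with no overlap at all.
threshold = 0.0
ranked = sorted((c for c in candidates if scores[c] > threshold),
                key=lambda c: -scores[c])
print(ranked)
```

"My dog barks." shares no words with the query, so it is excluded; the remaining candidates are ordered by overlap.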
previous
Maxima... | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html |
165562c52578-0 | .md
.pdf
How to create a custom example selector
Contents
Implement custom example selector
Use custom example selector
How to create a custom example selector#
In this tutorial, we’ll create a custom example selector that selects every alternate example from a given list of examples.
An ExampleSelector must implemen... | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/custom_example_selector.html |
165562c52578-1 | # Add new example to the set of examples
example_selector.add_example({"foo": "4"})
example_selector.examples
# -> [{'foo': '1'}, {'foo': '2'}, {'foo': '3'}, {'foo': '4'}]
# Select examples
example_selector.select_examples({"foo": "foo"})
# -> array([{'foo': '1'}, {'foo': '4'}], dtype=object)
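A custom selector matching this interface can be sketched as follows. The class name is illustrative and the base class is omitted; the two methods are the ones the tutorial requires:

```python
import random

class CustomExampleSelector:
    """Toy selector implementing the interface described above:
    add_example and select_examples (base class omitted for brevity)."""

    def __init__(self, examples):
        self.examples = list(examples)

    def add_example(self, example):
        self.examples.append(example)

    def select_examples(self, input_variables):
        # This toy ignores the input variables and picks two at random,
        # matching the two-element output shown above.
        return random.sample(self.examples, 2)

selector = CustomExampleSelector([{"foo": "1"}, {"foo": "2"}, {"foo": "3"}])
selector.add_example({"foo": "4"})
print(len(selector.select_examples({"foo": "foo"})))  # -> 2
```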
previous
Example Selectors... | https://python.langchain.com/en/latest/modules/prompts/example_selectors/examples/custom_example_selector.html |
82c53a267bf5-0 | .ipynb
.pdf
Output Parsers
Output Parsers#
Language models output text, but you often want more structured information back than plain text. This is where output parsers come in.
Output parsers are classes that help structure language model responses. There are two main methods an output parser must impl... | https://python.langchain.com/en/latest/modules/prompts/output_parsers/getting_started.html |
82c53a267bf5-1 | punchline: str = Field(description="answer to resolve the joke")
# You can add custom validation logic easily with Pydantic.
@validator('setup')
def question_ends_with_question_mark(cls, field):
if field[-1] != '?':
raise ValueError("Badly formed question!")
return field
# S... | https://python.langchain.com/en/latest/modules/prompts/output_parsers/getting_started.html |
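The two-method contract (get_format_instructions and parse, in LangChain's API) can be sketched with the standard library alone. This is a hypothetical toy class, not LangChain's actual implementation:

```python
import json

class SimpleJsonOutputParser:
    """Toy parser illustrating the two methods an output parser implements:
    get_format_instructions and parse."""

    def get_format_instructions(self) -> str:
        return ('Respond with a JSON object like '
                '{"setup": "...", "punchline": "..."}.')

    def parse(self, text: str) -> dict:
        obj = json.loads(text)  # raises ValueError on malformed JSON
        for key in ("setup", "punchline"):
            if key not in obj:
                raise ValueError(f"missing key: {key}")
        return obj

toy_parser = SimpleJsonOutputParser()
print(toy_parser.parse('{"setup": "Why?", "punchline": "Because."}'))
```

The format instructions get injected into the prompt; parse turns the model's raw text into a structured object or raises.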
b5b1e6a1cb3e-0 | .ipynb
.pdf
RetryOutputParser
RetryOutputParser#
While in some cases it is possible to fix parsing mistakes by looking only at the output, in other cases it is not. An example of this is when the output is not just in the incorrect format, but is also only partially complete. Consider the example below.
from langchain.prompts... | https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/retry.html |
b5b1e6a1cb3e-1 | 23 json_object = json.loads(json_str)
---> 24 return self.pydantic_object.parse_obj(json_object)
26 except (json.JSONDecodeError, ValidationError) as e:
File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:527, in pydantic.main.BaseModel.parse_obj()
File ~/.pyenv/version... | https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/retry.html |
b5b1e6a1cb3e-2 | fix_parser.parse(bad_response)
Action(action='search', action_input='')
Instead, we can use the RetryOutputParser, which passes in the prompt (as well as the original output) to try again to get a better response.
from langchain.output_parsers import RetryWithErrorOutputParser
retry_parser = RetryWithErrorOutputParser.... | https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/retry.html |
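The retry idea can be sketched with the standard library. The `fixer` below is a stub standing in for the LLM call LangChain makes, and its canned output is made up for the demo:

```python
import json

def retry_parse(parse, prompt, completion, fixer, max_retries=1):
    """Sketch of the retry idea: when parsing fails, hand both the prompt and
    the bad completion to a fixer (an LLM call in LangChain, a stub here)
    and try again."""
    for _ in range(max_retries):
        try:
            return parse(completion)
        except ValueError:
            completion = fixer(prompt, completion)
    return parse(completion)

# Stub fixer that "completes" the broken response; a real one would call a model
# with both the original prompt and the bad output.
fixer = lambda prompt, bad: '{"action": "search", "action_input": "who is leo di caprio gf?"}'
bad_response = '{"action": "search"'
print(retry_parse(json.loads, "prompt template...", bad_response, fixer))
```

Because the fixer sees the prompt as well as the bad output, it can fill in fields the original completion left out, which a fix-from-output-only parser cannot do.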
bf09f8c905df-0 | .ipynb
.pdf
OutputFixingParser
OutputFixingParser#
This output parser wraps another output parser and tries to fix any mistakes.
The Pydantic guardrail simply tries to parse the LLM response. If it does not parse correctly, then it errors.
But we can do other things besides throw errors. Specifically, we can pass the mi... | https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html |
bf09f8c905df-1 | 24 return self.pydantic_object.parse_obj(json_object)
File ~/.pyenv/versions/3.9.1/lib/python3.9/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
343 if (cls is None and object_hook is None and
344 parse_int is None and parse_float is N... | https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html |
bf09f8c905df-2 | Cell In[6], line 1
----> 1 parser.parse(misformatted)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text)
27 name = self.pydantic_object.__name__
28 msg = f"Failed to parse {name} from completion {text}. Got: {e}"
---> 29 raise OutputParserException(ms... | https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html |
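The fix-on-failure flow can be sketched with the standard library. Unlike the retry parser, only the misformatted output and the format instructions are passed along, not the original prompt; the `llm_fix` stub below stands in for the model call:

```python
import json

def fix_output(parse, completion, format_instructions, llm_fix):
    """Sketch: on a parse failure, ask a model (stubbed here) to repair the
    text given only the bad output and the format instructions."""
    try:
        return parse(completion)
    except ValueError:
        repaired = llm_fix(completion, format_instructions)
        return parse(repaired)

# Stub "model" that just swaps single quotes for double quotes; a real fixer
# would send the bad output plus the instructions to an LLM.
llm_fix = lambda bad, instructions: bad.replace("'", '"')
misformatted = "{'name': 'Tom Hanks'}"
print(fix_output(json.loads, misformatted, "Return valid JSON.", llm_fix))
```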
209794bc40e5-0 | .ipynb
.pdf
CommaSeparatedListOutputParser
CommaSeparatedListOutputParser#
Here’s another parser strictly less powerful than Pydantic/JSON parsing.
from langchain.output_parsers import CommaSeparatedListOutputParser
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langch... | https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/comma_separated.html |
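The parsing step itself is simple enough to sketch in a few lines; this toy function mirrors what CommaSeparatedListOutputParser.parse does, without importing LangChain:

```python
def parse_comma_separated(text):
    """Split a model's comma-separated answer into a clean list of strings."""
    return [part.strip() for part in text.strip().split(",")]

print(parse_comma_separated("Vanilla, Chocolate, Strawberry"))
# -> ['Vanilla', 'Chocolate', 'Strawberry']
```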
292ea692c528-0 | .ipynb
.pdf
Structured Output Parser
Structured Output Parser#
While the Pydantic/JSON parser is more powerful, we initially experimented with data structures that have text fields only.
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import PromptTemplate, ChatPromptTemplate,... | https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/structured.html |
292ea692c528-1 | prompt = ChatPromptTemplate(
messages=[
HumanMessagePromptTemplate.from_template("answer the users question as best as possible.\n{format_instructions}\n{question}")
],
input_variables=["question"],
partial_variables={"format_instructions": format_instructions}
)
_input = prompt.format_prompt(... | https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/structured.html |
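The kind of format instructions derived from a list of response schemas can be sketched as follows. The wording and layout here are illustrative, not LangChain's exact template:

```python
def format_instructions(schemas):
    """Sketch of the instructions StructuredOutputParser derives from
    (name, description) response schemas: a JSON skeleton telling the
    model which string fields to return."""
    lines = [f'\t"{name}": string  // {desc}' for name, desc in schemas]
    return ("The output should be a markdown code snippet containing a JSON "
            "object formatted as follows:\n{\n" + "\n".join(lines) + "\n}")

schemas = [
    ("answer", "answer to the user's question"),
    ("source", "source used to answer the user's question"),
]
print(format_instructions(schemas))
```

Injected via `partial_variables`, these instructions tell the model exactly which text fields the parser will expect back.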
8bb47e0ec45d-0 | .ipynb
.pdf
Enum Output Parser
Enum Output Parser#
This notebook shows how to use an Enum output parser.
from langchain.output_parsers.enum import EnumOutputParser
from enum import Enum
class Colors(Enum):
RED = "red"
GREEN = "green"
BLUE = "blue"
parser = EnumOutputParser(enum=Colors)
parser.parse("red")
<C... | https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/enum.html |
8bb47e0ec45d-1 | During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
Cell In[8], line 2
1 # And raises errors when appropriate
----> 2 parser.parse("yellow")
File ~/workplace/langchain/langchain/output_parsers/enum.py:27, in EnumOutputPars... | https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/enum.html |
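The behavior shown above can be sketched with the standard library `enum` module alone. This is a toy version of the parse step, not LangChain's actual implementation:

```python
from enum import Enum

class Colors(Enum):
    RED = "red"
    GREEN = "green"
    BLUE = "blue"

def parse_enum(text, enum_cls=Colors):
    """Toy version of EnumOutputParser.parse: map the model's text onto an
    enum member, raising if it is not a valid value."""
    try:
        return enum_cls(text.strip())
    except ValueError:
        values = [member.value for member in enum_cls]
        raise ValueError(f"Response {text!r} is not one of {values}")

print(parse_enum("red"))  # Colors.RED
```

Invalid values such as "yellow" raise, matching the error path in the traceback above.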
43e614d88a76-0 | .ipynb
.pdf
PydanticOutputParser
PydanticOutputParser#
This output parser allows users to specify an arbitrary JSON schema and query LLMs for JSON outputs that conform to that schema.
Keep in mind that large language models are leaky abstractions! You’ll have to use an LLM with sufficient capacity to generate well-form... | https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/pydantic.html |
43e614d88a76-1 | prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
_input = prompt.format_prompt(query=joke_query)
output = model(_input.to_string())
parser.parse(output)
Joke(... | https://python.langchain.com/en/latest/modules/prompts/output_parsers/examples/pydantic.html |
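The schema-then-validate pattern can be sketched with standard-library dataclasses instead of Pydantic. This is a toy stand-in for PydanticOutputParser.parse, with illustrative helper names:

```python
import dataclasses
import json

@dataclasses.dataclass
class Joke:
    setup: str        # question to set up a joke
    punchline: str    # answer to resolve the joke

def parse_to(cls, text):
    """Load JSON and type-check each declared field before building the
    object (a toy version of what the Pydantic parser does)."""
    obj = json.loads(text)
    for f in dataclasses.fields(cls):
        if not isinstance(obj.get(f.name), f.type):
            raise ValueError(f"field {f.name!r} missing or not {f.type.__name__}")
    return cls(**obj)

out = parse_to(Joke, '{"setup": "Why did the chicken cross the road?", '
                     '"punchline": "To get to the other side!"}')
print(out.setup)
```

Pydantic adds richer validation and error messages on top of this basic shape, which is why the real parser uses it.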
14e4b02f4b09-0 | .rst
.pdf
How-To Guides
How-To Guides#
If you’re new to the library, you may want to start with the Quickstart.
The user guide here shows more advanced workflows and how to use the library in different ways.
Connecting to a Feature Store
How to create a custom prompt template
How to create a prompt template that uses f... | https://python.langchain.com/en/latest/modules/prompts/prompt_templates/how_to_guides.html |
c09b78917f7e-0 | .md
.pdf
Getting Started
Contents
What is a prompt template?
Create a prompt template
Template formats
Validate template
Serialize prompt template
Pass few shot examples to a prompt template
Select examples for a prompt template
Getting Started#
In this tutorial, we will learn about:
what a prompt template is, and wh... | https://python.langchain.com/en/latest/modules/prompts/prompt_templates/getting_started.html |
c09b78917f7e-1 | no_input_prompt.format()
# -> "Tell me a joke."
# An example prompt with one input variable
one_input_prompt = PromptTemplate(input_variables=["adjective"], template="Tell me a {adjective} joke.")
one_input_prompt.format(adjective="funny")
# -> "Tell me a funny joke."
# An example prompt with multiple input variables
m... | https://python.langchain.com/en/latest/modules/prompts/prompt_templates/getting_started.html |
c09b78917f7e-2 | # -> Tell me a funny joke about chickens.
Currently, PromptTemplate supports only the jinja2 and f-string templating formats. If there is any other templating format that you would like to use, feel free to open an issue on the GitHub page.
Validate template#
By default, PromptTemplate will validate the template string by c... | https://python.langchain.com/en/latest/modules/prompts/prompt_templates/getting_started.html |
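The validation step can be sketched with the standard library; this is a toy re-implementation of the idea, not PromptTemplate's actual code:

```python
import string

def format_prompt(template, **kwargs):
    """Check that every placeholder in an f-string-style template is supplied,
    then format it (a toy version of PromptTemplate's validation)."""
    fields = {name for _, name, _, _ in string.Formatter().parse(template) if name}
    missing = fields - kwargs.keys()
    if missing:
        raise KeyError(f"missing input variables: {sorted(missing)}")
    return template.format(**kwargs)

print(format_prompt("Tell me a {adjective} joke about {content}.",
                    adjective="funny", content="chickens"))
# -> Tell me a funny joke about chickens.
```

Catching a missing variable at format time, rather than at model-call time, is the point of the validation.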