c6cdf35b784e-0
Replicate# Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale. This example goes over how to use LangChain to interact with Replicate models.
Setup# To run this notebook, you'll need to create a Replicate account and install the replicate python client. !pip install replicate # get a token: https://replicate.com/account from getpass import getpass REPLICATE_API_TOKEN = getpass() import os os.environ["REPLICATE_API_TOKEN"] = REPLICATE_API_TOKEN from langchain.llms import Replicate from langchain import PromptTemplate, LLMChain
Calling a model# Find a model on the Replicate explore page, and then paste in the model name and version in this format: model_name/version. For example, for this dolly model, click on the API tab. The model name/version would be: replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5 Only the model param is required, but we can add other model params when initializing. For example, if we were running stable diffusion and wanted to change the image dimensions: Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'})
https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html
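The same input mechanism works for text models. Below is a minimal sketch of passing generation parameters to the dolly model at initialization; the parameter names max_length and temperature are assumptions here (each Replicate model defines its own inputs), so check the model's API tab for what it actually accepts.

from langchain.llms import Replicate

# Hypothetical generation parameters -- taken to illustrate the `input` dict,
# not guaranteed to match this model's schema.
llm = Replicate(
    model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5",
    input={"max_length": 200, "temperature": 0.75},
)
print(llm("Suggest three names for a pet llama."))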
c6cdf35b784e-1
Note that only the first output of a model will be returned. llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5") prompt = """ Answer the following yes/no question by reasoning step by step. Can a dog drive a car? """ llm(prompt) 'The legal driving age of dogs is 2. Cars are designed for humans to drive. Therefore, the final answer is yes.'
We can call any Replicate model using this syntax. For example, we can call stable diffusion. text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'}) image_output = text2image("A cat riding a motorcycle by Picasso") image_output 'https://replicate.delivery/pbxt/Cf07B1zqzFQLOSBQcKG7m9beE74wf7kuip5W9VxHJFembefKE/out-0.png' The model returns a URL. Let's render it. from PIL import Image import requests from io import BytesIO response = requests.get(image_output) img = Image.open(BytesIO(response.content)) img
Chaining Calls# The whole point of LangChain is to… chain! Here's an example of how to do that. from langchain.chains import SimpleSequentialChain
https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html
c6cdf35b784e-2
First, let's define the LLM as a dolly model, and text2image as a stable diffusion model. dolly_llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5") text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf")
First prompt in the chain: prompt = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?", ) chain = LLMChain(llm=dolly_llm, prompt=prompt)
Second prompt to get the logo from the company description: second_prompt = PromptTemplate( input_variables=["company_name"], template="Write a description of a logo for this company: {company_name}", ) chain_two = LLMChain(llm=dolly_llm, prompt=second_prompt)
Third prompt, let's create the image based on the description output from prompt 2: third_prompt = PromptTemplate( input_variables=["company_logo_description"], template="{company_logo_description}", ) chain_three = LLMChain(llm=text2image, prompt=third_prompt)
Now let's run it! # Run the chain specifying only the input variable for the first chain. overall_chain = SimpleSequentialChain(chains=[chain, chain_two, chain_three], verbose=True) catchphrase = overall_chain.run("colorful socks")
https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html
c6cdf35b784e-3
print(catchphrase) > Entering new SimpleSequentialChain chain... novelty socks todd & co. https://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwUdHM4tcQfvCB/out-0.png > Finished chain. https://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwUdHM4tcQfvCB/out-0.png response = requests.get("https://replicate.delivery/pbxt/eq6foRJngThCAEBqse3nL3Km2MBfLnWQNd0Hy2SQRo2LuprCB/out-0.png") img = Image.open(BytesIO(response.content)) img
https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html
b2ec8626b0a7-0
GooseAI# GooseAI is a fully managed NLP-as-a-Service, delivered via API. GooseAI provides access to these models. This notebook goes over how to use LangChain with GooseAI.
Install openai# The openai package is required to use the GooseAI API. Install openai using pip3 install openai. $ pip3 install openai
Imports# import os from langchain.llms import GooseAI from langchain import PromptTemplate, LLMChain
Set the Environment API Key# Make sure to get your API key from GooseAI. You are given $10 in free credits to test different models. from getpass import getpass GOOSEAI_API_KEY = getpass() os.environ["GOOSEAI_API_KEY"] = GOOSEAI_API_KEY
Create the GooseAI instance# You can specify different parameters such as the model name, max tokens generated, temperature, etc. llm = GooseAI()
Create a Prompt Template# We will create a prompt template for Question and Answer. template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain# llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain# Provide a question and run the LLMChain. question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question)
https://python.langchain.com/en/latest/modules/models/llms/integrations/gooseai_example.html
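As a hedged illustration of the parameters mentioned above, the sketch below constructs the wrapper with an explicit model name, temperature, and token limit; the keyword names (model_name, temperature, max_tokens) and the model choice are assumptions about the GooseAI wrapper, so verify them against the class you have installed.

from langchain.llms import GooseAI

# Assumed keyword arguments -- adjust to whatever the GooseAI wrapper actually exposes.
llm = GooseAI(
    model_name="gpt-neo-20b",  # hypothetical model choice
    temperature=0.7,
    max_tokens=256,
)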
cea888ac1cdc-0
Amazon Bedrock# Amazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case. %pip install boto3 from langchain.llms.bedrock import Bedrock llm = Bedrock(credentials_profile_name="bedrock-admin", model_id="amazon.titan-tg1-large")
Using in a conversation chain# from langchain.chains import ConversationChain from langchain.memory import ConversationBufferMemory conversation = ConversationChain( llm=llm, verbose=True, memory=ConversationBufferMemory() ) conversation.predict(input="Hi there!")
https://python.langchain.com/en/latest/modules/models/llms/integrations/bedrock.html
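The Bedrock wrapper can also be dropped into the same prompt-plus-chain pattern used throughout these integration pages. A minimal sketch, reusing the llm constructed above; the question text is illustrative only.

from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

# Reuses the Bedrock `llm` defined above.
llm_chain = LLMChain(prompt=prompt, llm=llm)
llm_chain.run("What is the difference between a foundation model and a fine-tuned model?")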
e564df3eb57f-0
Google Cloud Platform Vertex AI PaLM# Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. PaLM API on Vertex AI is a Preview offering, subject to the Pre-GA Offerings Terms of the GCP Service Specific Terms. Pre-GA products and features may have limited support, and changes to pre-GA products and features may not be compatible with other pre-GA versions. For more information, see the launch stage descriptions. Further, by using PaLM API on Vertex AI, you agree to the Generative AI Preview terms and conditions (Preview Terms). For PaLM API on Vertex AI, you can process personal data as outlined in the Cloud Data Processing Addendum, subject to applicable restrictions and obligations in the Agreement (as defined in the Preview Terms).
To use Vertex AI PaLM you must have the google-cloud-aiplatform Python package installed and either have credentials configured for your environment (gcloud, workload identity, etc.), or store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variable. This codebase uses the google.auth library, which first looks for the application credentials variable mentioned above, and then looks for system-level auth. For more information, see: https://cloud.google.com/docs/authentication/application-default-credentials#GAC https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth
#!pip install google-cloud-aiplatform from langchain.llms import VertexAI from langchain import PromptTemplate, LLMChain template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm = VertexAI()
https://python.langchain.com/en/latest/modules/models/llms/integrations/google_vertex_ai_palm.html
e564df3eb57f-1
llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) 'Justin Bieber was born on March 1, 1994. The Super Bowl in 1994 was won by the San Francisco 49ers.\nThe final answer: San Francisco 49ers.'
https://python.langchain.com/en/latest/modules/models/llms/integrations/google_vertex_ai_palm.html
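VertexAI() above uses default settings; the sketch below shows how generation parameters could be tuned at construction time. The keyword names (model_name, temperature, max_output_tokens, top_p, top_k) are assumptions about the wrapper's fields, so confirm them against the VertexAI class you have installed.

from langchain.llms import VertexAI

# Assumed parameter names -- verify against the installed VertexAI wrapper.
llm = VertexAI(
    model_name="text-bison",   # hypothetical model id
    temperature=0.2,
    max_output_tokens=256,
    top_p=0.8,
    top_k=40,
)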
d6ae6e20dace-0
Structured Decoding with RELLM# RELLM is a library that wraps local Hugging Face pipeline models for structured decoding. It works by generating tokens one at a time. At each step, it masks tokens that don't conform to the provided partial regular expression. Warning - this module is still experimental !pip install rellm > /dev/null
Hugging Face Baseline# First, let's establish a qualitative baseline by checking the output of the model without structured decoding. import logging logging.basicConfig(level=logging.ERROR) prompt = """Human: "What's the capital of the United States?" AI Assistant:{ "action": "Final Answer", "action_input": "The capital of the United States is Washington D.C." } Human: "What's the capital of Pennsylvania?" AI Assistant:{ "action": "Final Answer", "action_input": "The capital of Pennsylvania is Harrisburg." } Human: "What 2 + 5?" AI Assistant:{ "action": "Final Answer", "action_input": "2 + 5 = 7." } Human: 'What's the capital of Maryland?' AI Assistant:""" from transformers import pipeline from langchain.llms import HuggingFacePipeline hf_model = pipeline("text-generation", model="cerebras/Cerebras-GPT-590M", max_new_tokens=200) original_model = HuggingFacePipeline(pipeline=hf_model) generated = original_model.generate([prompt], stop=["Human:"]) print(generated) Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
https://python.langchain.com/en/latest/modules/models/llms/integrations/rellm_experimental.html
d6ae6e20dace-1
generations=[[Generation(text=' "What\'s the capital of Maryland?"\n', generation_info=None)]] llm_output=None That's not so impressive, is it? It didn't answer the question and it didn't follow the JSON format at all! Let's try with the structured decoder.
RELLM LLM Wrapper# Let's try that again, now providing a regex to match the JSON structured format. import regex # Note this is the regex library NOT python's re stdlib module # We'll choose a regex that matches to a structured json string that looks like: # { # "action": "Final Answer", # "action_input": string or dict # } pattern = regex.compile(r'\{\s*"action":\s*"Final Answer",\s*"action_input":\s*(\{.*\}|"[^"]*")\s*\}\nHuman:') from langchain.experimental.llms import RELLM model = RELLM(pipeline=hf_model, regex=pattern, max_new_tokens=200) generated = model.predict(prompt, stop=["Human:"]) print(generated) {"action": "Final Answer", "action_input": "The capital of Maryland is Baltimore." } Voila! Free of parsing errors.
https://python.langchain.com/en/latest/modules/models/llms/integrations/rellm_experimental.html
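The same wrapper works with any partial regular expression, not just the JSON pattern above. A small sketch, reusing hf_model from the baseline, that constrains the completion to an ISO-style date; the prompt text is illustrative.

import regex  # the third-party regex library, as above
from langchain.experimental.llms import RELLM

# Force the model to emit a YYYY-MM-DD date and nothing else.
date_pattern = regex.compile(r"\d{4}-\d{2}-\d{2}")
date_model = RELLM(pipeline=hf_model, regex=date_pattern, max_new_tokens=20)
print(date_model.predict("The ISO date for the 4th of July 2023 is "))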
7cb53e667c4a-0
Manifest# This notebook goes over how to use Manifest and LangChain. For more detailed information on Manifest, and how to use it with local Hugging Face models like in this example, see https://github.com/HazyResearch/manifest Another example of using Manifest with LangChain. !pip install manifest-ml from manifest import Manifest from langchain.llms.manifest import ManifestWrapper manifest = Manifest( client_name = "huggingface", client_connection = "http://127.0.0.1:5000" ) print(manifest.client.get_model_params()) llm = ManifestWrapper(client=manifest, llm_kwargs={"temperature": 0.001, "max_tokens": 256})
# Map reduce example from langchain import PromptTemplate from langchain.text_splitter import CharacterTextSplitter from langchain.chains.mapreduce import MapReduceChain _prompt = """Write a concise summary of the following: {text} CONCISE SUMMARY:""" prompt = PromptTemplate(template=_prompt, input_variables=["text"]) text_splitter = CharacterTextSplitter() mp_chain = MapReduceChain.from_params(llm, prompt, text_splitter) with open('../../../state_of_the_union.txt') as f: state_of_the_union = f.read() mp_chain.run(state_of_the_union)
https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html
7cb53e667c4a-1
'President Obama delivered his annual State of the Union address on Tuesday night, laying out his priorities for the coming year. Obama said the government will provide free flu vaccines to all Americans, ending the government shutdown and allowing businesses to reopen. The president also said that the government will continue to send vaccines to 112 countries, more than any other nation. "We have lost so much to COVID-19," Trump said. "Time with one another. And worst of all, so much loss of life." He said the CDC is working on a vaccine for kids under 5, and that the government will be ready with plenty of vaccines when they are available. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government is launching a "Test to Treat" initiative that will allow people to get tested at a pharmacy and get antiviral pills on the spot at no cost. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government will continue to send vaccines to 112 countries, more than any other nation. "We are coming for your'
Compare HF Models# from langchain.model_laboratory import ModelLaboratory manifest1 = ManifestWrapper( client=Manifest( client_name="huggingface", client_connection="http://127.0.0.1:5000" ), llm_kwargs={"temperature": 0.01} ) manifest2 = ManifestWrapper( client=Manifest( client_name="huggingface", client_connection="http://127.0.0.1:5001" ), llm_kwargs={"temperature": 0.01} )
https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html
7cb53e667c4a-2
manifest3 = ManifestWrapper( client=Manifest( client_name="huggingface", client_connection="http://127.0.0.1:5002" ), llm_kwargs={"temperature": 0.01} ) llms = [manifest1, manifest2, manifest3] model_lab = ModelLaboratory(llms) model_lab.compare("What color is a flamingo?") Input: What color is a flamingo? ManifestWrapper Params: {'model_name': 'bigscience/T0_3B', 'model_path': 'bigscience/T0_3B', 'temperature': 0.01} pink ManifestWrapper Params: {'model_name': 'EleutherAI/gpt-neo-125M', 'model_path': 'EleutherAI/gpt-neo-125M', 'temperature': 0.01} A flamingo is a small, round ManifestWrapper Params: {'model_name': 'google/flan-t5-xl', 'model_path': 'google/flan-t5-xl', 'temperature': 0.01} pink
https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html
56926ff53e01-0
StochasticAI# Stochastic Acceleration Platform aims to simplify the life cycle of a Deep Learning model: from uploading and versioning the model, through training, compression and acceleration, to putting it into production. This example goes over how to use LangChain to interact with StochasticAI models. You have to get the API_KEY and the API_URL here. from getpass import getpass STOCHASTICAI_API_KEY = getpass() import os os.environ["STOCHASTICAI_API_KEY"] = STOCHASTICAI_API_KEY YOUR_API_URL = getpass() from langchain.llms import StochasticAI from langchain import PromptTemplate, LLMChain template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm = StochasticAI(api_url=YOUR_API_URL) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) "\n\nStep 1: In 1999, the St. Louis Rams won the Super Bowl.\n\nStep 2: In 1999, Beiber was born.\n\nStep 3: The Rams were in Los Angeles at the time.\n\nStep 4: So they didn't play in the Super Bowl that year.\n"
https://python.langchain.com/en/latest/modules/models/llms/integrations/stochasticai.html
fd97a1987941-0
DeepInfra# DeepInfra provides several LLMs. This notebook goes over how to use LangChain with DeepInfra.
Imports# import os from langchain.llms import DeepInfra from langchain import PromptTemplate, LLMChain
Set the Environment API Key# Make sure to get your API key from DeepInfra. You have to log in and get a new token. You are given 1 hour of free serverless GPU compute to test different models (see here). You can print your token with deepctl auth token # get a new token: https://deepinfra.com/login?from=%2Fdash from getpass import getpass DEEPINFRA_API_TOKEN = getpass() os.environ["DEEPINFRA_API_TOKEN"] = DEEPINFRA_API_TOKEN
Create the DeepInfra instance# You can also use our open source deepctl tool to manage your model deployments. You can view a list of available parameters here. llm = DeepInfra(model_id="databricks/dolly-v2-12b") llm.model_kwargs = {'temperature': 0.7, 'repetition_penalty': 1.2, 'max_new_tokens': 250, 'top_p': 0.9}
Create a Prompt Template# We will create a prompt template for Question and Answer. template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain# llm_chain = LLMChain(prompt=prompt, llm=llm)
https://python.langchain.com/en/latest/modules/models/llms/integrations/deepinfra_example.html
fd97a1987941-1
Run the LLMChain# Provide a question and run the LLMChain. question = "Can penguins reach the North pole?" llm_chain.run(question) "Penguins live in the Southern hemisphere.\nThe North pole is located in the Northern hemisphere.\nSo, first you need to turn the penguin South.\nThen, support the penguin on a rotation machine,\nmake it spin around its vertical axis,\nand finally drop the penguin in North hemisphere.\nNow, you have a penguin in the north pole!\n\nStill didn't understand?\nWell, you're a failure as a teacher."
https://python.langchain.com/en/latest/modules/models/llms/integrations/deepinfra_example.html
445313b89a54-0
MosaicML# MosaicML offers a managed inference service. You can either use a variety of open source models, or deploy your own. This example goes over how to use LangChain to interact with MosaicML Inference for text completion. # sign up for an account: https://forms.mosaicml.com/demo?utm_source=langchain from getpass import getpass MOSAICML_API_TOKEN = getpass() import os os.environ["MOSAICML_API_TOKEN"] = MOSAICML_API_TOKEN from langchain.llms import MosaicML from langchain import PromptTemplate, LLMChain template = """Question: {question}""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm = MosaicML(inject_instruction_format=True, model_kwargs={'do_sample': False}) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What is one good reason why you should train a large language model on domain specific data?" llm_chain.run(question)
https://python.langchain.com/en/latest/modules/models/llms/integrations/mosaicml.html
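If you want more control over generation, additional keys can typically be passed through model_kwargs; which keys are honored depends on the serving endpoint, so treat the names below (temperature, max_new_tokens) as assumptions rather than a guaranteed interface.

from langchain.llms import MosaicML

# Assumed model_kwargs keys -- the hosted endpoint decides what it accepts.
llm = MosaicML(
    inject_instruction_format=True,
    model_kwargs={
        "do_sample": True,
        "temperature": 0.7,      # hypothetical
        "max_new_tokens": 128,   # hypothetical
    },
)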
8ce661b71d9a-0
PromptLayer OpenAI# PromptLayer is the first platform that allows you to track, manage, and share your GPT prompt engineering. PromptLayer acts as middleware between your code and OpenAI's python library. PromptLayer records all your OpenAI API requests, allowing you to search and explore request history in the PromptLayer dashboard. This example showcases how to connect to PromptLayer to start recording your OpenAI requests. Another example is here.
Install PromptLayer# The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip. !pip install promptlayer
Imports# import os from langchain.llms import PromptLayerOpenAI import promptlayer
Set the Environment API Key# You can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the navbar. Set it as an environment variable called PROMPTLAYER_API_KEY. You also need an OpenAI Key, called OPENAI_API_KEY. from getpass import getpass PROMPTLAYER_API_KEY = getpass() os.environ["PROMPTLAYER_API_KEY"] = PROMPTLAYER_API_KEY from getpass import getpass OPENAI_API_KEY = getpass() os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
Use the PromptLayerOpenAI LLM like normal# You can optionally pass in pl_tags to track your requests with PromptLayer's tagging feature. llm = PromptLayerOpenAI(pl_tags=["langchain"]) llm("I am a cat and I want")
https://python.langchain.com/en/latest/modules/models/llms/integrations/promptlayer_openai.html
8ce661b71d9a-1
The above request should now appear on your PromptLayer dashboard.
Using PromptLayer Track# If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id. llm = PromptLayerOpenAI(return_pl_id=True) llm_results = llm.generate(["Tell me a joke"]) for res in llm_results.generations: pl_request_id = res[0].generation_info["pl_request_id"] promptlayer.track.score(request_id=pl_request_id, score=100) Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well; a sketch follows below. Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.
https://python.langchain.com/en/latest/modules/models/llms/integrations/promptlayer_openai.html
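A hedged sketch of attaching a prompt template to a tracked request. It assumes the promptlayer client exposes a promptlayer.track.prompt helper and that the argument names below (request_id, prompt_name, prompt_input_variables) are correct; check the PromptLayer docs before relying on this, as the exact API may differ.

# Assumes a template named "qa_template" has been registered in the PromptLayer registry.
llm = PromptLayerOpenAI(return_pl_id=True)
llm_results = llm.generate(["Tell me a joke about templates"])

for res in llm_results.generations:
    pl_request_id = res[0].generation_info["pl_request_id"]
    # Hypothetical call -- verify the function name and signature against the promptlayer package.
    promptlayer.track.prompt(
        request_id=pl_request_id,
        prompt_name="qa_template",
        prompt_input_variables={"topic": "templates"},
    )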
0c0717be3f89-0
Writer# Writer is a platform for generating different language content. This example goes over how to use LangChain to interact with Writer models. You have to get the WRITER_API_KEY here. from getpass import getpass WRITER_API_KEY = getpass() import os os.environ["WRITER_API_KEY"] = WRITER_API_KEY from langchain.llms import Writer from langchain import PromptTemplate, LLMChain template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) # If you get an error, you probably need to set the "base_url" parameter, which can be taken from the error log. llm = Writer() llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question)
https://python.langchain.com/en/latest/modules/models/llms/integrations/writer.html
a06029df9d4e-0
GPT4All# GitHub:nomic-ai/gpt4all is an ecosystem of open-source chatbots trained on a massive collection of clean assistant data, including code, stories and dialogue. This example goes over how to use LangChain to interact with GPT4All models. %pip install gpt4all > /dev/null Note: you may need to restart the kernel to use updated packages. from langchain import PromptTemplate, LLMChain from langchain.llms import GPT4All from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"])
Specify Model# To run locally, download a compatible ggml-formatted model. For more info, visit https://github.com/nomic-ai/gpt4all For full installation instructions go here. The GPT4All Chat installer needs to decompress a 3GB LLM model during the installation process! Note that new models are uploaded regularly - check the link above for the most recent .bin URL local_path = './models/ggml-gpt4all-l13b-snoozy.bin' # replace with your desired local file path Uncomment the below block to download a model. You may want to update url to a new version. # import requests # from pathlib import Path # from tqdm import tqdm # Path(local_path).parent.mkdir(parents=True, exist_ok=True) # # Example model. Check https://github.com/nomic-ai/gpt4all for the latest models. # url = 'http://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin'
https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html
a06029df9d4e-1
# # send a GET request to the URL to download the file. Stream since it's large # response = requests.get(url, stream=True) # # open the file in binary mode and write the contents of the response to it in chunks # # This is a large file, so be prepared to wait. # with open(local_path, 'wb') as f: # for chunk in tqdm(response.iter_content(chunk_size=8192)): # if chunk: # f.write(chunk)
# Callbacks support token-wise streaming callbacks = [StreamingStdOutCallbackHandler()] # Verbose is required to pass to the callback manager llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True) # If you want to use a custom model add the backend parameter # Check https://docs.gpt4all.io/gpt4all_python.html for supported backends llm = GPT4All(model=local_path, backend='gptj', callbacks=callbacks, verbose=True) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question)
https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html
ef8511b0914c-0
PipelineAI# PipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models. This notebook goes over how to use LangChain with PipelineAI.
Install pipeline-ai# The pipeline-ai library is required to use the PipelineAI API, AKA Pipeline Cloud. Install pipeline-ai using pip install pipeline-ai. # Install the package !pip install pipeline-ai
Imports# import os from langchain.llms import PipelineAI from langchain import PromptTemplate, LLMChain
Set the Environment API Key# Make sure to get your API key from PipelineAI. Check out the cloud quickstart guide. You'll be given a 30 day free trial with 10 hours of serverless GPU compute to test different models. os.environ["PIPELINE_API_KEY"] = "YOUR_API_KEY_HERE"
Create the PipelineAI instance# When instantiating PipelineAI, you need to specify the id or tag of the pipeline you want to use, e.g. pipeline_key = "public/gpt-j:base". You then have the option of passing additional pipeline-specific keyword arguments: llm = PipelineAI(pipeline_key="YOUR_PIPELINE_KEY", pipeline_kwargs={...})
Create a Prompt Template# We will create a prompt template for Question and Answer. template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain# llm_chain = LLMChain(prompt=prompt, llm=llm)
https://python.langchain.com/en/latest/modules/models/llms/integrations/pipelineai_example.html
ef8511b0914c-1
Run the LLMChain# Provide a question and run the LLMChain. question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question)
https://python.langchain.com/en/latest/modules/models/llms/integrations/pipelineai_example.html
64a63b65948b-0
Anyscale# Anyscale is a fully-managed Ray platform on which you can build, deploy, and manage scalable AI and Python applications. This example goes over how to use LangChain to interact with the Anyscale service. import os os.environ["ANYSCALE_SERVICE_URL"] = ANYSCALE_SERVICE_URL os.environ["ANYSCALE_SERVICE_ROUTE"] = ANYSCALE_SERVICE_ROUTE os.environ["ANYSCALE_SERVICE_TOKEN"] = ANYSCALE_SERVICE_TOKEN from langchain.llms import Anyscale from langchain import PromptTemplate, LLMChain template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm = Anyscale() llm_chain = LLMChain(prompt=prompt, llm=llm) question = "When was George Washington president?" llm_chain.run(question)
With Ray, we can distribute the queries without an asynchronous implementation. This applies not only to the Anyscale LLM model, but to any other LangChain LLM model that does not have _acall or _agenerate implemented. prompt_list = [ "When was George Washington president?", "Explain to me the difference between nuclear fission and fusion.", "Give me a list of 5 science fiction books I should read next.", "Explain the difference between Spark and Ray.", "Suggest some fun holiday ideas.", "Tell a joke.", "What is 2+2?", "Explain what is machine learning like I am five years old.", "Explain what is artificial intelligence.", ] import ray @ray.remote def send_query(llm, prompt): resp = llm(prompt) return resp
https://python.langchain.com/en/latest/modules/models/llms/integrations/anyscale.html
64a63b65948b-1
futures = [send_query.remote(llm, prompt) for prompt in prompt_list] results = ray.get(futures)
https://python.langchain.com/en/latest/modules/models/llms/integrations/anyscale.html
c874924f2a60-0
Banana# Banana is focused on building machine learning infrastructure. This example goes over how to use LangChain to interact with Banana models. # Install the package https://docs.banana.dev/banana-docs/core-concepts/sdks/python !pip install banana-dev # get new tokens: https://app.banana.dev/ # We need two tokens, not just an `api_key`: `BANANA_API_KEY` and `YOUR_MODEL_KEY` import os from getpass import getpass os.environ["BANANA_API_KEY"] = "YOUR_API_KEY" # OR # BANANA_API_KEY = getpass() from langchain.llms import Banana from langchain import PromptTemplate, LLMChain template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm = Banana(model_key="YOUR_MODEL_KEY") llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question)
https://python.langchain.com/en/latest/modules/models/llms/integrations/banana.html
28722f954f09-0
Runhouse# Runhouse allows remote compute and data across environments and users. See the Runhouse docs. This example goes over how to use LangChain and Runhouse to interact with models hosted on your own GPU, or on on-demand GPUs on AWS, GCP, Azure, or Lambda. Note: the code below uses the SelfHosted class names rather than Runhouse. !pip install runhouse from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM from langchain import PromptTemplate, LLMChain import runhouse as rh INFO | 2023-04-17 16:47:36,173 | No auth token provided, so not using RNS API to save and load configs # For an on-demand A100 with GCP, Azure, or Lambda gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False) # For an on-demand A10G with AWS (no single A100s on AWS) # gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws') # For an existing cluster # gpu = rh.cluster(ips=['<ip of the cluster>'], # ssh_creds={'ssh_user': '...', 'ssh_private_key':'<path_to_key>'}, # name='rh-a10x') template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm = SelfHostedHuggingFaceLLM(model_id="gpt2", hardware=gpu, model_reqs=["pip:./", "transformers", "torch"]) llm_chain = LLMChain(prompt=prompt, llm=llm)
https://python.langchain.com/en/latest/modules/models/llms/integrations/runhouse.html
28722f954f09-1
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) INFO | 2023-02-17 05:42:23,537 | Running _generate_text via gRPC INFO | 2023-02-17 05:42:24,016 | Time to send message: 0.48 seconds "\n\nLet's say we're talking sports teams who won the Super Bowl in the year Justin Beiber" You can also load more custom models through the SelfHostedHuggingFaceLLM interface: llm = SelfHostedHuggingFaceLLM( model_id="google/flan-t5-small", task="text2text-generation", hardware=gpu, ) llm("What is the capital of Germany?") INFO | 2023-02-17 05:54:21,681 | Running _generate_text via gRPC INFO | 2023-02-17 05:54:21,937 | Time to send message: 0.25 seconds 'berlin' Using a custom load function, we can load a custom pipeline directly on the remote hardware: def load_pipeline(): from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline # Need to be inside the fn in notebooks model_id = "gpt2" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10 ) return pipe
https://python.langchain.com/en/latest/modules/models/llms/integrations/runhouse.html
28722f954f09-2
def inference_fn(pipeline, prompt, stop=None): return pipeline(prompt)[0]["generated_text"][len(prompt):] llm = SelfHostedHuggingFaceLLM(model_load_fn=load_pipeline, hardware=gpu, inference_fn=inference_fn) llm("Who is the current US president?") INFO | 2023-02-17 05:42:59,219 | Running _generate_text via gRPC INFO | 2023-02-17 05:42:59,522 | Time to send message: 0.3 seconds 'john w. bush' You can send your pipeline directly over the wire to your model, but this will only work for small models (<2 Gb), and will be pretty slow: pipeline = load_pipeline() llm = SelfHostedPipeline.from_pipeline( pipeline=pipeline, hardware=gpu, model_reqs=["pip:./", "transformers", "torch"] ) Instead, we can also send it to the hardware's filesystem, which will be much faster. import pickle rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to(gpu, path="models") llm = SelfHostedPipeline.from_pipeline(pipeline="models/pipeline.pkl", hardware=gpu)
https://python.langchain.com/en/latest/modules/models/llms/integrations/runhouse.html
289af6aa352c-0
CerebriumAI# Cerebrium is an AWS Sagemaker alternative. It also provides API access to several LLM models. This notebook goes over how to use LangChain with CerebriumAI.
Install cerebrium# The cerebrium package is required to use the CerebriumAI API. Install cerebrium using pip3 install cerebrium. # Install the package !pip3 install cerebrium
Imports# import os from langchain.llms import CerebriumAI from langchain import PromptTemplate, LLMChain
Set the Environment API Key# Make sure to get your API key from CerebriumAI. See here. You are given 1 hour of free serverless GPU compute to test different models. os.environ["CEREBRIUMAI_API_KEY"] = "YOUR_KEY_HERE"
Create the CerebriumAI instance# You can specify different parameters such as the model endpoint url, max length, temperature, etc. You must provide an endpoint url. llm = CerebriumAI(endpoint_url="YOUR ENDPOINT URL HERE")
Create a Prompt Template# We will create a prompt template for Question and Answer. template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain# llm_chain = LLMChain(prompt=prompt, llm=llm)
https://python.langchain.com/en/latest/modules/models/llms/integrations/cerebriumai_example.html
289af6aa352c-1
Run the LLMChain# Provide a question and run the LLMChain. question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question)
https://python.langchain.com/en/latest/modules/models/llms/integrations/cerebriumai_example.html
51ac4084a46c-0
Hugging Face Local Pipelines# Hugging Face models can be run locally through the HuggingFacePipeline class. The Hugging Face Model Hub hosts over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together. These can be called from LangChain either through this local pipeline wrapper or by calling their hosted inference endpoints through the HuggingFaceHub class. For more information on the hosted pipelines, see the HuggingFaceHub notebook. To use, you should have the transformers python package installed. !pip install transformers > /dev/null
Load the model# from langchain import HuggingFacePipeline llm = HuggingFacePipeline.from_model_id(model_id="bigscience/bloom-1b7", task="text-generation", model_kwargs={"temperature":0, "max_length":64}) WARNING:root:Failed to default session, using empty session: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /sessions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x1117f9790>: Failed to establish a new connection: [Errno 61] Connection refused'))
Integrate the model in an LLMChain# from langchain import PromptTemplate, LLMChain template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What is electroencephalography?" print(llm_chain.run(question))
https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_pipelines.html
51ac4084a46c-1
question = "What is electroencephalography?" print(llm_chain.run(question)) /Users/wfh/code/lc/lckg/.venv/lib/python3.11/site-packages/transformers/generation/utils.py:1288: UserWarning: Using `max_length`'s default (64) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation. warnings.warn( WARNING:root:Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x144d06910>: Failed to establish a new connection: [Errno 61] Connection refused')) First, we need to understand what is an electroencephalogram. An electroencephalogram is a recording of brain activity. It is a recording of brain activity that is made by placing electrodes on the scalp. The electrodes are placed previous Hugging Face Hub next Huggingface TextGen Inference Contents Load the model Integrate the model in an LLMChain By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_pipelines.html
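from_model_id is not limited to text-generation models; a seq2seq model can be loaded by switching the task. A small sketch assuming the same from_model_id interface shown above; the flan-t5-small choice is just an illustration.

from langchain import HuggingFacePipeline, PromptTemplate, LLMChain

# Same loading pattern as above, but with an encoder-decoder model and the
# text2text-generation task.
seq2seq_llm = HuggingFacePipeline.from_model_id(
    model_id="google/flan-t5-small",
    task="text2text-generation",
    model_kwargs={"temperature": 0, "max_length": 64},
)
prompt = PromptTemplate(template="Question: {question} Answer:", input_variables=["question"])
chain = LLMChain(prompt=prompt, llm=seq2seq_llm)
print(chain.run("What is the capital of Germany?"))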
20dbfeda26e6-0
OpenLM# OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP. It implements the OpenAI Completion class so that it can be used as a drop-in replacement for the OpenAI API. This changeset utilizes BaseOpenAI for minimal added code. This example goes over how to use LangChain to interact with both OpenAI and HuggingFace. You'll need API keys from both.
Setup# Install dependencies and set API keys. # Uncomment to install openlm and openai if you haven't already # !pip install openlm # !pip install openai from getpass import getpass import os import subprocess # Check if OPENAI_API_KEY environment variable is set if "OPENAI_API_KEY" not in os.environ: print("Enter your OpenAI API key:") os.environ["OPENAI_API_KEY"] = getpass() # Check if HF_API_TOKEN environment variable is set if "HF_API_TOKEN" not in os.environ: print("Enter your HuggingFace Hub API key:") os.environ["HF_API_TOKEN"] = getpass()
Using LangChain with OpenLM# Here we're going to call two models in an LLMChain, text-davinci-003 from OpenAI and gpt2 on HuggingFace. from langchain.llms import OpenLM from langchain import PromptTemplate, LLMChain question = "What is the capital of France?" template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) for model in ["text-davinci-003", "huggingface.co/gpt2"]:
https://python.langchain.com/en/latest/modules/models/llms/integrations/openlm.html
20dbfeda26e6-1
llm = OpenLM(model=model) llm_chain = LLMChain(prompt=prompt, llm=llm) result = llm_chain.run(question) print("""Model: {} Result: {}""".format(model, result)) Model: text-davinci-003 Result: France is a country in Europe. The capital of France is Paris. Model: huggingface.co/gpt2 Result: Question: What is the capital of France? Answer: Let's think step by step. I am not going to lie, this is a complicated issue, and I don't see any solutions to all this, but it is still far more
https://python.langchain.com/en/latest/modules/models/llms/integrations/openlm.html
8480ed408398-0
C Transformers# The C Transformers library provides Python bindings for GGML models. This example goes over how to use LangChain to interact with C Transformers models.
Install %pip install ctransformers
Load Model from langchain.llms import CTransformers llm = CTransformers(model='marella/gpt-2-ggml')
Generate Text print(llm('AI is going to'))
Streaming from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler llm = CTransformers(model='marella/gpt-2-ggml', callbacks=[StreamingStdOutCallbackHandler()]) response = llm('AI is going to')
LLMChain from langchain import PromptTemplate, LLMChain template = """Question: {question} Answer:""" prompt = PromptTemplate(template=template, input_variables=['question']) llm_chain = LLMChain(prompt=prompt, llm=llm) response = llm_chain.run('What is AI?')
https://python.langchain.com/en/latest/modules/models/llms/integrations/ctransformers.html
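Generation settings can usually be passed through a config dict when constructing the wrapper. A hedged sketch; the specific keys (max_new_tokens, temperature, repetition_penalty) are assumptions about what the underlying GGML model accepts, so check the ctransformers documentation for the supported config options.

from langchain.llms import CTransformers

# Assumed config keys -- verify against the ctransformers docs for your model type.
config = {"max_new_tokens": 256, "temperature": 0.8, "repetition_penalty": 1.1}
llm = CTransformers(model="marella/gpt-2-ggml", config=config)
print(llm("AI is going to"))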
c1f44a663278-0
SageMakerEndpoint# Amazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows. This notebook goes over how to use an LLM hosted on a SageMaker endpoint. !pip3 install langchain boto3
Set up# You have to set up the following required parameters of the SagemakerEndpoint call: endpoint_name: The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region. credentials_profile_name: The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
Example# from langchain.docstore.document import Document example_doc_1 = """ Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital. Since she was diagnosed with a brain injury, the doctor told Peter to stay beside her until she gets well. Therefore, Peter stayed with her at the hospital for 3 days without leaving. """ docs = [ Document( page_content=example_doc_1, ) ] from typing import Dict from langchain import PromptTemplate, SagemakerEndpoint from langchain.llms.sagemaker_endpoint import LLMContentHandler from langchain.chains.question_answering import load_qa_chain import json query = """How long was Elizabeth hospitalized? """
https://python.langchain.com/en/latest/modules/models/llms/integrations/sagemaker.html
c1f44a663278-1
prompt_template = """Use the following pieces of context to answer the question at the end. {context} Question: {question} Answer:""" PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "question"] ) class ContentHandler(LLMContentHandler): content_type = "application/json" accepts = "application/json" def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes: # Payload shape for a HuggingFace text-generation container; adjust to your endpoint's schema. input_str = json.dumps({"inputs": prompt, "parameters": model_kwargs}) return input_str.encode('utf-8') def transform_output(self, output: bytes) -> str: response_json = json.loads(output.read().decode("utf-8")) return response_json[0]["generated_text"] content_handler = ContentHandler() chain = load_qa_chain( llm=SagemakerEndpoint( endpoint_name="endpoint-name", credentials_profile_name="credentials-profile-name", region_name="us-west-2", model_kwargs={"temperature":1e-10}, content_handler=content_handler ), prompt=PROMPT ) chain({"input_documents": docs, "question": query}, return_only_outputs=True)
https://python.langchain.com/en/latest/modules/models/llms/integrations/sagemaker.html
ccec0a0857f5-0
How to track token usage# This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API. Let's first look at an extremely simple example of tracking token usage for a single LLM call. from langchain.llms import OpenAI from langchain.callbacks import get_openai_callback llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2) with get_openai_callback() as cb: result = llm("Tell me a joke") print(cb) Tokens Used: 42 Prompt Tokens: 4 Completion Tokens: 38 Successful Requests: 1 Total Cost (USD): $0.00084
Anything inside the context manager will get tracked. Here's an example of using it to track multiple calls in sequence. with get_openai_callback() as cb: result = llm("Tell me a joke") result2 = llm("Tell me a joke") print(cb.total_tokens) 91
If a chain or agent with multiple steps in it is used, it will track all those steps. from langchain.agents import load_tools from langchain.agents import initialize_agent from langchain.agents import AgentType from langchain.llms import OpenAI llm = OpenAI(temperature=0) tools = load_tools(["serpapi", "llm-math"], llm=llm) agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True) with get_openai_callback() as cb: response = agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
https://python.langchain.com/en/latest/modules/models/llms/examples/token_usage_tracking.html
ccec0a0857f5-1
print(f"Total Tokens: {cb.total_tokens}") print(f"Prompt Tokens: {cb.prompt_tokens}") print(f"Completion Tokens: {cb.completion_tokens}") print(f"Total Cost (USD): ${cb.total_cost}") > Entering new AgentExecutor chain... I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power. Action: Search Action Input: "Olivia Wilde boyfriend" Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling. Thought: I need to find out Harry Styles' age. Action: Search Action Input: "Harry Styles age" Observation: 29 years Thought: I need to calculate 29 raised to the 0.23 power. Action: Calculator Action Input: 29^0.23 Observation: Answer: 2.169459462491557 Thought: I now know the final answer. Final Answer: Harry Styles, Olivia Wilde's boyfriend, is 29 years old and his age raised to the 0.23 power is 2.169459462491557. > Finished chain. Total Tokens: 1506 Prompt Tokens: 1350 Completion Tokens: 156 Total Cost (USD): $0.03012 previous How to stream LLM and Chat Model responses next Integrations By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/llms/examples/token_usage_tracking.html
2b8910665260-0
How (and why) to use the fake LLM# We expose a fake LLM class that can be used for testing. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way. In this notebook we go over how to use this. We start by using the FakeLLM in an agent. from langchain.llms.fake import FakeListLLM from langchain.agents import load_tools from langchain.agents import initialize_agent from langchain.agents import AgentType tools = load_tools(["python_repl"]) responses=[ "Action: Python REPL\nAction Input: print(2 + 2)", "Final Answer: 4" ] llm = FakeListLLM(responses=responses) agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True) agent.run("whats 2 + 2") > Entering new AgentExecutor chain... Action: Python REPL Action Input: print(2 + 2) Observation: 4 Thought:Final Answer: 4 > Finished chain. '4'
https://python.langchain.com/en/latest/modules/models/llms/examples/fake_llm.html
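The same fake LLM can also be used to unit test a chain without an agent. A small sketch, assuming FakeListLLM returns its canned responses in order; the prompt and assertion are illustrative only.

from langchain.llms.fake import FakeListLLM
from langchain import PromptTemplate, LLMChain

# A canned response lets the chain be tested without any API calls.
llm = FakeListLLM(responses=["Paris"])
prompt = PromptTemplate(template="Question: {question} Answer:", input_variables=["question"])
chain = LLMChain(prompt=prompt, llm=llm)
assert chain.run("What is the capital of France?") == "Paris"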
8db58dce53f9-0
How to stream LLM and Chat Model responses# LangChain provides streaming support for LLMs. Currently, we support streaming for the OpenAI, ChatOpenAI, and ChatAnthropic implementations, but streaming support for other LLM implementations is on the roadmap. To utilize streaming, use a CallbackHandler that implements on_llm_new_token. In this example, we are using StreamingStdOutCallbackHandler. from langchain.llms import OpenAI from langchain.chat_models import ChatOpenAI, ChatAnthropic from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import HumanMessage llm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0) resp = llm("Write me a song about sparkling water.")
Verse 1 I'm sippin' on sparkling water, It's so refreshing and light, It's the perfect way to quench my thirst On a hot summer night. Chorus Sparkling water, sparkling water, It's the best way to stay hydrated, It's so crisp and so clean, It's the perfect way to stay refreshed. Verse 2 I'm sippin' on sparkling water, It's so bubbly and bright, It's the perfect way to cool me down On a hot summer night. Chorus Sparkling water, sparkling water, It's the best way to stay hydrated, It's so crisp and so clean, It's the perfect way to stay refreshed. Verse 3 I'm sippin' on sparkling water, It's so light and so clear, It's the perfect way to keep me cool On a hot summer night. Chorus Sparkling water, sparkling water,
https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html
8db58dce53f9-1
It's the best way to stay hydrated, It's so crisp and so clean, It's the perfect way to stay refreshed.
We still have access to the end LLMResult if using generate. However, token_usage is not currently supported for streaming. llm.generate(["Tell me a joke."]) Q: What did the fish say when it hit the wall? A: Dam! LLMResult(generations=[[Generation(text='\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {}, 'model_name': 'text-davinci-003'})
Here's an example with the ChatOpenAI chat model implementation: chat = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0) resp = chat([HumanMessage(content="Write me a song about sparkling water.")]) Verse 1: Bubbles rising to the top A refreshing drink that never stops Clear and crisp, it's oh so pure Sparkling water, I can't ignore Chorus: Sparkling water, oh how you shine A taste so clean, it's simply divine You quench my thirst, you make me feel alive Sparkling water, you're my favorite vibe Verse 2: No sugar, no calories, just H2O A drink that's good for me, don't you know With lemon or lime, you're even better Sparkling water, you're my forever Chorus: Sparkling water, oh how you shine A taste so clean, it's simply divine You quench my thirst, you make me feel alive
https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html
8db58dce53f9-2
You quench my thirst, you make me feel alive Sparkling water, you're my favorite vibe Bridge: You're my go-to drink, day or night You make me feel so light I'll never give you up, you're my true love Sparkling water, you're sent from above Chorus: Sparkling water, oh how you shine A taste so clean, it's simply divine You quench my thirst, you make me feel alive Sparkling water, you're my favorite vibe Outro: Sparkling water, you're the one for me I'll never let you go, can't you see You're my drink of choice, forevermore Sparkling water, I adore. Here is an example with the ChatAnthropic chat model implementation, which uses their claude model. chat = ChatAnthropic(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0) resp = chat([HumanMessage(content="Write me a song about sparkling water.")]) Here is my attempt at a song about sparkling water: Sparkling water, bubbles so bright, Dancing in the glass with delight. Refreshing and crisp, a fizzy delight, Quenching my thirst with each sip I take. The carbonation tickles my tongue, As the refreshing water song is sung. Lime or lemon, a citrus twist, Makes sparkling water such a bliss. Healthy and hydrating, a drink so pure, Sparkling water, always alluring. Bubbles ascending in a stream, Sparkling water, you're my dream! previous How to serialize LLM classes next How to track token usage By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html
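StreamingStdOutCallbackHandler only prints tokens; if you want to do something else with them, you can write your own handler. A minimal sketch, assuming the BaseCallbackHandler interface with an on_llm_new_token hook as described above:
from langchain.callbacks.base import BaseCallbackHandler
from langchain.llms import OpenAI

class TokenCollector(BaseCallbackHandler):
    """Collects streamed tokens in a list instead of printing them."""

    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Called once for every new token the model streams back
        self.tokens.append(token)

handler = TokenCollector()
llm = OpenAI(streaming=True, callbacks=[handler], temperature=0)
llm("Tell me a joke.")
print("".join(handler.tokens))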
b80e41fed43a-0
.ipynb .pdf How to use the async API for LLMs How to use the async API for LLMs# LangChain provides async support for LLMs by leveraging the asyncio library. Async support is particularly useful for calling multiple LLMs concurrently, as these calls are network-bound. Currently, OpenAI, PromptLayerOpenAI, ChatOpenAI and Anthropic are supported, but async support for other LLMs is on the roadmap. You can use the agenerate method to call an OpenAI LLM asynchronously. import time import asyncio from langchain.llms import OpenAI def generate_serially(): llm = OpenAI(temperature=0.9) for _ in range(10): resp = llm.generate(["Hello, how are you?"]) print(resp.generations[0][0].text) async def async_generate(llm): resp = await llm.agenerate(["Hello, how are you?"]) print(resp.generations[0][0].text) async def generate_concurrently(): llm = OpenAI(temperature=0.9) tasks = [async_generate(llm) for _ in range(10)] await asyncio.gather(*tasks) s = time.perf_counter() # If running this outside of Jupyter, use asyncio.run(generate_concurrently()) await generate_concurrently() elapsed = time.perf_counter() - s print('\033[1m' + f"Concurrent executed in {elapsed:0.2f} seconds." + '\033[0m') s = time.perf_counter() generate_serially() elapsed = time.perf_counter() - s print('\033[1m' + f"Serial executed in {elapsed:0.2f} seconds." + '\033[0m')
https://python.langchain.com/en/latest/modules/models/llms/examples/async_llm.html
b80e41fed43a-1
I'm doing well, thank you. How about you? I'm doing well, thank you. How about you? I'm doing well, how about you? I'm doing well, thank you. How about you? I'm doing well, thank you. How about you? I'm doing well, thank you. How about yourself? I'm doing well, thank you! How about you? I'm doing well, thank you. How about you? I'm doing well, thank you! How about you? I'm doing well, thank you. How about you? Concurrent executed in 1.39 seconds. I'm doing well, thank you. How about you? I'm doing well, thank you. How about you? I'm doing well, thank you. How about you? I'm doing well, thank you. How about you? I'm doing well, thank you. How about yourself? I'm doing well, thanks for asking. How about you? I'm doing well, thanks! How about you? I'm doing well, thank you. How about you? I'm doing well, thank you. How about yourself? I'm doing well, thanks for asking. How about you? Serial executed in 5.77 seconds. previous Generic Functionality next How to write a custom LLM wrapper By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/llms/examples/async_llm.html
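The notebook comment above already hints at this: outside Jupyter there is no running event loop, so the coroutine has to be wrapped with asyncio.run. A sketch of a standalone script version:
import asyncio
from langchain.llms import OpenAI

async def async_generate(llm: OpenAI) -> str:
    resp = await llm.agenerate(["Hello, how are you?"])
    return resp.generations[0][0].text

async def main() -> None:
    llm = OpenAI(temperature=0.9)
    # Fire all ten requests concurrently; they are network-bound, not CPU-bound
    results = await asyncio.gather(*(async_generate(llm) for _ in range(10)))
    for text in results:
        print(text)

if __name__ == "__main__":
    asyncio.run(main())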
cfc7b2cd6579-0
.ipynb .pdf How to write a custom LLM wrapper How to write a custom LLM wrapper# This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is supported in LangChain. A custom LLM needs to implement only one thing: a _call method that takes in a string and some optional stop words, and returns a string. Optionally, it can also implement an _identifying_params property, which is used when printing the class and should return a dictionary. Let’s implement a very simple custom LLM that just returns the first N characters of the input. from typing import Any, List, Mapping, Optional from langchain.callbacks.manager import CallbackManagerForLLMRun from langchain.llms.base import LLM class CustomLLM(LLM): n: int @property def _llm_type(self) -> str: return "custom" def _call( self, prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None, ) -> str: if stop is not None: raise ValueError("stop kwargs are not permitted.") return prompt[:self.n] @property def _identifying_params(self) -> Mapping[str, Any]: """Get the identifying parameters.""" return {"n": self.n} We can now use this like any other LLM. llm = CustomLLM(n=10) llm("This is a foobar thing") 'This is a ' We can also print the LLM and see its custom printout.
https://python.langchain.com/en/latest/modules/models/llms/examples/custom_llm.html
cfc7b2cd6579-1
print(llm) CustomLLM Params: {'n': 10} previous How to use the async API for LLMs next How (and why) to use the fake LLM By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/llms/examples/custom_llm.html
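Because CustomLLM subclasses the base LLM interface, it drops into any component that expects an LLM. A quick sketch with an LLMChain, reusing the class defined above; the output is simply the first n characters of the formatted prompt:
from langchain import PromptTemplate, LLMChain

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=CustomLLM(n=25), prompt=prompt)
# The chain formats the prompt, then the custom _call truncates it to 25 characters
print(chain.run("colorful socks"))  # -> 'What is a good name for a'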
08165be5c401-0
.ipynb .pdf How to cache LLM calls Contents In Memory Cache SQLite Cache Redis Cache Standard Cache Semantic Cache GPTCache Momento Cache SQLAlchemy Cache Custom SQLAlchemy Schemas Optional Caching Optional Caching in Chains How to cache LLM calls# This notebook covers how to cache results of individual LLM calls. import langchain from langchain.llms import OpenAI # To make the caching really obvious, lets use a slower model. llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2) In Memory Cache# from langchain.cache import InMemoryCache langchain.llm_cache = InMemoryCache() %%time # The first time, it is not yet in cache, so it should take longer llm("Tell me a joke") CPU times: user 35.9 ms, sys: 28.6 ms, total: 64.6 ms Wall time: 4.83 s "\n\nWhy couldn't the bicycle stand up by itself? It was...two tired!" %%time # The second time it is, so it goes faster llm("Tell me a joke") CPU times: user 238 µs, sys: 143 µs, total: 381 µs Wall time: 1.76 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.' SQLite Cache# !rm .langchain.db # We can do the same thing with a SQLite cache from langchain.cache import SQLiteCache langchain.llm_cache = SQLiteCache(database_path=".langchain.db") %%time # The first time, it is not yet in cache, so it should take longer llm("Tell me a joke")
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
08165be5c401-1
llm("Tell me a joke") CPU times: user 17 ms, sys: 9.76 ms, total: 26.7 ms Wall time: 825 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.' %%time # The second time it is, so it goes faster llm("Tell me a joke") CPU times: user 2.46 ms, sys: 1.23 ms, total: 3.7 ms Wall time: 2.67 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.' Redis Cache# Standard Cache# Use Redis to cache prompts and responses. # We can do the same thing with a Redis cache # (make sure your local Redis instance is running first before running this example) from redis import Redis from langchain.cache import RedisCache langchain.llm_cache = RedisCache(redis_=Redis()) %%time # The first time, it is not yet in cache, so it should take longer llm("Tell me a joke") CPU times: user 6.88 ms, sys: 8.75 ms, total: 15.6 ms Wall time: 1.04 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!' %%time # The second time it is, so it goes faster llm("Tell me a joke") CPU times: user 1.59 ms, sys: 610 µs, total: 2.2 ms Wall time: 5.58 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!' Semantic Cache# Use Redis to cache prompts and responses and evaluate hits based on semantic similarity.
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
08165be5c401-2
Semantic Cache# Use Redis to cache prompts and responses and evaluate hits based on semantic similarity. from langchain.embeddings import OpenAIEmbeddings from langchain.cache import RedisSemanticCache langchain.llm_cache = RedisSemanticCache( redis_url="redis://localhost:6379", embedding=OpenAIEmbeddings() ) %%time # The first time, it is not yet in cache, so it should take longer llm("Tell me a joke") CPU times: user 351 ms, sys: 156 ms, total: 507 ms Wall time: 3.37 s "\n\nWhy don't scientists trust atoms?\nBecause they make up everything." %%time # The second time, while not a direct hit, the question is semantically similar to the original question, # so it uses the cached result! llm("Tell me one joke") CPU times: user 6.25 ms, sys: 2.72 ms, total: 8.97 ms Wall time: 262 ms "\n\nWhy don't scientists trust atoms?\nBecause they make up everything." GPTCache# We can use GPTCache for exact match caching OR to cache results based on semantic similarity Let’s first start with an example of exact match from gptcache import Cache from gptcache.manager.factory import manager_factory from gptcache.processor.pre import get_prompt from langchain.cache import GPTCache import hashlib def get_hashed_name(name): return hashlib.sha256(name.encode()).hexdigest() def init_gptcache(cache_obj: Cache, llm: str): hashed_llm = get_hashed_name(llm) cache_obj.init( pre_embedding_func=get_prompt,
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
08165be5c401-3
cache_obj.init( pre_embedding_func=get_prompt, data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"), ) langchain.llm_cache = GPTCache(init_gptcache) %%time # The first time, it is not yet in cache, so it should take longer llm("Tell me a joke") CPU times: user 21.5 ms, sys: 21.3 ms, total: 42.8 ms Wall time: 6.2 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!' %%time # The second time it is, so it goes faster llm("Tell me a joke") CPU times: user 571 µs, sys: 43 µs, total: 614 µs Wall time: 635 µs '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!' Let’s now show an example of similarity caching from gptcache import Cache from gptcache.adapter.api import init_similar_cache from langchain.cache import GPTCache import hashlib def get_hashed_name(name): return hashlib.sha256(name.encode()).hexdigest() def init_gptcache(cache_obj: Cache, llm: str): hashed_llm = get_hashed_name(llm) init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{hashed_llm}") langchain.llm_cache = GPTCache(init_gptcache) %%time # The first time, it is not yet in cache, so it should take longer llm("Tell me a joke") CPU times: user 1.42 s, sys: 279 ms, total: 1.7 s
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
08165be5c401-4
Wall time: 8.44 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.' %%time # This is an exact match, so it finds it in the cache llm("Tell me a joke") CPU times: user 866 ms, sys: 20 ms, total: 886 ms Wall time: 226 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.' %%time # This is not an exact match, but semantically within distance so it hits! llm("Tell me joke") CPU times: user 853 ms, sys: 14.8 ms, total: 868 ms Wall time: 224 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.' Momento Cache# Use Momento to cache prompts and responses. The momento package is required; uncomment the line below to install it: # !pip install momento You’ll need to get a Momento auth token to use this class. It can either be passed in to a momento.CacheClient if you’d like to instantiate that directly, passed as the named parameter auth_token to MomentoCache.from_client_params, or set as the environment variable MOMENTO_AUTH_TOKEN. from datetime import timedelta from langchain.cache import MomentoCache cache_name = "langchain" ttl = timedelta(days=1) langchain.llm_cache = MomentoCache.from_client_params(cache_name, ttl) %%time # The first time, it is not yet in cache, so it should take longer llm("Tell me a joke") CPU times: user 40.7 ms, sys: 16.5 ms, total: 57.2 ms Wall time: 1.73 s
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
08165be5c401-5
Wall time: 1.73 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!' %%time # The second time it is, so it goes faster # When run in the same region as the cache, latencies are single digit ms llm("Tell me a joke") CPU times: user 3.16 ms, sys: 2.98 ms, total: 6.14 ms Wall time: 57.9 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!' SQLAlchemy Cache# # You can use SQLAlchemyCache to cache with any SQL database supported by SQLAlchemy. # from langchain.cache import SQLAlchemyCache # from sqlalchemy import create_engine # engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres") # langchain.llm_cache = SQLAlchemyCache(engine) Custom SQLAlchemy Schemas# # You can define your own declarative SQLAlchemyCache child class to customize the schema used for caching. For example, to support high-speed fulltext prompt indexing with Postgres, use: from sqlalchemy import Column, Integer, String, Computed, Index, Sequence from sqlalchemy import create_engine from sqlalchemy.ext.declarative import declarative_base from sqlalchemy_utils import TSVectorType from langchain.cache import SQLAlchemyCache Base = declarative_base() class FulltextLLMCache(Base): # type: ignore """Postgres table for fulltext-indexed LLM Cache""" __tablename__ = "llm_cache_fulltext" id = Column(Integer, Sequence('cache_id'), primary_key=True) prompt = Column(String, nullable=False) llm = Column(String, nullable=False) idx = Column(Integer) response = Column(String)
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
08165be5c401-6
prompt_tsv = Column(TSVectorType(), Computed("to_tsvector('english', llm || ' ' || prompt)", persisted=True)) __table_args__ = ( Index("idx_fulltext_prompt_tsv", prompt_tsv, postgresql_using="gin"), ) engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres") langchain.llm_cache = SQLAlchemyCache(engine, FulltextLLMCache) Optional Caching# You can also turn off caching for specific LLMs should you choose. In the example below, even though global caching is enabled, we turn it off for a specific LLM. llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2, cache=False) %%time llm("Tell me a joke") CPU times: user 5.8 ms, sys: 2.71 ms, total: 8.51 ms Wall time: 745 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!' %%time llm("Tell me a joke") CPU times: user 4.91 ms, sys: 2.64 ms, total: 7.55 ms Wall time: 623 ms '\n\nTwo guys stole a calendar. They got six months each.' Optional Caching in Chains# You can also turn off caching for particular nodes in chains. Note that because of certain interfaces, it’s often easier to construct the chain first and then edit the LLM afterwards. As an example, we will load a summarizer map-reduce chain. We will cache results for the map step, but not for the combine (reduce) step.
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
08165be5c401-7
llm = OpenAI(model_name="text-davinci-002") no_cache_llm = OpenAI(model_name="text-davinci-002", cache=False) from langchain.text_splitter import CharacterTextSplitter from langchain.chains.mapreduce import MapReduceChain text_splitter = CharacterTextSplitter() with open('../../../state_of_the_union.txt') as f: state_of_the_union = f.read() texts = text_splitter.split_text(state_of_the_union) from langchain.docstore.document import Document docs = [Document(page_content=t) for t in texts[:3]] from langchain.chains.summarize import load_summarize_chain chain = load_summarize_chain(llm, chain_type="map_reduce", reduce_llm=no_cache_llm) %%time chain.run(docs) CPU times: user 452 ms, sys: 60.3 ms, total: 512 ms Wall time: 5.09 s '\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure. In response to Russian aggression in Ukraine, the United States is joining with European allies to impose sanctions and isolate Russia. American forces are being mobilized to protect NATO countries in the event that Putin decides to keep moving west. The Ukrainians are bravely fighting back, but the next few weeks will be hard for them. Putin will pay a high price for his actions in the long run. Americans should not be alarmed, as the United States is taking action to protect its interests and allies.' When we run it again, we see that it runs substantially faster but the final answer is different. This is due to caching at the map steps, but not at the reduce step. %%time chain.run(docs)
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
08165be5c401-8
%%time chain.run(docs) CPU times: user 11.5 ms, sys: 4.33 ms, total: 15.8 ms Wall time: 1.04 s '\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure.' !rm .langchain.db sqlite.db previous How (and why) to use the human input LLM next How to serialize LLM classes Contents In Memory Cache SQLite Cache Redis Cache Standard Cache Semantic Cache GPTCache Momento Cache SQLAlchemy Cache Custom SQLAlchemy Schemas Optional Caching Optional Caching in Chains By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html
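Two small variations not shown in the notebook, both sketches: SQLAlchemyCache accepts any SQLAlchemy engine, so a local SQLite file is enough for quick experiments, and (assuming the module-level llm_cache can simply be reset) setting it back to None turns global caching off again.
import langchain
from sqlalchemy import create_engine
from langchain.cache import SQLAlchemyCache

# Cache into a local SQLite file via SQLAlchemy instead of Postgres
engine = create_engine("sqlite:///./langchain_cache.db")
langchain.llm_cache = SQLAlchemyCache(engine)

# ...and later, to disable global caching again:
langchain.llm_cache = None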
7bd6fce3cb05-0
.ipynb .pdf How to serialize LLM classes Contents Loading Saving How to serialize LLM classes# This notebook walks through how to write and read an LLM Configuration to and from disk. This is useful if you want to save the configuration for a given LLM (e.g., the provider, the temperature, etc). from langchain.llms import OpenAI from langchain.llms.loading import load_llm Loading# First, lets go over loading an LLM from disk. LLMs can be saved on disk in two formats: json or yaml. No matter the extension, they are loaded in the same way. !cat llm.json { "model_name": "text-davinci-003", "temperature": 0.7, "max_tokens": 256, "top_p": 1.0, "frequency_penalty": 0.0, "presence_penalty": 0.0, "n": 1, "best_of": 1, "request_timeout": null, "_type": "openai" } llm = load_llm("llm.json") !cat llm.yaml _type: openai best_of: 1 frequency_penalty: 0.0 max_tokens: 256 model_name: text-davinci-003 n: 1 presence_penalty: 0.0 request_timeout: null temperature: 0.7 top_p: 1.0 llm = load_llm("llm.yaml") Saving# If you want to go from an LLM in memory to a serialized version of it, you can do so easily by calling the .save method. Again, this supports both json and yaml. llm.save("llm.json")
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_serialization.html
7bd6fce3cb05-1
llm.save("llm.json") llm.save("llm.yaml") previous How to cache LLM calls next How to stream LLM and Chat Model responses Contents Loading Saving By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/llms/examples/llm_serialization.html
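A sketch of the full round trip, assuming a writable working directory; the file extension determines the format:
from langchain.llms import OpenAI
from langchain.llms.loading import load_llm

original = OpenAI(model_name="text-davinci-003", temperature=0.7)
original.save("my_llm.yaml")        # .yaml -> YAML, .json -> JSON

restored = load_llm("my_llm.yaml")
# The restored object carries the same configuration
assert restored.temperature == original.temperature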
9b3fe37d86ee-0
.ipynb .pdf How (and why) to use the human input LLM How (and why) to use the human input LLM# Similar to the fake LLM, LangChain provides a pseudo LLM class that can be used for testing, debugging, or educational purposes. This allows you to mock out calls to the LLM and simulate how a human would respond if they received the prompts. In this notebook, we go over how to use this. We begin by using the HumanInputLLM in an agent. from langchain.llms.human import HumanInputLLM from langchain.agents import load_tools from langchain.agents import initialize_agent from langchain.agents import AgentType Since we will use the WikipediaQueryRun tool in this notebook, you might need to install the wikipedia package if you haven’t done so already. %pip install wikipedia tools = load_tools(["wikipedia"]) llm = HumanInputLLM(prompt_func=lambda prompt: print(f"\n===PROMPT====\n{prompt}\n=====END OF PROMPT======")) agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True) agent.run("What is 'Bocchi the Rock!'?") > Entering new AgentExecutor chain... ===PROMPT==== Answer the following questions as best you can. You have access to the following tools: Wikipedia: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, historical events, or other subjects. Input should be a search query. Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [Wikipedia] Action Input: the input to the action
https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html
9b3fe37d86ee-1
Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! Question: What is 'Bocchi the Rock!'? Thought: =====END OF PROMPT====== I need to use a tool. Action: Wikipedia Action Input: Bocchi the Rock!, Japanese four-panel manga and anime series. Observation: Page: Bocchi the Rock! Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōbon volumes as of November 2022. An anime television series adaptation produced by CloverWorks aired from October to December 2022. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim. Page: Manga Time Kirara Summary: Manga Time Kirara (まんがタイムきらら, Manga Taimu Kirara) is a Japanese seinen manga magazine published by Houbunsha which mainly serializes four-panel manga. The magazine is sold on the ninth of each month and was first published as a special edition of Manga Time, another Houbunsha magazine, on May 17, 2002. Characters from this magazine have appeared in a crossover role-playing game called Kirara Fantasia. Page: Manga Time Kirara Max
https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html
9b3fe37d86ee-2
Page: Manga Time Kirara Max Summary: Manga Time Kirara Max (まんがタイムきららMAX) is a Japanese four-panel seinen manga magazine published by Houbunsha. It is the third magazine of the "Kirara" series, after "Manga Time Kirara" and "Manga Time Kirara Carat". The first issue was released on September 29, 2004. Currently the magazine is released on the 19th of each month. Thought: ===PROMPT==== Answer the following questions as best you can. You have access to the following tools: Wikipedia: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, historical events, or other subjects. Input should be a search query. Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [Wikipedia] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! Question: What is 'Bocchi the Rock!'? Thought:I need to use a tool. Action: Wikipedia Action Input: Bocchi the Rock!, Japanese four-panel manga and anime series. Observation: Page: Bocchi the Rock!
https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html
9b3fe37d86ee-3
Observation: Page: Bocchi the Rock! Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōbon volumes as of November 2022. An anime television series adaptation produced by CloverWorks aired from October to December 2022. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim. Page: Manga Time Kirara Summary: Manga Time Kirara (まんがタイムきらら, Manga Taimu Kirara) is a Japanese seinen manga magazine published by Houbunsha which mainly serializes four-panel manga. The magazine is sold on the ninth of each month and was first published as a special edition of Manga Time, another Houbunsha magazine, on May 17, 2002. Characters from this magazine have appeared in a crossover role-playing game called Kirara Fantasia. Page: Manga Time Kirara Max Summary: Manga Time Kirara Max (まんがタイムきららMAX) is a Japanese four-panel seinen manga magazine published by Houbunsha. It is the third magazine of the "Kirara" series, after "Manga Time Kirara" and "Manga Time Kirara Carat". The first issue was released on September 29, 2004. Currently the magazine is released on the 19th of each month. Thought: =====END OF PROMPT====== These are not relevant articles. Action: Wikipedia
https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html
9b3fe37d86ee-4
=====END OF PROMPT====== These are not relevant articles. Action: Wikipedia Action Input: Bocchi the Rock!, Japanese four-panel manga series written and illustrated by Aki Hamaji. Observation: Page: Bocchi the Rock! Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōbon volumes as of November 2022. An anime television series adaptation produced by CloverWorks aired from October to December 2022. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim. Thought: ===PROMPT==== Answer the following questions as best you can. You have access to the following tools: Wikipedia: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, historical events, or other subjects. Input should be a search query. Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [Wikipedia] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! Question: What is 'Bocchi the Rock!'? Thought:I need to use a tool. Action: Wikipedia Action Input: Bocchi the Rock!, Japanese four-panel manga and anime series.
https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html
9b3fe37d86ee-5
Action Input: Bocchi the Rock!, Japanese four-panel manga and anime series. Observation: Page: Bocchi the Rock! Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōbon volumes as of November 2022. An anime television series adaptation produced by CloverWorks aired from October to December 2022. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim. Page: Manga Time Kirara Summary: Manga Time Kirara (まんがタイムきらら, Manga Taimu Kirara) is a Japanese seinen manga magazine published by Houbunsha which mainly serializes four-panel manga. The magazine is sold on the ninth of each month and was first published as a special edition of Manga Time, another Houbunsha magazine, on May 17, 2002. Characters from this magazine have appeared in a crossover role-playing game called Kirara Fantasia. Page: Manga Time Kirara Max Summary: Manga Time Kirara Max (まんがタイムきららMAX) is a Japanese four-panel seinen manga magazine published by Houbunsha. It is the third magazine of the "Kirara" series, after "Manga Time Kirara" and "Manga Time Kirara Carat". The first issue was released on September 29, 2004. Currently the magazine is released on the 19th of each month. Thought:These are not relevant articles. Action: Wikipedia
https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html
9b3fe37d86ee-6
Thought:These are not relevant articles. Action: Wikipedia Action Input: Bocchi the Rock!, Japanese four-panel manga series written and illustrated by Aki Hamaji. Observation: Page: Bocchi the Rock! Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized in Houbunsha's seinen manga magazine Manga Time Kirara Max since December 2017. Its chapters have been collected in five tankōbon volumes as of November 2022. An anime television series adaptation produced by CloverWorks aired from October to December 2022. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim. Thought: =====END OF PROMPT====== It worked. Final Answer: Bocchi the Rock! is a four-panel manga series and anime television series. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim. > Finished chain. "Bocchi the Rock! is a four-panel manga series and anime television series. The series has been praised for its writing, comedy, characters, and depiction of social anxiety, with the anime's visual creativity receiving acclaim." previous How (and why) to use the fake LLM next How to cache LLM calls By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/llms/examples/human_input_llm.html
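You can also call a HumanInputLLM outside of an agent to see the mechanics in isolation. A minimal sketch; it assumes the default input function reads your reply from the terminal:
from langchain.llms.human import HumanInputLLM

llm = HumanInputLLM(
    prompt_func=lambda prompt: print(f"\n===PROMPT====\n{prompt}\n=====END OF PROMPT======")
)
# The prompt is shown via prompt_func, and whatever you type back
# at the terminal is returned as the "model" output.
answer = llm("What is 2 + 2?")
print(answer)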
7d45020d986d-0
.rst .pdf Integrations Integrations# The examples here all highlight how to integrate with different chat models. Anthropic Azure Google Cloud Platform Vertex AI PaLM OpenAI PromptLayer ChatOpenAI previous How to stream responses next Anthropic By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/chat/integrations.html
69794a56aa6a-0
.rst .pdf How-To Guides How-To Guides# The guides here address common "how-to" questions about working with chat models. How to use few shot examples How to stream responses previous Getting Started next How to use few shot examples By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/chat/how_to_guides.html
19a01905d5c6-0
.ipynb .pdf Getting Started Contents PromptTemplates LLMChain Streaming Getting Started# This notebook covers how to get started with chat models. The interface is based around messages rather than raw text. from langchain.chat_models import ChatOpenAI from langchain import PromptTemplate, LLMChain from langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate, ) from langchain.schema import ( AIMessage, HumanMessage, SystemMessage ) chat = ChatOpenAI(temperature=0) You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage – ChatMessage takes in an arbitrary role parameter. Most of the time, you’ll just be dealing with HumanMessage, AIMessage, and SystemMessage chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")]) AIMessage(content="J'aime programmer.", additional_kwargs={}) OpenAI’s chat model supports multiple messages as input. See here for more information. Here is an example of sending a system and user message to the chat model: messages = [ SystemMessage(content="You are a helpful assistant that translates English to French."), HumanMessage(content="I love programming.") ] chat(messages) AIMessage(content="J'aime programmer.", additional_kwargs={}) You can go one step further and generate completions for multiple sets of messages using generate. This returns an LLMResult with an additional message parameter. batch_messages = [ [ SystemMessage(content="You are a helpful assistant that translates English to French."),
https://python.langchain.com/en/latest/modules/models/chat/getting_started.html
19a01905d5c6-1
[ SystemMessage(content="You are a helpful assistant that translates English to French."), HumanMessage(content="I love programming.") ], [ SystemMessage(content="You are a helpful assistant that translates English to French."), HumanMessage(content="I love artificial intelligence.") ], ] result = chat.generate(batch_messages) result LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}}) You can recover things like token usage from this LLMResult result.llm_output {'token_usage': {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}} PromptTemplates# You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt – this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model. For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like: template="You are a helpful assistant that translates {input_language} to {output_language}." system_message_prompt = SystemMessagePromptTemplate.from_template(template) human_template="{text}"
https://python.langchain.com/en/latest/modules/models/chat/getting_started.html
19a01905d5c6-2
system_message_prompt = SystemMessagePromptTemplate.from_template(template) human_template="{text}" human_message_prompt = HumanMessagePromptTemplate.from_template(human_template) chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt]) # get a chat completion from the formatted messages chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages()) AIMessage(content="J'adore la programmation.", additional_kwargs={}) If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate outside and then pass it in, eg: prompt=PromptTemplate( template="You are a helpful assistant that translates {input_language} to {output_language}.", input_variables=["input_language", "output_language"], ) system_message_prompt = SystemMessagePromptTemplate(prompt=prompt) LLMChain# You can use the existing LLMChain in a very similar way to before - provide a prompt and a model. chain = LLMChain(llm=chat, prompt=chat_prompt) chain.run(input_language="English", output_language="French", text="I love programming.") "J'adore la programmation." Streaming# Streaming is supported for ChatOpenAI through callback handling. from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler chat = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0) resp = chat([HumanMessage(content="Write me a song about sparkling water.")]) Verse 1: Bubbles rising to the top A refreshing drink that never stops Clear and crisp, it's pure delight A taste that's sure to excite Chorus: Sparkling water, oh so fine A drink that's always on my mind With every sip, I feel alive
https://python.langchain.com/en/latest/modules/models/chat/getting_started.html
19a01905d5c6-3
A drink that's always on my mind With every sip, I feel alive Sparkling water, you're my vibe Verse 2: No sugar, no calories, just pure bliss A drink that's hard to resist It's the perfect way to quench my thirst A drink that always comes first Chorus: Sparkling water, oh so fine A drink that's always on my mind With every sip, I feel alive Sparkling water, you're my vibe Bridge: From the mountains to the sea Sparkling water, you're the key To a healthy life, a happy soul A drink that makes me feel whole Chorus: Sparkling water, oh so fine A drink that's always on my mind With every sip, I feel alive Sparkling water, you're my vibe Outro: Sparkling water, you're the one A drink that's always so much fun I'll never let you go, my friend Sparkling previous Chat Models next How-To Guides Contents PromptTemplates LLMChain Streaming By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/chat/getting_started.html
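Two details mentioned above but not demonstrated: ChatMessage takes an arbitrary role string, and a PromptValue can be rendered either way depending on the model type. A short sketch, reusing chat_prompt from the PromptTemplates example:
from langchain.schema import ChatMessage

# ChatMessage lets you set the role explicitly
msg = ChatMessage(role="assistant", content="J'aime programmer.")

prompt_value = chat_prompt.format_prompt(
    input_language="English", output_language="French", text="I love programming."
)
prompt_value.to_string()    # plain text, for a completion-style LLM
prompt_value.to_messages()  # list of messages, for a chat model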
3692518f8db4-0
.ipynb .pdf OpenAI OpenAI# This notebook covers how to get started with OpenAI chat models. from langchain.chat_models import ChatOpenAI from langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate, ) from langchain.schema import ( AIMessage, HumanMessage, SystemMessage ) chat = ChatOpenAI(temperature=0) messages = [ SystemMessage(content="You are a helpful assistant that translates English to French."), HumanMessage(content="Translate this sentence from English to French. I love programming.") ] chat(messages) AIMessage(content="J'aime programmer.", additional_kwargs={}, example=False) You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt – this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model. For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like: template="You are a helpful assistant that translates {input_language} to {output_language}." system_message_prompt = SystemMessagePromptTemplate.from_template(template) human_template="{text}" human_message_prompt = HumanMessagePromptTemplate.from_template(human_template) chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt]) # get a chat completion from the formatted messages chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
https://python.langchain.com/en/latest/modules/models/chat/integrations/openai.html
3692518f8db4-1
AIMessage(content="J'adore la programmation.", additional_kwargs={}) previous Google Cloud Platform Vertex AI PaLM next PromptLayer ChatOpenAI By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/chat/integrations/openai.html
3b90e3843203-0
.ipynb .pdf PromptLayer ChatOpenAI Contents Install PromptLayer Imports Set the Environment API Key Use the PromptLayerOpenAI LLM like normal Using PromptLayer Track PromptLayer ChatOpenAI# This example showcases how to connect to PromptLayer to start recording your ChatOpenAI requests. Install PromptLayer# The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip. pip install promptlayer Imports# import os from langchain.chat_models import PromptLayerChatOpenAI from langchain.schema import HumanMessage Set the Environment API Key# You can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the navbar. Set it as an environment variable called PROMPTLAYER_API_KEY. os.environ["PROMPTLAYER_API_KEY"] = "**********" Use the PromptLayerOpenAI LLM like normal# You can optionally pass in pl_tags to track your requests with PromptLayer’s tagging feature. chat = PromptLayerChatOpenAI(pl_tags=["langchain"]) chat([HumanMessage(content="I am a cat and I want")]) AIMessage(content='to take a nap in a cozy spot. I search around for a suitable place and finally settle on a soft cushion on the window sill. I curl up into a ball and close my eyes, relishing the warmth of the sun on my fur. As I drift off to sleep, I can hear the birds chirping outside and feel the gentle breeze blowing through the window. This is the life of a contented cat.', additional_kwargs={}) The above request should now appear on your PromptLayer dashboard. Using PromptLayer Track# If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantializing the PromptLayer LLM to get the request id.
https://python.langchain.com/en/latest/modules/models/chat/integrations/promptlayer_chatopenai.html
3b90e3843203-1
import promptlayer chat = PromptLayerChatOpenAI(return_pl_id=True) chat_results = chat.generate([[HumanMessage(content="I am a cat and I want")]]) for res in chat_results.generations: pl_request_id = res[0].generation_info["pl_request_id"] promptlayer.track.score(request_id=pl_request_id, score=100) Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well. Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard. previous OpenAI next Text Embedding Models Contents Install PromptLayer Imports Set the Environment API Key Use the PromptLayerOpenAI LLM like normal Using PromptLayer Track By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/chat/integrations/promptlayer_chatopenai.html
84a16219f836-0
.ipynb .pdf Azure Azure# This notebook goes over how to connect to an Azure hosted OpenAI endpoint from langchain.chat_models import AzureChatOpenAI from langchain.schema import HumanMessage BASE_URL = "https://${TODO}.openai.azure.com" API_KEY = "..." DEPLOYMENT_NAME = "chat" model = AzureChatOpenAI( openai_api_base=BASE_URL, openai_api_version="2023-03-15-preview", deployment_name=DEPLOYMENT_NAME, openai_api_key=API_KEY, openai_api_type = "azure", ) model([HumanMessage(content="Translate this sentence from English to French. I love programming.")]) AIMessage(content="\n\nJ'aime programmer.", additional_kwargs={}) previous Anthropic next Google Cloud Platform Vertex AI PaLM By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/chat/integrations/azure_chat_openai.html
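Rather than hardcoding credentials, you would normally read them from the environment. A sketch with hypothetical variable names (AZURE_OPENAI_BASE, AZURE_OPENAI_DEPLOYMENT, and AZURE_OPENAI_KEY are placeholders you would define yourself, not variables LangChain looks for):
import os
from langchain.chat_models import AzureChatOpenAI

model = AzureChatOpenAI(
    openai_api_base=os.environ["AZURE_OPENAI_BASE"],        # hypothetical env var
    openai_api_version="2023-03-15-preview",
    deployment_name=os.environ["AZURE_OPENAI_DEPLOYMENT"],  # hypothetical env var
    openai_api_key=os.environ["AZURE_OPENAI_KEY"],          # hypothetical env var
    openai_api_type="azure",
)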
06351b9cdb24-0
.ipynb .pdf Google Cloud Platform Vertex AI PaLM Google Cloud Platform Vertex AI PaLM# Note: This is seperate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. PaLM API on Vertex AI is a Preview offering, subject to the Pre-GA Offerings Terms of the GCP Service Specific Terms. Pre-GA products and features may have limited support, and changes to pre-GA products and features may not be compatible with other pre-GA versions. For more information, see the launch stage descriptions. Further, by using PaLM API on Vertex AI, you agree to the Generative AI Preview terms and conditions (Preview Terms). For PaLM API on Vertex AI, you can process personal data as outlined in the Cloud Data Processing Addendum, subject to applicable restrictions and obligations in the Agreement (as defined in the Preview Terms). To use Vertex AI PaLM you must have the google-cloud-aiplatform Python package installed and either: Have credentials configured for your environment (gcloud, workload identity, etc…) Store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variable This codebase uses the google.auth library which first looks for the application credentials variable mentioned above, and then looks for system-level auth. For more information, see: https://cloud.google.com/docs/authentication/application-default-credentials#GAC https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth #!pip install google-cloud-aiplatform from langchain.chat_models import ChatVertexAI from langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate, ) from langchain.schema import ( HumanMessage, SystemMessage ) chat = ChatVertexAI()
https://python.langchain.com/en/latest/modules/models/chat/integrations/google_vertex_ai_palm.html
06351b9cdb24-1
HumanMessage, SystemMessage ) chat = ChatVertexAI() messages = [ SystemMessage(content="You are a helpful assistant that translates English to French."), HumanMessage(content="Translate this sentence from English to French. I love programming.") ] chat(messages) AIMessage(content='Sure, here is the translation of the sentence "I love programming" from English to French:\n\nJ\'aime programmer.', additional_kwargs={}, example=False) You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt – this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model. For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like: template="You are a helpful assistant that translates {input_language} to {output_language}." system_message_prompt = SystemMessagePromptTemplate.from_template(template) human_template="{text}" human_message_prompt = HumanMessagePromptTemplate.from_template(human_template) chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt]) # get a chat completion from the formatted messages chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages()) AIMessage(content='Sure, here is the translation of "I love programming" in French:\n\nJ\'aime programmer.', additional_kwargs={}, example=False) previous Azure next OpenAI By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/chat/integrations/google_vertex_ai_palm.html
ce2dc56d44a1-0
.ipynb .pdf Anthropic Contents ChatAnthropic also supports async and streaming functionality: Anthropic# This notebook covers how to get started with Anthropic chat models. from langchain.chat_models import ChatAnthropic from langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate, ) from langchain.schema import ( AIMessage, HumanMessage, SystemMessage ) chat = ChatAnthropic() messages = [ HumanMessage(content="Translate this sentence from English to French. I love programming.") ] chat(messages) AIMessage(content=" J'aime programmer. ", additional_kwargs={}) ChatAnthropic also supports async and streaming functionality:# from langchain.callbacks.manager import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler await chat.agenerate([messages]) LLMResult(generations=[[ChatGeneration(text=" J'aime la programmation.", generation_info=None, message=AIMessage(content=" J'aime la programmation.", additional_kwargs={}))]], llm_output={}) chat = ChatAnthropic(streaming=True, verbose=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()])) chat(messages) J'adore programmer. AIMessage(content=" J'adore programmer.", additional_kwargs={}) previous Integrations next Azure Contents ChatAnthropic also supports async and streaming functionality: By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/chat/integrations/anthropic.html
98e71397c445-0
.ipynb .pdf How to use few shot examples Contents Alternating Human/AI messages System Messages How to use few shot examples# This notebook covers how to use few shot examples in chat models. There does not appear to be solid consensus on how best to do few shot prompting. As a result, we are not solidifying any abstractions around this yet but rather using existing abstractions. Alternating Human/AI messages# The first way of doing few shot prompting relies on using alternating human/ai messages. See an example of this below. from langchain.chat_models import ChatOpenAI from langchain import PromptTemplate, LLMChain from langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate, ) from langchain.schema import ( AIMessage, HumanMessage, SystemMessage ) chat = ChatOpenAI(temperature=0) template="You are a helpful assistant that translates english to pirate." system_message_prompt = SystemMessagePromptTemplate.from_template(template) example_human = HumanMessagePromptTemplate.from_template("Hi") example_ai = AIMessagePromptTemplate.from_template("Argh me mateys") human_template="{text}" human_message_prompt = HumanMessagePromptTemplate.from_template(human_template) chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, example_human, example_ai, human_message_prompt]) chain = LLMChain(llm=chat, prompt=chat_prompt) # get a chat completion from the formatted messages chain.run("I love programming.") "I be lovin' programmin', me hearty!" System Messages# OpenAI provides an optional name parameter that they also recommend using in conjunction with system messages to do few shot prompting. Here is an example of how to do that below.
https://python.langchain.com/en/latest/modules/models/chat/examples/few_shot_examples.html
98e71397c445-1
template="You are a helpful assistant that translates english to pirate." system_message_prompt = SystemMessagePromptTemplate.from_template(template) example_human = SystemMessagePromptTemplate.from_template("Hi", additional_kwargs={"name": "example_user"}) example_ai = SystemMessagePromptTemplate.from_template("Argh me mateys", additional_kwargs={"name": "example_assistant"}) human_template="{text}" human_message_prompt = HumanMessagePromptTemplate.from_template(human_template) chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, example_human, example_ai, human_message_prompt]) chain = LLMChain(llm=chat, prompt=chat_prompt) # get a chat completion from the formatted messages chain.run("I love programming.") "I be lovin' programmin', me hearty." previous How-To Guides next How to stream responses Contents Alternating Human/AI messages System Messages By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/chat/examples/few_shot_examples.html
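If you want to see exactly what the few-shot prompt sends to the model, you can format it yourself before running the chain. A small sketch reusing chat_prompt from above:
# Inspect the rendered messages: system instruction, example pair, then the real input
messages = chat_prompt.format_prompt(text="I love programming.").to_messages()
for m in messages:
    print(type(m).__name__, "->", m.content)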
84684baeedf7-0
.ipynb .pdf How to stream responses How to stream responses# This notebook goes over how to use streaming with a chat model. from langchain.chat_models import ChatOpenAI from langchain.schema import ( HumanMessage, ) from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler chat = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0) resp = chat([HumanMessage(content="Write me a song about sparkling water.")]) Verse 1: Bubbles rising to the top A refreshing drink that never stops Clear and crisp, it's pure delight A taste that's sure to excite Chorus: Sparkling water, oh so fine A drink that's always on my mind With every sip, I feel alive Sparkling water, you're my vibe Verse 2: No sugar, no calories, just pure bliss A drink that's hard to resist It's the perfect way to quench my thirst A drink that always comes first Chorus: Sparkling water, oh so fine A drink that's always on my mind With every sip, I feel alive Sparkling water, you're my vibe Bridge: From the mountains to the sea Sparkling water, you're the key To a healthy life, a happy soul A drink that makes me feel whole Chorus: Sparkling water, oh so fine A drink that's always on my mind With every sip, I feel alive Sparkling water, you're my vibe Outro: Sparkling water, you're the one A drink that's always so much fun I'll never let you go, my friend Sparkling previous How to use few shot examples next Integrations By Harrison Chase
https://python.langchain.com/en/latest/modules/models/chat/examples/streaming.html
882fb64ddf7f-0
.ipynb .pdf OpenAI OpenAI# Let’s load the OpenAI Embedding class. from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() text = "This is a test document." query_result = embeddings.embed_query(text) doc_result = embeddings.embed_documents([text]) Let’s load the OpenAI Embedding class with first generation models (e.g. text-search-ada-doc-001/text-search-ada-query-001). Note: These are not recommended models - see here from langchain.embeddings.openai import OpenAIEmbeddings embeddings = OpenAIEmbeddings() text = "This is a test document." query_result = embeddings.embed_query(text) doc_result = embeddings.embed_documents([text]) import os # if you are behind an explicit proxy, you can use the OPENAI_PROXY environment variable to pass requests through it os.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080" previous MosaicML embeddings next SageMaker Endpoint Embeddings By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/openai.html
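The returned embeddings are plain lists of floats, so you can compare them directly. A sketch using numpy for cosine similarity (numpy is an extra dependency used only for this illustration):
import numpy as np

def cosine_similarity(a, b):
    a, b = np.array(a), np.array(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# query_result and doc_result come from the snippet above;
# embedding the same text twice should score close to 1.0
print(cosine_similarity(query_result, doc_result[0]))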
80e6c6d11483-0
.ipynb .pdf TensorflowHub TensorflowHub# Let’s load the TensorflowHub Embedding class. from langchain.embeddings import TensorflowHubEmbeddings embeddings = TensorflowHubEmbeddings() 2023-01-30 23:53:01.652176: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2023-01-30 23:53:34.362802: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. text = "This is a test document." query_result = embeddings.embed_query(text) doc_results = embeddings.embed_documents(["foo"]) doc_results previous Sentence Transformers Embeddings next Prompts By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/tensorflowhub.html
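A minimal sketch, not from the original page, of pointing the class at a specific TensorFlow Hub model; the model_url parameter name and the universal-sentence-encoder URL below are assumptions and may differ in your version.

from langchain.embeddings import TensorflowHubEmbeddings

# assumed parameter name (model_url) and model; adjust to whatever your version exposes
url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
embeddings = TensorflowHubEmbeddings(model_url=url)
query_result = embeddings.embed_query("This is a test document.")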
99841c93fdb6-0
.ipynb .pdf AzureOpenAI AzureOpenAI# Let’s load the OpenAI Embedding class with environment variables set to indicate to use Azure endpoints. # set the environment variables needed for openai package to know to reach out to azure import os os.environ["OPENAI_API_TYPE"] = "azure" os.environ["OPENAI_API_BASE"] = "https://<your-endpoint>.openai.azure.com/" os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key" os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview" from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings(deployment="your-embeddings-deployment-name") text = "This is a test document." query_result = embeddings.embed_query(text) doc_result = embeddings.embed_documents([text]) previous Aleph Alpha next Bedrock Embeddings By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/azureopenai.html
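Some Azure OpenAI embedding deployments reject batched requests. A hedged sketch of working around that with the chunk_size parameter, under the assumption that your deployment only accepts one text per request:

from langchain.embeddings import OpenAIEmbeddings

# chunk_size controls how many texts are sent per request; 1 is the most conservative setting
embeddings = OpenAIEmbeddings(deployment="your-embeddings-deployment-name", chunk_size=1)
doc_result = embeddings.embed_documents(["This is a test document.", "This is another test document."])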
bed9c7184688-0
.ipynb .pdf Self Hosted Embeddings Self Hosted Embeddings# Let’s load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes. from langchain.embeddings import ( SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, SelfHostedHuggingFaceInstructEmbeddings, ) import runhouse as rh # For an on-demand A100 with GCP, Azure, or Lambda gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False) # For an on-demand A10G with AWS (no single A100s on AWS) # gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws') # For an existing cluster # gpu = rh.cluster(ips=['<ip of the cluster>'], # ssh_creds={'ssh_user': '...', 'ssh_private_key':'<path_to_key>'}, # name='my-cluster') embeddings = SelfHostedHuggingFaceEmbeddings(hardware=gpu) text = "This is a test document." query_result = embeddings.embed_query(text) And similarly for SelfHostedHuggingFaceInstructEmbeddings: embeddings = SelfHostedHuggingFaceInstructEmbeddings(hardware=gpu) Now let’s load an embedding model with a custom load function: def get_pipeline(): from transformers import ( AutoModelForCausalLM, AutoTokenizer, pipeline, ) # Must be inside the function in notebooks model_id = "facebook/bart-base" tokenizer = AutoTokenizer.from_pretrained(model_id)
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/self-hosted.html
bed9c7184688-1
tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) return pipeline("feature-extraction", model=model, tokenizer=tokenizer) def inference_fn(pipeline, prompt): # Return last hidden state of the model if isinstance(prompt, list): return [emb[0][-1] for emb in pipeline(prompt)] return pipeline(prompt)[0][-1] embeddings = SelfHostedEmbeddings( model_load_fn=get_pipeline, hardware=gpu, model_reqs=["./", "torch", "transformers"], inference_fn=inference_fn, ) query_result = embeddings.embed_query(text) previous SageMaker Endpoint Embeddings next Sentence Transformers Embeddings By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/self-hosted.html
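The custom-pipeline embeddings support batch embedding as well; a short usage sketch reusing the embeddings object defined above (the example texts are illustrative):

texts = ["This is a test document.", "This is another test document."]
doc_results = embeddings.embed_documents(texts)
print(len(doc_results), len(doc_results[0]))  # number of documents, length of each embedding vector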
b7812bf301cc-0
.ipynb .pdf Cohere Cohere# Let’s load the Cohere Embedding class. from langchain.embeddings import CohereEmbeddings embeddings = CohereEmbeddings(cohere_api_key=cohere_api_key) text = "This is a test document." query_result = embeddings.embed_query(text) doc_result = embeddings.embed_documents([text]) previous Bedrock Embeddings next Elasticsearch By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/cohere.html
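Note that cohere_api_key is assumed to already be defined in the snippet above. A sketch, not from the original page, of one way to supply it; the COHERE_API_KEY environment variable name is my assumption about what the integration reads.

import os
from getpass import getpass

cohere_api_key = getpass("Cohere API key: ")
# exporting the key should let CohereEmbeddings() pick it up without an explicit argument
os.environ["COHERE_API_KEY"] = cohere_api_key

from langchain.embeddings import CohereEmbeddings
embeddings = CohereEmbeddings(cohere_api_key=cohere_api_key)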
2d5f80c687a6-0
.ipynb .pdf Llama-cpp Llama-cpp# This notebook goes over how to use Llama-cpp embeddings within LangChain !pip install llama-cpp-python from langchain.embeddings import LlamaCppEmbeddings llama = LlamaCppEmbeddings(model_path="/path/to/model/ggml-model-q4_0.bin") text = "This is a test document." query_result = llama.embed_query(text) doc_result = llama.embed_documents([text]) previous Jina next MiniMax By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/llamacpp.html
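The embeddings can also be tuned with the usual llama.cpp options; as far as I can tell the class exposes parameters such as n_ctx and n_threads, but treat the exact names and values below as assumptions.

from langchain.embeddings import LlamaCppEmbeddings

llama = LlamaCppEmbeddings(
    model_path="/path/to/model/ggml-model-q4_0.bin",  # placeholder path from the page
    n_ctx=512,     # assumed context-window parameter
    n_threads=4,   # assumed CPU-thread parameter
)
query_result = llama.embed_query("This is a test document.")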
f4617ae85499-0
.ipynb .pdf ModelScope ModelScope# Let’s load the ModelScope Embedding class. from langchain.embeddings import ModelScopeEmbeddings model_id = "damo/nlp_corom_sentence-embedding_english-base" embeddings = ModelScopeEmbeddings(model_id=model_id) text = "This is a test document." query_result = embeddings.embed_query(text) doc_results = embeddings.embed_documents(["foo"]) previous MiniMax next MosaicML embeddings By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/modelscope_hub.html
5c35fb9b1955-0
.ipynb .pdf Aleph Alpha Contents Asymmetric Symmetric Aleph Alpha# There are two possible ways to use Aleph Alpha’s semantic embeddings. If you have texts with a dissimilar structure (e.g. a Document and a Query) you would want to use asymmetric embeddings. Conversely, for texts with comparable structures, symmetric embeddings are the suggested approach. Asymmetric# from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding document = "This is a content of the document" query = "What is the content of the document?" embeddings = AlephAlphaAsymmetricSemanticEmbedding() doc_result = embeddings.embed_documents([document]) query_result = embeddings.embed_query(query) Symmetric# from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding text = "This is a test text" embeddings = AlephAlphaSymmetricSemanticEmbedding() doc_result = embeddings.embed_documents([text]) query_result = embeddings.embed_query(text) previous Text Embedding Models next AzureOpenAI Contents Asymmetric Symmetric By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/aleph_alpha.html
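The snippets above assume credentials are already configured. A small sketch, not from the original page, of supplying the token through an environment variable; ALEPH_ALPHA_API_KEY is my assumption about the variable name the client reads.

import os
from getpass import getpass

os.environ["ALEPH_ALPHA_API_KEY"] = getpass("Aleph Alpha API key: ")

from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding
embeddings = AlephAlphaAsymmetricSemanticEmbedding()
query_result = embeddings.embed_query("What is the content of the document?")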
7f61c52ad78e-0
.ipynb .pdf MiniMax MiniMax# MiniMax offers an embeddings service. This example goes over how to use LangChain to interact with MiniMax Inference for text embedding. import os os.environ["MINIMAX_GROUP_ID"] = "MINIMAX_GROUP_ID" os.environ["MINIMAX_API_KEY"] = "MINIMAX_API_KEY" from langchain.embeddings import MiniMaxEmbeddings embeddings = MiniMaxEmbeddings() query_text = "This is a test query." query_result = embeddings.embed_query(query_text) document_text = "This is a test document." document_result = embeddings.embed_documents([document_text]) import numpy as np query_numpy = np.array(query_result) document_numpy = np.array(document_result[0]) similarity = np.dot(query_numpy, document_numpy) / (np.linalg.norm(query_numpy)*np.linalg.norm(document_numpy)) print(f"Cosine similarity between document and query: {similarity}") Cosine similarity between document and query: 0.1573236279277012 previous Llama-cpp next ModelScope By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/minimax.html
b8f28e5f950e-0
.ipynb .pdf Bedrock Embeddings Bedrock Embeddings# %pip install boto3 from langchain.embeddings import BedrockEmbeddings embeddings = BedrockEmbeddings(credentials_profile_name="bedrock-admin") embeddings.embed_query("This is a content of the document") embeddings.embed_documents(["This is a content of the document"]) previous AzureOpenAI next Cohere By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/bedrock.html
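If the defaults are not what you want, the class should also accept a region and model id; a sketch in which region_name, model_id, and the Titan embedding model below are assumptions to verify against your account:

from langchain.embeddings import BedrockEmbeddings

embeddings = BedrockEmbeddings(
    credentials_profile_name="bedrock-admin",
    region_name="us-east-1",                # assumed region
    model_id="amazon.titan-embed-text-v1",  # illustrative model id; check what your account exposes
)
query_result = embeddings.embed_query("This is a content of the document")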
e09f0fc0e148-0
.ipynb .pdf Sentence Transformers Embeddings Sentence Transformers Embeddings# SentenceTransformers embeddings are called using the HuggingFaceEmbeddings integration. We have also added an alias for SentenceTransformerEmbeddings for users who are more familiar with directly using that package. SentenceTransformers, which originated from the Sentence-BERT project, is a Python package that can generate text and image embeddings. !pip install sentence_transformers > /dev/null [notice] A new release of pip is available: 23.0.1 -> 23.1.1 [notice] To update, run: pip install --upgrade pip from langchain.embeddings import HuggingFaceEmbeddings, SentenceTransformerEmbeddings embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2") # Equivalent to SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2") text = "This is a test document." query_result = embeddings.embed_query(text) doc_result = embeddings.embed_documents([text, "This is not a test document."]) previous Self Hosted Embeddings next TensorflowHub By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/sentence_transformers.html
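These embeddings are usually consumed by a vector store. An illustrative sketch, not part of the original page and assuming faiss-cpu is installed, of backing a FAISS similarity search with the model above:

# !pip install faiss-cpu
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
texts = ["This is a test document.", "This is not a test document.", "Cats sleep most of the day."]
db = FAISS.from_texts(texts, embeddings)
docs = db.similarity_search("test document", k=2)
print([d.page_content for d in docs])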
e82efbe145af-0
.ipynb .pdf Google Cloud Platform Vertex AI PaLM Google Cloud Platform Vertex AI PaLM# Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available there. PaLM API on Vertex AI is a Preview offering, subject to the Pre-GA Offerings Terms of the GCP Service Specific Terms. Pre-GA products and features may have limited support, and changes to pre-GA products and features may not be compatible with other pre-GA versions. For more information, see the launch stage descriptions. Further, by using PaLM API on Vertex AI, you agree to the Generative AI Preview terms and conditions (Preview Terms). For PaLM API on Vertex AI, you can process personal data as outlined in the Cloud Data Processing Addendum, subject to applicable restrictions and obligations in the Agreement (as defined in the Preview Terms). To use Vertex AI PaLM you must have the google-cloud-aiplatform Python package installed and either: Have credentials configured for your environment (gcloud, workload identity, etc…) Store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variable This codebase uses the google.auth library which first looks for the application credentials variable mentioned above, and then looks for system-level auth. For more information, see: https://cloud.google.com/docs/authentication/application-default-credentials#GAC https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth #!pip install google-cloud-aiplatform from langchain.embeddings import VertexAIEmbeddings embeddings = VertexAIEmbeddings() text = "This is a test document." query_result = embeddings.embed_query(text) doc_result = embeddings.embed_documents([text]) previous Fake Embeddings next Hugging Face Hub By Harrison Chase
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/google_vertex_ai_palm.html
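A small sketch of the second credential option mentioned above; the key-file path is a placeholder.

import os

# point google.auth at a service account key file (placeholder path)
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account.json"

from langchain.embeddings import VertexAIEmbeddings
embeddings = VertexAIEmbeddings()
query_result = embeddings.embed_query("This is a test document.")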