| id | text | source |
|---|---|---|
71392c374e26-2 | ),
]
# Construct the agent. We will use the default agent type here.
# See documentation for a full list of options.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What did biden say about ketanji brown jackson in the state of the union address?")
> Entering n... | https://python.langchain.com/en/latest/modules/agents/agent_executors/examples/agent_vectorstore.html |
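The row above truncates the agent construction. Below is a minimal sketch of how the pieces it references could fit together, assuming an OpenAI API key, the Chroma vectorstore backend, and a local state_of_the_union.txt file; the file path, collection name, and tool description are illustrative, not taken from the dataset row.

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

llm = OpenAI(temperature=0)

# Build a vectorstore over the speech (path and collection name are placeholders).
documents = TextLoader("state_of_the_union.txt").load()
texts = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(documents)
docsearch = Chroma.from_documents(texts, OpenAIEmbeddings(), collection_name="state-of-union")
state_of_union = RetrievalQA.from_chain_type(
    llm=llm, chain_type="stuff", retriever=docsearch.as_retriever()
)

# Expose the RetrievalQA chain to the agent as a tool.
tools = [
    Tool(
        name="State of Union QA System",
        func=state_of_union.run,
        description="useful for when you need to answer questions about the most recent state of the union address.",
    ),
]

# Construct the agent and ask the question from the row above.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What did biden say about ketanji brown jackson in the state of the union address?")
```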
71392c374e26-3 | Action Input: What are the advantages of using ruff over flake8?
Observation: Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quali... | https://python.langchain.com/en/latest/modules/agents/agent_executors/examples/agent_vectorstore.html |
71392c374e26-4 | Notice that in the above examples the agent did some extra work after querying the RetrievalQAChain. You can avoid that and just return the result directly.
tools = [
Tool(
name = "State of Union QA System",
func=state_of_union.run,
description="useful for when you need to answer questions a... | https://python.langchain.com/en/latest/modules/agents/agent_executors/examples/agent_vectorstore.html |
71392c374e26-5 | Action Input: What are the advantages of using ruff over flake8?
Observation: Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quali... | https://python.langchain.com/en/latest/modules/agents/agent_executors/examples/agent_vectorstore.html |
71392c374e26-6 | Tool(
name = "Ruff QA System",
func=ruff.run,
description="useful for when you need to answer questions about ruff (a python linter). Input should be a fully formed question, not referencing any obscure pronouns from the conversation before."
),
]
# Construct the agent. We will use the defau... | https://python.langchain.com/en/latest/modules/agents/agent_executors/examples/agent_vectorstore.html |
71392c374e26-7 | previous
Agent Executors
next
How to use the async API for Agents
Contents
Create the Vectorstore
Create the Agent
Use the Agent solely as a router
Multi-Hop vectorstore reasoning
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/modules/agents/agent_executors/examples/agent_vectorstore.html |
a7a8b4c497fb-0 | .ipynb
.pdf
How to access intermediate steps
How to access intermediate steps#
In order to get more visibility into what an agent is doing, we can also return intermediate steps. This comes in the form of an extra key in the return value, which is a list of (action, observation) tuples.
from langchain.agents import loa... | https://python.langchain.com/en/latest/modules/agents/agent_executors/examples/intermediate_steps.html |
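The import line is cut off in the row above. A minimal sketch of returning intermediate steps, assuming an OpenAI API key and that the serpapi and llm-math tools are available; the example question mirrors the output shown in the next row.

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# return_intermediate_steps=True adds an "intermediate_steps" key to the output:
# a list of (AgentAction, observation) tuples.
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    return_intermediate_steps=True,
)

response = agent(
    {"input": "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"}
)
print(response["intermediate_steps"])
```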
a7a8b4c497fb-1 | > Finished chain.
# The actual return type is a NamedTuple for the agent action, and then an observation
print(response["intermediate_steps"])
[(AgentAction(tool='Search', tool_input='Leo DiCaprio girlfriend', log=' I should look up who Leo DiCaprio is dating\nAction: Search\nAction Input: "Leo DiCaprio girlfriend"'), ... | https://python.langchain.com/en/latest/modules/agents/agent_executors/examples/intermediate_steps.html |
a7a8b4c497fb-2 | ],
"Answer: 3.991298452658078\n"
]
]
previous
Handle Parsing Errors
next
How to cap the max number of iterations
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/modules/agents/agent_executors/examples/intermediate_steps.html |
72d58b17005c-0 | .ipynb
.pdf
How to cap the max number of iterations
How to cap the max number of iterations#
This notebook walks through how to cap an agent at a certain number of steps. This can be useful to ensure that the agent does not go haywire and take too many steps.
from langchain.agents import load_tools
from langchain.agent... | https://python.langchain.com/en/latest/modules/agents/agent_executors/examples/max_iterations.html |
72d58b17005c-1 | Final Answer: foo
> Finished chain.
'foo'
Now let’s try it again with the max_iterations=2 keyword argument. It now stops nicely after a certain number of iterations!
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True, max_iterations=2)
agent.run(adversarial_prompt)
> Enterin... | https://python.langchain.com/en/latest/modules/agents/agent_executors/examples/max_iterations.html |
72d58b17005c-2 | By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/modules/agents/agent_executors/examples/max_iterations.html |
044fcb80628d-0 | .rst
.pdf
Text Embedding Models
Text Embedding Models#
Note
Conceptual Guide
This documentation goes over how to use the Embedding class in LangChain.
The Embedding class is a class designed for interfacing with embeddings. There are lots of Embedding providers (OpenAI, Cohere, Hugging Face, etc) - this class is design... | https://python.langchain.com/en/latest/modules/models/text_embedding.html |
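The description of the Embedding interface is truncated above; the two methods it standardizes are embed_query for a single text and embed_documents for a batch. A minimal sketch with the OpenAI provider, assuming an OpenAI API key in the environment:

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# One vector for a single query string.
query_vector = embeddings.embed_query("This is a test document.")

# One vector per document for a batch of texts.
doc_vectors = embeddings.embed_documents(["doc one", "doc two", "doc three"])

print(len(query_vector), len(doc_vectors))
```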
55618d3d0fe5-0 | .ipynb
.pdf
Getting Started
Contents
Language Models
text -> text interface
messages -> message interface
Getting Started#
One of the core value props of LangChain is that it provides a standard interface to models. This allows you to swap easily between models. At a high level, there are two main types of models:
La... | https://python.langchain.com/en/latest/modules/models/getting_started.html |
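A minimal sketch of the two model types the chunk contrasts (a text-in/text-out language model versus a messages-in/message-out chat model), assuming an OpenAI API key:

```python
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Language model: text in -> text out
llm = OpenAI()
text_out = llm("say hi!")

# Chat model: messages in -> message out
chat = ChatOpenAI()
message_out = chat([HumanMessage(content="say hi!")])

print(text_out)
print(message_out.content)
```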
55618d3d0fe5-1 | Contents
Language Models
text -> text interface
messages -> message interface
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/modules/models/getting_started.html |
cec6e10597dd-0 | .rst
.pdf
LLMs
LLMs#
Note
Conceptual Guide
Large Language Models (LLMs) are a core component of LangChain.
LangChain is not a provider of LLMs, but rather provides a standard interface through which
you can interact with a variety of LLMs.
The following sections of documentation are provided:
Getting Started: An overvi... | https://python.langchain.com/en/latest/modules/models/llms.html |
67800dac5fde-0 | .rst
.pdf
Chat Models
Chat Models#
Note
Conceptual Guide
Chat models are a variation on language models.
While chat models use language models under the hood, the interface they expose is a bit different.
Rather than expose a “text in, text out” API, they expose an interface where “chat messages” are the inputs and out... | https://python.langchain.com/en/latest/modules/models/chat.html |
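A minimal sketch of the message-based interface described above, assuming an OpenAI API key; the inputs are message objects and the response comes back as an AIMessage rather than a plain string:

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(temperature=0)

result = chat([
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="I love programming."),
])
# result is an AIMessage, e.g. AIMessage(content="J'adore la programmation.")
print(result.content)
```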
e014fabf73b7-0 | .ipynb
.pdf
MosaicML embeddings
MosaicML embeddings#
MosaicML offers a managed inference service. You can either use a variety of open source models, or deploy your own.
This example goes over how to use LangChain to interact with MosaicML Inference for text embedding.
# sign up for an account: https://forms.mosaicml.c... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/mosaicml.html |
e490062b6482-0 | .ipynb
.pdf
Sentence Transformers Embeddings
Sentence Transformers Embeddings#
SentenceTransformers embeddings are called using the HuggingFaceEmbeddings integration. We have also added an alias for SentenceTransformerEmbeddings for users who are more familiar with directly using that package.
SentenceTransformers is a... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/sentence_transformers.html |
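The row is cut off; a minimal sketch of both spellings it mentions, the HuggingFaceEmbeddings integration and the SentenceTransformerEmbeddings alias, assuming the sentence_transformers package is installed; the all-MiniLM-L6-v2 model name is an illustrative choice.

```python
from langchain.embeddings import HuggingFaceEmbeddings, SentenceTransformerEmbeddings

# SentenceTransformerEmbeddings is an alias for the same class.
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
embeddings_alias = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")

text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text, "This is not a test document."])
```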
27372f799b96-0 | .ipynb
.pdf
TensorflowHub
TensorflowHub#
Let’s load the TensorflowHub Embedding class.
from langchain.embeddings import TensorflowHubEmbeddings
embeddings = TensorflowHubEmbeddings()
2023-01-30 23:53:01.652176: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neu... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/tensorflowhub.html |
9e9600551ae8-0 | .ipynb
.pdf
SageMaker Endpoint Embeddings
SageMaker Endpoint Embeddings#
Let’s load the SageMaker Endpoints Embeddings class. The class can be used if you host, e.g. your own Hugging Face model on SageMaker.
For instructions on how to do this, please see here. Note: In order to handle batched requests, you will need to... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/sagemaker-endpoint.html |
9e9600551ae8-1 | query_result = embeddings.embed_query("foo")
doc_results = embeddings.embed_documents(["foo"])
doc_results
previous
OpenAI
next
Self Hosted Embeddings
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/sagemaker-endpoint.html |
db9077de25e4-0 | .ipynb
.pdf
InstructEmbeddings
InstructEmbeddings#
Let’s load the HuggingFace instruct Embeddings class.
from langchain.embeddings import HuggingFaceInstructEmbeddings
embeddings = HuggingFaceInstructEmbeddings(
query_instruction="Represent the query for retrieval: "
)
load INSTRUCTOR_Transformer
max_seq_length 51... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/instruct_embeddings.html |
0eba439971bd-0 | .ipynb
.pdf
Cohere
Cohere#
Let’s load the Cohere Embedding class.
from langchain.embeddings import CohereEmbeddings
embeddings = CohereEmbeddings(cohere_api_key=cohere_api_key)
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
previous
AzureOpe... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/cohere.html |
3b8031fcf00c-0 | .ipynb
.pdf
Aleph Alpha
Contents
Asymmetric
Symmetric
Aleph Alpha#
There are two possible ways to use Aleph Alpha’s semantic embeddings. If you have texts with a dissimilar structure (e.g. a Document and a Query) you would want to use asymmetric embeddings. Conversely, for texts with comparable structures, symmetric ... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/aleph_alpha.html |
968b6fbdda17-0 | .ipynb
.pdf
AzureOpenAI
AzureOpenAI#
Let’s load the OpenAI Embedding class with environment variables set to indicate to use Azure endpoints.
# set the environment variables needed for openai package to know to reach out to azure
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https:/... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/azureopenai.html |
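The environment setup above is truncated. A minimal sketch of the rest, with placeholder values; the deployment and chunk_size parameters are assumptions about how an Azure embeddings deployment is typically addressed, not taken from the row.

```python
import os
from langchain.embeddings import OpenAIEmbeddings

# set the environment variables needed for openai package to know to reach out to azure
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint>.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your-api-key"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"  # assumption: use the version your resource supports

# Point the class at the Azure deployment of your embedding model (name is a placeholder).
embeddings = OpenAIEmbeddings(deployment="your-embeddings-deployment-name", chunk_size=1)
query_result = embeddings.embed_query("This is a test document.")
```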
328aaeec6ced-0 | .ipynb
.pdf
Fake Embeddings
Fake Embeddings#
LangChain also provides a fake embedding class. You can use this to test your pipelines.
from langchain.embeddings import FakeEmbeddings
embeddings = FakeEmbeddings(size=1352)
query_result = embeddings.embed_query("foo")
doc_results = embeddings.embed_documents(["foo"])
prev... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/fake.html |
e17929127740-0 | .ipynb
.pdf
Llama-cpp
Llama-cpp#
This notebook goes over how to use Llama-cpp embeddings within LangChain
!pip install llama-cpp-python
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/path/to/model/ggml-model-q4_0.bin")
text = "This is a test document."
query_result = llama.e... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/llamacpp.html |
47215d0b0022-0 | .ipynb
.pdf
Self Hosted Embeddings
Self Hosted Embeddings#
Let’s load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes.
from langchain.embeddings import (
SelfHostedEmbeddings,
SelfHostedHuggingFaceEmbeddings,
SelfHostedHuggingFaceInstructEmbeddi... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/self-hosted.html |
47215d0b0022-1 | tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
return pipeline("feature-extraction", model=model, tokenizer=tokenizer)
def inference_fn(pipeline, prompt):
# Return last hidden state of the model
if isinstance(prompt, list):
return [emb[... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/self-hosted.html |
2c4fcc914455-0 | .ipynb
.pdf
MiniMax
MiniMax#
MiniMax offers an embeddings service.
This example goes over how to use LangChain to interact with MiniMax Inference for text embedding.
import os
os.environ["MINIMAX_GROUP_ID"] = "MINIMAX_GROUP_ID"
os.environ["MINIMAX_API_KEY"] = "MINIMAX_API_KEY"
from langchain.embeddings import MiniMaxEm... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/minimax.html |
5e1aa3ca2940-0 | .ipynb
.pdf
Hugging Face Hub
Hugging Face Hub#
Let’s load the Hugging Face Embedding class.
from langchain.embeddings import HuggingFaceEmbeddings
embeddings = HuggingFaceEmbeddings()
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
previous
G... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/huggingfacehub.html |
e80f5c9174a7-0 | .ipynb
.pdf
Jina
Jina#
Let’s load the Jina Embedding class.
from langchain.embeddings import JinaEmbeddings
embeddings = JinaEmbeddings(jina_auth_token=jina_auth_token, model_name="ViT-B-32::openai")
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([t... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/jina.html |
cb5a28ed1c5b-0 | .ipynb
.pdf
OpenAI
OpenAI#
Let’s load the OpenAI Embedding class.
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
Let’s load the OpenAI Embedding class with fir... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/openai.html |
0802de7cd365-0 | .ipynb
.pdf
Contents
!pip -q install elasticsearch langchain
import elasticsearch
from langchain.embeddings.elasticsearch import ElasticsearchEmbeddings
# Define the model ID
model_id = 'your_model_id'
# Instantiate ElasticsearchEmbeddings using credentials
embeddings = ElasticsearchEmbeddings.from_credentials(
m... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/elasticsearch.html |
d7e493d141dd-0 | .ipynb
.pdf
ModelScope
ModelScope#
Let’s load the ModelScope Embedding class.
from langchain.embeddings import ModelScopeEmbeddings
model_id = "damo/nlp_corom_sentence-embedding_english-base"
embeddings = ModelScopeEmbeddings(model_id=model_id)
text = "This is a test document."
query_result = embeddings.embed_query(tex... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/modelscope_hub.html |
355b547d5a89-0 | .ipynb
.pdf
Google Cloud Platform Vertex AI PaLM
Google Cloud Platform Vertex AI PaLM#
Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this integration supports the models made available there.
PaLM API on Vertex AI is a Preview offering, su... | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/google_vertex_ai_palm.html |
355b547d5a89-1 | previous
Fake Embeddings
next
Hugging Face Hub
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/google_vertex_ai_palm.html |
16e02874f6b5-0 | .rst
.pdf
How-To Guides
How-To Guides#
The examples here all address certain “how-to” guides for working with chat models.
How to use few shot examples
How to stream responses
previous
Getting Started
next
How to use few shot examples
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated ... | https://python.langchain.com/en/latest/modules/models/chat/how_to_guides.html |
b551ba091b16-0 | .ipynb
.pdf
Getting Started
Contents
PromptTemplates
LLMChain
Streaming
Getting Started#
This notebook covers how to get started with chat models. The interface is based around messages rather than raw text.
from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate, LLMChain
from langchain.pro... | https://python.langchain.com/en/latest/modules/models/chat/getting_started.html |
b551ba091b16-1 | [
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="I love programming.")
],
[
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="I love artificial intelligenc... | https://python.langchain.com/en/latest/modules/models/chat/getting_started.html |
b551ba091b16-2 | human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_langua... | https://python.langchain.com/en/latest/modules/models/chat/getting_started.html |
b551ba091b16-3 | Sparkling water, you're my vibe
Verse 2:
No sugar, no calories, just pure bliss
A drink that's hard to resist
It's the perfect way to quench my thirst
A drink that always comes first
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Bridge:... | https://python.langchain.com/en/latest/modules/models/chat/getting_started.html |
bf46d17f09a7-0 | .rst
.pdf
Integrations
Integrations#
The examples here all highlight how to integrate with different chat models.
Anthropic
Azure
Google Cloud Platform Vertex AI PaLM
OpenAI
PromptLayer ChatOpenAI
previous
How to stream responses
next
Anthropic
By Harrison Chase
© Copyright 2023, Harrison Chase.
Las... | https://python.langchain.com/en/latest/modules/models/chat/integrations.html |
4a39b7d25927-0 | .ipynb
.pdf
PromptLayer ChatOpenAI
Contents
Install PromptLayer
Imports
Set the Environment API Key
Use the PromptLayerOpenAI LLM like normal
Using PromptLayer Track
PromptLayer ChatOpenAI#
This example showcases how to connect to PromptLayer to start recording your ChatOpenAI requests.
Install PromptLayer#
The promp... | https://python.langchain.com/en/latest/modules/models/chat/integrations/promptlayer_chatopenai.html |
4a39b7d25927-1 | chat = PromptLayerChatOpenAI(return_pl_id=True)
chat_results = chat.generate([[HumanMessage(content="I am a cat and I want")]])
for res in chat_results.generations:
pl_request_id = res[0].generation_info["pl_request_id"]
promptlayer.track.score(request_id=pl_request_id, score=100)
Using this allows you to track... | https://python.langchain.com/en/latest/modules/models/chat/integrations/promptlayer_chatopenai.html |
19e1ba7a9904-0 | .ipynb
.pdf
Azure
Azure#
This notebook goes over how to connect to an Azure hosted OpenAI endpoint
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage
BASE_URL = "https://${TODO}.openai.azure.com"
API_KEY = "..."
DEPLOYMENT_NAME = "chat"
model = AzureChatOpenAI(
openai_api_ba... | https://python.langchain.com/en/latest/modules/models/chat/integrations/azure_chat_openai.html |
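The constructor call is cut off at openai_api_ba... above; a minimal sketch of how it is typically completed, with the API version string as an illustrative placeholder:

```python
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage

BASE_URL = "https://${TODO}.openai.azure.com"
API_KEY = "..."
DEPLOYMENT_NAME = "chat"

model = AzureChatOpenAI(
    openai_api_base=BASE_URL,
    openai_api_version="2023-03-15-preview",  # placeholder: match the version your resource supports
    deployment_name=DEPLOYMENT_NAME,
    openai_api_key=API_KEY,
    openai_api_type="azure",
)

model([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
```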
d5eb7e2bb306-0 | .ipynb
.pdf
Anthropic
Contents
ChatAnthropic also supports async and streaming functionality:
Anthropic#
This notebook covers how to get started with Anthropic chat models.
from langchain.chat_models import ChatAnthropic
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
... | https://python.langchain.com/en/latest/modules/models/chat/integrations/anthropic.html |
c16fc3efbfc6-0 | .ipynb
.pdf
OpenAI
OpenAI#
This notebook covers how to get started with OpenAI chat models.
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
AIMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.schema impo... | https://python.langchain.com/en/latest/modules/models/chat/integrations/openai.html |
c16fc3efbfc6-1 | AIMessage(content="J'adore la programmation.", additional_kwargs={})
previous
Google Cloud Platform Vertex AI PaLM
next
PromptLayer ChatOpenAI
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/modules/models/chat/integrations/openai.html |
3ca37efd578d-0 | .ipynb
.pdf
Google Cloud Platform Vertex AI PaLM
Google Cloud Platform Vertex AI PaLM#
Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this integration supports the models made available there.
PaLM API on Vertex AI is a Preview offering, su... | https://python.langchain.com/en/latest/modules/models/chat/integrations/google_vertex_ai_palm.html |
3ca37efd578d-1 | HumanMessage,
SystemMessage
)
chat = ChatVertexAI()
messages = [
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)
AIMessage(content='Sure, here is the translat... | https://python.langchain.com/en/latest/modules/models/chat/integrations/google_vertex_ai_palm.html |
24227218203d-0 | .ipynb
.pdf
How to stream responses
How to stream responses#
This notebook goes over how to use streaming with a chat model.
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
HumanMessage,
)
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
chat = ChatOpenAI(s... | https://python.langchain.com/en/latest/modules/models/chat/examples/streaming.html |
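The constructor is truncated at chat = ChatOpenAI(s... above; a minimal sketch of the streaming setup it is building toward, assuming an OpenAI API key:

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# streaming=True plus a stdout callback handler prints tokens as they arrive.
chat = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])
```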
24227218203d-1 | previous
How to use few shot examples
next
Integrations
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/modules/models/chat/examples/streaming.html |
60251cd72590-0 | .ipynb
.pdf
How to use few shot examples
Contents
Alternating Human/AI messages
System Messages
How to use few shot examples#
This notebook covers how to use few shot examples in chat models.
There does not appear to be solid consensus on how best to do few shot prompting. As a result, we are not solidifying any abst... | https://python.langchain.com/en/latest/modules/models/chat/examples/few_shot_examples.html |
60251cd72590-1 | template="You are a helpful assistant that translates english to pirate."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
example_human = SystemMessagePromptTemplate.from_template("Hi", additional_kwargs={"name": "example_user"})
example_ai = SystemMessagePromptTemplate.from_template("Argh m... | https://python.langchain.com/en/latest/modules/models/chat/examples/few_shot_examples.html |
ac14587e7d5d-0 | .rst
.pdf
Generic Functionality
Generic Functionality#
The examples here all address certain “how-to” guides for working with LLMs.
How to use the async API for LLMs
How to write a custom LLM wrapper
How (and why) to use the fake LLM
How (and why) to use the human input LLM
How to cache LLM calls
How to serialize LLM c... | https://python.langchain.com/en/latest/modules/models/llms/how_to_guides.html |
282a677b641a-0 | .ipynb
.pdf
Getting Started
Getting Started#
This notebook goes over how to use the LLM class in LangChain.
The LLM class is a class designed for interfacing with LLMs. There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them. In this p... | https://python.langchain.com/en/latest/modules/models/llms/getting_started.html |
282a677b641a-1 | llm_result.generations[-1]
[Generation(text="\n\nWhat if love neverspeech\n\nWhat if love never ended\n\nWhat if love was only a feeling\n\nI'll never know this love\n\nIt's not a feeling\n\nBut it's what we have for each other\n\nWe just know that love is something strong\n\nAnd we can't help but be happy\n\nWe just f... | https://python.langchain.com/en/latest/modules/models/llms/getting_started.html |
282a677b641a-2 | By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/modules/models/llms/getting_started.html |
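A minimal sketch of the LLM interface the Getting Started chunk describes (calling the model on a single prompt, and generate for a batch), assuming an OpenAI API key; the model name and prompts are illustrative.

```python
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-ada-001", n=2, best_of=2)

# Single prompt in, single string out.
print(llm("Tell me a joke"))

# Batched prompts return an LLMResult with one list of Generations per prompt.
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 15)
print(len(llm_result.generations))
print(llm_result.generations[-1])
```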
4627b6c7bcfb-0 | .rst
.pdf
Integrations
Integrations#
The examples here are all “how-to” guides for how to integrate with various LLM providers.
AI21
Aleph Alpha
Anyscale
Azure OpenAI
Banana
Beam integration for langchain
CerebriumAI
Cohere
C Transformers
Databricks
DeepInfra
ForefrontAI
Google Cloud Platform Vertex AI PaLM
GooseAI
GPT... | https://python.langchain.com/en/latest/modules/models/llms/integrations.html |
3f21b0df6f48-0 | .ipynb
.pdf
Huggingface TextGen Inference
Huggingface TextGen Inference#
Text Generation Inference is a Rust, Python and gRPC server for text generation inference. Used in production at HuggingFace to power LLMs api-inference widgets.
This notebook goes over how to use a self-hosted LLM using Text Generation Inference... | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_textgen_inference.html |
df005c2e10b5-0 | .ipynb
.pdf
Databricks
Contents
Wrapping a serving endpoint
Wrapping a cluster driver proxy app
Databricks#
The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.
This example notebook shows how to wrap Databricks endpoints as LLMs in LangChain.
It supports two endpoint types:
Serving endp... | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html |
df005c2e10b5-1 | # See https://docs.databricks.com/dev-tools/auth.html#databricks-personal-access-tokens
# We strongly recommend not exposing the API token explicitly inside a notebook.
# You can use Databricks secret manager to store your API token securely.
# See https://docs.databricks.com/dev-tools/databricks-utils.html#secrets-uti... | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html |
df005c2e10b5-2 | It uses a port number between [3000, 8000] and listens on the driver IP address, or simply 0.0.0.0, instead of localhost only.
You have “Can Attach To” permission to the cluster.
The expected server schema (using JSON schema) is:
inputs:
{"type": "object",
"properties": {
"prompt": {"type": "string"},
"stop": {"... | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html |
df005c2e10b5-3 | self.matched = self.stop[i]
return True
return False
def llm(prompt, stop=None, **kwargs):
check_stop = CheckStop(stop)
result = dolly(prompt, stopping_criteria=[check_stop], **kwargs)
return result[0]["generated_text"].rstrip(check_stop.matched)
app = Flask("dolly")
@app.route('/', method... | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html |
df005c2e10b5-4 | # Use `transform_input_fn` and `transform_output_fn` if the app
# expects a different input schema and does not return a JSON string,
# respectively, or you want to apply a prompt template on top.
def transform_input(**request):
full_prompt = f"""{request["prompt"]}
Be Concise.
"""
request["prompt"] = f... | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html |
4c73ac4e2785-0 | .ipynb
.pdf
Hugging Face Hub
Contents
Examples
StableLM, by Stability AI
Dolly, by DataBricks
Camel, by Writer
Hugging Face Hub#
The Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily col... | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_hub.html |
4c73ac4e2785-1 | StableLM, by Stability AI#
See Stability AI’s organization page for a list of available models.
repo_id = "stabilityai/stablelm-tuned-alpha-3b"
# Others include stabilityai/stablelm-base-alpha-3b
# as well as 7B parameter versions
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
# ... | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_hub.html |
4c73ac4e2785-2 | Hugging Face Local Pipelines
Contents
Examples
StableLM, by Stability AI
Dolly, by DataBricks
Camel, by Writer
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_hub.html |
9c4e45db4513-0 | .ipynb
.pdf
MosaicML
MosaicML#
MosaicML offers a managed inference service. You can either use a variety of open source models, or deploy your own.
This example goes over how to use LangChain to interact with MosaicML Inference for text completion.
# sign up for an account: https://forms.mosaicml.com/demo?utm_source=la... | https://python.langchain.com/en/latest/modules/models/llms/integrations/mosaicml.html |
e5e5ed072b09-0 | .ipynb
.pdf
Writer
Writer#
Writer is a platform for generating different kinds of language content.
This example goes over how to use LangChain to interact with Writer models.
You have to get the WRITER_API_KEY here.
from getpass import getpass
WRITER_API_KEY = getpass()
import os
os.environ["WRITER_API_KEY"] = WRITER_API_KEY
from... | https://python.langchain.com/en/latest/modules/models/llms/integrations/writer.html |
a9664b7a0a7f-0 | .ipynb
.pdf
Beam integration for langchain
Beam integration for langchain#
Calls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. By calling the wrapper an instan... | https://python.langchain.com/en/latest/modules/models/llms/integrations/beam.html |
a9664b7a0a7f-1 | "torch",
"pillow",
"accelerate",
"safetensors",
"xformers",],
max_length="50",
verbose=False)
llm._deploy()
response = llm._call("Running machine learning on a remote GPU")
print(response)
previous
Banana
next
CerebriumAI
By Harrison Chas... | https://python.langchain.com/en/latest/modules/models/llms/integrations/beam.html |
8403744b3d9b-0 | .ipynb
.pdf
Manifest
Contents
Compare HF Models
Manifest#
This notebook goes over how to use Manifest and LangChain.
For more detailed information on Manifest, and how to use it with local Hugging Face models as in this example, see https://github.com/HazyResearch/manifest
Another example of using Manifest with Langc... | https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html |
8403744b3d9b-1 | state_of_the_union = f.read()
mp_chain.run(state_of_the_union)
'President Obama delivered his annual State of the Union address on Tuesday night, laying out his priorities for the coming year. Obama said the government will provide free flu vaccines to all Americans, ending the government shutdown and allowing business... | https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html |
8403744b3d9b-2 | )
manifest3 = ManifestWrapper(
client=Manifest(
client_name="huggingface",
client_connection="http://127.0.0.1:5002"
),
llm_kwargs={"temperature": 0.01}
)
llms = [manifest1, manifest2, manifest3]
model_lab = ModelLaboratory(llms)
model_lab.compare("What color is a flamingo?")
Input:
What col... | https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html |
cf590b13e73d-0 | .ipynb
.pdf
ForefrontAI
Contents
Imports
Set the Environment API Key
Create the ForefrontAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
ForefrontAI#
The Forefront platform gives you the ability to fine-tune and use open source large language models.
This notebook goes over how to use Lang... | https://python.langchain.com/en/latest/modules/models/llms/integrations/forefrontai_example.html |
cf590b13e73d-1 | DeepInfra
next
Google Cloud Platform Vertex AI PaLM
Contents
Imports
Set the Environment API Key
Create the ForefrontAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/modules/models/llms/integrations/forefrontai_example.html |
c4b2ed4ed5d7-0 | .ipynb
.pdf
Hugging Face Local Pipelines
Contents
Load the model
Integrate the model in an LLMChain
Hugging Face Local Pipelines#
Hugging Face models can be run locally through the HuggingFacePipeline class.
The Hugging Face Model Hub hosts over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source a... | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_pipelines.html |
c4b2ed4ed5d7-1 | question = "What is electroencephalography?"
print(llm_chain.run(question))
/Users/wfh/code/lc/lckg/.venv/lib/python3.11/site-packages/transformers/generation/utils.py:1288: UserWarning: Using `max_length`'s default (64) to control the generation length. This behaviour is deprecated and will be removed from the config ... | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_pipelines.html |
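The model-loading step is truncated in the rows above; a minimal sketch of running a Hugging Face model locally through HuggingFacePipeline and plugging it into an LLMChain. The model id and generation settings are illustrative, and the transformers package must be installed.

```python
from langchain import LLMChain, PromptTemplate
from langchain.llms import HuggingFacePipeline

# Load a model locally (model id and kwargs are illustrative).
llm = HuggingFacePipeline.from_model_id(
    model_id="bigscience/bloom-1b7",
    task="text-generation",
    model_kwargs={"temperature": 0, "max_length": 64},
)

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What is electroencephalography?"
print(llm_chain.run(question))
```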
3f8dac7ed298-0 | .ipynb
.pdf
GPT4All
Contents
Specify Model
GPT4All#
GitHub:nomic-ai/gpt4all is an ecosystem of open-source chatbots trained on a massive collection of clean assistant data, including code, stories and dialogue.
This example goes over how to use LangChain to interact with GPT4All models.
%pip install gpt4all > /dev/null
... | https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html |
3f8dac7ed298-1 | # # send a GET request to the URL to download the file. Stream since it's large
# response = requests.get(url, stream=True)
# # open the file in binary mode and write the contents of the response to it in chunks
# # This is a large file, so be prepared to wait.
# with open(local_path, 'wb') as f:
# for chunk in tqd... | https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html |
04d6300a6fee-0 | .ipynb
.pdf
GooseAI
Contents
Install openai
Imports
Set the Environment API Key
Create the GooseAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
GooseAI#
GooseAI is a fully managed NLP-as-a-Service, delivered via API. GooseAI provides access to these models.
This notebook goes over how to u... | https://python.langchain.com/en/latest/modules/models/llms/integrations/gooseai_example.html |
04d6300a6fee-1 | previous
Google Cloud Platform Vertex AI PaLM
next
GPT4All
Contents
Install openai
Imports
Set the Environment API Key
Create the GooseAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/modules/models/llms/integrations/gooseai_example.html |
a531a26e8d32-0 | .ipynb
.pdf
Structured Decoding with JSONFormer
Contents
HuggingFace Baseline
JSONFormer LLM Wrapper
Structured Decoding with JSONFormer#
JSONFormer is a library that wraps local HuggingFace pipeline models for structured decoding of a subset of the JSON Schema.
It works by filling in the structure tokens and then sa... | https://python.langchain.com/en/latest/modules/models/llms/integrations/jsonformer_experimental.html |
a531a26e8d32-1 | {arg_schema}
EXAMPLES
----
Human: "So what's all this about a GIL?"
AI Assistant:{{
"action": "ask_star_coder",
"action_input": {{"query": "What is a GIL?", "temperature": 0.0, "max_new_tokens": 100}}"
}}
Observation: "The GIL is python's Global Interpreter Lock"
Human: "Could you please write a calculator program ... | https://python.langchain.com/en/latest/modules/models/llms/integrations/jsonformer_experimental.html |
a531a26e8d32-2 | original_model = HuggingFacePipeline(pipeline=hf_model)
generated = original_model.predict(prompt, stop=["Observation:", "Human:"])
print(generated)
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
'What's the difference between an iterator and an iterable?'
That’s not so impressive, is it? It d... | https://python.langchain.com/en/latest/modules/models/llms/integrations/jsonformer_experimental.html |
081618bcc95e-0 | .ipynb
.pdf
Anyscale
Anyscale#
Anyscale is a fully-managed Ray platform on which you can build, deploy, and manage scalable AI and Python applications.
This example goes over how to use LangChain to interact with the Anyscale service
import os
os.environ["ANYSCALE_SERVICE_URL"] = ANYSCALE_SERVICE_URL
os.environ["ANYSCALE_S... | https://python.langchain.com/en/latest/modules/models/llms/integrations/anyscale.html |
081618bcc95e-1 | def send_query(llm, prompt):
resp = llm(prompt)
return resp
futures = [send_query.remote(llm, prompt) for prompt in prompt_list]
results = ray.get(futures)
previous
Aleph Alpha
next
Azure OpenAI
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/modules/models/llms/integrations/anyscale.html |
5cb4b4a1caea-0 | .ipynb
.pdf
StochasticAI
StochasticAI#
The Stochastic Acceleration Platform aims to simplify the life cycle of a deep learning model, from uploading and versioning the model, through training, compression, and acceleration, to putting it into production.
This example goes over how to use LangChain to interact with Stochastic... | https://python.langchain.com/en/latest/modules/models/llms/integrations/stochasticai.html |
b35f46087ca9-0 | .ipynb
.pdf
Cohere
Cohere#
Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.
This example goes over how to use LangChain to interact with Cohere models.
# Install the package
!pip install cohere
# get a new token: https://dashboard.cohe... | https://python.langchain.com/en/latest/modules/models/llms/integrations/cohere.html |
b35f46087ca9-1 | llm_chain.run(question)
" Let's start with the year that Justin Beiber was born. You know that he was born in 1994. We have to go back one year. 1993.\n\n1993 was the year that the Dallas Cowboys won the Super Bowl. They won over the Buffalo Bills in Super Bowl 26.\n\nNow, let's do it backwards. According to our inform... | https://python.langchain.com/en/latest/modules/models/llms/integrations/cohere.html |
f845bd1f9df9-0 | .ipynb
.pdf
Aleph Alpha
Aleph Alpha#
The Luminous series is a family of large language models.
This example goes over how to use LangChain to interact with Aleph Alpha models
# Install the package
!pip install aleph-alpha-client
# create a new token: https://docs.aleph-alpha.com/docs/account/#create-a-new-token
from ge... | https://python.langchain.com/en/latest/modules/models/llms/integrations/aleph_alpha.html |
299da108d4ac-0 | .ipynb
.pdf
AI21
AI21#
AI21 Studio provides API access to Jurassic-2 large language models.
This example goes over how to use LangChain to interact with AI21 models.
# install the package:
!pip install ai21
# get AI21_API_KEY. Use https://studio.ai21.com/account/account
from getpass import getpass
AI21_API_KEY = getpa... | https://python.langchain.com/en/latest/modules/models/llms/integrations/ai21.html |
13f49734621f-0 | .ipynb
.pdf
NLP Cloud
NLP Cloud#
NLP Cloud serves high-performance pre-trained or custom models for NER, sentiment analysis, classification, summarization, paraphrasing, grammar and spelling correction, keyword and keyphrase extraction, chatbots, product description and ad generation, intent classification, text g... | https://python.langchain.com/en/latest/modules/models/llms/integrations/nlpcloud.html |
5c096ebf8ae6-0 | .ipynb
.pdf
SageMakerEndpoint
Contents
Set up
Example
SageMakerEndpoint#
Amazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.
This notebook goes over how to use an LLM hosted on a SageMaker endpoint.
!pip... | https://python.langchain.com/en/latest/modules/models/llms/integrations/sagemaker.html |
5c096ebf8ae6-1 | import json
query = """How long was Elizabeth hospitalized?
"""
prompt_template = """Use the following pieces of context to answer the question at the end.
{context}
Question: {question}
Answer:"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
class ContentHandler(LLMC... | https://python.langchain.com/en/latest/modules/models/llms/integrations/sagemaker.html |
3aab583b76ca-0 | .ipynb
.pdf
Llama-cpp
Llama-cpp#
llama-cpp-python is a Python binding for llama.cpp.
It supports several LLMs.
This notebook goes over how to run llama-cpp within LangChain.
!pip install llama-cpp-python
Make sure you are following all instructions to install all necessary model files.
You don’t need an API_TOKEN!
from langch... | https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html |
3aab583b76ca-1 | ' First we need to identify what year Justin Beiber was born in. A quick google search reveals that he was born on March 1st, 1994. Now we know when the Super Bowl was played in, so we can look up which NFL team won it. The NFL Superbowl of the year 1994 was won by the San Francisco 49ers against the San Diego Chargers... | https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html |