id | source | text |
|---|---|---|
dbe0ca68dd64-0 | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/openai.html | .ipynb
.pdf
OpenAI
OpenAI#
Let’s load the OpenAI Embedding class.
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
Let’s load the OpenAI Embedding class with fir... |
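The `embed_query` / `embed_documents` calls above are the whole embedding interface. A minimal sketch of that interface, using a deterministic hash-based stand-in instead of the OpenAI API (no network or key needed; the vector values are fake, only the call shapes match):

```python
import hashlib
from typing import List

class FakeEmbeddings:
    """Stand-in with the same embed_query/embed_documents surface."""
    dim = 8

    def _embed(self, text: str) -> List[float]:
        # Derive a fixed-length pseudo-vector from a hash of the text.
        digest = hashlib.sha256(text.encode()).digest()
        return [b / 255 for b in digest[: self.dim]]

    def embed_query(self, text: str) -> List[float]:
        # One string in, one vector out.
        return self._embed(text)

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        # Many strings in, one vector per string out.
        return [self._embed(t) for t in texts]

embeddings = FakeEmbeddings()
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
```

Any provider class (OpenAI, Elasticsearch, ModelScope, …) plugs in behind this same two-method surface.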
b44f9da5a6bd-0 | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/elasticsearch.html | .ipynb
.pdf
Elasticsearch
Contents
Testing with from_credentials
Testing with Existing Elasticsearch client connection
Elasticsearch#
Walkthrough of how to generate embeddings using a hosted embedding model in Elasticsearch
The easiest way to instantiate the ElasticsearchEmbeddings class is either
using the from_cred... |
b44f9da5a6bd-1 | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/elasticsearch.html | # Create embeddings for multiple documents
documents = [
'This is an example document.',
'Another example document to generate embeddings for.'
]
document_embeddings = embeddings.embed_documents(documents)
# Print document embeddings
for i, embedding in enumerate(document_embeddings):
print(f"Embedding for... |
2b39d550ab50-0 | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/modelscope_hub.html | .ipynb
.pdf
ModelScope
ModelScope#
Let’s load the ModelScope Embedding class.
from langchain.embeddings import ModelScopeEmbeddings
model_id = "damo/nlp_corom_sentence-embedding_english-base"
embeddings = ModelScopeEmbeddings(model_id=model_id)
text = "This is a test document."
query_result = embeddings.embed_query(tex... |
2ef7e9f4cabe-0 | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/google_vertex_ai_palm.html | .ipynb
.pdf
Google Cloud Platform Vertex AI PaLM
Google Cloud Platform Vertex AI PaLM#
Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this integration supports the models made available there.
PaLM API on Vertex AI is a Preview offering, su... |
2ef7e9f4cabe-1 | https://python.langchain.com/en/latest/modules/models/text_embedding/examples/google_vertex_ai_palm.html | © Copyright 2023, Harrison Chase.
Last updated on Jun 04, 2023. |
14f79af5e62c-0 | https://python.langchain.com/en/latest/modules/models/chat/how_to_guides.html | .rst
.pdf
How-To Guides
How-To Guides#
The examples here all address certain “how-to” guides for working with chat models.
How to use few shot examples
How to stream responses
|
94d3bd8b892d-0 | https://python.langchain.com/en/latest/modules/models/chat/getting_started.html | .ipynb
.pdf
Getting Started
Contents
PromptTemplates
LLMChain
Streaming
Getting Started#
This notebook covers how to get started with chat models. The interface is based around messages rather than raw text.
from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate, LLMChain
from langchain.pro... |
94d3bd8b892d-1 | https://python.langchain.com/en/latest/modules/models/chat/getting_started.html | HumanMessage(content="I love programming.")
],
[
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="I love artificial intelligence.")
],
]
result = chat.generate(batch_messages)
result
LLMResult(generations=[[ChatGeneration(text="J'... |
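The batch call above takes a list of conversations and returns one list of generations per conversation. A runnable sketch of that shape, with stand-in message classes and a mock chat model (the names mirror LangChain's, but no model is called and the "translations" are canned):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SystemMessage:
    content: str

@dataclass
class HumanMessage:
    content: str

class FakeChat:
    """Mock chat model: echoes the last human message per conversation."""
    def generate(self, batch: List[list]) -> List[List[str]]:
        out = []
        for messages in batch:
            # One inner list of generations per input conversation.
            human = [m for m in messages if isinstance(m, HumanMessage)][-1]
            out.append([f"(translated) {human.content}"])
        return out

batch_messages = [
    [SystemMessage(content="You are a helpful assistant that translates English to French."),
     HumanMessage(content="I love programming.")],
    [SystemMessage(content="You are a helpful assistant that translates English to French."),
     HumanMessage(content="I love artificial intelligence.")],
]
result = FakeChat().generate(batch_messages)
```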
94d3bd8b892d-2 | https://python.langchain.com/en/latest/modules/models/chat/getting_started.html | # get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
AIMessage(content="J'adore la programmation.", additional_kwargs={})
If you wanted to construct the MessagePromptTemplate more directly, you c... |
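The `format_prompt(...).to_messages()` pattern above boils down to: substitute variables into per-role templates, then emit the message list a chat model consumes. A sketch with plain string formatting and dicts standing in for the template and message classes:

```python
# Illustrative stand-ins for ChatPromptTemplate: one template per role.
system_template = ("You are a helpful assistant that translates "
                   "{input_language} to {output_language}.")
human_template = "{text}"

def to_messages(**kwargs):
    # Fill each role's template with the supplied variables.
    # (str.format silently ignores unused keyword arguments.)
    return [
        {"role": "system", "content": system_template.format(**kwargs)},
        {"role": "human", "content": human_template.format(**kwargs)},
    ]

messages = to_messages(input_language="English",
                       output_language="French",
                       text="I love programming.")
```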
94d3bd8b892d-3 | https://python.langchain.com/en/latest/modules/models/chat/getting_started.html | Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Bridge:
From the mountains to the sea
Sparkling water, you're the key
To a healthy life, a happy soul
A drink that makes me feel whole
Chorus:
Sparkling water, oh so fine
A drink that's always on my... |
fdcf23c508b6-0 | https://python.langchain.com/en/latest/modules/models/chat/integrations.html | .rst
.pdf
Integrations
Integrations#
The examples here all highlight how to integrate with different chat models.
Anthropic
Azure
Google Cloud Platform Vertex AI PaLM
OpenAI
PromptLayer ChatOpenAI
|
e82bc87e62ab-0 | https://python.langchain.com/en/latest/modules/models/chat/integrations/promptlayer_chatopenai.html | .ipynb
.pdf
PromptLayer ChatOpenAI
Contents
Install PromptLayer
Imports
Set the Environment API Key
Use the PromptLayerOpenAI LLM like normal
Using PromptLayer Track
PromptLayer ChatOpenAI#
This example showcases how to connect to PromptLayer to start recording your ChatOpenAI requests.
Install PromptLayer#
The promp... |
e82bc87e62ab-1 | https://python.langchain.com/en/latest/modules/models/chat/integrations/promptlayer_chatopenai.html | chat = PromptLayerChatOpenAI(return_pl_id=True)
chat_results = chat.generate([[HumanMessage(content="I am a cat and I want")]])
for res in chat_results.generations:
pl_request_id = res[0].generation_info["pl_request_id"]
promptlayer.track.score(request_id=pl_request_id, score=100)
Using this allows you to track... |
aa60a843c6cf-0 | https://python.langchain.com/en/latest/modules/models/chat/integrations/azure_chat_openai.html | .ipynb
.pdf
Azure
Azure#
This notebook goes over how to connect to an Azure hosted OpenAI endpoint
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage
BASE_URL = "https://${TODO}.openai.azure.com"
API_KEY = "..."
DEPLOYMENT_NAME = "chat"
model = AzureChatOpenAI(
openai_api_ba... |
94106df32451-0 | https://python.langchain.com/en/latest/modules/models/chat/integrations/anthropic.html | .ipynb
.pdf
Anthropic
Contents
ChatAnthropic also supports async and streaming functionality:
Anthropic#
This notebook covers how to get started with Anthropic chat models.
from langchain.chat_models import ChatAnthropic
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
... |
46bb561187a6-0 | https://python.langchain.com/en/latest/modules/models/chat/integrations/openai.html | .ipynb
.pdf
OpenAI
OpenAI#
This notebook covers how to get started with OpenAI chat models.
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
AIMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.schema impo... |
46bb561187a6-1 | https://python.langchain.com/en/latest/modules/models/chat/integrations/openai.html | AIMessage(content="J'adore la programmation.", additional_kwargs={})
|
5fab457f3ad8-0 | https://python.langchain.com/en/latest/modules/models/chat/integrations/google_vertex_ai_palm.html | .ipynb
.pdf
Google Cloud Platform Vertex AI PaLM
Google Cloud Platform Vertex AI PaLM#
Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this integration supports the models made available there.
PaLM API on Vertex AI is a Preview offering, su... |
5fab457f3ad8-1 | https://python.langchain.com/en/latest/modules/models/chat/integrations/google_vertex_ai_palm.html | messages = [
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)
AIMessage(content='Sure, here is the translation of the sentence "I love programming" from English to... |
38ccede58c37-0 | https://python.langchain.com/en/latest/modules/models/chat/examples/streaming.html | .ipynb
.pdf
How to stream responses
How to stream responses#
This notebook goes over how to use streaming with a chat model.
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
HumanMessage,
)
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
chat = ChatOpenAI(s... |
57d244985fee-0 | https://python.langchain.com/en/latest/modules/models/chat/examples/few_shot_examples.html | .ipynb
.pdf
How to use few shot examples
Contents
Alternating Human/AI messages
System Messages
How to use few shot examples#
This notebook covers how to use few shot examples in chat models.
There does not appear to be solid consensus on how best to do few shot prompting. As a result, we are not solidifying any abst... |
57d244985fee-1 | https://python.langchain.com/en/latest/modules/models/chat/examples/few_shot_examples.html | template="You are a helpful assistant that translates english to pirate."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
example_human = SystemMessagePromptTemplate.from_template("Hi", additional_kwargs={"name": "example_user"})
example_ai = SystemMessagePromptTemplate.from_template("Argh m... |
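The pattern above encodes few-shot example turns as system messages tagged with a `name` (e.g. `example_user`), so the model can tell worked examples from the live conversation. A sketch with plain dicts standing in for the message prompt classes:

```python
def system(content, name=None):
    """Build a system message, optionally tagged as a few-shot example."""
    msg = {"role": "system", "content": content}
    if name:
        msg["name"] = name  # e.g. "example_user" / "example_assistant"
    return msg

messages = [
    system("You are a helpful assistant that translates english to pirate."),
    system("Hi", name="example_user"),            # example human turn
    system("Argh me mateys", name="example_assistant"),  # example AI turn
    {"role": "human", "content": "I love programming."},  # the real query
]
```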
03324c8441f8-0 | https://python.langchain.com/en/latest/modules/models/llms/how_to_guides.html | .rst
.pdf
Generic Functionality
Generic Functionality#
The examples here all address certain “how-to” guides for working with LLMs.
How to use the async API for LLMs
How to write a custom LLM wrapper
How (and why) to use the fake LLM
How (and why) to use the human input LLM
How to cache LLM calls
How to serialize LLM c... |
179c2b687dfa-0 | https://python.langchain.com/en/latest/modules/models/llms/getting_started.html | .ipynb
.pdf
Getting Started
Getting Started#
This notebook goes over how to use the LLM class in LangChain.
The LLM class is a class designed for interfacing with LLMs. There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them. In this p... |
179c2b687dfa-1 | https://python.langchain.com/en/latest/modules/models/llms/getting_started.html | [Generation(text="\n\nWhat if love neverspeech\n\nWhat if love never ended\n\nWhat if love was only a feeling\n\nI'll never know this love\n\nIt's not a feeling\n\nBut it's what we have for each other\n\nWe just know that love is something strong\n\nAnd we can't help but be happy\n\nWe just feel what love is for us\n\n... |
ebd369240c89-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations.html | .rst
.pdf
Integrations
Integrations#
The examples here are all “how-to” guides for how to integrate with various LLM providers.
AI21
Aleph Alpha
Anyscale
Azure OpenAI
Banana
Beam integration for langchain
Amazon Bedrock
CerebriumAI
Cohere
C Transformers
Databricks
DeepInfra
ForefrontAI
Google Cloud Platform Vertex AI P... |
4ec46dcff27e-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_textgen_inference.html | .ipynb
.pdf
Huggingface TextGen Inference
Huggingface TextGen Inference#
Text Generation Inference is a Rust, Python and gRPC server for text generation inference, used in production at Hugging Face to power the LLM api-inference widgets.
This notebook goes over how to use a self-hosted LLM using Text Generation Inference... |
0353bf6eb3a0-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html | .ipynb
.pdf
Databricks
Contents
Wrapping a serving endpoint
Wrapping a cluster driver proxy app
Databricks#
The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.
This example notebook shows how to wrap Databricks endpoints as LLMs in LangChain.
It supports two endpoint types:
Serving endp... |
0353bf6eb3a0-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html | # See https://docs.databricks.com/dev-tools/auth.html#databricks-personal-access-tokens
# We strongly recommend not exposing the API token explicitly inside a notebook.
# You can use Databricks secret manager to store your API token securely.
# See https://docs.databricks.com/dev-tools/databricks-utils.html#secrets-uti... |
0353bf6eb3a0-2 | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html | It uses a port number between [3000, 8000] and litens to the driver IP address or simply 0.0.0.0 instead of localhost only.
You have “Can Attach To” permission to the cluster.
The expected server schema (using JSON schema) is:
inputs:
{"type": "object",
"properties": {
"prompt": {"type": "string"},
"stop": {"... |
0353bf6eb3a0-3 | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html | def llm(prompt, stop=None, **kwargs):
check_stop = CheckStop(stop)
result = dolly(prompt, stopping_criteria=[check_stop], **kwargs)
return result[0]["generated_text"].rstrip(check_stop.matched)
app = Flask("dolly")
@app.route('/', methods=['POST'])
def serve_llm():
resp = llm(**request.json)
return jsonify(re... |
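The stop handling sketched in the server code above (a `CheckStop` criterion plus trimming the matched stop text) can be illustrated with a pure-Python stand-in that cuts generated text at the first stop sequence; no model or Flask server involved:

```python
def apply_stop(text, stop=None):
    """Truncate text at the first occurrence of any stop sequence."""
    for s in stop or []:
        idx = text.find(s)
        if idx != -1:
            return text[:idx]
    return text

out = apply_stop("Paris is the capital.\nQuestion: next", stop=["\nQuestion:"])
```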
0353bf6eb3a0-4 | https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html | # respectively, or you want to apply a prompt template on top.
def transform_input(**request):
full_prompt = f"""{request["prompt"]}
Be Concise.
"""
request["prompt"] = full_prompt
return request
def transform_output(response):
return response.upper()
llm = Databricks(
cluster_driver_port="777... |
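The two hooks shown above are just functions: `transform_input` rewrites the request before it reaches the endpoint (here, appending an instruction to the prompt) and `transform_output` post-processes the raw response. A standalone, runnable version (no Databricks connection is made):

```python
def transform_input(**request):
    # Apply a prompt template on top of the incoming request.
    request["prompt"] = f'{request["prompt"]}\nBe Concise.\n'
    return request

def transform_output(response):
    # Post-process the endpoint's raw response.
    return response.upper()

req = transform_input(prompt="How are you?")
out = transform_output("i am fine")
```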
2efb543bffd8-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_hub.html | .ipynb
.pdf
Hugging Face Hub
Contents
Examples
StableLM, by Stability AI
Dolly, by DataBricks
Camel, by Writer
Hugging Face Hub#
The Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily col... |
2efb543bffd8-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_hub.html | See Stability AI’s organization page for a list of available models.
repo_id = "stabilityai/stablelm-tuned-alpha-3b"
# Others include stabilityai/stablelm-base-alpha-3b
# as well as 7B parameter versions
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
# Reuse the prompt and questi... |
a35aab68522c-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/bedrock.html | .ipynb
.pdf
Amazon Bedrock
Contents
Using in a conversation chain
Amazon Bedrock#
Amazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case
%pip install boto3
fro... |
0dc1e6376d10-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/mosaicml.html | .ipynb
.pdf
MosaicML
MosaicML#
MosaicML offers a managed inference service. You can either use a variety of open source models, or deploy your own.
This example goes over how to use LangChain to interact with MosaicML Inference for text completion.
# sign up for an account: https://forms.mosaicml.com/demo?utm_source=la... |
47d82e6dde3f-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/writer.html | .ipynb
.pdf
Writer
Writer#
Writer is a platform to generate different language content.
This example goes over how to use LangChain to interact with Writer models.
You have to get the WRITER_API_KEY here.
from getpass import getpass
WRITER_API_KEY = getpass()
import os
os.environ["WRITER_API_KEY"] = WRITER_API_KEY
from... |
38f70f70ea2d-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/beam.html | .ipynb
.pdf
Beam integration for langchain
Beam integration for langchain#
Calls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. By calling the wrapper an instan... |
38f70f70ea2d-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/beam.html | "safetensors",
"xformers",],
max_length="50",
verbose=False)
llm._deploy()
response = llm._call("Running machine learning on a remote GPU")
print(response)
|
b6d08cb26f5f-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html | .ipynb
.pdf
Manifest
Contents
Compare HF Models
Manifest#
This notebook goes over how to use Manifest and LangChain.
For more detailed information on Manifest, and how to use it with local Hugging Face models like in this example, see https://github.com/HazyResearch/manifest
Another example of using Manifest with Langc... |
b6d08cb26f5f-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html | 'President Obama delivered his annual State of the Union address on Tuesday night, laying out his priorities for the coming year. Obama said the government will provide free flu vaccines to all Americans, ending the government shutdown and allowing businesses to reopen. The president also said that the government will ... |
b6d08cb26f5f-2 | https://python.langchain.com/en/latest/modules/models/llms/integrations/manifest.html | client_connection="http://127.0.0.1:5002"
),
llm_kwargs={"temperature": 0.01}
)
llms = [manifest1, manifest2, manifest3]
model_lab = ModelLaboratory(llms)
model_lab.compare("What color is a flamingo?")
Input:
What color is a flamingo?
ManifestWrapper
Params: {'model_name': 'bigscience/T0_3B', 'model_path': 'big... |
2007ed395d32-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/forefrontai_example.html | .ipynb
.pdf
ForefrontAI
Contents
Imports
Set the Environment API Key
Create the ForefrontAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
ForefrontAI#
The Forefront platform gives you the ability to fine-tune and use open source large language models.
This notebook goes over how to use Lang... |
94a1322adcd0-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_pipelines.html | .ipynb
.pdf
Hugging Face Local Pipelines
Contents
Load the model
Integrate the model in an LLMChain
Hugging Face Local Pipelines#
Hugging Face models can be run locally through the HuggingFacePipeline class.
The Hugging Face Model Hub hosts over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source a... |
94a1322adcd0-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_pipelines.html | /Users/wfh/code/lc/lckg/.venv/lib/python3.11/site-packages/transformers/generation/utils.py:1288: UserWarning: Using `max_length`'s default (64) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the ... |
baa4fd273140-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html | .ipynb
.pdf
GPT4All
Contents
Specify Model
GPT4All#
GitHub: nomic-ai/gpt4all, an ecosystem of open-source chatbots trained on massive collections of clean assistant data including code, stories and dialogue.
This example goes over how to use LangChain to interact with GPT4All models.
%pip install gpt4all > /dev/null
... |
baa4fd273140-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html | # # send a GET request to the URL to download the file. Stream since it's large
# response = requests.get(url, stream=True)
# # open the file in binary mode and write the contents of the response to it in chunks
# # This is a large file, so be prepared to wait.
# with open(local_path, 'wb') as f:
# for chunk in tqd... |
76160afdccbd-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/gooseai_example.html | .ipynb
.pdf
GooseAI
Contents
Install openai
Imports
Set the Environment API Key
Create the GooseAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
GooseAI#
GooseAI is a fully managed NLP-as-a-Service, delivered via API. GooseAI provides access to these models.
This notebook goes over how to u... |
34615d318bfe-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/jsonformer_experimental.html | .ipynb
.pdf
Structured Decoding with JSONFormer
Contents
HuggingFace Baseline
JSONFormer LLM Wrapper
Structured Decoding with JSONFormer#
JSONFormer is a library that wraps local HuggingFace pipeline models for structured decoding of a subset of the JSON Schema.
It works by filling in the structure tokens and then sa... |
34615d318bfe-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/jsonformer_experimental.html | Human: "So what's all this about a GIL?"
AI Assistant:{{
"action": "ask_star_coder",
"action_input": {{"query": "What is a GIL?", "temperature": 0.0, "max_new_tokens": 100}}"
}}
Observation: "The GIL is python's Global Interpreter Lock"
Human: "Could you please write a calculator program in LISP?"
AI Assistant:{{
... |
34615d318bfe-2 | https://python.langchain.com/en/latest/modules/models/llms/integrations/jsonformer_experimental.html | Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
'What's the difference between an iterator and an iterable?'
That’s not so impressive, is it? It didn’t follow the JSON format at all! Let’s try with the structured decoder.
JSONFormer LLM Wrapper#
Let’s try that again, now providing the Action ... |
48c83c2246f1-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/anyscale.html | .ipynb
.pdf
Anyscale
Anyscale#
Anyscale is a fully-managed Ray platform on which you can build, deploy, and manage scalable AI and Python applications.
This example goes over how to use LangChain to interact with the Anyscale service.
import os
os.environ["ANYSCALE_SERVICE_URL"] = ANYSCALE_SERVICE_URL
os.environ["ANYSCALE_S... |
48c83c2246f1-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/anyscale.html | futures = [send_query.remote(llm, prompt) for prompt in prompt_list]
results = ray.get(futures)
|
2a998f12ae36-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/stochasticai.html | .ipynb
.pdf
StochasticAI
StochasticAI#
The Stochastic Acceleration Platform aims to simplify the life cycle of a deep learning model, from uploading and versioning the model, through training, compression and acceleration, to putting it into production.
This example goes over how to use LangChain to interact with Stochastic... |
4a3a8a3ab3b3-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/cohere.html | .ipynb
.pdf
Cohere
Cohere#
Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.
This example goes over how to use LangChain to interact with Cohere models.
# Install the package
!pip install cohere
# get a new token: https://dashboard.cohe... |
4a3a8a3ab3b3-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/cohere.html | " Let's start with the year that Justin Beiber was born. You know that he was born in 1994. We have to go back one year. 1993.\n\n1993 was the year that the Dallas Cowboys won the Super Bowl. They won over the Buffalo Bills in Super Bowl 26.\n\nNow, let's do it backwards. According to our information, the Green Bay Pac... |
2ba86293c828-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/aleph_alpha.html | .ipynb
.pdf
Aleph Alpha
Aleph Alpha#
The Luminous series is a family of large language models.
This example goes over how to use LangChain to interact with Aleph Alpha models
# Install the package
!pip install aleph-alpha-client
# create a new token: https://docs.aleph-alpha.com/docs/account/#create-a-new-token
from ge... |
9f28ff728b31-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/ai21.html | .ipynb
.pdf
AI21
AI21#
AI21 Studio provides API access to Jurassic-2 large language models.
This example goes over how to use LangChain to interact with AI21 models.
# install the package:
!pip install ai21
# get AI21_API_KEY. Use https://studio.ai21.com/account/account
from getpass import getpass
AI21_API_KEY = getpa... |
9e9cd944fdbd-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/nlpcloud.html | .ipynb
.pdf
NLP Cloud
NLP Cloud#
The NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, grammar and spelling correction, keywords and keyphrases extraction, chatbot, product description and ad generation, intent classification, text g... |
14f82c01fe5a-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/sagemaker.html | .ipynb
.pdf
SageMakerEndpoint
Contents
Set up
Example
SageMakerEndpoint#
Amazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.
This notebook goes over how to use an LLM hosted on a SageMaker endpoint.
!pip... |
14f82c01fe5a-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/sagemaker.html | prompt_template = """Use the following pieces of context to answer the question at the end.
{context}
Question: {question}
Answer:"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
class ContentHandler(LLMContentHandler):
content_type = "application/json"
accept... |
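The `ContentHandler` above is the bridge between LangChain and a SageMaker endpoint: it serializes the prompt and model kwargs to the JSON bytes the endpoint expects, and parses the JSON bytes it returns. A runnable sketch; the `inputs` / `generated_text` field names are illustrative, since a real endpoint defines its own schema:

```python
import json

class ContentHandler:
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # Serialize the request into the body sent to the endpoint.
        return json.dumps({"inputs": prompt, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # Parse the endpoint's response body back into plain text.
        payload = json.loads(output.decode("utf-8"))
        return payload["generated_text"]

handler = ContentHandler()
body = handler.transform_input("Hello", {"temperature": 0.0})
text = handler.transform_output(b'{"generated_text": "Hi there"}')
```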
5b52fcadebf1-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html | .ipynb
.pdf
Llama-cpp
Contents
Installation
CPU only installation
Installation with OpenBLAS / cuBLAS / CLBlast
Usage
CPU
GPU
Llama-cpp#
llama-cpp is a Python binding for llama.cpp.
It supports several LLMs.
This notebook goes over how to run llama-cpp within LangChain.
Installation#
There are several options for how t... |
5b52fcadebf1-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html | Answer: Let's work this out in a step by step way to be sure we have the right answer."""
prompt = PromptTemplate(template=template, input_variables=["question"])
# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
# Verbose is required to pass to the callback... |
5b52fcadebf1-2 | https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html | llama_print_timings: total time = 28945.95 ms
'\n\n1. First, find out when Justin Bieber was born.\n2. We know that Justin Bieber was born on March 1, 1994.\n3. Next, we need to look up when the Super Bowl was played in that year.\n4. The Super Bowl was played on January 28, 1995.\n5. Finally, we can use this inf... |
5b52fcadebf1-3 | https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html | We are looking for an NFL team that won the Super Bowl when Justin Bieber (born March 1, 1994) was born.
First, let's look up which year is closest to when Justin Bieber was born:
* The year before he was born: 1993
* The year of his birth: 1994
* The year after he was born: 1995
We want to know what NFL team won the ... |
5b52fcadebf1-4 | https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html | " We are looking for an NFL team that won the Super Bowl when Justin Bieber (born March 1, 1994) was born. \n\nFirst, let's look up which year is closest to when Justin Bieber was born:\n\n* The year before he was born: 1993\n* The year of his birth: 1994\n* The year after he was born: 1995\n\nWe want to know what NFL ... |
f1c675387a5c-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/rellm_experimental.html | .ipynb
.pdf
Structured Decoding with RELLM
Contents
Hugging Face Baseline
RELLM LLM Wrapper
Structured Decoding with RELLM#
RELLM is a library that wraps local Hugging Face pipeline models for structured decoding.
It works by generating tokens one at a time. At each step, it masks tokens that don’t conform to the pro... |
f1c675387a5c-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/rellm_experimental.html | generations=[[Generation(text=' "What\'s the capital of Maryland?"\n', generation_info=None)]] llm_output=None
That’s not so impressive, is it? It didn’t answer the question and it didn’t follow the JSON format at all! Let’s try with the structured decoder.
RELLM LLM Wrapper#
Let’s try that again, now providing a regex... |
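The core idea described above, masking candidate tokens that cannot extend a match of the target pattern, can be shown with a toy character-level decoder. This is not the RELLM API; the pattern, candidate set, and the `choose` function (standing in for the model's token choice) are all made up for illustration:

```python
import re

PATTERN = r'\{"age": \d\d?\}'
# Small enough pattern that we can enumerate every concrete match.
MATCHES = ['{"age": %d}' % n for n in range(100)]

def constrained_decode(choose):
    """Decode one character at a time, masking invalid continuations."""
    out = ""
    while out not in MATCHES:
        # Mask: keep only characters that leave `out` a prefix of some
        # string matching the pattern.
        allowed = sorted({m[len(out)] for m in MATCHES
                          if m.startswith(out) and len(m) > len(out)})
        out += choose(allowed)
    assert re.fullmatch(PATTERN, out)
    return out

# Greedy stand-in for the LM: always pick the first allowed character.
result = constrained_decode(lambda options: options[0])
```

However the stand-in "model" chooses, the output is guaranteed to match the pattern, which is the point of structured decoding.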
be0bb29bfaae-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/runhouse.html | .ipynb
.pdf
Runhouse
Runhouse#
Runhouse allows remote compute and data across environments and users. See the Runhouse docs.
This example goes over how to use LangChain and Runhouse to interact with models hosted on your own GPU, or on-demand GPUs on AWS, GCP, or Lambda.
Note: Code uses SelfHosted name instead... |
be0bb29bfaae-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/runhouse.html | question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
INFO | 2023-02-17 05:42:23,537 | Running _generate_text via gRPC
INFO | 2023-02-17 05:42:24,016 | Time to send message: 0.48 seconds
"\n\nLet's say we're talking sports teams who won the Super Bowl in the year Just... |
be0bb29bfaae-2 | https://python.langchain.com/en/latest/modules/models/llms/integrations/runhouse.html | llm = SelfHostedHuggingFaceLLM(model_load_fn=load_pipeline, hardware=gpu, inference_fn=inference_fn)
llm("Who is the current US president?")
INFO | 2023-02-17 05:42:59,219 | Running _generate_text via gRPC
INFO | 2023-02-17 05:42:59,522 | Time to send message: 0.3 seconds
'john w. bush'
You can send your pipeline direc... |
63bc1a8248cf-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/modal.html | .ipynb
.pdf
Modal
Modal#
The Modal Python Library provides convenient, on-demand access to serverless cloud compute from Python scripts on your local computer.
Modal itself does not provide any LLMs, only the infrastructure.
This example goes over how to use LangChain to interact with Modal.
Here is another exam... |
63bc1a8248cf-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/modal.html | MosaicML
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 04, 2023. |
7f210319229a-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/promptlayer_openai.html | .ipynb
.pdf
PromptLayer OpenAI
Contents
Install PromptLayer
Imports
Set the Environment API Key
Use the PromptLayerOpenAI LLM like normal
Using PromptLayer Track
PromptLayer OpenAI#
PromptLayer is the first platform that allows you to track, manage, and share your GPT prompt engineering. PromptLayer acts as middleware... |
7f210319229a-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/promptlayer_openai.html | If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id.
llm = PromptLayerOpenAI(return_pl_id=True)
llm_results = llm.generate(["Tell me a joke"])
for res in llm_results.generations:
pl_request_id = ... |
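Reading the PromptLayer request id back out of the result can be sketched with a minimal stand-in for LangChain's `Generation` class; the id value below is fabricated for illustration:

```python
from dataclasses import dataclass, field

# Minimal stand-in for langchain.schema.Generation: each generation carries
# a generation_info dict, where PromptLayer stores pl_request_id.
@dataclass
class Generation:
    text: str
    generation_info: dict = field(default_factory=dict)

generations = [[Generation("Why did the chicken cross the road?",
                           {"pl_request_id": 1234})]]

# Same access pattern as the loop in the example above.
request_ids = [res[0].generation_info["pl_request_id"] for res in generations]
print(request_ids)
```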
cb1868d819e0-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/deepinfra_example.html | .ipynb
.pdf
DeepInfra
Contents
Imports
Set the Environment API Key
Create the DeepInfra instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
DeepInfra#
DeepInfra provides several LLMs.
This notebook goes over how to use LangChain with DeepInfra.
Imports#
import os
from langchain.llms import DeepIn... |
cb1868d819e0-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/deepinfra_example.html | Run the LLMChain#
Provide a question and run the LLMChain.
question = "Can penguins reach the North Pole?"
llm_chain.run(question)
"Penguins live in the Southern hemisphere.\nThe North pole is located in the Northern hemisphere.\nSo, first you need to turn the penguin South.\nThen, support the penguin on a rotation mac... |
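The `PromptTemplate` → `LLMChain` flow in this example can be sketched with plain string formatting standing in for the live DeepInfra call; `run_chain` is an illustrative helper, not a LangChain API:

```python
# Hedged sketch: the chain fills the template with the question, then would
# hand the resulting prompt to the LLM. Here we just return the prompt.
template = """Question: {question}

Answer: Let's think step by step."""

def run_chain(question: str) -> str:
    prompt = template.format(question=question)
    # A real LLMChain would send `prompt` to the DeepInfra LLM here.
    return prompt

print(run_chain("Can penguins reach the North Pole?"))
```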
ee9c1c097700-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/banana.html | .ipynb
.pdf
Banana
Banana#
Banana is focused on building machine learning infrastructure.
This example goes over how to use LangChain to interact with Banana models.
# Install the package https://docs.banana.dev/banana-docs/core-concepts/sdks/python
!pip install banana-dev
# get new tokens: https://app.banana.dev/
... |
cfc3dbfdfdac-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/ctransformers.html | .ipynb
.pdf
C Transformers
C Transformers#
The C Transformers library provides Python bindings for GGML models.
This example goes over how to use LangChain to interact with C Transformers models.
Install
%pip install ctransformers
Load Model
from langchain.llms import CTransformers
llm = CTransformers(model='marella/gp... |
4ed8771c1363-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/predictionguard.html | .ipynb
.pdf
Basic LLM usage
Contents
Basic LLM usage
Control the output structure/ type of LLMs
Chaining
! pip install predictionguard langchain
import os
import predictionguard as pg
from langchain.llms import PredictionGuard
from langchain import PromptTemplate, LLMChain
Basic LLM usage#
# Optional, add your OpenAI... |
4ed8771c1363-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/predictionguard.html | # With "guarding" or controlling the output of the LLM. See the
# Prediction Guard docs (https://docs.predictionguard.com) to learn how to
# control the output with integer, float, boolean, JSON, and other types and
# structures.
pgllm = PredictionGuard(model="OpenAI-text-davinci-003",
output... |
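The idea behind "guarding" the output — coercing a raw completion to a requested type and rejecting text that does not parse — can be sketched in plain Python. This mirrors the concept behind Prediction Guard's typed-output control, not its actual API; `guard_output` is an illustrative helper:

```python
# Hedged sketch: coerce a raw LLM completion to the requested type,
# returning None when it does not parse cleanly.
def guard_output(raw: str, expected_type: str):
    raw = raw.strip()
    if expected_type == "integer":
        try:
            return int(raw)
        except ValueError:
            return None
    if expected_type == "boolean":
        return raw.lower() in {"true", "yes"}
    return raw

print(guard_output(" 42 ", "integer"))
print(guard_output("Yes", "boolean"))
```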
a379fc565ae9-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/pipelineai_example.html | .ipynb
.pdf
PipelineAI
Contents
Install pipeline-ai
Imports
Set the Environment API Key
Create the PipelineAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
PipelineAI#
PipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLMs.
This ... |
a379fc565ae9-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/pipelineai_example.html | Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
previous
Petals
next
Basic LLM usage
Contents
Install pipeline-ai
Imports
Set the Environment API Key
Create the PipelineAI instance
Create a Prompt Template
Initiate th... |
1b06109065f4-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/openai.html | .ipynb
.pdf
OpenAI
OpenAI#
OpenAI offers a spectrum of models with different levels of power suitable for different tasks.
This example goes over how to use LangChain to interact with OpenAI models.
# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass
OPENAI_API_KEY = getpass()
······... |
16573297507a-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/cerebriumai_example.html | .ipynb
.pdf
CerebriumAI
Contents
Install cerebrium
Imports
Set the Environment API Key
Create the CerebriumAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
CerebriumAI#
Cerebrium is an AWS SageMaker alternative. It also provides API access to several LLMs.
This notebook goes over how ... |
16573297507a-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/cerebriumai_example.html | question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
previous
Amazon Bedrock
next
Cohere
Contents
Install cerebrium
Imports
Set the Environment API Key
Create the CerebriumAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
By Harrison Cha... |
478eb96b3bc5-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/petals_example.html | .ipynb
.pdf
Petals
Contents
Install petals
Imports
Set the Environment API Key
Create the Petals instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
Petals#
Petals runs 100B+ language models at home, BitTorrent-style.
This notebook goes over how to use LangChain with Petals.
Install petals#
The p... |
478eb96b3bc5-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/petals_example.html | question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
previous
OpenLM
next
PipelineAI
Contents
Install petals
Imports
Set the Environment API Key
Create the Petals instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
By Harrison Chase
... |
d82975074ee1-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/openlm.html | .ipynb
.pdf
OpenLM
Contents
Setup
Using LangChain with OpenLM
OpenLM#
OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP.
It implements the OpenAI Completion class so that it can be used as a drop-in replacement for the OpenAI API. This changeset u... |
d82975074ee1-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/openlm.html | llm = OpenLM(model=model)
llm_chain = LLMChain(prompt=prompt, llm=llm)
result = llm_chain.run(question)
print("""Model: {}
Result: {}""".format(model, result))
Model: text-davinci-003
Result: France is a country in Europe. The capital of France is Paris.
Model: huggingface.co/gpt2
Result: Question: What is... |
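The model-comparison loop above can be sketched with a stub standing in for `OpenLM` so it runs offline; `stub_llm` and its response text are illustrative, while the two model identifiers come from the example:

```python
# Hedged sketch: iterate over model names and print each result in the
# same "Model:/Result:" format used in the OpenLM example.
def stub_llm(model: str, prompt: str) -> str:
    return f"[{model}] response to: {prompt}"

question = "What is the capital of France?"
for model in ["text-davinci-003", "huggingface.co/gpt2"]:
    result = stub_llm(model, question)
    print("""Model: {}
Result: {}""".format(model, result))
```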
d378b25ef184-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/azure_openai_example.html | .ipynb
.pdf
Azure OpenAI
Contents
API configuration
Deployments
Azure OpenAI#
This notebook goes over how to use LangChain with Azure OpenAI.
The Azure OpenAI API is compatible with OpenAI’s API. The openai Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you ... |
d378b25ef184-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/azure_openai_example.html | engine="text-davinci-002-prod",
prompt="This is a test",
max_tokens=5
)
!pip install openai
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_API_BASE"] = "..."
os.environ["OPENAI_API_KEY"] = "..."
# Import Azure OpenAI
from langchain.ll... |
4adc05761a2a-0 | https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html | .ipynb
.pdf
Replicate
Contents
Setup
Calling a model
Chaining Calls
Replicate#
Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you’re building your own machine learning models, Replicate makes it easy to deploy them at scale.
T... |
4adc05761a2a-1 | https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html | Note that only the first output of a model will be returned.
llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")
prompt = """
Answer the following yes/no question by reasoning step by step.
Can a dog drive a car?
"""
llm(prompt)
'The legal driving age of dog... |
4adc05761a2a-2 | https://python.langchain.com/en/latest/modules/models/llms/integrations/replicate.html | First, let’s define the LLM as a Dolly model, and text2image as a Stable Diffusion model.
dolly_llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")
text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb0... |
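The chaining pattern — feeding the text model's output into the image model — can be sketched with plain functions standing in for the Replicate-backed `dolly_llm` and `text2image`; both return values below are fabricated placeholders:

```python
# Hedged sketch: the first "model" produces a description, the second turns
# that description into an image URL. Real Replicate calls would go here.
def dolly_llm(prompt: str) -> str:
    return "a colorful sock with a smiling face"

def text2image(description: str) -> str:
    return "https://example.invalid/" + description.replace(" ", "-") + ".png"

image_url = text2image(dolly_llm("Describe a logo for a sock company"))
print(image_url)
```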