Returns
The embedding for the input query text.
Return type
List[float]
classmethod from_credentials(model_id: str, *, es_cloud_id: Optional[str] = None, es_user: Optional[str] = None, es_password: Optional[str] = None, input_field: str = 'text_field') → langchain.embeddings.elasticsearch.ElasticsearchEmbeddings[source]#
Instantiate embeddings from Elasticsearch credentials.
Parameters
model_id (str) – The model_id of the model deployed in the Elasticsearch
cluster.
input_field (str) – The name of the key for the input text field in the
document. Defaults to ‘text_field’.
es_cloud_id (str, optional) – The Elasticsearch cloud ID to connect to.
es_user (str, optional) – Elasticsearch username.
es_password (str, optional) – Elasticsearch password.
Example Usage:
from langchain.embeddings import ElasticsearchEmbeddings
# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"
# Credentials can be passed in two ways. Either set the env vars
# ES_CLOUD_ID, ES_USER, ES_PASSWORD and they will be automatically pulled
# in, or pass them in directly as kwargs.
embeddings = ElasticsearchEmbeddings.from_credentials(
model_id,
input_field=input_field,
# es_cloud_id="foo",
# es_user="bar",
# es_password="baz",
)
documents = ["This is an example document.",
    "Another example document to generate embeddings for.",
]
embeddings.embed_documents(documents)
pydantic model langchain.embeddings.FakeEmbeddings[source]#
embed_documents(texts: List[str]) → List[List[float]][source]#
Embed search docs.
embed_query(text: str) → List[float][source]#
Embed query text.
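A minimal usage sketch, useful for testing pipelines without a real embedding backend. The size field (the dimensionality of the returned vectors) is an assumption about the current model definition:
from langchain.embeddings import FakeEmbeddings
# Returns random vectors of a fixed length; handy in unit tests.
fake = FakeEmbeddings(size=1536)  # `size` is assumed to control the vector length
query_vector = fake.embed_query("hello world")
doc_vectors = fake.embed_documents(["doc one", "doc two"])
assert len(query_vector) == 1536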
pydantic model langchain.embeddings.HuggingFaceEmbeddings[source]#
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers python package installed.
Example
from langchain.embeddings import HuggingFaceEmbeddings
model_name = "sentence-transformers/all-mpnet-base-v2"
model_kwargs = {'device': 'cpu'}
hf = HuggingFaceEmbeddings(model_name=model_name, model_kwargs=model_kwargs)
field cache_folder: Optional[str] = None#
Path to store models.
Can also be set by the SENTENCE_TRANSFORMERS_HOME environment variable.
field encode_kwargs: Dict[str, Any] [Optional]#
Keyword arguments to pass when calling the encode method of the model.
field model_kwargs: Dict[str, Any] [Optional]#
Keyword arguments to pass to the model.
field model_name: str = 'sentence-transformers/all-mpnet-base-v2'#
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace transformer model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
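Building on the constructor example above, a brief sketch of embedding a batch of documents and a single query (the input texts are illustrative):
from langchain.embeddings import HuggingFaceEmbeddings
hf = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
# One vector per input text; all-mpnet-base-v2 produces 768-dimensional vectors.
doc_vectors = hf.embed_documents(["LangChain is a framework for LLM apps.", "Embeddings map text to vectors."])
query_vector = hf.embed_query("What is LangChain?")
print(len(doc_vectors), len(query_vector))  # 2 768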
pydantic model langchain.embeddings.HuggingFaceHubEmbeddings[source]#
Wrapper around HuggingFaceHub embedding models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import HuggingFaceHubEmbeddings
repo_id = "sentence-transformers/all-mpnet-base-v2"
hf = HuggingFaceHubEmbeddings(
repo_id=repo_id,
task="feature-extraction",
huggingfacehub_api_token="my-api-key",
)
field model_kwargs: Optional[dict] = None#
Keyword arguments to pass to the model.
field repo_id: str = 'sentence-transformers/all-mpnet-base-v2'#
Model name to use.
field task: Optional[str] = 'feature-extraction'#
Task to call the model with.
embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to HuggingFaceHub’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to HuggingFaceHub’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.HuggingFaceInstructEmbeddings[source]#
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers
and InstructorEmbedding python packages installed.
Example
from langchain.embeddings import HuggingFaceInstructEmbeddings
model_name = "hkunlp/instructor-large"
model_kwargs = {'device': 'cpu'}
hf = HuggingFaceInstructEmbeddings(
model_name=model_name, model_kwargs=model_kwargs
)
field cache_folder: Optional[str] = None#
Path to store models.
Can also be set by the SENTENCE_TRANSFORMERS_HOME environment variable.
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction to use for embedding documents.
field model_kwargs: Dict[str, Any] [Optional]#
Keyword arguments to pass to the model.
field model_name: str = 'hkunlp/instructor-large'#
Model name to use.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction to use for embedding query.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace instruct model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.LlamaCppEmbeddings[source]#
Wrapper around llama.cpp embedding models.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: abetlen/llama-cpp-python
Example
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/path/to/model.bin")
field f16_kv: bool = False#
Use half-precision for key/value cache.
field logits_all: bool = False#
Return logits for all tokens, not just the last token.
field n_batch: Optional[int] = 8#
Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
field n_ctx: int = 512#
Token context window.
field n_gpu_layers: Optional[int] = None#
Number of layers to be loaded into GPU memory. Defaults to None.
field n_parts: int = -1#
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
field n_threads: Optional[int] = None#
Number of threads to use. If None, the number
of threads is automatically determined.
field seed: int = -1#
Seed. If -1, a random seed is used.
field use_mlock: bool = False#
Force system to keep model in RAM.
field vocab_only: bool = False#
Only load the vocabulary, no weights.
embed_documents(texts: List[str]) → List[List[float]][source]#
Embed a list of documents using the Llama model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Embed a query using the Llama model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.MiniMaxEmbeddings[source]#
Wrapper around MiniMax’s embedding inference service.
To use, you should have the environment variables MINIMAX_GROUP_ID and
MINIMAX_API_KEY set with your credentials, or pass them as named parameters to
the constructor.
Example
from langchain.embeddings import MiniMaxEmbeddings
embeddings = MiniMaxEmbeddings()
query_text = "This is a test query."
query_result = embeddings.embed_query(query_text)
document_text = "This is a test document."
document_result = embeddings.embed_documents([document_text])
field embed_type_db: str = 'db'#
For embed_documents
field embed_type_query: str = 'query'#
For embed_query
field endpoint_url: str = 'https://api.minimax.chat/v1/embeddings'#
Endpoint URL to use.
field minimax_api_key: Optional[str] = None#
API Key for MiniMax API.
field minimax_group_id: Optional[str] = None#
Group ID for MiniMax API.
field model: str = 'embo-01'#
Embeddings model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]#
Embed documents using a MiniMax embedding endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Embed a query using a MiniMax embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.ModelScopeEmbeddings[source]#
Wrapper around modelscope_hub embedding models.
To use, you should have the modelscope python package installed.
Example
from langchain.embeddings import ModelScopeEmbeddings
model_id = "damo/nlp_corom_sentence-embedding_english-base"
embed = ModelScopeEmbeddings(model_id=model_id)
field model_id: str = 'damo/nlp_corom_sentence-embedding_english-base'#
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a modelscope embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a modelscope embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.MosaicMLInstructorEmbeddings[source]#
Wrapper around MosaicML’s embedding inference service.
To use, you should have the
environment variable MOSAICML_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import MosaicMLInstructorEmbeddings
endpoint_url = (
"https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict"
)
mosaic_llm = MosaicMLInstructorEmbeddings(
endpoint_url=endpoint_url,
mosaicml_api_token="my-api-key"
)
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction used to embed documents.
field endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict'#
Endpoint URL to use.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction used to embed the query.
field retry_sleep: float = 1.0#
How long to sleep if a rate limit is encountered.
embed_documents(texts: List[str]) → List[List[float]][source]#
Embed documents using a MosaicML deployed instructor embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Embed a query using a MosaicML deployed instructor embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.OpenAIEmbeddings[source]#
Wrapper around OpenAI embedding models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import OpenAIEmbeddings
openai = OpenAIEmbeddings(openai_api_key="my-api-key")
In order to use the library with Microsoft Azure endpoints, you need to set
the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION environment variables.
The OPENAI_API_TYPE must be set to ‘azure’ and the others correspond to
the properties of your endpoint.
In addition, the deployment name must be passed as the model parameter.
Example
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_PROXY"] = "http://your-corporate-proxy:8080"
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
deployment="your-embeddings-deployment-name",
model="your-embeddings-model-name",
api_base="https://your-endpoint.openai.azure.com/",
api_type="azure",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
field chunk_size: int = 1000#
Maximum number of texts to embed in each batch
field max_retries: int = 6#
Maximum number of retries to make when generating.
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout in seconds for the OpenAI request.
embed_documents(texts: List[str], chunk_size: Optional[int] = 0) → List[List[float]][source]#
Call out to OpenAI’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to OpenAI’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embedding for the text.
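A short non-Azure usage sketch; the per-call chunk_size override follows the embed_documents signature above, and the API key is a placeholder:
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(openai_api_key="my-api-key")
# Batch two texts per request instead of the class-level chunk_size of 1000.
doc_vectors = embeddings.embed_documents(["First document.", "Second document."], chunk_size=2)
query_vector = embeddings.embed_query("a test query")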
pydantic model langchain.embeddings.SagemakerEndpointEmbeddings[source]#
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
field content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]#
The content handler class that provides input and output transform functions
to handle formats between the LLM and the endpoint.
field credentials_profile_name: Optional[str] = None#
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
field endpoint_kwargs: Optional[Dict] = None#
Optional attributes passed to the invoke_endpoint
function. See the boto3 docs for more info: https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
field endpoint_name: str = ''#
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
field model_kwargs: Optional[Dict] = None#
Keyword arguments to pass to the model.
field region_name: str = ''#
The AWS region where the Sagemaker model is deployed, e.g. us-west-2.
embed_documents(texts: List[str], chunk_size: int = 64) → List[List[float]][source]#
Compute doc embeddings using a SageMaker Inference Endpoint.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size defines how many input texts will
be grouped together as a request. If None, will use the
chunk size specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a SageMaker inference endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
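Since this class ships without an inline example, here is a minimal sketch of wiring up a content handler. The endpoint name and the JSON keys ("inputs", "vectors") are assumptions about your deployed model's request/response contract; adapt them to what your endpoint actually expects.
import json
from typing import Dict, List
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler
class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"
    def transform_input(self, prompts: List[str], model_kwargs: Dict) -> bytes:
        # Serialize the batch of texts into the JSON body the endpoint expects (assumed key: "inputs").
        return json.dumps({"inputs": prompts, **model_kwargs}).encode("utf-8")
    def transform_output(self, output: bytes) -> List[List[float]]:
        # Parse the endpoint's JSON response back into one vector per input text.
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["vectors"]  # assumed response key
embeddings = SagemakerEndpointEmbeddings(
    endpoint_name="my-embeddings-endpoint",  # hypothetical endpoint name
    region_name="us-west-2",
    content_handler=ContentHandler(),
)
vectors = embeddings.embed_documents(["Hello world"])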
pydantic model langchain.embeddings.SelfHostedEmbeddings[source]#
Runs custom embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example using a model load function:
from langchain.embeddings import SelfHostedEmbeddings
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
def get_pipeline():
model_id = "facebook/bart-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
return pipeline("feature-extraction", model=model, tokenizer=tokenizer)
embeddings = SelfHostedEmbeddings(
model_load_fn=get_pipeline,
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Example passing in a pipeline path:
from langchain.embeddings import SelfHostedHFEmbeddings
import pickle
import runhouse as rh
from transformers import pipeline
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
pipeline = pipeline(model="bert-base-uncased", task="feature-extraction")
rh.blob(pickle.dumps(pipeline),
path="models/pipeline.pkl").save().to(gpu, path="models")
embeddings = SelfHostedHFEmbeddings.from_pipeline(
pipeline="models/pipeline.pkl",
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Validators
raise_deprecation » all fields
set_verbose » verbose
field inference_fn: Callable = <function _embed_documents>#
Inference function to extract the embeddings on the remote hardware.
field inference_kwargs: Any = None#
Any kwargs to pass to the model’s inference function.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace transformer model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.SelfHostedHuggingFaceEmbeddings[source]#
Runs sentence_transformers embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_name = "sentence-transformers/all-mpnet-base-v2"
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu)
Validators
raise_deprecation » all fields
set_verbose » verbose
field hardware: Any = None#
Remote hardware to send the inference function to.
field inference_fn: Callable = <function _embed_documents>#
Inference function to extract the embeddings.
field load_fn_kwargs: Optional[dict] = None#
Keyword arguments to pass to the model load function.
field model_id: str = 'sentence-transformers/all-mpnet-base-v2'#
Model name to use.
field model_load_fn: Callable = <function load_embedding_model>#
Function to load the model remotely on the server.
field model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']#
Requirements to install on the hardware to run inference on the model.
pydantic model langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings[source]#
Runs InstructorEmbedding embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings
import runhouse as rh
model_name = "hkunlp/instructor-large"
gpu = rh.cluster(name='rh-a10x', instance_type='A100:1')
hf = SelfHostedHuggingFaceInstructEmbeddings(
model_name=model_name, hardware=gpu)
Validators
raise_deprecation » all fields
set_verbose » verbose
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction to use for embedding documents.
field model_id: str = 'hkunlp/instructor-large'#
Model name to use.
field model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']#
Requirements to install on the hardware to run inference on the model.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction to use for embedding query.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace instruct model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
langchain.embeddings.SentenceTransformerEmbeddings#
alias of langchain.embeddings.huggingface.HuggingFaceEmbeddings
pydantic model langchain.embeddings.TensorflowHubEmbeddings[source]#
Wrapper around tensorflow_hub embedding models.
To use, you should have the tensorflow_text python package installed.
Example
from langchain.embeddings import TensorflowHubEmbeddings
url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
tf = TensorflowHubEmbeddings(model_url=url)
field model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3'#
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a TensorflowHub embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a TensorflowHub embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
Contents
Installation
Environment Setup
Building a Language Model Application: LLMs
LLMs: Get predictions from a language model
Prompt Templates: Manage prompts for LLMs
Chains: Combine LLMs and prompts in multi-step workflows
Agents: Dynamically Call Chains Based on User Input
Memory: Add State to Chains and Agents
Building a Language Model Application: Chat Models
Get Message Completions from a Chat Model
Chat Prompt Templates
Chains with Chat Models
Agents with Chat Models
Memory: Add State to Chains and Agents
Quickstart Guide#
This tutorial gives you a quick walkthrough of building an end-to-end language model application with LangChain.
Installation#
To get started, install LangChain with the following command:
pip install langchain
# or
conda install langchain -c conda-forge
Environment Setup#
Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc.
For this example, we will be using OpenAI’s APIs, so we will first need to install their SDK:
pip install openai
We will then need to set the environment variable in the terminal.
export OPENAI_API_KEY="..."
Alternatively, you could do this from inside the Jupyter notebook (or Python script):
import os
os.environ["OPENAI_API_KEY"] = "..."
If you want to set the API key dynamically (for instance, to use each user’s API key), you can pass it via the openai_api_key parameter when instantiating the OpenAI class:
from langchain.llms import OpenAI
llm = OpenAI(openai_api_key="OPENAI_API_KEY")
Building a Language Model Application: LLMs#
Now that we have installed LangChain and set up our environment, we can start building our language model application.
LangChain provides many modules that can be used to build language model applications. Modules can be combined to create more complex applications, or be used individually for simple applications.
LLMs: Get predictions from a language model#
The most basic building block of LangChain is calling an LLM on some input.
Let’s walk through a simple example of how to do this.
For this purpose, let’s pretend we are building a service that generates a company name based on what the company makes.
In order to do this, we first need to import the LLM wrapper.
from langchain.llms import OpenAI
We can then initialize the wrapper with any arguments.
In this example, we probably want the outputs to be MORE random, so we’ll initialize it with a HIGH temperature.
llm = OpenAI(temperature=0.9)
We can now call it on some input!
text = "What would be a good company name for a company that makes colorful socks?"
print(llm(text))
Feetful of Fun
For more details on how to use LLMs within LangChain, see the LLM getting started guide.
Prompt Templates: Manage prompts for LLMs#
Calling an LLM is a great first step, but it’s just the beginning.
Normally when you use an LLM in an application, you are not sending user input directly to the LLM.
Instead, you are probably taking user input and constructing a prompt, and then sending that to the LLM.
For example, in the previous example, the text we passed in was hardcoded to ask for a name for a company that made colorful socks.
In this imaginary service, what we would want to do is take only the user input describing what the company does, and then format the prompt with that information.
This is easy to do with LangChain!
First, let’s define the prompt template:
from langchain.prompts import PromptTemplate
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
Let’s now see how this works! We can call the .format method to format it.
print(prompt.format(product="colorful socks"))
What is a good name for a company that makes colorful socks?
For more details, check out the getting started guide for prompts.
Chains: Combine LLMs and prompts in multi-step workflows#
Up until now, we’ve worked with the PromptTemplate and LLM primitives by themselves. But of course, a real application is not just one primitive, but rather a combination of them.
A chain in LangChain is made up of links, which can be either primitives like LLMs or other chains.
The most core type of chain is an LLMChain, which consists of a PromptTemplate and an LLM.
Extending the previous example, we can construct an LLMChain which takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM.
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM:
from langchain.chains import LLMChain
chain = LLMChain(llm=llm, prompt=prompt)
Now we can run that chain only specifying the product!
chain.run("colorful socks")
# -> '\n\nSocktastic!'
There we go! There’s the first chain - an LLM Chain.
This is one of the simpler types of chains, but understanding how it works will set you up well for working with more complex chains.
For more details, check out the getting started guide for chains.
Agents: Dynamically Call Chains Based on User Input#
So far the chains we’ve looked at run in a predetermined order.
Agents do not: they use an LLM to determine which actions to take and in what order. An action can either be using a tool and observing its output, or returning to the user.
When used correctly, agents can be extremely powerful. In this tutorial, we show you how to easily use agents through the simplest, highest level API.
In order to load agents, you should understand the following concepts:
Tool: A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. The interface for a tool is currently a function that is expected to have a string as an input, with a string as an output.
LLM: The language model powering the agent.
Agent: The agent to use. This should be a string that references a supported agent class. Because this notebook focuses on the simplest, highest level API, this only covers using the standard supported agents. If you want to implement a custom agent, see the documentation for custom agents (coming soon).
Agents: For a list of supported agents and their specifications, see here.
Tools: For a list of predefined tools and their specifications, see here.
For this example, you will also need to install the SerpAPI Python package.
pip install google-search-results
And set the appropriate environment variables.
import os
os.environ["SERPAPI_API_KEY"] = "..."
Now we can get started!
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
# First, let's load the language model we're going to use to control the agent.
llm = OpenAI(temperature=0)
# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Now let's test it out!
agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")
> Entering new AgentExecutor chain...
I need to find the temperature first, then use the calculator to raise it to the .023 power.
Action: Search
Action Input: "High temperature in SF yesterday"
Observation: San Francisco Temperature Yesterday. Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...
Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.
Action: Calculator
Action Input: 57^.023
Observation: Answer: 1.0974509573251117
Thought: I now know the final answer
Final Answer: The high temperature in SF yesterday in Fahrenheit raised to the .023 power is 1.0974509573251117.
> Finished chain.
Memory: Add State to Chains and Agents#
So far, all the chains and agents we’ve gone through have been stateless. But often, you may want a chain or agent to have some concept of “memory” so that it may remember information about its previous interactions. The clearest and simplest example of this is when designing a chatbot - you want it to remember previous messages so it can use context from that to have a better conversation. This would be a type of “short-term memory”. On the more complex side, you could imagine a chain/agent remembering key pieces of information over time - this would be a form of “long-term memory”. For more concrete ideas on the latter, see this awesome paper.
LangChain provides several specially created chains just for this purpose. This notebook walks through using one of those chains (the ConversationChain) with two different types of memory.
By default, the ConversationChain has a simple type of memory that remembers all previous inputs/outputs and adds them to the context that is passed. Let’s take a look at using this chain (setting verbose=True so we can see the prompt).
from langchain import OpenAI, ConversationChain
llm = OpenAI(temperature=0)
conversation = ConversationChain(llm=llm, verbose=True)
output = conversation.predict(input="Hi there!")
print(output)
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there!
AI:
> Finished chain.
' Hello! How are you today?'
output = conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
print(output)
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there!
AI: Hello! How are you today?
Human: I'm doing well! Just having a conversation with an AI.
AI:
> Finished chain.
" That's great! What would you like to talk about?"
Building a Language Model Application: Chat Models#
Similarly, you can use chat models instead of LLMs. Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than expose a “text in, text out” API, they expose an interface where “chat messages” are the inputs and outputs.
Chat model APIs are fairly new, so we are still figuring out the correct abstractions.
Get Message Completions from a Chat Model#
You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage – ChatMessage takes in an arbitrary role parameter. Most of the time, you’ll just be dealing with HumanMessage, AIMessage, and SystemMessage.
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
chat = ChatOpenAI(temperature=0)
You can get completions by passing in a single message.
chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
You can also pass in multiple messages for OpenAI’s gpt-3.5-turbo and gpt-4 models.
messages = [
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="I love programming.")
]
chat(messages)
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
You can go one step further and generate completions for multiple sets of messages using generate. This returns an LLMResult with an additional message parameter:
batch_messages = [
[
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="I love programming.")
],
[
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="I love artificial intelligence.")
],
]
result = chat.generate(batch_messages)
result
# -> LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}})
You can recover things like token usage from this LLMResult:
result.llm_output['token_usage']
# -> {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}
Chat Prompt Templates#
Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt – this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an LLM or chat model.
For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
chat = ChatOpenAI(temperature=0)
template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
Chains with Chat Models#
The LLMChain discussed in the above section can be used with chat models as well:
from langchain.chat_models import ChatOpenAI
from langchain import LLMChain
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
chat = ChatOpenAI(temperature=0)
template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
# -> "J'aime programmer."
Agents with Chat Models#
Agents can also be used with chat models; you can initialize one using AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION as the agent type.
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
# First, let's load the language model we're going to use to control the agent.
chat = ChatOpenAI(temperature=0)
# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Now let's test it out!
agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
> Entering new AgentExecutor chain...
Thought: I need to use a search engine to find Olivia Wilde's boyfriend and a calculator to raise his age to the 0.23 power.
Action:
{
"action": "Search",
"action_input": "Olivia Wilde boyfriend"
}
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought:I need to use a search engine to find Harry Styles' current age.
Action:
{
"action": "Search",
"action_input": "Harry Styles age"
}
Observation: 29 years
Thought:Now I need to calculate 29 raised to the 0.23 power.
Action:
{
"action": "Calculator",
"action_input": "29^0.23"
}
Observation: Answer: 2.169459462491557
Thought:I now know the final answer.
Final Answer: 2.169459462491557
> Finished chain.
'2.169459462491557'
Memory: Add State to Chains and Agents#
You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.
from langchain.prompts import (
ChatPromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate
)
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
prompt = ChatPromptTemplate.from_messages([
SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."),
MessagesPlaceholder(variable_name="history"),
HumanMessagePromptTemplate.from_template("{input}")
])
llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)
conversation.predict(input="Hi there!")
# -> 'Hello! How can I assist you today?'
conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
# -> "That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?"
conversation.predict(input="Tell me about yourself.")
# -> "Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?"
Contents
Chain of Thought
Action Plan Generation
ReAct
Self-ask
Prompt Chaining
Memetic Proxy
Self Consistency
Inception
MemPrompt
Concepts#
These are concepts and terminology commonly used when developing LLM applications.
It contains references to external papers or sources where each concept was first introduced,
as well as to the places in LangChain where the concept is used.
Chain of Thought#
Chain of Thought (CoT) is a prompting technique used to encourage the model to generate a series of intermediate reasoning steps.
A less formal way to induce this behavior is to include “Let’s think step-by-step” in the prompt.
Chain-of-Thought Paper
Step-by-Step Paper
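As a quick illustration, a minimal zero-shot CoT prompt built with LangChain’s PromptTemplate (the exact cue wording is just one common choice):
from langchain.prompts import PromptTemplate
cot_prompt = PromptTemplate(
    input_variables=["question"],
    template="{question}\n\nLet's think step-by-step.",
)
print(cot_prompt.format(question="A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?"))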
Action Plan Generation#
Action Plan Generation is a prompting technique that uses a language model to generate actions to take.
The results of these actions can then be fed back into the language model to generate a subsequent action.
WebGPT Paper
SayCan Paper
ReAct#
ReAct is a prompting technique that combines Chain-of-Thought prompting with action plan generation.
This induces the model to think about what action to take, then take it.
Paper
LangChain Example
Self-ask#
Self-ask is a prompting method that builds on top of chain-of-thought prompting.
In this method, the model explicitly asks itself follow-up questions, which are then answered by an external search engine.
Paper
LangChain Example
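LangChain exposes this method as an agent type. A minimal sketch (assumes a SerpAPI key is configured in the environment; this agent type expects a single tool named exactly "Intermediate Answer"):
from langchain.agents import initialize_agent, AgentType, Tool
from langchain.llms import OpenAI
from langchain.utilities import SerpAPIWrapper
llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
tools = [Tool(name="Intermediate Answer", func=search.run, description="useful for answering factual follow-up questions")]
# The agent decomposes the question into follow-ups and sends them to the search tool.
self_ask_agent = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)
self_ask_agent.run("What is the hometown of the reigning men's U.S. Open champion?")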
Prompt Chaining#
Prompt Chaining is combining multiple LLM calls, with the output of one step being the input to the next (see the sketch after the references below).
PromptChainer Paper
Language Model Cascades
ICE Primer Book
Socratic Models
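In LangChain this pattern maps naturally onto sequential chains. A minimal sketch using SimpleSequentialChain (the prompts and temperature are illustrative):
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain
llm = OpenAI(temperature=0.7)
# Step 1: generate a company name from a product description.
name_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
))
# Step 2: the generated name becomes the input to a slogan prompt.
slogan_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["company"],
    template="Write a catchy slogan for a company called {company}.",
))
overall_chain = SimpleSequentialChain(chains=[name_chain, slogan_chain])
overall_chain.run("colorful socks")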
Memetic Proxy#
Memetic Proxy encourages the LLM to respond in a certain way by framing the discussion in a context that the model knows of and that will result in that type of response.
For example, as a conversation between a student and a teacher.
Paper
Self Consistency#
Self Consistency is a decoding strategy that samples a diverse set of reasoning paths and then selects the most consistent answer.
It is most effective when combined with Chain-of-Thought prompting.
Paper
Inception#
Inception is also called First Person Instruction.
It encourages the model to think a certain way by including the start of the model’s response in the prompt.
Example
MemPrompt#
MemPrompt maintains a memory of errors and user feedback, and uses them to prevent repetition of mistakes.
Paper
Tutorials#
This is a collection of LangChain tutorials mostly on YouTube.
⛓ icon marks a new video [last update 2023-05-15]
LangChain AI Handbook By James Briggs and Francisco Ingham
LangChain Tutorials by Edrick:
⛓ LangChain, Chroma DB, OpenAI Beginner Guide | ChatGPT with your PDF
LangChain Crash Course: Build an AutoGPT app in 25 minutes by Nicholas Renotte
LangChain Crash Course - Build apps with language models by Patrick Loeber
LangChain Explained in 13 Minutes | QuickStart Tutorial for Beginners by Rabbitmetrics
LangChain for Gen AI and LLMs by James Briggs:
#1 Getting Started with GPT-3 vs. Open Source LLMs
#2 Prompt Templates for GPT 3.5 and other LLMs
#3 LLM Chains using GPT 3.5 and other LLMs
#4 Chatbot Memory for Chat-GPT, Davinci + other LLMs
#5 Chat with OpenAI in LangChain
⛓ #6 Fixing LLM Hallucinations with Retrieval Augmentation in LangChain
⛓ #7 LangChain Agents Deep Dive with GPT 3.5
⛓ #8 Create Custom Tools for Chatbots in LangChain
⛓ #9 Build Conversational Agents with Vector DBs
LangChain 101 by Data Independent:
What Is LangChain? - LangChain + ChatGPT Overview
Quickstart Guide
Beginner Guide To 7 Essential Concepts
OpenAI + Wolfram Alpha
Ask Questions On Your Custom (or Private) Files
Connect Google Drive Files To OpenAI
YouTube Transcripts + OpenAI
Question A 300 Page Book (w/ OpenAI + Pinecone)
Workaround OpenAI's Token Limit With Chain Types
Build Your Own OpenAI + LangChain Web App in 23 Minutes
Working With The New ChatGPT API
OpenAI + LangChain Wrote Me 100 Custom Sales Emails
Structured Output From OpenAI (Clean Dirty Data)
Connect OpenAI To +5,000 Tools (LangChain + Zapier)
Use LLMs To Extract Data From Text (Expert Mode)
⛓ Extract Insights From Interview Transcripts Using LLMs
⛓ 5 Levels Of LLM Summarizing: Novice to Expert
LangChain How to and guides by Sam Witteveen:
LangChain Basics - LLMs & PromptTemplates with Colab
LangChain Basics - Tools and Chains
ChatGPT API Announcement & Code Walkthrough with LangChain
Conversations with Memory (explanation & code walkthrough)
Chat with Flan20B
Using Hugging Face Models locally (code walkthrough)
PAL : Program-aided Language Models with LangChain code
Building a Summarization System with LangChain and GPT-3 - Part 1
Building a Summarization System with LangChain and GPT-3 - Part 2
Microsoft’s Visual ChatGPT using LangChain
LangChain Agents - Joining Tools and Chains with Decisions
Comparing LLMs with LangChain
Using Constitutional AI in LangChain
Talking to Alpaca with LangChain - Creating an Alpaca Chatbot
Talk to your CSV & Excel with LangChain
BabyAGI: Discover the Power of Task-Driven Autonomous Agents!
Improve your BabyAGI with LangChain
⛓ Master PDF Chat with LangChain - Your essential guide to queries on documents
⛓ Using LangChain with DuckDuckGO Wikipedia & PythonREPL Tools
⛓ Building Custom Tools and Agents with LangChain (gpt-3.5-turbo)
⛓ LangChain Retrieval QA Over Multiple Files with ChromaDB
⛓ LangChain Retrieval QA with Instructor Embeddings & ChromaDB for PDFs
⛓ LangChain + Retrieval Local LLMs for Retrieval QA - No OpenAI!!!
LangChain by Prompt Engineering:
LangChain Crash Course — All You Need to Know to Build Powerful Apps with LLMs
Working with MULTIPLE PDF Files in LangChain: ChatGPT for your Data
ChatGPT for YOUR OWN PDF files with LangChain
Talk to YOUR DATA without OpenAI APIs: LangChain
⛓️ CHATGPT For WEBSITES: Custom ChatBOT
LangChain by Chat with data
LangChain Beginner’s Tutorial for Typescript/Javascript
GPT-4 Tutorial: How to Chat With Multiple PDF Files (~1000 pages of Tesla’s 10-K Annual Reports)
GPT-4 & LangChain Tutorial: How to Chat With A 56-Page PDF Document (w/Pinecone)
⛓ LangChain & Supabase Tutorial: How to Build a ChatGPT Chatbot For Your Website
Get SH*T Done with Prompt Engineering and LangChain by Venelin Valkov
Getting Started with LangChain: Load Custom Data, Run OpenAI Models, Embeddings and ChatGPT
Loaders, Indexes & Vectorstores in LangChain: Question Answering on PDF files with ChatGPT
LangChain Models: ChatGPT, Flan Alpaca, OpenAI Embeddings, Prompt Templates & Streaming
LangChain Chains: Use ChatGPT to Build Conversational Agents, Summaries and Q&A on Text With LLMs
Analyze Custom CSV Data with GPT-4 using Langchain
⛓ Build ChatGPT Chatbots with LangChain Memory: Understanding and Implementing Memory in Conversations
⛓ icon marks a new video [last update 2023-05-15]
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.