id | text | source
---|---|---
5caa50234d45-0 | .md
.pdf
AwaDB
Contents
Installation and Setup
VectorStore
AwaDB#
AwaDB is an AI-native database for the search and storage of embedding vectors used by LLM applications.
Installation and Setup#
pip install awadb
VectorStore#
There exists a wrapper around AwaDB vector databases, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
from langchain.vectorstores import AwaDB
For a more detailed walkthrough of the AwaDB wrapper, see this notebook
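As a quick illustration, here is a minimal sketch assuming the standard VectorStore interface; passing embedding=None mirrors how other self-embedding stores in this guide (e.g. Vectara) are initialized and should be treated as an assumption — see the notebook for the authoritative walkthrough.
from langchain.vectorstores import AwaDB
# index a few example texts and run a semantic search (illustrative values only)
# embedding=None assumes AwaDB's built-in embedding model is used
db = AwaDB.from_texts(["AwaDB stores embedding vectors", "LangChain wraps many vector stores"], embedding=None)
docs = db.similarity_search("Which database stores embeddings?", k=1)
print(docs[0].page_content)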
previous
AtlasDB
next
AWS S3 Directory
Contents
Installation and Setup
VectorStore
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/awadb.html |
cd136016050a-0 | .ipynb
.pdf
Aim
Aim#
Aim makes it super easy to visualize and debug LangChain executions. Aim tracks inputs and outputs of LLMs and tools, as well as actions of agents.
With Aim, you can easily debug and examine an individual execution:
Additionally, you have the option to compare multiple executions side by side:
Aim is fully open source, learn more about Aim on GitHub.
Let’s move forward and see how to enable and configure Aim callback.
Tracking LangChain Executions with Aim. In this notebook we will explore three usage scenarios. To start off, we will install the necessary packages and import certain modules. Subsequently, we will configure two environment variables that can be set either within the Python script or through the terminal.
!pip install aim
!pip install langchain
!pip install openai
!pip install google-search-results
import os
from datetime import datetime
from langchain.llms import OpenAI
from langchain.callbacks import AimCallbackHandler, StdOutCallbackHandler
Our examples use a GPT model as the LLM, and OpenAI offers an API for this purpose. You can obtain the key from the following link: https://platform.openai.com/account/api-keys.
We will use the SerpApi to retrieve search results from Google. To acquire the SerpApi key, please go to https://serpapi.com/manage-api-key.
os.environ["OPENAI_API_KEY"] = "..."
os.environ["SERPAPI_API_KEY"] = "..."
The event methods of AimCallbackHandler accept the LangChain module or agent as input and log at least the prompts and generated results, as well as the serialized version of the LangChain module, to the designated Aim run.
session_group = datetime.now().strftime("%m.%d.%Y_%H.%M.%S")
aim_callback = AimCallbackHandler(
repo=".", | rtdocs_stable/api.python.langchain.com/en/stable/integrations/aim_tracking.html |
cd136016050a-1 | aim_callback = AimCallbackHandler(
repo=".",
experiment_name="scenario 1: OpenAI LLM",
)
callbacks = [StdOutCallbackHandler(), aim_callback]
llm = OpenAI(temperature=0, callbacks=callbacks)
The flush_tracker function is used to record LangChain assets on Aim. By default, the session is reset rather than being terminated outright.
Scenario 1. In the first scenario, we will use an OpenAI LLM.
# scenario 1 - LLM
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 3)
aim_callback.flush_tracker(
langchain_asset=llm,
experiment_name="scenario 2: Chain with multiple SubChains on multiple generations",
)
Scenario 2. The second scenario involves a chain with multiple sub-chains across multiple generations.
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
# scenario 2 - Chain
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)
test_prompts = [
{"title": "documentary about good video games that push the boundary of game design"},
{"title": "the phenomenon behind the remarkable speed of cheetahs"},
{"title": "the best in class mlops tooling"},
]
synopsis_chain.apply(test_prompts)
aim_callback.flush_tracker(
langchain_asset=synopsis_chain, experiment_name="scenario 3: Agent with Tools"
) | rtdocs_stable/api.python.langchain.com/en/stable/integrations/aim_tracking.html |
cd136016050a-2 | )
Scenario 3. The third scenario involves an agent with tools.
from langchain.agents import initialize_agent, load_tools
from langchain.agents import AgentType
# scenario 3 - Agent with Tools
tools = load_tools(["serpapi", "llm-math"], llm=llm, callbacks=callbacks)
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
callbacks=callbacks,
)
agent.run(
"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
)
aim_callback.flush_tracker(langchain_asset=agent, reset=False, finish=True)
> Entering new AgentExecutor chain...
I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search
Action Input: "Leo DiCaprio girlfriend"
Observation: Leonardo DiCaprio seemed to prove a long-held theory about his love life right after splitting from girlfriend Camila Morrone just months ...
Thought: I need to find out Camila Morrone's age
Action: Search
Action Input: "Camila Morrone age"
Observation: 25 years
Thought: I need to calculate 25 raised to the 0.43 power
Action: Calculator
Action Input: 25^0.43
Observation: Answer: 3.991298452658078
Thought: I now know the final answer
Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078.
> Finished chain.
previous
AI21 Labs
next
Airbyte
By Harrison Chase
© Copyright 2023, Harrison Chase. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/aim_tracking.html |
cd136016050a-3 | Airbyte
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/aim_tracking.html |
737aed95bbc3-0 | .md
.pdf
Wolfram Alpha
Contents
Installation and Setup
Wrappers
Utility
Tool
Wolfram Alpha#
WolframAlpha is an answer engine developed by Wolfram Research.
It answers factual queries by computing answers from externally sourced data.
This page covers how to use the Wolfram Alpha API within LangChain.
Installation and Setup#
Install requirements with
pip install wolframalpha
Go to Wolfram Alpha and sign up for a developer account here
Create an app and get your APP ID
Set your APP ID as an environment variable WOLFRAM_ALPHA_APPID
Wrappers#
Utility#
There exists a WolframAlphaAPIWrapper utility which wraps this API. To import this utility:
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
For a more detailed walkthrough of this wrapper, see this notebook.
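For example, a minimal sketch, assuming WOLFRAM_ALPHA_APPID has been set as described above:
import os
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
os.environ["WOLFRAM_ALPHA_APPID"] = "..."  # your APP ID
wolfram = WolframAlphaAPIWrapper()
# ask a computational question; run() returns the computed answer as a string
print(wolfram.run("What is 2x + 5 = -3x + 7?"))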
Tool#
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
from langchain.agents import load_tools
tools = load_tools(["wolfram-alpha"])
For more information on this, see this page
previous
Wikipedia
next
Writer
Contents
Installation and Setup
Wrappers
Utility
Tool
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/wolfram_alpha.html |
2b7f4695a195-0 | .md
.pdf
Tensorflow Hub
Contents
Installation and Setup
Text Embedding Models
Tensorflow Hub#
TensorFlow Hub is a repository of trained machine learning models ready for fine-tuning and deployable anywhere.
TensorFlow Hub lets you search and discover hundreds of trained, ready-to-deploy machine learning models in one place.
Installation and Setup#
pip install tensorflow-hub
pip install tensorflow_text
Text Embedding Models#
See a usage example
from langchain.embeddings import TensorflowHubEmbeddings
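A minimal sketch of embedding a query and a few documents with the wrapper's default TensorFlow Hub model:
embeddings = TensorflowHubEmbeddings()
# embed a single query and a small batch of documents
query_vector = embeddings.embed_query("Hello, world!")
doc_vectors = embeddings.embed_documents(["foo", "bar"])
print(len(query_vector), len(doc_vectors))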
previous
Telegram
next
2Markdown
Contents
Installation and Setup
Text Embedding Models
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/tensorflow_hub.html |
92a4569c06a1-0 | .md
.pdf
Chroma
Contents
Installation and Setup
VectorStore
Retriever
Chroma#
Chroma is a database for building AI applications with embeddings.
Installation and Setup#
pip install chromadb
VectorStore#
There exists a wrapper around Chroma vector databases, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
from langchain.vectorstores import Chroma
For a more detailed walkthrough of the Chroma wrapper, see this notebook
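A minimal sketch of indexing a few texts and querying them (OpenAI embeddings are used purely as an example and require OPENAI_API_KEY; any LangChain embeddings class works):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
# build an in-memory Chroma index from a couple of texts
db = Chroma.from_texts(["Chroma stores embeddings", "LangChain wraps many vector stores"], OpenAIEmbeddings())
docs = db.similarity_search("Which database stores embeddings?", k=1)
print(docs[0].page_content)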
Retriever#
See a usage example.
from langchain.retrievers import SelfQueryRetriever
previous
CerebriumAI
next
ClearML
Contents
Installation and Setup
VectorStore
Retriever
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/chroma.html |
65cdaa269c0f-0 | .md
.pdf
CerebriumAI
Contents
Installation and Setup
Wrappers
LLM
CerebriumAI#
This page covers how to use the CerebriumAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific CerebriumAI wrappers.
Installation and Setup#
Install with pip install cerebrium
Get a CerebriumAI API key and set it as an environment variable (CEREBRIUMAI_API_KEY)
Wrappers#
LLM#
There exists a CerebriumAI LLM wrapper, which you can access with
from langchain.llms import CerebriumAI
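For example, a minimal sketch; the endpoint_url value is a placeholder for your own deployed model endpoint:
# endpoint_url points at your own CerebriumAI deployment (placeholder below)
llm = CerebriumAI(endpoint_url="https://run.cerebrium.ai/<your-endpoint>")
print(llm("Tell me a joke"))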
previous
Cassandra
next
Chroma
Contents
Installation and Setup
Wrappers
LLM
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/cerebriumai.html |
830ff027c24b-0 | .md
.pdf
Vectara
Contents
Installation and Setup
Usage
VectorStore
Vectara#
What is Vectara?
Vectara Overview:
Vectara is a developer-first API platform for building GenAI applications
To use Vectara, first sign up and create an account. Then create a corpus and an API key for indexing and searching.
You can use Vectara’s indexing API to add documents into Vectara’s index
You can use Vectara’s Search API to query Vectara’s index (which also supports Hybrid search implicitly).
You can use Vectara’s integration with LangChain as a Vector store or using the Retriever abstraction.
Installation and Setup#
To use Vectara with LangChain no special installation steps are required. You just have to provide your customer_id, corpus ID, and an API key created within the Vectara console to enable indexing and searching.
Alternatively these can be provided as environment variables
export VECTARA_CUSTOMER_ID="your_customer_id"
export VECTARA_CORPUS_ID="your_corpus_id"
export VECTARA_API_KEY="your-vectara-api-key"
Usage#
VectorStore#
There exists a wrapper around the Vectara platform, allowing you to use it as a vectorstore, whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import Vectara
To create an instance of the Vectara vectorstore:
vectara = Vectara(
vectara_customer_id=customer_id,
vectara_corpus_id=corpus_id,
vectara_api_key=api_key
)
The customer_id, corpus_id and api_key are optional, and if they are not supplied will be read from the environment variables VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID and VECTARA_API_KEY, respectively. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/vectara.html |
830ff027c24b-1 | To query the vectorstore, you can use the similarity_search method (or similarity_search_with_score), which takes a query string and returns a list of results:
results = vectara.similarity_search("what is LangChain?")
similarity_search_with_score also supports the following additional arguments:
k: number of results to return (defaults to 5)
lambda_val: the lexical matching factor for hybrid search (defaults to 0.025)
filter: a filter to apply to the results (default None)
n_sentence_context: number of sentences to include before/after the actual matching segment when returning results. This defaults to 0 so as to return the exact text segment that matches, but can be used with other values e.g. 2 or 3 to return adjacent text segments.
The results are returned as a list of relevant documents, and a relevance score of each document.
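For instance, a minimal sketch combining these arguments (values shown are illustrative):
results = vectara.similarity_search_with_score(
    "what is LangChain?",
    k=5,                   # number of results
    lambda_val=0.025,      # lexical matching factor for hybrid search
    filter=None,           # optional metadata filter
    n_sentence_context=2,  # include 2 sentences before/after each match
)
for doc, score in results:
    print(score, doc.page_content[:80])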
For more detailed examples of using the Vectara wrapper, see one of these two sample notebooks:
Chat Over Documents with Vectara
Vectara Text Generation
previous
Unstructured
next
Vespa
Contents
Installation and Setup
Usage
VectorStore
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/vectara.html |
cd0d1d858888-0 | .md
.pdf
Figma
Contents
Installation and Setup
Document Loader
Figma#
Figma is a collaborative web application for interface design.
Installation and Setup#
The Figma API requires an access token, node_ids, and a file key.
The file key can be pulled from the URL. https://www.figma.com/file/{filekey}/sampleFilename
Node IDs are also available in the URL. Click on anything and look for the ‘?node-id={node_id}’ param.
Access token instructions.
Document Loader#
See a usage example.
from langchain.document_loaders import FigmaFileLoader
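A minimal sketch; the positional arguments follow the order described above (access token, node IDs, file key), but treat the exact constructor signature as an assumption and check the usage example:
# placeholders: personal access token, node IDs, and the file key from the URL
loader = FigmaFileLoader("<access_token>", "<node_ids>", "<file_key>")
docs = loader.load()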
previous
Facebook Chat
next
ForefrontAI
Contents
Installation and Setup
Document Loader
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/figma.html |
96e2bd1ad0f0-0 | .md
.pdf
BiliBili
Contents
Installation and Setup
Document Loader
BiliBili#
Bilibili is one of the most beloved long-form video sites in China.
Installation and Setup#
pip install bilibili-api-python
Document Loader#
See a usage example.
from langchain.document_loaders import BiliBiliLoader
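A minimal sketch, assuming the loader accepts a list of video URLs (the URL is a placeholder):
loader = BiliBiliLoader(["https://www.bilibili.com/video/BV1xt411o7Xu/"])  # placeholder video URL
docs = loader.load()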
previous
Beam
next
Blackboard
Contents
Installation and Setup
Document Loader
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/bilibili.html |
c338014f07f7-0 | .md
.pdf
LangChain Decorators ✨
Contents
LangChain Decorators ✨
Quick start
Installation
Examples
Defining other parameters
Passing a memory and/or callbacks:
Simplified streaming
Prompt declarations
Documenting your prompt
Chat messages prompt
Optional sections
Output parsers
More complex structures
Binding the prompt to an object
More examples:
LangChain Decorators ✨#
LangChain Decorators is a layer on top of LangChain that provides syntactic sugar 🍭 for writing custom LangChain prompts and chains
For Feedback, Issues, Contributions - please raise an issue here:
ju-bezdek/langchain-decorators
Main principles and benefits:
more pythonic way of writing code
write multiline prompts that won't break your code flow with indentation
making use of an IDE's built-in support for hinting, type checking and doc popups to quickly peek at the function to see the prompt, the parameters it consumes, etc.
leverage all the power of 🦜🔗 LangChain ecosystem
adding support for optional parameters
easily share parameters between the prompts by binding them to one class
Here is a simple example of a code written with LangChain Decorators ✨
@llm_prompt
def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers")->str:
"""
Write me a short header for my post about {topic} for {platform} platform.
It should be for {audience} audience.
(Max 15 words)
"""
return
# run it naturally
write_me_short_post(topic="starwars")
# or
write_me_short_post(topic="starwars", platform="reddit")
Quick start#
Installation#
pip install langchain_decorators
Examples#
Good idea on how to start is to review the examples here:
jupyter notebook | rtdocs_stable/api.python.langchain.com/en/stable/integrations/langchain_decorators.html |
c338014f07f7-1 | Good idea on how to start is to review the examples here:
jupyter notebook
colab notebook
Defining other parameters#
Here we are just marking a function as a prompt with the llm_prompt decorator, effectively turning it into an LLMChain, instead of running it directly.
A standard LLMChain takes many more init parameters than just input_variables and prompt… this implementation detail is hidden in the decorator.
Here is how it works:
Using Global settings:
# define global settings for all prompts (if not set - chatGPT is the current default)
from langchain.chat_models import ChatOpenAI
from langchain_decorators import GlobalSettings
GlobalSettings.define_settings(
    default_llm=ChatOpenAI(temperature=0.0),  # this is the default... can change it here globally
    default_streaming_llm=ChatOpenAI(temperature=0.0, streaming=True),  # this is the default... will be used for streaming
)
Using predefined prompt types
#You can change the default prompt types
from langchain_decorators import PromptTypes, PromptTypeSettings
PromptTypes.AGENT_REASONING.llm = ChatOpenAI()
# Or you can just define your own ones:
class MyCustomPromptTypes(PromptTypes):
GPT4=PromptTypeSettings(llm=ChatOpenAI(model="gpt-4"))
@llm_prompt(prompt_type=MyCustomPromptTypes.GPT4)
def write_a_complicated_code(app_idea:str)->str:
...
Define the settings directly in the decorator
from langchain.llms import OpenAI
@llm_prompt(
llm=OpenAI(temperature=0.7),
stop_tokens=["\nObservation"],
...
)
def creative_writer(book_title:str)->str:
...
Passing a memory and/or callbacks:# | rtdocs_stable/api.python.langchain.com/en/stable/integrations/langchain_decorators.html |
c338014f07f7-2 | ...
Passing a memory and/or callbacks:#
To pass any of these, just declare them in the function (or use kwargs to pass anything)
@llm_prompt()
async def write_me_short_post(topic:str, platform:str="twitter", memory:SimpleMemory = None):
"""
{history_key}
Write me a short header for my post about {topic} for {platform} platform.
It should be for {audience} audience.
(Max 15 words)
"""
pass
await write_me_short_post(topic="old movies")
Simplified streaming#
If we want to leverage streaming:
we need to define prompt as async function
turn on the streaming on the decorator, or we can define PromptType with streaming on
capture the stream using StreamingContext
This way we just mark which prompt should be streamed, without needing to tinker with which LLM to use or passing around and wiring a streaming handler into a particular part of our chain… just turn streaming on/off on the prompt/prompt type…
Streaming will happen only if we call it in a streaming context… there we can define a simple function to handle the stream
# this code example is complete and should run as it is
from langchain_decorators import StreamingContext, llm_prompt
# this will mark the prompt for streaming (useful if we want to stream just some prompts in our app... but don't want to pass/distribute the callback handlers)
# note that only async functions can be streamed (will get an error if it's not)
@llm_prompt(capture_stream=True)
async def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"):
"""
Write me a short header for my post about {topic} for {platform} platform. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/langchain_decorators.html |
c338014f07f7-3 | It should be for {audience} audience.
(Max 15 words)
"""
pass
# just an arbitrary function to demonstrate the streaming... will be some websockets code in the real world
tokens=[]
def capture_stream_func(new_token:str):
tokens.append(new_token)
# if we want to capture the stream, we need to wrap the execution into StreamingContext...
# this will allow us to capture the stream even if the prompt call is hidden inside higher level method
# only the prompts marked with capture_stream will be captured here
with StreamingContext(stream_to_stdout=True, callback=capture_stream_func):
result = await write_me_short_post(topic="old movies")
print("Stream finished ... we can distinguish tokens thanks to alternating colors")
print("\nWe've captured",len(tokens),"tokens🎉\n")
print("Here is the result:")
print(result)
Prompt declarations#
By default the prompt is the whole function docstring, unless you mark your prompt
Documenting your prompt#
We can specify what part of our docs is the prompt definition, by specifying a code block with language tag
@llm_prompt
def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"):
"""
Here is a good way to write a prompt as part of a function docstring, with additional documentation for devs.
It needs to be a code block, marked as a `<prompt>` language
```<prompt>
Write me a short header for my post about {topic} for {platform} platform.
It should be for {audience} audience.
(Max 15 words)
```
Now only the code block above will be used as a prompt, and the rest of the docstring will be used as a description for developers. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/langchain_decorators.html |
c338014f07f7-4 | (It also has the nice benefit that an IDE (like VS Code) will display the prompt properly (not trying to parse it as markdown, and thus failing to show new lines properly).)
"""
return
Chat messages prompt#
For chat models it is very useful to define the prompt as a set of message templates… here is how to do it:
@llm_prompt
def simulate_conversation(human_input:str, agent_role:str="a pirate"):
"""
## System message
- note the `:system` suffix inside the <prompt:_role_> tag
```<prompt:system>
You are a {agent_role} hacker. You must act like one.
You reply always in code, using python or javascript code block...
for example:
... do not reply with anything else.. just with code - respecting your role.
```
# human message
(we are using the real roles that are enforced by the LLM - GPT supports system, assistant, user)
``` <prompt:user>
Hello, who are you?
```
a reply:
``` <prompt:assistant>
\``` python <<- escaping inner code block with \ that should be part of the prompt
def hello():
print("Argh... hello you pesky pirate")
\```
```
we can also add some history using placeholder
```<prompt:placeholder>
{history}
```
```<prompt:user>
{human_input}
```
Now only the code block above will be used as a prompt, and the rest of the docstring will be used as a description for developers. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/langchain_decorators.html |
c338014f07f7-5 | (It also has the nice benefit that an IDE (like VS Code) will display the prompt properly (not trying to parse it as markdown, and thus failing to show new lines properly).)
"""
pass
the roles here are model native roles (assistant, user, system for chatGPT)
Optional sections#
you can define whole sections of your prompt that should be optional
if any input in the section is missing, the whole section won't be rendered
the syntax for this is as follows:
@llm_prompt
def prompt_with_optional_partials():
"""
this text will be rendered always, but
{? anything inside this block will be rendered only if all the {value}s parameters are not empty (None | "") ?}
you can also place it in between the words
this too will be rendered{? , but
this block will be rendered only if {this_value} and {this_value}
is not empty?} !
"""
Output parsers#
The llm_prompt decorator natively tries to detect the best output parser based on the output type (if not set, it returns the raw string).
list, dict and pydantic outputs are also supported natively (automatically)
# this code example is complete and should run as it is
from langchain_decorators import llm_prompt
@llm_prompt
def write_name_suggestions(company_business:str, count:int)->list:
""" Write me {count} good name suggestions for company that {company_business}
"""
pass
write_name_suggestions(company_business="sells cookies", count=5)
More complex structures#
for dict / pydantic outputs you need to specify the formatting instructions…
this can be tedious, that's why you can let the output parser generate the instructions for you based on the model (pydantic) | rtdocs_stable/api.python.langchain.com/en/stable/integrations/langchain_decorators.html |
c338014f07f7-6 | from langchain_decorators import llm_prompt
from pydantic import BaseModel, Field
class TheOutputStructureWeExpect(BaseModel):
name:str = Field (description="The name of the company")
headline:str = Field( description="The description of the company (for landing page)")
employees:list[str] = Field(description="5-8 fake employee names with their positions")
@llm_prompt()
def fake_company_generator(company_business:str)->TheOutputStructureWeExpect:
""" Generate a fake company that {company_business}
{FORMAT_INSTRUCTIONS}
"""
return
company = fake_company_generator(company_business="sells cookies")
# print the result nicely formatted
print("Company name: ",company.name)
print("company headline: ",company.headline)
print("company employees: ",company.employees)
Binding the prompt to an object#
from pydantic import BaseModel
from langchain_decorators import llm_prompt
class AssistantPersonality(BaseModel):
assistant_name:str
assistant_role:str
field:str
@property
def a_property(self):
return "whatever"
def hello_world(self, function_kwarg:str=None):
"""
We can reference any {field} or {a_property} inside our prompt... and combine it with {function_kwarg} in the method
"""
@llm_prompt
def introduce_your_self(self)->str:
"""
``` <prompt:system>
You are an assistant named {assistant_name}.
Your role is to act as {assistant_role}
```
```<prompt:user>
Introduce your self (in less than 20 words)
```
"""
personality = AssistantPersonality(assistant_name="John", assistant_role="a pirate") | rtdocs_stable/api.python.langchain.com/en/stable/integrations/langchain_decorators.html |
c338014f07f7-7 | personality = AssistantPersonality(assistant_name="John", assistant_role="a pirate")
print(personality.introduce_your_self(personality))
More examples:#
These and a few more examples are also available in the colab notebook here,
including the ReAct Agent re-implementation using purely LangChain Decorators
previous
LanceDB
next
Llama.cpp
Contents
LangChain Decorators ✨
Quick start
Installation
Examples
Defining other parameters
Passing a memory and/or callbacks:
Simplified streaming
Prompt declarations
Documenting your prompt
Chat messages prompt
Optional sections
Output parsers
More complex structures
Binding the prompt to an object
More examples:
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/langchain_decorators.html |
0aae1128af5a-0 | .md
.pdf
Git
Contents
Installation and Setup
Document Loader
Git#
Git is a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers collaboratively developing source code during software development.
Installation and Setup#
First, you need to install the GitPython python package.
pip install GitPython
Document Loader#
See a usage example.
from langchain.document_loaders import GitLoader
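A minimal sketch; the repository path and branch below are placeholders:
# load files from a local clone; a clone_url argument can be used to clone a remote repo first
loader = GitLoader(repo_path="./example_data/test_repo", branch="main")
docs = loader.load()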
previous
ForefrontAI
next
GitBook
Contents
Installation and Setup
Document Loader
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/git.html |
0f65fa219547-0 | .md
.pdf
Arxiv
Contents
Installation and Setup
Document Loader
Retriever
Arxiv#
arXiv is an open-access archive for 2 million scholarly articles in the fields of physics,
mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and
systems science, and economics.
Installation and Setup#
First, you need to install the arxiv python package.
pip install arxiv
Second, you need to install the PyMuPDF python package, which transforms PDF files downloaded from arxiv.org into text format.
pip install pymupdf
Document Loader#
See a usage example.
from langchain.document_loaders import ArxivLoader
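A minimal sketch; the arXiv ID is just an example query:
# download one matching paper, convert the PDF to text, and return it as Documents
docs = ArxivLoader(query="1706.03762", load_max_docs=1).load()
print(docs[0].metadata)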
Retriever#
See a usage example.
from langchain.retrievers import ArxivRetriever
previous
Argilla
next
AtlasDB
Contents
Installation and Setup
Document Loader
Retriever
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/arxiv.html |
05436d5c4a40-0 | .md
.pdf
Momento
Contents
Installation and Setup
Cache
Memory
Chat Message History Memory
Momento#
Momento Cache is the world’s first truly serverless caching service. It provides instant elasticity, scale-to-zero
capability, and blazing-fast performance.
With Momento Cache, you grab the SDK, you get an end point, input a few lines into your code, and you’re off and running.
This page covers how to use the Momento ecosystem within LangChain.
Installation and Setup#
Sign up for a free account here and get an auth token
Install the Momento Python SDK with pip install momento
Cache#
The Cache wrapper allows for Momento to be used as a serverless, distributed, low-latency cache for LLM prompts and responses.
The standard cache is the go-to use case for Momento users in any environment.
Import the cache as follows:
from langchain.cache import MomentoCache
And set up like so:
from datetime import timedelta
from momento import CacheClient, Configurations, CredentialProvider
import langchain
# Instantiate the Momento client
cache_client = CacheClient(
Configurations.Laptop.v1(),
CredentialProvider.from_environment_variable("MOMENTO_AUTH_TOKEN"),
default_ttl=timedelta(days=1))
# Choose a Momento cache name of your choice
cache_name = "langchain"
# Instantiate the LLM cache
langchain.llm_cache = MomentoCache(cache_client, cache_name)
Memory#
Momento can be used as a distributed memory store for LLMs.
Chat Message History Memory#
See this notebook for a walkthrough of how to use Momento as a memory store for chat message history.
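A minimal sketch; the from_client_params constructor and its arguments below are assumptions based on that walkthrough, so verify them against the notebook:
from datetime import timedelta
from langchain.memory import MomentoChatMessageHistory
# session_id groups messages for one conversation; the cache name matches the cache created above
history = MomentoChatMessageHistory.from_client_params("my-session", "langchain", ttl=timedelta(days=1))  # assumed signature
history.add_user_message("Hi there!")
print(history.messages)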
previous
Modern Treasury
next
MyScale
Contents
Installation and Setup
Cache
Memory
Chat Message History Memory
By Harrison Chase | rtdocs_stable/api.python.langchain.com/en/stable/integrations/momento.html |
05436d5c4a40-1 | Installation and Setup
Cache
Memory
Chat Message History Memory
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/momento.html |
69dbfcd75cd9-0 | .md
.pdf
Azure Cognitive Search
Contents
Installation and Setup
Retriever
Azure Cognitive Search#
Azure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
Search is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. When you create a search service, you’ll work with the following capabilities:
A search engine for full text search over a search index containing user-owned content
Rich indexing, with lexical analysis and optional AI enrichment for content extraction and transformation
Rich query syntax for text search, fuzzy search, autocomplete, geo-search and more
Programmability through REST APIs and client libraries in Azure SDKs
Azure integration at the data layer, machine learning layer, and AI (Cognitive Services)
Installation and Setup#
See set up instructions.
Retriever#
See a usage example.
from langchain.retrievers import AzureCognitiveSearchRetriever
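A minimal sketch; the environment variable names and the content_key argument are assumptions based on the usage example, and the values are placeholders:
import os
# service name, index name and API key for your search service (placeholders)
os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"] = "<your-service-name>"
os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"] = "<your-index-name>"
os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"] = "<your-api-key>"
retriever = AzureCognitiveSearchRetriever(content_key="content")
docs = retriever.get_relevant_documents("what is langchain?")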
previous
Azure Blob Storage
next
Azure OpenAI
Contents
Installation and Setup
Retriever
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/azure_cognitive_search_.html |
a5135281c052-0 | .md
.pdf
Databerry
Contents
Installation and Setup
Retriever
Databerry#
Databerry is an open source document retrieval platform that helps to connect your personal data with Large Language Models.
Installation and Setup#
We need to sign up for Databerry, create a datastore, add some data, and get the datastore API endpoint URL.
We also need the API key.
Retriever#
See a usage example.
from langchain.retrievers import DataberryRetriever
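A minimal sketch; the datastore_url and api_key parameter names follow the setup description above but should be treated as assumptions (values are placeholders):
retriever = DataberryRetriever(
    datastore_url="https://api.databerry.ai/query/<datastore-id>",  # placeholder endpoint URL
    api_key="<your-api-key>",
)
docs = retriever.get_relevant_documents("What is Databerry?")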
previous
C Transformers
next
Databricks
Contents
Installation and Setup
Retriever
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/databerry.html |
d7ebf6c8613b-0 | .md
.pdf
OpenSearch
Contents
Installation and Setup
Wrappers
VectorStore
OpenSearch#
This page covers how to use the OpenSearch ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific OpenSearch wrappers.
Installation and Setup#
Install the Python package with pip install opensearch-py
Wrappers#
VectorStore#
There exists a wrapper around OpenSearch vector databases, allowing you to use it as a vectorstore
for semantic search using approximate vector search powered by the Lucene, NMSLIB, and Faiss engines,
or using Painless scripting and script scoring functions for brute-force vector search.
To import this vectorstore:
from langchain.vectorstores import OpenSearchVectorSearch
For a more detailed walkthrough of the OpenSearch wrapper, see this notebook
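A minimal sketch of indexing and searching, assuming a local OpenSearch instance on the default port (OpenAI embeddings are used purely as an example and require OPENAI_API_KEY):
from langchain.embeddings import OpenAIEmbeddings
texts = ["OpenSearch supports approximate k-NN search", "LangChain wraps many vector stores"]
db = OpenSearchVectorSearch.from_texts(texts, OpenAIEmbeddings(), opensearch_url="http://localhost:9200")
docs = db.similarity_search("Which engine supports k-NN?", k=1)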
previous
OpenAI
next
OpenWeatherMap
Contents
Installation and Setup
Wrappers
VectorStore
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/opensearch.html |
31214c002829-0 | .md
.pdf
Wikipedia
Contents
Installation and Setup
Document Loader
Retriever
Wikipedia#
Wikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.
Installation and Setup#
pip install wikipedia
Document Loader#
See a usage example.
from langchain.document_loaders import WikipediaLoader
Retriever#
See a usage example.
from langchain.retrievers import WikipediaRetriever
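A minimal sketch of the retriever with its default settings (the query is just an example):
retriever = WikipediaRetriever()
docs = retriever.get_relevant_documents("Large language model")
print(docs[0].metadata)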
previous
WhyLabs
next
Wolfram Alpha
Contents
Installation and Setup
Document Loader
Retriever
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/wikipedia.html |
f675f15365ee-0 | .md
.pdf
IMSDb
Contents
Installation and Setup
Document Loader
IMSDb#
IMSDb is the Internet Movie Script Database.
Installation and Setup#
There isn’t any special setup for it.
Document Loader#
See a usage example.
from langchain.document_loaders import IMSDbLoader
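A minimal sketch; the loader takes the page URL of a script on IMSDb (the URL below is a placeholder):
loader = IMSDbLoader("https://imsdb.com/scripts/BlacKkKlansman.html")  # placeholder script URL
docs = loader.load()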
previous
iFixit
next
Jina
Contents
Installation and Setup
Document Loader
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/imsdb.html |
2f7c1849649e-0 | .md
.pdf
Gutenberg
Contents
Installation and Setup
Document Loader
Gutenberg#
Project Gutenberg is an online library of free eBooks.
Installation and Setup#
There isn’t any special setup for it.
Document Loader#
See a usage example.
from langchain.document_loaders import GutenbergLoader
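A minimal sketch; the loader expects a link to a plain-text ebook file, and the URL below is a placeholder:
loader = GutenbergLoader("https://www.gutenberg.org/cache/epub/69972/pg69972.txt")  # placeholder .txt URL
docs = loader.load()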
previous
Graphsignal
next
Hacker News
Contents
Installation and Setup
Document Loader
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/gutenberg.html |
b8e41501dda9-0 | .md
.pdf
Metal
Contents
What is Metal?
Quick start
Metal#
This page covers how to use Metal within LangChain.
What is Metal?#
Metal is a managed retrieval & memory platform built for production. Easily index your data into Metal and run semantic search and retrieval on it.
Quick start#
Get started by creating a Metal account.
Then, you can easily take advantage of the MetalRetriever class to start retrieving your data for semantic search, prompting context, etc. This class takes a Metal instance and a dictionary of parameters to pass to the Metal API.
from langchain.retrievers import MetalRetriever
from metal_sdk.metal import Metal
metal = Metal("API_KEY", "CLIENT_ID", "INDEX_ID")
retriever = MetalRetriever(metal, params={"limit": 2})
docs = retriever.get_relevant_documents("search term")
previous
MediaWikiDump
next
Microsoft OneDrive
Contents
What is Metal?
Quick start
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/metal.html |
f5bbbec33ff2-0 | .ipynb
.pdf
MLflow
MLflow#
This notebook goes over how to track your LangChain experiments in your MLflow server
!pip install azureml-mlflow
!pip install pandas
!pip install textstat
!pip install spacy
!pip install openai
!pip install google-search-results
!python -m spacy download en_core_web_sm
import os
os.environ["MLFLOW_TRACKING_URI"] = ""
os.environ["OPENAI_API_KEY"] = ""
os.environ["SERPAPI_API_KEY"] = ""
from langchain.callbacks import MlflowCallbackHandler
from langchain.llms import OpenAI
"""Main function.
This function is used to try the callback handler.
Scenarios:
1. OpenAI LLM
2. Chain with multiple SubChains on multiple generations
3. Agent with Tools
"""
mlflow_callback = MlflowCallbackHandler()
llm = OpenAI(model_name="gpt-3.5-turbo", temperature=0, callbacks=[mlflow_callback], verbose=True)
# SCENARIO 1 - LLM
llm_result = llm.generate(["Tell me a joke"])
mlflow_callback.flush_tracker(llm)
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
# SCENARIO 2 - Chain
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=[mlflow_callback])
test_prompts = [
{ | rtdocs_stable/api.python.langchain.com/en/stable/integrations/mlflow_tracking.html |
f5bbbec33ff2-1 | test_prompts = [
{
"title": "documentary about good video games that push the boundary of game design"
},
]
synopsis_chain.apply(test_prompts)
mlflow_callback.flush_tracker(synopsis_chain)
from langchain.agents import initialize_agent, load_tools
from langchain.agents import AgentType
# SCENARIO 3 - Agent with Tools
tools = load_tools(["serpapi", "llm-math"], llm=llm, callbacks=[mlflow_callback])
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
callbacks=[mlflow_callback],
verbose=True,
)
agent.run(
"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
)
mlflow_callback.flush_tracker(agent, finish=True)
previous
Milvus
next
Modal
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/mlflow_tracking.html |
5ee4730a5a23-0 | .ipynb
.pdf
Databricks
Contents
Installation and Setup
Connecting to Databricks
Syntax
Required Parameters
Optional Parameters
Examples
SQL Chain example
SQL Database Agent example
Databricks#
This notebook covers how to connect to the Databricks runtimes and Databricks SQL using the SQLDatabase wrapper of LangChain.
It is broken into 3 parts: installation and setup, connecting to Databricks, and examples.
Installation and Setup#
!pip install databricks-sql-connector
Connecting to Databricks#
You can connect to Databricks runtimes and Databricks SQL using the SQLDatabase.from_databricks() method.
Syntax#
SQLDatabase.from_databricks(
catalog: str,
schema: str,
host: Optional[str] = None,
api_token: Optional[str] = None,
warehouse_id: Optional[str] = None,
cluster_id: Optional[str] = None,
engine_args: Optional[dict] = None,
**kwargs: Any)
Required Parameters#
catalog: The catalog name in the Databricks database.
schema: The schema name in the catalog.
Optional Parameters#
The following parameters are optional. When executing the method in a Databricks notebook, you don't need to provide them in most cases.
host: The Databricks workspace hostname, excluding the 'https://' part. Defaults to the 'DATABRICKS_HOST' environment variable or the current workspace if in a Databricks notebook.
api_token: The Databricks personal access token for accessing the Databricks SQL warehouse or the cluster. Defaults to the 'DATABRICKS_TOKEN' environment variable, or a temporary one is generated if in a Databricks notebook.
warehouse_id: The warehouse ID in the Databricks SQL. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/databricks/databricks.html |
5ee4730a5a23-1 | warehouse_id: The warehouse ID in the Databricks SQL.
cluster_id: The cluster ID in the Databricks Runtime. If running in a Databricks notebook and both ‘warehouse_id’ and ‘cluster_id’ are None, it uses the ID of the cluster the notebook is attached to.
engine_args: The arguments to be used when connecting to Databricks.
**kwargs: Additional keyword arguments for the SQLDatabase.from_uri method.
Examples#
# Connecting to Databricks with SQLDatabase wrapper
from langchain import SQLDatabase
db = SQLDatabase.from_databricks(catalog='samples', schema='nyctaxi')
# Creating an OpenAI Chat LLM wrapper
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(temperature=0, model_name="gpt-4")
SQL Chain example#
This example demonstrates the use of the SQL Chain for answering a question over a Databricks database.
from langchain import SQLDatabaseChain
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
db_chain.run("What is the average duration of taxi rides that start between midnight and 6am?")
> Entering new SQLDatabaseChain chain...
What is the average duration of taxi rides that start between midnight and 6am?
SQLQuery:SELECT AVG(UNIX_TIMESTAMP(tpep_dropoff_datetime) - UNIX_TIMESTAMP(tpep_pickup_datetime)) as avg_duration
FROM trips
WHERE HOUR(tpep_pickup_datetime) >= 0 AND HOUR(tpep_pickup_datetime) < 6
SQLResult: [(987.8122786304605,)]
Answer:The average duration of taxi rides that start between midnight and 6am is 987.81 seconds.
> Finished chain.
'The average duration of taxi rides that start between midnight and 6am is 987.81 seconds.' | rtdocs_stable/api.python.langchain.com/en/stable/integrations/databricks/databricks.html |
5ee4730a5a23-2 | SQL Database Agent example#
This example demonstrates the use of the SQL Database Agent for answering questions over a Databricks database.
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent = create_sql_agent(
llm=llm,
toolkit=toolkit,
verbose=True
)
agent.run("What is the longest trip distance and how long did it take?")
> Entering new AgentExecutor chain...
Action: list_tables_sql_db
Action Input:
Observation: trips
Thought:I should check the schema of the trips table to see if it has the necessary columns for trip distance and duration.
Action: schema_sql_db
Action Input: trips
Observation:
CREATE TABLE trips (
tpep_pickup_datetime TIMESTAMP,
tpep_dropoff_datetime TIMESTAMP,
trip_distance FLOAT,
fare_amount FLOAT,
pickup_zip INT,
dropoff_zip INT
) USING DELTA
/*
3 rows from trips table:
tpep_pickup_datetime tpep_dropoff_datetime trip_distance fare_amount pickup_zip dropoff_zip
2016-02-14 16:52:13+00:00 2016-02-14 17:16:04+00:00 4.94 19.0 10282 10171
2016-02-04 18:44:19+00:00 2016-02-04 18:46:00+00:00 0.28 3.5 10110 10110 | rtdocs_stable/api.python.langchain.com/en/stable/integrations/databricks/databricks.html |
5ee4730a5a23-3 | 2016-02-17 17:13:57+00:00 2016-02-17 17:17:55+00:00 0.7 5.0 10103 10023
*/
Thought:The trips table has the necessary columns for trip distance and duration. I will write a query to find the longest trip distance and its duration.
Action: query_checker_sql_db
Action Input: SELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1
Observation: SELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1
Thought:The query is correct. I will now execute it to find the longest trip distance and its duration.
Action: query_sql_db
Action Input: SELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1
Observation: [(30.6, '0 00:43:31.000000000')]
Thought:I now know the final answer.
Final Answer: The longest trip distance is 30.6 miles and it took 43 minutes and 31 seconds.
> Finished chain.
'The longest trip distance is 30.6 miles and it took 43 minutes and 31 seconds.'
Contents
Installation and Setup
Connecting to Databricks
Syntax
Required Parameters
Optional Parameters
Examples
SQL Chain example
SQL Database Agent example
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/databricks/databricks.html |
0bda3b67aeef-0 | .ipynb
.pdf
Vectara Text Generation
Contents
Prepare Data
Set Up Vector DB
Set Up LLM Chain with Custom Prompt
Generate Text
Vectara Text Generation#
This notebook is based on the text generation notebook and adapted to Vectara.
Prepare Data#
First, we prepare the data. For this example, we fetch a documentation site that consists of markdown files hosted on Github and split them into small enough Documents.
import os
from langchain.llms import OpenAI
from langchain.docstore.document import Document
import requests
from langchain.vectorstores import Vectara
from langchain.text_splitter import CharacterTextSplitter
from langchain.prompts import PromptTemplate
import pathlib
import subprocess
import tempfile
def get_github_docs(repo_owner, repo_name):
with tempfile.TemporaryDirectory() as d:
subprocess.check_call(
f"git clone --depth 1 https://github.com/{repo_owner}/{repo_name}.git .",
cwd=d,
shell=True,
)
git_sha = (
subprocess.check_output("git rev-parse HEAD", shell=True, cwd=d)
.decode("utf-8")
.strip()
)
repo_path = pathlib.Path(d)
markdown_files = list(repo_path.glob("*/*.md")) + list(
repo_path.glob("*/*.mdx")
)
for markdown_file in markdown_files:
with open(markdown_file, "r") as f:
relative_path = markdown_file.relative_to(repo_path)
github_url = f"https://github.com/{repo_owner}/{repo_name}/blob/{git_sha}/{relative_path}"
yield Document(page_content=f.read(), metadata={"source": github_url})
sources = get_github_docs("yirenlu92", "deno-manual-forked") | rtdocs_stable/api.python.langchain.com/en/stable/integrations/vectara/vectara_text_generation.html |
0bda3b67aeef-1 | source_chunks = []
splitter = CharacterTextSplitter(separator=" ", chunk_size=1024, chunk_overlap=0)
for source in sources:
for chunk in splitter.split_text(source.page_content):
source_chunks.append(chunk)
Cloning into '.'...
Set Up Vector DB#
Now that we have the documentation content in chunks, let’s put all this information in a vector index for easy retrieval.
import os
search_index = Vectara.from_texts(source_chunks, embedding=None)
Set Up LLM Chain with Custom Prompt#
Next, let’s set up a simple LLM chain but give it a custom prompt for blog post generation. Note that the custom prompt is parameterized and takes two inputs: context, which will be the documents fetched from the vector search, and topic, which is given by the user.
from langchain.chains import LLMChain
prompt_template = """Use the context below to write a 400 word blog post about the topic below:
Context: {context}
Topic: {topic}
Blog post:"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "topic"]
)
llm = OpenAI(openai_api_key=os.environ['OPENAI_API_KEY'], temperature=0)
chain = LLMChain(llm=llm, prompt=PROMPT)
Generate Text#
Finally, we write a function to apply our inputs to the chain. The function takes an input parameter topic. We find the documents in the vector index that correspond to that topic, and use them as additional context in our simple LLM chain.
def generate_blog_post(topic):
docs = search_index.similarity_search(topic, k=4)
inputs = [{"context": doc.page_content, "topic": topic} for doc in docs]
print(chain.apply(inputs)) | rtdocs_stable/api.python.langchain.com/en/stable/integrations/vectara/vectara_text_generation.html |
0bda3b67aeef-2 | print(chain.apply(inputs))
generate_blog_post("environment variables") | rtdocs_stable/api.python.langchain.com/en/stable/integrations/vectara/vectara_text_generation.html |
0bda3b67aeef-3 | [{'text': '\n\nEnvironment variables are a powerful tool for managing configuration settings in your applications. They allow you to store and access values from anywhere in your code, making it easier to keep your codebase organized and maintainable.\n\nHowever, there are times when you may want to use environment variables specifically for a single command. This is where shell variables come in. Shell variables are similar to environment variables, but they won\'t be exported to spawned commands. They are defined with the following syntax:\n\n```sh\nVAR_NAME=value\n```\n\nFor example, if you wanted to use a shell variable instead of an environment variable in a command, you could do something like this:\n\n```sh\nVAR=hello && echo $VAR && deno eval "console.log(\'Deno: \' + Deno.env.get(\'VAR\'))"\n```\n\nThis would output the following:\n\n```\nhello\nDeno: undefined\n```\n\nShell variables can be useful when you want to re-use a value, but don\'t want it available in any spawned processes.\n\nAnother way to use environment variables is through pipelines. Pipelines provide a way to pipe the'}, {'text': '\n\nEnvironment variables are a great way to store and access sensitive information in your applications. They are also useful for configuring applications and managing different environments. In Deno, there are two ways to use environment variables: the built-in `Deno.env` and the `.env` file.\n\nThe `Deno.env` is a built-in feature of the Deno runtime that allows you to set and get environment variables. It has getter and setter methods that you can use to access and set environment variables. For example, you can set the `FIREBASE_API_KEY` and `FIREBASE_AUTH_DOMAIN` environment variables like | rtdocs_stable/api.python.langchain.com/en/stable/integrations/vectara/vectara_text_generation.html |
0bda3b67aeef-4 | set the `FIREBASE_API_KEY` and `FIREBASE_AUTH_DOMAIN` environment variables like this:\n\n```ts\nDeno.env.set("FIREBASE_API_KEY", "examplekey123");\nDeno.env.set("FIREBASE_AUTH_DOMAIN", "firebasedomain.com");\n\nconsole.log(Deno.env.get("FIREBASE_API_KEY")); // examplekey123\nconsole.log(Deno.env.get("FIREBASE_AUTH_DOMAIN")); // firebasedomain'}, {'text': "\n\nEnvironment variables are a powerful tool for managing configuration and settings in your applications. They allow you to store and access values that can be used in your code, and they can be set and changed without having to modify your code.\n\nIn Deno, environment variables are defined using the `export` command. For example, to set a variable called `VAR_NAME` to the value `value`, you would use the following command:\n\n```sh\nexport VAR_NAME=value\n```\n\nYou can then access the value of the environment variable in your code using the `Deno.env.get()` method. For example, if you wanted to log the value of the `VAR_NAME` variable, you could use the following code:\n\n```js\nconsole.log(Deno.env.get('VAR_NAME'));\n```\n\nYou can also set environment variables for a single command. To do this, you can list the environment variables before the command, like so:\n\n```\nVAR=hello VAR2=bye deno run main.ts\n```\n\nThis will set the environment variables `VAR` and `V"}, {'text': "\n\nEnvironment variables are a powerful tool for managing settings and configuration in your applications. They can be used to store information such as user preferences, application settings, and even passwords. In this blog post, we'll discuss how to make Deno scripts executable with a hashbang | rtdocs_stable/api.python.langchain.com/en/stable/integrations/vectara/vectara_text_generation.html |
0bda3b67aeef-5 | In this blog post, we'll discuss how to make Deno scripts executable with a hashbang (shebang).\n\nA hashbang is a line of code that is placed at the beginning of a script. It tells the system which interpreter to use when running the script. In the case of Deno, the hashbang should be `#!/usr/bin/env -S deno run --allow-env`. This tells the system to use the Deno interpreter and to allow the script to access environment variables.\n\nOnce the hashbang is in place, you may need to give the script execution permissions. On Linux, this can be done with the command `sudo chmod +x hashbang.ts`. After that, you can execute the script by calling it like any other command: `./hashbang.ts`.\n\nIn the example program, we give the context permission to access the environment variables and print the Deno installation path. This is done by using the `Deno.env.get()` function, which returns the value of the specified environment"}] | rtdocs_stable/api.python.langchain.com/en/stable/integrations/vectara/vectara_text_generation.html |
0bda3b67aeef-6 | Contents
Prepare Data
Set Up Vector DB
Set Up LLM Chain with Custom Prompt
Generate Text
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/vectara/vectara_text_generation.html |
f27c0624c8a2-0 | .ipynb
.pdf
Chat Over Documents with Vectara
Contents
Pass in chat history
Return Source Documents
ConversationalRetrievalChain with search_distance
ConversationalRetrievalChain with map_reduce
ConversationalRetrievalChain with Question Answering with sources
ConversationalRetrievalChain with streaming to stdout
get_chat_history Function
Chat Over Documents with Vectara#
This notebook is based on the chat_vector_db notebook, but using Vectara as the vector database.
import os
from langchain.vectorstores import Vectara
from langchain.vectorstores.vectara import VectaraRetriever
from langchain.llms import OpenAI
from langchain.chains import ConversationalRetrievalChain
Load in documents. You can replace this with a loader for whatever type of data you want
from langchain.document_loaders import TextLoader
loader = TextLoader("../../modules/state_of_the_union.txt")
documents = loader.load()
We now split the documents, create embeddings for them, and put them in a vectorstore. This allows us to do semantic search over them.
vectorstore = Vectara.from_documents(documents, embedding=None)
We can now create a memory object, which is necessary to track the inputs/outputs and hold a conversation.
from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
We now initialize the ConversationalRetrievalChain
openai_api_key = os.environ['OPENAI_API_KEY']
llm = OpenAI(openai_api_key=openai_api_key, temperature=0)
retriever = vectorstore.as_retriever(lambda_val=0.025, k=5, filter=None)
d = retriever.get_relevant_documents('What did the president say about Ketanji Brown Jackson') | rtdocs_stable/api.python.langchain.com/en/stable/integrations/vectara/vectara_chat.html |
f27c0624c8a2-1 | qa = ConversationalRetrievalChain.from_llm(llm, retriever, memory=memory)
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query})
result["answer"]
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence."
query = "Did he mention who she suceeded"
result = qa({"question": query})
result['answer']
' Justice Stephen Breyer'
Pass in chat history#
In the above example, we used a Memory object to track chat history. We can also just pass it in explicitly. In order to do this, we need to initialize a chain without any memory object.
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever())
Here’s an example of asking a question with no chat history
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
result["answer"]
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence."
Here’s an example of asking a question with some chat history
chat_history = [(query, result["answer"])]
query = "Did he mention who she suceeded"
result = qa({"question": query, "chat_history": chat_history})
result['answer']
' Justice Stephen Breyer'
Return Source Documents#
You can also easily return source documents from the ConversationalRetrievalChain. This is useful for when you want to inspect what documents were returned. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/vectara/vectara_chat.html |
f27c0624c8a2-2 | qa = ConversationalRetrievalChain.from_llm(llm, vectorstore.as_retriever(), return_source_documents=True)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
result['source_documents'][0]
Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})
ConversationalRetrievalChain with search_distance#
If you are using a vector store that supports filtering by search distance, you can add a threshold value parameter.
vectordbkwargs = {"search_distance": 0.9}
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history, "vectordbkwargs": vectordbkwargs})
print(result['answer']) | rtdocs_stable/api.python.langchain.com/en/stable/integrations/vectara/vectara_chat.html |
f27c0624c8a2-3 | print(result['answer'])
The president said that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.
ConversationalRetrievalChain with map_reduce#
We can also use different types of combine document chains with the ConversationalRetrievalChain chain.
from langchain.chains import LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(llm, chain_type="map_reduce")
chain = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(),
question_generator=question_generator,
combine_docs_chain=doc_chain,
)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
result['answer']
" The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, who he described as one of the nation's top legal minds, to continue Justice Breyer's legacy of excellence."
ConversationalRetrievalChain with Question Answering with sources#
You can also use this chain with the question answering with sources chain.
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")
chain = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(),
question_generator=question_generator, | rtdocs_stable/api.python.langchain.com/en/stable/integrations/vectara/vectara_chat.html |
f27c0624c8a2-4 | retriever=vectorstore.as_retriever(),
question_generator=question_generator,
combine_docs_chain=doc_chain,
)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
result['answer']
" The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, who he described as one of the nation's top legal minds, and that she will continue Justice Breyer's legacy of excellence.\nSOURCES: ../../../state_of_the_union.txt"
ConversationalRetrievalChain with streaming to stdout#
Output from the chain will be streamed to stdout token by token in this example.
from langchain.chains.llm import LLMChain
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT
from langchain.chains.question_answering import load_qa_chain
# Construct a ConversationalRetrievalChain with a streaming llm for combine docs
# and a separate, non-streaming llm for question generation
llm = OpenAI(temperature=0, openai_api_key=openai_api_key)
streaming_llm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0, openai_api_key=openai_api_key)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(streaming_llm, chain_type="stuff", prompt=QA_PROMPT)
qa = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(), combine_docs_chain=doc_chain, question_generator=question_generator)
chat_history = [] | rtdocs_stable/api.python.langchain.com/en/stable/integrations/vectara/vectara_chat.html |
f27c0624c8a2-5 | chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
The president said that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.
chat_history = [(query, result["answer"])]
query = "Did he mention who she suceeded"
result = qa({"question": query, "chat_history": chat_history})
Justice Stephen Breyer
get_chat_history Function#
You can also specify a get_chat_history function, which can be used to format the chat_history string.
def get_chat_history(inputs) -> str:
res = []
for human, ai in inputs:
res.append(f"Human:{human}\nAI:{ai}")
return "\n".join(res)
qa = ConversationalRetrievalChain.from_llm(llm, vectorstore.as_retriever(), get_chat_history=get_chat_history)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
result['answer']
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence."
Contents
Pass in chat history
Return Source Documents
ConversationalRetrievalChain with search_distance
ConversationalRetrievalChain with map_reduce
ConversationalRetrievalChain with Question Answering with sources
ConversationalRetrievalChain with streaming to stdout
get_chat_history Function
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023. | rtdocs_stable/api.python.langchain.com/en/stable/integrations/vectara/vectara_chat.html |