Cohere#
Cohere is a Canadian startup that provides natural language processing models
that help companies improve human-machine interactions.
Installation and Setup#
Install the Python SDK:
pip install cohere
Get a Cohere API key and set it as an environment variable (COHERE_API_KEY)
LLM#
There exists a Cohere LLM wrapper, which you can access with
See a usage example.
from langchain.llms import Cohere
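For instance, a minimal sketch (the model name is illustrative; assumes COHERE_API_KEY is set in your environment):
llm = Cohere(model="command", temperature=0)  # model name is an assumption; check Cohere's docs
print(llm("Say hello in one sentence."))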
Text Embedding Model#
There exists a Cohere embedding model, which you can access with
from langchain.embeddings import CohereEmbeddings
For a more detailed walkthrough of this, see this notebook
Retriever#
See a usage example.
from langchain.retrievers.document_compressors import CohereRerank
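A minimal sketch of plugging the reranker into a retriever (assumes COHERE_API_KEY is set; base_retriever stands in for any existing retriever):
from langchain.retrievers import ContextualCompressionRetriever
# base_retriever is assumed to be any existing retriever, e.g. a vectorstore's .as_retriever()
compressor = CohereRerank()
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=base_retriever
)
docs = compression_retriever.get_relevant_documents("your query here")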
Discord#
Discord is a VoIP and instant messaging social platform. Users have the ability to communicate
with voice calls, video calls, text messaging, media and files in private chats or as part of communities called
“servers”. A server is a collection of persistent chat rooms and voice channels which can be accessed via invite links.
Installation and Setup#
pip install pandas
Follow these steps to download your Discord data:
Go to your User Settings
Then go to Privacy and Safety
Go to Request all of my Data and click the Request Data button
It might take up to 30 days to receive your data. You’ll receive an email at the address registered
with Discord, containing a download button that lets you download your personal Discord data.
Document Loader#
See a usage example.
from langchain.document_loaders import DiscordChatLoader
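As a rough sketch (the DataFrame construction and column names are illustrative; in practice you would build the DataFrame from your exported message files):
import pandas as pd
# Illustrative stand-in for your exported Discord messages.
df = pd.DataFrame({"ID": ["user#0001"], "Content": ["Hello from Discord!"]})
loader = DiscordChatLoader(chat_log=df, user_id_col="ID")
documents = loader.load()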
EverNote#
EverNote is intended for archiving and creating notes in which photos, audio and saved web content can be embedded. Notes are stored in virtual “notebooks” and can be tagged, annotated, edited, searched, and exported.
Installation and Setup#
First, you need to install the lxml and html2text Python packages.
pip install lxml
pip install html2text
Document Loader#
See a usage example.
from langchain.document_loaders import EverNoteLoader
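A minimal sketch, assuming you have exported a notebook to an .enex file (the path is illustrative):
loader = EverNoteLoader("my_notebook_export.enex", load_single_document=True)
documents = loader.load()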
LanceDB#
This page covers how to use LanceDB within LangChain.
It is broken into two parts: installation and setup, and then references to specific LanceDB wrappers.
Installation and Setup#
Install the Python SDK with pip install lancedb
Wrappers#
VectorStore#
There exists a wrapper around LanceDB databases, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import LanceDB
For a more detailed walkthrough of the LanceDB wrapper, see this notebook
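As a rough end-to-end sketch (the path, table name, and embedding provider are illustrative):
import lancedb
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import LanceDB
embeddings = OpenAIEmbeddings()  # assumes OPENAI_API_KEY is set
db = lancedb.connect("/tmp/lancedb")  # illustrative local path
table = db.create_table(
    "my_table",
    data=[{"vector": embeddings.embed_query("Hello World"), "text": "Hello World", "id": "1"}],
    mode="overwrite",
)
docsearch = LanceDB.from_texts(["Hello World", "Goodbye World"], embeddings, connection=table)
docs = docsearch.similarity_search("greeting")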
Spreedly#
Spreedly is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. Payment methods tokenized by Spreedly are stored at Spreedly, allowing you to independently store a card and then pass that card to different end points based on your business requirements.
Installation and Setup#
See setup instructions.
Document Loader#
See a usage example.
from langchain.document_loaders import SpreedlyLoader
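A minimal sketch (the resource endpoint is illustrative; the access token comes from your Spreedly account):
import os
loader = SpreedlyLoader(os.environ["SPREEDLY_ACCESS_TOKEN"], "gateways_options")
documents = loader.load()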
PromptLayer#
PromptLayer is a devtool that allows you to track, manage, and share your GPT prompt engineering.
It acts as a middleware between your code and OpenAI’s python library, recording all your API requests
and saving relevant metadata for easy exploration and search in the PromptLayer dashboard.
Installation and Setup#
Install the promptlayer python library
pip install promptlayer
Create a PromptLayer account
Create an api token and set it as an environment variable (PROMPTLAYER_API_KEY)
LLM#
from langchain.llms import PromptLayerOpenAI
Example#
To tag your requests, use the argument pl_tags when instantiating the LLM
from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(pl_tags=["langchain-requests", "chatbot"])
To get the PromptLayer request id, use the argument return_pl_id when instantiating the LLM
from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(return_pl_id=True)
This will add the PromptLayer request ID to the generation_info field of the Generation returned when using .generate or .agenerate
For example:
llm_results = llm.generate(["hello world"])
for res in llm_results.generations:
print("pl request id: ", res[0].generation_info["pl_request_id"])
You can use the PromptLayer request ID to add a prompt, score, or other metadata to your request. Read more about it here.
This LLM is identical to the OpenAI LLM, except that
all your requests will be logged to your PromptLayer account
you can add pl_tags when instantiating to tag your requests on PromptLayer
you can add return_pl_id when instantiating to return a PromptLayer request id to use while tracking requests.
Chat Model#
from langchain.chat_models import PromptLayerChatOpenAI
See a usage example.
AWS S3 Directory#
Amazon Simple Storage Service (Amazon S3) is an object storage service.
AWS S3 Directory
AWS S3 Buckets
Installation and Setup#
pip install boto3
Document Loader#
See a usage example for S3DirectoryLoader.
See a usage example for S3FileLoader.
from langchain.document_loaders import S3DirectoryLoader, S3FileLoader
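A minimal sketch (bucket names and keys are illustrative; boto3 picks up AWS credentials from your environment):
# Load every object under a prefix in a bucket.
dir_loader = S3DirectoryLoader("my-bucket", prefix="docs/")
docs = dir_loader.load()
# Or load a single object by key.
file_loader = S3FileLoader("my-bucket", "docs/readme.txt")
docs = file_loader.load()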
SerpAPI#
This page covers how to use the SerpAPI search APIs within LangChain.
It is broken into two parts: installation and setup, and then references to the specific SerpAPI wrapper.
Installation and Setup#
Install requirements with pip install google-search-results
Get a SerpAPI API key and set it as an environment variable (SERPAPI_API_KEY)
Wrappers#
Utility#
There exists a SerpAPI utility which wraps this API. To import this utility:
from langchain.utilities import SerpAPIWrapper
For a more detailed walkthrough of this wrapper, see this notebook.
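For instance, a minimal sketch (assumes SERPAPI_API_KEY is set in your environment):
search = SerpAPIWrapper()
result = search.run("current weather in Berlin")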
Tool#
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
from langchain.agents import load_tools
tools = load_tools(["serpapi"])
For more information on this, see this page
Stripe#
Stripe is an Irish-American financial services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.
Installation and Setup#
See setup instructions.
Document Loader#
See a usage example.
from langchain.document_loaders import StripeLoader
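A rough sketch (the resource endpoint and the access_token parameter are assumptions based on the usage example; see the linked setup instructions for details):
loader = StripeLoader("charges", access_token="<your Stripe access token>")
documents = loader.load()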
Jina#
This page covers how to use the Jina ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Jina wrappers.
Installation and Setup#
Install the Python SDK with pip install jina
Get a Jina AI Cloud auth token from here and set it as an environment variable (JINA_AUTH_TOKEN)
Wrappers#
Embeddings#
There exists a Jina Embeddings wrapper, which you can access with
from langchain.embeddings import JinaEmbeddings
For a more detailed walkthrough of this, see this notebook
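For instance, a rough sketch (the model name is illustrative; assumes JINA_AUTH_TOKEN is set, or pass jina_auth_token explicitly):
embeddings = JinaEmbeddings(model_name="ViT-B-32::openai")
query_vector = embeddings.embed_query("Hello world")
doc_vectors = embeddings.embed_documents(["Hello world", "Goodbye world"])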
Unstructured#
The unstructured package from
Unstructured.IO extracts clean text from raw source documents like
PDFs and Word documents.
This page covers how to use the unstructured
ecosystem within LangChain.
Installation and Setup#
If you are using a loader that runs locally, use the following steps to get unstructured and
its dependencies running locally.
Install the Python SDK with pip install "unstructured[local-inference]"
Install the following system dependencies if they are not already available on your system.
Depending on what document types you’re parsing, you may not need all of these.
libmagic-dev (filetype detection)
poppler-utils (images and PDFs)
tesseract-ocr (images and PDFs)
libreoffice (MS Office docs)
pandoc (EPUBs)
If you want to get up and running with less setup, you can
simply run pip install unstructured and use UnstructuredAPIFileLoader or
UnstructuredAPIFileIOLoader. That will process your document using the hosted Unstructured API.
Note that currently (as of 1 May 2023) the Unstructured API is open, but it will soon require
an API key. The Unstructured documentation page will have
instructions on how to generate an API key once they’re available. Check out the instructions
here
if you’d like to self-host the Unstructured API or run it locally.
Wrappers#
Data Loaders#
The primary unstructured wrappers within langchain are data loaders. The following
shows how to use the most basic unstructured data loader. There are other file-specific
data loaders available in the langchain.document_loaders module.
from langchain.document_loaders import UnstructuredFileLoader
loader = UnstructuredFileLoader("state_of_the_union.txt")
loader.load()
If you instantiate the loader with UnstructuredFileLoader(mode="elements"), the loader
will track additional metadata like the page number and text type (i.e. title, narrative text)
when that information is available.
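For instance, a minimal sketch of elements mode (reusing the file from the example above):
loader = UnstructuredFileLoader("state_of_the_union.txt", mode="elements")
docs = loader.load()
# Each element carries metadata such as its category (e.g. title vs. narrative text).
print(docs[0].metadata)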
ClickHouse#
This page covers how to use ClickHouse Vector Search within LangChain.
ClickHouse is an open-source, real-time OLAP database with full SQL support and a wide range of functions to assist users in writing analytical queries. Some of these functions and data structures perform distance operations between vectors, enabling ClickHouse to be used as a vector database.
Due to the fully parallelized query pipeline, ClickHouse can process vector search operations very quickly, especially when performing exact matching through a linear scan over all rows, delivering processing speed comparable to dedicated vector databases.
High compression levels, tunable through custom compression codecs, enable very large datasets to be stored and queried. ClickHouse is not memory-bound, allowing multi-TB datasets containing embeddings to be queried.
The capabilities for computing the distance between two vectors are just another SQL function and can be effectively combined with more traditional SQL filtering and aggregation capabilities. This allows vectors to be stored and queried alongside metadata, and even rich text, enabling a broad array of use cases and applications.
Finally, experimental ClickHouse capabilities like Approximate Nearest Neighbour (ANN) indices support faster approximate matching of vectors and are a promising development aimed at further enhancing the vector-matching capabilities of ClickHouse.
Installation#
Install clickhouse server by binary or docker image
Install the Python SDK with pip install clickhouse-connect
Configure clickhouse vector index#
Customize the ClickhouseSettings object with parameters:
```python
from langchain.vectorstores import ClickHouse, ClickhouseSettings
config = ClickhouseSettings(host="<clickhouse-server-host>", port=8123, ...)
index = Clickhouse(embedding_function, config)
index.add_documents(...)
```
Wrappers#
Supported functions:
add_texts
add_documents
from_texts
from_documents
similarity_search
asimilarity_search
similarity_search_by_vector
asimilarity_search_by_vector
similarity_search_with_relevance_scores
VectorStore#
There exists a wrapper around the open-source ClickHouse database, allowing you to use it as a vectorstore,
whether for semantic search or similar example retrieval.
To import this vectorstore:
from langchain.vectorstores import Clickhouse
For a more detailed walkthrough of the ClickHouse wrapper, see this notebook
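As a rough end-to-end sketch (the connection details and embedding provider are illustrative; assumes a running ClickHouse server):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Clickhouse, ClickhouseSettings

embeddings = OpenAIEmbeddings()  # assumes OPENAI_API_KEY is set
config = ClickhouseSettings(host="localhost", port=8123)  # illustrative connection details
docsearch = Clickhouse.from_texts(["Hello ClickHouse"], embeddings, config=config)
docs = docsearch.similarity_search("greeting")
```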
AtlasDB#
This page covers how to use Nomic’s Atlas ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Atlas wrappers.
Installation and Setup#
Install the Python package with pip install nomic
Nomic is also included in LangChain’s Poetry extras: poetry install -E all
Wrappers#
VectorStore#
There exists a wrapper around the Atlas neural database, allowing you to use it as a vectorstore.
This vectorstore also gives you full access to the underlying AtlasProject object, which will allow you to use the full range of Atlas map interactions, such as bulk tagging and automatic topic modeling.
Please see the Atlas docs for more detailed information.
To import this vectorstore:
from langchain.vectorstores import AtlasDB
For a more detailed walkthrough of the AtlasDB wrapper, see this notebook
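As a rough sketch (the project name and options are illustrative; assumes a Nomic API key):
db = AtlasDB.from_texts(
    texts=["Hello Atlas"],
    name="my_project",  # illustrative project name
    api_key="<your Nomic API key>",
    index_kwargs={"build_topic_model": True},
)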
GitBook#
GitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.
Installation and Setup#
There isn’t any special setup for it.
Document Loader#
See a usage example.
from langchain.document_loaders import GitbookLoader
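A minimal sketch (the URL is illustrative):
# Load a single page, or crawl every path reachable from the root page.
single_page = GitbookLoader("https://docs.gitbook.com").load()
all_pages = GitbookLoader("https://docs.gitbook.com", load_all_paths=True).load()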
Diffbot#
Diffbot is a service to read web pages. Unlike traditional web scraping tools,
Diffbot doesn’t require any rules to read the content on a page.
It starts with computer vision, which classifies a page into one of 20 possible types. Content is then interpreted by a machine learning model trained to identify the key attributes on a page based on its type.
The result is a website transformed into clean-structured data (like JSON or CSV), ready for your application.
Installation and Setup#
Read the instructions on how to get a Diffbot API Token.
Document Loader#
See a usage example.
from langchain.document_loaders import DiffbotLoader
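A minimal sketch (the URL and token are illustrative):
loader = DiffbotLoader(
    urls=["https://python.langchain.com/en/latest/index.html"],
    api_token="<your Diffbot API token>",
)
documents = loader.load()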
Argilla#
Argilla is an open-source data curation platform for LLMs.
Using Argilla, everyone can build robust language models through faster data curation
using both human and machine feedback. It provides support for each step in the MLOps cycle,
from data labeling to model monitoring.
Installation and Setup#
First, you’ll need to install the argilla Python package as follows:
pip install argilla --upgrade
If you already have an Argilla Server running, you’re good to go. If you don’t,
refer to Argilla - 🚀 Quickstart to deploy Argilla on Hugging Face Spaces, locally, or on a server.
Tracking#
See a usage example of ArgillaCallbackHandler.
from langchain.callbacks import ArgillaCallbackHandler
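A rough sketch of attaching the callback to an LLM (the dataset name, URL, and keys are illustrative; the dataset must already exist on your Argilla server):
from langchain.llms import OpenAI
argilla_callback = ArgillaCallbackHandler(
    dataset_name="langchain-dataset",
    api_url="http://localhost:6900",
    api_key="<your Argilla API key>",
)
llm = OpenAI(temperature=0.9, callbacks=[argilla_callback])  # assumes OPENAI_API_KEY is set
llm("Tell me a joke")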
Llama.cpp#
This page covers how to use llama.cpp within LangChain.
It is broken into two parts: installation and setup, and then references to specific Llama-cpp wrappers.
Installation and Setup#
Install the Python package with pip install llama-cpp-python
Download one of the supported models and convert it to the llama.cpp format per the instructions
Wrappers#
LLM#
There exists a LlamaCpp LLM wrapper, which you can access with
from langchain.llms import LlamaCpp
For a more detailed walkthrough of this, see this notebook
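For instance, a minimal sketch (the model path is illustrative; point it at a model you converted to the llama.cpp format):
llm = LlamaCpp(model_path="./models/ggml-model-q4_0.bin")
print(llm("Question: What is the capital of France? Answer:"))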
Embeddings#
There exists a LlamaCpp Embeddings wrapper, which you can access with
from langchain.embeddings import LlamaCppEmbeddings
For a more detailed walkthrough of this, see this notebook
Microsoft Word#
Microsoft Word is a word processor developed by Microsoft.
Installation and Setup#
There isn’t any special setup for it.
Document Loader#
See a usage example.
from langchain.document_loaders import UnstructuredWordDocumentLoader
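A minimal sketch (the file name is illustrative):
loader = UnstructuredWordDocumentLoader("example_data/fake.docx")
documents = loader.load()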
Modern Treasury#
Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money.
Connect to banks and payment systems
Track transactions and balances in real-time
Automate payment operations for scale
Installation and Setup#
There isn’t any special setup for it.
Document Loader#
See a usage example.
from langchain.document_loaders import ModernTreasuryLoader
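A minimal sketch (the resource endpoint is illustrative; the environment variable names follow the usage example):
import os
os.environ["MODERN_TREASURY_ORGANIZATION_ID"] = "<your organization id>"
os.environ["MODERN_TREASURY_API_KEY"] = "<your API key>"
loader = ModernTreasuryLoader("payment_orders")
documents = loader.load()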
Obsidian#
Obsidian is a powerful and extensible knowledge base
that works on top of your local folder of plain text files.
Installation and Setup#
All instructions are in the example below.
Document Loader#
See a usage example.
from langchain.document_loaders import ObsidianLoader
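A minimal sketch (the vault path is illustrative):
loader = ObsidianLoader("<path-to-your-obsidian-vault>")
documents = loader.load()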
Ray Serve#
Ray Serve is a scalable model serving library for building online inference APIs. Serve is particularly well suited for system composition, enabling you to build a complex inference service consisting of multiple chains and business logic all in Python code.
Goal of this notebook#
This notebook shows a simple example of how to deploy an OpenAI chain into production. You can extend it to deploy your own self-hosted models, where you can easily define the amount of hardware resources (GPUs and CPUs) needed to run your model in production efficiently. Read more about the available options, including autoscaling, in the Ray Serve documentation.
Setup Ray Serve#
Install ray with pip install ray[serve].
General Skeleton#
The general skeleton for deploying a service is the following:
# 0: Import ray serve and request from starlette
from ray import serve
from starlette.requests import Request
# 1: Define a Ray Serve deployment.
@serve.deployment
class LLMServe:
def __init__(self) -> None:
# All the initialization code goes here
pass
async def __call__(self, request: Request) -> str:
# You can parse the request here
# and return a response
return "Hello World"
# 2: Bind the model to deployment
deployment = LLMServe.bind()
# 3: Run the deployment
serve.api.run(deployment)
# Shutdown the deployment
serve.api.shutdown()
Example of deploying an OpenAI chain with custom prompts#
Get an OpenAI API key from here. By running the following code, you will be asked to provide your API key.
from langchain.llms import OpenAI
from langchain import PromptTemplate, LLMChain
from getpass import getpass
OPENAI_API_KEY = getpass()
@serve.deployment
class DeployLLM:
def __init__(self):
# We initialize the LLM, template and the chain here
llm = OpenAI(openai_api_key=OPENAI_API_KEY)
template = "Question: {question}\n\nAnswer: Let's think step by step."
prompt = PromptTemplate(template=template, input_variables=["question"])
self.chain = LLMChain(llm=llm, prompt=prompt)
def _run_chain(self, text: str):
return self.chain(text)
async def __call__(self, request: Request):
# 1. Parse the request
text = request.query_params["text"]
# 2. Run the chain
resp = self._run_chain(text)
# 3. Return the response
return resp["text"]
Now we can bind the deployment.
# Bind the model to deployment
deployment = DeployLLM.bind()
We can assign the port number and host when we want to run the deployment.
# Example port number
PORT_NUMBER = 8282
# Run the deployment
serve.api.run(deployment, port=PORT_NUMBER)
Now that the service is deployed at localhost:8282, we can send a POST request to get the results back.
import requests
text = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
response = requests.post(f'http://localhost:{PORT_NUMBER}/?text={text}')
print(response.content.decode())
Tair#
This page covers how to use the Tair ecosystem within LangChain.
Installation and Setup#
Install the Tair Python SDK with pip install tair.
Wrappers#
VectorStore#
There exists a wrapper around TairVector, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import Tair
For a more detailed walkthrough of the Tair wrapper, see this notebook
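As a rough sketch (the URL is illustrative; assumes a running Tair or Redis-compatible instance):
from langchain.embeddings.fake import FakeEmbeddings
vector_store = Tair.from_texts(
    ["Hello Tair"],
    FakeEmbeddings(size=128),  # swap in a real embedding provider
    tair_url="redis://localhost:6379",
)
docs = vector_store.similarity_search("greeting")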
Modal#
This page covers how to use the Modal ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Modal wrappers.
Installation and Setup#
Install with pip install modal-client
Run modal token new
Define your Modal Functions and Webhooks#
You must include a prompt. There is a rigid response structure.
class Item(BaseModel):
prompt: str
@stub.webhook(method="POST")
def my_webhook(item: Item):
return {"prompt": my_function.call(item.prompt)}
An example with GPT2:
from pydantic import BaseModel
import modal
stub = modal.Stub("example-get-started")
volume = modal.SharedVolume().persist("gpt2_model_vol")
CACHE_PATH = "/root/model_cache"
@stub.function(
gpu="any",
image=modal.Image.debian_slim().pip_install(
"tokenizers", "transformers", "torch", "accelerate"
),
shared_volumes={CACHE_PATH: volume},
retries=3,
)
def run_gpt2(text: str):
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
encoded_input = tokenizer(text, return_tensors='pt').input_ids
output = model.generate(encoded_input, max_length=50, do_sample=True)
return tokenizer.decode(output[0], skip_special_tokens=True)
class Item(BaseModel):
prompt: str
@stub.webhook(method="POST")
def get_text(item: Item):
return {"prompt": run_gpt2.call(item.prompt)}
Wrappers#
LLM#
There exists a Modal LLM wrapper, which you can access with
from langchain.llms import Modal
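A minimal sketch (the endpoint URL is illustrative; use the web endpoint Modal prints when you deploy your stub):
llm = Modal(endpoint_url="https://<your-workspace>--example-get-started-get-text.modal.run")
print(llm("What is the capital of France?"))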
PGVector#
This page covers how to use the Postgres PGVector ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific PGVector wrappers.
Installation#
Install the Python package with pip install pgvector
Setup#
The first step is to create a database with the pgvector extension installed.
Follow the steps at PGVector Installation Steps to install the database and the extension. The docker image is the easiest way to get started.
Wrappers#
VectorStore#
There exists a wrapper around Postgres vector databases, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores.pgvector import PGVector
Usage#
For a more detailed walkthrough of the PGVector Wrapper, see this notebook
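As a rough sketch (the connection string and collection name are illustrative; assumes a Postgres instance with the pgvector extension):
from langchain.embeddings import OpenAIEmbeddings
CONNECTION_STRING = "postgresql+psycopg2://user:password@localhost:5432/mydb"
db = PGVector.from_texts(
    texts=["Hello pgvector"],
    embedding=OpenAIEmbeddings(),  # assumes OPENAI_API_KEY is set
    collection_name="my_collection",
    connection_string=CONNECTION_STRING,
)
docs = db.similarity_search("greeting")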
Amazon Bedrock#
Amazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case.
Installation and Setup#
pip install boto3
LLM#
See a usage example.
from langchain import Bedrock
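For instance, a minimal sketch (the model id and region are illustrative; assumes AWS credentials with Bedrock access):
llm = Bedrock(model_id="amazon.titan-tg1-large", region_name="us-east-1")
print(llm("Explain what Amazon Bedrock is in one sentence."))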
Text Embedding Models#
See a usage example.
from langchain.embeddings import BedrockEmbeddings
Notion DB#
Notion is a collaboration platform with modified Markdown support that integrates kanban
boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management,
and project and task management.
Installation and Setup#
All instructions are in the examples below.
Document Loader#
We have two different loaders: NotionDirectoryLoader and NotionDBLoader.
See a usage example for the NotionDirectoryLoader.
from langchain.document_loaders import NotionDirectoryLoader
See a usage example for the NotionDBLoader.
from langchain.document_loaders import NotionDBLoader
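As a rough sketch (the directory name, token, and database id are illustrative):
# Local Markdown export of a Notion workspace.
dir_loader = NotionDirectoryLoader("Notion_DB")
docs = dir_loader.load()
# Or read pages straight from a Notion database via the API.
db_loader = NotionDBLoader(
    integration_token="<your Notion integration token>",
    database_id="<your database id>",
)
db_docs = db_loader.load()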
AZLyrics#
AZLyrics is a large, legal, ever-growing collection of lyrics.
Installation and Setup#
There isn’t any special setup for it.
Document Loader#
See a usage example.
from langchain.document_loaders import AZLyricsLoader
Roam#
Roam is a note-taking tool for networked thought, designed to create a personal knowledge base.
Installation and Setup#
There isn’t any special setup for it.
Document Loader#
See a usage example.
from langchain.document_loaders import RoamLoader
Graphsignal#
This page covers how to use Graphsignal to trace and monitor LangChain. Graphsignal enables full visibility into your application. It provides latency breakdowns by chains and tools, exceptions with full context, data monitoring, compute/GPU utilization, OpenAI cost analytics, and more.
Installation and Setup#
Install the Python library with pip install graphsignal
Create a free Graphsignal account here
Get an API key and set it as an environment variable (GRAPHSIGNAL_API_KEY)
Tracing and Monitoring#
Graphsignal automatically instruments and starts tracing and monitoring chains. Traces and metrics are then available in your Graphsignal dashboards.
Initialize the tracer by providing a deployment name:
import graphsignal
graphsignal.configure(deployment='my-langchain-app-prod')
To additionally trace any function or code, you can use a decorator or a context manager:
@graphsignal.trace_function
def handle_request():
chain.run("some initial text")
with graphsignal.start_trace('my-chain'):
chain.run("some initial text")
Optionally, enable profiling to record function-level statistics for each trace.
with graphsignal.start_trace(
'my-chain', options=graphsignal.TraceOptions(enable_profiling=True)):
chain.run("some initial text")
See the Quick Start guide for complete setup instructions.
Redis#
This page covers how to use the Redis ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Redis wrappers.
Installation and Setup#
Install the Redis Python SDK with pip install redis
Wrappers#
Cache#
The Cache wrapper allows for Redis to be used as a remote, low-latency, in-memory cache for LLM prompts and responses.
Standard Cache#
The standard cache is the bread and butter of Redis use in production, for both open-source and enterprise users globally.
To import this cache:
from langchain.cache import RedisCache
To use this cache with your LLMs:
import langchain
import redis
redis_client = redis.Redis.from_url(...)
langchain.llm_cache = RedisCache(redis_client)
Semantic Cache#
Semantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached results. Under the hood it blends Redis as both a cache and a vectorstore.
To import this cache:
from langchain.cache import RedisSemanticCache
To use this cache with your LLMs:
import langchain
import redis
# use any embedding provider...
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings
redis_url = "redis://localhost:6379"
langchain.llm_cache = RedisSemanticCache(
embedding=FakeEmbeddings(),
redis_url=redis_url
)
VectorStore#
The vectorstore wrapper turns Redis into a low-latency vector database for semantic search or LLM content retrieval.
To import this vectorstore:
from langchain.vectorstores import Redis
For a more detailed walkthrough of the Redis vectorstore wrapper, see this notebook.
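As a rough sketch (the URL and index name are illustrative; assumes a running Redis instance with RediSearch):
from langchain.embeddings import OpenAIEmbeddings
rds = Redis.from_texts(
    ["Hello Redis"],
    OpenAIEmbeddings(),  # assumes OPENAI_API_KEY is set
    redis_url="redis://localhost:6379",
    index_name="my-index",
)
docs = rds.similarity_search("greeting")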
Retriever#
The Redis vector store retriever wrapper generalizes the vectorstore class to perform low-latency document retrieval. To create the retriever, simply call .as_retriever() on the base vectorstore class.
Memory#
Redis can be used to persist LLM conversations.
Vector Store Retriever Memory#
For a more detailed walkthrough of the VectorStoreRetrieverMemory wrapper, see this notebook.
Chat Message History Memory#
For a detailed example of Redis to cache conversation message history, see this notebook.
OpenWeatherMap#
OpenWeatherMap provides all essential weather data for a specific location:
Current weather
Minute forecast for 1 hour
Hourly forecast for 48 hours
Daily forecast for 8 days
National weather alerts
Historical weather data for 40+ years back
This page covers how to use the OpenWeatherMap API within LangChain.
Installation and Setup#
Install requirements with
pip install pyowm
Go to OpenWeatherMap and sign up for an account to get your API key here
Set your API key as OPENWEATHERMAP_API_KEY environment variable
Wrappers#
Utility#
There exists an OpenWeatherMapAPIWrapper utility which wraps this API. To import this utility:
from langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper
For a more detailed walkthrough of this wrapper, see this notebook.
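For instance, a minimal sketch (assumes OPENWEATHERMAP_API_KEY is set; the location string is illustrative):
weather = OpenWeatherMapAPIWrapper()
print(weather.run("London,GB"))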
Tool#
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
from langchain.agents import load_tools
tools = load_tools(["openweathermap-api"])
For more information on this, see this page
Banana#
This page covers how to use the Banana ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Banana wrappers.
Installation and Setup#
Install with pip install banana-dev
Get a Banana API key and set it as an environment variable (BANANA_API_KEY)
Define your Banana Template#
If you want to use an available language model template you can find one here.
This template uses the Palmyra-Base model by Writer.
You can check out an example Banana repository here.
Build the Banana app#
Banana apps must include the “output” key in the returned JSON. There is a rigid response structure:
# Return the results as a dictionary
result = {'output': result}
An example inference function would be:
def inference(model_inputs:dict) -> dict:
global model
global tokenizer
# Parse out your arguments
prompt = model_inputs.get('prompt', None)
if prompt is None:
return {'message': "No prompt provided"}
# Run the model
input_ids = tokenizer.encode(prompt, return_tensors='pt').cuda()
output = model.generate(
input_ids,
max_length=100,
do_sample=True,
top_k=50,
top_p=0.95,
num_return_sequences=1,
temperature=0.9,
early_stopping=True,
no_repeat_ngram_size=3,
num_beams=5,
length_penalty=1.5,
repetition_penalty=1.5,
bad_words_ids=[[tokenizer.encode(' ', add_prefix_space=True)[0]]]
)
result = tokenizer.decode(output[0], skip_special_tokens=True)
# Return the results as a dictionary
result = {'output': result}
return result
You can find a full example of a Banana app here.
Wrappers#
LLM#
There exists a Banana LLM wrapper, which you can access with
from langchain.llms import Banana
You need to provide a model key located in the dashboard:
llm = Banana(model_key="YOUR_MODEL_KEY")
Google BigQuery#
Google BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data.
BigQuery is a part of the Google Cloud Platform.
Installation and Setup#
First, you need to install the google-cloud-bigquery Python package.
pip install google-cloud-bigquery
Document Loader#
See a usage example.
from langchain.document_loaders import BigQueryLoader
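A minimal sketch (the query is illustrative; assumes Google Cloud credentials are configured):
loader = BigQueryLoader("SELECT id, name, description FROM my_dataset.my_table")
documents = loader.load()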
Vespa#
Vespa is a fully featured search engine and vector database.
It supports vector search (ANN), lexical search, and search in structured data, all in the same query.
Installation and Setup#
pip install pyvespa
Retriever#
See a usage example.
from langchain.retrievers import VespaRetriever
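As a rough sketch, loosely following the usage example (the URL, query body, and content field are illustrative):
from vespa.application import Vespa
vespa_app = Vespa(url="https://doc-search.vespa.oath.cloud")
vespa_query_body = {
    "yql": "select content from paragraph where userQuery()",
    "hits": 5,
    "ranking": "documentation",
}
retriever = VespaRetriever(vespa_app, vespa_query_body, "content")
docs = retriever.get_relevant_documents("what is vespa?")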
Azure OpenAI#
Microsoft Azure, often referred to as Azure is a cloud computing platform run by Microsoft, which offers access, management, and development of applications and services through global data centers. It provides a range of capabilities, including software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). Microsoft Azure supports many programming languages, tools, and frameworks, including Microsoft-specific and third-party software and systems.
Azure OpenAI is an Azure service with powerful language models from OpenAI including the GPT-3, Codex and Embeddings model series for content generation, summarization, semantic search, and natural language to code translation.
Installation and Setup#
pip install openai
pip install tiktoken
Set the environment variables to get access to the Azure OpenAI service.
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
LLM#
See a usage example.
from langchain.llms import AzureOpenAI
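For instance, a minimal sketch (the deployment name is illustrative; it must match a deployment in your Azure OpenAI resource):
llm = AzureOpenAI(deployment_name="my-davinci-deployment", model_name="text-davinci-003")
print(llm("Tell me a joke"))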
Text Embedding Models#
See a usage example
from langchain.embeddings import OpenAIEmbeddings
Chat Models#
See a usage example
from langchain.chat_models import AzureChatOpenAI
Slack#
Slack is an instant messaging program.
Installation and Setup#
There isn’t any special setup for it.
Document Loader#
See a usage example.
from langchain.document_loaders import SlackDirectoryLoader
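A minimal sketch (the zip path and workspace URL are illustrative; export the zip from your Slack workspace first):
loader = SlackDirectoryLoader("slack_export.zip", "https://my-workspace.slack.com")
documents = loader.load()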
Prediction Guard#
Prediction Guard gives quick and easy access to state-of-the-art open and closed-access LLMs, without needing to spend days and weeks figuring out all of the implementation details, managing a bunch of different API specs, and setting up the infrastructure for model deployments.
Installation and Setup#
Install the Python SDK:
pip install predictionguard
Get a Prediction Guard access token (as described here) and set it as an environment variable (PREDICTIONGUARD_TOKEN)
LLM#
from langchain.llms import PredictionGuard
Example#
You can provide the name of the Prediction Guard model as an argument when initializing the LLM:
pgllm = PredictionGuard(model="MPT-7B-Instruct")
You can also provide your access token directly as an argument:
pgllm = PredictionGuard(model="MPT-7B-Instruct", token="<your access token>")
Also, you can provide an “output” argument that is used to structure/control the output of the LLM:
pgllm = PredictionGuard(model="MPT-7B-Instruct", output={"type": "boolean"})
Basic usage of the controlled or guarded LLM:#
import os
import predictionguard as pg
from langchain.llms import PredictionGuard
from langchain import PromptTemplate, LLMChain
# Your Prediction Guard API key. Get one at predictionguard.com
os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"
# Define a prompt template
template = """Respond to the following query based on the context.
Context: EVERY comment, DM + email suggestion has led us to this EXCITING announcement! 🎉 We have officially added TWO new candle subscription box options! 📦
Exclusive Candle Box - $80
Monthly Candle Box - $45 (NEW!)
Scent of The Month Box - $28 (NEW!)
Head to stories to get ALLL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉
Query: {query}
Result: """
prompt = PromptTemplate(template=template, input_variables=["query"])
# With "guarding" or controlling the output of the LLM. See the
# Prediction Guard docs (https://docs.predictionguard.com) to learn how to
# control the output with integer, float, boolean, JSON, and other types and
# structures.
pgllm = PredictionGuard(model="MPT-7B-Instruct",
output={
"type": "categorical",
"categories": [
"product announcement",
"apology",
"relational"
]
})
pgllm(prompt.format(query="What kind of post is this?"))
Basic LLM Chaining with the Prediction Guard:#
import os
from langchain import PromptTemplate, LLMChain
from langchain.llms import PredictionGuard
# Optionally, add your OpenAI API key. Prediction Guard also lets you
# access all the latest open-access models (see https://docs.predictionguard.com)
os.environ["OPENAI_API_KEY"] = "<your OpenAI api key>"
# Your Prediction Guard API key. Get one at predictionguard.com
os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"
pgllm = PredictionGuard(model="OpenAI-text-davinci-003")
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.predict(question=question)
Google Search#
This page covers how to use the Google Search API within LangChain.
It is broken into two parts: installation and setup, and then references to the specific Google Search wrapper.
Installation and Setup#
Install requirements with pip install google-api-python-client
Set up a Custom Search Engine, following these instructions
Get an API Key and Custom Search Engine ID from the previous step, and set them as environment variables GOOGLE_API_KEY and GOOGLE_CSE_ID respectively
Wrappers#
Utility#
There exists a GoogleSearchAPIWrapper utility which wraps this API. To import this utility:
from langchain.utilities import GoogleSearchAPIWrapper
For a more detailed walkthrough of this wrapper, see this notebook.
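For instance, a minimal sketch (assumes GOOGLE_API_KEY and GOOGLE_CSE_ID are set):
search = GoogleSearchAPIWrapper()
result = search.run("Obama's first name?")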
Tool#
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
from langchain.agents import load_tools
tools = load_tools(["google-search"])
For more information on this, see this page
Hacker News#
Hacker News (sometimes abbreviated as HN) is a social news
website focusing on computer science and entrepreneurship. It is run by the investment fund and startup
incubator Y Combinator. In general, content that can be submitted is defined as “anything that gratifies
one’s intellectual curiosity.”
Installation and Setup#
There isn’t any special setup for it.
Document Loader#
See a usage example.
from langchain.document_loaders import HNLoader
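A minimal sketch (the item URL is illustrative):
loader = HNLoader("https://news.ycombinator.com/item?id=34817881")
documents = loader.load()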
NLPCloud#
This page covers how to use the NLPCloud ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific NLPCloud wrappers.
Installation and Setup#
Install the Python SDK with pip install nlpcloud
Get an NLPCloud API key and set it as an environment variable (NLPCLOUD_API_KEY)
Wrappers#
LLM#
There exists an NLPCloud LLM wrapper, which you can access with
from langchain.llms import NLPCloud
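For instance, a minimal sketch (the model name is illustrative; assumes NLPCLOUD_API_KEY is set):
llm = NLPCloud(model_name="finetuned-gpt-neox-20b")
print(llm("Tell me a joke"))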
Hazy Research#
This page covers how to use the Hazy Research ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Hazy Research wrappers.
Installation and Setup#
To use the manifest, install it with pip install manifest-ml
Wrappers#
LLM#
There exists an LLM wrapper around Hazy Research’s manifest library.
manifest is a Python library which is itself a wrapper around many model providers, and adds caching, history, and more.
To use this wrapper:
from langchain.llms.manifest import ManifestWrapper
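As a rough sketch (the client and connection URL are illustrative; assumes a model server that manifest can talk to):
from manifest import Manifest
manifest = Manifest(client_name="huggingface", client_connection="http://127.0.0.1:5000")
llm = ManifestWrapper(client=manifest, llm_kwargs={"temperature": 0.001, "max_tokens": 256})
print(llm("What is the capital of France?"))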
Comet#
In this guide we will demonstrate how to track your LangChain experiments, evaluation metrics, and LLM sessions with Comet.
Example Project: Comet with LangChain
Install Comet and Dependencies#
%pip install comet_ml langchain openai google-search-results spacy textstat pandas
import sys
!{sys.executable} -m spacy download en_core_web_sm
Initialize Comet and Set your Credentials#
You can grab your Comet API Key here or click the link after initializing Comet
import comet_ml
comet_ml.init(project_name="comet-example-langchain")
Set OpenAI and SerpAPI credentials#
You will need an OpenAI API Key and a SerpAPI API Key to run the following examples
import os
os.environ["OPENAI_API_KEY"] = "..."
#os.environ["OPENAI_ORGANIZATION"] = "..."
os.environ["SERPAPI_API_KEY"] = "..."
Scenario 1: Using just an LLM#
from datetime import datetime
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.llms import OpenAI
comet_callback = CometCallbackHandler(
project_name="comet-example-langchain",
complexity_metrics=True,
stream_logs=True,
tags=["llm"],
visualizations=["dep"],
)
callbacks = [StdOutCallbackHandler(), comet_callback]
llm = OpenAI(temperature=0.9, callbacks=callbacks, verbose=True)
llm_result = llm.generate(["Tell me a joke", "Tell me a poem", "Tell me a fact"] * 3)
print("LLM result", llm_result)
comet_callback.flush_tracker(llm, finish=True)
Scenario 2: Using an LLM in a Chain#
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
comet_callback = CometCallbackHandler(
complexity_metrics=True,
project_name="comet-example-langchain",
stream_logs=True,
tags=["synopsis-chain"],
)
callbacks = [StdOutCallbackHandler(), comet_callback]
llm = OpenAI(temperature=0.9, callbacks=callbacks)
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)
test_prompts = [{"title": "Documentary about Bigfoot in Paris"}]
print(synopsis_chain.apply(test_prompts))
comet_callback.flush_tracker(synopsis_chain, finish=True)
Scenario 3: Using An Agent with Tools#
from langchain.agents import initialize_agent, load_tools
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.llms import OpenAI
comet_callback = CometCallbackHandler(
project_name="comet-example-langchain",
complexity_metrics=True,
stream_logs=True,
tags=["agent"],
)
callbacks = [StdOutCallbackHandler(), comet_callback]
llm = OpenAI(temperature=0.9, callbacks=callbacks)
tools = load_tools(["serpapi", "llm-math"], llm=llm, callbacks=callbacks)
agent = initialize_agent(
tools,
llm,
agent="zero-shot-react-description",
callbacks=callbacks,
verbose=True,
)
agent.run(
"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
)
comet_callback.flush_tracker(agent, finish=True)
Scenario 4: Using Custom Evaluation Metrics#
The CometCallbackManager also allows you to define and use Custom Evaluation Metrics to assess generated outputs from your model. Let’s take a look at how this works.
In the snippet below, we will use the ROUGE metric to evaluate the quality of a generated summary of an input prompt.
%pip install rouge-score
from rouge_score import rouge_scorer
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
class Rouge:
def __init__(self, reference):
self.reference = reference
self.scorer = rouge_scorer.RougeScorer(["rougeLsum"], use_stemmer=True)
def compute_metric(self, generation, prompt_idx, gen_idx):
prediction = generation.text
results = self.scorer.score(target=self.reference, prediction=prediction)
return {
"rougeLsum_score": results["rougeLsum"].fmeasure,
"reference": self.reference,
}
reference = """
The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building.
It was the first structure to reach a height of 300 metres.
It is now taller than the Chrysler Building in New York City by 5.2 metres (17 ft)
Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France.
"""
rouge_score = Rouge(reference=reference)
template = """Given the following article, it is your job to write a summary.
Article:
{article}
Summary: This is the summary for the above article:"""
prompt_template = PromptTemplate(input_variables=["article"], template=template)
comet_callback = CometCallbackHandler(
project_name="comet-example-langchain",
complexity_metrics=False,
stream_logs=True,
tags=["custom_metrics"],
custom_metrics=rouge_score.compute_metric,
)
callbacks = [StdOutCallbackHandler(), comet_callback]
llm = OpenAI(temperature=0.9)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template)
test_prompts = [
{
"article": """
The tower is 324 metres (1,063 ft) tall, about the same height as
an 81-storey building, and the tallest structure in Paris. Its base is square,
measuring 125 metres (410 ft) on each side.
During its construction, the Eiffel Tower surpassed the
Washington Monument to become the tallest man-made structure in the world,
a title it held for 41 years until the Chrysler Building
in New York City was finished in 1930.
It was the first structure to reach a height of 300 metres.
Due to the addition of a broadcasting aerial at the top of the tower in 1957,
it is now taller than the Chrysler Building by 5.2 metres (17 ft).
Excluding transmitters, the Eiffel Tower is the second tallest
free-standing structure in France after the Millau Viaduct.
"""
}
]
print(synopsis_chain.apply(test_prompts, callbacks=callbacks))
comet_callback.flush_tracker(synopsis_chain, finish=True)
previous
College Confidential
next
Confluence
Contents
Install Comet and Dependencies
Initialize Comet and Set your Credentials
Set OpenAI and SerpAPI credentials
Scenario 1: Using just an LLM
Scenario 2: Using an LLM in a Chain
Scenario 3: Using An Agent with Tools
Scenario 4: Using Custom Evaluation Metrics
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Anyscale
Contents
Installation and Setup
Wrappers
LLM
Anyscale#
This page covers how to use the Anyscale ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Anyscale wrappers.
Installation and Setup#
Get an Anyscale Service URL, route and API key and set them as environment variables (ANYSCALE_SERVICE_URL,ANYSCALE_SERVICE_ROUTE, ANYSCALE_SERVICE_TOKEN).
Please see the Anyscale docs for more details.
Wrappers#
LLM#
There exists an Anyscale LLM wrapper, which you can access with
from langchain.llms import Anyscale
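A minimal usage sketch, assuming the three environment variables above are already set (the prompt is illustrative):
llm = Anyscale()  # picks up ANYSCALE_SERVICE_URL, ANYSCALE_SERVICE_ROUTE, ANYSCALE_SERVICE_TOKEN
print(llm("Tell me a joke"))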
previous
Anthropic
next
Apify
Contents
Installation and Setup
Wrappers
LLM
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
DuckDB
Contents
Installation and Setup
Document Loader
DuckDB#
DuckDB is an in-process SQL OLAP database management system.
Installation and Setup#
First, you need to install the duckdb python package.
pip install duckdb
Document Loader#
See a usage example.
from langchain.document_loaders import DuckDBLoader
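A minimal sketch; the CSV file below is a hypothetical example, and the loader runs the query against an in-memory DuckDB database by default:
# 'example.csv' is a hypothetical file; each result row becomes a Document
loader = DuckDBLoader("SELECT * FROM read_csv_auto('example.csv')")
documents = loader.load()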
previous
Docugami
next
Elasticsearch
Contents
Installation and Setup
Document Loader
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
College Confidential
Contents
Installation and Setup
Document Loader
College Confidential#
College Confidential gives information on 3,800+ colleges and universities.
Installation and Setup#
There isn’t any special setup for it.
Document Loader#
See a usage example.
from langchain.document_loaders import CollegeConfidentialLoader
previous
Cohere
next
Comet
Contents
Installation and Setup
Document Loader
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
GPT4All
Contents
Installation and Setup
Usage
GPT4All
Model File
GPT4All#
This page covers how to use the GPT4All wrapper within LangChain. The tutorial is divided into two parts: installation and setup, followed by usage with an example.
Installation and Setup#
Install the Python package with pip install pyllamacpp
Download a GPT4All model and place it in your desired directory
Usage#
GPT4All#
To use the GPT4All wrapper, you need to provide the path to the pre-trained model file and the model’s configuration.
from langchain.llms import GPT4All
# Instantiate the model. Callbacks support token-wise streaming
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)
# Generate text
response = model("Once upon a time, ")
You can also customize the generation parameters, such as n_predict, temp, top_p, top_k, and others.
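For example, a hedged sketch (the parameter values are illustrative, not tuned recommendations):
model = GPT4All(
    model="./models/gpt4all-model.bin",
    n_predict=256,  # maximum number of new tokens to generate
    temp=0.7,       # sampling temperature
    top_p=0.95,     # nucleus sampling cutoff
    top_k=40,       # top-k sampling cutoff
)
response = model("Once upon a time, ")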
To stream the model’s predictions, add in a CallbackManager.
from langchain.llms import GPT4All
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
# There are many CallbackHandlers supported, such as
# from langchain.callbacks.streamlit import StreamlitCallbackHandler
callbacks = [StreamingStdOutCallbackHandler()]
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)
# Generate text. Tokens are streamed through the callback manager.
model("Once upon a time, ", callbacks=callbacks)
Model File#
You can find links to model file downloads in the pyllamacpp repository.
For a more detailed walkthrough of this, see this notebook
previous
GooseAI
next
Graphsignal
Contents
Installation and Setup
Usage
GPT4All
Model File
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Blackboard
Contents
Installation and Setup
Document Loader
Blackboard#
Blackboard Learn (previously the Blackboard Learning Management System)
is a web-based virtual learning environment and learning management system developed by Blackboard Inc.
The software features course management, customizable open architecture, and scalable design that allows
integration with student information systems and authentication protocols. It may be installed on local servers,
hosted by Blackboard ASP Solutions, or provided as Software as a Service hosted on Amazon Web Services.
Its main purposes are stated to include the addition of online elements to courses traditionally delivered
face-to-face and development of completely online courses with few or no face-to-face meetings.
Installation and Setup#
There isn’t any special setup for it.
Document Loader#
See a usage example.
from langchain.document_loaders import BlackboardLoader
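A hedged sketch of constructing the loader; the course URL and BbRouter cookie below are placeholders that must be copied from a logged-in browser session:
loader = BlackboardLoader(
    blackboard_course_url="https://blackboard.example.com/webapps/blackboard/content/listContent.jsp?course_id=_123456_1",  # placeholder
    bbrouter="expires:12345...",  # placeholder session cookie
    load_all_recursively=True,
)
documents = loader.load()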
previous
BiliBili
next
Cassandra
Contents
Installation and Setup
Document Loader
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Pinecone
Contents
Installation and Setup
Vectorstore
Pinecone#
This page covers how to use the Pinecone ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Pinecone wrappers.
Installation and Setup#
Install the Python SDK:
pip install pinecone-client
Vectorstore#
There exists a wrapper around Pinecone indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
from langchain.vectorstores import Pinecone
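A minimal sketch, assuming an existing Pinecone index and an OpenAI key for embeddings (the API key, environment, and index name are placeholders):
import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings

# Placeholders: substitute your own Pinecone credentials and index name
pinecone.init(api_key="<your-api-key>", environment="<your-environment>")
docsearch = Pinecone.from_texts(
    ["harrison worked at kensho"],
    OpenAIEmbeddings(),
    index_name="langchain-demo",  # an existing index; name is illustrative
)
docs = docsearch.similarity_search("What did harrison work at?")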
For a more detailed walkthrough of the Pinecone vectorstore, see this notebook
previous
PGVector
next
PipelineAI
Contents
Installation and Setup
Vectorstore
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Microsoft PowerPoint
Contents
Installation and Setup
Document Loader
Microsoft PowerPoint#
Microsoft PowerPoint is a presentation program by Microsoft.
Installation and Setup#
There isn’t any special setup for it.
Document Loader#
See a usage example.
from langchain.document_loaders import UnstructuredPowerPointLoader
previous
Microsoft OneDrive
next
Microsoft Word
Contents
Installation and Setup
Document Loader
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Weaviate
Contents
Installation and Setup
Wrappers
VectorStore
Weaviate#
This page covers how to use the Weaviate ecosystem within LangChain.
What is Weaviate?
Weaviate in a nutshell:
Weaviate is an open-source database of the type vector search engine.
Weaviate allows you to store JSON documents in a class property-like fashion while attaching machine learning vectors to these documents to represent them in vector space.
Weaviate can be used stand-alone (aka bring your vectors) or with a variety of modules that can do the vectorization for you and extend the core capabilities.
Weaviate has a GraphQL-API to access your data easily.
We aim to bring your vector search setup to production to query in mere milliseconds (check our open source benchmarks to see if Weaviate fits your use case).
Get to know Weaviate in the basics getting started guide in under five minutes.
Weaviate in detail:
Weaviate is a low-latency vector search engine with out-of-the-box support for different media types (text, images, etc.). It offers Semantic Search, Question-Answer Extraction, Classification, Customizable Models (PyTorch/TensorFlow/Keras), etc. Built from scratch in Go, Weaviate stores both objects and vectors, allowing for combining vector search with structured filtering and the fault tolerance of a cloud-native database. It is all accessible through GraphQL, REST, and various client-side programming languages.
Installation and Setup#
Install the Python SDK with pip install weaviate-client
Wrappers#
VectorStore#
There exists a wrapper around Weaviate indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import Weaviate
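A minimal sketch, assuming a reachable Weaviate instance (the URL is a placeholder, and OpenAIEmbeddings assumes an OpenAI key):
from langchain.embeddings.openai import OpenAIEmbeddings

vectorstore = Weaviate.from_texts(
    ["harrison worked at kensho"],
    OpenAIEmbeddings(),
    weaviate_url="http://localhost:8080",  # placeholder instance URL
)
docs = vectorstore.similarity_search("What did harrison work at?")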
For a more detailed walkthrough of the Weaviate wrapper, see this notebook
previous
Weather
next
WhatsApp
Contents
Installation and Setup
Wrappers
VectorStore
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Zilliz
Contents
Installation and Setup
Vectorstore
Zilliz#
Zilliz Cloud is a fully managed cloud service for LF AI Milvus®.
Installation and Setup#
Install the Python SDK:
pip install pymilvus
Vectorstore#
A wrapper around Zilliz indexes allows you to use it as a vectorstore,
whether for semantic search or example selection.
from langchain.vectorstores import Milvus
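A minimal sketch, assuming a running Zilliz Cloud cluster (the connection values are placeholders):
from langchain.embeddings.openai import OpenAIEmbeddings

vector_db = Milvus.from_texts(
    ["harrison worked at kensho"],
    OpenAIEmbeddings(),
    connection_args={
        "uri": "<your-cluster-endpoint>",  # placeholder
        "user": "<user>",                  # placeholder
        "password": "<password>",          # placeholder
        "secure": True,
    },
)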
For a more detailed walkthrough of the Milvus wrapper, see this notebook
previous
Zep
next
Dependents
Contents
Installation and Setup
Vectorstore
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Annoy
Contents
Installation and Setup
Vectorstore
Annoy#
Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data.
Installation and Setup#
pip install annoy
Vectorstore#
See a usage example.
from langchain.vectorstores import Annoy
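A minimal sketch of building an in-memory Annoy index from a few texts (HuggingFaceEmbeddings is one possible embedding choice):
from langchain.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings()
vectorstore = Annoy.from_texts(["pizza is great", "I love salad"], embeddings)
docs = vectorstore.similarity_search("food", k=1)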
previous
AnalyticDB
next
Anthropic
Contents
Installation and Setup
Vectorstore
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Yeager.ai
Contents
What is Yeager.ai?
yAgents
How to use?
Creating and Executing Tools with yAgents
Yeager.ai#
This page covers how to use Yeager.ai to generate LangChain tools and agents.
What is Yeager.ai?#
Yeager.ai is an ecosystem designed to simplify the process of creating AI agents and tools.
It features yAgents, a No-code LangChain Agent Builder, which enables users to build, test, and deploy AI solutions with ease. Leveraging the LangChain framework, yAgents allows seamless integration with various language models and resources, making it suitable for developers, researchers, and AI enthusiasts across diverse applications.
yAgents#
yAgents is a low-code generative agent designed to help you build, prototype, and deploy LangChain tools with ease.
How to use?#
pip install yeagerai-agent
yeagerai-agent
Go to http://127.0.0.1:7860
This will install the necessary dependencies and set up yAgents on your system. After the first run, yAgents will create a .env file where you can input your OpenAI API key. You can do the same directly from the Gradio interface under the tab “Settings”.
OPENAI_API_KEY=<your_openai_api_key_here>
We recommend using GPT-4. However, the tool can also work with GPT-3 if the problem is broken down sufficiently.
Creating and Executing Tools with yAgents#
yAgents makes it easy to create and execute AI-powered tools. Here’s a brief overview of the process:
Create a tool: To create a tool, provide a natural language prompt to yAgents. The prompt should clearly describe the tool’s purpose and functionality. For example:
create a tool that returns the n-th prime number
Load the tool into the toolkit: To load a tool into yAgents, simply provide a command to yAgents that says so. For example:
load the tool that you just created into your toolkit
Execute the tool: To run a tool or agent, simply provide a command to yAgents that includes the name of the tool and any required parameters. For example:
generate the 50th prime number
You can see a video of how it works here.
As you become more familiar with yAgents, you can create more advanced tools and agents to automate your work and enhance your productivity.
For more information, see yAgents’ Github or our docs
previous
Writer
next
YouTube
Contents
What is Yeager.ai?
yAgents
How to use?
Creating and Executing Tools with yAgents
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Twitter
Contents
Installation and Setup
Document Loader
Twitter#
Twitter is an online social media and social networking service.
Installation and Setup#
pip install tweepy
The loader must be initialized with a Twitter API token and the Twitter usernames whose tweets you want to load.
Document Loader#
See a usage example.
from langchain.document_loaders import TwitterTweetLoader
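A hedged sketch (the bearer token and usernames are placeholders):
loader = TwitterTweetLoader.from_bearer_token(
    oauth2_bearer_token="<your bearer token>",  # placeholder
    twitter_users=["elonmusk"],                 # placeholder usernames
    number_tweets=50,
)
documents = loader.load()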
previous
Trello
next
Unstructured
Contents
Installation and Setup
Document Loader
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
GooseAI
Contents
Installation and Setup
Wrappers
LLM
GooseAI#
This page covers how to use the GooseAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific GooseAI wrappers.
Installation and Setup#
Install the Python SDK with pip install openai
Get your GooseAI api key from this link here.
Set the environment variable (GOOSEAI_API_KEY).
import os
os.environ["GOOSEAI_API_KEY"] = "YOUR_API_KEY"
Wrappers#
LLM#
There exists a GooseAI LLM wrapper, which you can access with:
from langchain.llms import GooseAI
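A minimal sketch, assuming GOOSEAI_API_KEY is set as above (the model name is illustrative):
llm = GooseAI(model_name="gpt-neo-20b")  # illustrative model name
print(llm("Tell me a joke"))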
previous
Google Vertex AI
next
GPT4All
Contents
Installation and Setup
Wrappers
LLM
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
WhatsApp
Contents
Installation and Setup
Document Loader
WhatsApp#
WhatsApp (also called WhatsApp Messenger) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content.
Installation and Setup#
There isn’t any special setup for it.
Document Loader#
See a usage example.
from langchain.document_loaders import WhatsAppChatLoader
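A minimal sketch; the path points at a hypothetical exported chat file:
loader = WhatsAppChatLoader("example_data/whatsapp_chat.txt")  # hypothetical export path
documents = loader.load()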
previous
Weaviate
next
WhyLabs
Contents
Installation and Setup
Document Loader
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Qdrant
Contents
Installation and Setup
Wrappers
VectorStore
Qdrant#
This page covers how to use the Qdrant ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Qdrant wrappers.
Installation and Setup#
Install the Python SDK with pip install qdrant-client
Wrappers#
VectorStore#
There exists a wrapper around Qdrant indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import Qdrant
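A minimal sketch using an ephemeral in-memory Qdrant instance (the collection name is arbitrary, and OpenAIEmbeddings assumes an OpenAI key):
from langchain.embeddings.openai import OpenAIEmbeddings

qdrant = Qdrant.from_texts(
    ["harrison worked at kensho"],
    OpenAIEmbeddings(),
    location=":memory:",             # ephemeral, in-process instance
    collection_name="my_documents",  # arbitrary name
)
docs = qdrant.similarity_search("What did harrison work at?")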
For a more detailed walkthrough of the Qdrant wrapper, see this notebook
previous
Psychic
next
Ray Serve
Contents
Installation and Setup
Wrappers
VectorStore
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
C Transformers
Contents
Installation and Setup
Wrappers
LLM
C Transformers#
This page covers how to use the C Transformers library within LangChain.
It is broken into two parts: installation and setup, and then references to specific C Transformers wrappers.
Installation and Setup#
Install the Python package with pip install ctransformers
Download a supported GGML model (see Supported Models)
Wrappers#
LLM#
There exists a CTransformers LLM wrapper, which you can access with:
from langchain.llms import CTransformers
It provides a unified interface for all models:
llm = CTransformers(model='/path/to/ggml-gpt-2.bin', model_type='gpt2')
print(llm('AI is going to'))
If you are getting an illegal instruction error, try using lib='avx' or lib='basic':
llm = CTransformers(model='/path/to/ggml-gpt-2.bin', model_type='gpt2', lib='avx')
It can be used with models hosted on the Hugging Face Hub:
llm = CTransformers(model='marella/gpt-2-ggml')
If a model repo has multiple model files (.bin files), specify a model file using:
llm = CTransformers(model='marella/gpt-2-ggml', model_file='ggml-model.bin')
Additional parameters can be passed using the config parameter:
config = {'max_new_tokens': 256, 'repetition_penalty': 1.1}
llm = CTransformers(model='marella/gpt-2-ggml', config=config)
See Documentation for a list of available parameters.
For a more detailed walkthrough of this, see this notebook.
previous
Confluence
next
Databerry
Contents
Installation and Setup
Wrappers
LLM
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Telegram
Contents
Installation and Setup
Document Loader
Telegram#
Telegram Messenger is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features.
Installation and Setup#
See setup instructions.
Document Loader#
See a usage example.
from langchain.document_loaders import TelegramChatFileLoader
from langchain.document_loaders import TelegramChatApiLoader
previous
Tair
next
Tensorflow Hub
Contents
Installation and Setup
Document Loader
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
AI21 Labs
Contents
Installation and Setup
Wrappers
LLM
AI21 Labs#
This page covers how to use the AI21 ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific AI21 wrappers.
Installation and Setup#
Get an AI21 api key and set it as an environment variable (AI21_API_KEY)
Wrappers#
LLM#
There exists an AI21 LLM wrapper, which you can access with
from langchain.llms import AI21
previous
Tracing Walkthrough
next
Aim
Contents
Installation and Setup
Wrappers
LLM
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Anthropic
Contents
Installation and Setup
Chat Models
Anthropic#
Anthropic is an American artificial intelligence (AI) startup and
public-benefit corporation, founded by former members of OpenAI. Anthropic specializes in developing general AI
systems and language models, with a company ethos of responsible AI usage.
Anthropic develops a chatbot, named Claude. Similar to ChatGPT, Claude uses a messaging
interface where users can submit questions or requests and receive highly detailed and relevant responses.
Installation and Setup#
pip install anthropic
See the setup documentation.
Chat Models#
See a usage example
from langchain.chat_models import ChatAnthropic
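A minimal sketch, assuming an Anthropic API key is set in the environment (the message content is illustrative):
from langchain.schema import HumanMessage

chat = ChatAnthropic()  # reads ANTHROPIC_API_KEY from the environment
chat([HumanMessage(content="Translate this sentence from English to French: I love programming.")])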
previous
Annoy
next
Anyscale
Contents
Installation and Setup
Chat Models
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Google Cloud Storage
Contents
Installation and Setup
Document Loader
Google Cloud Storage#
Google Cloud Storage is a managed service for storing unstructured data.
Installation and Setup#
First, you need to install the google-cloud-storage python package.
pip install google-cloud-storage
Document Loader#
There are two loaders for Google Cloud Storage: the Directory loader and the File loader.
See a usage example.
from langchain.document_loaders import GCSDirectoryLoader
See a usage example.
from langchain.document_loaders import GCSFileLoader
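A hedged sketch of both loaders; the project, bucket, and blob names are placeholders:
# Placeholders: substitute your own project, bucket, and object names
dir_loader = GCSDirectoryLoader(project_name="my-project", bucket="my-bucket")
file_loader = GCSFileLoader(project_name="my-project", bucket="my-bucket", blob="notes.txt")
documents = dir_loader.load()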
previous
Google BigQuery
next
Google Drive
Contents
Installation and Setup
Document Loader
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.ipynb
.pdf
Weights & Biases
Weights & Biases#
This notebook goes over how to track your LangChain experiments into one centralized Weights and Biases dashboard. To learn more about prompt engineering and the callback please refer to this Report which explains both alongside the resultant dashboards you can expect to see.
View Report
Note: the WandbCallbackHandler is being deprecated in favour of the WandbTracer . In future please use the WandbTracer as it is more flexible and allows for more granular logging. To know more about the WandbTracer refer to the agent_with_wandb_tracing.ipynb notebook or use the following colab notebook. To know more about Weights & Biases Prompts refer to the following prompts documentation.
!pip install wandb
!pip install pandas
!pip install textstat
!pip install spacy
!python -m spacy download en_core_web_sm
import os
os.environ["WANDB_API_KEY"] = ""
# os.environ["OPENAI_API_KEY"] = ""
# os.environ["SERPAPI_API_KEY"] = ""
from datetime import datetime
from langchain.callbacks import WandbCallbackHandler, StdOutCallbackHandler
from langchain.llms import OpenAI
Callback Handler that logs to Weights and Biases.
Parameters:
job_type (str): The type of job.
project (str): The project to log to.
entity (str): The entity to log to.
tags (list): The tags to log.
group (str): The group to log to.
name (str): The name of the run.
notes (str): The notes to log.
visualize (bool): Whether to visualize the run.
complexity_metrics (bool): Whether to log complexity metrics.
stream_logs (bool): Whether to stream callback actions to W&B
Default values for WandbCallbackHandler(...)
visualize: bool = False,
complexity_metrics: bool = False,
stream_logs: bool = False,
NOTE: For beta workflows we have made the default analysis based on textstat and the visualizations based on spacy
"""Main function.
This function is used to try the callback handler.
Scenarios:
1. OpenAI LLM
2. Chain with multiple SubChains on multiple generations
3. Agent with Tools
"""
session_group = datetime.now().strftime("%m.%d.%Y_%H.%M.%S")
wandb_callback = WandbCallbackHandler(
job_type="inference",
project="langchain_callback_demo",
group=f"minimal_{session_group}",
name="llm",
tags=["test"],
)
callbacks = [StdOutCallbackHandler(), wandb_callback]
llm = OpenAI(temperature=0, callbacks=callbacks)
wandb: Currently logged in as: harrison-chase. Use `wandb login --relogin` to force relogin
Tracking run with wandb version 0.14.0
Run data is saved locally in /Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150408-e47j1914
Syncing run llm to Weights & Biases (docs)
View project at https://wandb.ai/harrison-chase/langchain_callback_demo
View run at https://wandb.ai/harrison-chase/langchain_callback_demo/runs/e47j1914
wandb: WARNING The wandb callback is currently in beta and is subject to change based on updates to `langchain`. Please report any issues to https://github.com/wandb/wandb/issues with the tag `langchain`.
# Defaults for WandbCallbackHandler.flush_tracker(...)
reset: bool = True,
finish: bool = False,
The flush_tracker function is used to log LangChain sessions to Weights & Biases. It takes in the LangChain module or agent, and logs at minimum the prompts and generations alongside the serialized form of the LangChain module to the specified Weights & Biases project. By default we reset the session as opposed to concluding the session outright.
# SCENARIO 1 - LLM
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 3)
wandb_callback.flush_tracker(llm, name="simple_sequential")
Waiting for W&B process to finish... (success).
View run llm at: https://wandb.ai/harrison-chase/langchain_callback_demo/runs/e47j1914
Synced 5 W&B file(s), 2 media file(s), 5 artifact file(s) and 0 other file(s)
Find logs at: ./wandb/run-20230318_150408-e47j1914/logs
Tracking run with wandb version 0.14.0
Run data is saved locally in /Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150534-jyxma7hu
Syncing run simple_sequential to Weights & Biases (docs)
View project at https://wandb.ai/harrison-chase/langchain_callback_demo
View run at https://wandb.ai/harrison-chase/langchain_callback_demo/runs/jyxma7hu
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
# SCENARIO 2 - Chain
template = """You are a playwright. Given the title of a play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)
test_prompts = [
{
"title": "documentary about good video games that push the boundary of game design"
},
{"title": "cocaine bear vs heroin wolf"},
{"title": "the best in class mlops tooling"},
]
synopsis_chain.apply(test_prompts)
wandb_callback.flush_tracker(synopsis_chain, name="agent")
Waiting for W&B process to finish... (success).
View run simple_sequential at: https://wandb.ai/harrison-chase/langchain_callback_demo/runs/jyxma7hu
Synced 4 W&B file(s), 2 media file(s), 6 artifact file(s) and 0 other file(s)
Find logs at: ./wandb/run-20230318_150534-jyxma7hu/logs
Tracking run with wandb version 0.14.0
Run data is saved locally in /Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150550-wzy59zjq
Syncing run agent to Weights & Biases (docs)
View project at https://wandb.ai/harrison-chase/langchain_callback_demo
View run at https://wandb.ai/harrison-chase/langchain_callback_demo/runs/wzy59zjq
from langchain.agents import initialize_agent, load_tools
from langchain.agents import AgentType
# SCENARIO 3 - Agent with Tools
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
agent.run(
"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?",
callbacks=callbacks,
)
wandb_callback.flush_tracker(agent, reset=False, finish=True)
> Entering new AgentExecutor chain...
I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search
Action Input: "Leo DiCaprio girlfriend"
Observation: DiCaprio had a steady girlfriend in Camila Morrone. He had been with the model turned actress for nearly five years, as they were first said to be dating at the end of 2017. And the now 26-year-old Morrone is no stranger to Hollywood.
Thought: I need to calculate her age raised to the 0.43 power.
Action: Calculator
Action Input: 26^0.43
Observation: Answer: 4.059182145592686
Thought: I now know the final answer.
Final Answer: Leo DiCaprio's girlfriend is Camila Morrone and her current age raised to the 0.43 power is 4.059182145592686.
> Finished chain.
Waiting for W&B process to finish... (success).
View run agent at: https://wandb.ai/harrison-chase/langchain_callback_demo/runs/wzy59zjq
Synced 5 W&B file(s), 2 media file(s), 7 artifact file(s) and 0 other file(s)
Find logs at: ./wandb/run-20230318_150550-wzy59zjq/logs
previous
Vespa
next
Weather
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
scikit-learn
Contents
Installation and Setup
Wrappers
VectorStore
scikit-learn#
This page covers how to use the scikit-learn package within LangChain.
It is broken into two parts: installation and setup, and then references to specific scikit-learn wrappers.
Installation and Setup#
Install the Python package with pip install scikit-learn
Wrappers#
VectorStore#
SKLearnVectorStore provides a simple wrapper around the nearest neighbor implementation in the
scikit-learn package, allowing you to use it as a vectorstore.
To import this vectorstore:
from langchain.vectorstores import SKLearnVectorStore
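A minimal sketch (OpenAIEmbeddings assumes an OpenAI key; any embedding model works):
from langchain.embeddings.openai import OpenAIEmbeddings

store = SKLearnVectorStore.from_texts(["harrison worked at kensho"], OpenAIEmbeddings())
docs = store.similarity_search("What did harrison work at?")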
For a more detailed walkthrough of the SKLearnVectorStore wrapper, see this notebook.
previous
Shale Protocol
next
Slack
Contents
Installation and Setup
Wrappers
VectorStore
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Google Drive
Contents
Installation and Setup
Document Loader
Google Drive#
Google Drive is a file storage and synchronization service developed by Google.
Currently, only Google Docs are supported.
Installation and Setup#
First, you need to install several python packages.
pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib
Document Loader#
See a usage example and authorizing instructions.
from langchain.document_loaders import GoogleDriveLoader
previous
Google Cloud Storage
next
Google Search
Contents
Installation and Setup
Document Loader
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Databricks
Contents
Databricks connector for the SQLDatabase Chain
Databricks-managed MLflow integrates with LangChain
Databricks as an LLM provider
Databricks Dolly
Databricks#
The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.
Databricks embraces the LangChain ecosystem in various ways:
Databricks connector for the SQLDatabase Chain: SQLDatabase.from_databricks() provides an easy way to query your data on Databricks through LangChain
Databricks-managed MLflow integrates with LangChain: Tracking and serving LangChain applications with fewer steps
Databricks as an LLM provider: Deploy your fine-tuned LLMs on Databricks via serving endpoints or cluster driver proxy apps, and query it as langchain.llms.Databricks
Databricks Dolly: Databricks open-sourced Dolly which allows for commercial use, and can be accessed through the Hugging Face Hub
Databricks connector for the SQLDatabase Chain#
You can connect to Databricks runtimes and Databricks SQL using the SQLDatabase wrapper of LangChain. See the notebook Connect to Databricks for details.
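A minimal sketch, assuming the code runs inside a Databricks notebook so host and token are picked up from the runtime (the catalog and schema names are placeholders):
from langchain import SQLDatabase

# Inside a Databricks notebook, credentials are inferred automatically;
# catalog/schema below are placeholders
db = SQLDatabase.from_databricks(catalog="samples", schema="nyctaxi")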
Databricks-managed MLflow integrates with LangChain#
MLflow is an open source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. See the notebook MLflow Callback Handler for details about MLflow’s integration with LangChain.
Databricks provides a fully managed and hosted version of MLflow integrated with enterprise security features, high availability, and other Databricks workspace features such as experiment and run management and notebook revision capture. MLflow on Databricks offers an integrated experience for tracking and securing machine learning model training runs and running machine learning projects. See MLflow guide for more details.
Databricks-managed MLflow makes it more convenient to develop LangChain applications on Databricks. For MLflow tracking, you don’t need to set the tracking uri. For MLflow Model Serving, you can save LangChain Chains in the MLflow langchain flavor, and then register and serve the Chain with a few clicks on Databricks, with credentials securely managed by MLflow Model Serving.
Databricks as an LLM provider#
The notebook Wrap Databricks endpoints as LLMs illustrates the method to wrap Databricks endpoints as LLMs in LangChain. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development.
Databricks endpoints support Dolly, but are also great for hosting models like MPT-7B or any other models from the Hugging Face ecosystem. Databricks endpoints can also be used with proprietary models like OpenAI to provide a governance layer for enterprises.
Databricks Dolly#
Databricks’ Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. The model is available on Hugging Face Hub as databricks/dolly-v2-12b. See the notebook Hugging Face Hub for instructions to access it through the Hugging Face Hub integration with LangChain.
previous
Databerry
next
DeepInfra
Contents
Databricks connector for the SQLDatabase Chain
Databricks-managed MLflow integrates with LangChain
Databricks as an LLM provider
Databricks Dolly
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
ForefrontAI
Contents
Installation and Setup
Wrappers
LLM
ForefrontAI#
This page covers how to use the ForefrontAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific ForefrontAI wrappers.
Installation and Setup#
Get an ForefrontAI api key and set it as an environment variable (FOREFRONTAI_API_KEY)
Wrappers#
LLM#
There exists a ForefrontAI LLM wrapper, which you can access with
from langchain.llms import ForefrontAI
previous
Figma
next
Git
Contents
Installation and Setup
Wrappers
LLM
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Aleph Alpha
Contents
Installation and Setup
LLM
Text Embedding Models
Aleph Alpha#
Aleph Alpha was founded in 2019 with the mission to research and build the foundational technology for an era of strong AI. The team of international scientists, engineers, and innovators researches, develops, and deploys transformative AI like large language and multimodal models and runs the fastest European commercial AI cluster.
The Luminous series is a family of large language models.
Installation and Setup#
pip install aleph-alpha-client
You have to create a new token. Please see the instructions.
from getpass import getpass
ALEPH_ALPHA_API_KEY = getpass()
LLM#
See a usage example.
from langchain.llms import AlephAlpha
Text Embedding Models#
See a usage example.
from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding, AlephAlphaAsymmetricSemanticEmbedding
previous
Airbyte
next
Amazon Bedrock
Contents
Installation and Setup
LLM
Text Embedding Models
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Trello
Contents
Installation and Setup
Document Loader
Trello#
Trello is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a “board” where users can create lists and cards to represent their tasks and activities.
The TrelloLoader allows us to load cards from a Trello board.
Installation and Setup#
pip install py-trello beautifulsoup4
See setup instructions.
Document Loader#
See a usage example.
from langchain.document_loaders import TrelloLoader
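A hedged sketch (the board name, API key, and token are placeholders):
loader = TrelloLoader.from_credentials(
    "my board",           # placeholder board name
    api_key="<api key>",  # placeholder
    token="<token>",      # placeholder
)
documents = loader.load()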
previous
2Markdown
next
Twitter
Contents
Installation and Setup
Document Loader
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
MyScale
Contents
Introduction
Installation and Setup
Setting up environments
Wrappers
VectorStore
MyScale#
This page covers how to use MyScale vector database within LangChain.
It is broken into two parts: installation and setup, and then references to specific MyScale wrappers.
With MyScale, you can manage both structured and unstructured (vectorized) data, and perform joint queries and analytics on both types of data using SQL. Plus, MyScale’s cloud-native OLAP architecture, built on top of ClickHouse, enables lightning-fast data processing even on massive datasets.
Introduction#
Overview of MyScale and high-performance vector search
You can register on our SaaS and start a cluster now!
If you are also interested in how we managed to integrate SQL and vectors, please refer to this document for further syntax reference.
We also provide a live demo on Hugging Face! Please check out our Hugging Face space! It searches millions of vectors within a blink!
Installation and Setup#
Install the Python SDK with pip install clickhouse-connect
Setting up environments#
There are two ways to set up parameters for myscale index.
Environment Variables
Before you run the app, please set the environment variable with export:
export MYSCALE_URL='<your-endpoints-url>' MYSCALE_PORT=<your-endpoints-port> MYSCALE_USERNAME=<your-username> MYSCALE_PASSWORD=<your-password> ...
You can easily find your account, password and other info on our SaaS. For details please refer to this document
Every attribute under MyScaleSettings can be set with the prefix MYSCALE_ and is case insensitive.
Create MyScaleSettings object with parameters
from langchain.vectorstores import MyScale, MyScaleSettings
config = MyScaleSettings(host="<your-backend-url>", port=8443, ...)
index = MyScale(embedding_function, config)
index.add_documents(...)
Wrappers#
supported functions:
add_texts
add_documents
from_texts
from_documents
similarity_search
asimilarity_search
similarity_search_by_vector
asimilarity_search_by_vector
similarity_search_with_relevance_scores
VectorStore#
There exists a wrapper around MyScale database, allowing you to use it as a vectorstore,
whether for semantic search or similar example retrieval.
To import this vectorstore:
from langchain.vectorstores import MyScale
For a more detailed walkthrough of the MyScale wrapper, see this notebook
previous
Momento
next
NLPCloud
Contents
Introduction
Installation and Setup
Setting up environments
Wrappers
VectorStore
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Azure Blob Storage
Contents
Installation and Setup
Document Loader
Azure Blob Storage#
Azure Blob Storage is Microsoft’s object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn’t adhere to a particular data model or definition, such as text or binary data.
Azure Files offers fully managed
file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol,
Network File System (NFS) protocol, and Azure Files REST API. Azure Files are based on the Azure Blob Storage.
Azure Blob Storage is designed for:
Serving images or documents directly to a browser.
Storing files for distributed access.
Streaming video and audio.
Writing to log files.
Storing data for backup and restore, disaster recovery, and archiving.
Storing data for analysis by an on-premises or Azure-hosted service.
Installation and Setup#
pip install azure-storage-blob
Document Loader#
See a usage example for the Azure Blob Storage.
from langchain.document_loaders import AzureBlobStorageContainerLoader
See a usage example for the Azure Files.
from langchain.document_loaders import AzureBlobStorageFileLoader
previous
AZLyrics
next
Azure Cognitive Search
Contents
Installation and Setup
Document Loader
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Docugami
Contents
Installation and Setup
Document Loader
Docugami#
Docugami converts business documents into a Document XML Knowledge Graph, generating forests
of XML semantic trees representing entire documents. This is a rich representation that includes the semantic and
structural characteristics of various chunks in the document as an XML tree.
Installation and Setup#
pip install lxml
Document Loader#
See a usage example.
from langchain.document_loaders import DocugamiLoader
previous
Discord
next
DuckDB
Contents
Installation and Setup
Document Loader
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Weather
Contents
Installation and Setup
Document Loader
Weather#
OpenWeatherMap is an open source weather service provider.
Installation and Setup#
pip install pyowm
We must set up the OpenWeatherMap API token.
Document Loader#
See a usage example.
from langchain.document_loaders import WeatherDataLoader
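A hedged sketch (the place names and API key are placeholders):
loader = WeatherDataLoader.from_params(
    ["chennai", "vellore"],                   # placeholder places
    openweathermap_api_key="<your api key>",  # placeholder
)
documents = loader.load()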
previous
Weights & Biases
next
Weaviate
Contents
Installation and Setup
Document Loader
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
.md
.pdf
Shale Protocol
Contents
How to
1. Find the link to our Discord on https://shaleprotocol.com. Generate an API key through the “Shale Bot” on our Discord. No credit card is required and no free trials. It’s a forever free tier with 1K limit per day per API key.
2. Use https://shale.live/v1 as OpenAI API drop-in replacement
Shale Protocol#
Shale Protocol provides production-ready inference APIs for open LLMs. It’s a Plug & Play API as it’s hosted on a highly scalable GPU cloud infrastructure.
Our free tier supports up to 1K daily requests per key as we want to eliminate the barrier for anyone to start building genAI apps with LLMs.
With Shale Protocol, developers/researchers can create apps and explore the capabilities of open LLMs at no cost.
This page covers how Shale-Serve API can be incorporated with LangChain.
As of June 2023, the API supports Vicuna-13B by default. We are going to support more LLMs such as Falcon-40B in future releases.
How to#
1. Find the link to our Discord on https://shaleprotocol.com. Generate an API key through the “Shale Bot” on our Discord. No credit card is required and no free trials. It’s a forever free tier with 1K limit per day per API key.#
2. Use https://shale.live/v1 as OpenAI API drop-in replacement#
For example
from langchain.llms import OpenAI
from langchain import PromptTemplate, LLMChain
import os
os.environ['OPENAI_API_BASE'] = "https://shale.live/v1"
os.environ['OPENAI_API_KEY'] = "ENTER YOUR API KEY"
llm = OpenAI()
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
previous
SerpAPI
next
scikit-learn
Contents
How to
1. Find the link to our Discord on https://shaleprotocol.com. Generate an API key through the “Shale Bot” on our Discord. No credit card is required and no free trials. It’s a forever free tier with 1K limit per day per API key.
2. Use https://shale.live/v1 as OpenAI API drop-in replacement
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.