a7df18eef3dc8fa23d69c379f84383a01376f2b8c132bbebf666d7cca140236a
- langchain_md_files/integrations/providers/atlas.mdx +19 -0
- langchain_md_files/integrations/providers/azlyrics.mdx +16 -0
- langchain_md_files/integrations/providers/bagel.mdx +21 -0
- langchain_md_files/integrations/providers/bageldb.mdx +21 -0
- langchain_md_files/integrations/providers/baichuan.mdx +33 -0
- langchain_md_files/integrations/providers/baidu.mdx +72 -0
- langchain_md_files/integrations/providers/bananadev.mdx +68 -0
- langchain_md_files/integrations/providers/beam.mdx +28 -0
- langchain_md_files/integrations/providers/beautiful_soup.mdx +20 -0
- langchain_md_files/integrations/providers/bibtex.mdx +20 -0
- langchain_md_files/integrations/providers/bilibili.mdx +17 -0
- langchain_md_files/integrations/providers/bittensor.mdx +17 -0
- langchain_md_files/integrations/providers/blackboard.mdx +22 -0
- langchain_md_files/integrations/providers/bookendai.mdx +18 -0
- langchain_md_files/integrations/providers/box.mdx +179 -0
- langchain_md_files/integrations/providers/brave_search.mdx +36 -0
- langchain_md_files/integrations/providers/browserbase.mdx +34 -0
- langchain_md_files/integrations/providers/browserless.mdx +18 -0
- langchain_md_files/integrations/providers/byte_dance.mdx +22 -0
- langchain_md_files/integrations/providers/cassandra.mdx +85 -0
- langchain_md_files/integrations/providers/cerebriumai.mdx +26 -0
- langchain_md_files/integrations/providers/chaindesk.mdx +17 -0
- langchain_md_files/integrations/providers/chroma.mdx +29 -0
- langchain_md_files/integrations/providers/clarifai.mdx +53 -0
- langchain_md_files/integrations/providers/clickhouse.mdx +25 -0
- langchain_md_files/integrations/providers/clickup.mdx +20 -0
- langchain_md_files/integrations/providers/cloudflare.mdx +25 -0
- langchain_md_files/integrations/providers/clova.mdx +14 -0
- langchain_md_files/integrations/providers/cnosdb.mdx +110 -0
- langchain_md_files/integrations/providers/cogniswitch.mdx +53 -0
- langchain_md_files/integrations/providers/cohere.mdx +157 -0
- langchain_md_files/integrations/providers/college_confidential.mdx +16 -0
- langchain_md_files/integrations/providers/confident.mdx +26 -0
- langchain_md_files/integrations/providers/confluence.mdx +22 -0
- langchain_md_files/integrations/providers/connery.mdx +28 -0
- langchain_md_files/integrations/providers/context.mdx +20 -0
- langchain_md_files/integrations/providers/couchbase.mdx +111 -0
- langchain_md_files/integrations/providers/coze.mdx +19 -0
- langchain_md_files/integrations/providers/ctransformers.mdx +57 -0
- langchain_md_files/integrations/providers/ctranslate2.mdx +30 -0
- langchain_md_files/integrations/providers/cube.mdx +21 -0
- langchain_md_files/integrations/providers/dashvector.mdx +39 -0
- langchain_md_files/integrations/providers/datadog.mdx +88 -0
- langchain_md_files/integrations/providers/datadog_logs.mdx +19 -0
- langchain_md_files/integrations/providers/dataforseo.mdx +52 -0
- langchain_md_files/integrations/providers/dataherald.mdx +64 -0
- langchain_md_files/integrations/providers/dedoc.mdx +56 -0
- langchain_md_files/integrations/providers/deepinfra.mdx +53 -0
- langchain_md_files/integrations/providers/deepsparse.mdx +34 -0
- langchain_md_files/integrations/providers/diffbot.mdx +29 -0
langchain_md_files/integrations/providers/atlas.mdx
ADDED
@@ -0,0 +1,19 @@
# Atlas

>[Nomic Atlas](https://docs.nomic.ai/index.html) is a platform for interacting with both
> small and internet-scale unstructured datasets.


## Installation and Setup

- Install the Python package with `pip install nomic`
- `Nomic` is also included in LangChain's Poetry extras: `poetry install -E all`


## VectorStore

See a [usage example](/docs/integrations/vectorstores/atlas).

```python
from langchain_community.vectorstores import AtlasDB
```
langchain_md_files/integrations/providers/azlyrics.mdx
ADDED
@@ -0,0 +1,16 @@
# AZLyrics

>[AZLyrics](https://www.azlyrics.com/) is a large, legal, ever-growing collection of lyrics.

## Installation and Setup

There isn't any special setup for it.


## Document Loader

See a [usage example](/docs/integrations/document_loaders/azlyrics).

```python
from langchain_community.document_loaders import AZLyricsLoader
```
langchain_md_files/integrations/providers/bagel.mdx
ADDED
@@ -0,0 +1,21 @@
# Bagel

> [Bagel](https://www.bagel.net/) (`Open Vector Database for AI`) is like GitHub for AI data.
It is a collaborative platform where users can create,
share, and manage vector datasets. It can support private projects for independent developers,
internal collaborations for enterprises, and public contributions for data DAOs.

## Installation and Setup

```bash
pip install bagelML
```


## VectorStore

See a [usage example](/docs/integrations/vectorstores/bagel).

```python
from langchain_community.vectorstores import Bagel
```
langchain_md_files/integrations/providers/bageldb.mdx
ADDED
@@ -0,0 +1,21 @@
# BagelDB

> [BagelDB](https://www.bageldb.ai/) (`Open Vector Database for AI`) is like GitHub for AI data.
It is a collaborative platform where users can create,
share, and manage vector datasets. It can support private projects for independent developers,
internal collaborations for enterprises, and public contributions for data DAOs.

## Installation and Setup

```bash
pip install betabageldb
```


## VectorStore

See a [usage example](/docs/integrations/vectorstores/bageldb).

```python
from langchain_community.vectorstores import Bagel
```
langchain_md_files/integrations/providers/baichuan.mdx
ADDED
@@ -0,0 +1,33 @@
# Baichuan

>[Baichuan Inc.](https://www.baichuan-ai.com/) is a Chinese startup in the era of AGI,
> dedicated to addressing fundamental human needs: Efficiency, Health, and Happiness.


## Installation and Setup

Register and get an API key [here](https://platform.baichuan-ai.com/).

## LLMs

See a [usage example](/docs/integrations/llms/baichuan).

```python
from langchain_community.llms import BaichuanLLM
```

## Chat models

See a [usage example](/docs/integrations/chat/baichuan).

```python
from langchain_community.chat_models import ChatBaichuan
```

## Embedding models

See a [usage example](/docs/integrations/text_embedding/baichuan).

```python
from langchain_community.embeddings import BaichuanTextEmbeddings
```
langchain_md_files/integrations/providers/baidu.mdx
ADDED
@@ -0,0 +1,72 @@
# Baidu

>[Baidu Cloud](https://cloud.baidu.com/) is a cloud service provided by `Baidu, Inc.`,
> headquartered in Beijing. It offers cloud storage, client software,
> file management, resource sharing, and third-party integration.


## Installation and Setup

Register and get the `Qianfan` `AK` and `SK` keys [here](https://cloud.baidu.com/product/wenxinworkshop).

## LLMs

### Baidu Qianfan

See a [usage example](/docs/integrations/llms/baidu_qianfan_endpoint).

```python
from langchain_community.llms import QianfanLLMEndpoint
```

## Chat models

### Qianfan Chat Endpoint

See a [usage example](/docs/integrations/chat/baidu_qianfan_endpoint).

```python
from langchain_community.chat_models import QianfanChatEndpoint
```

## Embedding models

### Baidu Qianfan

See a [usage example](/docs/integrations/text_embedding/baidu_qianfan_endpoint).

```python
from langchain_community.embeddings import QianfanEmbeddingsEndpoint
```

## Document loaders

### Baidu BOS Directory Loader

```python
from langchain_community.document_loaders.baiducloud_bos_directory import BaiduBOSDirectoryLoader
```

### Baidu BOS File Loader

```python
from langchain_community.document_loaders.baiducloud_bos_file import BaiduBOSFileLoader
```

## Vector stores

### Baidu Cloud ElasticSearch VectorSearch

See a [usage example](/docs/integrations/vectorstores/baiducloud_vector_search).

```python
from langchain_community.vectorstores import BESVectorStore
```

### Baidu VectorDB

See a [usage example](/docs/integrations/vectorstores/baiduvectordb).

```python
from langchain_community.vectorstores import BaiduVectorDB
```
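The Qianfan classes above typically read credentials from environment variables. Assuming the `QIANFAN_AK`/`QIANFAN_SK` variable names used by the `qianfan` SDK (verify against your SDK version), the setup might look like:

```shell
# Hypothetical placeholders: substitute the AK/SK obtained from the Qianfan console.
export QIANFAN_AK="your-access-key"
export QIANFAN_SK="your-secret-key"
```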
langchain_md_files/integrations/providers/bananadev.mdx
ADDED
@@ -0,0 +1,68 @@
# Banana

>[Banana](https://www.banana.dev/) provided serverless GPU inference for AI models,
> a CI/CD build pipeline, and a simple Python framework (`Potassium`) to serve your models.

This page covers how to use the [Banana](https://www.banana.dev) ecosystem within LangChain.

## Installation and Setup

- Install the Python package `banana-dev`:

```bash
pip install banana-dev
```

- Get a Banana API key from the [Banana.dev dashboard](https://app.banana.dev) and set it as an environment variable (`BANANA_API_KEY`)
- Get your model's key and URL slug from the model's details page.

## Define your Banana Template

You'll need to set up a GitHub repo for your Banana app. You can get started in 5 minutes using [this guide](https://docs.banana.dev/banana-docs/).

Alternatively, for a ready-to-go LLM example, you can check out Banana's [CodeLlama-7B-Instruct-GPTQ](https://github.com/bananaml/demo-codellama-7b-instruct-gptq) GitHub repository. Just fork it and deploy it within Banana.

Other starter repos are available [here](https://github.com/orgs/bananaml/repositories?q=demo-&type=all&language=&sort=).

## Build the Banana app

To use Banana apps within LangChain, you must include the `outputs` key
in the returned JSON, and the value must be a string.

```python
# Return the results as a dictionary
result = {'outputs': result}
```

An example inference function would be:

```python
@app.handler("/")
def handler(context: dict, request: Request) -> Response:
    """Handle a request to generate code from a prompt."""
    model = context.get("model")
    tokenizer = context.get("tokenizer")
    max_new_tokens = request.json.get("max_new_tokens", 512)
    temperature = request.json.get("temperature", 0.7)
    prompt = request.json.get("prompt")
    prompt_template = f'''[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```:
{prompt}
[/INST]
'''
    input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
    output = model.generate(inputs=input_ids, temperature=temperature, max_new_tokens=max_new_tokens)
    result = tokenizer.decode(output[0])
    return Response(json={"outputs": result}, status=200)
```

This example is from the `app.py` file in [CodeLlama-7B-Instruct-GPTQ](https://github.com/bananaml/demo-codellama-7b-instruct-gptq).


## LLM

```python
from langchain_community.llms import Banana
```

See a [usage example](/docs/integrations/llms/banana).
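The contract above (the Banana app's JSON must contain an `outputs` key holding a string) can be made explicit with a tiny pure-Python check. This sketch is illustrative only; it is not part of the Banana SDK or LangChain:

```python
def validate_banana_response(payload: dict) -> str:
    """Return the 'outputs' string LangChain expects, or raise if the shape is wrong."""
    if "outputs" not in payload:
        raise ValueError("Banana response JSON must contain an 'outputs' key")
    outputs = payload["outputs"]
    if not isinstance(outputs, str):
        raise TypeError("the 'outputs' value must be a string")
    return outputs

# A well-formed response passes straight through:
print(validate_banana_response({"outputs": "generated code here"}))
```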
langchain_md_files/integrations/providers/beam.mdx
ADDED
@@ -0,0 +1,28 @@
# Beam

>[Beam](https://www.beam.cloud/) is a cloud computing platform that allows you to run your code
> on remote servers with GPUs.


## Installation and Setup

- [Create an account](https://www.beam.cloud/)
- Install the Beam CLI with `curl https://raw.githubusercontent.com/slai-labs/get-beam/main/get-beam.sh -sSfL | sh`
- Register API keys with `beam configure`
- Set the environment variables `BEAM_CLIENT_ID` and `BEAM_CLIENT_SECRET`
- Install the Beam SDK:

```bash
pip install beam-sdk
```


## LLMs

See a [usage example](/docs/integrations/llms/beam).

See another example in the [Beam documentation](https://docs.beam.cloud/examples/langchain).

```python
from langchain_community.llms.beam import Beam
```
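The two environment variables named in the setup steps above are typically exported in your shell profile; placeholder values shown:

```shell
export BEAM_CLIENT_ID="your-client-id"
export BEAM_CLIENT_SECRET="your-client-secret"
```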
langchain_md_files/integrations/providers/beautiful_soup.mdx
ADDED
@@ -0,0 +1,20 @@
# Beautiful Soup

>[Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/) is a Python package for parsing
> HTML and XML documents, including those with malformed markup (i.e. non-closed tags, so named after "tag soup").
> It creates a parse tree for parsed pages that can be used to extract data from HTML, which
> is useful for web scraping.

## Installation and Setup

```bash
pip install beautifulsoup4
```

## Document Transformer

See a [usage example](/docs/integrations/document_transformers/beautiful_soup).

```python
from langchain_community.document_transformers import BeautifulSoupTransformer
```
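To illustrate the parse-then-extract idea described above without any third-party dependency, here is a minimal sketch using the standard library's `html.parser`; it mirrors the concept only and is not how `BeautifulSoupTransformer` is implemented:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the text content of an HTML document, ignoring the tags."""

    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

parser = TextExtractor()
# Note the unclosed <b> tag: a lenient parser still recovers the text.
parser.feed("<html><body><p>Hello <b>world</p></body></html>")
print(" ".join(parser.chunks))  # Hello world
```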
langchain_md_files/integrations/providers/bibtex.mdx
ADDED
@@ -0,0 +1,20 @@
# BibTeX

>[BibTeX](https://www.ctan.org/pkg/bibtex) is a file format and reference management system commonly used in conjunction with `LaTeX` typesetting. It serves as a way to organize and store bibliographic information for academic and research documents.

## Installation and Setup

We have to install the `bibtexparser` and `pymupdf` packages.

```bash
pip install bibtexparser pymupdf
```


## Document loader

See a [usage example](/docs/integrations/document_loaders/bibtex).

```python
from langchain_community.document_loaders import BibtexLoader
```
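As a quick illustration of the bibliographic fields a BibTeX entry carries (and which a loader can surface as document metadata), here is a naive stdlib sketch; `BibtexLoader` itself relies on `bibtexparser`, not on this regex:

```python
import re

entry = """@article{knuth1984,
  author = {Donald E. Knuth},
  title  = {Literate Programming},
  year   = {1984}
}"""

# Naive extraction of "key = {value}" pairs from a single entry.
fields = dict(re.findall(r"(\w+)\s*=\s*\{([^}]*)\}", entry))
print(fields["title"])  # Literate Programming
```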
langchain_md_files/integrations/providers/bilibili.mdx
ADDED
@@ -0,0 +1,17 @@
# BiliBili

>[Bilibili](https://www.bilibili.tv/) is one of the most beloved long-form video sites in China.

## Installation and Setup

```bash
pip install bilibili-api-python
```

## Document Loader

See a [usage example](/docs/integrations/document_loaders/bilibili).

```python
from langchain_community.document_loaders import BiliBiliLoader
```
langchain_md_files/integrations/providers/bittensor.mdx
ADDED
@@ -0,0 +1,17 @@
# Bittensor

>[Neural Internet Bittensor](https://neuralinternet.ai/) is an open-source protocol
> that powers a decentralized, blockchain-based machine learning network.

## Installation and Setup

Get your `API_KEY` from [Neural Internet](https://neuralinternet.ai/).


## LLMs

See a [usage example](/docs/integrations/llms/bittensor).

```python
from langchain_community.llms import NIBittensorLLM
```
langchain_md_files/integrations/providers/blackboard.mdx
ADDED
@@ -0,0 +1,22 @@
# Blackboard

>[Blackboard Learn](https://en.wikipedia.org/wiki/Blackboard_Learn) (previously the `Blackboard Learning Management System`)
> is a web-based virtual learning environment and learning management system developed by Blackboard Inc.
> The software features course management, customizable open architecture, and scalable design that allows
> integration with student information systems and authentication protocols. It may be installed on local servers,
> hosted by `Blackboard ASP Solutions`, or provided as Software as a Service hosted on Amazon Web Services.
> Its main purposes are stated to include the addition of online elements to courses traditionally delivered
> face-to-face and development of completely online courses with few or no face-to-face meetings.

## Installation and Setup

There isn't any special setup for it.

## Document Loader

See a [usage example](/docs/integrations/document_loaders/blackboard).

```python
from langchain_community.document_loaders import BlackboardLoader
```
langchain_md_files/integrations/providers/bookendai.mdx
ADDED
@@ -0,0 +1,18 @@
# bookend.ai

LangChain implements an integration with embeddings provided by [bookend.ai](https://bookend.ai/).


## Installation and Setup

You need to register and get the `API_KEY`
from the [bookend.ai](https://bookend.ai/) website.

## Embedding model

See a [usage example](/docs/integrations/text_embedding/bookend).

```python
from langchain_community.embeddings import BookendEmbeddings
```
langchain_md_files/integrations/providers/box.mdx
ADDED
@@ -0,0 +1,179 @@
# Box

[Box](https://box.com) is the Intelligent Content Cloud, a single platform that enables
organizations to fuel collaboration, manage the entire content lifecycle, secure critical content,
and transform business workflows with enterprise AI. Founded in 2005, Box simplifies work for
leading global organizations, including AstraZeneca, JLL, Morgan Stanley, and Nationwide.

In this package, we make available a number of ways to include Box content in your AI workflows.

## Installation and setup

```bash
pip install -U langchain-box
```

## langchain-box

This package contains the LangChain integration with Box. For more information about
Box, check out our [developer documentation](https://developer.box.com).

## Prerequisites

In order to integrate with Box, you need a few things:

* A Box instance — if you are not a current Box customer, sign up for a
  [free dev account](https://account.box.com/signup/n/developer#ty9l3).
* A Box app — more on how to
  [create an app](https://developer.box.com/guides/getting-started/first-application/)
* Your app approved in your Box instance — this is done by your admin.
  The good news is if you are using a free developer account, you are the admin.
  [Authorize your app](https://developer.box.com/guides/authorization/custom-app-approval/#manual-approval)

## Authentication

The `langchain-box` package offers some flexibility in authentication. The
most basic authentication method is a developer token. This can be
found in the [Box developer console](https://account.box.com/developers/console)
on the configuration screen. This token is purposely short-lived (1 hour) and is
intended for development. With this token, you can add it to your environment as
`BOX_DEVELOPER_TOKEN`, you can pass it directly to the loader, or you can use the
`BoxAuth` authentication helper class.

We will cover passing it directly to the loader in the section below.

### BoxAuth helper class

`BoxAuth` supports the following authentication methods:

* Token — either a developer token or any token generated through the Box SDK
* JWT with a service account
* JWT with a specified user
* CCG with a service account
* CCG with a specified user

:::note
If using JWT authentication, you will need to download the configuration from the Box
developer console after generating your public/private key pair. Place this file in your
application directory structure somewhere. You will use the path to this file when using
the `BoxAuth` helper class.
:::

For more information, learn about how to
[set up a Box application](https://developer.box.com/guides/getting-started/first-application/),
and check out the
[Box authentication guide](https://developer.box.com/guides/authentication/select/)
for more about our different authentication options.

Examples:

**Token**

```python
from langchain_box.document_loaders import BoxLoader
from langchain_box.utilities import BoxAuth, BoxAuthType

auth = BoxAuth(
    auth_type=BoxAuthType.TOKEN,
    box_developer_token=box_developer_token
)

loader = BoxLoader(
    box_auth=auth,
    ...
)
```

**JWT with a service account**

```python
from langchain_box.document_loaders import BoxLoader
from langchain_box.utilities import BoxAuth, BoxAuthType

auth = BoxAuth(
    auth_type=BoxAuthType.JWT,
    box_jwt_path=box_jwt_path
)

loader = BoxLoader(
    box_auth=auth,
    ...
)
```

**JWT with a specified user**

```python
from langchain_box.document_loaders import BoxLoader
from langchain_box.utilities import BoxAuth, BoxAuthType

auth = BoxAuth(
    auth_type=BoxAuthType.JWT,
    box_jwt_path=box_jwt_path,
    box_user_id=box_user_id
)

loader = BoxLoader(
    box_auth=auth,
    ...
)
```

**CCG with a service account**

```python
from langchain_box.document_loaders import BoxLoader
from langchain_box.utilities import BoxAuth, BoxAuthType

auth = BoxAuth(
    auth_type=BoxAuthType.CCG,
    box_client_id=box_client_id,
    box_client_secret=box_client_secret,
    box_enterprise_id=box_enterprise_id
)

loader = BoxLoader(
    box_auth=auth,
    ...
)
```

**CCG with a specified user**

```python
from langchain_box.document_loaders import BoxLoader
from langchain_box.utilities import BoxAuth, BoxAuthType

auth = BoxAuth(
    auth_type=BoxAuthType.CCG,
    box_client_id=box_client_id,
    box_client_secret=box_client_secret,
    box_user_id=box_user_id
)

loader = BoxLoader(
    box_auth=auth,
    ...
)
```

If you wish to use OAuth2 with the authorization_code flow, please use `BoxAuthType.TOKEN` with the token you have acquired.

## Document Loaders

### BoxLoader

[See usage example](/docs/integrations/document_loaders/box)

```python
from langchain_box.document_loaders import BoxLoader
```

## Retrievers

### BoxRetriever

[See usage example](/docs/integrations/retrievers/box)

```python
from langchain_box.retrievers import BoxRetriever
```
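One common pattern with the authentication options above is to pick a method based on which credentials are configured. The helper below is purely illustrative (it is not part of `langchain-box`; `BOX_DEVELOPER_TOKEN` comes from this page, while the other variable names are assumptions for the sketch):

```python
def choose_auth_method(env: dict) -> str:
    """Pick the simplest Box auth method for which credentials are present."""
    if "BOX_DEVELOPER_TOKEN" in env:
        return "token"
    if "BOX_JWT_PATH" in env:  # assumed name: path to the downloaded JWT config file
        return "jwt"
    if "BOX_CLIENT_ID" in env and "BOX_CLIENT_SECRET" in env:  # assumed names
        return "ccg"
    raise ValueError("no Box credentials configured")

print(choose_auth_method({"BOX_DEVELOPER_TOKEN": "dev-token"}))  # token
```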
langchain_md_files/integrations/providers/brave_search.mdx
ADDED
@@ -0,0 +1,36 @@
# Brave Search

>[Brave Search](https://en.wikipedia.org/wiki/Brave_Search) is a search engine developed by Brave Software.
> - `Brave Search` uses its own web index. As of May 2022, it covered over 10 billion pages and was used to serve 92%
> of search results without relying on any third parties, with the remainder being retrieved
> server-side from the Bing API or (on an opt-in basis) client-side from Google. According
> to Brave, the index was kept "intentionally smaller than that of Google or Bing" in order to
> help avoid spam and other low-quality content, with the disadvantage that "Brave Search is
> not yet as good as Google in recovering long-tail queries."
> - `Brave Search Premium`: As of April 2023 Brave Search is an ad-free website, but it will
> eventually switch to a new model that will include ads, and premium users will get an ad-free experience.
> User data, including IP addresses, won't be collected from its users by default. A premium account
> will be required for opt-in data collection.


## Installation and Setup

To get access to the Brave Search API, you need to [create an account and get an API key](https://api.search.brave.com/app/dashboard).


## Document Loader

See a [usage example](/docs/integrations/document_loaders/brave_search).

```python
from langchain_community.document_loaders import BraveSearchLoader
```

## Tool

See a [usage example](/docs/integrations/tools/brave_search).

```python
from langchain.tools import BraveSearch
```
langchain_md_files/integrations/providers/browserbase.mdx
ADDED
@@ -0,0 +1,34 @@
# Browserbase

[Browserbase](https://browserbase.com) is a developer platform to reliably run, manage, and monitor headless browsers.

Power your AI data retrievals with:
- [Serverless Infrastructure](https://docs.browserbase.com/under-the-hood) providing reliable browsers to extract data from complex UIs
- [Stealth Mode](https://docs.browserbase.com/features/stealth-mode) with included fingerprinting tactics and automatic captcha solving
- [Session Debugger](https://docs.browserbase.com/features/sessions) to inspect your browser session with networks timeline and logs
- [Live Debug](https://docs.browserbase.com/guides/session-debug-connection/browser-remote-control) to quickly debug your automation

## Installation and Setup

- Get an API key and Project ID from [browserbase.com](https://browserbase.com) and set them in environment variables (`BROWSERBASE_API_KEY`, `BROWSERBASE_PROJECT_ID`).
- Install the [Browserbase SDK](http://github.com/browserbase/python-sdk):

```bash
pip install browserbase
```

## Document loader

See a [usage example](/docs/integrations/document_loaders/browserbase).

```python
from langchain_community.document_loaders import BrowserbaseLoader
```

## Multi-Modal

See a [usage example](/docs/integrations/document_loaders/browserbase).

```python
from browserbase.helpers.gpt4 import GPT4VImage, GPT4VImageDetail
```
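The two environment variables named in the setup steps above can be exported directly; placeholder values shown:

```shell
export BROWSERBASE_API_KEY="your-api-key"
export BROWSERBASE_PROJECT_ID="your-project-id"
```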
langchain_md_files/integrations/providers/browserless.mdx

# Browserless

>[Browserless](https://www.browserless.io/docs/start) is a service that allows you to
> run headless Chrome instances in the cloud. It's a great way to run browser-based
> automation at scale without having to worry about managing your own infrastructure.

## Installation and Setup

Get an API key [here](https://www.browserless.io/pricing/).

## Document loader

See a [usage example](/docs/integrations/document_loaders/browserless).

```python
from langchain_community.document_loaders import BrowserlessLoader
```

langchain_md_files/integrations/providers/byte_dance.mdx

# ByteDance

>[ByteDance](https://bytedance.com/) is a Chinese internet technology company.

## Installation and Setup

Get an access token. You can find the access instructions [here](https://open.larksuite.com/document).

## Document Loader

### Lark Suite

>[Lark Suite](https://www.larksuite.com/) is an enterprise collaboration platform
> developed by `ByteDance`.

See a [usage example](/docs/integrations/document_loaders/larksuite).

```python
from langchain_community.document_loaders.larksuite import LarkSuiteDocLoader
```

langchain_md_files/integrations/providers/cassandra.mdx

# Cassandra

> [Apache Cassandra®](https://cassandra.apache.org/) is a NoSQL, row-oriented, highly scalable and highly available database.
> Starting with version 5.0, the database ships with [vector search capabilities](https://cassandra.apache.org/doc/trunk/cassandra/vector-search/overview.html).

The integrations outlined in this page can be used with `Cassandra` as well as other CQL-compatible databases,
i.e. those using the `Cassandra Query Language` protocol.

## Installation and Setup

Install the following Python package:

```bash
pip install "cassio>=0.1.6"
```

## Vector Store

```python
from langchain_community.vectorstores import Cassandra
```

Learn more in the [example notebook](/docs/integrations/vectorstores/cassandra).

## Chat message history

```python
from langchain_community.chat_message_histories import CassandraChatMessageHistory
```

Learn more in the [example notebook](/docs/integrations/memory/cassandra_chat_message_history).

## LLM Cache

```python
from langchain.globals import set_llm_cache
from langchain_community.cache import CassandraCache
set_llm_cache(CassandraCache())
```

Learn more in the [example notebook](/docs/integrations/llm_caching#cassandra-caches) (scroll to the Cassandra section).

## Semantic LLM Cache

```python
from langchain.globals import set_llm_cache
from langchain_community.cache import CassandraSemanticCache
set_llm_cache(CassandraSemanticCache(
    embedding=my_embedding,
    table_name="my_store",
))
```

Learn more in the [example notebook](/docs/integrations/llm_caching#cassandra-caches) (scroll to the appropriate section).
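Unlike an exact-match cache, a semantic cache returns a stored response whenever a new prompt is close enough in embedding space. A minimal in-memory sketch of that idea (this is not the actual `CassandraSemanticCache` internals; the embedding function and threshold here are purely illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

class ToySemanticCache:
    """Return a cached response when a prompt's embedding is similar enough."""

    def __init__(self, embed, threshold=0.9):
        self.embed = embed          # callable: str -> list[float]
        self.threshold = threshold
        self.entries = []           # list of (embedding, response) pairs

    def lookup(self, prompt):
        v = self.embed(prompt)
        best = max(self.entries, key=lambda e: cosine(e[0], v), default=None)
        if best is not None and cosine(best[0], v) >= self.threshold:
            return best[1]
        return None

    def update(self, prompt, response):
        self.entries.append((self.embed(prompt), response))

# Toy embedding: character-frequency vector (illustrative only).
def toy_embed(text):
    text = text.lower()
    return [text.count(c) for c in "abcdefghijklmnopqrstuvwxyz "]

cache = ToySemanticCache(toy_embed, threshold=0.95)
cache.update("what is the capital of france", "Paris")
print(cache.lookup("what is the capital of france?"))  # near-identical prompt hits
```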

## Document loader

```python
from langchain_community.document_loaders import CassandraLoader
```

Learn more in the [example notebook](/docs/integrations/document_loaders/cassandra).

#### Attribution statement

> Apache Cassandra, Cassandra and Apache are either registered trademarks or trademarks of
> the [Apache Software Foundation](http://www.apache.org/) in the United States and/or other countries.

## Toolkit

The `Cassandra Database toolkit` enables AI engineers to efficiently integrate agents
with Cassandra data.

```python
from langchain_community.agent_toolkits.cassandra_database.toolkit import (
    CassandraDatabaseToolkit,
)
```

Learn more in the [example notebook](/docs/integrations/tools/cassandra_database).
langchain_md_files/integrations/providers/cerebriumai.mdx

# CerebriumAI

>[Cerebrium](https://docs.cerebrium.ai/cerebrium/getting-started/introduction) is a serverless GPU infrastructure provider.
> It provides API access to several LLM models.

See the examples in the [CerebriumAI documentation](https://docs.cerebrium.ai/examples/langchain).

## Installation and Setup

- Install the Python package:
```bash
pip install cerebrium
```

- [Get a CerebriumAI API key](https://docs.cerebrium.ai/cerebrium/getting-started/installation) and set
it as an environment variable (`CEREBRIUMAI_API_KEY`).

## LLMs

See a [usage example](/docs/integrations/llms/cerebriumai).

```python
from langchain_community.llms import CerebriumAI
```

langchain_md_files/integrations/providers/chaindesk.mdx

# Chaindesk

>[Chaindesk](https://chaindesk.ai) is an [open-source](https://github.com/gmpetrov/databerry) document retrieval platform that helps to connect your personal data with Large Language Models.

## Installation and Setup

Sign up for Chaindesk, create a datastore, add some data, and get the datastore API endpoint URL.
You also need the [API Key](https://docs.chaindesk.ai/api-reference/authentication).

## Retriever

See a [usage example](/docs/integrations/retrievers/chaindesk).

```python
from langchain.retrievers import ChaindeskRetriever
```

langchain_md_files/integrations/providers/chroma.mdx

# Chroma

>[Chroma](https://docs.trychroma.com/getting-started) is a database for building AI applications with embeddings.

## Installation and Setup

```bash
pip install langchain-chroma
```

## VectorStore

There exists a wrapper around Chroma vector databases, allowing you to use it as a vectorstore,
whether for semantic search or example selection.

```python
from langchain_chroma import Chroma
```

For a more detailed walkthrough of the Chroma wrapper, see [this notebook](/docs/integrations/vectorstores/chroma).
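"Example selection" here means picking the few-shot examples most semantically similar to an incoming query. A minimal sketch of the idea, using a toy bag-of-words embedding in place of a real model (the names and the embedding are illustrative, not Chroma's API):

```python
import math

# Toy embedding: bag-of-words over a tiny vocabulary (illustrative only).
VOCAB = ["cat", "dog", "stock", "price", "pet", "market"]

def embed(text):
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

def select_examples(query, examples, k=1):
    """Return the k examples most similar to the query in embedding space."""
    qv = embed(query)
    ranked = sorted(examples, key=lambda ex: cosine(embed(ex), qv), reverse=True)
    return ranked[:k]

examples = ["my dog is a good pet", "the stock market price rose"]
print(select_examples("is a cat a pet", examples))  # -> ['my dog is a good pet']
```

A real vectorstore replaces the toy `embed` with a learned embedding model and the linear scan with an index, but the selection logic is the same.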

## Retriever

See a [usage example](/docs/integrations/retrievers/self_query/chroma_self_query).

```python
from langchain.retrievers import SelfQueryRetriever
```

langchain_md_files/integrations/providers/clarifai.mdx

# Clarifai

>[Clarifai](https://clarifai.com) is one of the first deep learning platforms, founded in 2013. Clarifai provides an AI platform with the full AI lifecycle for data exploration, data labeling, model training, evaluation, and inference around images, video, text, and audio data. In the LangChain ecosystem, as far as we're aware, Clarifai is the only provider that supports LLMs, embeddings, and a vector store in one production-scale platform, making it an excellent choice to operationalize your LangChain implementations.
>
> `Clarifai` provides thousands of AI models for many different use cases. You can [explore them here](https://clarifai.com/explore) to find the one most suited for your use case. These models include those created by other providers such as OpenAI, Anthropic, Cohere, and AI21, as well as state-of-the-art open-source models such as Falcon and InstructorXL, so that you can build the best in AI into your products. You'll find these organized by the creator's user_id and into projects we call applications, denoted by their app_id. Those IDs will be needed in addition to the model_id and optionally the version_id, so make note of all these IDs once you've found the best model for your use case!
>
>Also note that given there are many models for image, video, text, and audio understanding, you can build some interesting AI agents that utilize the variety of AI models as experts to understand those data types.

## Installation and Setup

- Install the Python SDK:
```bash
pip install clarifai
```
[Sign up](https://clarifai.com/signup) for a Clarifai account, then get a personal access token to access the Clarifai API from your [security settings](https://clarifai.com/settings/security) and set it as an environment variable (`CLARIFAI_PAT`).

## LLMs

To find the selection of LLMs in the Clarifai platform you can select the text-to-text model type [here](https://clarifai.com/explore/models?filterData=%5B%7B%22field%22%3A%22model_type_id%22%2C%22value%22%3A%5B%22text-to-text%22%5D%7D%5D&page=1&perPage=24).

```python
from langchain_community.llms import Clarifai
llm = Clarifai(pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)
```

For more details, the docs on the Clarifai LLM wrapper provide a [detailed walkthrough](/docs/integrations/llms/clarifai).

## Embedding Models

To find the selection of embedding models in the Clarifai platform you can select the text-to-embedding model type [here](https://clarifai.com/explore/models?page=1&perPage=24&filterData=%5B%7B%22field%22%3A%22model_type_id%22%2C%22value%22%3A%5B%22text-embedder%22%5D%7D%5D).

There is a Clarifai Embedding model in LangChain, which you can access with:
```python
from langchain_community.embeddings import ClarifaiEmbeddings
embeddings = ClarifaiEmbeddings(pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)
```

## Vectorstore

Clarifai's vector DB was launched in 2016 and has been optimized to support live search queries. With workflows in the Clarifai platform, your data is automatically indexed by an embedding model, and optionally by other models as well, to index that information in the DB for search. You can query the DB not only via the vectors but also filter by metadata matches, other AI-predicted concepts, and even do geo-coordinate search. Simply create an application, select the appropriate base workflow for your type of data, and upload it (through the API as [documented here](https://docs.clarifai.com/api-guide/data/create-get-update-delete) or the UIs at clarifai.com).

You can also add data directly from LangChain, and the auto-indexing will take place for you. You'll notice this is a little different from other vectorstores, where you need to provide an embedding model in their constructor and have LangChain coordinate getting the embeddings from text and writing those to the index. Not only is it more convenient, but it's much more scalable to use Clarifai's distributed cloud to do all the indexing in the background.

```python
from langchain_community.vectorstores import Clarifai
clarifai_vector_db = Clarifai.from_texts(user_id=USER_ID, app_id=APP_ID, texts=texts, pat=CLARIFAI_PAT, number_of_docs=NUMBER_OF_DOCS, metadatas=metadatas)
```
For more details, the docs on the Clarifai vector store provide a [detailed walkthrough](/docs/integrations/vectorstores/clarifai).
langchain_md_files/integrations/providers/clickhouse.mdx

# ClickHouse

> [ClickHouse](https://clickhouse.com/) is a fast and resource-efficient open-source database for real-time
> apps and analytics, with full SQL support and a wide range of functions to assist users in writing analytical queries.
> It has data structures and distance search functions (like `L2Distance`) as well as
> [approximate nearest neighbor search indexes](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/annindexes)
> that enable ClickHouse to be used as a high-performance and scalable vector database to store and search vectors with SQL.

## Installation and Setup

Install the `clickhouse-connect` Python package:

```bash
pip install clickhouse-connect
```

## Vector Store

See a [usage example](/docs/integrations/vectorstores/clickhouse).

```python
from langchain_community.vectorstores import Clickhouse, ClickhouseSettings
```
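For intuition, the `L2Distance` function mentioned above is plain Euclidean distance between two vectors, which a SQL query can use to rank rows by similarity. A quick sketch of the same computation in Python:

```python
import math

def l2_distance(a, b):
    """Euclidean distance, the metric ClickHouse's L2Distance computes."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Rank candidate vectors by distance to a query vector (smallest first).
query = [0.0, 0.0]
candidates = [[3.0, 4.0], [1.0, 1.0], [6.0, 8.0]]
print(sorted(candidates, key=lambda v: l2_distance(query, v)))
# -> [[1.0, 1.0], [3.0, 4.0], [6.0, 8.0]]
```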
langchain_md_files/integrations/providers/clickup.mdx

# ClickUp

>[ClickUp](https://clickup.com/) is an all-in-one productivity platform that provides small and large teams across industries with flexible and customizable work management solutions, tools, and functions.
>
>It is a cloud-based project management solution for businesses of all sizes featuring communication and collaboration tools to help achieve organizational goals.

## Installation and Setup

1. Create a [ClickUp App](https://help.clickup.com/hc/en-us/articles/6303422883095-Create-your-own-app-with-the-ClickUp-API)
2. Follow [these steps](https://clickup.com/api/developer-portal/authentication/) to get your `client_id` and `client_secret`.

## Toolkits

```python
from langchain_community.agent_toolkits.clickup.toolkit import ClickupToolkit
from langchain_community.utilities.clickup import ClickupAPIWrapper
```

See a [usage example](/docs/integrations/tools/clickup).
langchain_md_files/integrations/providers/cloudflare.mdx

# Cloudflare

>[Cloudflare, Inc. (Wikipedia)](https://en.wikipedia.org/wiki/Cloudflare) is an American company that provides
> content delivery network services, cloud cybersecurity, DDoS mitigation, and ICANN-accredited
> domain registration services.

>[Cloudflare Workers AI](https://developers.cloudflare.com/workers-ai/) allows you to run machine
> learning models, on the `Cloudflare` network, from your code via REST API.

## LLMs

See [installation instructions and usage example](/docs/integrations/llms/cloudflare_workersai).

```python
from langchain_community.llms.cloudflare_workersai import CloudflareWorkersAI
```

## Embedding models

See [installation instructions and usage example](/docs/integrations/text_embedding/cloudflare_workersai).

```python
from langchain_community.embeddings.cloudflare_workersai import CloudflareWorkersAIEmbeddings
```
langchain_md_files/integrations/providers/clova.mdx

# Clova

>[CLOVA Studio](https://api.ncloud-docs.com/docs/ai-naver-clovastudio-summary) is a service
> of [Naver Cloud Platform](https://www.ncloud.com/) that uses `HyperCLOVA` language models,
> a hyperscale AI technology, to output phrases generated through AI technology based on user input.

## Embedding models

See [installation instructions and usage example](/docs/integrations/text_embedding/clova).

```python
from langchain_community.embeddings import ClovaEmbeddings
```
langchain_md_files/integrations/providers/cnosdb.mdx

# CnosDB

> [CnosDB](https://github.com/cnosdb/cnosdb) is an open-source distributed time series database with high performance, high compression rate and high ease of use.

## Installation and Setup

```bash
pip install cnos-connector
```

## Connecting to CnosDB

You can connect to CnosDB using the `SQLDatabase.from_cnosdb()` method.

### Syntax

```python
def SQLDatabase.from_cnosdb(url: str = "127.0.0.1:8902",
                            user: str = "root",
                            password: str = "",
                            tenant: str = "cnosdb",
                            database: str = "public")
```

Args:
1. url (str): The HTTP connection host name and port number of the CnosDB
   service, excluding "http://" or "https://", with a default value
   of "127.0.0.1:8902".
2. user (str): The username used to connect to the CnosDB service, with a
   default value of "root".
3. password (str): The password of the user connecting to the CnosDB service,
   with a default value of "".
4. tenant (str): The name of the tenant used to connect to the CnosDB service,
   with a default value of "cnosdb".
5. database (str): The name of the database in the CnosDB tenant.

## Examples

```python
# Connecting to CnosDB with the SQLDatabase wrapper
from langchain_community.utilities import SQLDatabase

db = SQLDatabase.from_cnosdb()
```
```python
# Creating an OpenAI chat LLM wrapper
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")
```

### SQL Database Chain

This example demonstrates the use of the SQL Chain for answering a question over CnosDB.

```python
from langchain_community.utilities import SQLDatabaseChain

db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)

db_chain.run(
    "What is the average temperature of air at station XiaoMaiDao between October 19, 2022 and October 20, 2022?"
)
```
```shell
> Entering new chain...
What is the average temperature of air at station XiaoMaiDao between October 19, 2022 and October 20, 2022?
SQLQuery:SELECT AVG(temperature) FROM air WHERE station = 'XiaoMaiDao' AND time >= '2022-10-19' AND time < '2022-10-20'
SQLResult: [(68.0,)]
Answer:The average temperature of air at station XiaoMaiDao between October 19, 2022 and October 20, 2022 is 68.0.
> Finished chain.
```
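As a quick sanity check on that answer: the three sample rows shown in the agent's schema preview below have temperatures 67.0, 69.0, and 68.0, whose mean is exactly the 68.0 the chain reports:

```python
# Temperatures from the three sample rows of the "air" table.
temps = [67.0, 69.0, 68.0]
avg = sum(temps) / len(temps)
print(avg)  # -> 68.0
```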

### SQL Database Agent

This example demonstrates the use of the SQL Database Agent for answering questions over CnosDB.

```python
from langchain.agents import create_sql_agent
from langchain_community.agent_toolkits import SQLDatabaseToolkit

toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
```
```python
agent.run(
    "What is the average temperature of air at station XiaoMaiDao between October 19, 2022 and October 20, 2022?"
)
```
```shell
> Entering new chain...
Action: sql_db_list_tables
Action Input: ""
Observation: air
Thought:The "air" table seems relevant to the question. I should query the schema of the "air" table to see what columns are available.
Action: sql_db_schema
Action Input: "air"
Observation:
CREATE TABLE air (
    pressure FLOAT,
    station STRING,
    temperature FLOAT,
    time TIMESTAMP,
    visibility FLOAT
)

/*
3 rows from air table:
pressure  station     temperature  time                 visibility
75.0      XiaoMaiDao  67.0         2022-10-19T03:40:00  54.0
77.0      XiaoMaiDao  69.0         2022-10-19T04:40:00  56.0
76.0      XiaoMaiDao  68.0         2022-10-19T05:40:00  55.0
*/
Thought:The "temperature" column in the "air" table is relevant to the question. I can query the average temperature between the specified dates.
Action: sql_db_query
Action Input: "SELECT AVG(temperature) FROM air WHERE station = 'XiaoMaiDao' AND time >= '2022-10-19' AND time <= '2022-10-20'"
Observation: [(68.0,)]
Thought:The average temperature of air at station XiaoMaiDao between October 19, 2022 and October 20, 2022 is 68.0.
Final Answer: 68.0

> Finished chain.
```
langchain_md_files/integrations/providers/cogniswitch.mdx

# CogniSwitch

>[CogniSwitch](https://www.cogniswitch.ai/aboutus) is an API-based data platform that
> enhances enterprise data by extracting entities, concepts and their relationships,
> thereby converting this data into a multidimensional format and storing it in
> a database that can accommodate these enhancements. In our case the data is stored
> in a knowledge graph. This enhanced data is now ready for consumption by LLMs and
> other GenAI applications, ensuring the data is consumable and context can be maintained,
> thereby eliminating hallucinations and delivering accuracy.

## Toolkit

See [installation instructions and usage example](/docs/integrations/tools/cogniswitch).

```python
from langchain_community.agent_toolkits import CogniswitchToolkit
```

## Tools

### CogniswitchKnowledgeRequest

>Tool that uses the CogniSwitch service to answer questions.

```python
from langchain_community.tools.cogniswitch.tool import CogniswitchKnowledgeRequest
```

### CogniswitchKnowledgeSourceFile

>Tool that uses the CogniSwitch services to store data from a file.

```python
from langchain_community.tools.cogniswitch.tool import CogniswitchKnowledgeSourceFile
```

### CogniswitchKnowledgeSourceURL

>Tool that uses the CogniSwitch services to store data from a URL.

```python
from langchain_community.tools.cogniswitch.tool import CogniswitchKnowledgeSourceURL
```

### CogniswitchKnowledgeStatus

>Tool that uses the CogniSwitch services to get the status of an uploaded document or URL.

```python
from langchain_community.tools.cogniswitch.tool import CogniswitchKnowledgeStatus
```
langchain_md_files/integrations/providers/cohere.mdx

# Cohere

>[Cohere](https://cohere.ai/about) is a Canadian startup that provides natural language processing models
> that help companies improve human-machine interactions.

## Installation and Setup

- Install the Python SDK:
```bash
pip install langchain-cohere
```

Get a [Cohere API key](https://dashboard.cohere.ai/) and set it as an environment variable (`COHERE_API_KEY`).

## Cohere LangChain integrations

|API|Description|Endpoint docs|Import|Example usage|
|---|---|---|---|---|
|Chat|Build chat bots|[chat](https://docs.cohere.com/reference/chat)|`from langchain_cohere import ChatCohere`|[cohere.ipynb](/docs/integrations/chat/cohere)|
|LLM|Generate text|[generate](https://docs.cohere.com/reference/generate)|`from langchain_cohere.llms import Cohere`|[cohere.ipynb](/docs/integrations/llms/cohere)|
|RAG Retriever|Connect to external data sources|[chat + rag](https://docs.cohere.com/reference/chat)|`from langchain.retrievers import CohereRagRetriever`|[cohere.ipynb](/docs/integrations/retrievers/cohere)|
|Text Embedding|Embed strings to vectors|[embed](https://docs.cohere.com/reference/embed)|`from langchain_cohere import CohereEmbeddings`|[cohere.ipynb](/docs/integrations/text_embedding/cohere)|
|Rerank Retriever|Rank strings based on relevance|[rerank](https://docs.cohere.com/reference/rerank)|`from langchain.retrievers.document_compressors import CohereRerank`|[cohere.ipynb](/docs/integrations/retrievers/cohere-reranker)|

## Quick copy examples

### Chat

```python
from langchain_cohere import ChatCohere
from langchain_core.messages import HumanMessage

chat = ChatCohere()
messages = [HumanMessage(content="knock knock")]
print(chat.invoke(messages))
```

Usage of the Cohere [chat model](/docs/integrations/chat/cohere).

### LLM

```python
from langchain_cohere.llms import Cohere

llm = Cohere()
print(llm.invoke("Come up with a pet name"))
```

Usage of the Cohere (legacy) [LLM model](/docs/integrations/llms/cohere).

### Tool calling

```python
from langchain_cohere import ChatCohere
from langchain_core.messages import (
    HumanMessage,
    ToolMessage,
)
from langchain_core.tools import tool

@tool
def magic_function(number: int) -> int:
    """Applies a magic operation to an integer

    Args:
        number: Number to have magic operation performed on
    """
    return number + 10

def invoke_tools(tool_calls, messages):
    for tool_call in tool_calls:
        selected_tool = {"magic_function": magic_function}[
            tool_call["name"].lower()
        ]
        tool_output = selected_tool.invoke(tool_call["args"])
        messages.append(ToolMessage(tool_output, tool_call_id=tool_call["id"]))
    return messages

tools = [magic_function]

llm = ChatCohere()
llm_with_tools = llm.bind_tools(tools=tools)
messages = [
    HumanMessage(
        content="What is the value of magic_function(2)?"
    )
]

res = llm_with_tools.invoke(messages)
while res.tool_calls:
    messages.append(res)
    messages = invoke_tools(res.tool_calls, messages)
    res = llm_with_tools.invoke(messages)

print(res.content)
```

Tool calling with the Cohere LLM can be done by binding the necessary tools to the LLM as seen above.
An alternative is to support multi-hop tool calling with the ReAct agent, as seen below.

### ReAct Agent

The agent is based on the paper
[ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629).

```python
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_cohere import ChatCohere, create_cohere_react_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain.agents import AgentExecutor

llm = ChatCohere()

internet_search = TavilySearchResults(max_results=4)
internet_search.name = "internet_search"
internet_search.description = "Route a user query to the internet"

prompt = ChatPromptTemplate.from_template("{input}")

agent = create_cohere_react_agent(
    llm,
    [internet_search],
    prompt
)

agent_executor = AgentExecutor(agent=agent, tools=[internet_search], verbose=True)

agent_executor.invoke({
    "input": "In what year was the company that was founded as Sound of Music added to the S&P 500?",
})
```
The ReAct agent can be used to call multiple tools in sequence.

### RAG Retriever

```python
from langchain_cohere import ChatCohere
from langchain.retrievers import CohereRagRetriever
from langchain_core.documents import Document

rag = CohereRagRetriever(llm=ChatCohere())
print(rag.invoke("What is cohere ai?"))
```

Usage of the Cohere [RAG Retriever](/docs/integrations/retrievers/cohere).

### Text Embedding

```python
from langchain_cohere import CohereEmbeddings

embeddings = CohereEmbeddings(model="embed-english-light-v3.0")
print(embeddings.embed_documents(["This is a test document."]))
```

Usage of the Cohere [Text Embeddings model](/docs/integrations/text_embedding/cohere).
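A common next step after embedding documents is comparing them by cosine similarity. A minimal, self-contained sketch using toy vectors in place of real embedding output (the vectors are illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Stand-ins for embeddings of three documents.
doc_vecs = {
    "doc_a": [1.0, 0.0, 1.0],
    "doc_b": [1.0, 0.1, 0.9],
    "doc_c": [0.0, 1.0, 0.0],
}
query_vec = [1.0, 0.0, 1.0]

best = max(doc_vecs, key=lambda d: cosine_similarity(doc_vecs[d], query_vec))
print(best)  # -> doc_a
```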

### Reranker

Usage of the Cohere [Reranker](/docs/integrations/retrievers/cohere-reranker).
langchain_md_files/integrations/providers/college_confidential.mdx

# College Confidential

>[College Confidential](https://www.collegeconfidential.com/) gives information on 3,800+ colleges and universities.

## Installation and Setup

There is no special setup required.

## Document Loader

See a [usage example](/docs/integrations/document_loaders/college_confidential).

```python
from langchain_community.document_loaders import CollegeConfidentialLoader
```
langchain_md_files/integrations/providers/confident.mdx
ADDED
@@ -0,0 +1,26 @@

# Confident AI

>[Confident AI](https://confident-ai.com) is the creator of `DeepEval`.
>
>[DeepEval](https://github.com/confident-ai/deepeval) is a package for unit testing LLMs.
> Using `DeepEval`, everyone can build robust language models through faster iterations
> using both unit testing and integration testing. `DeepEval` provides support for each step in the iteration,
> from synthetic data creation to testing.

## Installation and Setup

You need to get the [DeepEval API credentials](https://app.confident-ai.com).

You need to install the `DeepEval` Python package:

```bash
pip install deepeval
```

## Callbacks

See an [example](/docs/integrations/callbacks/confident).

```python
from langchain.callbacks.confident_callback import DeepEvalCallbackHandler
```
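As a rough sketch of how the handler is used, it can be attached to an LLM via `callbacks` so that every call is scored (the metric class and its `minimum_score` parameter are assumptions based on DeepEval's quickstart; check the DeepEval docs for the current API):

```python
from deepeval.metrics.answer_relevancy import AnswerRelevancy  # assumed metric class
from langchain.callbacks.confident_callback import DeepEvalCallbackHandler
from langchain_openai import OpenAI

# Score answer relevancy for every LLM call; threshold is illustrative.
deepeval_callback = DeepEvalCallbackHandler(
    implementation_name="langchainQuickstart",
    metrics=[AnswerRelevancy(minimum_score=0.5)],
)
llm = OpenAI(temperature=0, callbacks=[deepeval_callback])
llm.invoke("Who invented the telephone?")
```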
langchain_md_files/integrations/providers/confluence.mdx
ADDED
@@ -0,0 +1,22 @@

# Confluence

>[Confluence](https://www.atlassian.com/software/confluence) is a wiki collaboration platform that saves and organizes all of the project-related material. `Confluence` is a knowledge base that primarily handles content management activities.


## Installation and Setup

```bash
pip install atlassian-python-api
```

We need to set up `username/api_key` or `OAuth2 login`.
See [instructions](https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/).


## Document Loader

See a [usage example](/docs/integrations/document_loaders/confluence).

```python
from langchain_community.document_loaders import ConfluenceLoader
```
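A minimal sketch of loading pages, assuming an Atlassian Cloud site and an API token created per the instructions above (the URL, account email, token, and space key are all placeholders):

```python
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://yoursite.atlassian.net/wiki",  # placeholder site URL
    username="me@example.com",                  # placeholder account email
    api_key="<api-token>",                      # placeholder API token
)
# Load up to 50 pages from a given space.
documents = loader.load(space_key="SPACE", limit=50)
```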
langchain_md_files/integrations/providers/connery.mdx
ADDED
@@ -0,0 +1,28 @@

# Connery

>[Connery SDK](https://github.com/connery-io/connery-sdk) is an NPM package that
> includes both an SDK and a CLI, designed for the development of plugins and actions.
>
>The CLI automates many things in the development process. The SDK
> offers a JavaScript API for defining plugins and actions and packaging them
> into a plugin server with a standardized REST API generated from the metadata.
> The plugin server handles authorization, input validation, and logging,
> so you can focus on the logic of your actions.
>
> See the use cases and examples in the [Connery SDK documentation](https://sdk.connery.io/docs/use-cases/)

## Toolkit

See [usage example](/docs/integrations/tools/connery).

```python
from langchain_community.agent_toolkits.connery import ConneryToolkit
```

## Tools

### ConneryAction

```python
from langchain_community.tools.connery import ConneryService
```
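A sketch of wiring the toolkit to a Connery Runner (the environment variables and `create_instance` factory follow the toolkit's usage example; the runner URL and API key are placeholders):

```python
import os

from langchain_community.agent_toolkits.connery import ConneryToolkit
from langchain_community.tools.connery import ConneryService

# Point the service at your Connery Runner (placeholder values).
os.environ["CONNERY_RUNNER_URL"] = "https://your-runner-url"
os.environ["CONNERY_RUNNER_API_KEY"] = "<api-key>"

connery_service = ConneryService()
connery_toolkit = ConneryToolkit.create_instance(connery_service)
tools = connery_toolkit.get_tools()  # one LangChain tool per available Connery action
```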
langchain_md_files/integrations/providers/context.mdx
ADDED
@@ -0,0 +1,20 @@

# Context

>[Context](https://context.ai/) provides user analytics for LLM-powered products and features.

## Installation and Setup

We need to install the `context-python` Python package:

```bash
pip install context-python
```


## Callbacks

See a [usage example](/docs/integrations/callbacks/context).

```python
from langchain.callbacks import ContextCallbackHandler
```
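A minimal sketch of attaching the handler to a chat model, assuming a Context API token in the `CONTEXT_API_TOKEN` environment variable and an OpenAI chat model (both are assumptions for illustration):

```python
import os

from langchain.callbacks import ContextCallbackHandler
from langchain_openai import ChatOpenAI

# The handler forwards transcripts to Context using your API token.
token = os.environ["CONTEXT_API_TOKEN"]
chat = ChatOpenAI(temperature=0, callbacks=[ContextCallbackHandler(token)])
chat.invoke("Hello!")
```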
langchain_md_files/integrations/providers/couchbase.mdx
ADDED
@@ -0,0 +1,111 @@

# Couchbase

>[Couchbase](http://couchbase.com/) is an award-winning distributed NoSQL cloud database
> that delivers unmatched versatility, performance, scalability, and financial value
> for all of your cloud, mobile, AI, and edge computing applications.

## Installation and Setup

We have to install the `langchain-couchbase` package.

```bash
pip install langchain-couchbase
```

## Vector Store

See a [usage example](/docs/integrations/vectorstores/couchbase).

```python
from langchain_couchbase import CouchbaseVectorStore
```

## Document loader

See a [usage example](/docs/integrations/document_loaders/couchbase).

```python
from langchain_community.document_loaders.couchbase import CouchbaseLoader
```

## LLM Caches

### CouchbaseCache

Use Couchbase as a cache for prompts and responses.

See a [usage example](/docs/integrations/llm_caching/#couchbase-cache).

To import this cache:

```python
from langchain_couchbase.cache import CouchbaseCache
```

To use this cache with your LLMs:

```python
from langchain_core.globals import set_llm_cache

cluster = couchbase_cluster_connection_object

set_llm_cache(
    CouchbaseCache(
        cluster=cluster,
        bucket_name=BUCKET_NAME,
        scope_name=SCOPE_NAME,
        collection_name=COLLECTION_NAME,
    )
)
```

### CouchbaseSemanticCache

Semantic caching allows users to retrieve cached prompts based on the semantic similarity between the user input and previously cached inputs. Under the hood, it uses Couchbase as both a cache and a vectorstore.
The `CouchbaseSemanticCache` needs a Search Index defined to work. Please look at the [usage example](/docs/integrations/vectorstores/couchbase) on how to set up the index.

See a [usage example](/docs/integrations/llm_caching/#couchbase-semantic-cache).

To import this cache:

```python
from langchain_couchbase.cache import CouchbaseSemanticCache
```

To use this cache with your LLMs:

```python
from langchain_core.globals import set_llm_cache

# use any embedding provider...
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
cluster = couchbase_cluster_connection_object

set_llm_cache(
    CouchbaseSemanticCache(
        cluster=cluster,
        embedding=embeddings,
        bucket_name=BUCKET_NAME,
        scope_name=SCOPE_NAME,
        collection_name=COLLECTION_NAME,
        index_name=INDEX_NAME,
    )
)
```

## Chat Message History

Use Couchbase as the storage for your chat messages.

See a [usage example](/docs/integrations/memory/couchbase_chat_message_history).

To use the chat message history in your applications:

```python
from langchain_couchbase.chat_message_histories import CouchbaseChatMessageHistory

message_history = CouchbaseChatMessageHistory(
    cluster=cluster,
    bucket_name=BUCKET_NAME,
    scope_name=SCOPE_NAME,
    collection_name=COLLECTION_NAME,
    session_id="test-session",
)

message_history.add_user_message("hi!")
```
langchain_md_files/integrations/providers/coze.mdx
ADDED
@@ -0,0 +1,19 @@

# Coze

[Coze](https://www.coze.com/) is an AI chatbot development platform that enables
the creation and deployment of chatbots for handling diverse conversations across
various applications.


## Installation and Setup

First, you need to get the `API_KEY` from the [Coze](https://www.coze.com/) website.


## Chat models

See a [usage example](/docs/integrations/chat/coze/).

```python
from langchain_community.chat_models import ChatCoze
```
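A minimal sketch of constructing the chat model (the credential and bot identifiers are placeholders; the parameter names follow the `ChatCoze` usage example):

```python
from langchain_community.chat_models import ChatCoze
from langchain_core.messages import HumanMessage

chat = ChatCoze(
    coze_api_base="https://api.coze.com",  # default API endpoint
    coze_api_key="<your-api-key>",         # placeholder
    bot_id="<your-bot-id>",                # placeholder
    user="123",
    conversation_id="",
    streaming=False,
)
chat.invoke([HumanMessage(content="What is the weather like today?")])
```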
langchain_md_files/integrations/providers/ctransformers.mdx
ADDED
@@ -0,0 +1,57 @@

# C Transformers

This page covers how to use the [C Transformers](https://github.com/marella/ctransformers) library within LangChain.
It is broken into two parts: installation and setup, and then references to specific C Transformers wrappers.

## Installation and Setup

- Install the Python package with `pip install ctransformers`
- Download a supported [GGML model](https://huggingface.co/TheBloke) (see [Supported Models](https://github.com/marella/ctransformers#supported-models))

## Wrappers

### LLM

There exists a CTransformers LLM wrapper, which you can access with:

```python
from langchain_community.llms import CTransformers
```

It provides a unified interface for all models:

```python
llm = CTransformers(model='/path/to/ggml-gpt-2.bin', model_type='gpt2')

print(llm.invoke('AI is going to'))
```

If you are getting an `illegal instruction` error, try using `lib='avx'` or `lib='basic'`:

```py
llm = CTransformers(model='/path/to/ggml-gpt-2.bin', model_type='gpt2', lib='avx')
```

It can be used with models hosted on the Hugging Face Hub:

```py
llm = CTransformers(model='marella/gpt-2-ggml')
```

If a model repo has multiple model files (`.bin` files), specify a model file using:

```py
llm = CTransformers(model='marella/gpt-2-ggml', model_file='ggml-model.bin')
```

Additional parameters can be passed using the `config` parameter:

```py
config = {'max_new_tokens': 256, 'repetition_penalty': 1.1}

llm = CTransformers(model='marella/gpt-2-ggml', config=config)
```

See [Documentation](https://github.com/marella/ctransformers#config) for a list of available parameters.

For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/ctransformers).
langchain_md_files/integrations/providers/ctranslate2.mdx
ADDED
@@ -0,0 +1,30 @@

# CTranslate2

>[CTranslate2](https://opennmt.net/CTranslate2/quickstart.html) is a C++ and Python library
> for efficient inference with Transformer models.
>
>The project implements a custom runtime that applies many performance optimization
> techniques such as weights quantization, layers fusion, batch reordering, etc.,
> to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
>
>A full list of features and supported models is included in the
> [project’s repository](https://opennmt.net/CTranslate2/guides/transformers.html).
> To start, please check out the official [quickstart guide](https://opennmt.net/CTranslate2/quickstart.html).


## Installation and Setup

Install the Python package:

```bash
pip install ctranslate2
```


## LLMs

See a [usage example](/docs/integrations/llms/ctranslate2).

```python
from langchain_community.llms import CTranslate2
```
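A hedged sketch of constructing the LLM: the model directory and tokenizer name below are placeholders, and the parameters (`model_path`, `tokenizer_name`, `compute_type`) follow the usage example linked above. The model must first be converted with `ct2-transformers-converter` per the quickstart guide.

```python
from langchain_community.llms import CTranslate2

llm = CTranslate2(
    model_path="./llama-2-7b-ct2",              # placeholder: CTranslate2-converted model dir
    tokenizer_name="meta-llama/Llama-2-7b-hf",  # placeholder: matching HF tokenizer
    compute_type="bfloat16",
)
print(llm.invoke("He presented me with plausible evidence for the existence of"))
```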
langchain_md_files/integrations/providers/cube.mdx
ADDED
@@ -0,0 +1,21 @@

# Cube

>[Cube](https://cube.dev/) is the Semantic Layer for building data apps. It helps
> data engineers and application developers access data from modern data stores,
> organize it into consistent definitions, and deliver it to every application.

## Installation and Setup

We have to get the API key and the URL of the Cube instance. See
[these instructions](https://cube.dev/docs/product/apis-integrations/rest-api#configuration-base-path).


## Document loader

### Cube Semantic Layer

See a [usage example](/docs/integrations/document_loaders/cube_semantic).

```python
from langchain_community.document_loaders import CubeSemanticLoader
```
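A minimal sketch of the loader, assuming the API URL and token obtained per the instructions above (the deployment URL and token are placeholders):

```python
from langchain_community.document_loaders import CubeSemanticLoader

# Placeholders: your Cube deployment's /meta endpoint and an API token.
loader = CubeSemanticLoader(
    cube_api_url="https://<deployment>.cubecloudapp.dev/cubejs-api/v1/meta",
    cube_api_token="<api-token>",
)
documents = loader.load()  # one document per column of the Cube data model
```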
langchain_md_files/integrations/providers/dashvector.mdx
ADDED
@@ -0,0 +1,39 @@

# DashVector

> [DashVector](https://help.aliyun.com/document_detail/2510225.html) is a fully managed vector database service that supports high-dimension dense and sparse vectors, real-time insertion, and filtered search. It is built to scale automatically and can adapt to different application requirements.

This document demonstrates how to leverage DashVector within the LangChain ecosystem. In particular, it shows how to install DashVector and how to use it as a VectorStore plugin in LangChain.
It is broken into two parts: installation and setup, and then references to specific DashVector wrappers.

## Installation and Setup

Install the Python SDK:

```bash
pip install dashvector
```

You must have an API key. Here are the [installation instructions](https://help.aliyun.com/document_detail/2510223.html).


## Embedding models

```python
from langchain_community.embeddings import DashScopeEmbeddings
```

See the [usage example](/docs/integrations/vectorstores/dashvector).


## Vector Store

A DashVector Collection is wrapped as a familiar VectorStore for native usage within LangChain,
which allows it to be readily used for various scenarios, such as semantic search or example selection.

You may import the vectorstore by:

```python
from langchain_community.vectorstores import DashVector
```

For a detailed walkthrough of the DashVector wrapper, please refer to [this notebook](/docs/integrations/vectorstores/dashvector).
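A short sketch combining the two wrappers above (this assumes `DASHVECTOR_API_KEY`, `DASHVECTOR_ENDPOINT`, and `DASHSCOPE_API_KEY` are set in the environment; the texts and query are illustrative):

```python
from langchain_community.embeddings import DashScopeEmbeddings
from langchain_community.vectorstores import DashVector

# Credentials are read from environment variables (see setup above).
embeddings = DashScopeEmbeddings()
vectorstore = DashVector.from_texts(
    ["DashVector is a vector database", "LangChain integrates many vector stores"],
    embeddings,
)
docs = vectorstore.similarity_search("What is DashVector?")
```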
langchain_md_files/integrations/providers/datadog.mdx
ADDED
@@ -0,0 +1,88 @@

# Datadog Tracing

>[ddtrace](https://github.com/DataDog/dd-trace-py) is a Datadog application performance monitoring (APM) library which provides an integration to monitor your LangChain application.

Key features of the ddtrace integration for LangChain:
- Traces: Capture LangChain requests, parameters, prompt-completions, and help visualize LangChain operations.
- Metrics: Capture LangChain request latency, errors, and token/cost usage (for OpenAI LLMs and chat models).
- Logs: Store prompt completion data for each LangChain operation.
- Dashboard: Combine metrics, logs, and trace data into a single plane to monitor LangChain requests.
- Monitors: Provide alerts in response to spikes in LangChain request latency or error rate.

Note: The ddtrace LangChain integration currently provides tracing for LLMs, chat models, Text Embedding Models, Chains, and Vectorstores.

## Installation and Setup

1. Enable APM and StatsD in your Datadog Agent, along with a Datadog API key. For example, in Docker:

```
docker run -d --cgroupns host \
    --pid host \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    -v /proc/:/host/proc/:ro \
    -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
    -e DD_API_KEY=<DATADOG_API_KEY> \
    -p 127.0.0.1:8126:8126/tcp \
    -p 127.0.0.1:8125:8125/udp \
    -e DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true \
    -e DD_APM_ENABLED=true \
    gcr.io/datadoghq/agent:latest
```

2. Install the Datadog APM Python library.

```
pip install "ddtrace>=1.17"
```

3. The LangChain integration can be enabled automatically when you prefix your LangChain Python application command with `ddtrace-run`:

```
DD_SERVICE="my-service" DD_ENV="staging" DD_API_KEY=<DATADOG_API_KEY> ddtrace-run python <your-app>.py
```

**Note**: If the Agent is using a non-default hostname or port, be sure to also set `DD_AGENT_HOST`, `DD_TRACE_AGENT_PORT`, or `DD_DOGSTATSD_PORT`.

Additionally, the LangChain integration can be enabled programmatically by adding `patch_all()` or `patch(langchain=True)` before the first import of `langchain` in your application.

Note that using `ddtrace-run` or `patch_all()` will also enable the `requests` and `aiohttp` integrations which trace HTTP requests to LLM providers, as well as the `openai` integration which traces requests to the OpenAI library.

```python
from ddtrace import config, patch

# Note: be sure to configure the integration before calling ``patch()``!
# e.g. config.langchain["logs_enabled"] = True

patch(langchain=True)

# to trace synchronous HTTP requests
# patch(langchain=True, requests=True)

# to trace asynchronous HTTP requests (to the OpenAI library)
# patch(langchain=True, aiohttp=True)

# to include underlying OpenAI spans from the OpenAI integration
# patch(langchain=True, openai=True)
```

See the [APM Python library documentation](https://ddtrace.readthedocs.io/en/stable/installation_quickstart.html) for more advanced usage.


## Configuration

See the [APM Python library documentation](https://ddtrace.readthedocs.io/en/stable/integrations.html#langchain) for all the available configuration options.


### Log Prompt & Completion Sampling

To enable log prompt and completion sampling, set the `DD_LANGCHAIN_LOGS_ENABLED=1` environment variable. By default, 10% of traced requests will emit logs containing the prompts and completions.

To adjust the log sample rate, see the [APM library documentation](https://ddtrace.readthedocs.io/en/stable/integrations.html#langchain).

**Note**: Logs submission requires `DD_API_KEY` to be specified when running `ddtrace-run`.


## Troubleshooting

Need help? Create an issue on [ddtrace](https://github.com/DataDog/dd-trace-py) or contact [Datadog support](https://docs.datadoghq.com/help/).
langchain_md_files/integrations/providers/datadog_logs.mdx
ADDED
@@ -0,0 +1,19 @@

# Datadog Logs

>[Datadog](https://www.datadoghq.com/) is a monitoring and analytics platform for cloud-scale applications.

## Installation and Setup

```bash
pip install datadog_api_client
```

We must initialize the loader with the Datadog API key and APP key, and we need to set up the query to extract the desired logs.

## Document Loader

See a [usage example](/docs/integrations/document_loaders/datadog_logs).

```python
from langchain_community.document_loaders import DatadogLogsLoader
```
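Initializing the loader with keys and a query, as described above, might look like this (the keys, query, and millisecond-epoch time bounds are placeholders):

```python
from langchain_community.document_loaders import DatadogLogsLoader

loader = DatadogLogsLoader(
    query="service:agent status:error",  # placeholder log query
    api_key="<DD_API_KEY>",              # placeholder
    app_key="<DD_APP_KEY>",              # placeholder
    from_time=1688732708951,             # start, epoch milliseconds
    to_time=1688736308951,               # end, epoch milliseconds
)
documents = loader.load()
```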
langchain_md_files/integrations/providers/dataforseo.mdx
ADDED
@@ -0,0 +1,52 @@

# DataForSEO

>[DataForSeo](https://dataforseo.com/) provides comprehensive SEO and digital marketing data solutions via API.

This page provides instructions on how to use the DataForSEO search APIs within LangChain.

## Installation and Setup

Get a [DataForSEO API Access login and password](https://app.dataforseo.com/register), and set them as environment variables
(`DATAFORSEO_LOGIN` and `DATAFORSEO_PASSWORD` respectively).

```python
import os

os.environ["DATAFORSEO_LOGIN"] = "your_login"
os.environ["DATAFORSEO_PASSWORD"] = "your_password"
```


## Utility

The `DataForSEO` utility wraps the API. To import this utility, use:

```python
from langchain_community.utilities.dataforseo_api_search import DataForSeoAPIWrapper
```

For a detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/dataforseo).

## Tool

You can also load this wrapper as a Tool to use with an Agent:

```python
from langchain.agents import load_tools
tools = load_tools(["dataforseo-api-search"])
```

This will load the following tools:

```python
from langchain_community.tools import DataForSeoAPISearchRun
from langchain_community.tools import DataForSeoAPISearchResults
```

## Example usage

```python
dataforseo = DataForSeoAPIWrapper(api_login="your_login", api_password="your_password")
result = dataforseo.run("Bill Gates")
print(result)
```
langchain_md_files/integrations/providers/dataherald.mdx
ADDED
@@ -0,0 +1,64 @@

# Dataherald

>[Dataherald](https://www.dataherald.com) is a natural-language-to-SQL tool.

This page covers how to use the `Dataherald API` within LangChain.

## Installation and Setup

- Install requirements with
```bash
pip install dataherald
```
- Go to Dataherald and sign up [here](https://www.dataherald.com)
- Create an app and get your `API KEY`
- Set your `API KEY` as an environment variable `DATAHERALD_API_KEY`


## Wrappers

### Utility

There exists a DataheraldAPIWrapper utility which wraps this API. To import this utility:

```python
from langchain_community.utilities.dataherald import DataheraldAPIWrapper
```

For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/dataherald).

### Tool

You can use the tool in an agent like this:

```python
from langchain_community.utilities.dataherald import DataheraldAPIWrapper
from langchain_community.tools.dataherald.tool import DataheraldTextToSQL
from langchain_openai import ChatOpenAI
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent

api_wrapper = DataheraldAPIWrapper(db_connection_id="<db_connection_id>")
tool = DataheraldTextToSQL(api_wrapper=api_wrapper)
tools = [tool]  # the agent expects a list of tools
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
prompt = hub.pull("hwchase17/react")
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "Return the sql for this question: How many employees are in the company?"})
```

Output
```shell
> Entering new AgentExecutor chain...
I need to use a tool that can convert this question into SQL.
Action: dataherald
Action Input: How many employees are in the company?Answer: SELECT
    COUNT(*) FROM employeesI now know the final answer
Final Answer: SELECT
    COUNT(*)
FROM
    employees

> Finished chain.
{'input': 'Return the sql for this question: How many employees are in the company?', 'output': "SELECT \n    COUNT(*)\nFROM \n    employees"}
```

For more information on tools, see [this page](/docs/how_to/tools_builtin).
langchain_md_files/integrations/providers/dedoc.mdx
ADDED
@@ -0,0 +1,56 @@

# Dedoc

>[Dedoc](https://dedoc.readthedocs.io) is an [open-source](https://github.com/ispras/dedoc)
library/service that extracts texts, tables, attached files and document structure
(e.g., titles, list items, etc.) from files of various formats.

`Dedoc` supports `DOCX`, `XLSX`, `PPTX`, `EML`, `HTML`, `PDF`, images and more.
The full list of supported formats can be found [here](https://dedoc.readthedocs.io/en/latest/#id1).

## Installation and Setup

### Dedoc library

You can install `Dedoc` using `pip`.
In this case, you will need to install dependencies;
please go [here](https://dedoc.readthedocs.io/en/latest/getting_started/installation.html)
to get more information.

```bash
pip install dedoc
```

### Dedoc API

If you are going to use the `Dedoc` API, you don't need to install the `dedoc` library.
In this case, you should run the `Dedoc` service, e.g. as a `Docker` container (please see
[the documentation](https://dedoc.readthedocs.io/en/latest/getting_started/installation.html#install-and-run-dedoc-using-docker)
for more details):

```bash
docker pull dedocproject/dedoc
docker run -p 1231:1231 dedocproject/dedoc
```

## Document Loader

* For handling files of any formats (supported by `Dedoc`), you can use `DedocFileLoader`:

```python
from langchain_community.document_loaders import DedocFileLoader
```

* For handling PDF files (with or without a textual layer), you can use `DedocPDFLoader`:

```python
from langchain_community.document_loaders import DedocPDFLoader
```

* For handling files of any formats without library installation,
you can use the `Dedoc API` with `DedocAPIFileLoader`:

```python
from langchain_community.document_loaders import DedocAPIFileLoader
```

Please see a [usage example](/docs/integrations/document_loaders/dedoc) for more details.
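A minimal sketch of the file loader (the file name is a placeholder; `split="page"` yields one document per page, with `"document"` and `"line"` among the other options documented for the loader):

```python
from langchain_community.document_loaders import DedocFileLoader

# "example.pdf" is a placeholder path to any Dedoc-supported file.
loader = DedocFileLoader("example.pdf", split="page")
docs = loader.load()
```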
langchain_md_files/integrations/providers/deepinfra.mdx
ADDED
@@ -0,0 +1,53 @@

# DeepInfra

>[DeepInfra](https://deepinfra.com/docs) allows us to run the
> [latest machine learning models](https://deepinfra.com/models) with ease.
> DeepInfra takes care of all the heavy lifting related to running, scaling and monitoring
> the models, so users can focus on their application and integrate the models with simple REST API calls.

>DeepInfra provides [examples](https://deepinfra.com/docs/advanced/langchain) of integration with LangChain.

This page covers how to use the `DeepInfra` ecosystem within `LangChain`.
It is broken into two parts: installation and setup, and then references to specific DeepInfra wrappers.

## Installation and Setup

- Get your DeepInfra API key from [deepinfra.com](https://deepinfra.com/).
- Set it as an environment variable (`DEEPINFRA_API_TOKEN`).

## Available Models

DeepInfra provides a range of Open Source LLMs ready for deployment.

You can see supported models for
[text-generation](https://deepinfra.com/models?type=text-generation) and
[embeddings](https://deepinfra.com/models?type=embeddings).

You can view a [list of request and response parameters](https://deepinfra.com/meta-llama/Llama-2-70b-chat-hf/api).

Chat models [follow the OpenAI API](https://deepinfra.com/meta-llama/Llama-2-70b-chat-hf/api?example=openai-http).


## LLM

See a [usage example](/docs/integrations/llms/deepinfra).

```python
from langchain_community.llms import DeepInfra
```

## Embeddings

See a [usage example](/docs/integrations/text_embedding/deepinfra).

```python
from langchain_community.embeddings import DeepInfraEmbeddings
```

## Chat Models

See a [usage example](/docs/integrations/chat/deepinfra).

```python
from langchain_community.chat_models import ChatDeepInfra
```
langchain_md_files/integrations/providers/deepsparse.mdx
ADDED
@@ -0,0 +1,34 @@
# DeepSparse

This page covers how to use the [DeepSparse](https://github.com/neuralmagic/deepsparse) inference runtime within LangChain.
It is broken into two parts: installation and setup, and then examples of DeepSparse usage.

## Installation and Setup

- Install the Python package with `pip install deepsparse`
- Choose a [SparseZoo model](https://sparsezoo.neuralmagic.com/?useCase=text_generation) or export a supported model to ONNX [using Optimum](https://github.com/neuralmagic/notebooks/blob/main/notebooks/opt-text-generation-deepsparse-quickstart/OPT_Text_Generation_DeepSparse_Quickstart.ipynb)

## LLMs

There exists a DeepSparse LLM wrapper, which you can access with:

```python
from langchain_community.llms import DeepSparse
```

It provides a unified interface for all models:

```python
llm = DeepSparse(model='zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none')

print(llm.invoke('def fib():'))
```

Additional parameters can be passed using the `config` parameter:

```python
config = {'max_generated_tokens': 256}

llm = DeepSparse(model='zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none', config=config)
```
langchain_md_files/integrations/providers/diffbot.mdx
ADDED
@@ -0,0 +1,29 @@
# Diffbot

>[Diffbot](https://docs.diffbot.com/docs) is a suite of ML-based products that make it easy to structure and integrate web data.

## Installation and Setup

[Get a free Diffbot API token](https://app.diffbot.com/get-started/) and [follow these instructions](https://docs.diffbot.com/reference/authentication) to authenticate your requests.

## Document Loader

Diffbot's [Extract API](https://docs.diffbot.com/reference/extract-introduction) is a service that structures and normalizes data from web pages.

Unlike traditional web scraping tools, `Diffbot Extract` doesn't require any rules to read the content on a page. It uses a computer vision model to classify a page into one of 20 possible types, and then transforms raw HTML markup into JSON. The resulting structured JSON follows a consistent [type-based ontology](https://docs.diffbot.com/docs/ontology), which makes it easy to extract data from multiple different web sources with the same schema.

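The Extract API itself is a plain authenticated HTTP endpoint. A minimal sketch of building a request URL for the `v3/article` endpoint (the endpoint and parameter names follow Diffbot's public docs; the token value is a placeholder, and nothing is actually fetched here):

```python
from urllib.parse import urlencode

# Build a Diffbot Extract request URL for an article page.
# Replace the placeholder token with your real Diffbot API token.
token = "your-diffbot-token"
page = "https://www.example.com/article"

query = urlencode({"token": token, "url": page})
request_url = f"https://api.diffbot.com/v3/article?{query}"
print(request_url)
```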
See a [usage example](/docs/integrations/document_loaders/diffbot).

```python
from langchain_community.document_loaders import DiffbotLoader
```

## Graphs

Diffbot's [Natural Language Processing API](https://www.diffbot.com/products/natural-language/) allows for the extraction of entities, relationships, and semantic meaning from unstructured text data.

See a [usage example](/docs/integrations/graphs/diffbot).

```python
from langchain_experimental.graph_transformers.diffbot import DiffbotGraphTransformer
```
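Conceptually, a graph transformer turns free text into nodes (entities) and relationships between them. The toy shape below is purely illustrative of that idea; the field names are not Diffbot's actual schema:

```python
# Illustrative only: the kind of entity/relationship structure a
# graph transformer derives from a sentence (not Diffbot's schema).
text = "Marie Curie won the Nobel Prize."

nodes = [
    {"id": "Marie Curie", "type": "Person"},
    {"id": "Nobel Prize", "type": "Award"},
]
relationships = [
    {"source": "Marie Curie", "target": "Nobel Prize", "type": "WON"},
]

print(f"{len(nodes)} nodes, {len(relationships)} relationship(s)")
```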