---
title: 🤖 Large language models (LLMs)
---

## Overview

Embedchain comes with built-in support for various popular large language models. We handle the complexity of integrating these models for you, allowing you to easily customize your language model interactions through a user-friendly interface.

<CardGroup cols={4}>
  <Card title="OpenAI" href="#openai"></Card>
  <Card title="Google AI" href="#google-ai"></Card>
  <Card title="Azure OpenAI" href="#azure-openai"></Card>
  <Card title="Anthropic" href="#anthropic"></Card>
  <Card title="Cohere" href="#cohere"></Card>
  <Card title="Together" href="#together"></Card>
  <Card title="Ollama" href="#ollama"></Card>
  <Card title="vLLM" href="#vllm"></Card>
  <Card title="GPT4All" href="#gpt4all"></Card>
  <Card title="JinaChat" href="#jinachat"></Card>
  <Card title="Hugging Face" href="#hugging-face"></Card>
  <Card title="Llama2" href="#llama2"></Card>
  <Card title="Vertex AI" href="#vertex-ai"></Card>
  <Card title="Mistral AI" href="#mistral-ai"></Card>
  <Card title="AWS Bedrock" href="#aws-bedrock"></Card>
  <Card title="Groq" href="#groq"></Card>
</CardGroup>

## OpenAI

To use OpenAI LLM models, you have to set the `OPENAI_API_KEY` environment variable. You can obtain the OpenAI API key from the [OpenAI Platform](https://platform.openai.com/account/api-keys).

Once you have obtained the key, you can use it like this:

```python
import os
from embedchain import App

os.environ['OPENAI_API_KEY'] = 'xxx'

app = App()
app.add("https://en.wikipedia.org/wiki/OpenAI")
app.query("What is OpenAI?")
```

If you are looking to configure the different parameters of the LLM, you can do so by loading the app using a [yaml config](https://github.com/embedchain/embedchain/blob/main/configs/chroma.yaml) file.

<CodeGroup>

```python main.py
import os
from embedchain import App

os.environ['OPENAI_API_KEY'] = 'xxx'

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: openai
  config:
    model: 'gpt-3.5-turbo'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```

</CodeGroup>

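Once loaded, the app is used the same way as before. If you set `stream: true` in the config, `query` returns a generator instead of a string (the same pattern the Google AI example below uses):

```python
app.add("https://en.wikipedia.org/wiki/OpenAI")

response = app.query("What is OpenAI?")
if app.llm.config.stream:  # if stream is enabled, response is a generator
    for chunk in response:
        print(chunk, end="")
else:
    print(response)
```
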
### Function Calling

Embedchain supports OpenAI [Function calling](https://platform.openai.com/docs/guides/function-calling) with a single function. It accepts inputs in accordance with the [LangChain interface](https://python.langchain.com/docs/modules/model_io/chat/function_calling#legacy-args-functions-and-function_call).

<Accordion title="Pydantic Model"> | |
```python | |
from pydantic import BaseModel | |
class multiply(BaseModel): | |
"""Multiply two integers together.""" | |
a: int = Field(..., description="First integer") | |
b: int = Field(..., description="Second integer") | |
``` | |
</Accordion> | |
<Accordion title="Python function"> | |
```python | |
def multiply(a: int, b: int) -> int: | |
"""Multiply two integers together. | |
Args: | |
a: First integer | |
b: Second integer | |
""" | |
return a * b | |
``` | |
</Accordion> | |
<Accordion title="OpenAI tool dictionary"> | |
```python | |
multiply = { | |
"type": "function", | |
"function": { | |
"name": "multiply", | |
"description": "Multiply two integers together.", | |
"parameters": { | |
"type": "object", | |
"properties": { | |
"a": { | |
"description": "First integer", | |
"type": "integer" | |
}, | |
"b": { | |
"description": "Second integer", | |
"type": "integer" | |
} | |
}, | |
"required": [ | |
"a", | |
"b" | |
] | |
} | |
} | |
} | |
``` | |
</Accordion> | |
With any of the previous inputs, the OpenAI LLM can be queried to provide the appropriate arguments for the function.

```python
import os
from embedchain import App
from embedchain.llm.openai import OpenAILlm

os.environ["OPENAI_API_KEY"] = "sk-xxx"

llm = OpenAILlm(tools=multiply)
app = App(llm=llm)

result = app.query("What is the result of 125 multiplied by fifteen?")
```

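If you use the plain-Python form of `multiply`, you can also execute the call yourself once you have the arguments. A minimal sketch, with hypothetical arguments shaped like the ones the model would produce for the query above:

```python
# hypothetical tool-call arguments for "125 multiplied by fifteen"
args = {"a": 125, "b": 15}
print(multiply(**args))  # 1875
```
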
## Google AI

To use Google AI models, you have to set the `GOOGLE_API_KEY` environment variable. You can obtain the Google API key from [Google MakerSuite](https://makersuite.google.com/app/apikey).

<CodeGroup>

```python main.py
import os
from embedchain import App

os.environ["GOOGLE_API_KEY"] = "xxx"

app = App.from_config(config_path="config.yaml")

app.add("https://www.forbes.com/profile/elon-musk")

response = app.query("What is the net worth of Elon Musk?")
if app.llm.config.stream:  # if stream is enabled, response is a generator
    for chunk in response:
        print(chunk)
else:
    print(response)
```

```yaml config.yaml
llm:
  provider: google
  config:
    model: gemini-pro
    max_tokens: 1000
    temperature: 0.5
    top_p: 1
    stream: false

embedder:
  provider: google
  config:
    model: 'models/embedding-001'
    task_type: "retrieval_document"
    title: "Embeddings for Embedchain"
```

</CodeGroup>

## Azure OpenAI

To use Azure OpenAI models, you have to set the Azure OpenAI related environment variables, as shown in the code block below:

<CodeGroup>

```python main.py
import os
from embedchain import App

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://xxx.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "xxx"
os.environ["OPENAI_API_VERSION"] = "xxx"

app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: azure_openai
  config:
    model: gpt-3.5-turbo
    deployment_name: your_llm_deployment_name
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false

embedder:
  provider: azure_openai
  config:
    model: text-embedding-ada-002
    deployment_name: your_embedding_model_deployment_name
```

</CodeGroup>

You can find the list of models and deployment names on the [Azure OpenAI Platform](https://oai.azure.com/portal).

## Anthropic

To use Anthropic's models, please set the `ANTHROPIC_API_KEY`, which you can find on their [Account Settings Page](https://console.anthropic.com/account/keys).

<CodeGroup>

```python main.py
import os
from embedchain import App

os.environ["ANTHROPIC_API_KEY"] = "xxx"

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: anthropic
  config:
    model: 'claude-instant-1'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```

</CodeGroup>

## Cohere

Install related dependencies using the following command:

```bash
pip install --upgrade 'embedchain[cohere]'
```

Set the `COHERE_API_KEY` environment variable, which you can find on their [Account settings page](https://dashboard.cohere.com/api-keys).

Once you have the API key, you are all set to use it with Embedchain.

<CodeGroup>

```python main.py
import os
from embedchain import App

os.environ["COHERE_API_KEY"] = "xxx"

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: cohere
  config:
    model: large
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
```

</CodeGroup>

## Together

Install related dependencies using the following command:

```bash
pip install --upgrade 'embedchain[together]'
```

Set the `TOGETHER_API_KEY` environment variable, which you can find on their [Account settings page](https://api.together.xyz/settings/api-keys).

Once you have the API key, you are all set to use it with Embedchain.

<CodeGroup>

```python main.py
import os
from embedchain import App

os.environ["TOGETHER_API_KEY"] = "xxx"

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: together
  config:
    model: togethercomputer/RedPajama-INCITE-7B-Base
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
```

</CodeGroup>

## Ollama

Set up Ollama by following the instructions at https://github.com/jmorganca/ollama.

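Before loading the app, you can optionally confirm that the local Ollama server is reachable. A minimal sketch, assuming Ollama's default address of `http://localhost:11434`:

```python
from urllib.request import urlopen

# Ollama answers plain GET requests on its default port while the server is running
with urlopen("http://localhost:11434") as resp:
    print(resp.read().decode())  # typically prints "Ollama is running"
```
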
<CodeGroup>

```python main.py
from embedchain import App

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: ollama
  config:
    model: 'llama2'
    temperature: 0.5
    top_p: 1
    stream: true
```

</CodeGroup>

## vLLM

Set up vLLM by following the instructions given in [their docs](https://docs.vllm.ai/en/latest/getting_started/installation.html).

<CodeGroup>

```python main.py
from embedchain import App

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: vllm
  config:
    model: 'meta-llama/Llama-2-70b-hf'
    temperature: 0.5
    top_p: 1
    top_k: 10
    stream: true
    trust_remote_code: true
```

</CodeGroup>

## GPT4All

Install related dependencies using the following command:

```bash
pip install --upgrade 'embedchain[opensource]'
```

GPT4All is a free-to-use, locally running, privacy-aware chatbot that requires no GPU or internet connection. You can use it with Embedchain using the following code:

<CodeGroup>

```python main.py
from embedchain import App

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: gpt4all
  config:
    model: 'orca-mini-3b-gguf2-q4_0.gguf'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false

embedder:
  provider: gpt4all
```

</CodeGroup>

## JinaChat

First, set the `JINACHAT_API_KEY` environment variable, which you can obtain from [their platform](https://chat.jina.ai/api).

Once you have the key, load the app using the config yaml file:

<CodeGroup>

```python main.py
import os
from embedchain import App

os.environ["JINACHAT_API_KEY"] = "xxx"

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: jina
  config:
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```

</CodeGroup>

## Hugging Face

Install related dependencies using the following command:

```bash
pip install --upgrade 'embedchain[huggingface-hub]'
```

First, set the `HUGGINGFACE_ACCESS_TOKEN` environment variable, which you can obtain from [their platform](https://huggingface.co/settings/tokens).

Once you have the token, load the app using the config yaml file:

<CodeGroup>

```python main.py
import os
from embedchain import App

os.environ["HUGGINGFACE_ACCESS_TOKEN"] = "xxx"

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: huggingface
  config:
    model: 'google/flan-t5-xxl'
    temperature: 0.5
    max_tokens: 1000
    top_p: 0.5
    stream: false
```

</CodeGroup>

### Custom Endpoints

You can also use [Hugging Face Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index#-inference-endpoints) to access custom endpoints. First, set the `HUGGINGFACE_ACCESS_TOKEN` as above.

Then, load the app using the config yaml file:

<CodeGroup>

```python main.py
import os
from embedchain import App

os.environ["HUGGINGFACE_ACCESS_TOKEN"] = "xxx"

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: huggingface
  config:
    endpoint: https://api-inference.huggingface.co/models/gpt2 # replace with your personal endpoint
```

</CodeGroup>

If your endpoint requires additional parameters, you can pass them in the `model_kwargs` field:

```yaml
llm:
  provider: huggingface
  config:
    endpoint: <YOUR_ENDPOINT_URL_HERE>
    model_kwargs:
      max_new_tokens: 100
      temperature: 0.5
```

Currently, only the `text-generation` and `text2text-generation` tasks are supported [[ref](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html?highlight=huggingfaceendpoint#)].

See LangChain's [Hugging Face endpoint](https://python.langchain.com/docs/integrations/chat/huggingface#huggingfaceendpoint) documentation for more information.

## Llama2

Llama2 is integrated through [Replicate](https://replicate.com/). Set the `REPLICATE_API_TOKEN` environment variable, which you can obtain from [their platform](https://replicate.com/account/api-tokens).

Once you have the token, load the app using the config yaml file:

<CodeGroup>

```python main.py
import os
from embedchain import App

os.environ["REPLICATE_API_TOKEN"] = "xxx"

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: llama2
  config:
    model: 'a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5'
    temperature: 0.5
    max_tokens: 1000
    top_p: 0.5
    stream: false
```

</CodeGroup>

## Vertex AI

Set up Google Cloud Platform application default credentials by following the instructions on [GCP](https://cloud.google.com/docs/authentication/external/set-up-adc). Once the setup is done, use the following code to create an app using Vertex AI as the provider:

<CodeGroup>

```python main.py
from embedchain import App

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: vertexai
  config:
    model: 'chat-bison'
    temperature: 0.5
    top_p: 0.5
```

</CodeGroup>

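If you want to verify that your application default credentials resolve before creating the app, a quick sanity check (assuming the `google-auth` package, which the Vertex AI SDK depends on, is installed):

```python
import google.auth

# raises DefaultCredentialsError if ADC is not configured
credentials, project_id = google.auth.default()
print(f"Authenticated against project: {project_id}")
```
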
## Mistral AI

Obtain the Mistral AI API key from their [console](https://console.mistral.ai/).

<CodeGroup>

```python main.py
import os
from embedchain import App

os.environ["MISTRAL_API_KEY"] = "xxx"

app = App.from_config(config_path="config.yaml")
app.add("https://www.forbes.com/profile/elon-musk")

response = app.query("what is the net worth of Elon Musk?")
# As of January 16, 2024, Elon Musk's net worth is $225.4 billion.

response = app.chat("which companies does elon own?")
# Elon Musk owns Tesla, SpaceX, Boring Company, Twitter, and X.

response = app.chat("what question did I ask you already?")
# You have asked me several times already which companies Elon Musk owns, specifically Tesla, SpaceX, Boring Company, Twitter, and X.
```

```yaml config.yaml
llm:
  provider: mistralai
  config:
    model: mistral-tiny
    temperature: 0.5
    max_tokens: 1000
    top_p: 1

embedder:
  provider: mistralai
  config:
    model: mistral-embed
```

</CodeGroup>

## AWS Bedrock

### Setup

- Before using the AWS Bedrock LLM, make sure you have the appropriate model access from the [Bedrock Console](https://us-east-1.console.aws.amazon.com/bedrock/home?region=us-east-1#/modelaccess).
- You will also need to authenticate the `boto3` client using a method from the [AWS documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials); the snippet after this list shows a quick way to confirm your credentials resolve.
- You can optionally export an `AWS_REGION`.

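To confirm that `boto3` can find valid credentials, an STS identity call works well; this is a generic `boto3` check rather than anything Bedrock-specific:

```python
import boto3

# raises an error if boto3 cannot resolve credentials from the environment
identity = boto3.client("sts").get_caller_identity()
print(identity["Arn"])
```
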
### Usage

<CodeGroup>

```python main.py
import os
from embedchain import App

os.environ["AWS_ACCESS_KEY_ID"] = "xxx"
os.environ["AWS_SECRET_ACCESS_KEY"] = "xxx"
os.environ["AWS_REGION"] = "us-west-2"

app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: aws_bedrock
  config:
    model: amazon.titan-text-express-v1
    # check notes below for model_kwargs
    model_kwargs:
      temperature: 0.5
      topP: 1
      maxTokenCount: 1000
```

</CodeGroup>

<br />

<Note>
The model arguments are different for each provider. Please refer to the [AWS Bedrock Documentation](https://us-east-1.console.aws.amazon.com/bedrock/home?region=us-east-1#/providers) to find the appropriate arguments for your model.
</Note>

<br />

## Groq

[Groq](https://groq.com/) is the creator of the world's first Language Processing Unit (LPU), providing exceptional speed for AI workloads running on its LPU Inference Engine.

### Usage

To use LLMs from Groq, go to their [platform](https://console.groq.com/keys) and get the API key.

Set the API key as the `GROQ_API_KEY` environment variable or pass it in your app configuration, as shown in the example below.

<CodeGroup>

```python main.py
import os
from embedchain import App

# Set your API key here or pass it as an environment variable
groq_api_key = "gsk_xxxx"

config = {
    "llm": {
        "provider": "groq",
        "config": {
            "model": "mixtral-8x7b-32768",
            "api_key": groq_api_key,
            "stream": True
        }
    }
}

app = App.from_config(config=config)

# Add your data source here
app.add("https://docs.embedchain.ai/sitemap.xml", data_type="sitemap")
app.query("Write a poem about Embedchain")

# In the realm of data, vast and wide,
# Embedchain stands with knowledge as its guide.
# A platform open, for all to try,
# Building bots that can truly fly.

# With REST API, data in reach,
# Deployment a breeze, as easy as a speech.
# Updating data sources, anytime, anyday,
# Embedchain's power, never sway.

# A knowledge base, an assistant so grand,
# Connecting to platforms, near and far.
# Discord, WhatsApp, Slack, and more,
# Embedchain's potential, never a bore.
```

</CodeGroup>

<br />

<Snippet file="missing-llm-tip.mdx" />