yangdx committed
Commit f9e511f · 1 Parent(s): bca7d26

Updated API documentation for LLM bindings and configurations.

Files changed (1)
  1. lightrag/api/README.md +18 -4
lightrag/api/README.md CHANGED
@@ -35,12 +35,18 @@ For example, you have the possibility to use ollama for the embedding and openai
 
 #### For Ollama Server
 - Ollama must be running and accessible
- - Default connection: http://localhost:11434
- - Configure using --ollama-host if running on a different host/port
+ - Requires environment variables to be set or command line arguments to be provided
+ - Environment variables: LLM_BINDING=ollama, LLM_BINDING_HOST, LLM_MODEL
+ - Command line arguments: --llm-binding=ollama, --llm-binding-host, --llm-model
+ - Default connection is http://localhost:11434 if not provided
+
+ > The default MAX_TOKENS (num_ctx) for Ollama is 32768. If your Ollama server is low on GPU memory, set it to a lower value.
 
 #### For OpenAI Alike Server
- - Requires valid OpenAI API credentials set in environment variables
- - LLM_BINDING, LLM_MODEL, LLM_BINDING_API_KEY must be set by command line or in environment variables
+ - Requires environment variables to be set or command line arguments to be provided
+ - Environment variables: LLM_BINDING=openai, LLM_BINDING_HOST, LLM_MODEL, LLM_BINDING_API_KEY
+ - Command line arguments: --llm-binding=openai, --llm-binding-host, --llm-model, --llm-binding-api-key
+ - Default connection is https://api.openai.com/v1 if not provided
 
 #### For Azure OpenAI Server
 Azure OpenAI API can be created using the following commands in Azure CLI (you need to install Azure CLI first from [https://docs.microsoft.com/en-us/cli/azure/install-azure-cli](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli)):
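
To make the added Ollama checklist concrete, here is a minimal launch sketch using the environment-variable form. The model name matches the documented default, but the MAX_TOKENS override assumes it is read from the environment, as the note in the hunk above suggests:

```bash
# Ollama backend, environment-variable form (a sketch; values are illustrative)
LLM_BINDING=ollama \
LLM_BINDING_HOST=http://localhost:11434 \
LLM_MODEL=mistral-nemo:latest \
MAX_TOKENS=16384 \
python lightrag.py  # MAX_TOKENS (num_ctx) lowered here for GPU-constrained Ollama servers
```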
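The same configuration for an OpenAI Alike server, using the command-line form; the model name and API key are placeholders, not values from this commit:

```bash
# OpenAI Alike backend, command-line form (model and key are placeholders)
python lightrag.py \
  --llm-binding=openai \
  --llm-binding-host=https://api.openai.com/v1 \
  --llm-model=gpt-4o-mini \
  --llm-binding-api-key="$OPENAI_API_KEY"
```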
@@ -61,6 +67,13 @@ az cognitiveservices account keys list --name $RESOURCE_NAME -g $RESOURCE_GROUP_
 ```
 The output of the last command will give you the endpoint and the key for the OpenAI API. You can use these values to set the environment variables in the `.env` file.
 
+ ```
+ LLM_BINDING=azure_openai
+ LLM_BINDING_HOST=endpoint_of_azure_ai
+ LLM_MODEL=model_name_of_azure_ai
+ LLM_BINDING_API_KEY=api_key_of_azure_ai
+ ```
+
 ### About Ollama API
 
 We provide an Ollama-compatible interface for LightRAG, aiming to emulate LightRAG as an Ollama chat model. This allows AI chat frontends supporting Ollama, such as Open WebUI, to access LightRAG easily.
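
As a quick smoke test of the Ollama emulation described above, a chat request can be sent with curl using Ollama's standard /api/chat request shape. The port (9621) and the model name are assumptions about the server's defaults, so adjust them to your deployment:

```bash
# Sketch: query LightRAG's Ollama-compatible chat endpoint.
# Port 9621 and the model name "lightrag:latest" are assumptions; adjust as needed.
curl http://localhost:9621/api/chat -d '{
  "model": "lightrag:latest",
  "messages": [{"role": "user", "content": "What are the top themes in these documents?"}],
  "stream": false
}'
```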
@@ -158,6 +171,7 @@ PORT=7000 python lightrag.py
 | --llm-binding | ollama | LLM binding to be used. Supported: lollms, ollama, openai |
 | --llm-binding-host | (dynamic) | LLM server host URL. Defaults based on binding: http://localhost:11434 (ollama), http://localhost:9600 (lollms), https://api.openai.com/v1 (openai) |
 | --llm-model | mistral-nemo:latest | LLM model name |
+ | --llm-binding-api-key | None | API key for OpenAI Alike LLM |
 | --embedding-binding | ollama | Embedding binding to be used. Supported: lollms, ollama, openai |
 | --embedding-binding-host | (dynamic) | Embedding server host URL. Defaults based on binding: http://localhost:11434 (ollama), http://localhost:9600 (lollms), https://api.openai.com/v1 (openai) |
 | --embedding-model | bge-m3:latest | Embedding model name |
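
Finally, since the first hunk notes you can mix bindings (e.g., ollama for the embedding and openai for the LLM), a combined invocation over the options in this table, including the newly documented --llm-binding-api-key flag, might look like the following sketch; the LLM model name and the key are placeholders:

```bash
# Sketch: mixed bindings, combining LLM and embedding options from the table above
python lightrag.py \
  --llm-binding=openai \
  --llm-binding-host=https://api.openai.com/v1 \
  --llm-model=gpt-4o-mini \
  --llm-binding-api-key="$OPENAI_API_KEY" \
  --embedding-binding=ollama \
  --embedding-binding-host=http://localhost:11434 \
  --embedding-model=bge-m3:latest
```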
 