eienmojiki committed
Commit 0e00a90
1 Parent(s): 4a1f385
Update .env
.env CHANGED
@@ -7,49 +7,4 @@ REBUILD=true
 
 GO_TAGS=stablediffusion
 
-LOCALAI_IMAGE_PATH=/tmp/generated/images
-
-## Specify a default upload limit in MB (whisper)
-LOCALAI_UPLOAD_LIMIT=15
-
-## List of external GRPC backends (note on the container image this variable is already set to use extra backends available in extra/)
-# LOCALAI_EXTERNAL_GRPC_BACKENDS=my-backend:127.0.0.1:9000,my-backend2:/usr/bin/backend.py
-
-### Advanced settings ###
-### Those are not really used by LocalAI, but from components in the stack ###
-##
-### Preload libraries
-# LD_PRELOAD=
-
-### Huggingface cache for models
-# HUGGINGFACE_HUB_CACHE=/usr/local/huggingface
-
-### Python backends GRPC max workers
-### Default number of workers for GRPC Python backends.
-### This actually controls wether a backend can process multiple requests or not.
-# PYTHON_GRPC_MAX_WORKERS=1
-
-### Define the number of parallel LLAMA.cpp workers (Defaults to 1)
-# LLAMACPP_PARALLEL=1
-
-### Define a list of GRPC Servers for llama-cpp workers to distribute the load
-# https://github.com/ggerganov/llama.cpp/pull/6829
-# https://github.com/ggerganov/llama.cpp/blob/master/examples/rpc/README.md
-# LLAMACPP_GRPC_SERVERS=""
-
-### Enable to run parallel requests
-# LOCALAI_PARALLEL_REQUESTS=true
-
-### Watchdog settings
-###
-# Enables watchdog to kill backends that are inactive for too much time
-# LOCALAI_WATCHDOG_IDLE=true
-#
-# Time in duration format (e.g. 1h30m) after which a backend is considered idle
-# LOCALAI_WATCHDOG_IDLE_TIMEOUT=5m
-#
-# Enables watchdog to kill backends that are busy for too much time
-# LOCALAI_WATCHDOG_BUSY=true
-#
-# Time in duration format (e.g. 1h30m) after which a backend is considered busy
-# LOCALAI_WATCHDOG_BUSY_TIMEOUT=5m
+LOCALAI_IMAGE_PATH=/tmp/generated/images
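
Note: this commit strips the optional upload-limit, external gRPC backend, LD_PRELOAD, Hugging Face cache, worker-count, parallel-request, and watchdog entries from .env, keeping only LOCALAI_IMAGE_PATH alongside the existing REBUILD and GO_TAGS lines. If any of the removed options are still wanted, they can be passed as ordinary environment variables when starting the container rather than through .env. The sketch below is a hypothetical Docker invocation, not part of this commit; the image tag, port mapping, and chosen values are assumptions.

# Hypothetical: re-enable a few of the removed options at container start
docker run -p 8080:8080 \
  -e LOCALAI_UPLOAD_LIMIT=15 \
  -e LOCALAI_WATCHDOG_IDLE=true \
  -e LOCALAI_WATCHDOG_IDLE_TIMEOUT=5m \
  localai/localai:latest

The watchdog timeout values use a duration format such as 1h30m, as noted in the removed comments.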