Commit 3db3a54
Parent(s): 82e30a4
improve readme
README.md CHANGED
@@ -19,12 +19,16 @@ it requires various components to run for the frontend, backend, LLM, SDXL etc.
 
 If you try to duplicate the project, you will see it requires some variables:
 
-- `
-- `HF_API_TOKEN`:
-- `
-- `
-
-
+- `LLM_ENGINE`: can be either "INFERENCE_API" or "INFERENCE_ENDPOINT"
+- `HF_API_TOKEN`: necessary if you decide to use an Inference API model or a custom inference endpoint
+- `HF_INFERENCE_ENDPOINT_URL`: necessary if you decide to use a custom inference endpoint
+- `RENDERING_ENGINE`: can only be "VIDEOCHAIN" for now, unless you code your own custom solution
+- `VIDEOCHAIN_API_URL`: URL of the VideoChain API server
+- `VIDEOCHAIN_API_TOKEN`: secret token to access the VideoChain API server
+
+Please read the `.env` default config file for more information.
+To customise a variable locally, you should create a `.env.local`
+(do not commit this file as it will contain your secrets).
 
 -> If you intend to run it with local, cloud-hosted and/or proprietary models **you are going to need to code 👨‍💻**.
 
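Taken together, the added lines describe one complete configuration. As a quick reference, a filled-in `.env.local` might look like the sketch below; all values are placeholders, and the endpoint URL and VideoChain values in particular are illustrative assumptions, not defaults from this repo:

```bash
# Sketch of a .env.local - every value is a placeholder to replace with your own.

# Text-generation backend: "INFERENCE_API" or "INFERENCE_ENDPOINT".
LLM_ENGINE="INFERENCE_API"

# Required for the Inference API or a custom inference endpoint.
HF_API_TOKEN="hf_your_token_here"

# Only required when LLM_ENGINE="INFERENCE_ENDPOINT".
# HF_INFERENCE_ENDPOINT_URL="https://your-endpoint.endpoints.huggingface.cloud"

# Only "VIDEOCHAIN" is supported unless you code a custom solution.
RENDERING_ENGINE="VIDEOCHAIN"

# Address of, and secret token for, the VideoChain API server (illustrative values).
VIDEOCHAIN_API_URL="https://your-videochain-server.example.com"
VIDEOCHAIN_API_TOKEN="your_videochain_secret"
```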
@@ -41,6 +45,8 @@ This is a new option added recently, where you can use one of the models from th
 To activate it, create a `.env.local` configuration file:
 
 ```bash
+LLM_ENGINE="INFERENCE_API"
+
 HF_API_TOKEN="Your Hugging Face token"
 
 # "codellama/CodeLlama-7b-hf" is used by default, but you can change this
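The hunk above is cut off after the comment line, but the header of the next hunk shows the line that follows it in the README (`HF_INFERENCE_API_MODEL="codellama/CodeLlama-7b-hf"`). Assembled from those pieces, the complete Inference API block presumably reads:

```bash
LLM_ENGINE="INFERENCE_API"

HF_API_TOKEN="Your Hugging Face token"

# "codellama/CodeLlama-7b-hf" is used by default, but you can change this
HF_INFERENCE_API_MODEL="codellama/CodeLlama-7b-hf"
```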
@@ -53,7 +59,10 @@ HF_INFERENCE_API_MODEL="codellama/CodeLlama-7b-hf"
 If you would like to run the AI Comic Factory on a private LLM running on the Hugging Face Inference Endpoint service, create a `.env.local` configuration file:
 
 ```bash
+LLM_ENGINE="INFERENCE_ENDPOINT"
+
 HF_API_TOKEN="Your Hugging Face token"
+
 HF_INFERENCE_ENDPOINT_URL="path to your inference endpoint url"
 ```
 