jbilcke-hf (HF staff) committed 98cdbd8 (1 parent: e8a29a1)

Update README.md

Files changed (1): README.md (+63 −1)
pinned: true
app_port: 3000
---

# AI Comic Factory

## Running the project at home

First, I would like to highlight that everything is open source (see [here](https://huggingface.co/spaces/jbilcke-hf/ai-comic-factory/tree/main), [here](https://huggingface.co/spaces/jbilcke-hf/VideoChain-API/tree/main), [here](https://huggingface.co/spaces/hysts/SD-XL/tree/main), here).

However, the project isn't a monolithic Space that can be duplicated and run immediately: it requires various components to run for the frontend, backend, LLM, SDXL, etc.

If you try to duplicate the project, you will see it requires some environment variables:

- `HF_INFERENCE_ENDPOINT_URL`: the endpoint used to call the LLM
- `HF_API_TOKEN`: the Hugging Face token used to call the inference endpoint (if you intend to use an LLM hosted on Hugging Face)
- `RENDERING_ENGINE_API`: the API that generates the images
- `VC_SECRET_ACCESS_TOKEN`: the token used to call the rendering engine API (not used yet, but it will be, because [💸](https://en.wikipedia.org/wiki/No_such_thing_as_a_free_lunch))

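For example, the settings of a duplicated Space would contain something like the following (all values below are placeholders, not real endpoints or tokens):

```env
HF_INFERENCE_ENDPOINT_URL="https://your-endpoint.endpoints.huggingface.cloud"
HF_API_TOKEN="hf_xxxxxxxxxxxxxxxxxxxx"
RENDERING_ENGINE_API="https://your-rendering-api.hf.space"
VC_SECRET_ACCESS_TOKEN="your-secret-token"
```
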
This is the architecture of the current production AI Comic Factory.

→ If you intend to run it with local, cloud-hosted, and/or proprietary models, **you are going to need to code 👨‍💻**.

## The LLM API (Large Language Model)

Currently, the AI Comic Factory uses [Llama-2 70b](https://huggingface.co/blog/llama2) through an [Inference Endpoint](https://huggingface.co/docs/inference-endpoints/index).

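For reference, a text-generation call to an Inference Endpoint typically looks like the sketch below. `buildRequest` and `generateStory` are hypothetical helper names (not the app's actual code), and the generation parameters are illustrative defaults:

```typescript
// Minimal sketch of calling a Hugging Face text-generation Inference Endpoint.
// The URL and token come from the environment variables described above.

interface TextGenerationRequest {
  inputs: string;
  parameters: { max_new_tokens: number; temperature: number };
}

// Hypothetical helper: shape the JSON payload expected by the endpoint
function buildRequest(prompt: string): TextGenerationRequest {
  return {
    inputs: prompt,
    parameters: { max_new_tokens: 512, temperature: 0.8 },
  };
}

// Hypothetical helper: POST the prompt and extract the generated text
async function generateStory(prompt: string): Promise<string> {
  const response = await fetch(process.env.HF_INFERENCE_ENDPOINT_URL!, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.HF_API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildRequest(prompt)),
  });
  const data = await response.json();
  return data[0].generated_text;
}
```
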
You have two options:

### Option 1: Fork and modify the code to use another LLM

If you fork the AI Comic Factory, you will be able to use another API and model, such as a locally running Llama 2 7B.

To run the LLM locally, you can use [TGI](https://github.com/huggingface/text-generation-inference) (please read [this post](https://github.com/huggingface/text-generation-inference/issues/726) for more information about licensing).

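As a rough sketch (the model ID, port, and volume below are illustrative; check the TGI README for the exact flags supported by your version), a local TGI server can be started with Docker:

```bash
model=meta-llama/Llama-2-7b-chat-hf
volume=$PWD/data

docker run --gpus all --shm-size 1g -p 8080:80 \
  -v $volume:/data \
  -e HUGGING_FACE_HUB_TOKEN=$HF_API_TOKEN \
  ghcr.io/huggingface/text-generation-inference \
  --model-id $model
```

You would then point `HF_INFERENCE_ENDPOINT_URL` at `http://localhost:8080` (and adapt the code, since a smaller chat model may expect a different prompt format).
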
### Option 2: Fork and modify the code to use human content instead

Another option could be to disable the LLM completely and replace it with a human-generated story instead (by returning mock or static data).

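A sketch of the mock-data approach: instead of calling an LLM, return a static, human-written list of panel descriptions. The `Panel` shape and the `generateStory` name are hypothetical, not the app's actual types:

```typescript
// Each panel pairs a caption (shown to the reader) with an image prompt
// (passed to the rendering engine).
interface Panel {
  caption: string;
  prompt: string;
}

// Replace the LLM call with static, human-authored data
function generateStory(): Panel[] {
  return [
    { caption: "Our hero wakes up late.", prompt: "bedroom, morning light, comic style" },
    { caption: "A mysterious letter arrives.", prompt: "close-up of an envelope, dramatic shading" },
    { caption: "The adventure begins.", prompt: "city street, hero running, dynamic angle" },
    { caption: "To be continued...", prompt: "cliffhanger scene, dusk, wide shot" },
  ];
}
```
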
### Notes

It is possible that I will modify the AI Comic Factory to make this easier in the future (e.g. by adding support for OpenAI or Replicate).

## The Rendering API

This API is used to generate the panel images. It is an API I created for my various projects at Hugging Face.

I haven't written documentation for it yet, but basically it is "just a wrapper ™" around other existing APIs:

- The [hysts/SD-XL](https://huggingface.co/spaces/hysts/SD-XL?duplicate=true) Space by [@hysts](https://huggingface.co/hysts)
- And other APIs for making videos, adding audio, etc., but you won't need them for the AI Comic Factory

### Option 1: Deploy VideoChain yourself

You will have to [clone](https://huggingface.co/spaces/jbilcke-hf/VideoChain-API?duplicate=true) the [source code](https://huggingface.co/spaces/jbilcke-hf/VideoChain-API/tree/main).

Unfortunately, I haven't had the time to write the documentation for VideoChain yet.
(When I do, I will update this document to point to VideoChain's README.)

### Option 2: Use another SDXL API

If you fork the project, you will be able to modify the code to use the Stable Diffusion technology of your choice (local, open source, your custom HF Space, etc.).

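For instance, one alternative would be the public Hugging Face Inference API running SDXL. The model ID below is real, but the function names and parameters are illustrative, not the app's actual code:

```typescript
// Sketch: render a panel via the public HF Inference API instead of VideoChain.
const SDXL_MODEL = "stabilityai/stable-diffusion-xl-base-1.0";

// Hypothetical helper: build the request URL and JSON body for one prompt
function sdxlRequest(prompt: string): { url: string; body: string } {
  return {
    url: `https://api-inference.huggingface.co/models/${SDXL_MODEL}`,
    body: JSON.stringify({ inputs: prompt }),
  };
}

// Hypothetical helper: the Inference API returns the image as binary data
async function renderPanel(prompt: string): Promise<ArrayBuffer> {
  const { url, body } = sdxlRequest(prompt);
  const response = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.HF_API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body,
  });
  return response.arrayBuffer();
}
```
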
### Notes

It is possible that I will modify the AI Comic Factory to make this easier in the future (e.g. by adding support for Replicate).