Anatole RAIMBERT committed on
Commit 9967d2a · 1 Parent(s): 314e858

[CONF] changed dockerfile to enable nvidia gpu usage

Files changed (3)
  1. Dockerfile +10 -9
  2. README.md +29 -1
  3. docker-compose.yml +9 -9
Dockerfile CHANGED

@@ -1,4 +1,4 @@
-FROM ubuntu:22.04
+FROM nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04
 
 ENV DEBIAN_FRONTEND=noninteractive \
     TZ=Europe/Paris
@@ -48,28 +48,29 @@ ADD ./workflows ./user/default/workflows
 # Checkpoints
 RUN echo "Downloading checkpoints..."
 # Quiet because it's a big file (2.13 GB)
-# RUN wget -c https://huggingface.co/genai-archive/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.fp16.safetensors -P ./models/checkpoints/ --quiet
+RUN wget -c https://huggingface.co/genai-archive/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.fp16.safetensors -P ./models/checkpoints/ --quiet
 
 # VAE
 RUN echo "Downloading VAE..."
-# RUN wget -c https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/ae.safetensors -P ./models/vae/
+RUN wget -c https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/ae.safetensors -P ./models/vae/
 
 # Loras
 RUN echo "Downloading Loras..."
-# RUN wget -c https://huggingface.co/2ndChanceParis/industrial-lora/resolve/main/industrial_lora-07.safetensors -P ./models/loras
+RUN wget -c https://huggingface.co/2ndChanceParis/industrial-lora/resolve/main/industrial_lora-07.safetensors -P ./models/loras
 
 # CLIP model
 RUN echo "Downloading CLIP models..."
 # Quiet because it's a big file (9.79 GB)
-# RUN wget -c https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp16.safetensors -P ./models/clip/ --quiet
-# RUN wget -c https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors -P ./models/clip/
+RUN wget -c https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp16.safetensors -P ./models/clip/ --quiet
+RUN wget -c https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors -P ./models/clip/
 
 # Unet model
-# RUN echo "Downloading Unet models..."
+RUN echo "Downloading Unet models..."
 # Quiet because it's a big file (23.8 GB)
-# RUN wget -c https://huggingface.co/camenduru/FLUX.1-dev/resolve/fc63f3204a12362f98c04bc4c981a06eb9123eee/flux1-canny-dev.safetensors -P ./models/unet/ --quiet
+RUN wget -c https://huggingface.co/camenduru/FLUX.1-dev/resolve/fc63f3204a12362f98c04bc4c981a06eb9123eee/flux1-canny-dev.safetensors -P ./models/unet/ --quiet
 
 # --- Optional models ---
+# Not used at the moment
 
 # T2I-Adapter
 # RUN echo "Downloading T2I-Adapters..."
@@ -120,4 +121,4 @@ RUN cd custom_nodes && git clone https://github.com/ltdrdata/ComfyUI-Manager.git
 
 RUN echo "Done"
 
-CMD ["python", "main.py", "--cpu", "--listen", "0.0.0.0", "--port", "7860", "--output-directory", "${USE_PERSISTENT_DATA:+/data/}"]
+CMD ["python", "main.py", "--listen", "0.0.0.0", "--port", "7860", "--output-directory", "${USE_PERSISTENT_DATA:+/data/}"]
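A side note on the CMD line kept by this commit: it uses the `${VAR:+word}` form of POSIX parameter expansion, which yields `word` only when `VAR` is set and non-empty (and note that exec-form `CMD`, the JSON-array form used here, does not invoke a shell, so the string would normally be passed through literally unless expanded elsewhere). A minimal sketch of the expansion itself:

```shell
# ${VAR:+word} expands to "word" only when VAR is set and non-empty,
# mirroring the --output-directory argument in the CMD line above.
unset USE_PERSISTENT_DATA
printf 'unset: [%s]\n' "${USE_PERSISTENT_DATA:+/data/}"   # prints: unset: []
USE_PERSISTENT_DATA=1
printf 'set:   [%s]\n' "${USE_PERSISTENT_DATA:+/data/}"   # prints: set:   [/data/]
```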
README.md CHANGED

@@ -8,4 +8,32 @@ pinned: false
 short_description: ComfyUI API - Docker
 ---
 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+# Comfy UI Api
+
+This is a simple example of how to run ComfyUI on an NVIDIA CUDA machine using Docker.
+
+## Requirements
+
+- docker & docker-compose
+- NVIDIA GPU
+
+## Installation
+
+### Local
+
+1. Clone the repository
+2. Run `docker compose up` in the root directory of the repository
+3. Open a browser and navigate to:
+   - ComfyUI: `http://localhost:7860`
+
+### Hugging Face Spaces
+
+When pushing the repository to Hugging Face Spaces, the container is automatically built and started from the
+root [Dockerfile](Dockerfile).
+Hugging Face Spaces exposes port 7860, on which ComfyUI runs after build and startup.
+
+## Usage
+
+### Workflows
+
+Workflows stored in [workflows](workflows) are copied to the container.
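The Requirements list in the new README can be sanity-checked before running `docker compose up`. This is a hedged sketch, not part of the repository; `check_cmd` is a hypothetical helper name:

```shell
# Hypothetical preflight helper: reports whether a command from the
# README's Requirements list is available on the PATH.
check_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: missing"
  fi
}

check_cmd docker       # Docker Engine
check_cmd nvidia-smi   # NVIDIA driver visible on the host
docker compose version >/dev/null 2>&1 \
  && echo "docker compose: found" \
  || echo "docker compose: missing"
```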
docker-compose.yml CHANGED

@@ -3,14 +3,14 @@ version: "3"
 services:
   comfyui:
     build: .
-    # environment:
-    #   - NVIDIA_VISIBLE_DEVICES=all
+    environment:
+      - NVIDIA_VISIBLE_DEVICES=all
     ports:
       - "7860:7860"
-    # deploy:
-    #   resources:
-    #     reservations:
-    #       devices:
-    #         - driver: nvidia
-    #           count: 1
-    #           capabilities: [gpu]
+    deploy:
+      resources:
+        reservations:
+          devices:
+            - driver: nvidia
+              count: 1
+              capabilities: [ gpu ]
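The `deploy.resources.reservations.devices` block uncommented above reserves one NVIDIA GPU for the service. On a multi-GPU host, the Compose specification also allows pinning a specific GPU by ID; a hedged variant (not part of this commit) swaps `count` for `device_ids`, which are mutually exclusive:

```yaml
# Hypothetical variant: pin the container to GPU 0 instead of
# reserving any one GPU (use either count or device_ids, not both).
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          device_ids: [ "0" ]
          capabilities: [ gpu ]
```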