onurxtasar committed
Commit 6219d2c
Parent: 15eaa35

Update README.md


- Added metrics (Table 3 from the paper), and some training details
- Added Eyal's name

Files changed (1)
  1. README.md +7 -3
README.md CHANGED:

````diff
@@ -11,7 +11,7 @@ license: cc-by-nc-nd-4.0
 # ⚡ FlashDiffusion: FlashSDXL ⚡
 
 
-Flash Diffusion is a diffusion distillation method proposed in [ADD ARXIV]() *by Clément Chadebec, Onur Tasar and Benjamin Aubin.*
+Flash Diffusion is a diffusion distillation method proposed in [ADD ARXIV]() *by Clément Chadebec, Onur Tasar, Eyal Benaroche, and Benjamin Aubin.*
 This model is a **26.4M** LoRA distilled version of [SDXL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) model that is able to generate images in **4 steps**. The main purpose of this model is to reproduce the main results of the paper.
 
 
@@ -26,7 +26,7 @@ The model can be used using the `StableDiffusionPipeline` from `diffusers` library
 ```python
 from diffusers import DiffusionPipeline, LCMScheduler
 
-adapter_id = "jasperai/flash-sd"
+adapter_id = "jasperai/flash-sdxl"
 
 pipe = DiffusionPipeline.from_pretrained(
     "stabilityai/stable-diffusion-xl-base-1.0",
@@ -53,7 +53,11 @@ image = pipe(prompt, num_inference_steps=4, guidance_scale=0).images[0]
 </p>
 
 # Training Details
-
+The model was trained for 20k iterations on 4 H100 GPUs (representing approximately 176 hours of training). Please refer to the [paper]() for further parameters details.
 
+**Metrics on COCO 2014 validation (Table 3)**
+- FID-10k: 21.62 (4 NFE)
+- CLIP Score: 0.327 (4 NFE)
+
 ## License
 This model is released under the the Creative Commons BY-NC license.
````
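The model card describes the adapter as a **26.4M**-parameter LoRA. As a side note, LoRA checkpoints stay this small because each adapted weight matrix trains only two low-rank factors rather than a full update; a minimal sketch of the parameter arithmetic (the dimensions and rank below are illustrative assumptions, not SDXL's actual layer shapes):

```python
# Toy illustration of why a LoRA checkpoint is small.
# LoRA replaces a full d_out x d_in weight update with two low-rank
# factors B (d_out x r) and A (r x d_in), so each adapted layer trains
# only r * (d_in + d_out) parameters instead of d_in * d_out.
# NOTE: dimensions and rank here are hypothetical, not SDXL's real shapes.

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters added by one LoRA adapter of the given rank."""
    return rank * (d_in + d_out)

full = 1280 * 1280                      # full update of one 1280x1280 projection
lora = lora_params(1280, 1280, rank=64) # LoRA update of the same projection
print(full, lora, lora / full)
```

Summed over the attention projections a LoRA typically targets, per-layer counts like this add up to totals in the tens of millions, the same order of magnitude as the 26.4M figure above.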