dreamdrop-art committed
Commit 130eafb
1 Parent(s): 8903ff7

Update README.md
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
  - safetensors
  - stable-diffusion
  - sdxl
  - flash
  - sdxl-flash
  - lightning
  - turbo
  - lcm
  - hyper
  - fast
  - fast-sdxl
  - sd-community
inference:
  parameters:
    num_inference_steps: 7
    guidance_scale: 3
    negative_prompt: >-
      (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong
      anatomy, extra limb, missing limb, floating limbs, (mutated hands and
      fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting,
      blurry, amputation
---
# **SDXL Flash Mini** *in collaboration with [Project Fluently](https://hf.co/fluently)*

![preview](images/preview.png)
Introducing SDXL Flash (Mini), a new fast model. We found that fast XL models gain speed at the cost of quality, so we built one that is not quite as fast as LCM, Turbo, Lightning, or Hyper, but delivers higher quality. Below you will find our study of steps and CFG.

`It weighs less, consumes less video memory and other resources, and the quality has not dropped much.`
### Steps and CFG (Guidance)

![steps_and_cfg_grid_test](images/steps_cfg_grid.png)

### Optimal settings
- **Steps**: 6-9
- **CFG Scale**: 2.5-3.5
- **Sampler**: DPM++ SDE
### Diffusers usage

```bash
pip install torch diffusers
```

```py
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverSinglestepScheduler

# Load the model in half precision on the GPU.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "sd-community/sdxl-flash-mini", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Ensure the sampler uses "trailing" timesteps.
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing"
)

# Generate an image with the recommended steps and guidance scale.
image = pipe(
    "a happy dog, sunny day, realism", num_inference_steps=7, guidance_scale=3
).images[0]
image.save("output.png")
```