mdmachine committed
Commit 276c295
Parent: 4b781b2

Update README.md

Files changed (1)
  1. README.md +59 -14
README.md CHANGED
@@ -6,10 +6,12 @@ base_model:
  - black-forest-labs/FLUX.1-dev
  ---

  **FLUX Model Merges & Tweaks: Detail Enhancement and Acceleration**
  =====================================================

- <img src="https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/resolve/main/images/ComfyUI_2024-11-24_0231.jpg" alt="ComfyUI 2024-11-24 0231">

  This repository contains merged models, built upon the base models:
  - [Freepik's Flux.1-Lite-8B-alpha](https://huggingface.co/Freepik/flux.1-lite-8B-alpha)
@@ -17,21 +19,34 @@ This repository contains merged models, built upon the base models:

  Detail enhancement and acceleration techniques have been applied, optimized particularly for NVIDIA 4XXX cards (and possibly 3XXX as well). The goal is high-efficiency accelerated models with lower overhead. The de-re-distilled model can be used with CFG left at 1, and the baked-in accelerators work as intended.

  **Detail Plus! - Built upon the base model [Freepik's Flux.1-Lite-8B-alpha](https://huggingface.co/Freepik/flux.1-lite-8B-alpha):**

  **Detail Enhancement Used:**
  - **Style LORA - Extreme Detailer for FLUX.1-dev** (Weight: 0.5) ([Model Link](https://civitai.com/models/832683))
  - **Best of Flux: Style Enhancing LoRA** (Weight: 0.25) ([Model Link](https://civitai.com/models/821668))

- 1. **GGUF Quantized Models (Q8_0)**:
- - [flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus.Steps.Q8_0_quantized](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus.Steps.Q8_0_quantized.gguf)
  - [flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus.Steps.Q8_0_quantized](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus.Steps.Q8_0_quantized.gguf)
  - [flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus.Steps.Q8_0_quantized](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus.Steps.Q8_0_quantized.gguf)

- 2. **SAFETensors Format (fp8_e4m3fn_fast)**:
- - [flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus-fp8_e4m3fn_fast](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus-fp8_e4m3fn_fast.safetensors)
- - [flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus-fp8_e4m3fn_fast](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus-fp8_e4m3fn_fast.safetensors)
- - [flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus-fp8_e4m3fn_fast](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus-fp8_e4m3fn_fast.safetensors)

  **Detail Plus! De-Re-Distilled - Built upon the base model [Flux-dev-de-distill](https://huggingface.co/nyanko7/flux-dev-de-distill):**

@@ -40,11 +55,39 @@ Detail enhancement and acceleration techniques have been applied, particularly o
  - **Best of Flux: Style Enhancing LoRA** (Weight: 0.15) ([Model Link](https://civitai.com/models/821668))

  **Re-Distillation Used:**
- - **Flux distilled lora** (Weight: 0.15) ([Model Link](https://civitai.com/models/977247/flux-distilled-lora))

- - Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-fp8_e4m3fn_fast
- - Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-fp8_e4m3fn_fast
- - Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-fp8_e4m3fn_fast

  **Acceleration Credits:**

@@ -55,10 +98,12 @@ Detail enhancement and acceleration techniques have been applied, particularly o
  * [Alimama Creative](https://huggingface.co/alimama-creative), a renowned generative-AI research team, optimized the following model:
  - [FLUX.1-Turbo-Alpha](https://huggingface.co/alimama-creative/FLUX.1-Turbo-Alpha)

  **Attribution and Licensing Notice:**

- The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.

- Our model weights are released under the FLUX.1 [dev] [Non-Commercial License](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/LICENSE.md).

- This merge combines the strengths of these models, applying detail enhancement and acceleration techniques to create a unique and powerful AI model built upon [Freepik's Flux.1-Lite-8B-alpha](https://huggingface.co/Freepik/flux.1-lite-8B-alpha). We hope this contributes positively to the development of the generative-AI community!
 
  - black-forest-labs/FLUX.1-dev
  ---

+ <img src="" alt="FLUX">
+
  **FLUX Model Merges & Tweaks: Detail Enhancement and Acceleration**
  =====================================================

+ <img src="" alt="FLUX Model Merges & Tweaks: Detail Enhancement and Acceleration">

  This repository contains merged models, built upon the base models:
  - [Freepik's Flux.1-Lite-8B-alpha](https://huggingface.co/Freepik/flux.1-lite-8B-alpha)
 

  Detail enhancement and acceleration techniques have been applied, optimized particularly for NVIDIA 4XXX cards (and possibly 3XXX as well). The goal is high-efficiency accelerated models with lower overhead. The de-re-distilled model can be used with CFG left at 1, and the baked-in accelerators work as intended.
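
Outside of ComfyUI, a minimal sketch of how one of the merged checkpoints below might be run with Hugging Face diffusers is shown here. The diffusers route, the placeholder local filename, and the sampler settings are assumptions for illustration, not the workflow used to build or test these merges:

```python
# Minimal sketch, not an official recipe: load a merged FULL checkpoint from this
# repo into the stock FLUX.1-dev pipeline and sample at the accelerator's step count.
# The local path is a placeholder for a file from the "SAFETensors Format (FULL)" lists below.
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

transformer = FluxTransformer2DModel.from_single_file(
    "Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2.safetensors",  # placeholder local path
    torch_dtype=torch.bfloat16,
)

# Reuse FLUX.1-dev's text encoders, VAE, and scheduler; swap in the merged transformer.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # optional, helps fit consumer VRAM

# FluxPipeline uses FLUX's embedded (distilled) guidance via `guidance_scale`;
# no real classifier-free guidance is applied, matching the "CFG at 1" note above.
image = pipe(
    "macro photo of a dew-covered leaf, intricate detail",
    num_inference_steps=8,   # Hyper 8-step variant; 16 for the Hyper-16 merges
    guidance_scale=3.5,
).images[0]
image.save("detail_plus_8step.png")
```

The same pattern applies to the 16-step Hyper and 8-step Turbo variants; only the checkpoint and `num_inference_steps` change.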

+ **=====================================================**
+
+ <img src="" alt="Detail Plus!">
+
  **Detail Plus! - Built upon the base model [Freepik's Flux.1-Lite-8B-alpha](https://huggingface.co/Freepik/flux.1-lite-8B-alpha):**

  **Detail Enhancement Used:**
  - **Style LORA - Extreme Detailer for FLUX.1-dev** (Weight: 0.5) ([Model Link](https://civitai.com/models/832683))
  - **Best of Flux: Style Enhancing LoRA** (Weight: 0.25) ([Model Link](https://civitai.com/models/821668))
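
For reference, a rough sketch of how a comparable detail mix could be layered onto the base model with diffusers' LoRA support. It assumes the Freepik base repo loads directly with `FluxPipeline.from_pretrained`; the LoRA filenames are placeholders for the CivitAI downloads above, and this approximates the published merge rather than reproducing it exactly:

```python
# Rough sketch: layering the two detail LoRAs onto Freepik's flux.1-lite-8B-alpha
# at the listed weights (0.5 and 0.25). The LoRA filenames are placeholders for the
# CivitAI downloads above; the released checkpoints also bake in an accelerator LoRA,
# so this is an approximation, not an exact reproduction.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "Freepik/flux.1-lite-8B-alpha",
    torch_dtype=torch.bfloat16,
)

# Attach each detail LoRA as a named adapter.
pipe.load_lora_weights("extreme-detailer-flux1-dev.safetensors", adapter_name="extreme_detailer")  # placeholder
pipe.load_lora_weights("best-of-flux-style.safetensors", adapter_name="style_enhancer")            # placeholder

# Activate both adapters at the weights from the list above.
pipe.set_adapters(["extreme_detailer", "style_enhancer"], adapter_weights=[0.5, 0.25])
```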

+ 1. **SAFETensors Format (fp8_e4m3fn_fast)**:
+ - [flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus-fp8_e4m3fn_fast](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus-fp8_e4m3fn_fast.safetensors)
+ - [flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus-fp8_e4m3fn_fast](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus-fp8_e4m3fn_fast.safetensors)
+ - [flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus-fp8_e4m3fn_fast](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus-fp8_e4m3fn_fast.safetensors)
+
+ 2. **SAFETensors Format (FULL)**:
+ - flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus.safetensors
+ - flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus.safetensors
+ - flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus.safetensors
+
+ 3. **GGUF Quantized Models (Q8_0)**:
  - [flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus.Steps.Q8_0_quantized](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus.Steps.Q8_0_quantized.gguf)
+ - [flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus.Steps.Q8_0_quantized](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus.Steps.Q8_0_quantized.gguf)
  - [flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus.Steps.Q8_0_quantized](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus.Steps.Q8_0_quantized.gguf)

+ **=====================================================**
+
+ <img src="" alt="Detail Plus! De-Re-Distilled">

  **Detail Plus! De-Re-Distilled - Built upon the base model [Flux-dev-de-distill](https://huggingface.co/nyanko7/flux-dev-de-distill):**

  - **Best of Flux: Style Enhancing LoRA** (Weight: 0.15) ([Model Link](https://civitai.com/models/821668))

  **Re-Distillation Used:**
+ - **Flux distilled lora** (Weight: -1.00) ([Model Link](https://civitai.com/models/977247/flux-distilled-lora))
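
A negative weight simply subtracts the LoRA delta instead of adding it. Below is a hedged sketch of that step with diffusers; the checkpoint and LoRA paths are placeholders, and this illustrates the idea rather than the exact procedure used for the published merges:

```python
# Sketch of the re-distillation step: apply the "Flux distilled lora" at the listed
# weight of -1.00 on top of the de-distilled base. A negative adapter weight is passed
# straight through as a multiplier, so the LoRA delta is subtracted rather than added.
# All paths are placeholders; this illustrates the concept, not the exact merge recipe.
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# De-distilled base transformer (single-file checkpoint from nyanko7/flux-dev-de-distill).
transformer = FluxTransformer2DModel.from_single_file(
    "flux-dev-de-distill.safetensors",  # placeholder local path
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)

# Load the distilled LoRA and activate it at the negative scale.
pipe.load_lora_weights("flux-distilled-lora.safetensors", adapter_name="redistill")  # placeholder
pipe.set_adapters(["redistill"], adapter_weights=[-1.00])
```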
+
+ 1. **SAFETensors Format (fp8_e4m3fn_fast)**:
+ - [Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-fp8_e4m3fn_fast](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/FP8/Version%202/Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-fp8_e4m3fn_fast-V2.safetensors)
+ - [Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-fp8_e4m3fn_fast](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/FP8/Version%202/Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-fp8_e4m3fn_fast-V2.safetensors)
+ - [Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-fp8_e4m3fn_fast](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/FP8/Version%202/Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-fp8_e4m3fn_fast-V2.safetensors)
+
+ 2. **SAFETensors Format (FULL)**:
+ - Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2.safetensors
+ - Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-MAIN-V2.safetensors
+ - Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-MAIN-V2.safetensors

+ 3. **GGUF Quantized Models (BF16)**:
+ - Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2-BF16.gguf
+ - Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-MAIN-V2-BF16.gguf
+ - Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-MAIN-V2-BF16.gguf
+
+ 4. **GGUF Quantized Models (Q8_0)**:
+ - [Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2-Q8_0](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/GGUF/Q8_0/Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2-Q8_0.gguf)
+ - [Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-MAIN-V2-Q8_0](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/GGUF/Q8_0/Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-MAIN-V2-Q8_0.gguf)
+ - [Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-MAIN-V2-Q8_0](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/de-re-distill/GGUF/Q8_0/Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-MAIN-V2-Q8_0.gguf)
+
+ 5. **GGUF Quantized Models (Q6_K)**:
+ - Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2-Q6_K.gguf
+ - Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-MAIN-V2-Q6_K.gguf
+ - Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-MAIN-V2-Q6_K.gguf
+
+ 6. **GGUF Quantized Models (Q4_K_S)**:
+ - Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2-Q4_K_S.gguf
+ - Flux-dev-de-re-distill-Hyper.16.Step-Detail.Plus-ReDis-MAIN-V2-Q4_K_S.gguf
+ - Flux-dev-de-re-distill-Turbo.8.Step-Detail.Plus-ReDis-MAIN-V2-Q4_K_S.gguf
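
As a reference for the GGUF files above, a minimal sketch of loading one of the Q8_0 transformers through diffusers' GGUF support. It assumes a recent diffusers release with GGUF support plus the `gguf` package, rather than a ComfyUI-GGUF workflow:

```python
# Minimal sketch: load one of the Q8_0 GGUF transformers listed above with diffusers'
# GGUF support (needs a recent diffusers release and the `gguf` package). The repo id
# and filename come from the link list above.
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration",
    filename="de-re-distill/GGUF/Q8_0/Flux-dev-de-re-distill-Hyper.8.Step-Detail.Plus-ReDis-MAIN-V2-Q8_0.gguf",
)

# Weights stay Q8_0 in memory; compute runs in bf16.
transformer = FluxTransformer2DModel.from_single_file(
    gguf_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

# Drop the quantized transformer into the stock FLUX.1-dev pipeline, as in the earlier sketch.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

image = pipe("a cozy cabin in a snowy forest, ultra detailed", num_inference_steps=8).images[0]
image.save("de_re_distill_q8_8step.png")
```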
+
+ **=====================================================**

  **Acceleration Credits:**

 
  * [Alimama Creative](https://huggingface.co/alimama-creative), a renowned generative-AI research team, optimized the following model:
  - [FLUX.1-Turbo-Alpha](https://huggingface.co/alimama-creative/FLUX.1-Turbo-Alpha)

+ **=====================================================**
+
  **Attribution and Licensing Notice:**

+ The [FLUX.1-dev Model](https://huggingface.co/black-forest-labs/FLUX.1-dev) is licensed by Black Forest Labs, Inc. under the FLUX.1-dev [Non-Commercial License](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md). Copyright Black Forest Labs, Inc.

+ Our model weights are released under the FLUX.1-dev [Non-Commercial License](https://huggingface.co/mdmachine/FLUX.Model.Merge-Detail.Enhancement.and.Acceleration/blob/main/LICENSE.md).

+ This merge combines the strengths of these models, applying detail enhancement and acceleration techniques to create a unique and powerful AI model built upon [Freepik's Flux.1-Lite-8B-alpha](https://huggingface.co/Freepik/flux.1-lite-8B-alpha) & [Flux-dev-de-distill](https://huggingface.co/nyanko7/flux-dev-de-distill). We hope this contributes positively to the development of the generative-AI community!