---
license: other
base_model: "black-forest-labs/FLUX.1-dev"
tags:
- flux
- flux-diffusers
- text-to-image
- diffusers
- simpletuner
- not-for-all-audiences
- lora
- template:sd-lora
- lycoris
inference: true
---

# growwithdaisy/crrllcrrllxovrtn_subjects_flat_20241123_110734

This is a LyCORIS adapter derived from [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).

The main validation prompt used during training was:
```
a photo of a daisy
```

## Validation settings
- CFG: `3.5`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `FlowMatchEulerDiscreteScheduler`
- Seed: `69`
- Resolution: `1024x1024`
- Skip-layer guidance: none

Note: The validation settings are not necessarily the same as the [training settings](#training-settings).

<Gallery />

The text encoder **was not** trained. You may reuse the base model's text encoder for inference.

## Training settings

- Training epochs: 57
- Training steps: 10001
- Learning rate: 0.0001
- Learning rate schedule: constant
- Warmup steps: 0
- Max grad norm: 2.0
- Effective batch size: 16
- Micro-batch size: 2
- Gradient accumulation steps: 1
- Number of GPUs: 8
- Gradient checkpointing: True
- Prediction type: flow-matching (extra parameters=['shift=3', 'flux_guidance_mode=constant', 'flux_guidance_value=1.0', 'flow_matching_loss=compatible'])
- Optimizer: optimi-stableadamw (weight_decay=1e-3)
- Trainable parameter precision: Pure BF16
- Caption dropout probability: 5.0%

+ ### LyCORIS Config:
68
+ ```json
69
+ {
70
+ "algo": "lokr",
71
+ "multiplier": 1,
72
+ "linear_dim": 1000000,
73
+ "linear_alpha": 1,
74
+ "factor": 12,
75
+ "init_lokr_norm": 0.001,
76
+ "apply_preset": {
77
+ "target_module": [
78
+ "FluxTransformerBlock",
79
+ "FluxSingleTransformerBlock"
80
+ ],
81
+ "module_algo_map": {
82
+ "Attention": {
83
+ "factor": 12
84
+ },
85
+ "FeedForward": {
86
+ "factor": 6
87
+ }
88
+ }
89
+ }
90
+ }
91
+ ```
92
+
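For intuition on why `factor` controls adapter size: LoKr approximates each targeted weight matrix as a Kronecker product of two much smaller matrices, so larger factors shrink one side of the product. A rough, illustrative parameter count for a hypothetical square 3072x3072 projection follows (this simplifies LyCORIS's actual dimension-factorisation logic, which may split dimensions differently):

```python
# Illustrative LoKr parameter count for one 3072x3072 linear layer.
# With factor f, the weight W (m x n) is approximated as kron(A, B),
# where A is (f x f) and B is (m/f x n/f) -- a simplification of
# LyCORIS's real factorisation.
m = n = 3072
f = 12

full_params = m * n                        # 9,437,184 parameters in the full layer
lokr_params = f * f + (m // f) * (n // f)  # 144 + 65,536 = 65,680 adapter parameters
print(lokr_params / full_params)  # ~0.007 -> under 1% of the full layer
```

This is why a full-rank LoKr (`linear_dim: 1000000` effectively disables the low-rank inner decomposition) can still be small on disk: the Kronecker structure itself provides the compression.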
## Datasets

### crrllcrrllxovrtn_subjects_flat-512
- Repeats: 0
- Total number of images: ~272
- Total number of aspect buckets: 11
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### crrllcrrllxovrtn_subjects_flat-768
- Repeats: 1
- Total number of images: ~232
- Total number of aspect buckets: 14
- Resolution: 0.589824 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### crrllcrrllxovrtn_subjects_flat-1024
- Repeats: 2
- Total number of images: ~168
- Total number of aspect buckets: 12
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

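The three datasets above are the same images bucketed at different resolutions, with `repeats` weighting the higher resolutions more heavily. Assuming `repeats: N` means each image is sampled N + 1 times per epoch (SimpleTuner's usual interpretation) and using the approximate image counts reported above, a rough samples-per-epoch estimate is:

```python
# Rough samples-per-epoch across the three aspect-bucketed datasets,
# assuming `repeats: N` -> each image seen N + 1 times per epoch.
datasets = {
    "crrllcrrllxovrtn_subjects_flat-512": {"images": 272, "repeats": 0},
    "crrllcrrllxovrtn_subjects_flat-768": {"images": 232, "repeats": 1},
    "crrllcrrllxovrtn_subjects_flat-1024": {"images": 168, "repeats": 2},
}

samples_per_epoch = sum(d["images"] * (d["repeats"] + 1) for d in datasets.values())
print(samples_per_epoch)  # ~1240 samples per epoch before batching
```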
## Inference

```python
import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris_from_weights


def download_adapter(repo_id: str):
    import os
    from huggingface_hub import hf_hub_download
    adapter_filename = "pytorch_lora_weights.safetensors"
    cache_dir = os.environ.get('HF_PATH', os.path.expanduser('~/.cache/huggingface/hub/models'))
    cleaned_adapter_path = repo_id.replace("/", "_").replace("\\", "_").replace(":", "_")
    path_to_adapter = os.path.join(cache_dir, cleaned_adapter_path)
    path_to_adapter_file = os.path.join(path_to_adapter, adapter_filename)
    os.makedirs(path_to_adapter, exist_ok=True)
    hf_hub_download(
        repo_id=repo_id, filename=adapter_filename, local_dir=path_to_adapter
    )
    return path_to_adapter_file


model_id = 'black-forest-labs/FLUX.1-dev'
adapter_repo_id = 'playerzer0x/growwithdaisy/crrllcrrllxovrtn_subjects_flat_20241123_110734'
adapter_file_path = download_adapter(repo_id=adapter_repo_id)
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)  # load directly in bf16
lora_scale = 1.0
wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_file_path, pipeline.transformer)
wrapper.merge_to()

prompt = "a photo of a daisy"

# Optional: quantise the model to save on VRAM.
# Note: the model was not quantised during training, so quantisation is not
# required at inference time.
# from optimum.quanto import quantize, freeze, qint8
# quantize(pipeline.transformer, weights=qint8)
# freeze(pipeline.transformer)

device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
pipeline.to(device)  # the pipeline is already at its target precision level
image = pipeline(
    prompt=prompt,
    num_inference_steps=20,
    generator=torch.Generator(device=device).manual_seed(69),
    width=1024,
    height=1024,
    guidance_scale=3.5,
).images[0]
image.save("output.png", format="PNG")
```