mytoon committed on
Commit
ba2b361
1 Parent(s): a2a2b05

End of training

README.md ADDED
@@ -0,0 +1,39 @@
+
+ ---
+ tags:
+ - stable-diffusion-xl
+ - stable-diffusion-xl-diffusers
+ - text-to-image
+ - diffusers
+ - lora
+ - template:sd-lora
+
+ base_model: stabilityai/stable-diffusion-xl-base-1.0
+ instance_prompt: 1girl in TOK style
+ license: openrail++
+ ---
+
+ # SDXL LoRA DreamBooth - mytoon/toon_lora_fixed1
+
+ <Gallery />
+
+ ## Model description
+
+ These are mytoon/toon_lora_fixed1 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
+
+ The weights were trained using [DreamBooth](https://dreambooth.github.io/).
+
+ LoRA for the text encoder was enabled: False.
+
+ Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
+
+ ## Trigger words
+
+ You should use `1girl in TOK style` to trigger the image generation.
+
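For context, a minimal inference sketch tying the pieces above together, assuming a recent `diffusers` release and a CUDA device; the step count and output filename are illustrative and not taken from this repository:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# madebyollin/sdxl-vae-fp16-fix is the VAE named in the card; it avoids NaN
# artifacts when the SDXL VAE runs in float16.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Load the LoRA adapter directly from this repository.
pipe.load_lora_weights("mytoon/toon_lora_fixed1")

# Use the trigger phrase from the "Trigger words" section.
image = pipe("1girl in TOK style", num_inference_steps=30).images[0]
image.save("toon_sample.png")
```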
+ ## Download model
+
+ Weights for this model are available in Safetensors format.
+
+ [Download](mytoon/toon_lora_fixed1/tree/main) them in the Files & versions tab.
+
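As an alternative to the Files & versions tab, a hedged sketch of fetching the weights programmatically, assuming `huggingface_hub` is installed:

```python
from huggingface_hub import hf_hub_download

# Downloads the ~23 MB LoRA file into the local Hugging Face cache and
# returns its path.
lora_path = hf_hub_download(
    repo_id="mytoon/toon_lora_fixed1",
    filename="pytorch_lora_weights.safetensors",
)
print(lora_path)
```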
logs/dreambooth-lora-sd-xl/1701089579.7668817/events.out.tfevents.1701089579.eec392a5e4b2.1490.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d3eb0f62679c8dbb7af09ab808c9eb6f92c67880b14878e44dd3d57aeaf6ea6a
+ size 3175
logs/dreambooth-lora-sd-xl/1701089579.768748/hparams.yml ADDED
@@ -0,0 +1,68 @@
+ adam_beta1: 0.9
+ adam_beta2: 0.999
+ adam_epsilon: 1.0e-08
+ adam_weight_decay: 0.0001
+ adam_weight_decay_text_encoder: 0.001
+ allow_tf32: false
+ cache_dir: null
+ caption_column: null
+ center_crop: false
+ checkpointing_steps: 717
+ checkpoints_total_limit: null
+ class_data_dir: null
+ class_prompt: null
+ crops_coords_top_left_h: 0
+ crops_coords_top_left_w: 0
+ dataloader_num_workers: 0
+ dataset_config_name: null
+ dataset_name: toon
+ enable_xformers_memory_efficient_attention: false
+ gradient_accumulation_steps: 3
+ gradient_checkpointing: true
+ hub_model_id: null
+ hub_token: null
+ image_column: image
+ instance_data_dir: null
+ instance_prompt: 1girl in TOK style
+ learning_rate: 0.0001
+ local_rank: -1
+ logging_dir: logs
+ lr_num_cycles: 1
+ lr_power: 1.0
+ lr_scheduler: constant
+ lr_warmup_steps: 0
+ max_grad_norm: 1.0
+ max_train_steps: 500
+ mixed_precision: fp16
+ num_class_images: 100
+ num_train_epochs: 500
+ num_validation_images: 4
+ optimizer: AdamW
+ output_dir: tool_lora
+ pretrained_model_name_or_path: stabilityai/stable-diffusion-xl-base-1.0
+ pretrained_vae_model_name_or_path: madebyollin/sdxl-vae-fp16-fix
+ prior_generation_precision: null
+ prior_loss_weight: 1.0
+ prodigy_beta3: null
+ prodigy_decouple: true
+ prodigy_safeguard_warmup: true
+ prodigy_use_bias_correction: true
+ push_to_hub: false
+ rank: 4
+ repeats: 1
+ report_to: tensorboard
+ resolution: 1024
+ resume_from_checkpoint: null
+ revision: null
+ sample_batch_size: 4
+ scale_lr: false
+ seed: 0
+ snr_gamma: 5.0
+ text_encoder_lr: 5.0e-06
+ train_batch_size: 1
+ train_text_encoder: false
+ use_8bit_adam: true
+ validation_epochs: 50
+ validation_prompt: null
+ variant: null
+ with_prior_preservation: false
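As a rough illustration of how the optimizer entries above translate to code, assuming the 8-bit AdamW implied by `use_8bit_adam: true` (the actual training script may construct it differently):

```python
import torch
import bitsandbytes as bnb

# Stand-in parameter list; in the real run this would be the rank-4 LoRA
# parameters of the SDXL UNet.
lora_params = [torch.nn.Parameter(torch.zeros(4, 4))]

optimizer = bnb.optim.AdamW8bit(
    lora_params,
    lr=1.0e-4,            # learning_rate
    betas=(0.9, 0.999),   # adam_beta1, adam_beta2
    weight_decay=1.0e-4,  # adam_weight_decay
    eps=1.0e-8,           # adam_epsilon
)

# max_grad_norm: 1.0 -> gradients are clipped before each optimizer step.
torch.nn.utils.clip_grad_norm_(lora_params, max_norm=1.0)
```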
logs/dreambooth-lora-sd-xl/events.out.tfevents.1701089579.eec392a5e4b2.1490.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:15d4a33e25640a0c8d1b519fed4d48bf8db923281a61327463d9d131afd6ba4f
+ size 41834
pytorch_lora_weights.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1c34d5510fe0014fcad27c1dd30ee868ab3d5380b1923152835f0a51c915952c
+ size 23396024