Commit fdd3a09
Parent(s): 8a64e00
End of training

Files changed:
- README.md +66 -0
- embeddings.safetensors +3 -0
- logs/dreambooth-lora-sd-xl/1701973385.0754457/events.out.tfevents.1701973385.r-multimodalart-autotrain-apolizinho-comendo-salada-o-b49a4zmjz.262.1 +3 -0
- logs/dreambooth-lora-sd-xl/1701973385.07724/hparams.yml +74 -0
- logs/dreambooth-lora-sd-xl/events.out.tfevents.1701973385.r-multimodalart-autotrain-apolizinho-comendo-salada-o-b49a4zmjz.262.0 +3 -0
- pytorch_lora_weights.safetensors +3 -0
README.md
ADDED
@@ -0,0 +1,66 @@
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora

base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of <s0><s1>
license: openrail++
---

# SDXL LoRA DreamBooth - multimodalart/apolizinho-comendo-salada

<Gallery />

## Model description

### These are multimodalart/apolizinho-comendo-salada LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Pivotal tuning was enabled: True.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:

to trigger concept `TOK` → use `<s0><s1>` in your prompt
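That substitution is a plain string replacement; a minimal sketch (the `TOK` identifier and `<s0><s1>` tokens come from this card, the helper name is illustrative):

```python
def resolve_prompt(prompt: str, token_map: dict) -> str:
    """Replace each concept identifier (e.g. 'TOK') with its trained tokens."""
    for identifier, tokens in token_map.items():
        prompt = prompt.replace(identifier, tokens)
    return prompt

# Mapping from the abstract identifier to the inserted pivotal-tuning tokens
token_map = {"TOK": "<s0><s1>"}

print(resolve_prompt("A photo of TOK", token_map))  # A photo of <s0><s1>
```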
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('multimodalart/apolizinho-comendo-salada', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='multimodalart/apolizinho-comendo-salada', filename="embeddings.safetensors", repo_type="model")
state_dict = load_file(embedding_path)
# Load the pivotal-tuning embeddings into both SDXL text encoders
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)

image = pipeline('A photo of <s0><s1>').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).

## Download model

Use it with UIs such as AUTOMATIC1111, ComfyUI, SD.Next, Invoke
- Download the LoRA *.safetensors [here](multimodalart/apolizinho-comendo-salada/tree/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder.
- Download the text embeddings *.safetensors [here](multimodalart/apolizinho-comendo-salada/tree/main/embeddings.safetensors). Rename it and place it in your embeddings folder.

All [Files & versions](multimodalart/apolizinho-comendo-salada/tree/main).
embeddings.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:20bd8c6a03f7c8928d5590062789bf9cf45c479306009790c1670849c2e9e8c1
size 8344
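The three lines above are a Git LFS pointer, not the weights themselves: the actual binary is fetched by LFS at checkout, keyed by the `sha256` oid. A minimal sketch of reading such a pointer (the helper name is illustrative; the sample string mirrors the pointer above):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its space-separated key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:20bd8c6a03f7c8928d5590062789bf9cf45c479306009790c1670849c2e9e8c1
size 8344"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 8344
```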
logs/dreambooth-lora-sd-xl/1701973385.0754457/events.out.tfevents.1701973385.r-multimodalart-autotrain-apolizinho-comendo-salada-o-b49a4zmjz.262.1
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa717db13d5c5e647f14a599b52965f9fe3ac4ef86716767bbd311edb1283d16
size 3801
logs/dreambooth-lora-sd-xl/1701973385.07724/hparams.yml
ADDED
@@ -0,0 +1,74 @@
adam_beta1: 0.9
adam_beta2: 0.99
adam_epsilon: 1.0e-08
adam_weight_decay: 0.0001
adam_weight_decay_text_encoder: 0.0
allow_tf32: false
cache_dir: null
cache_latents: true
caption_column: prompt
center_crop: false
checkpointing_steps: 5000
checkpoints_total_limit: null
class_data_dir: 1bffed94-8172-4c7f-871b-dd568b5664bb
class_prompt: a photo of a person
crops_coords_top_left_h: 0
crops_coords_top_left_w: 0
dataloader_num_workers: 0
dataset_config_name: null
dataset_name: ./373f41a7-d55d-4151-a6d0-4f07a734150d
enable_xformers_memory_efficient_attention: false
gradient_accumulation_steps: 1
gradient_checkpointing: true
hub_model_id: null
hub_token: null
image_column: image
instance_data_dir: null
instance_prompt: A photo of <s0><s1>
learning_rate: 1.0
local_rank: -1
logging_dir: logs
lr_num_cycles: 1
lr_power: 1.0
lr_scheduler: constant
lr_warmup_steps: 0
max_grad_norm: 1.0
max_train_steps: 400
mixed_precision: bf16
num_class_images: 150
num_new_tokens_per_abstraction: 2
num_train_epochs: 6
num_validation_images: 4
optimizer: prodigy
output_dir: apolizinho-comendo-salada
pretrained_model_name_or_path: stabilityai/stable-diffusion-xl-base-1.0
pretrained_vae_model_name_or_path: madebyollin/sdxl-vae-fp16-fix
prior_generation_precision: null
prior_loss_weight: 1.0
prodigy_beta3: 0.0
prodigy_decouple: true
prodigy_safeguard_warmup: true
prodigy_use_bias_correction: true
push_to_hub: true
rank: 64
repeats: 3
report_to: tensorboard
resolution: 1024
resume_from_checkpoint: null
revision: null
sample_batch_size: 4
scale_lr: false
seed: 42
snr_gamma: null
text_encoder_lr: 1.0
token_abstraction: TOK
train_batch_size: 2
train_text_encoder: false
train_text_encoder_frac: 1.0
train_text_encoder_ti: true
train_text_encoder_ti_frac: 0.5
use_8bit_adam: false
validation_epochs: 50
validation_prompt: null
variant: null
with_prior_preservation: true
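These hyperparameters are logged as flat `key: value` YAML, so they can be reloaded programmatically. A stdlib-only sketch of parsing that flat layout (a real YAML library such as PyYAML's `safe_load` handles the general case; the sample below is a subset of the file above):

```python
def load_flat_hparams(text: str) -> dict:
    """Parse flat 'key: value' lines into a dict, converting ints, floats,
    booleans and null. Nested YAML is out of scope for this sketch."""
    out = {}
    for line in text.strip().splitlines():
        key, _, raw = line.partition(": ")
        raw = raw.strip()
        if raw == "null":
            out[key] = None
        elif raw in ("true", "false"):
            out[key] = raw == "true"
        else:
            try:
                out[key] = int(raw)
            except ValueError:
                try:
                    out[key] = float(raw)
                except ValueError:
                    out[key] = raw
    return out

sample = """optimizer: prodigy
learning_rate: 1.0
max_train_steps: 400
snr_gamma: null
with_prior_preservation: true"""

hp = load_flat_hparams(sample)
print(hp["max_train_steps"], hp["optimizer"])  # 400 prodigy
```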
logs/dreambooth-lora-sd-xl/events.out.tfevents.1701973385.r-multimodalart-autotrain-apolizinho-comendo-salada-o-b49a4zmjz.262.0
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a346a6a4c28bae57eb7a041982adc8861417f8fbd150fd3a689eabeb8169062
size 33434
pytorch_lora_weights.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2a1f8f859c36c9dea5abebf3290fabffda1516d5c5060a48b6fdcb02f4e61e04
size 371758976