End of training

Files changed:
- .gitattributes +1 -0
- README.md +4 -4
- val_imgs_grid.png +0 -0
.gitattributes CHANGED

@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+val_imgs_grid.png filter=lfs diff=lfs merge=lfs -text
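The line added to `.gitattributes` above is the standard Git LFS filter rule. A minimal sketch of the manual equivalent (in a real checkout you would normally run `git lfs track "val_imgs_grid.png"`, which appends the same rule; `git-lfs` must be installed for the filters to take effect):

```shell
# Append the LFS filter rule by hand -- equivalent to what
# `git lfs track "val_imgs_grid.png"` writes into .gitattributes.
echo 'val_imgs_grid.png filter=lfs diff=lfs merge=lfs -text' >> .gitattributes
```

After this, `git add val_imgs_grid.png` stores an LFS pointer in the repository instead of the binary image itself.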
README.md CHANGED

@@ -1,7 +1,7 @@
 
 ---
 license: creativeml-openrail-m
-base_model:
+base_model: stabilityai/stable-diffusion-2-1
 datasets:
 - fantasyfish/laion-art
 tags:
@@ -14,7 +14,7 @@ inference: true
 
 # Text-to-image finetuning - Vishnou/sd-laion-art
 
-This pipeline was finetuned from **
+This pipeline was finetuned from **stabilityai/stable-diffusion-2-1** on the **fantasyfish/laion-art** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ['A man in a suit']:
 
 ![val_imgs_grid](./val_imgs_grid.png)
 
@@ -37,7 +37,7 @@ image.save("my_image.png")
 
 These are the key hyperparameters used during training:
 
-* Epochs:
+* Epochs: 3
 * Learning rate: 1e-05
 * Batch size: 16
 * Gradient accumulation steps: 4
@@ -45,4 +45,4 @@ These are the key hyperparameters used during training:
 * Mixed-precision: bf16
 
 
-More information on all the CLI arguments and the environment are available on your [`wandb` run page](https://wandb.ai/vishnou/text2image-fine-tune/runs/
+More information on all the CLI arguments and the environment are available on your [`wandb` run page](https://wandb.ai/vishnou/text2image-fine-tune/runs/l2qhp86s).
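For context, the README's usage section (visible in the hunk context `image.save("my_image.png")`) loads this pipeline with `diffusers`. A minimal sketch, assuming the standard `StableDiffusionPipeline` API and a CUDA device; the function name `generate` is ours, not from the card:

```python
def generate(prompt: str, model_id: str = "Vishnou/sd-laion-art"):
    """Render one image with the finetuned pipeline (sketch only).

    Imports are deferred so the sketch stays lightweight; downloading the
    weights and running inference require `diffusers`, `torch`, and a GPU.
    """
    import torch
    from diffusers import StableDiffusionPipeline

    # Load the finetuned checkpoint in bf16 (matching the card's
    # mixed-precision setting) and move it to the GPU.
    pipe = StableDiffusionPipeline.from_pretrained(
        model_id, torch_dtype=torch.bfloat16
    )
    pipe = pipe.to("cuda")

    image = pipe(prompt=prompt).images[0]
    image.save("my_image.png")  # mirrors the snippet in the card's usage section
    return image


# Example, using the card's validation prompt:
# generate("A man in a suit")
```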
val_imgs_grid.png CHANGED (stored via Git LFS)