# Overview

These are latent diffusion transformer models trained from scratch on 100k 256x256 images.
Checkpoint 278k-full_state_dict.pth has been trained for about 500 epochs and is well into being overfit on the 100k training images.

The two checkpoints at 300k and 395k steps were further trained on a Midjourney dataset of 600k images at a constant LR of 5e-5: 9.4 additional epochs for the 300k-step checkpoint and 50 epochs for the 395k-step checkpoint.
The additional training on the MJ dataset took ~8 hours on an RTX 4090 with batch size 256.
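These step and epoch counts line up if the further training resumed from the 278k-step checkpoint; a back-of-the-envelope check (the resume point is an assumption, the other numbers are from above):

```python
# Sanity check of the epoch counts above, assuming the Midjourney
# fine-tuning resumed from the 278k-step checkpoint.
dataset_size = 600_000
batch_size = 256
steps_per_epoch = dataset_size / batch_size    # ~2344 steps per epoch

print((300_000 - 278_000) / steps_per_epoch)   # ~9.4 epochs  -> 300k checkpoint
print((395_000 - 278_000) / steps_per_epoch)   # ~49.9 epochs -> 395k checkpoint
```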

The models are the same as in the Google Colab below: embed_dim=512, n_layers=8, total parameters=30,507,328 (~30M).
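As a quick check, the parameter count can be read straight off the checkpoint (a minimal PyTorch sketch; it assumes the .pth file holds a flat model state dict, so if it wraps other entries such as optimizer state, select the model weights first):

```python
import torch

# Count the parameters stored in the checkpoint. "model_state_dict" is a
# guessed key for a wrapped checkpoint; a flat state dict falls through as-is.
ckpt = torch.load("278k-full_state_dict.pth", map_location="cpu")
state_dict = ckpt.get("model_state_dict", ckpt)
n_params = sum(v.numel() for v in state_dict.values() if torch.is_tensor(v))
print(f"{n_params:,} parameters")  # the overview above says 30,507,328
```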

# Colab Training Notebook
https://colab.research.google.com/drive/1sKk0usxEF4bmdCDcNQJQNMt4l9qBOeAM?usp=sharing

# GitHub Repo (not mine)
This repo contains the original training code:
https://github.com/apapiu/transformer_latent_diffusion
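A rough sketch of loading a checkpoint into the model code from that repo; the import path and constructor arguments are assumptions inferred from the hyperparameters above, so defer to the repo's own examples:

```python
import torch

# Hypothetical import path; check the repo for the actual module layout.
from tld.denoiser import Denoiser

# embed_dim and n_layers come from the overview above; any remaining
# constructor arguments are left at the repo's defaults.
model = Denoiser(embed_dim=512, n_layers=8)
model.load_state_dict(torch.load("278k-full_state_dict.pth", map_location="cpu"))
model.eval()
```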

# Datasets
https://huggingface.co/apapiu/small_ldt/tree/main

# Other
See this Reddit post by u/spring_m for more information:
https://www.reddit.com/r/MachineLearning/comments/198eiv1/p_small_latent_diffusion_transformer_from_scratch/