Sean Halpin committed
Commit 6b3c4e8 • Parent: 8fcb394
update model card README.md
README.md
CHANGED
@@ -10,7 +10,7 @@ metrics: []
 <!-- This model card has been generated automatically according to the information the training script had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-#
+# diffusion_conditional
 
 ## Model description
 
@@ -37,8 +37,8 @@ on the `CelebA` dataset.
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0001
-- train_batch_size:
-- eval_batch_size:
+- train_batch_size: 32
+- eval_batch_size: 32
 - gradient_accumulation_steps: 1
 - optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08
 - lr_scheduler: cosine
@@ -50,5 +50,5 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-📈 [TensorBoard logs](https://huggingface.co/shalpin87/
+📈 [TensorBoard logs](https://huggingface.co/shalpin87/diffusion_conditional/tensorboard?#scalars)
 
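The `lr_scheduler: cosine` entry in the hyperparameters above can be sketched in plain Python. This is an illustrative reconstruction of a standard cosine-annealing schedule, not the actual training script; `base_lr` is set to the card's `learning_rate: 0.0001`, and the zero floor (`min_lr`) is an assumption.

```python
import math


def cosine_lr(step: int, total_steps: int,
              base_lr: float = 1e-4, min_lr: float = 0.0) -> float:
    """Cosine-annealed learning rate for a given training step.

    Decays smoothly from base_lr at step 0 to min_lr at total_steps,
    following 0.5 * (1 + cos(pi * progress)).
    """
    progress = step / total_steps
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))


# Schedule endpoints and midpoint with the card's learning rate:
print(cosine_lr(0, 1000))     # start of training: full base_lr
print(cosine_lr(500, 1000))   # midpoint: half of base_lr
print(cosine_lr(1000, 1000))  # end of training: min_lr
```

With `gradient_accumulation_steps: 1` and `train_batch_size: 32`, the effective batch size per optimizer step is simply 32.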