cedpsam committed on
Commit 445b59c
1 Parent(s): 09c4fb2

update model card README.md

Files changed (1): README.md (+10 -7)
README.md CHANGED
@@ -1,5 +1,4 @@
 ---
-license: apache-2.0
 tags:
 - generated_from_trainer
 model-index:
@@ -12,7 +11,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # EleutherAI_gpt-neo-125M-stablediffionprompts
 
-This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an [Gustavosta/Stable-Diffusion-Prompts](https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts).
+This model was trained from scratch on the None dataset.
 
 ## Model description
 
@@ -32,17 +31,21 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 8
+- train_batch_size: 1024
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 3
+- training_steps: 44000
 - mixed_precision_training: Native AMP
 
+### Training results
+
+
+
 ### Framework versions
 
-- Transformers 4.22.2
-- Pytorch 1.12.1+cu113
-- Datasets 2.5.1
+- Transformers 4.20.1
+- Pytorch 1.11.0
+- Datasets 2.1.0
 - Tokenizers 0.12.1
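For reference, the updated hyperparameters in this commit can be collected into a plain Python dict. This is a minimal sketch, not the author's training script; the key names roughly follow Hugging Face `TrainingArguments` conventions (e.g. `training_steps` maps to `max_steps`, and Native AMP mixed precision maps to `fp16=True`), which is an assumption on my part.

```python
# Sketch only: the hyperparameters from the updated model card,
# gathered into one dict. Key names are assumptions modeled on
# Hugging Face TrainingArguments, not taken from the actual script.
hparams = {
    "learning_rate": 5e-05,
    "train_batch_size": 1024,
    "eval_batch_size": 8,
    "seed": 42,
    "adam_betas": (0.9, 0.999),   # optimizer: Adam
    "adam_epsilon": 1e-08,
    "lr_scheduler_type": "linear",
    "training_steps": 44000,       # replaces num_epochs: 3 from the old card
    "mixed_precision": "native_amp",
}

# Rough scale check: total training examples seen (steps x batch size).
examples_seen = hparams["training_steps"] * hparams["train_batch_size"]
```

The large jump from `train_batch_size: 8` to `1024` together with the switch from epoch-based to step-based training suggests a substantially longer run than the original three epochs.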