Ashraf-kasem committed cc33197 (1 parent: be20873)

Training in progress epoch 0
README.md CHANGED
@@ -14,10 +14,8 @@ probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Train Loss: 6.8590
-- Train Accuracy: 0.0463
-- Validation Loss: 6.3627
-- Validation Accuracy: 0.0277
+- Train Loss: 4.8667
+- Validation Loss: 4.5766
 - Epoch: 0
 
 ## Model description
@@ -37,19 +35,19 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 9, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
+- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 3892, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
 - training_precision: mixed_float16
 
 ### Training results
 
-| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
-|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
-| 6.8590 | 0.0463 | 6.3627 | 0.0277 | 0 |
+| Train Loss | Validation Loss | Epoch |
+|:----------:|:---------------:|:-----:|
+| 4.8667 | 4.5766 | 0 |
 
 
 ### Framework versions
 
-- Transformers 4.26.0.dev0
-- TensorFlow 2.10.0
+- Transformers 4.25.1
+- TensorFlow 2.9.0
 - Datasets 2.8.0
 - Tokenizers 0.13.2
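The optimizer dictionary in the updated card describes an AdamWeightDecay optimizer whose learning rate is a WarmUp schedule wrapping a PolynomialDecay: the rate ramps up over the first 1000 steps, then decays linearly from 5e-05 to 0 over 3892 steps. A minimal pure-Python sketch of that curve, assuming (as in transformers' `WarmUp` implementation) that the decay schedule is evaluated on `step - warmup_steps`:

```python
def lr_at(step,
          init_lr=5e-05,       # initial_learning_rate from the config
          warmup_steps=1000,   # warmup_steps of the WarmUp wrapper
          decay_steps=3892,    # decay_steps of the PolynomialDecay
          end_lr=0.0,          # end_learning_rate
          power=1.0):          # power (1.0 => linear warmup and decay)
    """Learning rate at a given global step: polynomial warmup to
    init_lr, then polynomial decay from init_lr down to end_lr."""
    if step < warmup_steps:
        # Warmup phase: scale init_lr by (step / warmup_steps) ** power.
        return init_lr * (step / warmup_steps) ** power
    # Decay phase: clamp at decay_steps so the rate bottoms out at end_lr.
    s = min(step - warmup_steps, decay_steps)
    return (init_lr - end_lr) * (1 - s / decay_steps) ** power + end_lr
```

With `power=1.0` this gives a simple triangle-shaped schedule: 0 at step 0, a peak of 5e-05 at step 1000, and 0 again from step 4892 onward.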
checkpoint/extra_data.pickle CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3375750977d80ab6edb52cc5bc194812b64761972ea69a32a58f7656f235c7bd
+oid sha256:bed9e2a4f2ed868e7517c109610526f3e345ac59d12de62b8037d8631d068a7e
 size 995530137
checkpoint/weights.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c31efc43715da42aca76f566437e67de18175f583ec7f0d2ffb97180d7670d04
+oid sha256:9c8888721188c45722117ef0d47917c03a4f94e2b8672ffc36c0c9bf9dd19b00
 size 497935440
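The checkpoint entries above are Git LFS pointer files: the repository stores only a three-line text stub (spec version, SHA-256 object id, byte size) while the actual binary lives in LFS storage, which is why each weight update changes only the `oid` line. A small illustrative parser for such a pointer, using the new `weights.h5` stub from this commit:

```python
def parse_lfs_pointer(text):
    """Split a Git LFS pointer file into its key/value fields.

    Each line of a pointer is "<key> <value>"; the three required
    keys are "version", "oid", and "size".
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:9c8888721188c45722117ef0d47917c03a4f94e2b8672ffc36c0c9bf9dd19b00\n"
    "size 497935440\n"
)
info = parse_lfs_pointer(pointer)
# info["oid"] carries the content hash; int(info["size"]) is the byte size.
```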
config.json CHANGED
@@ -32,7 +32,7 @@
 "max_length": 50
 }
 },
-"transformers_version": "4.26.0.dev0",
+"transformers_version": "4.25.1",
 "use_cache": true,
 "vocab_size": 50257
 }
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c31efc43715da42aca76f566437e67de18175f583ec7f0d2ffb97180d7670d04
+oid sha256:9c8888721188c45722117ef0d47917c03a4f94e2b8672ffc36c0c9bf9dd19b00
 size 497935440