Sultannn committed on
Commit 3846c80
1 Parent(s): a1d6f73

Training in progress epoch 0

Files changed (5)
  1. README.md +5 -9
  2. config.json +1 -1
  3. tf_model.h5 +1 -1
  4. tokenizer.json +1 -1
  5. tokenizer_config.json +1 -1
README.md CHANGED
@@ -13,9 +13,9 @@ probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Train Loss: 5.6842
-- Validation Loss: 6.0821
-- Epoch: 4
+- Train Loss: 7.3561
+- Validation Loss: 6.5449
+- Epoch: 0
 
 ## Model description
 
@@ -34,18 +34,14 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 0.0006, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0006, 'decay_steps': 33845, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 700, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
+- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 0.0001, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0001, 'decay_steps': 35810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 700, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
 - training_precision: mixed_float16
 
 ### Training results
 
 | Train Loss | Validation Loss | Epoch |
 |:----------:|:---------------:|:-----:|
-| 6.9373 | 6.3278 | 0 |
-| 6.1036 | 6.0726 | 1 |
-| 5.8224 | 6.0130 | 2 |
-| 5.6876 | 5.9888 | 3 |
-| 5.6842 | 6.0821 | 4 |
+| 7.3561 | 6.5449 | 0 |
 
 
 ### Framework versions
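
The serialized optimizer dict in the README diff above is hard to read at a glance. As a rough guide only (not the author's actual training script), it corresponds to an AdamWeightDecay optimizer with a 700-step warmup into a power-1 polynomial decay, trained under mixed_float16 with dynamic loss scaling. A minimal sketch with transformers' `create_optimizer`, using the step counts from the new config:

```python
# A minimal sketch of the setup implied by the serialized optimizer above --
# not the author's actual script. Step counts follow the new README
# (decay_steps=35810, warmup_steps=700); create_optimizer subtracts warmup
# from num_train_steps, hence 36510 below.
import tensorflow as tf
from transformers import create_optimizer

# mixed_float16 training; initial_scale=32768 and growth every 2000 steps
# match the Keras dynamic loss-scaling defaults seen in the serialized blob.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

optimizer, lr_schedule = create_optimizer(
    init_lr=1e-4,             # initial_learning_rate in the new README
    num_train_steps=36510,    # 35810 polynomial-decay steps + 700 warmup steps
    num_warmup_steps=700,
    weight_decay_rate=0.01,   # AdamWeightDecay
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```

When the global Keras policy is mixed_float16, compiling a model with this optimizer wraps it in a dynamic LossScaleOptimizer, which is where the `initial_scale` and `dynamic_growth_steps` fields in the serialized blob come from.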
config.json CHANGED
@@ -10,7 +10,7 @@
   "initializer_range": 0.02,
   "layer_norm_epsilon": 1e-05,
   "model_type": "gpt2",
-  "n_ctx": 512,
+  "n_ctx": 256,
   "n_embd": 768,
   "n_head": 12,
   "n_inner": null,
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e0b5439034e568252566a81598810f99ed11dfca25896bd418ced8bb7d0c942a
+oid sha256:f279d4712d076f33fb9881e3f7fa5de1d76d23c8e951d315b0cd58294f538cc6
 size 451065960
tokenizer.json CHANGED
@@ -2,7 +2,7 @@
   "version": "1.0",
   "truncation": {
     "direction": "Right",
-    "max_length": 512,
+    "max_length": 256,
     "strategy": "LongestFirst",
     "stride": 0
   },
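
Taken together with the `n_ctx` change in config.json above and the matching `max_len` change in tokenizer_config.json below, this commit halves the usable context from 512 to 256 tokens. A rough, hedged check (the repo id `Sultannn/GPT-2-puisi` is a guess based on the committer name and the tokenizer's `name_or_path`, not something stated in this commit):

```python
# Rough check of the halved context window -- the repo id is an assumption.
from transformers import GPT2Config, GPT2TokenizerFast

config = GPT2Config.from_pretrained("Sultannn/GPT-2-puisi")
tokenizer = GPT2TokenizerFast.from_pretrained("Sultannn/GPT-2-puisi")

print(config.n_ctx)  # 256 after this commit (was 512)

# With truncation enabled, over-long inputs should now be cut at 256 tokens.
enc = tokenizer("kata " * 1000, truncation=True)
print(len(enc["input_ids"]))  # expected <= 256
```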
tokenizer_config.json CHANGED
@@ -18,7 +18,7 @@
     "single_word": false
   },
   "errors": "replace",
-  "max_len": 512,
+  "max_len": 256,
   "name_or_path": "./GPT-2-puisi",
   "pad_token": null,
   "special_tokens_map_file": null,