srivatsavaasista committed on
Commit 09dcbd7
Parent(s): da923d3

Training in progress epoch 0

Files changed (4):
  1. README.md +7 -26
  2. special_tokens_map.json +1 -0
  3. tf_model.h5 +1 -1
  4. tokenizer.json +6 -1
README.md CHANGED
@@ -3,20 +3,20 @@ license: mit
 tags:
 - generated_from_keras_callback
 model-index:
-- name: textgenerator
+- name: srivatsavaasista/textgenerator
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information Keras had access to. You should
 probably proofread and complete it, then remove this comment. -->
 
-# textgenerator
+# srivatsavaasista/textgenerator
 
 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Train Loss: 5.7309
-- Validation Loss: 6.3030
-- Epoch: 19
+- Train Loss: 7.5550
+- Validation Loss: 6.5004
+- Epoch: 0
 
 ## Model description
 
@@ -35,33 +35,14 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -887, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
+- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 398, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
 - training_precision: mixed_float16
 
 ### Training results
 
 | Train Loss | Validation Loss | Epoch |
 |:----------:|:---------------:|:-----:|
-| 10.2937    | 9.6742          | 0     |
-| 9.2524     | 8.8693          | 1     |
-| 8.2666     | 7.9135          | 2     |
-| 7.3757     | 7.3273          | 3     |
-| 6.9147     | 6.9741          | 4     |
-| 6.5844     | 6.7259          | 5     |
-| 6.3340     | 6.5383          | 6     |
-| 6.0966     | 6.3904          | 7     |
-| 5.8915     | 6.3030          | 8     |
-| 5.7314     | 6.3030          | 9     |
-| 5.7268     | 6.3030          | 10    |
-| 5.7300     | 6.3030          | 11    |
-| 5.7283     | 6.3030          | 12    |
-| 5.7314     | 6.3030          | 13    |
-| 5.7284     | 6.3030          | 14    |
-| 5.7323     | 6.3030          | 15    |
-| 5.7304     | 6.3030          | 16    |
-| 5.7292     | 6.3030          | 17    |
-| 5.7311     | 6.3030          | 18    |
-| 5.7309     | 6.3030          | 19    |
+| 7.5550     | 6.5004          | 0     |
 
 
 ### Framework versions
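The learning-rate schedule serialized in the optimizer hunk is a linear warmup to 5e-05 over 1,000 steps, followed by a polynomial decay with power 1.0 (i.e. a linear ramp) down to 0; this commit also replaces the previous run's negative `decay_steps` (-887) with 398. A minimal pure-Python sketch of that shape follows; it assumes decay begins right after warmup and is only an illustration of the config values, not the exact Keras `WarmUp`/`PolynomialDecay` implementation:

```python
# Sketch of the serialized schedule: linear warmup, then linear decay.
# Values are taken from the 'optimizer' entry in the README diff.
INIT_LR = 5e-05      # 'initial_learning_rate'
WARMUP_STEPS = 1000  # 'warmup_steps'
DECAY_STEPS = 398    # 'decay_steps' after this commit (was -887)
END_LR = 0.0         # 'end_learning_rate'

def learning_rate(step: int) -> float:
    if step < WARMUP_STEPS:
        # Linear warmup: ramp from 0 up to the peak learning rate.
        return INIT_LR * step / WARMUP_STEPS
    # Polynomial decay with power 1.0 is a straight line down to END_LR;
    # beyond DECAY_STEPS the schedule stays clamped at END_LR.
    progress = min(step - WARMUP_STEPS, DECAY_STEPS) / DECAY_STEPS
    return (INIT_LR - END_LR) * (1.0 - progress) + END_LR

# Example: mid-warmup, peak, and fully decayed.
lrs = [learning_rate(s) for s in (500, 1000, 1398)]
```

Note that with `warmup_steps` (1000) larger than `decay_steps` (398), the schedule spends most of training in warmup, which is one reason the exact Keras semantics may differ from this simplified reading.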
special_tokens_map.json CHANGED
@@ -1,5 +1,6 @@
 {
   "bos_token": "<|endoftext|>",
   "eos_token": "<|endoftext|>",
+  "pad_token": "<|endoftext|>",
   "unk_token": "<|endoftext|>"
 }
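GPT-2 ships without a native padding token, so reusing the end-of-text token as `pad_token` (as this hunk does) is a common workaround for batched training. A small sketch verifying that property on the updated file contents:

```python
import json

# Contents of special_tokens_map.json after this commit: all four special
# roles map onto GPT-2's single <|endoftext|> token.
special_tokens_map = json.loads("""
{
  "bos_token": "<|endoftext|>",
  "eos_token": "<|endoftext|>",
  "pad_token": "<|endoftext|>",
  "unk_token": "<|endoftext|>"
}
""")

# Padding reuses the end-of-text token, so padded positions share its id.
pad_equals_eos = special_tokens_map["pad_token"] == special_tokens_map["eos_token"]
```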
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:052e4d1272e318a233390071963b1352af2bd119d934dfe44dfb108bc13d1cac
+oid sha256:ff29c36a1d3cb11d0e3475533506fb2e5533b573645c1c20b245a84e594c57b9
 size 503289960
tokenizer.json CHANGED
@@ -1,6 +1,11 @@
 {
   "version": "1.0",
-  "truncation": null,
+  "truncation": {
+    "direction": "Right",
+    "max_length": 40,
+    "strategy": "LongestFirst",
+    "stride": 0
+  },
   "padding": null,
   "added_tokens": [
     {
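The tokenizer.json hunk enables truncation: encodings are now cut to at most 40 tokens from the right, with no overflow overlap (`stride: 0`); the `LongestFirst` strategy only matters for sentence pairs. A minimal sketch of what right-truncation of a single sequence amounts to (an illustration of the config, not the `tokenizers` library's internal code):

```python
MAX_LENGTH = 40  # "max_length" from the new truncation block

def truncate_right(ids: list[int], max_length: int = MAX_LENGTH) -> list[int]:
    # "direction": "Right" -> drop token ids from the end of the sequence;
    # shorter sequences pass through unchanged.
    return ids[:max_length]

# Example: a pretend 100-token encoding is cut down to the first 40 ids.
encoded = list(range(100))
clipped = truncate_right(encoded)
```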