ayshi committed
Commit 680ce60
Parent: 3a166a2

Training in progress epoch 0

Files changed (5)
  1. README.md +8 -17
  2. config.json +1 -1
  3. special_tokens_map.json +7 -0
  4. tf_model.h5 +1 -1
  5. tokenizer_config.json +11 -1
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 license: mit
-base_model: xlm-roberta-base
+base_model: ayshi/basic_roberta
 tags:
 - generated_from_keras_callback
 model-index:
@@ -13,12 +13,12 @@ probably proofread and complete it, then remove this comment. -->
 
 # ayshi/basic_roberta
 
-This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
+This model is a fine-tuned version of [ayshi/basic_roberta](https://huggingface.co/ayshi/basic_roberta) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Train Loss: 0.5068
-- Validation Loss: 0.8336
-- Train Accuracy: 0.7644
-- Epoch: 9
+- Train Loss: 0.5675
+- Validation Loss: 0.8138
+- Train Accuracy: 0.7556
+- Epoch: 0
 
 ## Model description
 
@@ -37,23 +37,14 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 320, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
+- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 960, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
 - training_precision: float32
 
 ### Training results
 
 | Train Loss | Validation Loss | Train Accuracy | Epoch |
 |:----------:|:---------------:|:--------------:|:-----:|
-| 1.3394     | 1.1667          | 0.6667         | 0     |
-| 1.1446     | 1.1147          | 0.6667         | 1     |
-| 1.0726     | 1.0547          | 0.6667         | 2     |
-| 0.9986     | 1.0134          | 0.6844         | 3     |
-| 0.8905     | 0.9697          | 0.7378         | 4     |
-| 0.7645     | 0.9131          | 0.7556         | 5     |
-| 0.6761     | 0.8459          | 0.76           | 6     |
-| 0.5992     | 0.7954          | 0.7778         | 7     |
-| 0.5088     | 0.8055          | 0.7733         | 8     |
-| 0.5068     | 0.8336          | 0.7644         | 9     |
+| 0.5675     | 0.8138          | 0.7556         | 0     |
 
 
 ### Framework versions
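
The only hyperparameter that changes between the two revisions is the learning-rate schedule's decay_steps (320 to 960). As a minimal sketch, the logged optimizer dict corresponds to the following Keras construction; the variable names are illustrative, not part of the commit:

```python
import tensorflow as tf

# PolynomialDecay with power=1.0 is a linear decay from 2e-05 down to 0.0;
# decay_steps rises from 320 to 960 in this commit, i.e. the new run
# schedules three times as many optimizer steps.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=960,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```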
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "xlm-roberta-base",
+  "_name_or_path": "ayshi/basic_roberta",
   "architectures": [
     "XLMRobertaForSequenceClassification"
   ],
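
"_name_or_path" records which checkpoint the weights were loaded from; after this commit it points at the hub repo itself rather than the upstream xlm-roberta-base. A minimal sketch of loading the published checkpoint, assuming the transformers library:

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Loads the TF weights (tf_model.h5) and config straight from the hub repo.
model = TFAutoModelForSequenceClassification.from_pretrained("ayshi/basic_roberta")
tokenizer = AutoTokenizer.from_pretrained("ayshi/basic_roberta")
```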
special_tokens_map.json CHANGED
@@ -1,4 +1,11 @@
 {
+  "additional_special_tokens": [
+    "<s>",
+    "<pad>",
+    "</s>",
+    "<unk>",
+    "<mask>"
+  ],
   "bos_token": "<s>",
   "cls_token": "<s>",
   "eos_token": "</s>",
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fd8a7258c0a6ebb8ee47331d7d2c492bcdbe5df37dec05e7800932b5cd775f68
+oid sha256:065b9a8885bdf0abc715afd0a6094bdaf9545e8e566addae7e99a08a706e7e73
 size 1112482624
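
The weights file is stored through Git LFS, so the repo tracks only a pointer (spec version, sha256 oid, byte size); the size is unchanged because only the weight values differ. A hedged sketch for checking a downloaded file against the pointer's digest; the local path is an assumption:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so the ~1.1 GB checkpoint is never fully in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

# Should match the oid in the new LFS pointer above.
assert sha256_of("tf_model.h5") == (
    "065b9a8885bdf0abc715afd0a6094bdaf9545e8e566addae7e99a08a706e7e73"
)
```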
tokenizer_config.json CHANGED
@@ -41,15 +41,25 @@
       "special": true
     }
   },
-  "additional_special_tokens": [],
+  "additional_special_tokens": [
+    "<s>",
+    "<pad>",
+    "</s>",
+    "<unk>",
+    "<mask>"
+  ],
   "bos_token": "<s>",
   "clean_up_tokenization_spaces": true,
   "cls_token": "<s>",
   "eos_token": "</s>",
   "mask_token": "<mask>",
+  "max_length": 512,
   "model_max_length": 512,
   "pad_token": "<pad>",
   "sep_token": "</s>",
+  "stride": 0,
   "tokenizer_class": "XLMRobertaTokenizer",
+  "truncation_side": "right",
+  "truncation_strategy": "longest_first",
   "unk_token": "<unk>"
 }
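
This commit pins the tokenizer's truncation defaults (max_length, stride, truncation_side, truncation_strategy) and mirrors the five special tokens into additional_special_tokens, matching the change in special_tokens_map.json. A minimal sketch of how these settings surface at load time; the sample text is illustrative:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ayshi/basic_roberta")

print(tokenizer.additional_special_tokens)
# ['<s>', '<pad>', '</s>', '<unk>', '<mask>']

# truncation=True picks up the stored longest_first strategy and
# right-side truncation; max_length=512 matches model_max_length.
encoding = tokenizer("a sample sentence", truncation=True, max_length=512)
```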