robingeibel committed on
Commit
0ebee2b
1 Parent(s): 785a57c

Training in progress epoch 0

Files changed (4)
  1. README.md +6 -6
  2. config.json +2 -2
  3. tf_model.h5 +1 -1
  4. tokenizer_config.json +1 -1
README.md CHANGED
@@ -11,10 +11,10 @@ probably proofread and complete it, then remove this comment. -->
 
 # robingeibel/longformer-base-finetuned-big_patent
 
-This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on an unknown dataset.
+This model is a fine-tuned version of [robingeibel/longformer-base-finetuned-big_patent](https://huggingface.co/robingeibel/longformer-base-finetuned-big_patent) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Train Loss: 1.3006
-- Validation Loss: 1.1233
+- Train Loss: 1.2195
+- Validation Loss: 1.0887
 - Epoch: 0
 
 ## Model description
@@ -35,18 +35,18 @@ More information needed
 
 The following hyperparameters were used during training:
 - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 152946, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
-- training_precision: float32
+- training_precision: mixed_float16
 
 ### Training results
 
 | Train Loss | Validation Loss | Epoch |
 |:----------:|:---------------:|:-----:|
-| 1.3006     | 1.1233          | 0     |
+| 1.2195     | 1.0887          | 0     |
 
 
 ### Framework versions
 
-- Transformers 4.19.2
+- Transformers 4.19.3
 - TensorFlow 2.8.2
 - Datasets 2.2.2
 - Tokenizers 0.12.1
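The optimizer entry in the README above serializes an AdamWeightDecay optimizer whose learning rate warms up linearly for 1,000 steps and then decays polynomially (power 1.0, i.e. linearly) to 0 over 152,946 steps. As a rough illustration, here is a minimal plain-Python sketch of that schedule; it mirrors the serialized WarmUp + PolynomialDecay config, though the exact step bookkeeping inside transformers' TF optimizer may differ slightly:

```python
# Schedule constants taken from the serialized optimizer config above.
INITIAL_LR = 2e-5
WARMUP_STEPS = 1000
DECAY_STEPS = 152946
END_LR = 0.0
POWER = 1.0


def learning_rate(step: int) -> float:
    """Learning rate at a given global step (sketch, not the TF impl)."""
    if step < WARMUP_STEPS:
        # Linear warmup: rises from 0 to INITIAL_LR over WARMUP_STEPS.
        return INITIAL_LR * (step / WARMUP_STEPS) ** POWER
    # Polynomial (here linear) decay, counted from the end of warmup
    # and clamped at END_LR once DECAY_STEPS have elapsed.
    t = min(step - WARMUP_STEPS, DECAY_STEPS)
    frac = 1.0 - t / DECAY_STEPS
    return (INITIAL_LR - END_LR) * frac ** POWER + END_LR
```

For example, the rate is 1e-5 halfway through warmup, peaks at 2e-5 at step 1,000, and reaches 0 at step 153,946.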
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "allenai/longformer-base-4096",
+  "_name_or_path": "robingeibel/longformer-base-finetuned-big_patent",
   "architectures": [
     "LongformerForMaskedLM"
   ],
@@ -37,7 +37,7 @@
   "pad_token_id": 1,
   "position_embedding_type": "absolute",
   "sep_token_id": 2,
-  "transformers_version": "4.19.2",
+  "transformers_version": "4.19.3",
   "type_vocab_size": 1,
   "use_cache": true,
   "vocab_size": 50265
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f124e42a7410c1cc98e2d5fd4e8c00c85ca28403a90e732fb3346f55f1f937b1
+oid sha256:32ae97545e8f7375fcf0d3a3382a7cd606f9061ebc52660576ed7b9b78ece264
 size 762211788
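The tf_model.h5 change above touches only a Git LFS pointer file, not the ~762 MB weights themselves: a spec/v1 pointer stores just the spec version, the object's sha256 oid, and its size in bytes, which is why swapping the entire weights file is a one-line diff. A minimal sketch of how such a pointer decomposes (`parse_lfs_pointer` is a hypothetical helper written for illustration, not part of this repo or of git-lfs):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a git-lfs spec/v1 pointer into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>".
        key, _, value = line.partition(" ")
        fields[key] = value
    if fields.get("version") != "https://git-lfs.github.com/spec/v1":
        raise ValueError("not a git-lfs spec/v1 pointer")
    fields["size"] = int(fields["size"])  # size is bytes of the real object
    return fields


# The new pointer committed for tf_model.h5, verbatim from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:32ae97545e8f7375fcf0d3a3382a7cd606f9061ebc52660576ed7b9b78ece264
size 762211788"""
```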
tokenizer_config.json CHANGED
@@ -1 +1 @@
-{"errors": "replace", "bos_token": "<s>", "eos_token": "</s>", "sep_token": "</s>", "cls_token": "<s>", "unk_token": "<unk>", "pad_token": "<pad>", "mask_token": "<mask>", "add_prefix_space": false, "trim_offsets": true, "model_max_length": 4096, "special_tokens_map_file": null, "name_or_path": "allenai/longformer-base-4096", "tokenizer_class": "LongformerTokenizer"}
+{"errors": "replace", "bos_token": "<s>", "eos_token": "</s>", "sep_token": "</s>", "cls_token": "<s>", "unk_token": "<unk>", "pad_token": "<pad>", "mask_token": "<mask>", "add_prefix_space": false, "trim_offsets": true, "model_max_length": 4096, "special_tokens_map_file": null, "name_or_path": "robingeibel/longformer-base-finetuned-big_patent", "tokenizer_class": "LongformerTokenizer"}