robingeibel committed
Commit 09dfe23
1 Parent(s): 1b4561f

Training in progress epoch 0

Files changed (8)
  1. README.md +52 -0
  2. config.json +44 -0
  3. merges.txt +0 -0
  4. special_tokens_map.json +1 -0
  5. tf_model.h5 +3 -0
  6. tokenizer.json +0 -0
  7. tokenizer_config.json +1 -0
  8. vocab.json +0 -0
README.md ADDED
@@ -0,0 +1,52 @@
+ ---
+ tags:
+ - generated_from_keras_callback
+ model-index:
+ - name: robingeibel/longformer-base-finetuned-big_patent
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information Keras had access to. You should
+ probably proofread and complete it, then remove this comment. -->
+
+ # robingeibel/longformer-base-finetuned-big_patent
+
+ This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Train Loss: 1.3006
+ - Validation Loss: 1.1233
+ - Epoch: 0
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 152946, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
+ - training_precision: float32
+
+ ### Training results
+
+ | Train Loss | Validation Loss | Epoch |
+ |:----------:|:---------------:|:-----:|
+ | 1.3006     | 1.1233          | 0     |
+
+
+ ### Framework versions
+
+ - Transformers 4.19.2
+ - TensorFlow 2.8.2
+ - Datasets 2.2.2
+ - Tokenizers 0.12.1
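
The serialized optimizer dict in the card above matches what transformers' `create_optimizer` helper produces: AdamWeightDecay with a 1,000-step warmup to 2e-5, followed by linear (power 1.0) polynomial decay to 0 over 152,946 steps. A minimal sketch of reconstructing the same schedule, assuming those step counts:

```python
from transformers import create_optimizer

# Rebuild the schedule from the card: warm up to 2e-5 over 1,000 steps,
# then decay linearly to 0.0 over 152,946 total steps, using
# AdamWeightDecay(beta_1=0.9, beta_2=0.999, epsilon=1e-8, weight_decay_rate=0.01).
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=152946,
    num_warmup_steps=1000,
    weight_decay_rate=0.01,
)
```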
config.json ADDED
@@ -0,0 +1,44 @@
+ {
+   "_name_or_path": "allenai/longformer-base-4096",
+   "architectures": [
+     "LongformerForMaskedLM"
+   ],
+   "attention_mode": "longformer",
+   "attention_probs_dropout_prob": 0.1,
+   "attention_window": [
+     512,
+     512,
+     512,
+     512,
+     512,
+     512,
+     512,
+     512,
+     512,
+     512,
+     512,
+     512
+   ],
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "eos_token_id": 2,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "ignore_attention_mask": false,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 4098,
+   "model_type": "longformer",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "sep_token_id": 2,
+   "transformers_version": "4.19.2",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 50265
+ }
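
The config keeps a 512-token local attention window for each of the 12 layers, and max_position_embeddings is 4098 (the usable 4,096 positions plus Longformer's RoBERTa-style offset of 2). A quick way to inspect these fields once the commit is on the Hub:

```python
from transformers import LongformerConfig

config = LongformerConfig.from_pretrained(
    "robingeibel/longformer-base-finetuned-big_patent"
)
print(config.attention_window)         # [512, 512, ..., 512], one entry per layer
print(config.max_position_embeddings)  # 4098
print(config.architectures)            # ['LongformerForMaskedLM']
```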
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:39d08164b03398cfc774530ba1129a7228bd1ae2facc9c7d99646d0fb9e025be
+ size 762211788
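
tf_model.h5 is committed as a Git LFS pointer: only the three-line stub above lives in git, while the ~762 MB weights are stored out of band and can be verified against the pointer's sha256 oid. A sketch, assuming the real file has already been downloaded to the working directory:

```python
import hashlib

# Hash the downloaded weights in 1 MiB chunks and compare the digest
# with the oid recorded in the LFS pointer.
h = hashlib.sha256()
with open("tf_model.h5", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
assert h.hexdigest() == "39d08164b03398cfc774530ba1129a7228bd1ae2facc9c7d99646d0fb9e025be"
```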
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"errors": "replace", "bos_token": "<s>", "eos_token": "</s>", "sep_token": "</s>", "cls_token": "<s>", "unk_token": "<unk>", "pad_token": "<pad>", "mask_token": "<mask>", "add_prefix_space": false, "trim_offsets": true, "model_max_length": 4096, "special_tokens_map_file": null, "name_or_path": "allenai/longformer-base-4096", "tokenizer_class": "LongformerTokenizer"}
vocab.json ADDED
The diff for this file is too large to render. See raw diff