Lennart Keller
committed on
Commit
•
bc04f3b
Parent(s):
80e4623
initial commit => add model
- README.md +91 -0
- all_results.json +14 -0
- config.json +42 -0
- eval_results.json +9 -0
- merges.txt +0 -0
- pytorch_model.bin +3 -0
- special_tokens_map.json +1 -0
- tokenizer.json +0 -0
- tokenizer_config.json +1 -0
- train_results.json +8 -0
- trainer_state.json +0 -0
- training_args.bin +3 -0
- vocab.json +0 -0
README.md
ADDED
@@ -0,0 +1,91 @@
---
tags:
- generated_from_trainer
model-index:
- name: first
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# first

This model is a fine-tuned version of [longformer-8192-aw512-gottbert-base](https://huggingface.co/longformer-8192-aw512-gottbert-base), trained on a 500-million-token subset of the German part of the OSCAR dataset.
It achieves the following results on the custom evaluation set:
- Loss: 1.4981

## Model description

The weights of the model are initialized from [gottbert-base](https://huggingface.co/uklfr/gottbert-base), the German version of RoBERTa.
The local attention windows have a fixed size of 512 tokens across all layers.
The maximum sequence length is 8192 tokens.

## Intended uses & limitations

Longformer models enable processing of long texts by combining local attention on every subword token with task-specific global attention on a small subset of tokens.
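A minimal usage sketch for masked language modeling (the identifier passed to `from_pretrained` is a placeholder for wherever this checkpoint is published; global attention is placed only on the leading `<s>` token):

```python
import torch
from transformers import AutoTokenizer, LongformerForMaskedLM

# Placeholder repo id / local path for this checkpoint.
model_id = "longformer-8192-aw512-gottbert-base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LongformerForMaskedLM.from_pretrained(model_id)

text = "Die Hauptstadt von Deutschland ist <mask>."
inputs = tokenizer(text, return_tensors="pt")

# Local attention everywhere; global attention only on the first (<s>) token.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

with torch.no_grad():
    logits = model(**inputs, global_attention_mask=global_attention_mask).logits

# Decode the most likely token at the masked position.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(-1)))
```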

## Training and evaluation data

The [OSCAR](https://oscar-corpus.com) dataset is a freely available corpus of filtered web texts from the Common Crawl in various languages. We used the 2017 version of the dataset.

## Training procedure

The model was trained with masked language modeling for 3 epochs on a custom 500M-token subset of the German portion of the OSCAR dataset.
It was validated on 5% of the original subset.

### Training hyperparameters

The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
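A hedged sketch of a `TrainingArguments` setup consistent with the values above; the original training script is not part of this repository, and `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="first",             # placeholder output directory
    learning_rate=3e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,  # 2 * 8 = total train batch size of 16
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    warmup_steps=500,
    seed=42,
    fp16=True,                      # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=500,                 # matches the 500-step evaluation interval in the table below
)
# These arguments would then be passed to a Trainer together with the model,
# the tokenized OSCAR subset, and an MLM data collator.
```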

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.5636        | 0.1   | 500   | 2.2399          |
| 2.0426        | 0.2   | 1000  | 1.8841          |
| 1.9653        | 0.3   | 1500  | 1.7807          |
| 1.9422        | 0.4   | 2000  | 1.7206          |
| 1.9323        | 0.49  | 2500  | 1.6800          |
| 1.7587        | 0.59  | 3000  | 1.6507          |
| 1.7239        | 0.69  | 3500  | 1.6316          |
| 1.7452        | 0.79  | 4000  | 1.6137          |
| 1.7415        | 0.89  | 4500  | 1.5983          |
| 1.7733        | 0.99  | 5000  | 1.5830          |
| 1.7656        | 1.09  | 5500  | 1.5735          |
| 1.6543        | 1.19  | 6000  | 1.5643          |
| 1.7131        | 1.28  | 6500  | 1.5546          |
| 1.6456        | 1.38  | 7000  | 1.5503          |
| 1.716         | 1.48  | 7500  | 1.5422          |
| 1.806         | 1.58  | 8000  | 1.5377          |
| 1.8407        | 1.68  | 8500  | 1.5327          |
| 1.6371        | 1.78  | 9000  | 1.5278          |
| 1.6453        | 1.88  | 9500  | 1.5231          |
| 1.7754        | 1.98  | 10000 | 1.5214          |
| 1.7695        | 2.08  | 10500 | 1.5165          |
| 1.7109        | 2.17  | 11000 | 1.5138          |
| 1.6992        | 2.27  | 11500 | 1.5107          |
| 1.6707        | 2.37  | 12000 | 1.5097          |
| 1.6835        | 2.47  | 12500 | 1.5040          |
| 1.7171        | 2.57  | 13000 | 1.5041          |
| 1.7257        | 2.67  | 13500 | 1.4990          |
| 1.6287        | 2.77  | 14000 | 1.5017          |
| 1.7737        | 2.87  | 14500 | 1.4983          |
| 1.4002        | 2.96  | 15000 | 1.4992          |


### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
all_results.json
ADDED
@@ -0,0 +1,14 @@
{
    "epoch": 3.0,
    "eval_loss": 1.4981467723846436,
    "eval_runtime": 720.2531,
    "eval_samples": 4150,
    "eval_samples_per_second": 5.762,
    "eval_steps_per_second": 1.441,
    "perplexity": 4.4733911717118096,
    "train_loss": 1.8764679098788928,
    "train_runtime": 196645.256,
    "train_samples": 80971,
    "train_samples_per_second": 1.235,
    "train_steps_per_second": 0.077
}
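The `perplexity` field above is simply the exponential of `eval_loss`; a quick check:

```python
import math

eval_loss = 1.4981467723846436
print(math.exp(eval_loss))  # 4.4733911717118096 — the reported "perplexity"
```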
config.json
ADDED
@@ -0,0 +1,42 @@
{
  "_name_or_path": "longformer-8192-aw512-gottbert-base",
  "architectures": [
    "LongformerForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "attention_window": [
    512,
    512,
    512,
    512,
    512,
    512,
    512,
    512,
    512,
    512,
    512,
    512
  ],
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 8194,
  "model_type": "longformer",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "sep_token_id": 2,
  "torch_dtype": "float32",
  "transformers_version": "4.15.0",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 52009
}
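A small sketch of how these settings can be inspected with `transformers` (the identifier passed to `from_pretrained` is a placeholder for this checkpoint's path or hub id):

```python
from transformers import LongformerConfig

# Placeholder path / repo id for this checkpoint's config.json.
config = LongformerConfig.from_pretrained("longformer-8192-aw512-gottbert-base")

# One 512-token local attention window per hidden layer.
assert len(config.attention_window) == config.num_hidden_layers == 12

# 8194 positions: the 8192-token maximum sequence length plus the
# RoBERTa-style offset of 2 for the padding index (an assumption based
# on pad_token_id=1 and model_max_length=8192 in the tokenizer config).
print(config.max_position_embeddings)
```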
eval_results.json
ADDED
@@ -0,0 +1,9 @@
{
    "epoch": 3.0,
    "eval_loss": 1.4981467723846436,
    "eval_runtime": 720.2531,
    "eval_samples": 4150,
    "eval_samples_per_second": 5.762,
    "eval_steps_per_second": 1.441,
    "perplexity": 4.4733911717118096
}
merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:79f23bf67414cfbc36407cebd8e438d48a5598e8b6c76df8086ff216397b22d8
size 612971355
special_tokens_map.json
ADDED
@@ -0,0 +1 @@
{"bos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "sep_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "cls_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true}}
tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer_config.json
ADDED
@@ -0,0 +1 @@
{"unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "errors": "replace", "sep_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "cls_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "trim_offsets": true, "special_tokens_map_file": null, "name_or_path": "longformer-8192-aw512-gottbert-base", "model_max_length": 8192, "tokenizer_class": "LongformerTokenizer"}
train_results.json
ADDED
@@ -0,0 +1,8 @@
{
    "epoch": 3.0,
    "train_loss": 1.8764679098788928,
    "train_runtime": 196645.256,
    "train_samples": 80971,
    "train_samples_per_second": 1.235,
    "train_steps_per_second": 0.077
}
trainer_state.json
ADDED
The diff for this file is too large to render.
See raw diff
training_args.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:be3c499d0f69835094a068b390628bea5a14bf75b7df7b3e594baf2303b77bb0
size 2991
vocab.json
ADDED
The diff for this file is too large to render.
See raw diff