kiddothe2b committed on
Commit
a9807f1
1 Parent(s): fa85618

Initial commit

README.md CHANGED
@@ -1,3 +1,112 @@
---
license: cc-by-nc-sa-4.0
+ pipeline_tag: fill-mask
+ language: en
+ tags:
+ - long-documents
+ datasets:
+ - wikipedia
+ model-index:
+ - name: kiddothe2b/hierarchical-transformer-EC2-mini-1024
+   results: []
---
+
+ # Hierarchical Attention Transformer (HAT) / hierarchical-transformer-EC2-mini-1024
+
+ ## Model description
+
+ This is a Hierarchical Attention Transformer (HAT) model as presented in [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification (Chalkidis et al., 2022)](https://arxiv.org/abs/xxx).
+
+ The model has been warm-started by re-using the weights of a miniature BERT model [(Turc et al., 2019)](https://arxiv.org/abs/1908.08962), and further pre-trained on MLM following the paradigm of Longformer [(Beltagy et al., 2020)](https://arxiv.org/abs/2004.05150). It supports sequences of length up to 1,024.
+
+ HAT uses hierarchical attention, a combination of segment-wise and cross-segment attention operations. You can think of segments as paragraphs or sentences.
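+
+ To make the segment notion concrete, here is a minimal sketch (illustrative only, not the model's actual preprocessing code) of how a 1,024-token input maps onto the 8 segments × 128 tokens layout used by this checkpoint (see `max_sentences` and `max_sentence_length` in `config.json` below):
+
+ ```python
+ # Hypothetical token ids standing in for a tokenized document.
+ input_ids = list(range(1024))
+
+ # Split the flat sequence into HAT's segment grid: 8 segments of 128 tokens.
+ segments = [input_ids[i:i + 128] for i in range(0, len(input_ids), 128)]
+ assert len(segments) == 8
+
+ # Segment-wise attention operates inside each 128-token segment;
+ # cross-segment attention then exchanges information across segments.
+ ```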
+
+ ## Intended uses & limitations
+
+ You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
+ See the [model hub](https://huggingface.co/models?other=hierarchical-transformer) to look for fine-tuned versions on a task that interests you.
+
+ Note that this model is primarily aimed at being fine-tuned on tasks that use the whole document to make decisions, such as document classification, sequential sentence classification or question answering.
+
+ ## How to use
+
+ You can use this model directly with a pipeline for masked language modeling:
+
+ ```python
+ from transformers import pipeline
+
+ # trust_remote_code is needed because HAT ships its own modeling code.
+ mlm_model = pipeline('fill-mask', model='kiddothe2b/hierarchical-transformer-EC2-mini-1024', trust_remote_code=True)
+ mlm_model("Hello I'm a <mask> model.")
+ ```
+
+ You can also fine-tune it for sequence classification, sequential sentence classification, and multiple-choice downstream tasks:
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hierarchical-transformer-EC2-mini-1024", trust_remote_code=True)
+ doc_classifier = AutoModelForSequenceClassification.from_pretrained("kiddothe2b/hierarchical-transformer-EC2-mini-1024", trust_remote_code=True)
+ ```
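+
+ As a quick smoke test, the sketch below runs a single forward pass with the objects created above. The example text, and the assumption that the custom tokenizer accepts the standard `truncation`/`max_length`/`return_tensors` arguments, are illustrative rather than documented behavior:
+
+ ```python
+ import torch
+
+ # A hypothetical long input; anything up to 1,024 tokens should fit.
+ text = " ".join(["A fairly long document about hierarchical attention."] * 100)
+ inputs = tokenizer(text, truncation=True, max_length=1024, return_tensors="pt")
+
+ with torch.no_grad():
+     logits = doc_classifier(**inputs).logits
+ print(logits.argmax(dim=-1).item())  # index of the highest-scoring class
+ ```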
+
+ ## Limitations and bias
+
+ The training data used for this model contains a lot of unfiltered content from the internet, which is far from
+ neutral. Therefore, the model can produce biased predictions.
+
+
+ ## Training procedure
+
+ ### Training and evaluation data
+
+ The model has been warm-started from the [google/bert_uncased_L-6_H-256_A-4](https://huggingface.co/google/bert_uncased_L-6_H-256_A-4) checkpoint and further pre-trained for an additional 50k steps on English [Wikipedia](https://huggingface.co/datasets/wikipedia).
+
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0001
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 42
+ - distributed_type: tpu
+ - num_devices: 8
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 128
+ - total_eval_batch_size: 32
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - training_steps: 50000
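+
+ For orientation, the list above maps roughly onto a 🤗 `Trainer` configuration like the following. This is a sketch under the assumption that the standard `Trainer` API was used; the actual pre-training script is not part of this commit:
+
+ ```python
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="hat-mlm",           # hypothetical output path
+     learning_rate=1e-4,
+     per_device_train_batch_size=4,  # x 8 devices x 4 accumulation steps = 128 total
+     per_device_eval_batch_size=4,
+     seed=42,
+     gradient_accumulation_steps=4,
+     lr_scheduler_type="linear",
+     warmup_ratio=0.1,
+     max_steps=50_000,
+ )
+ ```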
+
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:-----:|:---------------:|
+ | 2.3798 | 0.2 | 10000 | 2.2014 |
+ | 2.3267 | 0.4 | 20000 | 2.1535 |
+ | 2.2976 | 0.6 | 30000 | 2.1234 |
+ | 2.2649 | 0.8 | 40000 | 2.1010 |
+ | 2.254 | 1.14 | 50000 | 2.0870 |
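+
+ For reference, the final validation loss of 2.0870 corresponds to a perplexity of exp(2.087) ≈ 8.06, which matches the `perplexity` field reported in `all_results.json` below.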
+
+
+ ### Framework versions
+
+ - Transformers 4.19.0.dev0
+ - PyTorch 1.11.0+cu102
+ - Datasets 2.0.0
+ - Tokenizers 0.11.6
+
+
+ ## Citing
+
+ If you use HAT in your research, please cite [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification](https://arxiv.org/abs/xxx):
+
+ ```
+ @misc{chalkidis-etal-2022-hat,
+   url = {https://arxiv.org/abs/xxx},
+   author = {Chalkidis, Ilias and Dai, Xiang and Fergadiotis, Manos and Malakasiotis, Prodromos and Elliott, Desmond},
+   title = {An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification},
+   publisher = {arXiv},
+   year = {2022},
+ }
+ ```
all_results.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "epoch": 1.14,
+   "eval_loss": 2.0871331691741943,
+   "eval_runtime": 3108.6506,
+   "eval_samples_per_second": 160.841,
+   "eval_steps_per_second": 5.026,
+   "perplexity": 8.061770272384594,
+   "train_loss": 2.33324001953125,
+   "train_runtime": 64035.2591,
+   "train_samples_per_second": 99.945,
+   "train_steps_per_second": 0.781
+ }
config.json ADDED
@@ -0,0 +1,83 @@
+ {
+   "_name_or_path": "kiddothe2b/hierarchical-transformer-EC2-mini-1024",
+   "architectures": [
+     "HATForMaskedLM"
+   ],
+   "auto_map": {
+     "AutoConfig": "configuration_hat.HATConfig",
+     "AutoTokenizer": "tokenization_hat.HATTokenizer",
+     "AutoModel": "modelling_hat.HATModel",
+     "AutoModelForMaskedLM": "modelling_hat.HATForMaskedLM",
+     "AutoModelForMultipleChoice": "modelling_hat.HATForMultipleChoice",
+     "AutoModelForQuestionAnswering": "modelling_hat.HATForQuestionAnswering",
+     "AutoModelForSequenceClassification": "modelling_hat.HATForSequenceClassification",
+     "AutoModelForTokenClassification": "modelling_hat.HATForTokenClassification"
+   },
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "encoder_layout": {
+     "0": {
+       "document_encoder": false,
+       "sentence_encoder": true
+     },
+     "1": {
+       "document_encoder": true,
+       "sentence_encoder": true
+     },
+     "2": {
+       "document_encoder": true,
+       "sentence_encoder": false
+     },
+     "3": {
+       "document_encoder": false,
+       "sentence_encoder": true
+     },
+     "4": {
+       "document_encoder": true,
+       "sentence_encoder": true
+     },
+     "5": {
+       "document_encoder": true,
+       "sentence_encoder": false
+     },
+     "6": {
+       "document_encoder": false,
+       "sentence_encoder": true
+     },
+     "7": {
+       "document_encoder": false,
+       "sentence_encoder": true
+     },
+     "8": {
+       "document_encoder": false,
+       "sentence_encoder": true
+     },
+     "9": {
+       "document_encoder": false,
+       "sentence_encoder": true
+     }
+   },
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 256,
+   "initializer_range": 0.02,
+   "intermediate_size": 1024,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 128,
+   "max_sentence_length": 128,
+   "max_sentence_size": 128,
+   "max_sentences": 8,
+   "model_max_length": 1024,
+   "model_type": "hierarchical-transformer",
+   "num_attention_heads": 4,
+   "num_hidden_layers": 10,
+   "output_past": true,
+   "pad_token_id": 0,
+   "parameters": 136350720,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.19.0.dev0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d7ff5e5a9788ead0e99961cfc4883d311ae47ae3d2b7a97588b50e1ee95322e
+ size 101179615
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 1024, "special_tokens_map_file": null, "name_or_path": "data/PLMs/hi-transformer-e2-grouped", "do_basic_tokenize": true, "never_split": null, "tokenizer_class": "BertTokenizer", "auto_map": {"AutoTokenizer": ["tokenization_hat.HATTokenizer", "tokenization_hat.HATTokenizer"]}}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff