kiddothe2b committed on
Commit
ab7b7bf
1 Parent(s): 469fa15

Initial commit

README.md CHANGED
@@ -1,3 +1,111 @@
  ---
  license: cc-by-nc-sa-4.0
+ pipeline_tag: fill-mask
+ language: en
+ tags:
+ - long_documents
+ datasets:
+ - c4
+ model-index:
+ - name: kiddothe2b/hat-base-4096
+   results: []
  ---
+
+ # Hierarchical Attention Transformer (HAT) / hat-base-4096
+
+ ## Model description
+
+ This is a Hierarchical Attention Transformer (HAT) model, as presented in [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification (Chalkidis et al., 2022)](https://arxiv.org/abs/xxx).
+
+ The model was warm-started from the weights of RoBERTa (Liu et al., 2019) and then further pre-trained with MLM on long sequences, following the paradigm of Longformer (Beltagy et al., 2020). It supports sequences of up to 4,096 tokens.
+
+ HAT uses hierarchical attention, a combination of segment-wise and cross-segment attention operations. You can think of segments as paragraphs or sentences.
+
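+ To make the segment hierarchy concrete, here is a minimal sketch (not the model's own preprocessing code) of how a long document could be chunked into the fixed-size segments that HAT attends over; the segment length of 128 tokens and the cap of 32 segments are taken from this repository's config.json, and the standard tokenizer `__call__` interface is assumed.
+
+ ```python
+ from transformers import AutoTokenizer
+
+ MAX_SENTENCE_LENGTH = 128  # tokens per segment, from config.json ("max_sentence_length")
+ MAX_SENTENCES = 32         # segments per document, from config.json ("max_sentences"); 32 * 128 = 4096
+
+ tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hat-base-4096", trust_remote_code=True)
+
+ def split_into_segments(text):
+     """Chunk a long document into fixed-size segments (illustration only)."""
+     ids = tokenizer(text, add_special_tokens=False)["input_ids"]
+     segments = [ids[i:i + MAX_SENTENCE_LENGTH]
+                 for i in range(0, len(ids), MAX_SENTENCE_LENGTH)]
+     return segments[:MAX_SENTENCES]
+
+ segments = split_into_segments("A very long document ... " * 1000)
+ print(len(segments), "segments of up to", MAX_SENTENCE_LENGTH, "tokens each")
+ ```
+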
+ ## Intended uses & limitations
+
+ You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
+ See the [model hub](https://huggingface.co/models?filter=hat) to look for fine-tuned versions on a task that
+ interests you.
+
+ Note that this model is primarily aimed at being fine-tuned on tasks that use the whole document to make decisions, such as document classification, sequential sentence classification or question answering.
+
+ ## How to use
+
+ You can use this model directly with a pipeline for masked language modeling:
+
+ ```python
+ from transformers import pipeline
+ mlm_model = pipeline('fill-mask', model='kiddothe2b/hat-base-4096', trust_remote_code=True)
+ mlm_model("Hello I'm a <mask> model.")
+ ```
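+
+ If you prefer working with model objects directly, the checkpoint can also be loaded through the Auto classes registered in this repository's config.json; a minimal sketch (the output shape is assumed to follow the usual masked-LM convention):
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForMaskedLM
+
+ tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hat-base-4096", trust_remote_code=True)
+ model = AutoModelForMaskedLM.from_pretrained("kiddothe2b/hat-base-4096", trust_remote_code=True)
+
+ inputs = tokenizer("Hello I'm a <mask> model.", return_tensors="pt")
+ outputs = model(**inputs)
+ print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)
+ ```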
+
+ You can also fine-tune it for SequenceClassification, SequentialSentenceClassification, and MultipleChoice downstream tasks:
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+ tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hat-base-4096", trust_remote_code=True)
+ doc_classifier = AutoModelForSequenceClassification.from_pretrained("kiddothe2b/hat-base-4096", trust_remote_code=True)
+ ```
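+
+ As a minimal, hypothetical sketch of what a document classifier built on this checkpoint looks like at inference time (the two-label setup and the example text are illustrative assumptions, not part of this repository):
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hat-base-4096", trust_remote_code=True)
+ doc_classifier = AutoModelForSequenceClassification.from_pretrained(
+     "kiddothe2b/hat-base-4096", num_labels=2, trust_remote_code=True  # num_labels is illustrative
+ )
+
+ # Long documents are truncated to the 4,096-token budget supported by the model.
+ inputs = tokenizer("A very long document ...", truncation=True, max_length=4096, return_tensors="pt")
+ with torch.no_grad():
+     logits = doc_classifier(**inputs).logits
+ print(logits.softmax(dim=-1))
+ ```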
+
+ ## Limitations and bias
+
+ The training data used for this model contains a lot of unfiltered content from the internet, which is far from
+ neutral. Therefore, the model can have biased predictions.
+
+
+ ## Training procedure
+
+ ### Training and evaluation data
+
+ The model was warm-started from the [roberta-base](https://huggingface.co/roberta-base) checkpoint and further pre-trained for an additional 50k steps on long sequences (> 1,024 subwords) of [C4](https://huggingface.co/datasets/c4) (Raffel et al., 2020).
+
+
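+ The exact preprocessing pipeline is not included in this repository, but the selection criterion above could be reproduced roughly as follows (treat the dataset name, config, and filtering as assumptions):
+
+ ```python
+ from datasets import load_dataset
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hat-base-4096", trust_remote_code=True)
+
+ # Stream English C4 and keep only documents longer than 1,024 subwords
+ # (a hypothetical reconstruction of the selection criterion described above).
+ c4 = load_dataset("c4", "en", split="train", streaming=True)
+ long_docs = (ex for ex in c4
+              if len(tokenizer(ex["text"], add_special_tokens=False)["input_ids"]) > 1024)
+ print(next(long_docs)["text"][:200])
+ ```
+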
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0001
+ - train_batch_size: 2
+ - eval_batch_size: 2
+ - seed: 42
+ - distributed_type: tpu
+ - num_devices: 8
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 128
+ - total_eval_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - training_steps: 50000
+
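+ These settings map naturally onto a `transformers` `TrainingArguments` object; the sketch below is a hypothetical reconstruction from the list above, not the actual training script:
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Hypothetical reconstruction of the hyperparameters listed above (output_dir is illustrative).
+ training_args = TrainingArguments(
+     output_dir="hat-base-4096-mlm",
+     learning_rate=1e-4,
+     per_device_train_batch_size=2,
+     per_device_eval_batch_size=2,
+     gradient_accumulation_steps=8,   # 2 per device x 8 devices x 8 steps = 128 effective batch
+     max_steps=50_000,
+     lr_scheduler_type="linear",
+     warmup_ratio=0.1,
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     seed=42,
+ )
+ ```
+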
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:-----:|:---------------:|
+ | 1.7437 | 0.2 | 10000 | 1.6370 |
+ | 1.6994 | 0.4 | 20000 | 1.6054 |
+ | 1.6726 | 0.6 | 30000 | 1.5718 |
+ | 1.644 | 0.8 | 40000 | 1.5526 |
+ | 1.6299 | 1.0 | 50000 | 1.5368 |
+
+
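+ Since masked-language-modeling perplexity is the exponential of the validation loss, the final row corresponds to a perplexity of roughly exp(1.5368) ≈ 4.65, consistent with the value reported in all_results.json below:
+
+ ```python
+ import math
+ # Perplexity = exp(cross-entropy loss); the table's final validation loss gives:
+ print(math.exp(1.5368))  # ~4.65; all_results.json reports 4.648 from eval_loss 1.5365
+ ```
+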
+ ### Framework versions
+
+ - Transformers 4.19.0.dev0
+ - Pytorch 1.11.0+cu102
+ - Datasets 2.0.0
+ - Tokenizers 0.11.6
+
+
+ ## Citing
+
+ If you use HAT in your research, please cite [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification](https://arxiv.org/abs/xxx):
+
+ ```
+ @misc{chalkidis-etal-2022-hat,
+   url = {https://arxiv.org/abs/xxx},
+   author = {Chalkidis, Ilias and Dai, Xiang and Fergadiotis, Manos and Malakasiotis, Prodromos and Elliott, Desmond},
+   title = {An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification},
+   publisher = {arXiv},
+   year = {2022},
+ }
+ ```
+
all_results.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "epoch": 1.0,
+   "eval_loss": 1.5364761352539062,
+   "eval_runtime": 607.3874,
+   "eval_samples_per_second": 37.153,
+   "eval_steps_per_second": 2.323,
+   "perplexity": 4.6481818133494235,
+   "train_loss": 1.339159200439453,
+   "train_runtime": 269343.0077,
+   "train_samples_per_second": 23.762,
+   "train_steps_per_second": 0.186
+ }
config.json ADDED
@@ -0,0 +1,93 @@
+ {
+   "_name_or_path": "kiddothe2b/hat-base-4096",
+   "architectures": [
+     "HiTransformerForMaskedLM"
+   ],
+   "auto_map": {
+     "AutoConfig": "configuration_hat.HATConfig",
+     "AutoTokenizer": "tokenization_hat.HATTokenizer",
+     "AutoModel": "modelling_hat.HATModel",
+     "AutoModelForMaskedLM": "modelling_hat.HATForMaskedLM",
+     "AutoModelForMultipleChoice": "modelling_hat.HATForMultipleChoice",
+     "AutoModelForQuestionAnswering": "modelling_hat.HATForQuestionAnswering",
+     "AutoModelForSequenceClassification": "modelling_hat.HATForSequenceClassification",
+     "AutoModelForTokenClassification": "modelling_hat.HATForTokenClassification"
+   },
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "encoder_layout": {
+     "0": {
+       "document_encoder": false,
+       "sentence_encoder": true
+     },
+     "1": {
+       "document_encoder": false,
+       "sentence_encoder": true
+     },
+     "10": {
+       "document_encoder": false,
+       "sentence_encoder": true
+     },
+     "11": {
+       "document_encoder": true,
+       "sentence_encoder": true
+     },
+     "2": {
+       "document_encoder": true,
+       "sentence_encoder": true
+     },
+     "3": {
+       "document_encoder": false,
+       "sentence_encoder": true
+     },
+     "4": {
+       "document_encoder": false,
+       "sentence_encoder": true
+     },
+     "5": {
+       "document_encoder": true,
+       "sentence_encoder": true
+     },
+     "6": {
+       "document_encoder": false,
+       "sentence_encoder": true
+     },
+     "7": {
+       "document_encoder": false,
+       "sentence_encoder": true
+     },
+     "8": {
+       "document_encoder": true,
+       "sentence_encoder": true
+     },
+     "9": {
+       "document_encoder": false,
+       "sentence_encoder": true
+     }
+   },
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 130,
+   "max_sentence_length": 128,
+   "max_sentence_size": 128,
+   "max_sentences": 32,
+   "model_max_length": 4096,
+   "model_type": "hi-transformer",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "output_past": true,
+   "pad_token_id": 1,
+   "parameters": 136350720,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.19.0.dev0",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 50265
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0657169a421e844f2d4782f24fb6435199c4445b0f40efdaf363d3750c53dd0c
+ size 766163359
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"errors": "replace", "bos_token": "<s>", "eos_token": "</s>", "sep_token": "</s>", "cls_token": "<s>", "unk_token": "<unk>", "pad_token": "<pad>", "mask_token": "<mask>", "add_prefix_space": false, "trim_offsets": true, "model_max_length": 4096, "special_tokens_map_file": null, "name_or_path": "kiddothe2b/hat-base-4096", "tokenizer_class": "RobertaTokenizer", "auto_map": {"AutoTokenizer": ["tokenization_hat.HATTokenizer", "tokenization_hat.HATTokenizer"]}}
vocab.json ADDED
The diff for this file is too large to render. See raw diff