kiddothe2b committed • Commit eb0d58e • Parent(s): 75e9b16

Update README.md
README.md CHANGED
license: cc-by-nc-sa-4.0
pipeline_tag: fill-mask
language: en
arxiv: 2210.05529
tags:
- long_documents
datasets:

## Model description

[Longformer](https://arxiv.org/abs/2004.05150) is a transformer model for long documents. This version of Longformer is presented in [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification (Chalkidis et al., 2022)](https://arxiv.org/abs/2210.05529).

The model has been warm-started re-using the weights of miniature BERT [(Turc et al., 2019)](https://arxiv.org/abs/1908.08962), and then further pre-trained for MLM following the paradigm of Longformer released by [Beltagy et al. (2020)](https://arxiv.org/abs/2004.05150). It supports sequences of length up to 1,024.

Longformer uses a combination of sliding-window (local) attention and global attention. Global attention is user-configured based on the task to allow the model to learn task-specific representations.
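As a rough illustration of what "user-configured" global attention means, the sketch below marks only the first token for global attention and leaves every other position on the sliding-window pattern. It assumes this checkpoint accepts the standard Longformer `global_attention_mask` forward argument; since the model ships custom remote code, the exact interface may differ.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load the backbone with its custom remote code (illustrative sketch, not from the model card).
tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/longformer-mini-1024", trust_remote_code=True)
model = AutoModel.from_pretrained("kiddothe2b/longformer-mini-1024", trust_remote_code=True)

inputs = tokenizer("A very long document ...", truncation=True, max_length=1024, return_tensors="pt")

# 0 = local (sliding-window) attention, 1 = global attention.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # give the first (e.g. [CLS]) token global attention

outputs = model(**inputs, global_attention_mask=global_attention_mask)
print(outputs.last_hidden_state.shape)
```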

See the [model hub](https://huggingface.co/models?filter=longformer) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole document to make decisions, such as document classification, sequential sentence classification, or question answering.

## How to use

```python
from transformers import pipeline

mlm_model = pipeline('fill-mask', model='kiddothe2b/longformer-mini-1024', trust_remote_code=True)
mlm_model("Hello I'm a <mask> model.")
```

You can also fine-tune it for SequenceClassification, SequentialSentenceClassification, and MultipleChoice downstream tasks:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/longformer-mini-1024", trust_remote_code=True)
doc_classifier = AutoModelForSequenceClassification.from_pretrained("kiddothe2b/longformer-mini-1024", trust_remote_code=True)
```

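The snippet below is a minimal sketch of running the freshly loaded classifier on one long document, reusing `tokenizer` and `doc_classifier` from above; it mainly shows the 1,024-token truncation. The example text is a placeholder, and the classification head remains randomly initialized until the model is fine-tuned.

```python
import torch

# Placeholder document; anything longer than 1,024 tokens gets truncated.
long_document = "This agreement is entered into by and between ... " * 200

inputs = tokenizer(long_document, truncation=True, max_length=1024, return_tensors="pt")

with torch.no_grad():
    logits = doc_classifier(**inputs).logits  # shape: (1, num_labels)

print(logits.argmax(dim=-1).item())
```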

## Limitations and bias

## Citing

If you use HAT in your research, please cite:

[An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification](https://arxiv.org/abs/2210.05529). Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, and Desmond Elliott. 2022. arXiv:2210.05529 (Preprint).

```
@misc{chalkidis-etal-2022-hat,
  url = {https://arxiv.org/abs/2210.05529},
  author = {Chalkidis, Ilias and Dai, Xiang and Fergadiotis, Manos and Malakasiotis, Prodromos and Elliott, Desmond},
  title = {An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification},
  publisher = {arXiv},
  year = {2022},
}
```

Also cite the original work: [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150).

```
@article{Beltagy2020Longformer,
  title={Longformer: The Long-Document Transformer},
  author={Iz Beltagy and Matthew E. Peters and Arman Cohan},