---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# ALBERT XXLarge v1

Pretrained model on the English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, like all ALBERT models, is uncased: it does not make a difference
between english and English.

Disclaimer: The team releasing ALBERT did not write a model card for this model, so this model card has been written by
the Hugging Face team.

## Model description

ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like
GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.

This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.

ALBERT is particular in that it shares its layers across its Transformer: all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.
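
Conceptually, this cross-layer parameter sharing amounts to applying one set of layer weights repeatedly. A minimal sketch (illustrative only; a dense layer stands in for a full Transformer block):

```python
import tensorflow as tf

# One shared layer object: its weights are reused at every depth step.
shared_layer = tf.keras.layers.Dense(64, activation="relu")

def encode(hidden_states, num_layers=12):
    # Unlike BERT, which has num_layers distinct layers, ALBERT loops
    # over the same (shared) layer num_layers times.
    for _ in range(num_layers):
        hidden_states = shared_layer(hidden_states)
    return hidden_states

outputs = encode(tf.random.normal([2, 8, 64]))  # (batch, seq_len, dim)
```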

This is the first version of the xxlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.

This model has the following configuration:

- 12 repeating layers
- 128 embedding dimension
- 4096 hidden dimension
- 64 attention heads
- 223M parameters
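
These values can be checked against the checkpoint's configuration; a quick sketch using the `transformers` library (assuming the `albert-xxlarge-v1` checkpoint on the Hugging Face Hub):

```python
from transformers import AlbertConfig

# Load the configuration shipped with the checkpoint.
config = AlbertConfig.from_pretrained("albert-xxlarge-v1")

# Expected: 12 repeating layers, 128 embedding dim, 4096 hidden dim, 64 heads.
print(config.num_hidden_layers, config.embedding_size,
      config.hidden_size, config.num_attention_heads)
```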

## Intended uses & limitations

You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.

### How to use

You can use this model directly with a pipeline for masked language modeling (see the sketch after the example below). Here is how to use it in tf_transformers to extract the features of a given text:

```python
from tf_transformers.models import AlbertModel
from transformers import AlbertTokenizer

# The tokenizer comes from transformers; the model comes from tf_transformers.
tokenizer = AlbertTokenizer.from_pretrained("albert-xxlarge-v1")
model = AlbertModel.from_pretrained("albert-xxlarge-v1")

text = "Replace me by any text you'd like."
inputs = tokenizer(text, return_tensors="tf")

# tf_transformers expects different input key names than transformers produces.
inputs_tf = {}
inputs_tf["input_ids"] = inputs["input_ids"]
inputs_tf["input_type_ids"] = inputs["token_type_ids"]
inputs_tf["input_mask"] = inputs["attention_mask"]

# Forward pass; returns a dict of output tensors.
outputs_tf = model(inputs_tf)
```
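
You can also use the checkpoint directly with a `transformers` fill-mask pipeline; a minimal sketch:

```python
from transformers import pipeline

# fill-mask predicts the most likely tokens for the [MASK] position.
unmasker = pipeline("fill-mask", model="albert-xxlarge-v1")
unmasker("Hello I'm a [MASK] model.")
```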

## Training data

The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using SentencePiece with a vocabulary size of 30,000. The inputs of the model are
then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
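
For example, the tokenizer reproduces this layout for a sentence pair (a quick check with `transformers`; the printed string is approximate):

```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-xxlarge-v1")

# Encoding a pair of sentences inserts the special tokens automatically.
encoded = tokenizer("Sentence A", "Sentence B")
print(tokenizer.decode(encoded["input_ids"]))
# prints something like: [CLS] sentence a [SEP] sentence b [SEP]
```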

### Training

The ALBERT procedure follows the BERT setup.

The details of the masking procedure for each sentence are the following (an illustrative sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of cases, the masked tokens are left as is.
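
A minimal sketch of this 80/10/10 rule (illustrative only, not the actual training code; `vocab` is a hypothetical list of candidate tokens):

```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mask_prob=0.15):
    """Apply the BERT/ALBERT masking rule to a list of tokens."""
    labels = [None] * len(tokens)          # None = not selected, else the target token
    for i, token in enumerate(tokens):
        if random.random() >= mask_prob:   # only 15% of tokens are selected
            continue
        labels[i] = token                  # the model must recover the original
        r = random.random()
        if r < 0.8:                        # 80%: replace with [MASK]
            tokens[i] = mask_token
        elif r < 0.9:                      # 10%: replace with a random token
            tokens[i] = random.choice(vocab)
        # remaining 10%: leave the token unchanged
    return tokens, labels

tokens, labels = mask_tokens("the cat sat on the mat".split(), vocab=["dog", "ran"])
```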

## Evaluation results

When fine-tuned on downstream tasks, the ALBERT models achieve the following results:

| Model          | Average | SQuAD1.1 (F1/EM) | SQuAD2.0 (F1/EM) | MNLI | SST-2 | RACE |
|----------------|---------|------------------|------------------|------|-------|------|
| **V2**         |         |                  |                  |      |       |      |
| ALBERT-base    | 82.3    | 90.2/83.2        | 82.1/79.3        | 84.6 | 92.9  | 66.8 |
| ALBERT-large   | 85.7    | 91.8/85.2        | 84.9/81.8        | 86.5 | 94.9  | 75.2 |
| ALBERT-xlarge  | 87.9    | 92.9/86.4        | 87.9/84.1        | 87.9 | 95.4  | 80.7 |
| ALBERT-xxlarge | 90.9    | 94.6/89.1        | 89.8/86.9        | 90.6 | 96.8  | 86.8 |
| **V1**         |         |                  |                  |      |       |      |
| ALBERT-base    | 80.1    | 89.3/82.3        | 80.0/77.1        | 81.6 | 90.3  | 64.0 |
| ALBERT-large   | 82.4    | 90.6/83.9        | 82.3/79.4        | 83.5 | 91.7  | 68.5 |
| ALBERT-xlarge  | 85.5    | 92.5/86.1        | 86.1/83.1        | 86.4 | 92.4  | 74.8 |
| ALBERT-xxlarge | 91.0    | 94.8/89.3        | 90.2/87.4        | 90.8 | 96.9  | 86.5 |

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
  author        = {Zhenzhong Lan and
                   Mingda Chen and
                   Sebastian Goodman and
                   Kevin Gimpel and
                   Piyush Sharma and
                   Radu Soricut},
  title         = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
                   Representations},
  journal       = {CoRR},
  volume        = {abs/1909.11942},
  year          = {2019},
  url           = {http://arxiv.org/abs/1909.11942},
  archivePrefix = {arXiv},
  eprint        = {1909.11942},
  timestamp     = {Fri, 27 Sep 2019 13:04:21 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```