---
language: multilingual
license: apache-2.0
datasets:
- wikipedia
---

# RemBERT (for classification)

RemBERT is a model pretrained on 110 languages using a masked language modeling (MLM) objective. It was introduced in the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821). A direct export of the model checkpoint was first made available in [this repository](https://github.com/google-research/google-research/tree/master/rembert). This version of the checkpoint is lightweight since it is meant to be fine-tuned for classification and excludes the output embedding weights.

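As a quick check that the checkpoint loads, it can be wrapped in a classification head with 🤗 Transformers. This is a minimal sketch: the repository id `google/rembert` and `num_labels=2` are assumptions for illustration, and the classification head is randomly initialized until the model is fine-tuned.

```python
# Minimal sketch -- the repository id "google/rembert" and num_labels=2
# are illustrative assumptions; substitute your own values.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = AutoModelForSequenceClassification.from_pretrained(
    "google/rembert", num_labels=2
)

inputs = tokenizer("RemBERT is a multilingual encoder.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2]); the head is untrained at this point
```
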
## Model description

RemBERT's main difference from mBERT is that the input and output embeddings are not tied. Instead, RemBERT uses small input embeddings and larger output embeddings. This makes the model more efficient since the output embeddings are discarded during fine-tuning. It is also more accurate, especially when the input embeddings' parameters are reinvested into the core model, as is done in RemBERT.

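To make the decoupling concrete, here is an illustrative PyTorch sketch of the idea. The dimensions are placeholders rather than RemBERT's actual configuration; the point is that the input embedding width, the encoder width, and the output embedding width are three independent sizes, so the output projection and MLM head can simply be dropped at fine-tuning time.

```python
import torch
import torch.nn as nn

# Placeholder sizes for illustration only -- not RemBERT's real configuration.
vocab_size, input_dim, hidden_dim, output_dim = 30_000, 256, 1152, 1536

input_embeddings = nn.Embedding(vocab_size, input_dim)  # small input embeddings
input_proj = nn.Linear(input_dim, hidden_dim)           # project up to encoder width
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=8, batch_first=True),
    num_layers=2,
)
output_proj = nn.Linear(hidden_dim, output_dim)  # larger output embedding space
mlm_head = nn.Linear(output_dim, vocab_size)     # only needed for pre-training

token_ids = torch.randint(0, vocab_size, (1, 16))
hidden = encoder(input_proj(input_embeddings(token_ids)))
mlm_logits = mlm_head(output_proj(hidden))
# The last two layers are what this checkpoint discards for fine-tuning.
print(hidden.shape, mlm_logits.shape)
```
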
## Intended uses & limitations

You should fine-tune this model for your downstream task: it is meant to be a general-purpose model, similar to mBERT. In our [paper](https://arxiv.org/abs/2010.12821), we successfully applied this model to tasks such as classification, question answering, NER, and POS tagging. For tasks such as text generation, you should look at models like GPT-2.

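A fine-tuning run then only needs the encoder plus a fresh task head. The sketch below uses the 🤗 `Trainer` on XNLI as a stand-in classification task; the repository id, dataset, and hyperparameters are placeholders, not values from the paper.

```python
# Fine-tuning sketch -- repository id, dataset, and hyperparameters are
# placeholders, not recommendations from the paper.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = AutoModelForSequenceClassification.from_pretrained(
    "google/rembert", num_labels=3  # XNLI has three labels
)

dataset = load_dataset("xnli", "en")

def tokenize(batch):
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, max_length=128)

encoded = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="rembert-xnli",
                           per_device_train_batch_size=16,
                           num_train_epochs=3),
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,  # also gives Trainer a padding collator
)
trainer.train()
```
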
## Training data

The RemBERT model was pretrained on multilingual Wikipedia data covering 110 languages. The full language list is in [this repository](https://github.com/google-research/google-research/tree/master/rembert).

### BibTeX entry and citation info

```bibtex
@inproceedings{DBLP:conf/iclr/ChungFTJR21,
  author    = {Hyung Won Chung and
               Thibault F{\'{e}}vry and
               Henry Tsai and
               Melvin Johnson and
               Sebastian Ruder},
  title     = {Rethinking Embedding Coupling in Pre-trained Language Models},
  booktitle = {9th International Conference on Learning Representations, {ICLR} 2021,
               Virtual Event, Austria, May 3-7, 2021},
  publisher = {OpenReview.net},
  year      = {2021},
  url       = {https://openreview.net/forum?id=xpFFI\_NtgpW},
  timestamp = {Wed, 23 Jun 2021 17:36:39 +0200},
  biburl    = {https://dblp.org/rec/conf/iclr/ChungFTJR21.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```