Update README.md
README.md CHANGED

@@ -119,6 +119,8 @@ XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages.
 
 Disclaimer: The team releasing XLM-RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team.
 
+- **Languages:** English, Chinese, Indonesian, Malay, Thai, Vietnamese, Filipino, Tamil, Burmese, Khmer, Lao
+
 ## Model description
 
 XLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages.
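The model described above can be tried with the standard 🤗 Transformers fill-mask pipeline. A minimal sketch, assuming the upstream `xlm-roberta-base` checkpoint as the repo id — this diff does not show the model card's own repo id, so substitute it if different:

```python
# Hedged sketch: "xlm-roberta-base" is the upstream checkpoint id, used here
# as an assumption; swap in this model card's actual repo id if it differs.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="xlm-roberta-base")

# XLM-RoBERTa uses <mask> as its mask token.
results = fill_mask("Hello, I'm a <mask> model.")
for r in results[:3]:
    print(r["token_str"], round(r["score"], 3))
```

Each result dict carries the predicted token string, its score, and the filled sequence, which is the usual shape of fill-mask pipeline output.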