---
license: mit
---

# XLM-RoBERTa-based language-detection model (modern and medieval)

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the monasterium.net dataset.

## Model description

On top of the XLM-RoBERTa transformer model sits a classification head. Please refer to the [XLM-RoBERTa (base-sized model)](https://huggingface.co/xlm-roberta-base) card or to the paper [Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al.](https://arxiv.org/abs/1911.02116) for additional information about the underlying architecture.

## Intended uses & limitations

You can directly use this model as a language detector, i.e. for sequence classification tasks. Currently, it supports the following 41 modern and medieval languages:

Modern: Bulgarian (bg), Croatian (hr), Czech (cs), Danish (da), Dutch (nl), English (en), Estonian (et), Finnish (fi), French (fr), German (de), Greek (el), Hungarian (hu), Irish (ga), Italian (it), Latvian (lv), Lithuanian (lt), Maltese (mt), Polish (pl), Portuguese (pt), Romanian (ro), Slovak (sk), Slovenian (sl), Spanish (es), Swedish (sv), Russian (ru), Turkish (tr), Basque (eu), Catalan (ca), Albanian (sq), Serbian (se), Ukrainian (uk), Norwegian (no), Arabic (ar), Chinese (zh), Hebrew (he)

Medieval: Middle High German (mhd), Latin (la), Middle Low German (gml), Old French (fro), Old Church Slavonic (chu), Early New High German (fnhd)
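
A minimal usage sketch with the Transformers `pipeline` API is shown below. The model identifier is a placeholder, not the actual repository name; substitute the name under which this model is published on the Hub.

```python
from transformers import pipeline

# Placeholder model ID; replace it with this model's Hub repository name.
language_detector = pipeline("text-classification", model="<org>/<language-detection-model>")

# The pipeline returns the predicted language label with a confidence score,
# e.g. roughly [{'label': 'la', 'score': 0.99}] if the model's id2label
# mapping uses the language codes listed above.
print(language_detector("In principio creavit Deus caelum et terram."))
```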

## Training and evaluation data

The model was fine-tuned on the Monasterium and Wikipedia datasets, which consist of text sequences in 41 languages. The training set contains 80k samples, while the validation and test sets contain 16k samples each. The average accuracy on the test set is 99.59% (this matches the average macro/weighted F1-score, as the test set is perfectly balanced). A more detailed evaluation is provided in the training results table below.

## Training procedure

Fine-tuning was done via the Trainer API with WeightedLossTrainer.
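
The trainer code itself is not reproduced in this card. The sketch below shows one common way such a `WeightedLossTrainer` is written, assuming a subclass of `transformers.Trainer` that overrides `compute_loss` with class-weighted cross-entropy; the `class_weights` tensor is a placeholder that would normally be computed from the label distribution of the training set.

```python
import torch
from torch import nn
from transformers import Trainer

# Placeholder weights: one entry per supported language label. In practice
# these would be derived from the (inverse) label frequencies of the training set.
class_weights = torch.ones(41)

class WeightedLossTrainer(Trainer):
    """Trainer variant that swaps the default loss for class-weighted cross-entropy."""

    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.get("labels")
        outputs = model(**inputs)
        logits = outputs.get("logits")
        # Weighted cross-entropy: under-represented classes contribute more to the loss.
        loss_fct = nn.CrossEntropyLoss(weight=class_weights.to(logits.device))
        loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```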

## Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
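
For reference, these settings map onto `TrainingArguments` roughly as sketched below; the Adam betas/epsilon and the linear scheduler are the library defaults, and the output directory is a placeholder rather than a value taken from this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-langdetect",  # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=20,
    seed=42,
    lr_scheduler_type="linear",           # linear learning-rate decay (the default)
    num_train_epochs=3,
    fp16=True,                            # native AMP mixed-precision training
)
```

These arguments would then be passed to the `WeightedLossTrainer` sketched above.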

## Training results

| Training Loss | Validation Loss | F1       |
| ------------- | --------------- | -------- |
| 0.000300      | 0.048985        | 0.991585 |
| 0.000100      | 0.033340        | 0.994663 |
| 0.000000      | 0.032938        | 0.995979 |

## Framework versions

- Transformers 4.24.0
- Pytorch 1.8.0
- Datasets 2.6.1
- Tokenizers 0.13.3