eleldar committed
Commit
944ad3d
1 Parent(s): cdde679

cloned model

README.md ADDED
@@ -0,0 +1,117 @@
+ ---
+ license: mit
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ - f1
+ model-index:
+ - name: xlm-roberta-base-language-detection
+ results: []
+ ---
+
+ # xlm-roberta-base-language-detection
+
+ This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [Language Identification](https://huggingface.co/datasets/papluca/language-identification#additional-information) dataset.
+
+ ## Model description
+
+ This model is an XLM-RoBERTa transformer model with a classification head on top (i.e. a linear layer on top of the pooled output).
+ For additional information please refer to the [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) model card or to the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Conneau et al.
+
+ ## Intended uses & limitations
+
+ You can use this model directly as a language detector, i.e. for sequence classification tasks. It currently supports the following 20 languages:
+
+ `arabic (ar), bulgarian (bg), german (de), modern greek (el), english (en), spanish (es), french (fr), hindi (hi), italian (it), japanese (ja), dutch (nl), polish (pl), portuguese (pt), russian (ru), swahili (sw), thai (th), turkish (tr), urdu (ur), vietnamese (vi), and chinese (zh)`
+
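For convenience, the supported set can be expressed as a code-to-name mapping. This is a minimal sketch derived directly from the list above; the dict and helper below are illustrative, not an API of the model:

```python
# ISO 639-1 codes supported by the detector, taken verbatim from the list above.
SUPPORTED_LANGUAGES = {
    "ar": "arabic", "bg": "bulgarian", "de": "german", "el": "modern greek",
    "en": "english", "es": "spanish", "fr": "french", "hi": "hindi",
    "it": "italian", "ja": "japanese", "nl": "dutch", "pl": "polish",
    "pt": "portuguese", "ru": "russian", "sw": "swahili", "th": "thai",
    "tr": "turkish", "ur": "urdu", "vi": "vietnamese", "zh": "chinese",
}

def is_supported(code: str) -> bool:
    """Check whether a language code is covered by the detector."""
    return code in SUPPORTED_LANGUAGES
```

Text in any language outside this set will still be assigned one of the 20 labels, so checking membership first is a sensible guard.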
+ ## Training and evaluation data
+
+ The model was fine-tuned on the [Language Identification](https://huggingface.co/datasets/papluca/language-identification#additional-information) dataset, which consists of text sequences in 20 languages. The training set contains 70k samples, while the validation and test sets contain 10k samples each. The average accuracy on the test set is **99.6%** (this matches the average macro/weighted F1-score, as the test set is perfectly balanced). A more detailed evaluation is provided in the following table.
+
+ | Language | Precision | Recall | F1-score | Support |
+ |:--------:|:---------:|:------:|:--------:|:-------:|
+ |ar |0.998 |0.996 |0.997 |500 |
+ |bg |0.998 |0.964 |0.981 |500 |
+ |de |0.998 |0.996 |0.997 |500 |
+ |el |0.996 |1.000 |0.998 |500 |
+ |en |1.000 |1.000 |1.000 |500 |
+ |es |0.967 |1.000 |0.983 |500 |
+ |fr |1.000 |1.000 |1.000 |500 |
+ |hi |0.994 |0.992 |0.993 |500 |
+ |it |1.000 |0.992 |0.996 |500 |
+ |ja |0.996 |0.996 |0.996 |500 |
+ |nl |1.000 |1.000 |1.000 |500 |
+ |pl |1.000 |1.000 |1.000 |500 |
+ |pt |0.988 |1.000 |0.994 |500 |
+ |ru |1.000 |0.994 |0.997 |500 |
+ |sw |1.000 |1.000 |1.000 |500 |
+ |th |1.000 |0.998 |0.999 |500 |
+ |tr |0.994 |0.992 |0.993 |500 |
+ |ur |1.000 |1.000 |1.000 |500 |
+ |vi |0.992 |1.000 |0.996 |500 |
+ |zh |1.000 |1.000 |1.000 |500 |
+
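The per-language figures in the table are standard classification metrics. As a rough sketch (the function below is illustrative, not part of the training code), each row can be computed from per-class counts:

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall and F1 for one class from its counts.

    tp: true positives, fp: false positives, fn: false negatives.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical class with 498 of its 500 samples recovered and 1 false alarm.
p, r, f = precision_recall_f1(tp=498, fp=1, fn=2)
```

With 500 test samples per language, recall for a row is simply (correct predictions) / 500, which is why the support column is constant.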
+ ### Benchmarks
+
+ As a baseline to compare `xlm-roberta-base-language-detection` against, we used the Python [langid](https://github.com/saffsd/langid.py) library. Since it comes pre-trained on 97 languages, we used its `.set_languages()` method to constrain the language set to our 20 languages. The average accuracy of langid on the test set is **98.5%**. More details are provided in the table below.
+
+ | Language | Precision | Recall | F1-score | Support |
+ |:--------:|:---------:|:------:|:--------:|:-------:|
+ |ar |0.990 |0.970 |0.980 |500 |
+ |bg |0.998 |0.964 |0.981 |500 |
+ |de |0.992 |0.944 |0.967 |500 |
+ |el |1.000 |0.998 |0.999 |500 |
+ |en |1.000 |1.000 |1.000 |500 |
+ |es |1.000 |0.968 |0.984 |500 |
+ |fr |0.996 |1.000 |0.998 |500 |
+ |hi |0.949 |0.976 |0.963 |500 |
+ |it |0.990 |0.980 |0.985 |500 |
+ |ja |0.927 |0.988 |0.956 |500 |
+ |nl |0.980 |1.000 |0.990 |500 |
+ |pl |0.986 |0.996 |0.991 |500 |
+ |pt |0.950 |0.996 |0.973 |500 |
+ |ru |0.996 |0.974 |0.985 |500 |
+ |sw |1.000 |1.000 |1.000 |500 |
+ |th |1.000 |0.996 |0.998 |500 |
+ |tr |0.990 |0.968 |0.979 |500 |
+ |ur |0.998 |0.996 |0.997 |500 |
+ |vi |0.971 |0.990 |0.980 |500 |
+ |zh |1.000 |1.000 |1.000 |500 |
+
+ ## Training procedure
+
+ Fine-tuning was done via the `Trainer` API.
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 64
+ - eval_batch_size: 128
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 2
+ - mixed_precision_training: Native AMP
+
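With a linear scheduler and no warmup mentioned in the card, the learning rate decays from 2e-05 to zero over the 2188 training steps (2 epochs of 1094 steps, per the results table below). A minimal sketch of that schedule, assuming zero warmup steps:

```python
LEARNING_RATE = 2e-05  # from the hyperparameter list above
TOTAL_STEPS = 2188     # 2 epochs x 1094 steps per epoch (see the results table)

def linear_lr(step: int) -> float:
    """Linearly decay the learning rate to zero over training.

    Assumes no warmup phase, which the card does not mention.
    """
    remaining = max(0, TOTAL_STEPS - step)
    return LEARNING_RATE * remaining / TOTAL_STEPS
```

Halfway through training (step 1094, end of epoch 1) the rate is exactly half the peak, 1e-05.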
+ ### Training results
+
+ The validation results on the `valid` split of the Language Identification dataset are summarised below.
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
+ | 0.2492 | 1.0 | 1094 | 0.0149 | 0.9969 | 0.9969 |
+ | 0.0101 | 2.0 | 2188 | 0.0103 | 0.9977 | 0.9977 |
+
+ In short, the model achieves the following results on the validation set:
+ - Loss: 0.0101
+ - Accuracy: 0.9977
+ - F1: 0.9977
+
+ ### Framework versions
+
+ - Transformers 4.12.5
+ - Pytorch 1.10.0+cu111
+ - Datasets 1.15.1
+ - Tokenizers 0.10.3
config.json ADDED
@@ -0,0 +1,73 @@
+ {
+   "_name_or_path": "papluca/xlm-roberta-base-language-detection",
+   "architectures": [
+     "XLMRobertaForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "ja",
+     "1": "nl",
+     "2": "ar",
+     "3": "pl",
+     "4": "de",
+     "5": "it",
+     "6": "pt",
+     "7": "tr",
+     "8": "es",
+     "9": "hi",
+     "10": "el",
+     "11": "ur",
+     "12": "bg",
+     "13": "en",
+     "14": "fr",
+     "15": "zh",
+     "16": "ru",
+     "17": "th",
+     "18": "sw",
+     "19": "vi"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "ar": 2,
+     "bg": 12,
+     "de": 4,
+     "el": 10,
+     "en": 13,
+     "es": 8,
+     "fr": 14,
+     "hi": 9,
+     "it": 5,
+     "ja": 0,
+     "nl": 1,
+     "pl": 3,
+     "pt": 6,
+     "ru": 16,
+     "sw": 18,
+     "th": 17,
+     "tr": 7,
+     "ur": 11,
+     "vi": 19,
+     "zh": 15
+   },
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "xlm-roberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "output_past": true,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "problem_type": "single_label_classification",
+   "torch_dtype": "float32",
+   "transformers_version": "4.12.5",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 250002
+ }
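The `id2label` mapping above is what turns the classifier's 20 output logits into a language code. A minimal, self-contained sketch of that decoding step, using a subset of the mapping and made-up logits for illustration:

```python
import math

# Subset of the id2label mapping from config.json (the full map has 20 entries).
ID2LABEL = {0: "ja", 1: "nl", 2: "ar", 3: "pl", 4: "de"}

def decode(logits: list[float]) -> tuple[str, float]:
    """Return the predicted language code and its softmax probability."""
    shifted = [x - max(logits) for x in logits]       # numerically stable softmax
    exps = [math.exp(x) for x in shifted]
    total = sum(exps)
    best = max(range(len(logits)), key=logits.__getitem__)
    return ID2LABEL[best], exps[best] / total

# Illustrative logits strongly favouring index 2 ("ar").
lang, prob = decode([0.1, -1.2, 6.3, 0.0, 1.5])
```

This mirrors what a `single_label_classification` head does at inference time: argmax over the logits, with softmax giving the confidence.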
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb6bded160fdd712245e1bd19c4de417e1508094a9f69d92ae287f32a8732888
+ size 1112318701
sentencepiece.bpe.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cfc8146abe2a0488e9e2a0c56de7952f7c11ab059eca145a0a727afce0db2865
+ size 5069051
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d6417044a1451c9a5fd302579ee5d39bae3831b0cd57bd008b61e79d33156f6e
+ size 1112525696
tokenizer.json ADDED
The diff for this file is too large to render.
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "sep_token": "</s>", "cls_token": "<s>", "unk_token": "<unk>", "pad_token": "<pad>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "drive/MyDrive/Colab Notebooks/HuggingFace_course/HF_course_community_event/xlm-roberta-base-finetuned-language-detection", "tokenizer_class": "XLMRobertaTokenizer"}