dardem committed
Commit 74ab718 · 1 Parent(s): 3d6f5b8

Update README.md

Files changed (1): README.md (+51 -1)
---
language:
- en
- fr
- it
- pt
tags:
- formality
license: cc-by-nc-sa-4.0
---

**Model Overview**

This is the model presented in the paper "Detecting Text Formality: A Study of Text Classification Approaches".

The base model is [mDistilBERT (base)](https://huggingface.co/distilbert-base-multilingual-cased). It was fine-tuned on [X-FORMAL](https://arxiv.org/abs/2104.04108), a multilingual formality classification corpus covering four languages: English (from [GYAFC](https://arxiv.org/abs/1803.06535)), French, Italian, and Brazilian Portuguese.
In our experiments, the model showed the best results among Transformer-based models on the cross-lingual formality classification knowledge transfer task. More details, code, and data can be found [here](https://github.com/s-nlp/formality).
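
As a rough illustration of this setup, here is a minimal fine-tuning sketch with the `transformers` Trainer API. The CSV paths, the `text`/`label` column names, and the hyperparameters are placeholders rather than the authors' exact pipeline, since X-FORMAL and GYAFC are distributed separately.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical reproduction sketch: obtain X-FORMAL/GYAFC yourself and export it
# as CSV files with a 'text' column and a binary 'label' column (formality class).
base = 'distilbert-base-multilingual-cased'
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

data = load_dataset('csv', data_files={'train': 'xformal_train.csv',
                                       'validation': 'xformal_dev.csv'})
data = data.map(lambda batch: tokenizer(batch['text'], truncation=True, max_length=128),
                batched=True)

args = TrainingArguments(output_dir='mdistilbert-formality',
                         num_train_epochs=3,
                         per_device_train_batch_size=32,
                         evaluation_strategy='epoch')
Trainer(model=model, args=args, tokenizer=tokenizer,
        train_dataset=data['train'], eval_dataset=data['validation']).train()
```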

**Evaluation Results**

Here, we report the accuracy of the best model from each category in the comparison, to give a sense of the range of scores. Accuracy is given for two setups: the multilingual model fine-tuned on each language separately, and fine-tuned on all languages together.
For cross-lingual experiment results, please refer to the paper.

|                   | En   | It   | Pt   | Fr   | All  |
|-------------------|------|------|------|------|------|
| bag-of-words      | 79.1 | 71.3 | 70.6 | 72.5 | ---  |
| CharBiLSTM        | 87.0 | 79.1 | 75.9 | 81.3 | 82.7 |
| mDistilBERT-cased | 86.6 | 76.8 | 75.9 | 79.1 | 79.4 |
| mDeBERTa-base     | 87.3 | 76.6 | 75.8 | 78.9 | 79.9 |

**How to use**
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = 's-nlp/mdistilbert-base-formality-ranker'  # this model's Hub repository id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
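
A minimal inference sketch building on the snippet above; no label names are assumed here, they are read from `model.config.id2label`:

```python
import torch

# Score a sentence with the loaded model and report the predicted formality label.
text = "I would appreciate it if you could reply at your earliest convenience."
inputs = tokenizer(text, return_tensors='pt', truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]
label = model.config.id2label[int(probs.argmax())]
print(label, [round(p, 3) for p in probs.tolist()])
```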

**Citation**
```
TBD
```

## Licensing Information

This model is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].

[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]

[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png