Datasets: Genius1237
Genius1237 committed
Commit • dfc6306
1 Parent(s): f325b63
Add link to pretrained model, reorganize readme
README.md
CHANGED
@@ -47,6 +47,30 @@ data/
`data/binary` is a filtered version of the above in which only sentences from the top and bottom 25 percentiles of scores are present. This is the data we used for training and evaluation in the paper.
`data/unlabelled_train_sets`

If you use the English train or test data, please cite the Stanford Politeness Dataset
```
@inproceedings{danescu-niculescu-mizil-etal-2013-computational,
@@ -81,22 +105,3 @@ If you use the test data from the 9 target languages, please cite our paper
}

```
-
-## Code
-`politeness_regressor.py` is used for training and evaluation of transformer models
-
-To train a model
-```
-python politeness_regressor.py --train_file data/binary/en_train_binary.csv --test_file data/binary/en_test_binary.csv --model_save_location model.pt --pretrained_model xlm-roberta-large --gpus 1 --batch_size 4 --accumulate_grad_batches 8 --max_epochs 5 --checkpoint_callback False --logger False --precision 16 --train --test --binary --learning_rate 5e-6
-```
-
-To test this trained model on $lang
-```
-python politeness_regressor.py --test_file data/binary/${lang}_test_binary.csv --load_model model.pt --gpus 1 --batch_size 32 --test --binary
-```
-
-## Politeness Strategies
-`strategies` contains the processed strategy lexicon for different languages. `strategies/learnt_strategies.xlsx` contains the human-edited strategies for 4 languages
-
-## Annotation Interface
-`annotation.html` contains the UI used for conducting data annotation
+## Code
+`politeness_regressor.py` is used for training and evaluation of transformer models
+
+To train a model
+```
+python politeness_regressor.py --train_file data/binary/en_train_binary.csv --test_file data/binary/en_test_binary.csv --model_save_location model.pt --pretrained_model xlm-roberta-large --gpus 1 --batch_size 4 --accumulate_grad_batches 8 --max_epochs 5 --checkpoint_callback False --logger False --precision 16 --train --test --binary --learning_rate 5e-6
+```
+
+To test this trained model on $lang
+```
+python politeness_regressor.py --test_file data/binary/${lang}_test_binary.csv --load_model model.pt --gpus 1 --batch_size 32 --test --binary
+```
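To run the same evaluation across every language at once, a small driver script can call the command above in a loop. This is a minimal sketch, not part of the repository: it assumes the `${lang}_test_binary.csv` naming used above and that it is run from the repository root with `model.pt` already trained.

```
# Hypothetical helper: evaluate model.pt on every test set in data/binary.
import subprocess
from pathlib import Path

for test_file in sorted(Path("data/binary").glob("*_test_binary.csv")):
    lang = test_file.name.split("_")[0]
    print(f"Evaluating on {lang}")
    subprocess.run(
        ["python", "politeness_regressor.py",
         "--test_file", str(test_file),
         "--load_model", "model.pt",
         "--gpus", "1",
         "--batch_size", "32",
         "--test", "--binary"],
        check=True,
    )
```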
+
+## Pretrained Model
+XLM-RoBERTa Large fine-tuned on the English train set (as discussed and evaluated in the paper) can be found [here](https://huggingface.co/Genius1237/xlm-roberta-large-tydip)
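Outside of `politeness_regressor.py`, the checkpoint can also be loaded directly with the `transformers` library. This is a minimal sketch, assuming the uploaded checkpoint exposes a standard sequence-classification head; check the model card for the exact label order before relying on it.

```
# Minimal sketch: score sentences with the released checkpoint.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "Genius1237/xlm-roberta-large-tydip"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

sentences = ["Could you please take a look at this when you get a chance?"]
inputs = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # one row per sentence, one column per class
```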
+
+## Politeness Strategies
+`strategies` contains the processed strategy lexicon for different languages. `strategies/learnt_strategies.xlsx` contains the human-edited strategies for 4 languages
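To inspect the lexicon, the workbook can be opened with pandas. This is a minimal sketch that assumes nothing about the sheet layout beyond standard `.xlsx` structure (requires `pandas` and `openpyxl`).

```
# Minimal sketch: list the sheets and preview the human-edited strategies.
import pandas as pd

sheets = pd.read_excel("strategies/learnt_strategies.xlsx", sheet_name=None)
for name, frame in sheets.items():
    print(name, frame.shape)
    print(frame.head())
```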
+
+## Annotation Interface
+`annotation.html` contains the UI used for conducting data annotation
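If `annotation.html` is a self-contained static page, it can be previewed locally with Python's built-in HTTP server; a minimal sketch, assuming it is served from the repository root:

```
# Minimal sketch: serve annotation.html at http://localhost:8000/annotation.html
import http.server
import socketserver

with socketserver.TCPServer(("", 8000), http.server.SimpleHTTPRequestHandler) as httpd:
    print("Open http://localhost:8000/annotation.html")
    httpd.serve_forever()
```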
+
+## Citation
+
If you use the English train or test data, please cite the Stanford Politeness Dataset
```
@inproceedings{danescu-niculescu-mizil-etal-2013-computational,
}

```