Update README.md
README.md (CHANGED)
## Performance

We evaluate our performance using [SENTIPOLC16 Evalita](http://www.di.unito.it/~tutreeb/sentipolc-evalita16/), which comes with a training set and a test set, so we can compare how different training datasets perform on the same SENTIPOLC16 test set. We collapsed the FEEL-IT classes into 2 by mapping joy to the *positive* class and anger, fear, and sadness to the *negative* class. Using a fine-tuned UmBERTo model, we compare three experimental configurations: training on FEEL-IT, on SENTIPOLC16, or on both, and testing on the SENTIPOLC16 test set.

The results show that training on FEEL-IT can provide better results on the SENTIPOLC16 test set than those obtained with the SENTIPOLC16 training set.

| Training Dataset | Macro-F1 | Accuracy |
| ---------------- | -------- | -------- |
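To make the evaluation setup concrete, here is a minimal Python sketch of the class collapsing and of the Macro-F1 metric reported above. The dictionary and function names are illustrative, not part of the FEEL-IT codebase, and the toy labels are made up for demonstration.

```python
# Collapse the four FEEL-IT emotion classes into binary sentiment, as in the
# evaluation above: joy -> positive; anger, fear, sadness -> negative.
FEELIT_TO_SENTIMENT = {
    "joy": "positive",
    "anger": "negative",
    "fear": "negative",
    "sadness": "negative",
}

def collapse_labels(labels):
    """Map FEEL-IT emotion labels to binary sentiment labels."""
    return [FEELIT_TO_SENTIMENT[label] for label in labels]

def macro_f1(y_true, y_pred, classes=("positive", "negative")):
    """Macro-F1: the unweighted mean of the per-class F1 scores."""
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)

# Toy example: gold emotion labels collapsed to sentiment vs. model predictions.
gold = collapse_labels(["joy", "anger", "fear", "sadness"])
pred = ["positive", "negative", "positive", "negative"]
print(gold)                 # ['positive', 'negative', 'negative', 'negative']
print(macro_f1(gold, pred))
```

Because Macro-F1 averages the per-class scores without weighting by class frequency, it does not reward a model for simply predicting the majority sentiment, which matters on imbalanced tweet data.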