|
---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
- zero-shot-classification
language:
- en
tags:
- hate speech
- abuse detection
- toxicity
- robustness
- AAA
size_categories:
- 1K<n<10K
---
|
|
|
# PLEAD-AAA |
|
This test set was generated by applying the [AAA tool](https://github.com/Ago3/Adversifier) to the [PLEAD](https://huggingface.co/datasets/agostina3/PLEAD) dataset.
|
|
|
## Reference |
|
If you use this dataset, please cite both papers:
|
```
@article{calabrese-etal-2022-plead,
  author  = {Agostina Calabrese and
             Bj{\"{o}}rn Ross and
             Mirella Lapata},
  title   = {Explainable Abuse Detection as Intent Classification and Slot Filling},
  journal = {Transactions of the Association for Computational Linguistics},
  year    = {2022}
}

@inproceedings{calabrese2021aaa,
  title     = {{AAA}: Fair Evaluation for Abuse Detection Systems Wanted},
  author    = {Calabrese, Agostina and Bevilacqua, Michele and Ross, Bj{\"o}rn and Tripodi, Rocco and Navigli, Roberto},
  booktitle = {Proceedings of the 13th ACM Web Science Conference 2021},
  pages     = {243--252},
  year      = {2021}
}
```