{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:11:55.154037Z"
},
"title": "On Cross-Dataset Generalization in Automatic Detection of Online Abuse",
"authors": [
{
"first": "Isar",
"middle": [],
"last": "Nejadgholi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Research Council",
"location": {
"country": "Canada"
}
},
"email": "isar.nejadgholi@nrc-cnrc.gc.ca"
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Research Council",
"location": {
"country": "Canada"
}
},
"email": "svetlana.kiritchenko@nrc-cnrc.gc.ca"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "NLP research has attained high performance in abusive language detection as a supervised classification task. While in research settings, training and test datasets are usually obtained from similar data samples, in practice systems are often applied to data that differ from the training set in topic and class distributions. Also, the ambiguity in class definitions inherent in this task aggravates the discrepancies between source and target datasets. We explore the topic bias and the task formulation bias in cross-dataset generalization. We show that the benign examples in the Wikipedia Detox dataset are biased towards platform-specific topics. We identify these examples using unsupervised topic modeling and manual inspection of topics' keywords. Removing these topics increases cross-dataset generalization, without reducing in-domain classification performance. For a robust dataset design, we suggest applying inexpensive unsupervised methods to inspect the collected data and downsize the non-generalizable content before manually annotating for class labels.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "NLP research has attained high performance in abusive language detection as a supervised classification task. While in research settings, training and test datasets are usually obtained from similar data samples, in practice systems are often applied to data that differ from the training set in topic and class distributions. Also, the ambiguity in class definitions inherent in this task aggravates the discrepancies between source and target datasets. We explore the topic bias and the task formulation bias in cross-dataset generalization. We show that the benign examples in the Wikipedia Detox dataset are biased towards platform-specific topics. We identify these examples using unsupervised topic modeling and manual inspection of topics' keywords. Removing these topics increases cross-dataset generalization, without reducing in-domain classification performance. For a robust dataset design, we suggest applying inexpensive unsupervised methods to inspect the collected data and downsize the non-generalizable content before manually annotating for class labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The NLP research community has devoted significant efforts to support the safety and inclusiveness of online discussion forums by developing automatic systems to detect hurtful, derogatory or obscene utterances. Most of these systems are based on supervised machine learning techniques, and require annotated data. Several publicly available datasets have been created for the task (Mishra et al., 2019; Vidgen and Derczynski, 2020) . However, due to the ambiguities in the task definition and complexities of data collection, cross-dataset generalizability remains a challenging and understudied issue of online abuse detection.",
"cite_spans": [
{
"start": 382,
"end": 403,
"text": "(Mishra et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 404,
"end": 432,
"text": "Vidgen and Derczynski, 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Existing datasets differ in the types of offensive behaviour considered, annotation schemes, data sources, and data collection methods. There is no agreed-upon definition of harmful online behaviour yet. Several terms have been used to refer to the general concept of harmful online behavior, including toxicity (Hosseini et al., 2017) , hate speech (Schmidt and Wiegand, 2017) , offensive (Zampieri et al., 2019) and abusive language (Waseem et al., 2017; Vidgen et al., 2019a) . Still, in practice, every dataset only focuses on a narrow range of subtypes of such behaviours and a single online platform (Jurgens et al., 2019) . For example, one study annotated tweets for three categories (Racist, Offensive but not Racist, and Clean), while Nobata et al. (2016) collected discussions from Yahoo! Finance news and applied a binary annotation scheme of Abusive versus Clean. Further, since pure random sampling usually results in small proportions of offensive examples (Founta et al., 2018) , various sampling techniques are often employed. Zampieri et al. (2019) used words and phrases frequently found in offensive messages to search for potentially abusive tweets. Founta et al. (2018) and Razavi et al. (2010) started from random sampling, then boosted the abusive part of the datasets using specific search procedures. Hosseinmardi et al. (2015) used snowballing to collect abusive posts on Instagram. Due to this variability in category definitions and data collection techniques, a system trained on a particular dataset is prone to overfitting to the specific characteristics of that dataset. As a result, although models tend to perform well in cross-validation evaluation on one dataset, cross-dataset generalizability remains low (van Aken et al., 2018; Wiegand et al., 2019) .",
"cite_spans": [
{
"start": 314,
"end": 337,
"text": "(Hosseini et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 352,
"end": 379,
"text": "(Schmidt and Wiegand, 2017)",
"ref_id": "BIBREF20"
},
{
"start": 392,
"end": 415,
"text": "(Zampieri et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 437,
"end": 458,
"text": "(Waseem et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 459,
"end": 480,
"text": "Vidgen et al., 2019a)",
"ref_id": "BIBREF23"
},
{
"start": 608,
"end": 630,
"text": "(Jurgens et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 733,
"end": 753,
"text": "Nobata et al. (2016)",
"ref_id": "BIBREF15"
},
{
"start": 960,
"end": 981,
"text": "(Founta et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 1032,
"end": 1054,
"text": "Zampieri et al. (2019)",
"ref_id": "BIBREF30"
},
{
"start": 1157,
"end": 1177,
"text": "Founta et al. (2018)",
"ref_id": "BIBREF6"
},
{
"start": 1182,
"end": 1202,
"text": "Razavi et al. (2010)",
"ref_id": "BIBREF16"
},
{
"start": 1313,
"end": 1339,
"text": "Hosseinmardi et al. (2015)",
"ref_id": "BIBREF10"
},
{
"start": 1734,
"end": 1757,
"text": "(van Aken et al., 2018;",
"ref_id": "BIBREF0"
},
{
"start": 1758,
"end": 1779,
"text": "Wiegand et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we investigate the impact of two types of biases originating from source data that can emerge in a cross-domain application of models: 1) task formulation bias (discrepancy in class definitions and annotation between the training and test sets) and 2) selection bias (discrepancy in the topic and class distributions between the training and test sets). Further, we suggest topic-based dataset pruning as a method of mitigating selection bias to increase generalizability. This approach is different from domain adaptation techniques based on data selection (Ruder and Plank, 2017; Liu et al., 2019) in that we apply an unsupervised topic modeling method for topic discovery without using the class labels. We show that some topics are more generalizable than others. The topics that are specific to the training dataset lead to overfitting and, therefore, lower generalizability. Excluding or down-sampling instances associated with such topics before the expensive annotation step can substantially reduce the annotation costs.",
"cite_spans": [
{
"start": 571,
"end": 594,
"text": "(Ruder and Plank, 2017;",
"ref_id": "BIBREF19"
},
{
"start": 595,
"end": 612,
"text": "Liu et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We focus on the Wikipedia Detox or Wiki-dataset (an extension of the dataset by Wulczyn et al. (2017) ), collected from English Wikipedia talk pages and annotated for toxicity. To explore the generalizability of the models trained on this dataset, we create an out-of-domain test set comprising various types of abusive behaviours by combining two existing datasets, namely Waseem-dataset (Waseem and Hovy, 2016) and Founta-dataset (Founta et al., 2018) , both collected from Twitter.",
"cite_spans": [
{
"start": 80,
"end": 101,
"text": "Wulczyn et al. (2017)",
"ref_id": "BIBREF29"
},
{
"start": 388,
"end": 411,
"text": "(Waseem and Hovy, 2016)",
"ref_id": "BIBREF26"
},
{
"start": 416,
"end": 451,
"text": "Fountadataset (Founta et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main contributions are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We identify topics included in the Wiki-dataset and manually examine keywords associated with the topics to heuristically determine topics' generalizability and their potential association with toxicity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We assess the generalizability of the task formulations by training a classifier to detect the Toxic class in the Wiki-dataset and testing it on an out-of-domain dataset comprising various types of offensive behaviours. We find that Wiki-Toxic is most generalizable to Founta-Abusive and least generalizable to Waseem-Sexism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We show that re-sampling techniques result in a trade-off between the True Positive and True Negative rates on the out-of-domain test set. This trade-off is mainly governed by the ratio of toxic to normal instances and not the size of the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We investigate the impact of topic distribution on generalizability and show that general and identity-related topics are more generalizable than platform-specific topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We show that excluding Wikipedia-specific data instances (54% of the dataset) does not affect the results of in-domain classification, and improves both True Positive and True Negative rates on the out-of-domain test set, unlike re-sampling methods. Through unsupervised topic modeling, such topics can be identified and excluded before annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We focus on two types of biases originating from source data: task formulation bias and selection bias. Task formulation bias: In commercial applications, the definitions of offensive language heavily rely on community norms and context and, therefore, are imprecise, application-dependent, and constantly evolving (Chandrasekharan et al., 2018) . Similarly, in NLP research, despite having clear overlaps, offensive class definitions vary significantly from one study to another. For example, the Toxic class in the Wiki-dataset refers to aggressive or disrespectful utterances that would likely make participants leave the discussion. This definition of toxic language includes some aspects of racism, sexism and hateful behaviour. Still, as highlighted by Vidgen et al. (2019a) , identity-based abuse is fundamentally different from general toxic behavior. Therefore, the Toxic class definition used in the Wiki-dataset differs in its scope from the abuse-related categories as defined in the Waseem-dataset and Founta-dataset. Wiegand et al. (2019) converted various category sets to binary (offensive vs. normal) and demonstrated that a system trained on one dataset can identify other forms of abuse to some extent. We use the same methodology and examine different offensive categories in out-of-domain test sets to explore the deviation in a system's performance caused by the differences in the task definitions. Regardless of the task formulation, abusive language can be divided into explicit and implicit (Waseem et al., 2017) . Explicit abuse refers to utterances that include obscene and offensive expressions, such as stupid or scum, even though not all utterances that include obscene expressions are considered abusive in all contexts. Implicit abuse refers to more subtle harmful behaviours, such as stereotyping and micro-aggression. Explicit abuse is usually easier to detect by human annotators and automatic systems. 
Also, explicit abuse is more transferable between datasets as it is part of many definitions of online abuse, including personal attacks, hate speech, and identity-based abuse. The exact definition of implicit abuse, on the other hand, can vary substantially between task formulations as it is highly dependent on the context, the author and the receiver of an utterance (Wiegand et al., 2019) .",
"cite_spans": [
{
"start": 309,
"end": 339,
"text": "(Chandrasekharan et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 752,
"end": 773,
"text": "Vidgen et al. (2019a)",
"ref_id": "BIBREF23"
},
{
"start": 1023,
"end": 1044,
"text": "Wiegand et al. (2019)",
"ref_id": "BIBREF27"
},
{
"start": 1508,
"end": 1529,
"text": "(Waseem et al., 2017)",
"ref_id": "BIBREF25"
},
{
"start": 2299,
"end": 2321,
"text": "(Wiegand et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Biases Originating from Source Data",
"sec_num": "2"
},
{
"text": "Selection bias: Selection (or sampling) bias emerges when the source data, on which the model is trained, is not representative of the target data, on which the model is applied (Shah et al., 2020). We focus on two data characteristics affecting selection bias: topic distribution and class distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Biases Originating from Source Data",
"sec_num": "2"
},
{
"text": "In practice, every dataset covers a limited number of topics, and the topic distributions depend on many factors, including the source of data, the search mechanism and the timing of the data collection. For example, our source dataset, Wiki-dataset, consists of Wikipedia talk pages dating from 2004-2015. On the other hand, one of the sources of our target dataset, Waseem-dataset, consists of tweets collected using terms and references to specific entities that frequently occur in tweets expressing hate speech. As a result of its sampling strategy, Waseem-dataset includes many tweets on the topic of 'women in sports'. Wiegand et al. (2019) showed that different data sampling methods result in various distributions of topics, which affects the generalizability of trained classifiers, especially in the case of implicit abuse detection. Unlike explicit abuse, implicitly abusive behaviour comes in a variety of semantic and syntactic forms. To train a generalizable classifier, one requires a training dataset that covers a broad range of topics, each with a good representation of offensive examples. We continue this line of work and investigate the impact of topic bias on cross-dataset generalizability by identifying and changing the distribution of topics in controlled experiments.",
"cite_spans": [
{
"start": 625,
"end": 646,
"text": "Wiegand et al. (2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Biases Originating from Source Data",
"sec_num": "2"
},
{
"text": "The amount of online abuse on mainstream platforms varies greatly but is consistently very low. Founta et al. (2018) found that abusive tweets make up 0.1% to 3% of randomly collected datasets. Vidgen et al. (2019b) showed that, depending on the platform, the prevalence of abusive language can range between 0.001% and 8%. Despite various data sampling strategies aimed at increasing the proportion of offensive instances, the class imbalance (the difference in class sizes) in available datasets is often severe. When trained on highly imbalanced data, most statistical machine learning methods exhibit a bias towards the majority class, and their performance on a minority class, usually the class of interest, suffers. A number of techniques have been proposed to address class imbalance in data, including data re-sampling, cost-sensitive learning, and neural network specific learning algorithms (Branco et al., 2016; Haixiang et al., 2017; Johnson and Khoshgoftaar, 2019) . In practice, simple re-sampling techniques, such as down-sampling of over-represented classes, often improve the overall performance of the classifier (Johnson and Khoshgoftaar, 2019) . However, re-sampling techniques might lead to overfitting to one of the classes, causing a trade-off between True Positive and True Negative rates. When aggregated in an averaged metric such as F-score, this trade-off is usually overlooked.",
"cite_spans": [
{
"start": 104,
"end": 110,
"text": "(2018)",
"ref_id": null
},
{
"start": 185,
"end": 206,
"text": "Vidgen et al. (2019b)",
"ref_id": "BIBREF24"
},
{
"start": 891,
"end": 912,
"text": "(Branco et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 913,
"end": 935,
"text": "Haixiang et al., 2017;",
"ref_id": "BIBREF7"
},
{
"start": 936,
"end": 967,
"text": "Johnson and Khoshgoftaar, 2019)",
"ref_id": "BIBREF11"
},
{
"start": 1121,
"end": 1153,
"text": "(Johnson and Khoshgoftaar, 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Biases Originating from Source Data",
"sec_num": "2"
},
{
"text": "We exploit three large-scale, publicly available English datasets frequently used for the task of online abuse detection. Our main dataset, Wiki-dataset (Wulczyn et al., 2017) , is used as a training set. The out-of-domain test set is obtained by combining the other two datasets, Founta-dataset (Founta et al., 2018) and Waseem-dataset (Waseem and Hovy, 2016) .",
"cite_spans": [
{
"start": 153,
"end": 175,
"text": "(Wulczyn et al., 2017)",
"ref_id": "BIBREF29"
},
{
"start": 337,
"end": 360,
"text": "(Waseem and Hovy, 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "Training set: The Wiki-dataset includes 160K comments collected from English Wikipedia discussions and annotated as Toxic or Normal through crowd-sourcing 1 . Every comment is annotated by 10 workers, and the final label is obtained through majority voting. The class Toxic comprises rude, hateful, aggressive, disrespectful or unreasonable comments that are likely to make a person leave a conversation 2 . The dataset consists of randomly collected comments, augmented with comments made by users blocked for violating Wikipedia's policies to increase the proportion of toxic texts. This dataset contains 15,362 Toxic and 144,324 Normal instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "Out-of-Domain test set: The toxic portion of our test set is composed of four types of offensive language: Abusive and Hateful from the Founta-dataset, and Sexist and Racist from the Waseem-dataset. For the benign examples of our test set, we use the Normal class of the Founta-dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "The Founta-dataset is a collection of 80K tweets crowd-annotated for four classes: Abusive, Hateful, Spam and Normal. The data is randomly sampled and then boosted with tweets that are likely to belong to one or more of the minority classes by deploying an iterative data exploration technique. The Abusive class is defined as content with any strongly impolite, rude or hurtful language that shows a debasement of someone or something, or shows intense emotions. The Hateful class refers to tweets that express hatred towards a targeted individual or group, or are intended to be derogatory, to humiliate, or to insult members of a group, on the basis of attributes such as race, religion, ethnic origin, sexual orientation, disability, or gender. Spam refers to posts consisting of advertising/marketing, posts selling products of adult nature, links to malicious websites, phishing attempts and other unwanted information, usually sent repeatedly. Tweets that do not fall in any of the prior classes are labelled as Normal (Founta et al., 2018). We do not include the Spam class in our test set as this category does not, in general, constitute offensive language. The Founta-dataset contains 27,150 Abusive, 4,965 Hateful and 53,851 Normal instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "The Waseem-dataset includes 16K manually annotated tweets, labeled as Sexist, Racist or Neither. The corpus is collected by searching for common slurs and terms pertaining to minority groups as well as identifying tweeters that use these terms frequently. A tweet is annotated as Racist or Sexist if it uses a racial or sexist slur, attacks, seeks to silence, unjustifiably criticizes or misrepresents a minority, or defends xenophobia or sexism. Tweets that do not fall in these two classes are labeled as Neither (Waseem and Hovy, 2016) . The Neither class represents a mixture of benign and abusive (but not sexist or racist) instances, and, therefore, is excluded from our test set. The Waseem-dataset contains 3,430 Sexist and 1,976 Racist tweets.",
"cite_spans": [
{
"start": 514,
"end": 537,
"text": "(Waseem and Hovy, 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "We start by exploring the content of the Wiki-dataset through topic modeling. We train a topic model using the Online Latent Dirichlet Allocation (OLDA) algorithm (Hoffman et al., 2010) as implemented in the Gensim library (\u0158eh\u016f\u0159ek and Sojka, 2010) with the default parameters. Latent Dirichlet Allocation (LDA) (Blei et al., 2003) is a Bayesian probabilistic model of a collection of texts. Each text is assumed to be generated from a multinomial distribution over a given number of topics, and each topic is represented as a multinomial distribution over the vocabulary. We pre-process the texts by lemmatizing the words and removing the stop words. To determine the optimal number of topics, we use a coherence measure that calculates the degree of semantic similarity among the top words (R\u00f6der et al., 2015) . Top words are defined as the most probable words to be seen conditioned on a topic. We experimented with a range of topic numbers between 10 and 30 and obtained the maximal average coherence with 20 topics. Each topic is represented by 10 top words. For simplicity, each text is assigned the single topic that has the highest probability. The full list of topics and their top words is available in the Appendix.",
"cite_spans": [
{
"start": 162,
"end": 184,
"text": "(Hoffman et al., 2010)",
"ref_id": "BIBREF8"
},
{
"start": 311,
"end": 330,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF1"
},
{
"start": 792,
"end": 812,
"text": "(R\u00f6der et al., 2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Analysis of the Wiki-dataset",
"sec_num": "4"
},
{
"text": "We group the 20 extracted topics into three categories based on the coherency of the top words and their potential association with offensive language. This is done through manual examination of the 10 top words in each topic. Table 1 shows five out of ten top words for each topic that are most representative of the assigned category.",
"cite_spans": [],
"ref_spans": [
{
"start": 227,
"end": 234,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Topic Analysis of the Wiki-dataset",
"sec_num": "4"
},
{
"text": "The top words of two topics (topic 0 and topic 1) are general terms such as think, want, time, and life. This category forms 26% of the dataset. Since these topics appear incoherent, their association with offensiveness cannot be judged heuristically. Looking at the toxicity annotations we observe that 47% of the Toxic comments belong to these topics. These comments mostly convey personal insults, usually not tied to any identity group. The frequently used abusive terms in these Toxic comments include f*ck, stupid, idiot, *ss, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Category 1: incoherent or mixture of general topics",
"sec_num": null
},
{
"text": "Category 2: coherent, high association with offensive language",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Category 1: incoherent or mixture of general topics",
"sec_num": null
},
{
"text": "Seven of the topics can be associated with offensive language; their top words represent profanity or are related to identity groups frequently subjected to abuse. Topic 14 is the most explicitly offensive topic; nine out of ten top words are associated with insult and hatred. 97% of the instances belonging to this topic are annotated as Toxic, with 96% of them containing explicitly toxic words. 3 These are generic profanities with the word f*ck being the most frequently used word.",
"cite_spans": [
{
"start": 399,
"end": 400,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Category 1: incoherent or mixture of general topics",
"sec_num": null
},
{
"text": "The top words of the other six topics (topics 2, 7, 8, 9, 12, and 16) include either offensive words or terms related to identity groups based on gender, ethnicity, or religion. On average, 16% of the comments assigned to these topics are labeled as Toxic. We manually analyzed these comments, and found that each topic (except topic 12) tends to concentrate around a specific identity group. Offensive comments in topic 2 mostly contain sexual slur and target female and homosexual users. In topic 7, comments often contain racial and ethnicity based abuse. Topic 8 contains physical threats, often targeting Muslims and Jewish folks (the words die and kill are the most frequently used content words in the offensive messages of this topic). Comments in topic 9 involve many terms associated with Christianity (e.g., god, christian, Jesus). Topic 16 has the least amount of comments (0.3% of the dataset), with the offensive messages mostly targeting gay people (the word gay appears in 67% of the offensive messages in this topic). Topic 12 is comprised of personal attacks in the context of Wikipedia admin-contributor relations. The most common offensive words in this topic include f*ck, stupid, troll, ignorant, hypocrite, etc. 20% of the whole dataset and 35% of the comments labeled as Toxic belong to this category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Category 1: incoherent or mixture of general topics",
"sec_num": null
},
{
"text": "3 Following Wiegand et al. (2019) , we estimate the proportion of explicitly offensive instances in a dataset as the proportion of abusive instances that contain at least one word from the lexicon of abusive words by Wiegand et al. (2018) . Category 3: coherent, low association with offensive language The remaining eleven topics include top words specific to Wikipedia and not directly associated with offensive language. For example, keywords of topic 4 are terms such as page, Wikipedia, edit and article, and only 0.4% of the 10,471 instances in this topic are labeled as Toxic. These eleven topics comprise 54% of the comments in the dataset and 18% of the Toxic comments.",
"cite_spans": [
{
"start": 12,
"end": 33,
"text": "Wiegand et al. (2019)",
"ref_id": "BIBREF27"
},
{
"start": 217,
"end": 238,
"text": "Wiegand et al. (2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Category 1: incoherent or mixture of general topics",
"sec_num": null
},
{
"text": "We apply the LDA topic model trained on the Wikidataset as described in Section 4 to the Out-of-Domain test set. As before, each textual instance is assigned a single topic that has the highest probability. Table 2 summarizes the distribution of topics for all classes in the three datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 207,
"end": 214,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Topic Distribution of the Test Set",
"sec_num": "5"
},
{
"text": "Observe that Category 3 is the least represented category of topics across all classes, except for the Normal class in the Wiki-dataset. Specifically, there is a significant deviation in the topic distribution between the Wiki-Normal and the Founta-Normal classes. This deviation can be explained by the difference in data sources. Normal conversations on Twitter are more likely to be about general concepts covered in Category 1 or identity-related topics covered in Category 2 than the specific topics such as writing and editing in Category 3. Other than Waseem-Racist, which has 67% overlap with Category 2, all types of offensive behaviour in the three datasets have more overlap with the general topics (Category 1) than identity-related topics (Category 2). For example, for the Waseem-Sexist, 50% of instances fall under Category 1, 35% under Category 2 and 15% under Category 3. Topic 1, which is a mixture of general topics, is the dominant topic among the Waseem-Sexist tweets. Out of the topics in Category 2, most of the sexist tweets are matched to topic 2 (focused on sexism and homophobia) and topic 12 (general personal insults). Note that given the sizes of the positive and negative test classes, all other common metrics, such as various kinds of averaged F1-scores, can be calculated from the accuracies per class. In addition, we report macro-averaged F-score, weighted by the sizes of the negative and positive classes, to show the overall impact of the proposed method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Distribution of the Test Set",
"sec_num": "5"
},
{
"text": "Results: The overall performance of the classifier on the Out-of-Domain test set is quite high: weighted macro-averaged F 1 = 0.90. However, when the test set is broken down into the 20 topics of the Wiki-dataset and the accuracy is measured within the topics, the results vary greatly. For ex-ample, for the instances that fall under topic 14, the explicitly offensive topic, the F1-score is 0.99. For topic 15, a Wikipedia-specific topic, the F1-score is 0.80. Table 3 shows the overall accuracies for each test class as well as the accuracies for each topic category (described in Section 4) within each class.",
"cite_spans": [],
"ref_spans": [
{
"start": 463,
"end": 470,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Topic Distribution of the Test Set",
"sec_num": "5"
},
{
"text": "For the class Founta-Abusive, the classifier achieves 94% accuracy. 12% of the Founta-Abusive tweets fall under the explicitly offensive topic (topic 14), and those tweets are classified with a 100% accuracy. The accuracy score is highest on Category 2 and lowest on Category 3. For the Founta-Hateful class, the classifier recognizes 62% of the tweets correctly. The accuracy score is highest on Category 1 and lowest on Category 3. 8% of the Founta-Hateful tweets fall under the explicitly offensive topic (topic 14), and are classified with a 99% accuracy. For the Founta-Normal class, the classifier recognizes 96% of the tweets correctly. Unlike the Founta-Abusive and Founta-Hateful class, for the Founta-Normal class, the highest accuracy is achieved on Category 3. 0.1% of the Founta-Normal tweets fall under the explicitly offensive topic, and only 26% of them are classified correctly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Distribution of the Test Set",
"sec_num": "5"
},
{
"text": "The accuracy of the classifier on the Waseem-Sexist and Waseem-Racist classes is 0.26 and 0.35, respectively. This indicates that the Wiki-dataset, annotated for toxicity, is not well suited for detecting sexist or racist tweets. This observation could be explained by the fact that none of the coherent topics extracted from the Wiki-dataset is associated strongly with sexism or racism. Nevertheless, the tweets that fall under the explicit abuse topic (topic 14) are recognized with a 100% accuracy. Topic 8, which contains abuse mostly directed towards Jewish and Muslim people, is the most dominant topic in the Racist class (32% of the class) and the accuracy score on this topic is the highest, after the explicitly offensive topic. The Racist class overlaps the least with Category 3 (see Table 2 ), and the lowest accuracy score is obtained on this category. The definitions of the Toxic and Racist classes overlap mostly in general and identity-related abuse, therefore higher accuracy scores are obtained in Categories 1 and 2. Similar to Racist tweets, Sexist tweets have the least overlap and the lowest accuracy score on Category 3. The accuracy score is the highest on the explicitly offensive topic (100%) and varies substantially across other topics. ",
"cite_spans": [],
"ref_spans": [
{
"start": 797,
"end": 804,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Topic Distribution of the Test Set",
"sec_num": "5"
},
{
"text": "The generalizability of the classifier trained on the Wiki-dataset is affected by at least two factors: task formulation and topic distributions. The impact of task formulation: From task formulations described in Section 3, observe that the Wiki-dataset defines the class Toxic in a general way. The class Founta-Abusive is also a general formulation of offensive behaviour. The similarity of these two definitions is reflected clearly in our results. The classifier trained on the Wiki-dataset reaches 96% accuracy on the Founta-Abusive class. Unlike the Founta-Abusive class, the other three labels included in our analysis formulate a specific type of harassment against certain targets. Our topic analysis of the Wiki-dataset reveals that this dataset includes profanity and hateful content directed towards minority groups but the dataset is extremely unbalanced in covering these topics. Therefore, not only is the number of useful examples for learning these classes small, but the classification models do not learn these classes effectively because of the skewness of the training dataset. This observation is in line with the fact that the trained classifier detects some of the Waseem-Racist, Waseem-Sexist and Founta-Hateful tweets correctly, but overall performs poorly on these classes. The impact of topic distribution: Our analysis shows that independent of the class labels, for all the abuse-related test classes, the trained classifier performs worst when test examples fall under Category 3. Intuitively, this means that the platformspecific topics with low association with offensive language are least generalizable in terms of learn-ing offensive behaviour. Categories 1 and 2, which include a mixture of general and identity-related topics with high potential for offensiveness, have more commonalities across datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.1"
},
{
"text": "Our goal is to measure the impact of various topics on generalization. However, modifying the topic distribution will impact the class distribution and data size. To control for this, we first analyze the impact of class distribution and data size on the classifier's performance. Then, we study the effect of topic distribution by limiting the training data to different topic categories. Impact of class distribution: The class distribution in the Wiki-dataset is fairly imbalanced; the ratio of the size of Wiki-Toxic to Wiki-Normal is 1:10. Class imbalance can lead to poor predictive performance on minority classes, as most of the learning algorithms are developed with the assumption of the balanced class distribution. To investigate the impact of the class distribution on generalization, we keep all the Wiki-Toxic instances and randomly sample the Wiki-Normal class to build the training sets with various ratios of toxic to normal instances. Figure 1 shows the classifier's accuracy on the test classes when trained on subsets with different class distributions. Observe that with the increase of the Wiki-Normal class size in the training dataset, the accuracy on all offensive test classes decreases while the accuracy on the Founta-Normal class increases. The classifier assigns more instances to the the Normal class resulting in a lower True Positive (accuracy on the offensive classes) and a higher True Negative (accuracy on the Normal class) rates. The drop in accuracy is significant for the Waseem-Sexist, Waseem-Racist and Waseem-Hateful classes and relatively minor for the Founta-Abusive class. Note that the impact of the class distribution is not reflected in the overall F1-score. The classifier trained on a balanced data subset (with class size ratio of 1:1) reaches 0.896 weighted-averaged F1-score, which is very close to the F1-score of 0.899 resulted from training on the full dataset with the 1:10 class size ratio. 
However, in practice, the designers of such systems need to decide on the preferred class distribution depending on the distribution of classes in the test environment and the significance of the consequences of the False Positive and False Negative outcomes. Impact of dataset size: To investigate the impact of the size of the training set, we fix the class ratio at 1:1 and compare the classifier's performance when trained on data subsets of different sizes. We randomly select subsets from the Wiki-dataset with sizes of 10K (5K Toxic and 5K Normal instances) and 30K (15K Toxic and 15K Normal instances). Each experiment is repeated 5 times, and the averaged results are presented in Figure 2 . The height of the box shows the standard deviation of accuracies. Observe that the average accuracies remain unchanged when the dataset's size triples at the same class balance ratio. This finding contrasts with the general assumption that more training data results in a higher classification performance. Impact of topics: In order to measure the impact of topics covered in the training dataset, we compare the classifier's performance when trained on only one of the three categories of topics described in Section 4. To control for the effect of class balance and dataset size, we run the experiments for two cases of toxic-to-normal ratios, 3K-3K and 3K-27K. Each experiment is repeated 5 times, and the average accuracy per class is reported in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 954,
"end": 962,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 2641,
"end": 2649,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 3404,
"end": 3412,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Impact of Data Size, Class and Topic Distribution on Generalizability",
"sec_num": "7"
},
{
"text": "For both cases of class size ratios, shown in Figures 3a and 3b , we notice that the classifier trained on instances belonging to Category 3 reaches higher accuracies on the offensive classes, but a significantly lower accuracy on the Founta-Normal class. The benign part of Category 3 is overwhelmed by Wikipedia-specific examples. Therefore, utterances dissimilar to these topics are labelled as Toxic, leading to a high accuracy on the toxic classes and a low accuracy on the Normal class. This is an example of the negative impact of topic bias on the detection of offensive utterances.",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 63,
"text": "Figures 3a and 3b",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Impact of Data Size, Class and Topic Distribution on Generalizability",
"sec_num": "7"
},
{
"text": "In contrast, the classifiers trained on Categories 1 and 2 perform comparably across test classes. The classifier trained on Category 2 is slightly more effective in recognizing Founta-Hateful utterances, especially when the training set is balanced. This observation can be explained by a better representation of identity-related hatred in Category 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of Data Size, Class and Topic Distribution on Generalizability",
"sec_num": "7"
},
{
"text": "We showed that a classifier trained on instances from Category 3 suffers a big loss in accuracy on the Normal class. Here, we investigate how the performance of a classifier trained on the full Wikidataset changes when the Category 3 instances (all or the benign part only) are removed from the training set. Table 4 shows the results. Observe that removing the domain-specific benign examples, referred to as 'excl. C3 Normal' in Table 4 , improves the accuracies for all classes. As demonstrated in the previous experiments, this improvement cannot be attributed to the changes in the class balance ratio or the size of the training set, as both these factors cause a trade-off between True Positive and True Negative rates. Removing the Wikipediaspecific topics from the Wiki-dataset mitigates the topic bias and leads to this improvement. Similarly, when all the instances of Category 3 are removed from the training set ('excl. C3 all' in Table 4 ), the accuracy does not suffer and actually slightly improves on all classes, except Waseem-Racist. This is despite the fact that the training set has 58% less instances in the Normal class and 18% less instances in the Toxic class. The overall weighted-averaged F1-score on the full Out-of-Domain test set also slightly improves when the instances of Category 3 are excluded from the training data (Table 5) . Removing all the instances of Category 3 is particularly interesting since it can be done only with inspection of topics and without using the class labels.",
"cite_spans": [],
"ref_spans": [
{
"start": 309,
"end": 316,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 431,
"end": 438,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 944,
"end": 951,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 1352,
"end": 1361,
"text": "(Table 5)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Removing Platform-Specific Instances from the Training Set",
"sec_num": "8"
},
{
"text": "To assess the impact of removing Wikipediaspecific examples on in-domain classification, we train a model on the training set of the Wiki-dataset, with and without excluding Category 3 instances, and evaluate it on the full test set of the Wiki-dataset. We observe that the in-domain performance does not suffer from removing Category 3 from the training data (Table 5) .",
"cite_spans": [],
"ref_spans": [
{
"start": 360,
"end": 369,
"text": "(Table 5)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Removing Platform-Specific Instances from the Training Set",
"sec_num": "8"
},
{
"text": "In the task of online abuse detection, both False Positive and False Negative errors can lead to significant harm as one threatens the freedom of speech and ruins people's reputations, and the other ignores hurtful behaviour. Although balancing the class sizes has been traditionally exploited when dealing with imbalanced datasets, we showed that balanced class sizes may lead to high misclassification of normal utterances while improving the True Positive rates. This trade-off is not necessarily reflected in aggregated evaluation metrics such as F1-score but has important implications in real-life applications. We suggest evaluating each class (both positive and negative) separately taking Table 5 : Weighted macro-averaged F1-score for a classifier trained on portions of the Wiki-dataset and evaluated on the in-domain and out-of-domain test sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 698,
"end": 705,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "9"
},
{
"text": "into account the potential costs of different types of errors. Furthermore, our analysis reveals that for generalizability, the size of the dataset is not as important as the class and topic distributions. We analyzed the impact of the topics included in the Wiki-dataset and showed that mitigating the topic bias improves accuracy rates across all the out-of-domain positive and negative classes. Our results suggest that the sheer amount of normal comments included in the training datasets might not be necessary and can even be harmful for generalization if the topic distribution of normal topics is skewed. When the classifier is trained on Category 3 instances only (Figure 3) , the Normal class is attributed to the over-represented topics, leading to high misclassification of normal texts or high False Positive rates.",
"cite_spans": [],
"ref_spans": [
{
"start": 673,
"end": 683,
"text": "(Figure 3)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "9"
},
{
"text": "In general, when collecting new datasets, texts can be inspected through topic modeling using simple heuristics (e.g., keep topics related to demographic groups often subjected to abuse) in an attempt to balance the distribution of various topics and possibly sub-sample over-represented and less generalizable topics (e.g., high volumes of messages related to an incident with a celebrity figure happened during the data collection time) before the expensive annotation step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "9"
},
{
"text": "Our work highlights the importance of heuristic scrutinizing of topics in collected datasets before performing a laborious and expensive annotation. We suggest that unsupervised topic modeling and manual assessment of extracted topics can be used to mitigate the topic bias. In the case of the Wikidataset, we showed that more than half of the dataset can be safely removed without affecting either the in-domain or the out-of-domain performance. For future work, we recommend that topic analysis, augmentation of topics associated with offensive vocabulary and targeted demographics, and filtering of non-generalizable topics should be applied iteratively during data collection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "10"
},
{
"text": "https://meta.wikimedia.org/wiki/ Research:Detox/Data_Release 2 https://github.com/ewulczyn/ wiki-detox/blob/master/src/modeling/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Challenges for toxic comment classification: An in-depth error analysis",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Betty Van Aken",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Risch",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Krestel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "L\u00f6ser",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Betty van Aken, Julian Risch, Ralf Krestel, and Alexan- der L\u00f6ser. 2018. Challenges for toxic comment clas- sification: An in-depth error analysis. In Proceed- ings of the 2nd Workshop on Abusive Language On- line.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Latent Dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Ma- chine Learning Research, 3(Jan):993-1022.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A survey of predictive modeling on imbalanced domains",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Branco",
"suffix": ""
},
{
"first": "Lu\u00eds",
"middle": [],
"last": "Torgo",
"suffix": ""
},
{
"first": "Rita",
"middle": [
"P"
],
"last": "Ribeiro",
"suffix": ""
}
],
"year": 2016,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "49",
"issue": "2",
"pages": "1--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paula Branco, Lu\u00eds Torgo, and Rita P. Ribeiro. 2016. A survey of predictive modeling on imbalanced do- mains. ACM Computing Surveys (CSUR), 49(2):1- 50.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The Internet's hidden rules: An empirical study of Reddit norm violations at micro, meso, and macro scales",
"authors": [
{
"first": "Eshwar",
"middle": [],
"last": "Chandrasekharan",
"suffix": ""
},
{
"first": "Mattia",
"middle": [],
"last": "Samory",
"suffix": ""
},
{
"first": "Shagun",
"middle": [],
"last": "Jhaver",
"suffix": ""
},
{
"first": "Hunter",
"middle": [],
"last": "Charvat",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Bruckman",
"suffix": ""
},
{
"first": "Cliff",
"middle": [],
"last": "Lampe",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the ACM on Human-Computer Interaction",
"volume": "2",
"issue": "",
"pages": "1--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eshwar Chandrasekharan, Mattia Samory, Shagun Jhaver, Hunter Charvat, Amy Bruckman, Cliff Lampe, Jacob Eisenstein, and Eric Gilbert. 2018. The Internet's hidden rules: An empirical study of Reddit norm violations at micro, meso, and macro scales. Proceedings of the ACM on Human- Computer Interaction, 2(CSCW):1-25.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automated hate speech detection and the problem of offensive language",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Macy",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International AAAI Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the International AAAI Conference on Web and Social Media.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186, Minneapolis, Min- nesota.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Large scale crowdsourcing and characterization of Twitter abusive behavior",
"authors": [
{
"first": "Constantinos",
"middle": [],
"last": "Antigoni Maria Founta",
"suffix": ""
},
{
"first": "Despoina",
"middle": [],
"last": "Djouvas",
"suffix": ""
},
{
"first": "Ilias",
"middle": [],
"last": "Chatzakou",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Leontiadis",
"suffix": ""
},
{
"first": "Gianluca",
"middle": [],
"last": "Blackburn",
"suffix": ""
},
{
"first": "Athena",
"middle": [],
"last": "Stringhini",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Vakali",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Sirivianos",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kourtellis",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International AAAI Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antigoni Maria Founta, Constantinos Djouvas, De- spoina Chatzakou, Ilias Leontiadis, Jeremy Black- burn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of Twit- ter abusive behavior. In Proceedings of the Interna- tional AAAI Conference on Web and Social Media.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning from class-imbalanced data: Review of methods and applications",
"authors": [
{
"first": "Guo",
"middle": [],
"last": "Haixiang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Yijing",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Gu",
"middle": [],
"last": "Mingyun",
"suffix": ""
},
{
"first": "Gong",
"middle": [],
"last": "Huang Yuanyue",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bing",
"suffix": ""
}
],
"year": 2017,
"venue": "Expert Systems with Applications",
"volume": "73",
"issue": "",
"pages": "220--239",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guo Haixiang, Li Yijing, Jennifer Shang, Gu Mingyun, Huang Yuanyue, and Gong Bing. 2017. Learning from class-imbalanced data: Review of methods and applications. Expert Systems with Applications, 73:220-239.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Online learning for latent Dirichlet allocation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Hoffman",
"suffix": ""
},
{
"first": "Francis",
"middle": [
"R"
],
"last": "Bach",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2010,
"venue": "Advances in Neural Information Processing Systems 23",
"volume": "",
"issue": "",
"pages": "856--864",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Hoffman, Francis R. Bach, and David M. Blei. 2010. Online learning for latent Dirichlet allocation. In J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 856-864. Curran Associates, Inc.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Deceiving Google's Perspective API built for detecting toxic comments",
"authors": [
{
"first": "Hossein",
"middle": [],
"last": "Hosseini",
"suffix": ""
},
{
"first": "Sreeram",
"middle": [],
"last": "Kannan",
"suffix": ""
},
{
"first": "Baosen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Radha",
"middle": [],
"last": "Poovendran",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.08138"
]
},
"num": null,
"urls": [],
"raw_text": "Hossein Hosseini, Sreeram Kannan, Baosen Zhang, and Radha Poovendran. 2017. Deceiving Google's Perspective API built for detecting toxic comments. arXiv preprint arXiv:1702.08138.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Analyzing labeled cyberbullying incidents on the Instagram social network",
"authors": [
{
"first": "Homa",
"middle": [],
"last": "Hosseinmardi",
"suffix": ""
},
{
"first": "Sabrina",
"middle": [
"Arredondo"
],
"last": "Mattson",
"suffix": ""
},
{
"first": "Rahat",
"middle": [],
"last": "Ibn Rafiq",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Lv",
"suffix": ""
},
{
"first": "Shivakant",
"middle": [],
"last": "Mishra",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Social Informatics",
"volume": "",
"issue": "",
"pages": "49--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Homa Hosseinmardi, Sabrina Arredondo Mattson, Rahat Ibn Rafiq, Richard Han, Qin Lv, and Shivakant Mishra. 2015. Analyzing labeled cyberbullying incidents on the Instagram social network. In Proceedings of the International Conference on Social Informatics, pages 49-66.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Survey on deep learning with class imbalance",
"authors": [
{
"first": "Justin",
"middle": [
"M"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "Taghi",
"middle": [
"M"
],
"last": "Khoshgoftaar",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Big Data",
"volume": "6",
"issue": "1",
"pages": "27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justin M. Johnson and Taghi M. Khoshgoftaar. 2019. Survey on deep learning with class imbalance. Journal of Big Data, 6(1):27.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A just and comprehensive strategy for using NLP to address online abuse",
"authors": [
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Libby",
"middle": [],
"last": "Hemphill",
"suffix": ""
},
{
"first": "Eshwar",
"middle": [],
"last": "Chandrasekharan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3658--3666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Jurgens, Libby Hemphill, and Eshwar Chandrasekharan. 2019. A just and comprehensive strategy for using NLP to address online abuse. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3658-3666, Florence, Italy.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Reinforced training data selection for domain adaptation",
"authors": [
{
"first": "Miaofeng",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Hongbin",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1957--1968",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miaofeng Liu, Yan Song, Hongbin Zou, and Tong Zhang. 2019. Reinforced training data selection for domain adaptation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1957-1968, Florence, Italy.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Tackling online abuse: A survey of automated abuse detection methods",
"authors": [
{
"first": "Pushkar",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.06024"
]
},
"num": null,
"urls": [],
"raw_text": "Pushkar Mishra, Helen Yannakoudakis, and Ekaterina Shutova. 2019. Tackling online abuse: A survey of automated abuse detection methods. arXiv preprint arXiv:1908.06024.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Abusive language detection in online user content",
"authors": [
{
"first": "Chikashi",
"middle": [],
"last": "Nobata",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "Achint",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "145--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chikashi Nobata, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. Abusive language detection in online user content. In Proceedings of the International Conference on World Wide Web, pages 145-153.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Offensive language detection using multi-level classification",
"authors": [
{
"first": "Amir",
"middle": [
"H"
],
"last": "Razavi",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
},
{
"first": "Sasha",
"middle": [],
"last": "Uritsky",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Matwin",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Canadian Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "16--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir H Razavi, Diana Inkpen, Sasha Uritsky, and Stan Matwin. 2010. Offensive language detection using multi-level classification. In Proceedings of the Canadian Conference on Artificial Intelligence, pages 16-27.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Software Framework for Topic Modelling with Large Corpora",
"authors": [
{
"first": "Radim",
"middle": [],
"last": "\u0158eh\u016f\u0159ek",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim \u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Valletta, Malta. ELRA.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Exploring the space of topic coherence measures",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "R\u00f6der",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Both",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Hinneburg",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 8th ACM International Conference on Web Search and Data Mining",
"volume": "",
"issue": "",
"pages": "399--408",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael R\u00f6der, Andreas Both, and Alexander Hinneburg. 2015. Exploring the space of topic coherence measures. In Proceedings of the 8th ACM International Conference on Web Search and Data Mining, pages 399-408.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning to select data for transfer learning with Bayesian optimization",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "372--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder and Barbara Plank. 2017. Learning to select data for transfer learning with Bayesian optimization. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 372-382.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A survey on hate speech detection using natural language processing",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language processing. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media, pages 1-10.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Predictive biases in natural language processing models: A conceptual framework and overview",
"authors": [
{
"first": "Deven",
"middle": [
"Santosh"
],
"last": "Shah",
"suffix": ""
},
{
"first": "H",
"middle": [
"Andrew"
],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5248--5264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deven Santosh Shah, H. Andrew Schwartz, and Dirk Hovy. 2020. Predictive biases in natural language processing models: A conceptual framework and overview. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5248-5264.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Directions in abusive language training data: Garbage in, garbage out",
"authors": [
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.01670"
]
},
"num": null,
"urls": [],
"raw_text": "Bertie Vidgen and Leon Derczynski. 2020. Directions in abusive language training data: Garbage in, garbage out. arXiv preprint arXiv:2004.01670.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Challenges and frontiers in abusive content detection",
"authors": [
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Rebekah",
"middle": [],
"last": "Tromble",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Hale",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Margetts",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "80--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bertie Vidgen, Alex Harris, Dong Nguyen, Rebekah Tromble, Scott Hale, and Helen Margetts. 2019a. Challenges and frontiers in abusive content detection. In Proceedings of the Third Workshop on Abusive Language Online, pages 80-93, Florence, Italy.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "How much online abuse is there?",
"authors": [
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Margetts",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 2019,
"venue": "Alan Turing Institute",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bertie Vidgen, Helen Margetts, and Alex Harris. 2019b. How much online abuse is there? Alan Turing Institute. November, 27.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Understanding abuse: A typology of abusive language detection subtasks",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "78--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. In Proceedings of the First Workshop on Abusive Language Online, pages 78-84, Vancouver, BC, Canada.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL Student Research Workshop",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Detection of Abusive Language: the Problem of Biased Datasets",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kleinbauer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "602--608",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand, Josef Ruppenhofer, and Thomas Kleinbauer. 2019. Detection of Abusive Language: the Problem of Biased Datasets. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 602-608, Minneapolis, Minnesota.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Inducing a lexicon of abusive words - a feature-based approach",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Clayton",
"middle": [],
"last": "Greenberg",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1046--1056",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand, Josef Ruppenhofer, Anna Schmidt, and Clayton Greenberg. 2018. Inducing a lexicon of abusive words - a feature-based approach. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1046-1056, New Orleans, Louisiana.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Ex machina: Personal attacks seen at scale",
"authors": [
{
"first": "Ellery",
"middle": [],
"last": "Wulczyn",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 26th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "1391--1399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In Proceedings of the 26th International Conference on World Wide Web, pages 1391-1399.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "SemEval-2019 Task 6: Identifying and categorizing offensive language in social media (OffensEval)",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "75--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019. SemEval-2019 Task 6: Identifying and categorizing offensive language in social media (OffensEval). In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 75-86.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The classifier's performance on various classes when trained on subsets of the Wiki-dataset with specific class distributions.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "The classifier's average performance on various classes when trained on balanced subsets of the Wiki-dataset of different sizes.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "The classifier's performance on various classes when trained on specific topic categories.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Topics identified in the Wiki-dataset. For each topic, five of ten top words that are most representative of the assigned category are shown.",
"num": null
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Distribution of topic categories per class",
"num": null
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td>6 Generalizability of the Model Trained</td></tr><tr><td>on the Wiki-dataset</td></tr><tr><td>To explore how well the Toxic class from the Wiki-</td></tr><tr><td>dataset generalizes to other types of offensive be-</td></tr><tr><td>haviour, we train a binary classifier (Toxic vs. Nor-</td></tr><tr><td>mal) on the Wiki-dataset (combining the train, de-</td></tr><tr><td>velopment and test sets) and test it on the Out-</td></tr><tr><td>of-Domain Test set. This classifier is expected to</td></tr><tr><td>predict a positive (Toxic) label for the instances of</td></tr><tr><td>classes Founta-Abusive, Founta-Hateful, Waseem-</td></tr><tr><td>Sexism and Waseem-Racism, and a negative (Nor-</td></tr><tr><td>mal) label for the tweets in the Founta-Normal</td></tr><tr><td>class. We fine-tune a BERT-based classifier (De-</td></tr><tr><td>vlin et al., 2019) with a linear prediction layer, the</td></tr><tr><td>batch size of 16 and the learning rate of 2 \u00d7 10 \u22125</td></tr><tr><td>for 2 epochs.</td></tr><tr><td>Evaluation metrics: In order to investigate the</td></tr><tr><td>trade-off between the True Positive and True Nega-</td></tr><tr><td>tive rates, in the following experiments we report</td></tr><tr><td>accuracy per test class. Accuracy per class is cal-</td></tr><tr><td>culated as the rate of correctly identified instances</td></tr><tr><td>within a class. Accuracy over the toxic classes</td></tr><tr><td>(Founta-Abusive, Founta-Hateful, Waseem-Sexism</td></tr><tr><td>and Waseem-Racism) indicates the True Positive</td></tr><tr><td>rate, while accuracy of the normal class (Founta-</td></tr><tr><td>Normal) measures the True Negative rate.</td></tr></table>",
"html": null,
"text": "Accuracy per test class and topic category for a classifier trained on Wiki-dataset. Best results in each row are in bold.",
"num": null
},
"TABREF7": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Accuracy per Out-of-Domain test class for a classifier trained on the Wiki-dataset, and the Wikidataset with Category 3 instances (Normal only or all) excluded.",
"num": null
}
}
}
} |