{
    "paper_id": "2020",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T02:10:10.121866Z"
    },
    "title": "Semi-Supervised Cleansing of Web Argument Corpora",
    "authors": [
        {
            "first": "Jonas",
            "middle": [],
            "last": "Dorsch",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Webis Group Bauhaus-Universit\u00e4t Weimar Weimar",
                "location": {
                    "country": "Germany"
                }
            },
            "email": "jonas.dorsch@uni-weimar.de"
        },
        {
            "first": "Henning",
            "middle": [],
            "last": "Wachsmuth",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Paderborn University",
                "location": {
                    "settlement": "Paderborn",
                    "country": "Germany"
                }
            },
            "email": "henningw@upb.de"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "Debate portals and similar web platforms constitute one of the main text sources in computational argumentation research and its applications. While the corpora built upon these sources are rich of argumentatively relevant content and structure, they also include text that is irrelevant, or even detrimental, to their purpose. In this paper, we present a precision-oriented approach to detecting such irrelevant text in a semi-supervised way. Given a few seed examples, the approach automatically learns basic lexical patterns of relevance and irrelevance and then incrementally bootstraps new patterns from sentences matching the patterns. In the existing args.me corpus with 400k argumentative texts, our approach detects almost 87k irrelevant sentences, at a precision of 0.97 according to manual evaluation. With low effort, the approach can be adapted to other web argument corpora, providing a generic way to improve corpus quality.",
    "pdf_parse": {
        "paper_id": "2020",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "Debate portals and similar web platforms constitute one of the main text sources in computational argumentation research and its applications. While the corpora built upon these sources are rich of argumentatively relevant content and structure, they also include text that is irrelevant, or even detrimental, to their purpose. In this paper, we present a precision-oriented approach to detecting such irrelevant text in a semi-supervised way. Given a few seed examples, the approach automatically learns basic lexical patterns of relevance and irrelevance and then incrementally bootstraps new patterns from sentences matching the patterns. In the existing args.me corpus with 400k argumentative texts, our approach detects almost 87k irrelevant sentences, at a precision of 0.97 according to manual evaluation. With low effort, the approach can be adapted to other web argument corpora, providing a generic way to improve corpus quality.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Computational argumentation research lays the ground for applications that support opinion formation, including argument search engines (Wachsmuth et al., 2017b) , collective deliberation (Uszkoreit et al., 2017) , and debating technologies (Toledo et al., 2019) . Such applications rely on large pools of up-to-date arguments, which can hardly be found anywere but on the web. One of the most used web argument sources are debate portals where people jointly collect arguments or debate each other on defined issues. Debate portals, and similar web platforms, are rich of argumentatively relevant content and structure, including arguments as well as facts, background information, and similar. This enables researchers to crawl large-scale argument corpora in a distantly-supervised manner (Al-Khatib et al., 2016) .",
                "cite_spans": [
                    {
                        "start": 136,
                        "end": 161,
                        "text": "(Wachsmuth et al., 2017b)",
                        "ref_id": "BIBREF21"
                    },
                    {
                        "start": 188,
                        "end": 212,
                        "text": "(Uszkoreit et al., 2017)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 241,
                        "end": 262,
                        "text": "(Toledo et al., 2019)",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 792,
                        "end": 816,
                        "text": "(Al-Khatib et al., 2016)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "However, the texts found on debate portals also comprise debate-specific language and boilerplate text that is likely to be irrelevant, if not even detrimental, to the mentioned applications. In the text in Figure 1 , for instance, the author defines the debated issue (sentence #2), states a thesis (#3-5), and presents two arguments (#6-8, #9-13) -all of which can be considered argumentatively relevant. In contrast, sentences #1, #14, and #15 add nothing of importance, merely making meta-comments and expressing gratitude. In other cases, irrelevant text includes salutations, insults, purely rhetorical moves, and spam. As detailed in Section 2, finding such text differs from finding non-argumentative text segments, since the latter may still be relevant as context for the argumentative segments, as in the case of sentence #2 in Figure 1 . Many existing approaches relying on debate portals do not clean the crawled arguments from irrelevant text. Until now, for example, the argument search engine args.me (Wachsmuth et al., 2017b) has just returned the full shown text as one pro argument for the query \"gay marriage\". This at least harms user experience, and it might even corrupt the support of opinion formation in some cases.",
                "cite_spans": [
                    {
                        "start": 1017,
                        "end": 1042,
                        "text": "(Wachsmuth et al., 2017b)",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 207,
                        "end": 215,
                        "text": "Figure 1",
                        "ref_id": null
                    },
                    {
                        "start": 839,
                        "end": 847,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this paper, we study how to find irrelevant text in web arguments such as those from debate portals automatically, in order to clean respective corpora on this basis. In particular, we develop a semi-supervised learning approach that aims to detect as many irrelevant sentences as possible with very high precision, i.e., hardly any relevant sentence should be classified as irrelevant (Section 3). Given a seed set of sentences, the approach learns basic lexical n-gram patterns that frequently match text in either relevant or irrelevant sentences, and it keeps all patterns with some minimum precision (estimated on all matching sentences). Based on all matching sentences in a given corpus, it then bootstraps new patterns, revises previous ones, and incrementally repeats the process. The final set of irrelevance patterns is used to cleanse the corpus.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "I would like to thank Brainmaster for accepting this debate. Gay marriage is basically the marriage between two individuals of the same gender, I trust my opponent will have no problem with this definition. I will be arguing for gay marriage, and that it should be legal. I will be arguing that everything that does not physically harm other individuals should be legalized, gay marriage is one of these things. I will also be arguing that by banning the gay marriage we have gone against human rights. C1: Gay marriage does not physically harm other individuals in any way shape or form therefore, it should be legal. A marriage is a union between two individuals that love eachother, and it basically only effects these two individuals. If it is banned then it is hurting people, and if it is legalized then it isn't hurting anyone. C2: Banning gay marriage is against human rights. Every person is born with the equal human rights which are life, liberty, and the pursuit of happiness. Yet, banning gay marriage goes against to of these fundamental rights. How can someone pursue happiness when they can't marry the one they love? How can someone have liberty when they are not allowed to marry the one they love. I await my opponent's response. Vote pro!",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Figure 1 : Example text taken from a debate portal. Sentences #1, #14, and #15 can be considered irrelevant to the arguments made by the author. Our approach learns basic lexical patterns to detect such sentences, here shown bold and underlined. Italicized phrases indicate patterns in sentences learned to be relevant.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We analyze our approach on the args.me corpus (Ajjour et al., 2019) , consisting of 387,606 arguments from four debate portals, more than any other available corpus to our knowledge (Section 4). Exploring different types of lexical patterns, we find that word n-grams ignoring stopwords serve best to distinguish relevant from irrelevant sentences. From the most frequent such n-grams, we manually select a set of seed sentences. Then, we run the bootstrapping process, analyze the patterns found by the approach over its different iterations, and evaluate its precision both in an automatic way and in a manual annotation study with three human annotators on 600 sentences (Section 5). At a Fleiss' \u03ba agreement of 0.50, our approach detects irrelevant sentences with a precision of 0.97, in total 86,916 of them in 68,814 arguments from the args.me corpus. We provide a cleaned version of the corpus to the community. 1 Finally, we discuss how to adopt our approach to improve the quality of web argument corpora, beyond the one studied (Section 6). Altogether, the contribution of this paper is three-fold:",
                "cite_spans": [
                    {
                        "start": 46,
                        "end": 67,
                        "text": "(Ajjour et al., 2019)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 919,
                        "end": 920,
                        "text": "1",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Vote pro!",
                "sec_num": null
            },
            {
                "text": "\u2022 A semi-supervised approach to detect argumentatively irrelevant sentences in web arguments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Vote pro!",
                "sec_num": null
            },
            {
                "text": "\u2022 Several common lexical patterns of relevance and irrelevance in web arguments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Vote pro!",
                "sec_num": null
            },
            {
                "text": "\u2022 A cleaned version of the largest available argument corpus, with notably less irrelevant text.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Vote pro!",
                "sec_num": null
            },
            {
                "text": "Initially, research on tasks such as argument mining has largely been carried out on small, well-curated collections of texts, including Wikipedia articles (Aharoni et al., 2014) , student essays (Stab and Gurevych, 2014) , pure arguments (Peldszus and Stede, 2015) , and presidential debates (Lawrence and Reed, 2017) . Major real-world applications of computational argumentation, however, need to scale up to web contexts to fulfill their purpose. This includes search engines that oppose pro and con arguments on controversial issues (Wachsmuth et al., 2017b) , technologies that debate humans (Toledo et al., 2019) , and more.",
                "cite_spans": [
                    {
                        "start": 156,
                        "end": 178,
                        "text": "(Aharoni et al., 2014)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 196,
                        "end": 221,
                        "text": "(Stab and Gurevych, 2014)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 239,
                        "end": 265,
                        "text": "(Peldszus and Stede, 2015)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 293,
                        "end": 318,
                        "text": "(Lawrence and Reed, 2017)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 538,
                        "end": 563,
                        "text": "(Wachsmuth et al., 2017b)",
                        "ref_id": "BIBREF21"
                    },
                    {
                        "start": 598,
                        "end": 619,
                        "text": "(Toledo et al., 2019)",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "To obtain web arguments, many works have relied on crawled debate portals and similar web platforms, often in a distant-supervision manner where argumentative structure and similar annotations are directly derived from available meta-information (Al-Khatib et al., 2016) . Corpora have been built in such a way based on several debate portals, including 4forums.com (Walker et al., 2012) , idebate.org (Cabrio and Villata, 2012) , createdebate.com (Habernal and Gurevych, 2016) , debate.org (Durmus and Cardie, 2019) , and reddit.com/r/changemyview (Egawa et al., 2020) . Naturally, less curation of the acquired web texts comes at the cost of more noise, which in turn calls for a cleansing of the resulting corpus.",
                "cite_spans": [
                    {
                        "start": 246,
                        "end": 270,
                        "text": "(Al-Khatib et al., 2016)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 366,
                        "end": 387,
                        "text": "(Walker et al., 2012)",
                        "ref_id": "BIBREF22"
                    },
                    {
                        "start": 402,
                        "end": 428,
                        "text": "(Cabrio and Villata, 2012)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 448,
                        "end": 477,
                        "text": "(Habernal and Gurevych, 2016)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 491,
                        "end": 516,
                        "text": "(Durmus and Cardie, 2019)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 549,
                        "end": 569,
                        "text": "(Egawa et al., 2020)",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "Cleansing processes are described in several publications on argument corpora, mostly only referring to the acquired annotations though (Habernal and Gurevych, 2016; Toledo et al., 2019; Gretz et al., 2020) . In contrast, the paper at hand targets the cleansing of the corpus texts themselves. Only few works describe respective cleansing steps in detail. Among these, Al-Khatib et al. (2016) deleted special symbols and debate-specific phrases such as \"this house\" from crawled arguments, and Habernal and Gurevych (2017) removed quotations of previous posts in debate posts. Wachsmuth et al. (2017b) discarded certain types of noisy instances completely for the argument search engine args.me, but the texts in the original associated corpus (Ajjour et al., 2019) still contain much irrelevant text, as our experiments will reveal. Applying our approach has led to an improved version of that corpus.",
                "cite_spans": [
                    {
                        "start": 136,
                        "end": 165,
                        "text": "(Habernal and Gurevych, 2016;",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 166,
                        "end": 186,
                        "text": "Toledo et al., 2019;",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 187,
                        "end": 206,
                        "text": "Gretz et al., 2020)",
                        "ref_id": null
                    },
                    {
                        "start": 494,
                        "end": 522,
                        "text": "Habernal and Gurevych (2017)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 577,
                        "end": 601,
                        "text": "Wachsmuth et al. (2017b)",
                        "ref_id": "BIBREF21"
                    },
                    {
                        "start": 744,
                        "end": 765,
                        "text": "(Ajjour et al., 2019)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "In this paper, we introduce a semi-supervised learning approach for corpus cleansing. In general, we follow the bootstrapping idea of successful pattern mining methods, such as DIPRE (Brin, 1998), Snowball (Agichtein and Gravano, 2000) , and Espresso (Pantel and Pennacchiotti, 2006) . While these methods aim at semantically relevant information, we distinguish pragmatically relevant from irrelevant text within an author's argumentative discourse. We are not aware of any other approach in this direction.",
                "cite_spans": [
                    {
                        "start": 206,
                        "end": 235,
                        "text": "(Agichtein and Gravano, 2000)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 251,
                        "end": 283,
                        "text": "(Pantel and Pennacchiotti, 2006)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "It is noteworthy in this regard that the cleansing task at hand differs notably from the unit segmentation of argumentative texts (Ajjour et al., 2017) . While all argumentative units match the notion of relevance considered here (defined in Section 3), also non-argumentative units may be seen as relevant, if they give facts, definitions, or other background information serving as context for the argumentative units. As such, our notion of relevance relates to the local relevance with respect to some conclusion rather than the global relevance of an argumentative statement in the discussion of an issue (Wachsmuth et al., 2017a) .",
                "cite_spans": [
                    {
                        "start": 130,
                        "end": 151,
                        "text": "(Ajjour et al., 2017)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 610,
                        "end": 635,
                        "text": "(Wachsmuth et al., 2017a)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "This section presents our semi-supervised learning approach to detecting irrelevant text in web arguments as well as to clean a respective corpus on this basis. The approach aims to find as many irrelevant text units as possible at an estimated precision beyond a threshold \u03c4 (in Section 5, we use \u03c4 = 0.95). To this end, it learns linguistic patterns that occur often in irrelevant units and rarely in relevant units (or vice versa). Later, we consider each sentence as one unit, but other granularities would work in principle, too. Figure 2 gives an overview of the three main stages of the approach, each of which will be detailed below:",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 535,
                        "end": 543,
                        "text": "Figure 2",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Approach",
                "sec_num": "3"
            },
            {
                "text": "(a) Seed Pattern Selection. Given a corpus as input, a pool of common linguistic patterns is mined from its units, from which seed patterns indicating irrelevance and relevance are selected manually.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Approach",
                "sec_num": "3"
            },
            {
                "text": "(b) Pattern Bootstrapping. All units matching any seed irrelevance (relevance) pattern are retrieved, new candidate patterns are mined from the units and added to the pool. Then, only high-precision irrelevance (relevance) patterns are kept in the pool, i.e., those found nearly only in irrelevant (relevant) units. This process is repeated until no new patterns are found or k iterations have passed.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Approach",
                "sec_num": "3"
            },
            {
                "text": "(c) Corpus Cleansing. The final pool of irrelevance patterns is used to automatically remove irrelevant units from the corpus.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Approach",
                "sec_num": "3"
            },
            {
                "text": "It is important to see that the relevance patterns are eventually not used for the actual cleansing. They serve to distinguish relevant from irrelevant units only, thereby aiding the identification high-precision irrelevance patterns.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Approach",
                "sec_num": "3"
            },
            {
                "text": "While we have designed our approach for web arguments in particular, notice that the outlined processed is largely generic and could easily be transferred to other cleansing tasks where relevant and irrelevant units can be distinguished. What makes our approach specific to web arguments is what we mean by argumentative relevance and irrelevance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Approach",
                "sec_num": "3"
            },
            {
                "text": "We consider relevance here from the perspective of using the individual arguments in a corpus for empirical analysis of how people argue or for applications such as argument search and debating technologies. For such use cases, portal-specific debate structure emerging from sequences of arguments as well as purely rhetorical moves related to the underlying debates are not of interest. We thus define irrelevance as follows: Argumentative Irrelevance. A unit of a web argument is said to be irrelevant, if and only if it does not represent any claim, evidence, fact, background information, or similar statement related to the issue discussed by the author of the text. Examples of irrelevant units include meta-comments on a debate, salutations, expressions of gratitude, personal insults, purely rhetorical moves, and spam.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Argumentative Relevance and Irrelevance",
                "sec_num": "3.1"
            },
            {
                "text": "Any unit not matching the definition is considered to be relevant. While we could have also defined argumentative relevance instead, we decided to focus on irrelevant units, since they constitute the target concept to be detected. In other words, given that we target argument corpora, we expect irrelevant units to be the exception rather than the default. An estimation of the proportion of irrelevant units for the data processed in our experiments follows in Section 4.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Argumentative Relevance and Irrelevance",
                "sec_num": "3.1"
            },
            {
                "text": "The goal of stage (a) is to acquire a pool of linguistic patterns matching text units that can be considered either irrelevant or relevant. The set of all units matching any of these seed patterns then represents the ground-truth data that the pattern bootstrapping starts from. The selection of seed patterns is the only step that requires some level of supervision within our approach. To minimize manual effort, we propose to tackle the selection semi-automatically, i.e., we first mine the most promising candidate patterns automatically from sample data (we use a random 10% sample of the given corpus in Section 5). Then, we manually classify a subset of them to be seed patterns either of irrelevance or of relevance. To do so, however, we need to first define what is considered to be a candidate pattern.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Seed Pattern Selection",
                "sec_num": "3.2"
            },
            {
                "text": "Candidate Patterns. In general, any type of linguistic pattern may be mined from corpus texts, for which respective mining methods are available. Since we expect the given notion of relevance to be largely assessable based on a unit's words only, we restrict our view to basic lexical patterns here. For simplicity, we just look for n-grams, but we explore four types of patterns that emerge from making two choices:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Seed Pattern Selection",
                "sec_num": "3.2"
            },
            {
                "text": "\u2022 Counts vs. TF-IDF. In case of counts, we simply see the m most frequent n-grams as candidates for each n. In case of TF-IDF, we take those n-grams with the highest TF-IDF score in the sample data (each unit being one document). In our experiments, we use m = 100 and n \u2208 {1, . . . , 5}.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Seed Pattern Selection",
                "sec_num": "3.2"
            },
            {
                "text": "\u2022 W/ stopwords vs. w/o stopwords. We determine either n-grams based on the full unit texts (w/ stopwords) or we apply stopword removal before (w/o stopwords).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Seed Pattern Selection",
                "sec_num": "3.2"
            },
            {
                "text": "Since high TF-IDF scores usually indicate content, respective patterns are likely to be more useful for relevant than irrelevant sentences. Whether they outperform count-based patterns there is hard to predict, though. In Section 5, we compare the four pattern types against each other. Given all m candidates of the preferred pattern type (say, Counts w/o stopwords) for each n, the authors of this paper then manually agree for each candidate on whether to select it as an irrelevance pattern, a relevance pattern, or neither.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Seed Pattern Selection",
                "sec_num": "3.2"
            },
            {
                "text": "The goal of stage (b) is to incrementally extend the pool of irrelevance and relevance patterns using bootstrapping, i.e., by deriving new patterns from units matching the current patterns in the pool. This fully automatic process continues until no new patterns are found anymore or until a maximum number k of iterations has passed, e.g., if running time is a factor (in Section 5, we continue until the end).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pattern Bootstrapping",
                "sec_num": "3.3"
            },
            {
                "text": "In particular, the first step is to retrieve the sets of all units matching any irrelevance patterns and of all units matching any relevance pattern from the corpus. 2 As sketched in Figure 2 , these unit sets are used for two purposes: First, new candidate irrelevance (relevance) patterns are mined from the set of irrelevant (relevant) units and added to the pattern pool. Second, only those patterns are filtered and kept in the pool that indicate an irrelevant (relevant) unit with an estimated precision p \u2265 \u03c4 . We estimate p as follows:",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 183,
                        "end": 191,
                        "text": "Figure 2",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Pattern Bootstrapping",
                "sec_num": "3.3"
            },
            {
                "text": "Estimated Precision. Let tp be the number of all retrieved irrelevant (relevant) units that matches a specific irrelevance (relevance) pattern, and let f p be the number of all relevant (irrelevant) units matching this pattern. Then the precision of the pattern is estimated as p = tp / (tp + f p).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pattern Bootstrapping",
                "sec_num": "3.3"
            },
            {
                "text": "For the mining step, one parameter to decide upon is the minimum frequency of a pattern to consider it a candidate. We suggest to derive this parameter's value from the seed pattern frequencies. For example, if all seed patterns have at least 20 matches in the sample, and the full corpus has 10 times the sample size, then a reasonable value may be 20 \u2022 10 = 200. For the filtering step, it is favorable that the sizes of the two unit sets remain balanced, because imbalanced sizes decrease the comparability of the values tp and f p. We therefore suggest to adjust the minimum numbers based on the estimated proportion of irrelevant units. For example, if there are about 10 times as many relevant as irrelevant units, reasonable values may be 200 for irrelevance and 200 \u2022 10 = 2000 for relevance (the numbers given here exemplarily are those we use in Sections 4 and 5). An alternative is to test and adjust these parameters empirically.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pattern Bootstrapping",
                "sec_num": "3.3"
            },
            {
                "text": "An important characteristic of the outlined bootstrapping process is that patterns added to the pool in previous iterations may be removed later from the pool again. This is because the sets of retrieved relevant and irrelevant units change during the process, which in turn may change the precision estimations of the patterns. This can be understood as an internal revision mechanism of our approach that optimizes the precision of the final pool. We see the effect of this mechanism in our experiments in Section 5. 3",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pattern Bootstrapping",
                "sec_num": "3.3"
            },
            {
                "text": "The goal of stage (c) is to actually clean the given corpus, based on the final pool of irrelevance patterns. Relevance patterns play no role anymore in this stage; they are used only before, to be able to help identify irrelevance patterns with high precision, as described.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Corpus Cleansing",
                "sec_num": "3.4"
            },
            {
                "text": "A simple cleansing way would be to just remove all units from the corpus that match any irrelevance patterns. Instead, however, we suggest to restrict the removal to only those irrelevant units before the first and after the last relevant unit. As long as only units are removed that are actually irrelevant, we thereby avoid to negatively affect the coherence of arguments. Moreover, as for the example of Figure 1 , we will see below that most irrelevant units are indeed found in the beginning and ending of texts, i.e., the suggested restriction reduces recall to some extent only. Notice that this does not mean that most units in the beginning and ending are irrelevant; in line with our discussions above, we expect the majority of texts to contain no irrelevant unit at all. The following section supports that this is true for the corpus at hand.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 407,
                        "end": 415,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Corpus Cleansing",
                "sec_num": "3.4"
            },
            {
                "text": "The presented approach targets argumentative language of varying quality, as often observed in web-based corpora. Below, we assess its impact on the args.me corpus (Ajjour et al., 2019) , which is to our knowledge the largest available argument corpus to this date, about 7.3 GB in file size. The corpus represents the database underlying the argument search engine args.me (Wachsmuth et al., 2017b).",
                "cite_spans": [
                    {
                        "start": 164,
                        "end": 185,
                        "text": "(Ajjour et al., 2019)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 374,
                        "end": 399,
                        "text": "(Wachsmuth et al., 2017b)",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data",
                "sec_num": "4"
            },
            {
                "text": "Table 1 : The top n-gram patterns agreed upon to indicate relevant and irrelevant sentences respectively, for each evaluated pattern type, along with their score (count or TF-IDF) in the 10% sample of the args.me corpus. We left out spam patterns, such as \"kfc ... kfc\", as they would have shadowed most other patterns. Based on the full lists (see supplementary material), we decided to use the type Counts w/o Stopwords.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 0,
                        "end": 7,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Data",
                "sec_num": "4"
            },
            {
                "text": "arguments that were mined from four debate portals using distant supervision: debate.org, debatewise.org, idebate.org, and debatepedia.org. Each argument consists of a mostly very short conclusion as well as a mostly notably longer premise, the latter containing the actual argumentative text. In total, the corpus spans around seven million sentences. We see each sentence as one unit in our approach. Many texts in the args.me corpus include sentences that are irrelevant to the actual argument, such as the example in Figure 1 . Needless to say, no ground-truth information on irrelevance is given, though. For a rough estimation of the proportion of irrelevant sentences, we conducted a pilot study where the two authors of this paper independently decided about the relevance of a set of sentences, following the definition in Section 3. In particular, we considered a corpus sample used previously by Alshomary et al. (2020) , which contains the top five pro and the top five con arguments each for the top 10 queries.",
                "cite_spans": [
                    {
                        "start": 907,
                        "end": 930,
                        "text": "Alshomary et al. (2020)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 521,
                        "end": 529,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Data",
                "sec_num": "4"
            },
            {
                "text": "From the 1294 sentences in the 100 sample arguments, one of us classified 147 (11.3%) to be irrelevant, the other one 139 (10.7%). In terms of Cohen's \u03ba, we had a substantial inter-annotator agreement of 0.75. In total, 175 sentences (13.5%) were seen as irrelevant by either of us, 111 (8.5%) by both. Since we believe that, in doubt, a sentence should be deemed relevant, we take 8.5% as our estimation. In the whole corpus, we thus expect around 600,000 sentences to be irrelevant. The 111 sentences come from only 39 of the 100 arguments. Assuming this number is representative, about 150k arguments in the corpus should contain irrelevant sentences. In the following experiments, these numbers will give us a rough idea of the recall of our approach. There, we use a random 10% sample of all corpus arguments for the seed pattern selection, and the whole corpus for all subsequent steps.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data",
                "sec_num": "4"
            },
            {
                "text": "We now report on the step-by-step application of our approach from Section 3 to the corpus from Section 4 and on the manual evaluation of the obtained results. The goal was to assess the impact of the approach on the quality of web-based argument corpora. We hypothesized that the approach is able to detect a large number of irrelevant sentences with a precision as high as its internal precision threshold \u03c4 . 4",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "5"
            },
            {
                "text": "Table 2 : The full lists of positive and negative seed patterns used for each n-gram type, along with the number of different sentences they match in the corpus (in parentheses), ordered by number of matches.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 0,
                        "end": 7,
                        "text": "Table 2",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "5"
            },
            {
                "text": "To learn what pattern type is best to detect irrelevant sentences, we compared all four candidates emerging from the two choices discussed in Section 3 (Counts vs. TF-IDF, w/ or w/o stopwords). For each type, we retrieved the top 100 n-grams, n \u2208 {1, . . . , 5}, covering a large variety of issues debated in the underlying arguments. Then, the two authors of this paper both judged all 2000 resulting patterns as to whether they likely indicate always irrelevant sentences or always relevant ones. Based on the patterns that we both agreed upon, the most promising type was chosen for the seed patterns. Exemplarily, Table 1 lists the top 1-to 5-gram of each pattern type that indicate relevance or irrelevance respectively. We left out spam patterns such as \"wonyou wonyou wonyou\" and \"kfc kfc\", though, as they would limit insights, dominating the top positions; the full lists for each pattern type are given in the supplementary material. For both TF-IDF pattern types, we find the relevance patterns to clearly serve their purpose, relating to the content of arguments. Many such patterns are found in the full lists. However, rarely any TF-IDF pattern seemed to reliably indicate irrelevance. This matches the intuition that phrases with high TF-IDF scores are specific to a document's content rather than reflecting general language. In contrast, the two Counts pattern types yielded several irrelevance patterns, as the table demonstrates. We decided for Counts w/o Stopwords, since it produced patterns that clarified many cases which Counts w/ Stopwords left ambiguous. For example, \"would like thank opponent\" reveals irrelevance knowing the source debate portals (here, debate.org), whereas respective patterns with stopwords (\"would like to thank\", \"like to thank my\") leaves more doubts regarding the irrelevance of respective sentences. Table 2 presents the full set of 38 relevance and 17 irrelevance seed patterns for the type Counts w/o Stopwords. A pattern was not included if being redundant, i.e., if it was already covered by a shorter one, e.g., \"first round acceptance\" was covered by \"first round\". We observe that no 1-gram made it into the pool of irrelevance patterns; a single word seems not enough to be sure about irrelevance. As of length 2, however, we judged several patterns to be sufficiently reliable indicators of irrelevance, the most frequent ones occurring over 10,000 times in the corpus, namely, \"first round\" and \"thank opponent\".",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 618,
                        "end": 625,
                        "text": "Table 1",
                        "ref_id": null
                    },
                    {
                        "start": 1853,
                        "end": 1860,
                        "text": "Table 2",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Insights into Seed Pattern Selection",
                "sec_num": "5.1"
            },
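            {
                "text": "To illustrate the candidate mining behind this comparison, the following is a minimal Python sketch that ranks n-grams once by raw counts and once by TF-IDF, with stopwords optionally removed. It is our own illustrative code under simplifying assumptions (whitespace tokenization, a toy stopword list), not the authors' implementation:

from collections import Counter
import math

STOPWORDS = {'the', 'a', 'an', 'to', 'of', 'and', 'in', 'my', 'i'}  # toy list

def ngrams(tokens, n):
    return [' '.join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def top_candidates(docs, n, use_tfidf=False, keep_stopwords=True, k=100):
    docs = [[t for t in d if keep_stopwords or t not in STOPWORDS] for d in docs]
    counts = Counter(g for d in docs for g in ngrams(d, n))
    if not use_tfidf:
        return counts.most_common(k)
    # Mean term frequency times inverse document frequency per n-gram.
    df = Counter(g for d in docs for g in set(ngrams(d, n)))
    scores = {g: (c / len(docs)) * math.log(len(docs) / df[g])
              for g, c in counts.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

docs = [['i', 'thank', 'my', 'opponent'], ['gay', 'marriage', 'should', 'be', 'legal']]
print(top_candidates(docs, 2, keep_stopwords=False, k=5))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insights into Seed Pattern Selection",
                "sec_num": "5.1"
            },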
            {
                "text": "As indicated in Section 3, we set \u03c4 to 0.95, kept all mined relevance patterns with at least 2000 matches as candidates and all mined irrelevance patterns with at least 200 matches. Given the seed patterns, we then ran the bootstrapping process until no new pattern was found anymore, which happened in iteration 6. On a standard computer (Intel Core i7, 2.7 GHz, 16 GB RAM), the whole process took about two hours. Table 3 shows key statistics for each iteration (and the seed pattern selection). In case of the relevance Table 3 : Counts of relevance and irrelevance patterns, counts of different sentences they match, their automatically estimated mean precision, and their manually evaluated mean precision (majority agreement, full agreement in parentheses) in each iteration of our approach. The last row shows the results at the end.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 416,
                        "end": 423,
                        "text": "Table 3",
                        "ref_id": null
                    },
                    {
                        "start": 523,
                        "end": 530,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Insights into Pattern Bootstrapping",
                "sec_num": "5.2"
            },
            {
                "text": "patterns, the 38 seed patterns already match more than 600k different sentences, with a mean estimated precision of 1.00, i.e., they virtually never matched any sentence retrieved for the seed irrelevance patterns. Already in iteration 2, the revision effect discussed in Section 3 starts: 57 relevant sentences were removed there, because they also matched newly mined irrelevance patterns. Still, the set of relevance patterns remained stable, and this behavior continued in subsequent iterations. For the irrelevance patterns, we observe a monotonous growth of the pattern pool in the first five iterations, with more than 10k different sentences being detected as irrelevant in iterations 1-5 in addition to the seed sentences. In total, 122 patterns were found; their mean estimated precision remained at least 0.97 in all iterations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insights into Pattern Bootstrapping",
                "sec_num": "5.2"
            },
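            {
                "text": "For concreteness, one bootstrapping iteration under the configuration above (\u03c4 = 0.95, at least 200 matches for new irrelevance patterns) could be sketched in Python as follows. This is a simplified illustration with our own names, substring matching, and bigram candidates only, and the precision estimate is a simplification of the one described in Section 3, not the authors' code:

from collections import Counter

def bigrams(sentence):
    toks = sentence.split()
    return {' '.join(toks[i:i + 2]) for i in range(len(toks) - 1)}

def expand_irrelevance(irr_patterns, rel_patterns, corpus, tau=0.95, min_matches=200):
    # Sentences currently matched by each pattern pool.
    irrelevant = [s for s in corpus if any(p in s for p in irr_patterns)]
    relevant = [s for s in corpus if any(p in s for p in rel_patterns)]
    # New candidates are mined only from sentences matching previous patterns.
    candidates = Counter(g for s in irrelevant for g in bigrams(s))
    for p in candidates:
        n_irr = sum(p in s for s in irrelevant)
        n_rel = sum(p in s for s in relevant)
        # Estimated precision: fraction of matches on the irrelevant side.
        precision = n_irr / (n_irr + n_rel) if n_irr + n_rel else 0.0
        if sum(p in s for s in corpus) >= min_matches and precision >= tau:
            irr_patterns.add(p)
    return irr_patterns",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insights into Pattern Bootstrapping",
                "sec_num": "5.2"
            },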
            {
                "text": "To analyze the behavior of our approach during the bootstrapping process, we chose a random sample of 600 irrelevant sentences for manual evaluation (found in the supplementary material): 100 matching the seed irrelevance patterns, and 100 each for the irrelevance patterns from the five iterations. Relevant patterns were disregarded, as they are not needed for corpus cleansing. We randomized the ordering of all sentences and gave them independently to three annotators with background on computational argumentation, none being an author of this paper (one master and two PhD students; two male, one female; one each from Europe, the Middle East, and East Asia). We asked the annotators to classify each sentence as relevant or irrelevant, based on the definition from Section 3. The annotators got some intuitive guidelines (see supplementary material) and could ask questions beforehand.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insights into Pattern Bootstrapping",
                "sec_num": "5.2"
            },
            {
                "text": "We observed an inter-annotator agreement of 0.50 in terms of Fleiss' \u03ba, which seems reasonable given that relevance assessment is inherently subjective (Croft et al., 2009) . Given the annotations, we computed the mean precision of our approach in detecting irrelevant sentences for each iteration, once in terms of majority agreement (irrelevance correct if two annotators say so) and once for full agreement (all three say so). The right-most column in Table 3 shows the results, revealing that the majority-agreement precision is perfect until the end of iteration 2. While the next two iterations remain promising, the precision decreases to 0.88 in the final iteration (0.79 under full agreement), suggesting that patterns get worse over time. An early termination may thus be favorable, but the best moment is naturally unknown in practice.",
                "cite_spans": [
                    {
                        "start": 152,
                        "end": 172,
                        "text": "(Croft et al., 2009)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 455,
                        "end": 462,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Insights into Pattern Bootstrapping",
                "sec_num": "5.2"
            },
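            {
                "text": "The two precision variants can be stated compactly: a detected sentence counts as correctly irrelevant under majority agreement if at least two of the three annotators label it so, and under full agreement only if all three do. A minimal Python sketch (with illustrative toy data, not the actual evaluation sample):

def precision(labels_per_sentence, required):
    # labels_per_sentence: one tuple of three booleans per detected sentence,
    # True meaning the annotator judged the sentence irrelevant.
    correct = sum(sum(labels) >= required for labels in labels_per_sentence)
    return correct / len(labels_per_sentence)

sample = [(True, True, False), (True, True, True), (False, False, True)]
print(precision(sample, required=2))  # majority agreement -> ~0.67
print(precision(sample, required=3))  # full agreement -> ~0.33",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insights into Pattern Bootstrapping",
                "sec_num": "5.2"
            },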
            {
                "text": "52,849 different sentences are matched by the detected irrelevance patterns eventually, at an overall precision of 0.97. Some of them occur multiple times, resulting in 86,916 irrelevant sentences in total that come from 68,814 arguments. Under the roughly estimated irrelevance proportion from Section 4, the recall would hence be around 0.15 for irrelevant sentences and around 0.46 for arguments with irrelevance sentences. The seed step alone found 71,926 irrelevant sentences in total, i.e., a recall of roughly 0.12. If we consider the seed step as a baseline for the full approach, we see that precision decreases by 3% (1.00 to 0.97), but recall increases by about 20% (0.12 to 0.15). While there is arguably room for optimization, we still conclude that the results support the impact of our approach and, by that, our hypothesis.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insights into Pattern Bootstrapping",
                "sec_num": "5.2"
            },
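            {
                "text": "The stated trade-off follows directly from the counts above, as a quick check shows (the numbers are taken from the text):

seed_found, full_found = 71926, 86916
print(round(full_found / seed_found - 1, 3))  # 0.208, i.e., recall up by ~20%
print(round(1.00 - 0.97, 2))                  # 0.03, i.e., precision down by 3 points",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insights into Pattern Bootstrapping",
                "sec_num": "5.2"
            },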
            {
                "text": "Based on the final pool of 122 irrelevance patterns, we explored the cleansing potential for the given corpus. Figure 3(a) shows a histogram of the corpus texts with a certain number of detected irrelevant sentences. We see that most texts contain one such sentence only, in all but six cases seven or less. These six cases all have more than 30 irrelevant sentences; manual inspection revealed that they all contain spam where the same word sequence repeats itself. In Figure 3(b) , we plot the positions of irrelevant sentences in the corpus texts. As expected, most of them are found in the beginning or the end. Due to our discussed restriction of discarding only these, the final number of sentences removed from the args.me corpus sums up to 53,502 (found in 48,089 arguments). In addition to the original args.me corpus, we now also provide the cleaned corpus version at https://webis.de/data.html#args-me-corpus.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 111,
                        "end": 122,
                        "text": "Figure 3(a)",
                        "ref_id": "FIGREF2"
                    },
                    {
                        "start": 470,
                        "end": 481,
                        "text": "Figure 3(b)",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Insights into Corpus Cleansing",
                "sec_num": "5.3"
            },
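            {
                "text": "The restriction of removing irrelevant sentences only at the beginning and end of a text keeps the remaining argument contiguous. A minimal sketch of this cleansing rule (our own illustration; the example matcher passed in below is hypothetical):

def cleanse(sentences, is_irrelevant):
    # Strip detected irrelevant sentences only from the start and the end,
    # so that no gaps open up in the middle of an argument.
    start, end = 0, len(sentences)
    while start < end and is_irrelevant(sentences[start]):
        start += 1
    while end > start and is_irrelevant(sentences[end - 1]):
        end -= 1
    return sentences[start:end]

arg = ['I thank my opponent.', 'Gay marriage should be legal.', 'Good luck!']
print(cleanse(arg, lambda s: 'thank' in s or 'luck' in s.lower()))
# -> ['Gay marriage should be legal.']",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insights into Corpus Cleansing",
                "sec_num": "5.3"
            },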
            {
                "text": "Web-based argument corpora play an important role in computational argumentation research and its applications. Not all text in such corpora is relevant to the arguments, though. In this paper, we have presented an approach that detects irrelevant text units in argumentative texts with low supervision. The approach iteratively bootstraps linguistic patterns of irrelevance and relevance from units matching known patterns. On the 387k arguments in the args.me corpus, the approach detected 87k irrelevant sentences at a precision of 0.97, from which at least 53k can be removed without notably reducing the arguments' coherence. These results demonstrate the potential of our approach to improve corpus quality. Naturally, the approach has limitations. On one hand, the results revealed that, under the employed configuration, a large proportion of detected sentences came from the seed patterns. To obtain good seed patterns, manual effort is needed. On the other hand, the recall of our approach seems not so high, as far as we can estimate from the data inspected. While not all irrelevant units can be captured by the simple patterns we considered, another reason may lie in the restriction that only new candidate patterns are found which occur in sentences matching previous patterns. Particularly patterns that show up only in short units may thus be overlooked, if they are not covered by the seed patterns already. Improvements might, e.g., consider units adjacent to irrelevant units, but this may come at the cost of reduced precision. In this regard, notice that the impact our approach to some extent depends on the availability of a reliable unit boundary detector (say, a sentence splitter), which is not a trivial requirement for noisy web data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            },
            {
                "text": "Finally, an arising question may be how complex it is to apply the approach to other than the data processed here. Following our proposed process to obtain frequent candidate seed patterns automatically, the main manual effort boils down to finding reliable seed patterns among these candidates. In our case, this took no more than a few hours, which seems negligible given the potential impact on corpus quality. Besides, only some initial tuning of the approach parameters to the data at hand may be needed. We are thus confident that the approach can be easily adopted to clean other argument corpora (including transcribed corpora with spoken argumentative language) as well as to many other cleansing tasks where the irrelevance of text units can be defined in a measurable way.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            },
            {
                "text": "Both the original and the cleaned args.me corpus are found at: https://webis.de/data.html#args-me-corpus",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "We include units that match both relevance and irrelevance patterns, since the subsequent filtering step accounts for them. Also, other performance optimizations are useful, such as storing previously found units. We leave them out here for simplicity.3 Depending on what sentences match the patterns, it is theoretically possible that a pattern first belongs to the relevance pool and later to the irrelevance pool (or vice versa). We did not observe notable cases in this regard, though.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Source code and supplementary material can be found here: https://github.com/webis-de/ArgMining-20",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "We thank Milad Alshomary, Wei-Fan Chen, and Jana Puschmann for their participation in the manual evaluation, and the anonymous reviewers for their helpful comments. Thank you also to Johannes Kiesel as part of the Webis Group for the technical support and the integration of the results into args.me.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Snowball: Extracting relations from large plain-text collections",
                "authors": [
                    {
                        "first": "Eugene",
                        "middle": [],
                        "last": "Agichtein",
                        "suffix": ""
                    },
                    {
                        "first": "Luis",
                        "middle": [],
                        "last": "Gravano",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of the Fifth ACM Conference on Digital Libraries, DL '00",
                "volume": "",
                "issue": "",
                "pages": "85--94",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of the Fifth ACM Conference on Digital Libraries, DL '00, pages 85-94, New York, NY, USA. Association for Computing Machinery.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "A benchmark dataset for automatic detection of claims and evidence in the context of controversial topics",
                "authors": [
                    {
                        "first": "Ehud",
                        "middle": [],
                        "last": "Aharoni",
                        "suffix": ""
                    },
                    {
                        "first": "Anatoly",
                        "middle": [],
                        "last": "Polnarov",
                        "suffix": ""
                    },
                    {
                        "first": "Tamar",
                        "middle": [],
                        "last": "Lavee",
                        "suffix": ""
                    },
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Hershcovich",
                        "suffix": ""
                    },
                    {
                        "first": "Ran",
                        "middle": [],
                        "last": "Levy",
                        "suffix": ""
                    },
                    {
                        "first": "Ruty",
                        "middle": [],
                        "last": "Rinott",
                        "suffix": ""
                    },
                    {
                        "first": "Dan",
                        "middle": [],
                        "last": "Gutfreund",
                        "suffix": ""
                    },
                    {
                        "first": "Noam",
                        "middle": [],
                        "last": "Slonim",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of the First Workshop on Argumentation Mining",
                "volume": "",
                "issue": "",
                "pages": "64--68",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ehud Aharoni, Anatoly Polnarov, Tamar Lavee, Daniel Hershcovich, Ran Levy, Ruty Rinott, Dan Gutfreund, and Noam Slonim. 2014. A benchmark dataset for automatic detection of claims and evidence in the context of controversial topics. In Proceedings of the First Workshop on Argumentation Mining, pages 64-68, Baltimore, Maryland, June. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Unit segmentation of argumentative texts",
                "authors": [
                    {
                        "first": "Yamen",
                        "middle": [],
                        "last": "Ajjour",
                        "suffix": ""
                    },
                    {
                        "first": "Wei-Fan",
                        "middle": [],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "Johannes",
                        "middle": [],
                        "last": "Kiesel",
                        "suffix": ""
                    },
                    {
                        "first": "Henning",
                        "middle": [],
                        "last": "Wachsmuth",
                        "suffix": ""
                    },
                    {
                        "first": "Benno",
                        "middle": [],
                        "last": "Stein",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the 4th Workshop on Argument Mining",
                "volume": "",
                "issue": "",
                "pages": "118--128",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yamen Ajjour, Wei-Fan Chen, Johannes Kiesel, Henning Wachsmuth, and Benno Stein. 2017. Unit segmentation of argumentative texts. In Proceedings of the 4th Workshop on Argument Mining, pages 118-128. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Data acquisition for argument search: The args.me corpus",
                "authors": [
                    {
                        "first": "Yamen",
                        "middle": [],
                        "last": "Ajjour",
                        "suffix": ""
                    },
                    {
                        "first": "Henning",
                        "middle": [],
                        "last": "Wachsmuth",
                        "suffix": ""
                    },
                    {
                        "first": "Johannes",
                        "middle": [],
                        "last": "Kiesel",
                        "suffix": ""
                    },
                    {
                        "first": "Martin",
                        "middle": [],
                        "last": "Potthast",
                        "suffix": ""
                    },
                    {
                        "first": "Matthias",
                        "middle": [],
                        "last": "Hagen",
                        "suffix": ""
                    },
                    {
                        "first": "Benno",
                        "middle": [],
                        "last": "Stein",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "KI 2019: Advances in Artificial Intelligence -42nd German Conference on AI",
                "volume": "",
                "issue": "",
                "pages": "48--59",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yamen Ajjour, Henning Wachsmuth, Johannes Kiesel, Martin Potthast, Matthias Hagen, and Benno Stein. 2019. Data acquisition for argument search: The args.me corpus. In KI 2019: Advances in Artificial Intelligence - 42nd German Conference on AI, Kassel, Germany, September 23-26, 2019, Proceedings, pages 48-59.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Cross-domain mining of argumentative text through distant supervision",
                "authors": [
                    {
                        "first": "Khalid",
                        "middle": [],
                        "last": "Al-Khatib",
                        "suffix": ""
                    },
                    {
                        "first": "Henning",
                        "middle": [],
                        "last": "Wachsmuth",
                        "suffix": ""
                    },
                    {
                        "first": "Matthias",
                        "middle": [],
                        "last": "Hagen",
                        "suffix": ""
                    },
                    {
                        "first": "Jonas",
                        "middle": [],
                        "last": "K\u00f6hler",
                        "suffix": ""
                    },
                    {
                        "first": "Benno",
                        "middle": [],
                        "last": "Stein",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
                "volume": "",
                "issue": "",
                "pages": "1395--1404",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Khalid Al-Khatib, Henning Wachsmuth, Matthias Hagen, Jonas K\u00f6hler, and Benno Stein. 2016. Cross-domain mining of argumentative text through distant supervision. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1395-1404. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Extractive snippet generation for arguments",
                "authors": [
                    {
                        "first": "Milad",
                        "middle": [],
                        "last": "Alshomary",
                        "suffix": ""
                    },
                    {
                        "first": "Nick",
                        "middle": [],
                        "last": "D\u00fcsterhus",
                        "suffix": ""
                    },
                    {
                        "first": "Henning",
                        "middle": [],
                        "last": "Wachsmuth",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "43nd International ACM Conference on Research and Development in Information Retrieval, SIGIR '20",
                "volume": "",
                "issue": "",
                "pages": "1969--1972",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Milad Alshomary, Nick D\u00fcsterhus, and Henning Wachsmuth. 2020. Extractive snippet generation for arguments. In 43nd International ACM Conference on Research and Development in Information Retrieval, SIGIR '20, pages 1969-1972, New York, NY, USA. Association for Computing Machinery.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Extracting patterns and relations from the world wide web",
                "authors": [
                    {
                        "first": "",
                        "middle": [],
                        "last": "Sergey Brin",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Selected Papers from the International Workshop on The World Wide Web and Databases, WebDB '98",
                "volume": "",
                "issue": "",
                "pages": "172--183",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sergey Brin. 1998. Extracting patterns and relations from the world wide web. In Selected Papers from the In- ternational Workshop on The World Wide Web and Databases, WebDB '98, pages 172-183, Berlin, Heidelberg. Springer-Verlag.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Combining textual entailment and argumentation theory for supporting online debates interactions",
                "authors": [
                    {
                        "first": "Elena",
                        "middle": [],
                        "last": "Cabrio",
                        "suffix": ""
                    },
                    {
                        "first": "Serena",
                        "middle": [],
                        "last": "Villata",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
                "volume": "2",
                "issue": "",
                "pages": "208--212",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Elena Cabrio and Serena Villata. 2012. Combining textual entailment and argumentation theory for supporting online debates interactions. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 208-212. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Search Engines: Information Retrieval in Practice",
                "authors": [
                    {
                        "first": "Bruce",
                        "middle": [],
                        "last": "Croft",
                        "suffix": ""
                    },
                    {
                        "first": "Donald",
                        "middle": [],
                        "last": "Metzler",
                        "suffix": ""
                    },
                    {
                        "first": "Trevor",
                        "middle": [],
                        "last": "Strohman",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Bruce Croft, Donald Metzler, and Trevor Strohman. 2009. Search Engines: Information Retrieval in Practice. Addison-Wesley, USA, 1st edition.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "A corpus for modeling user and language effects in argumentation on online debating",
                "authors": [
                    {
                        "first": "Esin",
                        "middle": [],
                        "last": "Durmus",
                        "suffix": ""
                    },
                    {
                        "first": "Claire",
                        "middle": [],
                        "last": "Cardie",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "602--607",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Esin Durmus and Claire Cardie. 2019. A corpus for modeling user and language effects in argumentation on online debating. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 602-607, Florence, Italy, July. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Corpus for modeling user interactions in online persuasive discussions",
                "authors": [
                    {
                        "first": "Ryo",
                        "middle": [],
                        "last": "Egawa",
                        "suffix": ""
                    },
                    {
                        "first": "Gaku",
                        "middle": [],
                        "last": "Morio",
                        "suffix": ""
                    },
                    {
                        "first": "Katsuhide",
                        "middle": [],
                        "last": "Fujita",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
                "volume": "",
                "issue": "",
                "pages": "1135--1141",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ryo Egawa, Gaku Morio, and Katsuhide Fujita. 2020. Corpus for modeling user interactions in online persuasive discussions. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 1135-1141, Marseille, France, May. European Language Resources Association.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Ranit Aharonov, and Noam Slonim. 2020. A large-scale dataset for argument quality ranking: Construction and analysis",
                "authors": [
                    {
                        "first": "Shai",
                        "middle": [],
                        "last": "Gretz",
                        "suffix": ""
                    },
                    {
                        "first": "Roni",
                        "middle": [],
                        "last": "Friedman",
                        "suffix": ""
                    },
                    {
                        "first": "Edo",
                        "middle": [],
                        "last": "Cohen-Karlik",
                        "suffix": ""
                    },
                    {
                        "first": "Assaf",
                        "middle": [],
                        "last": "Toledo",
                        "suffix": ""
                    },
                    {
                        "first": "Dan",
                        "middle": [],
                        "last": "Lahav",
                        "suffix": ""
                    },
                    {
                        "first": "Ranit",
                        "middle": [],
                        "last": "Aharonov",
                        "suffix": ""
                    },
                    {
                        "first": "Noam",
                        "middle": [],
                        "last": "Slonim",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence",
                "volume": "",
                "issue": "",
                "pages": "7805--7813",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Shai Gretz, Roni Friedman, Edo Cohen-Karlik, Assaf Toledo, Dan Lahav, Ranit Aharonov, and Noam Slonim. 2020. A large-scale dataset for argument quality ranking: Construction and analysis. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, pages 7805-7813. AAAI.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Which argument is more convincing? Analyzing and predicting convincingness of web arguments using bidirectional lstm",
                "authors": [
                    {
                        "first": "Ivan",
                        "middle": [],
                        "last": "Habernal",
                        "suffix": ""
                    },
                    {
                        "first": "Iryna",
                        "middle": [],
                        "last": "Gurevych",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
                "volume": "1",
                "issue": "",
                "pages": "1589--1599",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ivan Habernal and Iryna Gurevych. 2016. Which argument is more convincing? Analyzing and predicting con- vincingness of web arguments using bidirectional lstm. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1589-1599. Association for Com- putational Linguistics.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Argumentation mining in user-generated web discourse",
                "authors": [
                    {
                        "first": "Ivan",
                        "middle": [],
                        "last": "Habernal",
                        "suffix": ""
                    },
                    {
                        "first": "Iryna",
                        "middle": [],
                        "last": "Gurevych",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Computational Linguistics",
                "volume": "43",
                "issue": "1",
                "pages": "125--179",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ivan Habernal and Iryna Gurevych. 2017. Argumentation mining in user-generated web discourse. Computational Linguistics, 43(1):125-179, April.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Using complex argumentative interactions to reconstruct the argumentative structure of large-scale debates",
                "authors": [
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Lawrence",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Reed",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the 4th Workshop on Argument Mining",
                "volume": "",
                "issue": "",
                "pages": "108--117",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "John Lawrence and Chris Reed. 2017. Using complex argumentative interactions to reconstruct the argumentative structure of large-scale debates. In Proceedings of the 4th Workshop on Argument Mining, pages 108-117, Copenhagen, Denmark, September. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Espresso: Leveraging generic patterns for automatically harvesting semantic relations",
                "authors": [
                    {
                        "first": "Patrick",
                        "middle": [],
                        "last": "Pantel",
                        "suffix": ""
                    },
                    {
                        "first": "Marco",
                        "middle": [],
                        "last": "Pennacchiotti",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "113--120",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Patrick Pantel and Marco Pennacchiotti. 2006. Espresso: Leveraging generic patterns for automatically harvesting semantic relations. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 113-120, Sydney, Australia, July. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Joint prediction in MST-style discourse parsing for argumentation mining",
                "authors": [
                    {
                        "first": "Andreas",
                        "middle": [],
                        "last": "Peldszus",
                        "suffix": ""
                    },
                    {
                        "first": "Manfred",
                        "middle": [],
                        "last": "Stede",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "938--948",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Andreas Peldszus and Manfred Stede. 2015. Joint prediction in MST-style discourse parsing for argumentation mining. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 938-948, Lisbon, Portugal, September. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Annotating argument components and relations in persuasive essays",
                "authors": [
                    {
                        "first": "Christian",
                        "middle": [],
                        "last": "Stab",
                        "suffix": ""
                    },
                    {
                        "first": "Iryna",
                        "middle": [],
                        "last": "Gurevych",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
                "volume": "",
                "issue": "",
                "pages": "1501--1510",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Christian Stab and Iryna Gurevych. 2014. Annotating argument components and relations in persuasive essays. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1501-1510. Dublin City University and Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Automatic argument quality assessment -New datasets and methods",
                "authors": [
                    {
                        "first": "Assaf",
                        "middle": [],
                        "last": "Toledo",
                        "suffix": ""
                    },
                    {
                        "first": "Shai",
                        "middle": [],
                        "last": "Gretz",
                        "suffix": ""
                    },
                    {
                        "first": "Edo",
                        "middle": [],
                        "last": "Cohen-Karlik",
                        "suffix": ""
                    },
                    {
                        "first": "Roni",
                        "middle": [],
                        "last": "Friedman",
                        "suffix": ""
                    },
                    {
                        "first": "Elad",
                        "middle": [],
                        "last": "Venezian",
                        "suffix": ""
                    },
                    {
                        "first": "Dan",
                        "middle": [],
                        "last": "Lahav",
                        "suffix": ""
                    },
                    {
                        "first": "Michal",
                        "middle": [],
                        "last": "Jacovi",
                        "suffix": ""
                    },
                    {
                        "first": "Ranit",
                        "middle": [],
                        "last": "Aharonov",
                        "suffix": ""
                    },
                    {
                        "first": "Noam",
                        "middle": [],
                        "last": "Slonim",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
                "volume": "",
                "issue": "",
                "pages": "5625--5635",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Assaf Toledo, Shai Gretz, Edo Cohen-Karlik, Roni Friedman, Elad Venezian, Dan Lahav, Michal Jacovi, Ranit Aharonov, and Noam Slonim. 2019. Automatic argument quality assessment -New datasets and methods. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5625-5635. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Common round: Application of language technologies to large-scale web debates",
                "authors": [
                    {
                        "first": "Hans",
                        "middle": [],
                        "last": "Uszkoreit",
                        "suffix": ""
                    },
                    {
                        "first": "Aleksandra",
                        "middle": [],
                        "last": "Gabryszak",
                        "suffix": ""
                    },
                    {
                        "first": "Leonhard",
                        "middle": [],
                        "last": "Hennig",
                        "suffix": ""
                    },
                    {
                        "first": "J\u00f6rg",
                        "middle": [],
                        "last": "Steffen",
                        "suffix": ""
                    },
                    {
                        "first": "Renlong",
                        "middle": [],
                        "last": "Ai",
                        "suffix": ""
                    },
                    {
                        "first": "Stephan",
                        "middle": [],
                        "last": "Busemann",
                        "suffix": ""
                    },
                    {
                        "first": "Jon",
                        "middle": [],
                        "last": "Dehdari",
                        "suffix": ""
                    },
                    {
                        "first": "Josef",
                        "middle": [],
                        "last": "Van Genabith",
                        "suffix": ""
                    },
                    {
                        "first": "Georg",
                        "middle": [],
                        "last": "Heigold",
                        "suffix": ""
                    },
                    {
                        "first": "Nils",
                        "middle": [],
                        "last": "Rethmeier",
                        "suffix": ""
                    },
                    {
                        "first": "Raphael",
                        "middle": [],
                        "last": "Rubino",
                        "suffix": ""
                    },
                    {
                        "first": "Sven",
                        "middle": [],
                        "last": "Schmeier",
                        "suffix": ""
                    },
                    {
                        "first": "Philippe",
                        "middle": [],
                        "last": "Thomas",
                        "suffix": ""
                    },
                    {
                        "first": "He",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "Feiyu",
                        "middle": [],
                        "last": "Xu",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "5--8",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hans Uszkoreit, Aleksandra Gabryszak, Leonhard Hennig, J\u00f6rg Steffen, Renlong Ai, Stephan Busemann, Jon De- hdari, Josef van Genabith, Georg Heigold, Nils Rethmeier, Raphael Rubino, Sven Schmeier, Philippe Thomas, He Wang, and Feiyu Xu. 2017. Common round: Application of language technologies to large-scale web debates. In Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 5-8, Valencia, Spain, April. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Computational argumentation quality assessment in natural language",
                "authors": [
                    {
                        "first": "Henning",
                        "middle": [],
                        "last": "Wachsmuth",
                        "suffix": ""
                    },
                    {
                        "first": "Nona",
                        "middle": [],
                        "last": "Naderi",
                        "suffix": ""
                    },
                    {
                        "first": "Yufang",
                        "middle": [],
                        "last": "Hou",
                        "suffix": ""
                    },
                    {
                        "first": "Yonatan",
                        "middle": [],
                        "last": "Bilu",
                        "suffix": ""
                    },
                    {
                        "first": "Vinodkumar",
                        "middle": [],
                        "last": "Prabhakaran",
                        "suffix": ""
                    },
                    {
                        "first": "Tim",
                        "middle": [
                            "Alberdingk"
                        ],
                        "last": "Thijm",
                        "suffix": ""
                    },
                    {
                        "first": "Graeme",
                        "middle": [],
                        "last": "Hirst",
                        "suffix": ""
                    },
                    {
                        "first": "Benno",
                        "middle": [],
                        "last": "Stein",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
                "volume": "1",
                "issue": "",
                "pages": "176--187",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Henning Wachsmuth, Nona Naderi, Yufang Hou, Yonatan Bilu, Vinodkumar Prabhakaran, Tim Alberdingk Thijm, Graeme Hirst, and Benno Stein. 2017a. Computational argumentation quality assessment in natural language. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguis- tics: Volume 1, Long Papers, pages 176-187. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Building an argument search engine for the web",
                "authors": [
                    {
                        "first": "Henning",
                        "middle": [],
                        "last": "Wachsmuth",
                        "suffix": ""
                    },
                    {
                        "first": "Martin",
                        "middle": [],
                        "last": "Potthast",
                        "suffix": ""
                    },
                    {
                        "first": "Khalid",
                        "middle": [],
                        "last": "Al-Khatib",
                        "suffix": ""
                    },
                    {
                        "first": "Yamen",
                        "middle": [],
                        "last": "Ajjour",
                        "suffix": ""
                    },
                    {
                        "first": "Jana",
                        "middle": [],
                        "last": "Puschmann",
                        "suffix": ""
                    },
                    {
                        "first": "Jiani",
                        "middle": [],
                        "last": "Qu",
                        "suffix": ""
                    },
                    {
                        "first": "Jonas",
                        "middle": [],
                        "last": "Dorsch",
                        "suffix": ""
                    },
                    {
                        "first": "Viorel",
                        "middle": [],
                        "last": "Morari",
                        "suffix": ""
                    },
                    {
                        "first": "Janek",
                        "middle": [],
                        "last": "Bevendorff",
                        "suffix": ""
                    },
                    {
                        "first": "Benno",
                        "middle": [],
                        "last": "Stein",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the 4th Workshop on Argument Mining",
                "volume": "",
                "issue": "",
                "pages": "49--59",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Henning Wachsmuth, Martin Potthast, Khalid Al-Khatib, Yamen Ajjour, Jana Puschmann, Jiani Qu, Jonas Dorsch, Viorel Morari, Janek Bevendorff, and Benno Stein. 2017b. Building an argument search engine for the web. In Proceedings of the 4th Workshop on Argument Mining, pages 49-59. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "A corpus for research on deliberation and debate",
                "authors": [
                    {
                        "first": "Marilyn",
                        "middle": [],
                        "last": "Walker",
                        "suffix": ""
                    },
                    {
                        "first": "Jean",
                        "middle": [
                            "Fox"
                        ],
                        "last": "Tree",
                        "suffix": ""
                    },
                    {
                        "first": "Pranav",
                        "middle": [],
                        "last": "Anand",
                        "suffix": ""
                    },
                    {
                        "first": "Rob",
                        "middle": [],
                        "last": "Abbott",
                        "suffix": ""
                    },
                    {
                        "first": "Joseph",
                        "middle": [],
                        "last": "King",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)",
                "volume": "",
                "issue": "",
                "pages": "812--817",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Marilyn Walker, Jean Fox Tree, Pranav Anand, Rob Abbott, and Joseph King. 2012. A corpus for research on deliberation and debate. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 812-817, Istanbul, Turkey, May. European Language Resources Association (ELRA).",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF1": {
                "num": null,
                "text": "Conceptual process of our semi-supervised bootstrapping approach: (a) Seed (ir)relevance patterns are selected manually from intially mined candidates. (b) New (ir)relevance patterns are mined and filtered automatically from text units matching the existing patterns, until no new patterns are found or k iterations have passed. (c) The corpus is cleaned by removing units matching the irrelevance patterns.",
                "type_str": "figure",
                "uris": null
            },
            "FIGREF2": {
                "num": null,
                "text": "(a) Histograms of the number of texts in the args.me corpus with a certain a number of irrelevant sentences, as detected by our approach. (b) Histogram of the number of detected (upper number) and removed (lower number) irrelevant sentences over the different sentence positions of a text.",
                "type_str": "figure",
                "uris": null
            },
            "TABREF0": {
                "content": "<table><tr><td>Irrelevant #1</td></tr><tr><td>Relevant #2 (non-argumentat.)</td></tr><tr><td>Relevant #3-5 (argumentative)</td></tr><tr><td>Relevant #6-8 (argumentative)</td></tr><tr><td>Relevant #9-13 (argumentative)</td></tr><tr><td>Irrelevant #14</td></tr></table>",
                "num": null,
                "text": "Irrelevant #15https://www.debate.org/debates/Gay-Marriage/75/",
                "html": null,
                "type_str": "table"
            },
            "TABREF3": {
                "content": "<table/>",
                "num": null,
                "text": "",
                "html": null,
                "type_str": "table"
            }
        }
    }
}