{
    "paper_id": "2021",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T02:09:51.619801Z"
    },
    "title": "M-Arg: Multimodal Argument Mining Dataset for Political Debates with Audio and Transcripts",
    "authors": [
        {
            "first": "Rafael",
            "middle": [],
            "last": "Mestre",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Southampton",
                "location": {}
            },
            "email": "r.mestre@soton.ac.uk"
        },
        {
            "first": "Razvan",
            "middle": [],
            "last": "Milicin",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Southampton",
                "location": {}
            },
            "email": ""
        },
        {
            "first": "Stuart",
            "middle": [
                "E"
            ],
            "last": "Middleton",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Southampton",
                "location": {}
            },
            "email": ""
        },
        {
            "first": "Matt",
            "middle": [],
            "last": "Ryan",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Southampton",
                "location": {}
            },
            "email": "m.ryan@soton.ac.uk"
        },
        {
            "first": "Jiatong",
            "middle": [],
            "last": "Zhu",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Southampton",
                "location": {}
            },
            "email": ""
        },
        {
            "first": "Timothy",
            "middle": [
                "J"
            ],
            "last": "Norman",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Southampton",
                "location": {}
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "Argumentation mining aims at extracting, analysing and modelling people's arguments, but large, high-quality annotated datasets are limited, and no multimodal datasets exist for this task. In this paper, we present M-Arg, a multimodal argument mining dataset with a corpus of US 2020 presidential debates, annotated through crowd-sourced annotations. This dataset allows models to be trained to extract arguments from natural dialogue such as debates using information like the intonation and rhythm of the speaker. Our dataset contains 7 hours of annotated US presidential debates, 6527 utterances and 4104 relation labels, and we report results from different baseline models, namely a text-only model, an audio-only model and multimodal models that extract features from both text and audio. With accuracy reaching 0.86 in multimodal models, we find that audio features provide added value with respect to text-only models.",
    "pdf_parse": {
        "paper_id": "2021",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "Argumentation mining aims at extracting, analysing and modelling people's arguments, but large, high-quality annotated datasets are limited, and no multimodal datasets exist for this task. In this paper, we present M-Arg, a multimodal argument mining dataset with a corpus of US 2020 presidential debates, annotated through crowd-sourced annotations. This dataset allows models to be trained to extract arguments from natural dialogue such as debates using information like the intonation and rhythm of the speaker. Our dataset contains 7 hours of annotated US presidential debates, 6527 utterances and 4104 relation labels, and we report results from different baseline models, namely a text-only model, an audio-only model and multimodal models that extract features from both text and audio. With accuracy reaching 0.86 in multimodal models, we find that audio features provide added value with respect to text-only models.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Understanding and modelling argumentation is a recent and key challenge in Natural Language Processing (NLP). Most work addressing this task has focused on extracting arguments from argumentative essays (Stab and Gurevych, 2017) , social networks like Twitter (Bosc et al., 2016) or online reviews (Cocarascu and Toni, 2018) and discussions (Habernal and Gurevych, 2017) , and not much attention has been paid to mining arguments in natural dialogue. The two most common research questions consider how argumentative relations between units (e.g. support or attack) are annotated or how claims and/or premises are identified (Lawrence and Reed, 2019) . We offer, to the best of our knowledge, the first multimodal argumentation mining dataset (M-Arg) of political debates annotated for such argumentative relations of support and attack, using crowd-sourcing techniques. Our contributions are: i) to provide a high quality annotated dataset of political debates with audio and time-stamped transcripts for multimodal argumentation mining; ii) to offer benchmark model results for the research community; and iii) a comparative analysis of the value that multi-modal models bring compared to text-only and audio-only models (Section 5).",
                "cite_spans": [
                    {
                        "start": 203,
                        "end": 228,
                        "text": "(Stab and Gurevych, 2017)",
                        "ref_id": "BIBREF28"
                    },
                    {
                        "start": 260,
                        "end": 279,
                        "text": "(Bosc et al., 2016)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 298,
                        "end": 324,
                        "text": "(Cocarascu and Toni, 2018)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 341,
                        "end": 370,
                        "text": "(Habernal and Gurevych, 2017)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 625,
                        "end": 650,
                        "text": "(Lawrence and Reed, 2019)",
                        "ref_id": "BIBREF19"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The dataset is derived from a collection of US 2020 presidential debates. Five debates were used with the principal speakers being Donald Trump, Joe Biden, Mike Pence and Kamala Harris, and a moderator (Table 1) . In three of the debates the candidates spoke only with each other and the moderator, while in the remaining two they interacted with the audience in so-called Town Hall events. The lengths of the audio files ranged from approximately 1 hour to 1 hour 35 minutes. The debates were tokenised by sentences or utterances, with 6527 in total. The relationship between pairs of sentences were then classified by crowd-workers as support, attack or neither using the annotation scheme proposed by Carstens and Toni (2015) (Section 3) . The crowd-workers were presented with the sentence pair along with a small extract from the debate to provide context. The resulting dataset consists of 4104 pairs of sentences with the argumentative relationship between them classified, along with features such as the trustworthiness of the crowd-workers, the level of agreement between crowd-workers, and their self-confidence scores (Section 4). Prior to giving details of our methodology, the dataset and comparative analysis, we provide a brief review of related research.",
                "cite_spans": [
                    {
                        "start": 704,
                        "end": 728,
                        "text": "Carstens and Toni (2015)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 729,
                        "end": 740,
                        "text": "(Section 3)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 202,
                        "end": 211,
                        "text": "(Table 1)",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Much of the research in argumentation mining has been dedicated to the identification of argumentative discourse units (ADUs) like claims, major claims and premises. For instance, in a first iteration, Stab and Gurevych (2014) Table 1 : Description of the five debates used in the dataset. The column \"Split\" indicates the number of sub-files in which the audio was split.",
                "cite_spans": [
                    {
                        "start": 202,
                        "end": 226,
                        "text": "Stab and Gurevych (2014)",
                        "ref_id": "BIBREF27"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 227,
                        "end": 234,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Related work",
                "sec_num": "2"
            },
            {
                "text": "persuasive essays with 1673 sentences and 1552 argumentative units. Then, they extended their dataset to 402 essays, achieving a total of 7116 sentences and 6089 argumentative components (Stab and Gurevych, 2017) . Carstens and Toni (2015) advocate a relation-based approach towards argumentation mining. Instead of separating the issue of identifying argumentative units and their relation, they reconstitute the task as one of classifying the relationship between sentence pairs as support, attack or neither. They argue that this relation depends upon the context of the discussion. We take the same approach. There are, however, few datasets for relation-based argumentation mining (Paul et al., 2020) . Carstens and Toni (2015) , for example, annotate 854 pairs of sentences for support/attack without identifying the arguments first. Likewise, the DART dataset (Bosc et al., 2016) consists of 4000 tweets, 446 support relations and 112 attack relations, and Stab and Gurevych (2017) annotate 3616 supports and 219 attack relations in their second version of their essay dataset. While certain tasks in argument mining have been applied in other disciplines, interdisciplinary approaches are important for the impact of these methods to be fully realised. Some research in political science has started to bridge the gap in tasks like identifying emotion rhetoric (Osnabr\u00fcge et al., 2021) , gender and emotional expression in politics (Boussalis et al., 2021) , emotional mining in political campaigns (Greco and Polli, 2020) , lexicometrics of Euromanifestos (Jadot and Kelbel, 2017) , and, from the AI perspective, ethos mining (Duthie and Budzynska, 2018) using Hansard as a dataset. Argument mining in political debate is, however, still largely to be explored, although Visser et al. (2021) provide an annotation of 2016 US presidential debates with argument types. Benoit et al. (2016) have advocated for the use of crowd-sourced text analysis for political science, finding high levels of agreement and reproducibility between crowd-workers and experts. However, in subjective tasks like identifying support/attack relations, lower levels of agreement are expected. For instance, Faulkner (2014) used Amazon Mechanical Turk (AMT) to annotate 8176 sentences with \"for\", \"against\" or \"neutral\", achieving about 66% of neutral cases and a Cohen's \u03ba of 0.70. Al-Khatib et al. (2020) also used AMT to obtain 16429 labels of different types, including 1736 \"relation\" labels, defined in their case as \"positive\", \"negative\" or \"no-argument\", with \u03ba = 0.51.",
                "cite_spans": [
                    {
                        "start": 187,
                        "end": 212,
                        "text": "(Stab and Gurevych, 2017)",
                        "ref_id": "BIBREF28"
                    },
                    {
                        "start": 215,
                        "end": 239,
                        "text": "Carstens and Toni (2015)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 686,
                        "end": 705,
                        "text": "(Paul et al., 2020)",
                        "ref_id": "BIBREF26"
                    },
                    {
                        "start": 708,
                        "end": 732,
                        "text": "Carstens and Toni (2015)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 867,
                        "end": 886,
                        "text": "(Bosc et al., 2016)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 964,
                        "end": 988,
                        "text": "Stab and Gurevych (2017)",
                        "ref_id": "BIBREF28"
                    },
                    {
                        "start": 1369,
                        "end": 1393,
                        "text": "(Osnabr\u00fcge et al., 2021)",
                        "ref_id": "BIBREF25"
                    },
                    {
                        "start": 1440,
                        "end": 1464,
                        "text": "(Boussalis et al., 2021)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 1507,
                        "end": 1530,
                        "text": "(Greco and Polli, 2020)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 1565,
                        "end": 1589,
                        "text": "(Jadot and Kelbel, 2017)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 1635,
                        "end": 1663,
                        "text": "(Duthie and Budzynska, 2018)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 1780,
                        "end": 1800,
                        "text": "Visser et al. (2021)",
                        "ref_id": "BIBREF30"
                    },
                    {
                        "start": 1876,
                        "end": 1896,
                        "text": "Benoit et al. (2016)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 2367,
                        "end": 2390,
                        "text": "Al-Khatib et al. (2020)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related work",
                "sec_num": "2"
            },
            {
                "text": "These datasets focus exclusively on text, and, as far as we can tell, there is not much argument mining research using multiple modalities such as both text and audio, particularly focusing on identifying support or attack relations between ADUs. There are, however, some datasets that could be used by the community for this task. For instance, Mirkin et al. (2019) and Orbach et al. (2020b) provide datasets of debate speeches with transcriptions that could help in the extraction of arguments. Likewise, Mirkin et al. (2020) and Orbach et al. (2020a) come closer to argumentation mining research offering datasets of argumentative content and general-purpose rebuttal in speeches. Also, Kopev et al. (2019) use audio and transcripts of political debates to detect deception. Other research explores emotion recognition or sentiment analysis using the IEMOCAP dataset, which contains text, audio and video with emotion annotations (Busso et al., 2008; Cai et al., 2019) . Classic NLP models for relation classification have relied on bag of words (BoW) approaches with common classifiers like random forests, support vector machines or na\u00efve Bayes (Carstens and Toni, 2017) , although more recently LSTMs and Bi-LSTMs have been used with good results (Cocarascu and Toni, 2018) . Some efforts are being devoted to the use of background knowledge or context. For instance, Paul et al. (2020) proposed Bi-LSTM encoders with self-attention, together with commonsense knowledge extraction. The use of both textual and audio features for the identification of argumentative relations, with approaches similar to those used in multimodal emotion recognition, seems to be mostly unexplored.",
                "cite_spans": [
                    {
                        "start": 346,
                        "end": 366,
                        "text": "Mirkin et al. (2019)",
                        "ref_id": "BIBREF21"
                    },
                    {
                        "start": 371,
                        "end": 392,
                        "text": "Orbach et al. (2020b)",
                        "ref_id": "BIBREF24"
                    },
                    {
                        "start": 507,
                        "end": 527,
                        "text": "Mirkin et al. (2020)",
                        "ref_id": "BIBREF22"
                    },
                    {
                        "start": 532,
                        "end": 553,
                        "text": "Orbach et al. (2020a)",
                        "ref_id": "BIBREF23"
                    },
                    {
                        "start": 690,
                        "end": 709,
                        "text": "Kopev et al. (2019)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 933,
                        "end": 953,
                        "text": "(Busso et al., 2008;",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 954,
                        "end": 971,
                        "text": "Cai et al., 2019)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 1150,
                        "end": 1175,
                        "text": "(Carstens and Toni, 2017)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 1253,
                        "end": 1279,
                        "text": "(Cocarascu and Toni, 2018)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 1374,
                        "end": 1392,
                        "text": "Paul et al. (2020)",
                        "ref_id": "BIBREF26"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related work",
                "sec_num": "2"
            },
            {
                "text": "Argument mining of political debates can be seen as a long conversation text classification problem where context matters. Unlike the well studied a) b) c) problem area of single and short utterance classification (e.g. 2-3 utterances), dialogue modelling and classification of longer conversations has received little attention to date (Xu et al., 2021) . Recent approaches to handle long sequence classification include augmented transformer models with information retrieval (IR) or summarisation models (Xu et al. 2021; Tigunova et al. 2020) . We constrain ourselves in this paper to providing results for a set of short conversation classification baseline models as we want to focus on showing the value of using multimodal data. However, we expect recent advances in long conversation classification models to yield good results with our dataset in the future.",
                "cite_spans": [
                    {
                        "start": 337,
                        "end": 354,
                        "text": "(Xu et al., 2021)",
                        "ref_id": "BIBREF31"
                    },
                    {
                        "start": 507,
                        "end": 523,
                        "text": "(Xu et al. 2021;",
                        "ref_id": "BIBREF31"
                    },
                    {
                        "start": 524,
                        "end": 545,
                        "text": "Tigunova et al. 2020)",
                        "ref_id": "BIBREF29"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related work",
                "sec_num": "2"
            },
            {
                "text": "The original source of the M-Arg dataset was available as audio tracks with transcripts from a Kaggle competition. 1 This public-domain dataset was originally constructed by downloading audio from YouTube and transcripts from Rev, 2 as explained in the source metadata. The M-Arg dataset with annotations, full transcripts and audio files, source code and model checkpoints for reproducibility is available online in our GitHub repository. 3",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methodology",
                "sec_num": "3"
            },
            {
                "text": "1 The source materials can be found in https://www.kaggle.com/headsortails/ us-election-2020-presidential-debates as of August 9th, 2021. The version used was v. 7.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methodology",
                "sec_num": "3"
            },
            {
                "text": "2 https://www.rev.com 3 https://github.com/rafamestre/m-arg_ multimodal-argumentation-dataset",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methodology",
                "sec_num": "3"
            },
            {
                "text": "The original data was presented as audio in .mp3 files and transcriptions in both .txt and .csv files. The .csv files contained three columns: speaker, minute, and text. Since the timestamps did not align perfectly to the audio clips, we performed our own tokenisation and text-audio alignment. The M-Arg dataset associates each sentence with a matched timestamp in the corresponding debate audio file. To do this, each text was split into utterances, defined as single sentences 4 . Visual inspection revealed the transcriptions to be grammatically correct, with no apparent typos and proper use of punctuation, and so automatic sentence-level tokenization performed well. The utterances were then force-aligned to the audio using the web application of the aeneas tool 5 , obtaining new timestamps. The source audio files were split into different files to comply with the file size limit for the force alignment and to avoid segments where the debate was starting, finishing or going to a break, applause, music, etc. Table 1 summarises the datasets.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 1021,
                        "end": 1028,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Data overview",
                "sec_num": "3.1"
            },
            {
                "text": "Across the five debates, Donald Trump and Joe Biden spoke the most, as can be seen in Figure  1 together, spoke a roughly similar number of utterances. Figure 1 (b), however, indicates that the main participants in the debate did not speak in a similar manner. The violin plots show the probability density of the average number of words per sentence, and we can observe that Trump spoke sentences of a smaller average length than the rest of the participants. Finally, 1(c) shows the most common words (after removing stop-words) in a stacked barplot according to the speaker (removing moderators and audience members), with certain differences in the usage of words. These results indicate potentially important differences in communication strategies and styles.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 86,
                        "end": 95,
                        "text": "Figure  1",
                        "ref_id": "FIGREF0"
                    },
                    {
                        "start": 152,
                        "end": 160,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Data overview",
                "sec_num": "3.1"
            },
            {
                "text": "The M-Arg dataset consists of 4104 labelled pairs of sentences selected from the debates. Sections of the debates were manually labelled by the authors for their \"topic\", following the explanations of the moderator introducing each section, obtaining high level classifications like \"foreign policy\". Excerpts of 15 sentences were randomly selected (the \"context\") and a pair of sentences within the context were chosen to classify their relation (with their distance weighted by a Gaussian distribution to ensure they were close enough). Approximately 1500 sentences were forced to be from different speakers, to balance the dataset by increasing the possibility of finding attack relations. More details on the pair generation strategy and codes can be found in the repository alongside the dataset 6 .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pairs creation",
                "sec_num": "3.2"
            },
            {
                "text": "The annotation scheme was based on the relationbased argumentation scheme from Carstens and Toni (2015) . They argue that an argumentative relation of support or attack is highly dependent on the context. Carstens and Toni (2015) suggest starting from a root claim to construct pairs or match sentences containing the same entities, but we chose to divide them into topics and weight them by distance as explained in Section 3.2. We presented the crowd-workers with a pair of sentences along with the labelled topic of discussion (e.g. \"families\" or \"climate change\"), as well as a short 15-sentence extract of the dialogue surrounding these sentences as context. The crowd-workers are asked to use this context, as well as any personal knowledge, to classify the argumentative relation as support, attack or neither, to the best of their ability. By not relying only on the surface meaning of the sentences, we open the way for the use of this dataset in more complex scenarios. For instance, it could be applied together with long-or short-text summarisation to take into account the context in a dialogue (Xu et al., 2021) or knowledge-based models linked to databases or fact-checking websites (Paul et al., 2020) . Consider the following pair:",
                "cite_spans": [
                    {
                        "start": 79,
                        "end": 103,
                        "text": "Carstens and Toni (2015)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 1108,
                        "end": 1125,
                        "text": "(Xu et al., 2021)",
                        "ref_id": "BIBREF31"
                    },
                    {
                        "start": 1198,
                        "end": 1217,
                        "text": "(Paul et al., 2020)",
                        "ref_id": "BIBREF26"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Annotation scheme",
                "sec_num": "3.3"
            },
            {
                "text": "\u2022 Joe Biden: It's criminal.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Annotation scheme",
                "sec_num": "3.3"
            },
            {
                "text": "\u2022 Donald Trump: They are so well taken care of.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Annotation scheme",
                "sec_num": "3.3"
            },
            {
                "text": "At a first glance, it is not possible to know what Biden and Trump are talking about. We might assume that the relationship is attack, but this would be a big assumption only based on the fact that they are opposing candidates. Reading the context, we a) b) c) d) might find out that they are actually talking about the infamous controversy of the U.S. Immigration and Customs Enforcement's (ICE) camps, where children were put in cages and separated from their parents. Joe Biden claims that what the Trump administration has done is criminal. Trump answers by saying that they (the children) are \"so well taken care of\" because reporters went there and saw that the facilities were very clean. The argumentative relation between these two sentences is clearly an attack. Indeed, out of the 81 annotators who classified this test question, 84% of them agreed that this was an attack relation. Work on context summarisation or knowledge extraction could help train models to understand why this was an attack. In our dataset, ADUs consist of sentences fully delimited by periods, but in many cases they will not (Lawrence and Reed, 2019) . They might span several sentences or even be one of the clauses within a sentence. Likewise, during a heated debate, the structure of the argument might not be easily identifiable:",
                "cite_spans": [
                    {
                        "start": 1112,
                        "end": 1137,
                        "text": "(Lawrence and Reed, 2019)",
                        "ref_id": "BIBREF19"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Annotation scheme",
                "sec_num": "3.3"
            },
            {
                "text": "\u2022 Joe Biden: We learned that this president paid 50 times the tax in China as a secret bank account with China, does business in China, and in fact, is talking about me taking money?",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Annotation scheme",
                "sec_num": "3.3"
            },
            {
                "text": "\u2022 Joe Biden: What are you hiding?",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Annotation scheme",
                "sec_num": "3.3"
            },
            {
                "text": "This pair might be interpreted together as a legitimate question being raised to attack the ethos of Donald Trump. However, in and of itself, the second sentence can be taken as a rhetorical way of claiming that Trump is hiding something. In this context, the first sentence is supporting this claim. Annotators did well in this task, with 83% of judgements of this test question correctly labelling it as support.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Annotation scheme",
                "sec_num": "3.3"
            },
            {
                "text": "For the annotation task, the platform Appen was used. 7 Crowd-workers were presented with a pair of sentences, topic and context. They were asked to classify the argumentative relation and report their confidence on a Likert scale, ranging from 1: \"not confident at all\", to 5: \"very confident\". Each worker was then paid per \"page\" of work completed, with each page containing between 4 and 6 tasks. To ensure accuracy in the annotations, the contributors were quizzed at the beginning and during the annotations (once per page) with test (or gold) questions. The trust in the annotator was thus defined as the percentage of test questions that they answered correctly, and we set an accuracy threshold of 81%. Other quality settings were enabled, such as: minimum time spent per page to 90 seconds and no more than 60% of supports and 35% of attacks classified. If the annotators did not meet any of these standards, their judgements were not used. Dynamic judgements, that could range from 3 to 7 annotations if the agreement in the annotation was below 70%, were also enabled to improve the agreement of each annotation. A total of 101 test questions were used in this annotation and 104 reliable workers participated, out of 287 that attempted it. Overall, considering the quality settings (e.g. dynamic judgements, tainted answers), 21646 reliable annotations were collected (5746 belonging to gold questions and 15900 to random pairs), and a separate 1663 annotations were rejected. 8",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Crowd-worker annotation",
                "sec_num": "3.4"
            },
            {
                "text": "The M-Arg dataset consists of a total of 4104 pairs of sentences (including golden ones), of which 384 are support relations, 120 are attack relations and 3600 are neither support nor attack, as shown in Figure 2(a) . Despite efforts to increase the number of support/attack relations, as explained in Section 3.2, the dataset is imbalanced towards the neither side. This is nevertheless expected, as most of the utterances during a debate are not argumentative in nature. Eighteen different topics were identified in the debates, with the most common ones being \"COVID\", \"Racism\", \"Climate change\" and \"Economy\". Figure 2b shows the total number of utterances from each topic throughout the whole dataset. 9 Some topics, such as \"LGBTQ\" or \"Leadership\" had very few instances, since they were only discussed briefly in one of the debates. Many of these topics, however, could be combined, such as \"Taxes\" and \"Economy\", as desired. For each topic we can see in Figure 2 (c) the distribution of argumentative relations that were annotated. Topics such as \"Foreign Policy\" or \"Taxes\" did not contain attack relations, most likely due to the fact that those sections were small.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 204,
                        "end": 215,
                        "text": "Figure 2(a)",
                        "ref_id": "FIGREF1"
                    },
                    {
                        "start": 614,
                        "end": 623,
                        "text": "Figure 2b",
                        "ref_id": "FIGREF1"
                    },
                    {
                        "start": 962,
                        "end": 970,
                        "text": "Figure 2",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Description and relevant examples",
                "sec_num": "4.1"
            },
            {
                "text": "Whether an argument is supporting or attacking a claim is a subjective matter. Philosophy of argumentation has attempted to establish more or less general argumentation frameworks with different categorisations. However, it is almost certain that thresholds for what quality of information supports or attacks an argument, or judgements on whether such argument is sufficiently valid or not vary by person and context. Our annotated dataset, thus, provides a collective representation of how people reason and understand arguments, and a large number of disagreements are expected. Indeed, the prevalence of fake news or fallacies, or even reasonable disagreements over interpretation of values and inferences in everyday political discourse, has shown us that the same premises can be deemed supports or attacks in different contexts. As we cannot expect that everyone thinks of arguments or fallacies in the same way, the annotation task needs to be accessible and understandable but still closely guided and validated by the theoretical frameworks to reflect informed but real interpre-tations of support and attack in open dialogue in political domains. In the instructions for the annotation task, the contributors were asked to focus on whether a sentence provided a reason that supported or attacked its counterpart, in order to avoid confusing an attack towards something/someone with an attack towards a claim. Consider the following example in which Biden is not providing any reason to support his claim, although it was meant to attack Trump. Many people would interpret this as attack (as many annotators did):",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description and relevant examples",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 Donald Trump: There's abuse, tremendous abuse.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description and relevant examples",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 Joe Biden: Simply not true.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description and relevant examples",
                "sec_num": "4.1"
            },
            {
                "text": "This case was correctly labeled as neither, but only with an agreement score of 56%. In other cases, support was simply confused with repetition:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description and relevant examples",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 Joe Biden And what's happening is too many transgender women of color are being murdered.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description and relevant examples",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 Joe Biden: They're being murdered.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description and relevant examples",
                "sec_num": "4.1"
            },
            {
                "text": "This was annotated as support by three crowdworkers, but this is simply coherence between the sentences or a simple reiteration of a claim. This subjectivity is summarised in Figure 3(a)-(c) , which shows the agreement score, the trust in the annotator and the self-confidence score for each label. In general, crowd-workers labeled relations independently of their trust score and their selfconfidence score. However, attack relations were more controversial with 25% of annotations above 0.87, whereas for support and neither at least 25% of annotations had an agreement of 1. Overall, the average agreement was 0.87 and the median agreement 1.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 175,
                        "end": 190,
                        "text": "Figure 3(a)-(c)",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Description and relevant examples",
                "sec_num": "4.1"
            },
            {
                "text": "The presence of subjectivity leads us to evaluate the agreement among crowd-workers (also known as intern-annotator agreement) using Krippendorf's \u03b1 (Krippendorff, 1980) . This agreement score allows for a variable number of annotations in each instance, with an unspecified number of crowdworkers that do not necessarily need to annotate every single instance, making it suitable for our case. Considering all the annotations, we obtained \u03b1 = 0.43. Considering that the distribution of annotations show that some contributors annotated many sentences, while others very few ( Figure  3(d) ), we filtered by the most diligent workers but found no significant change in \u03b1. 10 However, given that annotators are assigned a trust score and they annotated with different self-confidence, we calculated different \u03b1's by filtering by these values. Table 2 shows the \u03b1 scores when we filter by crowd-worker trust rating for all annotations (left) and only for those that were annotated with the maximum confidence (right). We can see that the crowd-worker agreement increases up to 0.53 when we filter annotators with higher trust (although it drops again for the maximum trust), however at the cost of decreasing the number of workers and annotations in the dataset. Likewise, if we only consider those annotations that were provided with 10 Krippendorff's \u03b1 was calculated with the nltk.metrics.agreement module (v. 3.6). Results were doublechecked with the krippendorff module, https://github. com/pln-fing-udelar/fast-krippendorff, which yielded almost identical results. high certainty, we see an overall large value of 0.57, going up to 0.79 if we also filter for high trust workers. Other reports in the literature classifying argumentative relations have yielded Cohen's \u03ba = 0.70 (Faulkner, 2014) , \u03ba = 0.51 (Al-Khatib et al., 2020) , \u03b1 = 0.67 (Bosc et al., 2016) , \u03b1 = 0.81 (Stab and Gurevych, 2014) .",
                "cite_spans": [
                    {
                        "start": 149,
                        "end": 169,
                        "text": "(Krippendorff, 1980)",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 1781,
                        "end": 1797,
                        "text": "(Faulkner, 2014)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 1809,
                        "end": 1833,
                        "text": "(Al-Khatib et al., 2020)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 1845,
                        "end": 1864,
                        "text": "(Bosc et al., 2016)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 1876,
                        "end": 1901,
                        "text": "(Stab and Gurevych, 2014)",
                        "ref_id": "BIBREF27"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 577,
                        "end": 589,
                        "text": "Figure  3(d)",
                        "ref_id": "FIGREF2"
                    },
                    {
                        "start": 842,
                        "end": 849,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Crowd-worker agreement",
                "sec_num": "5.1"
            },
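As a minimal sketch of how the \u03b1 scores above can be reproduced, the snippet below computes Krippendorff's \u03b1 with the nltk.metrics.agreement module named in the text; the (coder, item, label) triples and the trust-based filtering threshold are hypothetical stand-ins for the released annotation data.

```python
# A minimal sketch, assuming annotations are available as
# (coder, item, label) triples; the example data below is hypothetical.
from nltk.metrics.agreement import AnnotationTask

triples = [
    ("worker_1", "pair_001", "support"),
    ("worker_2", "pair_001", "support"),
    ("worker_3", "pair_001", "neither"),
    ("worker_1", "pair_002", "attack"),
    ("worker_2", "pair_002", "attack"),
    ("worker_3", "pair_002", "neither"),
]

# nltk treats the labels as nominal by default, which matches the
# attack/support/neither scheme; alpha() tolerates missing annotations.
task = AnnotationTask(data=triples)
print(f"Krippendorff's alpha: {task.alpha():.2f}")

# Filtering by a per-worker trust score (as in Table 2) then amounts to
# dropping triples whose coder falls below a chosen threshold, e.g.:
trust = {"worker_1": 0.9, "worker_2": 0.8, "worker_3": 0.6}  # hypothetical
filtered = [t for t in triples if trust[t[0]] >= 0.7]
print(f"alpha (trust >= 0.7): {AnnotationTask(data=filtered).alpha():.2f}")
```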
            {
                "text": "It might seem surprising to obtain a low crowdworker agreement, given that the average agreement in the annotation was 85%, as mentioned above. These numbers, however, need to be considered with care. Krippendorff's \u03b1 measures disagreement beyond that expected by chance, but our data is not balanced, so our labels are not equally probable. It has been observed that Krippendorff's \u03b1 can be heavily attenuated in imbalanced datasets (Jeni. et al., 2013) . Indeed, if we sub-sample our dataset for 100 attack, support and neither relations, we find \u03b1 = 0.540 \u00b1 0.015 (standard deviation after 10 trials). If we sub-sample it unbalanced, with 10 attack, 10 support and 1000 neither, we find \u03b1 = 0.170 \u00b1 0.047 (standard deviation after 10 trials). This is a big difference in values, even though the source data is the same. In any case, given the subjectivity of the task, we do not believe a small \u03b1 to be necessarily a bad result, since many judgements might not lend an obvious collective answer and, most importantly, people might believe one instance is a supportive argument, while others believe it is not an argument at all. We believe there is significant value in these unclear annotations, as they give insight into how people understand the arguments put forward in political debate.",
                "cite_spans": [
                    {
                        "start": 434,
                        "end": 454,
                        "text": "(Jeni. et al., 2013)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Crowd-worker agreement",
                "sec_num": "5.1"
            },
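The balanced-versus-unbalanced comparison can be replicated with a small sub-sampling loop. A sketch under the assumption that annotations are stored per item; the container names items_by_label and annotations are hypothetical:

```python
# A sketch of the sub-sampling experiment, assuming two hypothetical
# containers: items_by_label (label -> list of item ids, by majority label)
# and annotations (item id -> list of (coder, label) pairs).
import random
from statistics import mean, stdev
from nltk.metrics.agreement import AnnotationTask

def subsampled_alpha(items_by_label, annotations, counts, trials=10):
    alphas = []
    for _ in range(trials):
        sample = [item
                  for label, n in counts.items()
                  for item in random.sample(items_by_label[label], n)]
        triples = [(coder, item, lab)
                   for item in sample
                   for coder, lab in annotations[item]]
        alphas.append(AnnotationTask(data=triples).alpha())
    return mean(alphas), stdev(alphas)

# Balanced, as in the text: 100 attack, 100 support, 100 neither.
# print(subsampled_alpha(items_by_label, annotations,
#                        {"attack": 100, "support": 100, "neither": 100}))
# Unbalanced: 10 attack, 10 support, 1000 neither.
# print(subsampled_alpha(items_by_label, annotations,
#                        {"attack": 10, "support": 10, "neither": 1000}))
```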
            {
                "text": "To measure the quality of our corpus and study the potential added value of audio features in argumentation mining, we evaluated the performance of different classification models based on a multimodal model, as well as text-only and audio-only models (Figure 4(a) ). First, the input pair of sentences were split into audio and text. In the multimodal model, each sentence pair was passed through an audio and text module and their outputs concatenated, passed through a 100-unit middle layer and a 3-output classification layer. In the text-only and audio-only models, the sentences were only passed through the text or audio module, respectively, and the middle and classification layers were the same. The audio module shown in Figure 4 (b) was based on a previous model by Cai et al. (2019) for multimodal emotion recognition and consisted of a feature extraction module followed by a CNN in parallel with a Bi-LSTM, chosen to maximise the extraction of local and global features. The textonly module Figure 4 (c) consisted of a BERT preprocessor and a BERT encoding of L=12 hidden layers (i.e., Transformer blocks), a hidden size of H=768, and A=12 attention heads. The missing dropout rates can be found in Table 3 . Audio feature extraction was performed using the Python module \"librosa\" (McFee et al., 2015) . The features were: Mel-frequency cepstral coeffi-cients (MFCCs), which are widely used features for characterising and detecting voice signals (Klapuri and Davy, 2006) ; several spectral features like spectral centroids (Klapuri and Davy, 2006) , spectral bandwidth (Klapuri and Davy, 2006), spectral roll-off (McFee et al., 2015) and spectral contrast (Jiang et al., 2002) ; and a 12-bit chroma vector (McFee et al., 2015) . For each sentence, we used the timestamp to clip the audio file with a buffer of \u00b12 s to ensure the full audio of the utterance was captured.",
                "cite_spans": [
                    {
                        "start": 778,
                        "end": 795,
                        "text": "Cai et al. (2019)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 1297,
                        "end": 1317,
                        "text": "(McFee et al., 2015)",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 1463,
                        "end": 1487,
                        "text": "(Klapuri and Davy, 2006)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 1540,
                        "end": 1564,
                        "text": "(Klapuri and Davy, 2006)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 1586,
                        "end": 1610,
                        "text": "(Klapuri and Davy, 2006)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 1630,
                        "end": 1650,
                        "text": "(McFee et al., 2015)",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 1673,
                        "end": 1693,
                        "text": "(Jiang et al., 2002)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 1723,
                        "end": 1743,
                        "text": "(McFee et al., 2015)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 252,
                        "end": 264,
                        "text": "(Figure 4(a)",
                        "ref_id": "FIGREF3"
                    },
                    {
                        "start": 732,
                        "end": 740,
                        "text": "Figure 4",
                        "ref_id": "FIGREF3"
                    },
                    {
                        "start": 1006,
                        "end": 1014,
                        "text": "Figure 4",
                        "ref_id": "FIGREF3"
                    },
                    {
                        "start": 1214,
                        "end": 1221,
                        "text": "Table 3",
                        "ref_id": "TABREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Argumentative relation classification",
                "sec_num": "5.2"
            },
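A sketch of the feature-extraction step with librosa, using the calls for the six feature families named in the text; the function and variable names are illustrative, the utterance is assumed to come with start/end timestamps into the debate audio, and the \u00b12 s buffer follows the description above:

```python
# A minimal sketch of the audio feature extraction described above.
import librosa
import numpy as np

def extract_features(audio_path, start, end, buffer=2.0):
    # Clip the utterance with a +/-2 s buffer around its timestamps.
    offset = max(0.0, start - buffer)
    y, sr = librosa.load(audio_path, offset=offset,
                         duration=(end + buffer) - offset)

    mfcc = librosa.feature.mfcc(y=y, sr=sr)                     # MFCCs
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)    # spectral centroid
    bandwidth = librosa.feature.spectral_bandwidth(y=y, sr=sr)  # spectral bandwidth
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)      # spectral roll-off
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr)    # spectral contrast
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)            # 12-bin chroma

    # Stack into one (n_features, n_frames) matrix for the CNN/Bi-LSTM module.
    return np.vstack([mfcc, centroid, bandwidth, rolloff, contrast, chroma])
```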
            {
                "text": "To train the three models, we used the Adam optimiser with a learning rate of 0.00005, a batch size of 16 and 50 epochs. A time-based learning rate schedule function was used with a decay rate of 0.0000002 and the loss function was categorical cross-entropy. Table 3 shows the evaluation metrics of these models. The values in \"text dropout\" and \"audio dropout\" are the rates of all the dropout layers of the respective models (Figure 4 ). All three models perform well identifying neither labels, but they do not perform so well identifying attacks or supports, with the highest F 1 values being 0.24 and 0.21, respectively. The text-only model fails to identify attack relations to the same level of the audio-only and multimodal models, most likely due to the imbalance of the data. A multimodal model with a dropout rate of 0.2, however, increases the F 1 for attacks from 0.06 in the text-only model to 0.24, and for supports from 0.14 to 0.21. Surprisingly, the audio-only model performs better than the text-only model in identifying attacks and neither, and closely matches the multimodal model. As proof of concept, we filtered the annotations by their agreement (\u22650.85) and we assessed the best performing multimodal model. We obtained an even higher accuracy value of 0.91, especially for neither labels, with m-F1 of 0.95, although identification of support and attack relations was worse, most likely due to a decrease of useful labels. Over-all, we believe that audio provides relevant features for the identification of argumentative relations and its added value with respect to text helps recapitulate the complexity of this type of data in heavily unbalanced datasets.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 259,
                        "end": 266,
                        "text": "Table 3",
                        "ref_id": "TABREF4"
                    },
                    {
                        "start": 427,
                        "end": 436,
                        "text": "(Figure 4",
                        "ref_id": "FIGREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Argumentative relation classification",
                "sec_num": "5.2"
            },
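To make the setup concrete, here is a sketch in Keras of the fusion head and the training configuration reported above. It is not the authors' implementation: the text and audio modules are abstracted to fixed-size embeddings, and the audio embedding dimension, the ReLU activation and the commented stand-in tensors are assumptions, since only the layer sizes, optimiser, schedule and loss are specified in the text.

```python
# A sketch of the reported fusion head: two embeddings concatenated,
# a 100-unit middle layer, and a 3-way softmax over attack/support/neither.
import tensorflow as tf
from tensorflow.keras import layers

text_in = tf.keras.Input(shape=(768,), name="text_embedding")    # BERT H=768
audio_in = tf.keras.Input(shape=(128,), name="audio_embedding")  # dim assumed
x = layers.Concatenate()([text_in, audio_in])
x = layers.Dropout(0.2)(x)                   # best-performing dropout rate
x = layers.Dense(100, activation="relu")(x)  # 100-unit middle layer (activation assumed)
out = layers.Dense(3, activation="softmax")(x)
model = tf.keras.Model([text_in, audio_in], out)

# Adam at 5e-5 with a time-based decay of 2e-7 and categorical cross-entropy.
schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
    initial_learning_rate=5e-5, decay_steps=1, decay_rate=2e-7)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=schedule),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit([text_X, audio_X], y_onehot, batch_size=16, epochs=50)
```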
            {
                "text": "In this paper, we have presented a multimodal argumentation mining dataset (M-Arg) for political debates based on a corpus of the US 2020 presidential debates with audio and transcripts. The dataset was annotated using crowd-sourcing techniques and we present descriptive statistics of the dataset itself, as well as of the annotations, with discussion of some interesting examples. As a baseline for future research, we evaluated the classification performance of audio-only, text-only and multimodal models. We found that the audio-only and multimodal models could perform with high levels of accuracy and F 1 , although they encountered problems classifying support and attack relations very efficiently. The text-only model performed similarly, but its accuracy in attack classifications was low due to the imbalance of the data. Adding the audio to the text, however, in a multimodal model, helped increase the metrics of both support and attacks, although they still remained quite low. We believe these to be encouraging results, as improvements like reinforcement learning to tackle data imbalance, optimised extraction of audio features, addition of (cross-)attention layers, summarisation of the surrounding context or use of background knowledge databases, could greatly improve these performance metrics. Moreover, the data can be filtered according to annotation agreement, the annotator's trust and self-confidence in the annotation, potentially training models with higher precision and/or recall, although with less data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions and future work",
                "sec_num": "6"
            },
            {
                "text": "One limitation of our dataset is that ADUs are defined in a very simple manner (by a period with tokeniser). ADUs might be full sentences on certain occasions, but they might encompass several sentences or simply a clause within one. Further work to improve this dataset would include the identification of ADUs (without necessarily labelling them as claim or premise). Likewise, even if a sentence contains a full ADU, in natural dialogue it might not present as a clearly stated claim or premise, but might contain irony or rhetorical questions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions and future work",
                "sec_num": "6"
            },
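As the footnotes note, this segmentation relies on NLTK's pretrained Punkt sentence tokenizer; a toy illustration of the splitting step, reusing a debate turn quoted in Section 4.1:

```python
# A toy illustration of the sentence-level ADU segmentation: each sentence
# produced by the Punkt tokenizer becomes one candidate ADU.
import nltk
nltk.download("punkt", quiet=True)
from nltk.tokenize import sent_tokenize

turn = ("And what's happening is too many transgender women of color "
        "are being murdered. They're being murdered.")
for adu in sent_tokenize(turn):
    print(adu)
# -> And what's happening is too many transgender women of color are being murdered.
# -> They're being murdered.
```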
            {
                "text": "As already discussed, whether a pair of sentences are showing support or attack is a somewhat sub-jective matter, and for that reason we obtain Krippendorff's \u03b1 = 0.43. One cannot expect crowdworkers to identify, or even easily understand, all types of arguments or what constitutes a fallacy; philosophers continue to disagree. Yet with some instruction and information these annotations can better reflect real-world judgements about support and attack arguments. For certain applications, especially where including marginalised voices, AI systems will need to understand and detect how people argue, even if they do not follow the dictates of argumentation theory (Young, 2000) . We believe our dataset will be of interest for understanding what people think a proper argument is.",
                "cite_spans": [
                    {
                        "start": 668,
                        "end": 681,
                        "text": "(Young, 2000)",
                        "ref_id": "BIBREF32"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions and future work",
                "sec_num": "6"
            },
            {
                "text": "Using the sentence tokenizer PunktSentenceTokenizer from www.nltk.org.5 The website of the web application is https:// aeneasweb.org/help and their GitHub https:// github.com/readbeyond/aeneas/",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "GitHub: https://github.com/rafamestre/ m-arg_multimodal-argumentation-dataset",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "https://appen.com/. 8 Extensive details of the annotation scheme and quality settings can be found in the GitHub of the project: https://github.com/rafamestre/m-arg_ multimodal-argumentation-dataset.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Due to the constraints in random pair generation, the topic distribution in the annotated dataset differs slightly, but the distribution closely resembles that of the original dataset.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "This work has been funded by the Web Science Institute of the University of Southampton. The authors would also like to acknowledge the support of UK Research and Innovation (UKRI) funding (grant ref MR/S032711/1).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgements",
                "sec_num": null
            },
            {
                "text": "Ethics approval for this research was received from the University of Southampton's Faculty of Social Science Ethics and Research Governance committee, Ref: 66226, Date 22/07/2021. The original dataset in which we build the M-Arg dataset was available under license CC0: Public Domain. Fair treatment of the workers involved in the annotation of the dataset was ensured by Appen's code of ethics (https://appen.com/ crowd-wellness/). We aimed at providing a fair wage for the work provided and, according to the platform statistics, the workers received an hourly compensation with median $7.69 and interquartile mean $7.57. Before sharing our annotated dataset, we have stripped all information that could be potentially sensitive, such as IP's or locations, and we have re-anonymised the anonymous worker ID's that were provided.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Ethical considerations",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "End-to-end argumentation knowledge graph construction",
                "authors": [
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Al-Khatib",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Hou",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Wachsmuth",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Jochim",
                        "suffix": ""
                    },
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Bonin",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Stein",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Proc. AAAI Conference on Artificial Intelligence",
                "volume": "",
                "issue": "",
                "pages": "7367--7374",
                "other_ids": {
                    "DOI": [
                        "10.1609/aaai.v34i05.6231"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "K. Al-Khatib, Y. Hou, H. Wachsmuth, C. Jochim, F. Bonin, and B. Stein. 2020. End-to-end argumen- tation knowledge graph construction. Proc. AAAI Conference on Artificial Intelligence, pages 7367- 7374.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Crowd-sourced text analysis: Reproducible and agile production of political data",
                "authors": [
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Benoit",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Conway",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [
                            "E"
                        ],
                        "last": "Lauderdale",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Laver",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Mikhaylov",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "American Political Science Review",
                "volume": "110",
                "issue": "2",
                "pages": "278--295",
                "other_ids": {
                    "DOI": [
                        "10.1017/S0003055416000058"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "K. Benoit, D. Conway, B. E. Lauderdale, M. Laver, and S. Mikhaylov. 2016. Crowd-sourced text anal- ysis: Reproducible and agile production of po- litical data. American Political Science Review, 110(2):278-295.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "DART: A dataset of arguments and their relations on twitter",
                "authors": [
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Bosc",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Cabrio",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Villata",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proc. 10th Int. Conf. on Language Resources and Evaluation",
                "volume": "",
                "issue": "",
                "pages": "1258--1263",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "T. Bosc, E. Cabrio, and S. Villata. 2016. DART: A dataset of arguments and their relations on twitter. Proc. 10th Int. Conf. on Language Resources and Evaluation, pages 1258-1263.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Gender, candidate emotional expression, and voter reactions during televised debates",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Boussalis",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [
                            "G"
                        ],
                        "last": "Coan",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [
                            "R"
                        ],
                        "last": "Holman",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "M\u00fcller",
                        "suffix": ""
                    }
                ],
                "year": 2021,
                "venue": "American Political Science Review",
                "volume": "",
                "issue": "",
                "pages": "1--16",
                "other_ids": {
                    "DOI": [
                        "10.1017/S0003055421000666"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "C. Boussalis, T. G. Coan, M. R. Holman, and S. M\u00fcller. 2021. Gender, candidate emotional expression, and voter reactions during televised debates. American Political Science Review, pages 1-16.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "IEMOCAP: Interactive emotional dyadic motion capture database. Language Resources and Evaluation",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Busso",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Bulut",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [
                            "C"
                        ],
                        "last": "Lee",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Kazemzadeh",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Mower",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Kim",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "N"
                        ],
                        "last": "Chang",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Lee",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [
                            "S"
                        ],
                        "last": "Narayanan",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "",
                "volume": "42",
                "issue": "",
                "pages": "335--359",
                "other_ids": {
                    "DOI": [
                        "10.1007/s10579-008-9076-6"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "C. Busso, M. Bulut, C. C. Lee, A. Kazemzadeh, E. Mower, S. Kim, J. N. Chang, S. Lee, and S. S. Narayanan. 2008. IEMOCAP: Interactive emotional dyadic motion capture database. Language Re- sources and Evaluation, 42(4):335-359.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Audiotextual emotion recognition based on improved neural networks",
                "authors": [
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Cai",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Hu",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Dong",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Zhou",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Mathematical Problems in Engineering",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "DOI": [
                        "10.1155/2019/2593036"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "L. Cai, Y. Hu, J. Dong, and S. Zhou. 2019. Audio- textual emotion recognition based on improved neu- ral networks. Mathematical Problems in Engineer- ing, 2019.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Towards relation based argumentation mining",
                "authors": [
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Carstens",
                        "suffix": ""
                    },
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Toni",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proc. 2nd Workshop on Argumentation Mining",
                "volume": "",
                "issue": "",
                "pages": "29--34",
                "other_ids": {
                    "DOI": [
                        "10.3115/v1/w15-0504"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "L. Carstens and F. Toni. 2015. Towards relation based argumentation mining. In Proc. 2nd Workshop on Argumentation Mining, pages 29-34.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Using argumentation to improve classification in natural language problems",
                "authors": [
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Carstens",
                        "suffix": ""
                    },
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Toni",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "ACM Trans. on Internet Technology",
                "volume": "17",
                "issue": "3",
                "pages": "1--23",
                "other_ids": {
                    "DOI": [
                        "10.1145/3017679"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "L. Carstens and F. Toni. 2017. Using argumentation to improve classification in natural language problems. ACM Trans. on Internet Technology, 17(3):1-23.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Combining deep learning and argumentative reasoning for the analysis of social media textual content using small data sets",
                "authors": [
                    {
                        "first": "O",
                        "middle": [],
                        "last": "Cocarascu",
                        "suffix": ""
                    },
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Toni",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Computational Linguistics",
                "volume": "44",
                "issue": "4",
                "pages": "833--858",
                "other_ids": {
                    "DOI": [
                        "10.1162/colia00338"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "O. Cocarascu and F. Toni. 2018. Combining deep learn- ing and argumentative reasoning for the analysis of social media textual content using small data sets. Computational Linguistics, 44(4):833-858.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "A deep modular RNN approach for ethos mining",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Duthie",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Budzynska",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proc. Int. Joint Conf. on Artificial Intelligence",
                "volume": "",
                "issue": "",
                "pages": "4041--4047",
                "other_ids": {
                    "DOI": [
                        "10.24963/ijcai.2018/562"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "R. Duthie and K. Budzynska. 2018. A deep modular RNN approach for ethos mining. Proc. Int. Joint Conf. on Artificial Intelligence, pages 4041-4047.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Automated Classification of Argument Stance in Student Essays: A Linguistically Motivated Approach with an Application for Supporting Argument Summarization",
                "authors": [
                    {
                        "first": "A",
                        "middle": [
                            "R"
                        ],
                        "last": "Faulkner",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "A. R. Faulkner. 2014. Automated Classification of Argument Stance in Student Essays: A Linguisti- cally Motivated Approach with an Application for Supporting Argument Summarization. Ph.D. thesis, CUNY.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "The political debate on immigration in the election campaigns in Europe",
                "authors": [
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Greco",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Polli",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Springer Proceedings in Complexity",
                "volume": "",
                "issue": "",
                "pages": "111--123",
                "other_ids": {
                    "DOI": [
                        "10.1007/978-3-030-48993-9_9"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "F. Greco and A. Polli. 2020. The political debate on immigration in the election campaigns in Europe. In Springer Proceedings in Complexity, pages 111-123. Springer, Cham.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Argumentation mining in user-generated Web discourse",
                "authors": [
                    {
                        "first": "I",
                        "middle": [],
                        "last": "Habernal",
                        "suffix": ""
                    },
                    {
                        "first": "I",
                        "middle": [],
                        "last": "Gurevych",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Cmputational Linguistics",
                "volume": "43",
                "issue": "1",
                "pages": "125--179",
                "other_ids": {
                    "DOI": [
                        "10.1162/COLI"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "I. Habernal and I. Gurevych. 2017. Argumentation mining in user-generated Web discourse. Cmputa- tional Linguistics, 43(1):125-179.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Assessing the politicisation of the European debate using a lexicometric study of the 2014 Euromanifestos",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Jadot",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Kelbel",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "",
                "volume": "55",
                "issue": "",
                "pages": "60--85",
                "other_ids": {
                    "DOI": [
                        "10.3917/poeu.055.0060"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "C. Jadot and C. Kelbel. 2017. 'Same, same, but dif- ferent.' Assessing the politicisation of the European debate using a lexicometric study of the 2014 Euro- manifestos. Politique Europeenne, 55(1):60-85.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Facing imbalanced data -Recommendations for the use of performance metrics",
                "authors": [
                    {
                        "first": "L",
                        "middle": [
                            "A"
                        ],
                        "last": "Jeni",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "F"
                        ],
                        "last": "Cohn",
                        "suffix": ""
                    },
                    {
                        "first": "F",
                        "middle": [],
                        "last": "De La Torre",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Proceedings -2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, ACII 2013",
                "volume": "",
                "issue": "",
                "pages": "245--251",
                "other_ids": {
                    "DOI": [
                        "10.1109/ACII.2013.47"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "L. A. Jeni., J. F. Cohn, and F. De La Torre. 2013. Fac- ing imbalanced data -Recommendations for the use of performance metrics. Proceedings -2013 Hu- maine Association Conference on Affective Comput- ing and Intelligent Interaction, ACII 2013, pages 245-251.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Music type classification by spectral contrast feature",
                "authors": [
                    {
                        "first": "D",
                        "middle": [
                            "N"
                        ],
                        "last": "Jiang",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Lu",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [
                            "J"
                        ],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "H"
                        ],
                        "last": "Tao",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [
                            "H"
                        ],
                        "last": "Cai",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proc. IEEE Int. Conf. on Multimedia and Expo",
                "volume": "",
                "issue": "",
                "pages": "113--116",
                "other_ids": {
                    "DOI": [
                        "10.1109/ICME.2002.1035731"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "D. N. Jiang, L. Lu, H. J. Zhang, J. H. Tao, and L. H. Cai. 2002. Music type classification by spectral contrast feature. Proc. IEEE Int. Conf. on Multimedia and Expo, pages 113-116.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Signal processing methods for music transcription",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Klapuri",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Davy",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Signal Processing Methods for Music Transcription",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "DOI": [
                        "10.1007/0-387-32845-9"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "A. Klapuri and M. Davy. 2006. Signal processing methods for music transcription. In Signal Pro- cessing Methods for Music Transcription, chapter 5. Springer Science & Business Media.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Detecting deception in political debates using acoustic and textual features",
                "authors": [
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Kopev",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Ali",
                        "suffix": ""
                    },
                    {
                        "first": "I",
                        "middle": [],
                        "last": "Koychev",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Nakov",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proc. IEEE Automatic Speech Recognition and Understanding Workshop",
                "volume": "",
                "issue": "",
                "pages": "652--659",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "D. Kopev, A. Ali, I. Koychev, and P. Nakov. 2019. De- tecting deception in political debates using acous- tic and textual features. In Proc. IEEE Automatic Speech Recognition and Understanding Workshop, pages 652-659.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Content Analysis: An Introduction to its Methodology",
                "authors": [
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Krippendorff",
                        "suffix": ""
                    }
                ],
                "year": 1980,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "K. Krippendorff. 1980. Content Analysis: An Intro- duction to its Methodology. SAGE Publications, Inc, Thousand Oaks, CA.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Argument mining: A survey",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Lawrence",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Reed",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Computational Linguistics",
                "volume": "45",
                "issue": "4",
                "pages": "765--818",
                "other_ids": {
                    "DOI": [
                        "10.1162/COLIa00364"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "J. Lawrence and C. Reed. 2019. Argument mining: A survey. Computational Linguistics, 45(4):765-818.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "librosa: Audio and music signal analysis in Python",
                "authors": [
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Mcfee",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Raffel",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Liang",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Ellis",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Mcvicar",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Battenberg",
                        "suffix": ""
                    },
                    {
                        "first": "O",
                        "middle": [],
                        "last": "Nieto",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proc. 14th Python in Science Conference",
                "volume": "",
                "issue": "",
                "pages": "18--24",
                "other_ids": {
                    "DOI": [
                        "10.25080/majora-7b98e3ed-003"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "B. McFee, C. Raffel, D. Liang, D. Ellis, M. McVicar, E. Battenberg, and O. Nieto. 2015. librosa: Au- dio and music signal analysis in Python. Proc. 14th Python in Science Conference, pages 18-24.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "A recorded debating dataset",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Mirkin",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Jacovi",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Lavee",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [
                            "K"
                        ],
                        "last": "Kuo",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Thomas",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Sager",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Kotlerman",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Venezian",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Slonim",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "LREC 2018 -11th International Conference on Language Resources and Evaluation",
                "volume": "",
                "issue": "",
                "pages": "250--254",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "S. Mirkin, M. Jacovi, T. Lavee, H. K. Kuo, S. Thomas, L. Sager, L. Kotlerman, E. Venezian, and N. Slonim. 2019. A recorded debating dataset. In LREC 2018 -11th International Conference on Language Re- sources and Evaluation, pages 250-254.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Listening comprehension over argumentative content",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Mirkin",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Moshkowich",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Orbach",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Kotlerman",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Kantor",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Lavee",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Jacovi",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Bilu",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Aharonov",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Slonim",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "719--724",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/d18-1078"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "S. Mirkin, G. Moshkowich, M. Orbach, L. Kotlerman, Y. Kantor, T. Lavee, M. Jacovi, Y. Bilu, R. Aharonov, and N. Slonim. 2020. Listening comprehension over argumentative content. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP 2018, pages 719-724.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "A dataset of general-purpose rebuttal",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Orbach",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Bilu",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Gera",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Kantor",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Dankin",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Lavee",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Kotlerman",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Mirkin",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Jacovi",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Aharonov",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Slonim",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "EMNLP-IJCNLP 2019 -2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "5591--5601",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/d19-1561"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "M. Orbach, Y. Bilu, A. Gera, Y. Kantor, L. Dankin, T. Lavee, L. Kotlerman, S. Mirkin, M. Jacovi, R. Aharonov, and N. Slonim. 2020a. A dataset of general-purpose rebuttal. In EMNLP-IJCNLP 2019 -2019 Conference on Empirical Methods in Natu- ral Language Processing and 9th International Joint Conference on Natural Language Processing, Pro- ceedings of the Conference, pages 5591-5601. As- sociation for Computational Linguistics.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "Out of the Echo Chamber: Detecting Countering Debate Speeches",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Orbach",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Bilu",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Toledo",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Lahav",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Jacovi",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Aharonov",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Slonim",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "7073--7086",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/2020.acl-main.633"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "M. Orbach, Y. Bilu, A. Toledo, D. Lahav, M. Ja- covi, R. Aharonov, and N. Slonim. 2020b. Out of the Echo Chamber: Detecting Countering Debate Speeches. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7073-7086.",
                "links": null
            },
            "BIBREF25": {
                "ref_id": "b25",
                "title": "Playing to the gallery: Emotive rhetoric in parliaments",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Osnabr\u00fcge",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [
                            "B"
                        ],
                        "last": "Hobolt",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Rodon",
                        "suffix": ""
                    }
                ],
                "year": 2021,
                "venue": "American Political Science Review",
                "volume": "",
                "issue": "",
                "pages": "1--15",
                "other_ids": {
                    "DOI": [
                        "10.1017/s0003055421000356"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "M. Osnabr\u00fcge, S. B. Hobolt, and T. Rodon. 2021. Play- ing to the gallery: Emotive rhetoric in parliaments. American Political Science Review, pages 1-15.",
                "links": null
            },
            "BIBREF26": {
                "ref_id": "b26",
                "title": "Argumentative relation classification with background knowledge",
                "authors": [
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Paul",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Opitz",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Becker",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Kobbe",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Hirst",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Frank",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Frontiers in Artificial Intelligence and Applications",
                "volume": "326",
                "issue": "",
                "pages": "319--330",
                "other_ids": {
                    "DOI": [
                        "10.3233/FAIA200515"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "D. Paul, J. Opitz, M. Becker, J. Kobbe, G. Hirst, and A. Frank. 2020. Argumentative relation classifica- tion with background knowledge. Frontiers in Arti- ficial Intelligence and Applications, 326:319-330.",
                "links": null
            },
            "BIBREF27": {
                "ref_id": "b27",
                "title": "Annotating argument components and relations in persuasive essays",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Stab",
                        "suffix": ""
                    },
                    {
                        "first": "I",
                        "middle": [],
                        "last": "Gurevych",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proc. 25th Int. Conf. on Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "1501--1510",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "C. Stab and I. Gurevych. 2014. Annotating argument components and relations in persuasive essays. In Proc. 25th Int. Conf. on Computational Linguistics, pages 1501-1510.",
                "links": null
            },
            "BIBREF28": {
                "ref_id": "b28",
                "title": "Parsing argumentation structures in persuasive essays",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Stab",
                        "suffix": ""
                    },
                    {
                        "first": "I",
                        "middle": [],
                        "last": "Gurevych",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Computational Linguistics",
                "volume": "43",
                "issue": "3",
                "pages": "619--659",
                "other_ids": {
                    "DOI": [
                        "10.1162/COLI_a_00295"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "C. Stab and I. Gurevych. 2017. Parsing argumentation structures in persuasive essays. Computational Lin- guistics, 43(3):619-659.",
                "links": null
            },
            "BIBREF29": {
                "ref_id": "b29",
                "title": "CHARM: Inferring personal attributes from conversations",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Tigunova",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Yates",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Mirza",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Weikum",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Proc. 2020 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "5391--5404",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/2020.emnlp-main.434"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "A. Tigunova, A. Yates, P. Mirza, and G. Weikum. 2020. CHARM: Inferring personal attributes from conversations. In Proc. 2020 Conference on Empiri- cal Methods in Natural Language Processing, pages 5391-5404.",
                "links": null
            },
            "BIBREF30": {
                "ref_id": "b30",
                "title": "Annotating argument schemes. Argumentation",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Visser",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Lawrence",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Reed",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Wagemans",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Walton",
                        "suffix": ""
                    }
                ],
                "year": 2021,
                "venue": "",
                "volume": "35",
                "issue": "",
                "pages": "101--139",
                "other_ids": {
                    "DOI": [
                        "10.1007/s10503-020-09519-x"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "J. Visser, J. Lawrence, C. Reed, J. Wagemans, and D. Walton. 2021. Annotating argument schemes. Argumentation, 35(1):101-139.",
                "links": null
            },
            "BIBREF31": {
                "ref_id": "b31",
                "title": "Beyond goldfish memory: Long-term open-domain conversation",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Xu",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Szlam",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Weston",
                        "suffix": ""
                    }
                ],
                "year": 2021,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:2107.07567"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "J. Xu, A. Szlam, and J. Weston. 2021. Beyond gold- fish memory: Long-term open-domain conversation. arXiv:2107.07567.",
                "links": null
            },
            "BIBREF32": {
                "ref_id": "b32",
                "title": "Inclusion and Democracy",
                "authors": [
                    {
                        "first": "I",
                        "middle": [
                            "M"
                        ],
                        "last": "Young",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "DOI": [
                        "10.1002/9781119084679.ch19"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "I. M. Young. 2000. Inclusion and Democracy. Oxford University Press.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "uris": null,
                "num": null,
                "type_str": "figure",
                "text": "Descriptive visualisation of the original dataset. a) Number of utterances (after sentence tokenisation) by each person across all the debates. Audience members were all aggregated in the label \"Audience Members\". b) Average number of words per sentence of the four main participants of the debate, showing their density distribution. c) Number of sentences in which the most common used words appeared in by their speaker."
            },
            "FIGREF1": {
                "uris": null,
                "num": null,
                "type_str": "figure",
                "text": "(a). This is expected, as they were both present in three of the five debates. Mike Pence and Kamala Harris, who only participated in one of the debates Descriptive visualisation of the annotated dataset, M-Arg. a) Number of pairs annotated for support, attack or neither. b) Total number of sentences in the original dataset labeled as one of the topics in the y-axis. c) Percentage of argumentative relations of pairs of sentences belonging to each one of the topics."
            },
            "FIGREF2": {
                "uris": null,
                "num": null,
                "type_str": "figure",
                "text": "Relationship between annotations and confidence parameters. a) Distribution of annotations according to the annotation agreement; b) to the trust in the annotator; and c) to the self-confidence score given by the annotator. d) Distribution of the annotators with respect to the number of annotations they provided."
            },
            "FIGREF3": {
                "uris": null,
                "num": null,
                "type_str": "figure",
                "text": "Schematic of the relation classification models. a) In the multimodal model each sentences is passed in parallel through an audio module and a text module. b) The text module consists only on a BERT encoding layer with dropout. c) The audio module is based on parallel CNN and Bi-LSTM."
            },
            "TABREF2": {
                "num": null,
                "type_str": "table",
                "content": "<table><tr><td>Multimodal Text-only</td><td>Sentence 1 Sentence 2 Sent. 1 Sent. 2</td><td>Audio module Text module Audio module Text module Text module Text module</td><td>Concatenations Batch Norm. Dense layer 100 ReLU Act. Dropout 0.1</td><td>Classification layer 3 SoftMax Output Output</td><td>Conv layer + Batch Norm. + ReLU Act. + Dropout (8x7x7) kernel b) Audio module Feature extraction</td><td>(1x1) stride</td><td>Max Pool (2x2) stride (2x2) size Bi-LSTM (64x7x7) kernel L2 regularisation (0.5 rate) 128 neurons (1x1) stride</td><td colspan=\"2\">Max Pool Flatten (2x2) size Dropout (2x2) stride</td><td>Concatenate</td></tr><tr><td>Audio-only</td><td>Sent. 1 Sent. 2</td><td>Audio module Audio module</td><td/><td>Output</td><td>Text</td><td/><td>pre-processor</td><td>BERT encoding</td><td colspan=\"2\">Dropout Pooled</td></tr></table>",
                "html": null,
                "text": "Krippendorff's \u03b1 values for different filterings of the data. Notice the value in bold corresponds to the overall \u03b1 from the whole dataset, since our trust threshold was \u2265 0.81. With decreasing number of annotations, high fluctuations in \u03b1 are to be expected, hence the smaller value of 0.44 for the highest trust."
            },
            "TABREF4": {
                "num": null,
                "type_str": "table",
                "content": "<table/>",
                "html": null,
                "text": "Models' performance. M-F 1 stands for macro-averaged F 1 , w-F 1 for weighted-F 1 , and m-F 1 for microaveraged F 1 ."
            }
        }
    }
}
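
The FIGREF3 and TABREF2 entries above describe the relation classification architecture in enough detail to sketch it: each sentence of a pair is encoded by a text module (a BERT encoding layer with dropout) and an audio module (a CNN branch and a Bi-LSTM branch run in parallel), and the concatenated encodings pass through batch normalisation, a dense layer (100 units, ReLU, dropout 0.1) and a 3-class SoftMax. Below is a minimal PyTorch sketch of that wiring. It is a reconstruction from the captions, not the authors' code; the audio feature shape, the convolution channel count, the CNN dropout rate and the use of BERT's pooled output are illustrative assumptions.

# Illustrative sketch reconstructed from the FIGREF3/TABREF2 captions; layer
# sizes not stated there (audio feature shape, conv channels) are assumptions.
import torch
import torch.nn as nn
from transformers import BertModel


class TextModule(nn.Module):
    """Caption b): a BERT encoding layer with dropout."""

    def __init__(self, dropout: float = 0.1):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.dropout = nn.Dropout(dropout)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        return self.dropout(out.pooler_output)          # (batch, 768)


class AudioModule(nn.Module):
    """Caption c): a CNN branch and a Bi-LSTM branch run in parallel over
    audio features of shape (batch, time, n_features); shapes are assumed."""

    def __init__(self, time_steps: int = 100, n_features: int = 40):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7, stride=1, padding=3),
            nn.BatchNorm2d(8), nn.ReLU(), nn.Dropout(0.2),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Flatten(),
        )
        self.lstm = nn.LSTM(n_features, 128, batch_first=True,
                            bidirectional=True)
        # Flattened CNN output plus both LSTM directions, for the assumed shape.
        self.out_dim = 8 * (time_steps // 2) * (n_features // 2) + 2 * 128

    def forward(self, features):
        conv = self.cnn(features.unsqueeze(1))          # add a channel dim
        _, (hidden, _) = self.lstm(features)            # hidden: (2, batch, 128)
        lstm = torch.cat([hidden[0], hidden[1]], dim=-1)
        return torch.cat([conv, lstm], dim=-1)


class RelationClassifier(nn.Module):
    """Caption a): both sentences pass through the two modules; the
    concatenation feeds dense(100, ReLU) + dropout(0.1) + 3-way classifier."""

    def __init__(self):
        super().__init__()
        self.text = TextModule()
        self.audio = AudioModule()
        pair_dim = 2 * (768 + self.audio.out_dim)
        self.head = nn.Sequential(
            nn.BatchNorm1d(pair_dim),
            nn.Linear(pair_dim, 100), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(100, 3),                          # support / attack / neither
        )

    def forward(self, ids1, mask1, audio1, ids2, mask2, audio2):
        z1 = torch.cat([self.text(ids1, mask1), self.audio(audio1)], dim=-1)
        z2 = torch.cat([self.text(ids2, mask2), self.audio(audio2)], dim=-1)
        return self.head(torch.cat([z1, z2], dim=-1))   # logits; SoftMax in loss

The head returns logits, so the SoftMax is applied implicitly by nn.CrossEntropyLoss during training; the text-only and audio-only variants in the schematic are obtained by simply dropping one of the two modules from each sentence's encoding.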