{
    "paper_id": "2020",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T02:11:56.173397Z"
    },
    "title": "A Unified Typology of Harmful Content",
    "authors": [
        {
            "first": "Michele",
            "middle": [],
            "last": "Banko",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Sentropy Technologies",
                "location": {
                    "addrLine": "380 Portage Avenue",
                    "postCode": "94306",
                    "settlement": "Palo Alto",
                    "region": "CA"
                }
            },
            "email": "mbanko@sentropy.io"
        },
        {
            "first": "Brendon",
            "middle": [],
            "last": "Mackeen",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Sentropy Technologies",
                "location": {
                    "addrLine": "380 Portage Avenue",
                    "postCode": "94306",
                    "settlement": "Palo Alto",
                    "region": "CA"
                }
            },
            "email": "brendon@sentropy.io"
        },
        {
            "first": "Laurie",
            "middle": [],
            "last": "Ray",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Sentropy Technologies",
                "location": {
                    "addrLine": "380 Portage Avenue",
                    "postCode": "94306",
                    "settlement": "Palo Alto",
                    "region": "CA"
                }
            },
            "email": "laurie@sentropy.io"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "The ability to recognize harmful content within online communities has come into focus for researchers, engineers and policy makers seeking to protect users from abuse. While the number of datasets aiming to capture forms of abuse has grown in recent years, the community has not standardized around how various harmful behaviors are defined, creating challenges for reliable moderation, modeling and evaluation. As a step towards attaining shared understanding of how online abuse may be modeled, we synthesize the most common types of abuse described by industry, policy, community and health experts into a unified typology of harmful content, with detailed criteria and exceptions for each type of abuse.",
    "pdf_parse": {
        "paper_id": "2020",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "The ability to recognize harmful content within online communities has come into focus for researchers, engineers and policy makers seeking to protect users from abuse. While the number of datasets aiming to capture forms of abuse has grown in recent years, the community has not standardized around how various harmful behaviors are defined, creating challenges for reliable moderation, modeling and evaluation. As a step towards attaining shared understanding of how online abuse may be modeled, we synthesize the most common types of abuse described by industry, policy, community and health experts into a unified typology of harmful content, with detailed criteria and exceptions for each type of abuse.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Content moderation, the practice of monitoring and reviewing user-generated content to ensure compliance with legal requirements, community guidelines, and user agreements, is important for creating safe and equitable online spaces. While traditional content moderation systems rely heavily on human reviewers who use a set of proprietary guidelines to determine if content is in violation of policy, the use of algorithmic approaches has become a part of moderation workflows in recent years. While not a full replacement for human content moderators, the use of AI promises to reduce trauma and cost incurred by purely human-centric workflows.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "As a result, the ability to recognize abusive content using data-driven approaches has attracted attention from researchers in the computational and social sciences. To study, model and measure systems designed to recognize online abuse, researchers typically create labelled datasets using crowdsourcing platforms or in-house annotators. While the number of research datasets continues to grow (Vidgen and Derczynski, 2020) , the research community has not reached a consensus on how common abuse types are defined. Despite the use of best practices that leverage multiple annotators, definitional ambiguity can lead to the creation of datasets of questionable consistency (Ross et al., 2017; Waseem, 2016; Wulczyn et al., 2017) . Furthermore, without thorough domain understanding, research datasets built to capture abusive content may be prone to unintended bias (Wiegand et al., 2019) . Together, these shortcomings create challenges for reliable modeling and study of abuse as it occurs in the real world.",
                "cite_spans": [
                    {
                        "start": 395,
                        "end": 424,
                        "text": "(Vidgen and Derczynski, 2020)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 674,
                        "end": 693,
                        "text": "(Ross et al., 2017;",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 694,
                        "end": 707,
                        "text": "Waseem, 2016;",
                        "ref_id": "BIBREF22"
                    },
                    {
                        "start": 708,
                        "end": 729,
                        "text": "Wulczyn et al., 2017)",
                        "ref_id": "BIBREF27"
                    },
                    {
                        "start": 867,
                        "end": 889,
                        "text": "(Wiegand et al., 2019)",
                        "ref_id": "BIBREF25"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Harmful content has also drawn attention from internet companies, such as those in the social media, online gaming, and dating industries, who seek to protect their users from abuse. These companies typically employ a Trust and Safety organization to define and enforce violations of content policies, and to develop tools aimed at identifying instances of abuse on their platforms. Several online platforms which have seen large volumes of harmful content on their platforms, have created content policies that can be useful in specifying definitions of various abuse classes. In the absence of a standard upon which content policies can be based, community standards within digital platforms are largely shaped by users who report abuse they have experienced firsthand. Additionally, some aspects of content policies are informed by requirements handed down from local law enforcement agencies wishing to prosecute users engaging in illegal activity online.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Recently, the demand for internet companies to more aggressively reduce the spread of cyberbullying, radicalization, deception, exploitation, and other forms of dangerous content has been increasingly called for by governmental and civil society organizations. The proposed Online Harms Bill in the UK and amendments to Section 230 in the United States call for stricter accountability, trans-parency, and regulations to be imposed on companies hosting user-generated content. Civil society organizations have yielded numerous proposals for better describing types of harmful content online so that internet companies may better understand the nature and impact of such content.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this paper, we enumerate, consolidate and define the most common types of abuse described in content policies for several major online platforms and white papers from civil society organizations. We look for commonalities in both what types of abuse have been identified and how they are defined. Our goal is to provide a unified typology of harmful content, with clear criteria and exceptions for each type of abuse. While hate speech and harassment have attracted attention from the natural language processing community in recent years, upon close study, we find that the domain of harmful content is broader than many may have realized. We hope this typology will benefit content moderation systems by:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "\u2022 Defining abuse types that are readily usable by content moderators, both human and algorithmic",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "\u2022 Encouraging the construction of accurate, complete and unbiased datasets used for model training and evaluation",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "\u2022 Creating awareness of types of abuse that have received limited attention in the research community thus far 2 Background and Related Work",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Several efforts to categorize online abuse began by closely studying specific types of harm. With a focus on cyberbullying, Van Hee et al. (2015) published a scheme for annotation, which considers the presence, severity, role of the author (harasser, victim or bystander) and a number of fine-grained categories, such as insults and threats. Waseem et al. (2017) discussed the lack of consensus around how hate speech is defined, noting that messages labeled as hate speech in some datasets are only considered to be offensive in others. They devised a two-fold typology that considers whether hate is directed at a specific target (as opposed to taking the form of a general statement), and the degree of explicitness. Anzovino et al. (2018) studied misogynistic social media posts, and modeled seven types of abuse, most of which extend beyond abuse directed at women. Similar to these works, we break apart class definitions into fine-grained categories when possible in an attempt to disambiguate potentially underspecified requirements. We build upon this body of work by considering a larger set of abuse types, as opposed to just cyberbullying or hate speech. More broadly, Vidgen et al. (2019) noted the difficulties in categorizing abusive content, and proposed a three dimensional scheme for defining abuse classes. They suggest to consider (1) the type of the abuse target (e.g. individual, identity, entity or concept), (2) the recipient of the abuse (e.g. a specific individual, women, capitalism), and (3) the manner in which the abuse is articulated (e.g. as an insult, aggression, stereotype, untruth). We consider target type and manner in our categorization and take the suggested scheme one step further by instantiating a large set of what the authors refer to as subtasks. In some cases, where there is no clear or uniform target (e.g. misinformation) we found it helpful to organize types based on topic or possible outcome that can be easily reasoned about by those impacted by moderation systems.",
                "cite_spans": [
                    {
                        "start": 342,
                        "end": 362,
                        "text": "Waseem et al. (2017)",
                        "ref_id": "BIBREF23"
                    },
                    {
                        "start": 720,
                        "end": 742,
                        "text": "Anzovino et al. (2018)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 1181,
                        "end": 1201,
                        "text": "Vidgen et al. (2019)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abuse Typologies",
                "sec_num": "2.1"
            },
            {
                "text": "The natural language processing community has largely focused on detection of hate speech and cyberbullying (Schmidt and Wiegand, 2017) . As a result, a number of research datasets have been produced (Vidgen and Derczynski, 2020 ), yet none have used the same definition, or have annotated only partial phenomena (e.g. annotating racist and sexist speech, but not hate speech directed at all groups who require protection).",
                "cite_spans": [
                    {
                        "start": 108,
                        "end": 135,
                        "text": "(Schmidt and Wiegand, 2017)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 200,
                        "end": 228,
                        "text": "(Vidgen and Derczynski, 2020",
                        "ref_id": "BIBREF19"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Hate Speech",
                "sec_num": "2.2"
            },
            {
                "text": "While building a hate speech corpus from Twitter data, Ross et al. (2017) investigated how the reliability of the annotations is affected by the provision of accompanying definitions. They compared annotations in which the annotators were provided Twitter's definition of hate speech versus no definition. While annotators shown the definition were more likely to ban the tweet, the authors found that even when presented with Twitter's definition, inter-annotator agreement, measured using Krippendorf's alpha, was at best 0.3, depending on the question asked. 1 Ross et al. concluded that more detailed coding schemes are needed to be able to distinguish hate speech from other content.",
                "cite_spans": [
                    {
                        "start": 55,
                        "end": 73,
                        "text": "Ross et al. (2017)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 562,
                        "end": 563,
                        "text": "1",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Hate Speech",
                "sec_num": "2.2"
            },
            {
                "text": "Other works which report the difficulty of achieving high levels of interannotator agreement when compiling hate speech datasets include , who found that 5% of tweets were coded as hate speech by the majority of annotators with only 1.3% being annotated unanimously as containing hate speech. The creators of the 2018 Kaggle Toxic Comment Classification Challenge (Wulczyn et al., 2017) report that while the challenge dataset was built using ten annotators per label, agreement was weak (Krippendorff alpha of 0.45).",
                "cite_spans": [
                    {
                        "start": 364,
                        "end": 386,
                        "text": "(Wulczyn et al., 2017)",
                        "ref_id": "BIBREF27"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Hate Speech",
                "sec_num": "2.2"
            },
            {
                "text": "Demonstrating the importance of having welldefined annotation guidelines, Waseem and Hovy (2016) articulated an eleven-point definition of gendered and racial attacks. The use of detailed criteria yielded a high level of agreement. The authors measured inter-annotator agreement, defined using Cohen's kappa, to be 0.84.",
                "cite_spans": [
                    {
                        "start": 74,
                        "end": 96,
                        "text": "Waseem and Hovy (2016)",
                        "ref_id": "BIBREF24"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Hate Speech",
                "sec_num": "2.2"
            },
            {
                "text": "In recent years, machine learning has been used to recognize forms of self-harm such as pro-eating disorder content in social media posts (Chancellor et al., 2016; Wang et al., 2017) and suicidal ideation (Burnan et al., 2015; Cao et al., 2019) . Snyder et al. (2017) developed an automated framework for detecting dox files, i.e. files which reveal personally identifiable data without consent, and measuring the frequency, content, targets, and effects of doxing on popular dox-posting sites.",
                "cite_spans": [
                    {
                        "start": 138,
                        "end": 163,
                        "text": "(Chancellor et al., 2016;",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 164,
                        "end": 182,
                        "text": "Wang et al., 2017)",
                        "ref_id": "BIBREF21"
                    },
                    {
                        "start": 205,
                        "end": 226,
                        "text": "(Burnan et al., 2015;",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 227,
                        "end": 244,
                        "text": "Cao et al., 2019)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 247,
                        "end": 267,
                        "text": "Snyder et al. (2017)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Other Forms of Harmful Content",
                "sec_num": "2.3"
            },
            {
                "text": "Detection of sexually explicit content includes efforts to recognize instances of child sexual abuse (Lee et al., 2020) and human trafficking (Dubrawski et al., 2015; Tong et al., 2017) .",
                "cite_spans": [
                    {
                        "start": 101,
                        "end": 119,
                        "text": "(Lee et al., 2020)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 142,
                        "end": 166,
                        "text": "(Dubrawski et al., 2015;",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 167,
                        "end": 185,
                        "text": "Tong et al., 2017)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Other Forms of Harmful Content",
                "sec_num": "2.3"
            },
            {
                "text": "Another endeavor related to online harm that has gained attention within the research community is the detection of misinformation, which is surveyed by Su et al. (2020) .",
                "cite_spans": [
                    {
                        "start": 153,
                        "end": 169,
                        "text": "Su et al. (2020)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Other Forms of Harmful Content",
                "sec_num": "2.3"
            },
            {
                "text": "To develop a unified typology of harmful content, we employed a grounded theory approach, in which we synthesized inputs from several sources:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methodology",
                "sec_num": "3"
            },
            {
                "text": "\u2022 Community guidelines and content policy made public by large online platforms, specifically Discord, 2 Facebook, 3 Pinterest, 4 Red-dit, 5 Twitter, 6 and YouTube 7",
                "cite_spans": [
                    {
                        "start": 150,
                        "end": 151,
                        "text": "6",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methodology",
                "sec_num": "3"
            },
            {
                "text": "\u2022 The International Covenant on Civil and Political Rights, 8 an international human rights treaty developed by the United Nations",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methodology",
                "sec_num": "3"
            },
            {
                "text": "\u2022 Proposals from members of civil society organizations such as the Women's Media Center; Internet and Jurisdiction Policy Network (2019) and Benesch (2020) \u2022 Recommendations from experts and health organizations who study psychological and physical impact of abuse, including the American Association of Suicidology and the Conflict Tactics Scale",
                "cite_spans": [
                    {
                        "start": 142,
                        "end": 156,
                        "text": "Benesch (2020)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methodology",
                "sec_num": "3"
            },
            {
                "text": "While qualitatively analyzing the data mentioned above, we used the following principles to guide the creation of the typology: Avoid the use of subjective adjectives as core definitions. As prior research has shown, annotation tasks that make use of underspecified or subjective phrases such as \"hateful,\" \"toxic\" or \"would make you leave a conversation,\" without further explanation are likely to be interpreted differently depending on the annotator. Enumerate problematic content types using precise objective criteria when possible.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Principles",
                "sec_num": "3.1"
            },
            {
                "text": "Prefer fine-grained classes over those spanning multiple behaviors. Behavior that is casually described as \"toxic\" or \"bullying\" may contain a mix of identity-based hate speech, general insults, threats and inappropriate sexual language. Narrowly defined classes simplify annotation requirements and provide a level of explainability that is missing from underspecified labels.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Principles",
                "sec_num": "3.1"
            },
            {
                "text": "Consider the type of the subject of abuse. To keep definitions well-scoped, we consider the subject of the attack, and avoid mixing subject types within a single definition when possible. For instance, instead of having a generic class aimed at recognizing sexually explicit content, we advocate for annotating sexually charged content directed at an individual separately from content advertising for adult sexual services. However, we find that there are some instances in which a broad form of harm can not be uniformly defined by the type of the target, and employ a topical approach that may be more understandable to moderators and users of social platforms. An example of this is Misinformation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Principles",
                "sec_num": "3.1"
            },
            {
                "text": "Consider potential downstream actions. If a type of behavior is universally associated with an outcome, avoid definitions that mix behaviors that do not share the outcome. For instance, child sexual abuse content is not tolerated under any circumstances in most countries and must be reported to law enforcement, whereas insults that make use of sexual terms are unlikely to have legal ramifications. A platform may have strict policies against attacks on protected groups but permit mild forms of non-identity based insults. Distinguishing between the two simplifies the ability to enforce policy and therefore, improves the usefulness of a moderation system.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Principles",
                "sec_num": "3.1"
            },
            {
                "text": "Despite our preference for fine-grained classes, they are not defined to be mutually exclusive. Additionally, hierarchical arrangement of types is not always possible. As a result, there are cases where multiple types may apply to a single input. For example, Time to shoot this n*****, where the last word represents a racial slur, should be classified as both Identity Attack and Threat of Violence. Time to shoot up this school is a violent threat without an identity-based attack. N***** aren't welcome here is a non-violent Identity Attack.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Principles",
                "sec_num": "3.1"
            },
            {
                "text": "All forms of abuse are problematic and require some means to identify and address them in order to mitigate their impact on users. While the strength of statements involving abuse may be interpreted differently depending on the recipient and context, some forms of online harm present immediate or lasting danger to individuals or stand in violation of the law. For each type of abuse we present, we establish qualifications for what may be considered severe abuse. The ability to detect severe abuse is critical for content moderation systems seeking to identity extreme or time-sensitive violations quickly.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Severity",
                "sec_num": "3.2"
            },
            {
                "text": "The concept of \"severe toxicity\" is annotated in the Kaggle Toxic Comment Classification Challenge (2018), where it is defined it as \"rude, disrespectful, or unreasonable comments that are very likely to make people leave a discussion.\" Mov-ing away from the use of subjective adjectives, we consider the following attributes when determining severity:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Severity",
                "sec_num": "3.2"
            },
            {
                "text": "\u2022 Use of language expressing direct intent (severe) vs. use of language that is passive or merely wishful (not severe)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Severity",
                "sec_num": "3.2"
            },
            {
                "text": "\u2022 Time-sensitive or immediate threats of harm are considered to be severe",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Severity",
                "sec_num": "3.2"
            },
            {
                "text": "\u2022 Consequences of, or degree of harm associated with, the abuse, i.e. actions resulting in death or long-lasting physical or psychological trauma shall be treated as severe",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Severity",
                "sec_num": "3.2"
            },
            {
                "text": "\u2022 Vulnerability of the target, e.g. attacks directed at members of groups that have been historically marginalized, dehumanized or objectified are considered to be severe",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Severity",
                "sec_num": "3.2"
            },
            {
                "text": "\u2022 Violations of personal privacy and consent are treated as severe",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Severity",
                "sec_num": "3.2"
            },
            {
                "text": "\u2022 Violations of applicable laws, including internationally recognized policies are handled as severe",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Severity",
                "sec_num": "3.2"
            },
            {
                "text": "Using the data and guidelines described in Section 3, we arrived at the typology depicted in Figure 1 . In the remainder of this section, we describe each type in detail. Within each section, we present the types in lexicographic order.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 93,
                        "end": 101,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Abuse Class Definitions",
                "sec_num": "4"
            },
            {
                "text": "Hate and Harassment describes abuse directed at a specific individual or group of people (e.g. identity) meant to torment, demean, undermine, frighten or humiliate the target. Abuse directed at institutions or abstract concepts is not included in this set of definitions. In the remainder of this section, we present criteria for defining common forms of hate and harassment: Doxing, Identity Attack, Identity Misrepresentation, Insult, Sexual Aggression, and Threat of Violence.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Hate and Harassment",
                "sec_num": "4.1"
            },
            {
                "text": "Doxing is a form of severe abuse in which a malicious party tries to harm an individual by releasing personally identifiable information about the target to the general public. During a doxing attack, sensitive information is typically distributed on web sites that permit anonymous posting and Personal data that should not be shared without the consent of others includes:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Doxing",
                "sec_num": "4.1.1"
            },
            {
                "text": "\u2022 Physical or virtual locations such as home, work and IP addresses, or GPS locations",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Doxing",
                "sec_num": "4.1.1"
            },
            {
                "text": "\u2022 Contact information such as private email address and phone numbers",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Doxing",
                "sec_num": "4.1.1"
            },
            {
                "text": "\u2022 Identification numbers such as Social Security, passport, government or school ids",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Doxing",
                "sec_num": "4.1.1"
            },
            {
                "text": "\u2022 Digital identities such as social network accounts, chat identities, and passwords",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Doxing",
                "sec_num": "4.1.1"
            },
            {
                "text": "\u2022 Personal financial information such as bank account or credit card information",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Doxing",
                "sec_num": "4.1.1"
            },
            {
                "text": "\u2022 Criminal and medical histories",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Doxing",
                "sec_num": "4.1.1"
            },
            {
                "text": "Mentions of data already in the public domain such as one's place of education or employment, email addresses that have been voluntarily shared (such as on a personal homepage) are not considered instances of doxing, nor are cases where people willingly share their own private information.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Doxing",
                "sec_num": "4.1.1"
            },
            {
                "text": "Identity Attack is a form of online abuse where malicious actors severely attack individuals or groups of people based on their membership in a protected or vulnerable group. During an Identity Attack, a bad actor will use language reflecting the intent to dehumanize, persecute or promote violence based on the identity of the subject. The use of slurs and/or derogatory epithets may be present but is not a requirement.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Attack",
                "sec_num": "4.1.2"
            },
            {
                "text": "While there is neither agreement in what constitutes hate speech across academic datasets, nor is there any industry or legal standard of this definition, constructions of Identity Attack definitions typically attempt to: 1) protect vulnerable groups, 2) protect specific characteristics or attributes of individuals, 3) prohibit hate speech but fail to offer a definition. For platforms that fall into either of the first two categories the policies described protect users from attacks and violence on the basis of identity-based attributes including age, disability, ethnicity, gender identity, military status, nationality, race, religion, and sexual orientation. Some platforms offer additional protections for vulnerable groups such as immigration status, socioeconomic class or the presence of a medical condition.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Attack",
                "sec_num": "4.1.2"
            },
            {
                "text": "Jigsaw (Kaggle, 2018) defines Identity Attack as: \"Negative or hateful comments targeting someone because of their identity.\" Underspecified definitions such as this are difficult to use in practice due to the open-endedness of how \"negative\" or \"hateful\" may be interpreted by annotators or community moderators. As mentioned in our survey of related work, we promote the use of more finegrained classes, focusing here on severe attacks (e.g. those with dehumanizing and/or violent intent) and defining a separate class for more mild phenomena such as spread of negative stereotypes or misinformation related to vulnerable groups. Another distinction we suggest is to ensure the subject of the attack refers to a human or group thereof, as opposed to institutions or organizations. For example, an attack on people who practice a religion (e.g. Jews, Muslims) falls under the class, but attacks on religion itself (e.g. Judiasm, Islam) itself do not. As a result, statements such as You deserve to be euthanized, you dirty **** and ****s deserve to be euthanized\" 9 would be treated as Identity Attack, whereas **** is a religion that should cease to exist would not. In some cases, it is possible that use of organizations are replacements for the individuals belonging to them, but for the first version of the typology we propose to maintain this distinction.",
                "cite_spans": [
                    {
                        "start": 7,
                        "end": 21,
                        "text": "(Kaggle, 2018)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Attack",
                "sec_num": "4.1.2"
            },
            {
                "text": "Here we summarize types of content that warrant a classification of Identity Attack, all of which are considered severe forms of abuse:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Attack",
                "sec_num": "4.1.2"
            },
            {
                "text": "\u2022 Explicit use of slurs and other derogatory epithets referencing an identity group",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Attack",
                "sec_num": "4.1.2"
            },
            {
                "text": "\u2022 Violent threats or calls for harm directed at an identity group",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Attack",
                "sec_num": "4.1.2"
            },
            {
                "text": "\u2022 Calls for exclusion, domination or suppression of rights, directed at an identity group",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Attack",
                "sec_num": "4.1.2"
            },
            {
                "text": "\u2022 Dehumanization of an identity group, including comparisons to animals, insects, diseases or filth, generalizations involving physical unattractiveness, low intelligence, mental instability and/or moral deficiencies",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Attack",
                "sec_num": "4.1.2"
            },
            {
                "text": "\u2022 Expressions of superiority of one group over a protected or vulnerable group",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Attack",
                "sec_num": "4.1.2"
            },
            {
                "text": "\u2022 Admissions of hate and intolerance towards members of an identity group",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Attack",
                "sec_num": "4.1.2"
            },
            {
                "text": "\u2022 Denial of another's identity, calls for conversion therapy, deadnaming",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Attack",
                "sec_num": "4.1.2"
            },
            {
                "text": "\u2022 Support for hate groups communicating intent described above",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Attack",
                "sec_num": "4.1.2"
            },
            {
                "text": "The following should be considered nonexamples for the Identity Attack abuse class:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Attack",
                "sec_num": "4.1.2"
            },
            {
                "text": "\u2022 Attacks on institutions or organizations (as opposed to the people belonging to them)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Attack",
                "sec_num": "4.1.2"
            },
            {
                "text": "\u2022 Promotion of negative stereotypes, fear or misinformation related to an identity group (defined as Identity Misrepresentation in Section 4.1.3) 9 As per the WOAH guidelines we use **** in place of any group identifier to avoid reproducing harm",
                "cite_spans": [
                    {
                        "start": 146,
                        "end": 147,
                        "text": "9",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Attack",
                "sec_num": "4.1.2"
            },
            {
                "text": "\u2022 In-group usage of slurs and their variants, reclamation of hateful terms by the those who have been historically targeted",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Attack",
                "sec_num": "4.1.2"
            },
            {
                "text": "\u2022 Discussion of meta-linguistic nature or education related to slurs or hate speech",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Attack",
                "sec_num": "4.1.2"
            },
            {
                "text": "\u2022 Accounts of the speech behavior of parties external to the immediate conversational context",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Attack",
                "sec_num": "4.1.2"
            },
            {
                "text": "Identity Misrepresentation is defined by statements or claims that are used to convey pejorative misrepresentations, stereotypes, and other insulting generalizations about protected or vulnerable populations. As with Identity Attack, protected groups are defined by attributes including age, disability, ethnicity, gender identity, military status, nationality, race, religion, and sexual orientation. Vulnerable groups such as those defined by immigration status, socio-economic class or the presence of a medical condition, may also be offered protection. Statements belonging to this class fall below the severity of Identity Attack. They may be presented as fact but may lack supporting evidence or be opinions in disguise. Criteria outlined in the definition of Identity Attack belong uniquely to that class. For example, a stereotype suggesting that group of people is inferior (e.g. has low IQ) would fall under the definition of Identity Attack, whereas generalizations regarding food preferences (e.g. eats foods that are unappealing to others, without conveying explicit hatred towards the group), non-dehumanizing assumptions about physical appearance (e.g. wearing a style of facial hair implies support for extremism) or stereotypes about spending habits (e.g. frugality) should be treated as Identity Misrepresentation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Misrepresentation",
                "sec_num": "4.1.3"
            },
            {
                "text": "A summary of qualifying criteria for the Identity Misrepresentation class is as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Misrepresentation",
                "sec_num": "4.1.3"
            },
            {
                "text": "\u2022 Dissemination of negative stereotypes and generalizations about a protected or vulnerable group, apart from those that involve explicit dehumanization or claims of inferiority ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Identity Misrepresentation",
                "sec_num": "4.1.3"
            },
            {
                "text": "Two datasets shared by Kaggle (2012 Kaggle ( , 2018 have provided guidelines used to determine whether or not content can be considered insulting. The latter defines Insult as: \"insulting, inflammatory, or negative comment towards a person or a group of people.\" The earlier task provides more detail, whereupon insults are constrained to be person-toperson speech acts in which the target is assumed to be active in the conversation. This definition allows for the presence of profanity, racial slurs and other offensive terms. Using these specifications, it is not obvious how to distinguish between an Insult and an Identity Attack (which is also defined in the 2018 challenge). We propose an important distinction in that statements that make use of identity-based slurs and epithets are to be elevated to the level of Identity Attack.",
                "cite_spans": [
                    {
                        "start": 23,
                        "end": 35,
                        "text": "Kaggle (2012",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 36,
                        "end": 51,
                        "text": "Kaggle ( , 2018",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insult",
                "sec_num": "4.1.4"
            },
            {
                "text": "With the assumption that the subject of an Insult is a participant in the conversation, an Insult is defined as:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insult",
                "sec_num": "4.1.4"
            },
            {
                "text": "\u2022 General name-calling, directed profanity and other insulting language or imagery not referencing membership in a protected group or otherwise meeting the criteria for Identity Attack",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insult",
                "sec_num": "4.1.4"
            },
            {
                "text": "\u2022 Content mocking someone for their personality, opinions, character or emotional state",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insult",
                "sec_num": "4.1.4"
            },
            {
                "text": "\u2022 Body shaming, attacks on physical appearance, or shaming related to sexual or romantic history",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insult",
                "sec_num": "4.1.4"
            },
            {
                "text": "\u2022 Mocking someone due to their status as a survivor of assault or abuse",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insult",
                "sec_num": "4.1.4"
            },
            {
                "text": "\u2022 Encouraging others to insult an individual",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insult",
                "sec_num": "4.1.4"
            },
            {
                "text": "\u2022 Images manipulated with the intent to insult the subject",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insult",
                "sec_num": "4.1.4"
            },
            {
                "text": "The following should be considered non-examples of insults:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insult",
                "sec_num": "4.1.4"
            },
            {
                "text": "\u2022 Insults strictly based on the target's membership in a group with protected status, including use of slurs",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insult",
                "sec_num": "4.1.4"
            },
            {
                "text": "\u2022 Insults aimed at non-participant subjects, such as celebrities and other high-profile individuals",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insult",
                "sec_num": "4.1.4"
            },
            {
                "text": "\u2022 Self-referential insults and self-deprecation",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insult",
                "sec_num": "4.1.4"
            },
            {
                "text": "\u2022 Insults directed at inanimate objects",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insult",
                "sec_num": "4.1.4"
            },
            {
                "text": "\u2022 Harassment education or awareness",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insult",
                "sec_num": "4.1.4"
            },
            {
                "text": "With regard to the scale of severity, Insults are not elevated to the level of severe abuse, as they do not explicitly threaten physical safety, contain identity-based attacks, compromise personal privacy or involve criminal behavior.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Insult",
                "sec_num": "4.1.4"
            },
            {
                "text": "Various forms of sexual content are present online, such as pornography, nudity, and offers for adult services. Here we focus on a type of personto-person abuse, Sexual Aggression. This type of content includes unwanted sexual advances, undesirable sexualization, non-consensual sharing of sexual content, and other forms of unsolicited sexual conversations. Sexual Aggression is defined as:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sexual Aggression",
                "sec_num": "4.1.5"
            },
            {
                "text": "\u2022 Threats or descriptions of sexual activity, fantasy or non-consensual sex acts directed at an individual",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sexual Aggression",
                "sec_num": "4.1.5"
            },
            {
                "text": "\u2022 Unsolicited graphic descriptions of a person (including oneself) that are sexual in nature",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sexual Aggression",
                "sec_num": "4.1.5"
            },
            {
                "text": "\u2022 Unwanted sexualization, sexual advances or comments intended to sexually degrade an individual",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sexual Aggression",
                "sec_num": "4.1.5"
            },
            {
                "text": "\u2022 Solicitations or offers of non-commercial sexual interactions",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sexual Aggression",
                "sec_num": "4.1.5"
            },
            {
                "text": "\u2022 Unwanted requests for nude or sexually graphic images or videos",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sexual Aggression",
                "sec_num": "4.1.5"
            },
            {
                "text": "\u2022 Sharing of content depicting any person in a state of nudity or engaged in sexual activity created or shared without their permission, including fakes (e.g. revenge porn)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sexual Aggression",
                "sec_num": "4.1.5"
            },
            {
                "text": "\u2022 Sharing of content revealing intimate parts of a person's body, even if clothed or in public, created or posted without their permission (e.g., \"creepshots\" or \"upskirt\" images)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sexual Aggression",
                "sec_num": "4.1.5"
            },
            {
                "text": "\u2022 Sextortion, threat of exposing a person's intimate images, conversations or other intimate information",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sexual Aggression",
                "sec_num": "4.1.5"
            },
            {
                "text": "Sexual Aggression does not refer to:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "132",
                "sec_num": null
            },
            {
                "text": "\u2022 Pornography created with consent of all participants",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "132",
                "sec_num": null
            },
            {
                "text": "\u2022 Solicitation or offers of commercial sex transactions",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "132",
                "sec_num": null
            },
            {
                "text": "\u2022 Definitions of sexual terms",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "132",
                "sec_num": null
            },
            {
                "text": "\u2022 Sexual health and wellness discussions",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "132",
                "sec_num": null
            },
            {
                "text": "\u2022 Non-graphic use of words associated with sex",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "132",
                "sec_num": null
            },
            {
                "text": "\u2022 Insults that make use of sexual terms",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "132",
                "sec_num": null
            },
            {
                "text": "\u2022 Flirting, compliments, or come-ons that are not sexually graphic or degrading",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sexual Aggression",
                "sec_num": "4.1.5"
            },
            {
                "text": "Using our criteria for severe abuse, the following subset of content meeting the definition of Sexual Aggression is to be considered severe:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sexual Aggression",
                "sec_num": "4.1.5"
            },
            {
                "text": "\u2022 Threats of non-consensual sexual activity",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sexual Aggression",
                "sec_num": "4.1.5"
            },
            {
                "text": "\u2022 Anecdotal or personal accounts of violence without glorification (e.g. survivor stories, criminal rehabilitation accounts)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Violence",
                "sec_num": "4.1.6"
            },
            {
                "text": "\u2022 Historical descriptions or research studies of violence",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "132",
                "sec_num": null
            },
            {
                "text": "\u2022 Hyperbolic or metaphorical violence",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Violence",
                "sec_num": "4.1.6"
            },
            {
                "text": "Taxonomies of violence (Straus et al., 1996) treat physical threats as more severe than verbal or psychological threats. While threats made via online communication are technically verbal, severe threats are those in which there is a credible belief that the aggressor could or would carry out the threat physically. Severe forms of violent threats intend to do at least one of the following:",
                "cite_spans": [
                    {
                        "start": 23,
                        "end": 44,
                        "text": "(Straus et al., 1996)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Violence",
                "sec_num": "4.1.6"
            },
            {
                "text": "\u2022 Create the fear or belief that the violent act will occur in real life",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "132",
                "sec_num": null
            },
            {
                "text": "\u2022 Threaten acts that result in serious consequences, such as a long-term injury or illness or fatality",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "132",
                "sec_num": null
            },
            {
                "text": "\u2022 Convey the abuser's desire to carry out the threat personally",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "132",
                "sec_num": null
            },
            {
                "text": "Mild forms of physically violent threats have at least one of the following features:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "132",
                "sec_num": null
            },
            {
                "text": "\u2022 The abuser's intent is to insult or dismiss the target, with little to no harmful consequences (e.g. \"I could easily take you down\")",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "132",
                "sec_num": null
            },
            {
                "text": "\u2022 Threaten acts that result in minor or no lasting harm to a person's health or well-being, (e.g. \"I'll slap you if you don't stop\")",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "132",
                "sec_num": null
            },
            {
                "text": "\u2022 Passive threats stated as wishes or hopes for an unfortunate event or illness to occur",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Violence",
                "sec_num": "4.1.6"
            },
            {
                "text": "Self-Inflicted Harm describes forms of harmful behavior, both physical and psychological, directed at oneself. The detection of content belonging to this class is intended to flag such behaviors in order to provide help to those in distress and prevent the spread of dangerous behavior within online communities. In the remainder of this section, we discuss definitions for two common forms of self-inflicted harm: Eating Disorder Promotion and Self-Harm.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Inflicted Harm",
                "sec_num": "4.2"
            },
            {
                "text": "Eating disorders (EDs) are mental disorders characterized by abnormal eating habits and attitudes towards food. Many online platforms explicitly prohibit pro-ED content in order to prevent the spread of unhealthy behavior. While the DSM-V offers clinical definitions of such disorders, here we summarize types of dangerous content related to the way eating disorders may be discussed online:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Eating Disorder Promotion",
                "sec_num": "4.2.1"
            },
            {
                "text": "\u2022 Promotion of eating disorders as legitimate lifestyle choices (e.g. pro-ana, pro-mia content)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Eating Disorder Promotion",
                "sec_num": "4.2.1"
            },
            {
                "text": "\u2022 Glorification of slim or emaciated bodies (e.g. thinspiration)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Eating Disorder Promotion",
                "sec_num": "4.2.1"
            },
            {
                "text": "\u2022 Content featuring high-fat food or overweight people intended to induce disgust (e.g. reverse thinspiration)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Eating Disorder Promotion",
                "sec_num": "4.2.1"
            },
            {
                "text": "\u2022 Sharing instructions for unhealthy weight loss methods",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Eating Disorder Promotion",
                "sec_num": "4.2.1"
            },
            {
                "text": "The following should be considered nonexamples of Eating Disorder Promotion:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Eating Disorder Promotion",
                "sec_num": "4.2.1"
            },
            {
                "text": "\u2022 Research, advocacy, and education related to eating disorders",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Eating Disorder Promotion",
                "sec_num": "4.2.1"
            },
            {
                "text": "\u2022 Discussion of recovery mechanisms and resources to prevent eating disorders",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Eating Disorder Promotion",
                "sec_num": "4.2.1"
            },
            {
                "text": "\u2022 Anecdotes of individuals who have suffered from eating disorders, told in a manner that does not glorify the disorder",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Eating Disorder Promotion",
                "sec_num": "4.2.1"
            },
            {
                "text": "Pro-ED content potentially creates a long-lasting impact on one's health and is therefore considered severe.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Eating Disorder Promotion",
                "sec_num": "4.2.1"
            },
            {
                "text": "Self-Harm is a behavior in which a person purposefully physically hurts themself using methods such as cutting with a sharp object, burning, biting, and pulling out hair. Practitioners of such behavior do so in order to cope with emotional distress. While according to the DSM-V, people who exhibit selfharming behaviors do not intend to cause long-term, serious harm or fatality, suicide is an additional, albeit extremely different, form of self-harm, which we include in our definition.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "Self-Harm includes the following content:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Discussion of current or recent acts of deliberately harming one's own body.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Suicidal ideation, discussing details of a suicide plan, or stating that one intends to commit suicide",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Requests for instructions on how to conduct or hide self-harm or suicide",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Describing emotions or symptoms of mental illness explicitly related to self-harm, or traumatic experiences and triggers",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Promotion of or assistance with self-harming behaviors",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "Self-Harm content does not refer to:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Anecdotes of personal recovery, treatment",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Sharing coping methods for addressing thoughts of self-harm or suicide",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Support for individuals who are considering or are actively harming themselves",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Recollection of self-harming behaviors or suicidal attempts that occurred at least 12 months in the past that does not promote self-harm or suicide",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Research or education related to prevention of self-harm or suicide",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Discussion of depression or other mental illnesses, symptoms, or depressed thoughts and feelings that are not explicitly tied to selfharm or suicide",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "Identifying severe expressions of Self-Harm primarily rests on determining the individual's intent. An individual who is cutting or punching walls is doing so in order to help them cope with emotional pain. Suicidal individuals are not attempting to cope but rather responding to unbearable physical or emotional pain by ending their lives.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "Severe forms of Self-Harm include:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Suicidal ideation and planning",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Threatening to take action to kill, cut or otherwise hurt oneself",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Asking for or providing instructions or how to commit suicide or self-harm",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Positive reflections on death and dying or the perceived benefits of the individual's death",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "Less severe forms of Self-Harm include:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Advice on hiding evidence of non-suicidal self-harm",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Showing off self-harm scars or positive reflections on self-harm behaviors",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Admitting to active or recent acts of nonsuicidal self-harm",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Discussing events or objects that have recently \"triggered\" an individual to harm one's self",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "\u2022 Discussing reductions in recent non-suicidal self-harming behaviors without clear evidence of cessation",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Self-Harm",
                "sec_num": "4.2.2"
            },
            {
                "text": "Ideological Harm describes the spread of beliefs that may lead to real world harm to society at large over time. Content belonging to this class may include statements without an explicit human target at the time of creation, for example, statements that openly question health or government policies that may lead to public crises, or expressions of praise for ideologies associated with crime, violence or exclusion. In this section, we present definitions of two common forms of ideological harm: Extremism, Terrorism and Organized Crime and Misinformation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Ideological Harm",
                "sec_num": "4.3"
            },
            {
                "text": "While to date there is no internationally agreed upon definition of terrorism, the UN General Assembly defines it as \"criminal acts intended or calculated to provoke a state of terror in the public, a group of persons or particular persons for political purposes are in any circumstance unjustifiable, whatever the considerations of a political, philosophical, ideological, racial, ethnic, religious or any other nature that may be invoked to justify them.\" Various national governments and international organizations maintain lists of organizations they officially recognize as terrorist. While terrorist groups are predominantly associated with violent behaviors, extremism refers to both violent and peaceful forms of expression. Organized crime groups, which frequently engage in violent criminal behavior, are not typically driven by political or ideological goals, but instead operate for economic gain.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extremism, Terrorism and Organized Crime",
                "sec_num": "4.3.1"
            },
            {
                "text": "Harmful content related to with Extremism, Terrorism and Organized Crime includes:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extremism, Terrorism and Organized Crime",
                "sec_num": "4.3.1"
            },
            {
                "text": "\u2022 Recruiting for a terrorist organization, extremist group or organized crime group",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extremism, Terrorism and Organized Crime",
                "sec_num": "4.3.1"
            },
            {
                "text": "\u2022 Praise and promotion of organized crime, terrorist or extremist groups, or acts committed by such groups",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extremism, Terrorism and Organized Crime",
                "sec_num": "4.3.1"
            },
            {
                "text": "\u2022 Assisting a terrorist organization, extremist group or organized crime group",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extremism, Terrorism and Organized Crime",
                "sec_num": "4.3.1"
            },
            {
                "text": "\u2022 Content that includes symbols known to represent a terrorist organization, extremist group or organized crime group",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extremism, Terrorism and Organized Crime",
                "sec_num": "4.3.1"
            },
            {
                "text": "At its core, every goal or belief of this class fits the criteria of severely abusive content. Whether through exclusion, segregation, eradication or criminal activity, severe harm is intended.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extremism, Terrorism and Organized Crime",
                "sec_num": "4.3.1"
            },
            {
                "text": "White Supremacist Extremism One notable subtype of this type that we draw attention to is White Supremacist Extremism (WSE). The United States Congress recently identified white supremacist extremism as the most significant domestic terrorism threat facing the United States. 10 WSE describes content seeking to revive and implement various ideologies of white supremacy. Content policies developed to address white supremacist ideologies are often established as part of a broader \"hate speech\" definition. While certain WSE statements attacking individuals based on religion, race or immigration status indeed overlap with our definition of Identity Attack, the motivation to elevate WSE to its own type of abuse is driven by a few factors. WSE content is often marked by various ideologies and linguistic patterns not expressed in direct person-to-person abuse. Attributes of the abuser are often in focus (e.g. whiteness and national identity), as opposed to characteristics of the abused. Additional features of WSE language include the use of dog whistle phrases and emoji, nostalgic references to \"better times\" in history, and the promotion of conspiracies and pseudo-science related to race, religion and sexuality.",
                "cite_spans": [
                    {
                        "start": 276,
                        "end": 278,
                        "text": "10",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extremism, Terrorism and Organized Crime",
                "sec_num": "4.3.1"
            },
            {
                "text": "WSE content can be generalized as belonging to one or more of the following ideologies:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extremism, Terrorism and Organized Crime",
                "sec_num": "4.3.1"
            },
            {
                "text": "\u2022 Neo-Nazism: idolization of Adolph Hitler, praise of Nazi policies or beliefs, use of Nazi symbols or slogans",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extremism, Terrorism and Organized Crime",
                "sec_num": "4.3.1"
            },
            {
                "text": "\u2022 White racial supremacy: belief in white racial superiority, promotion of eugenics, incitement or allusions to a race war, concerns about \"white genocide,\" cynicism towards interracial relationships and miscegenation",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extremism, Terrorism and Organized Crime",
                "sec_num": "4.3.1"
            },
            {
                "text": "\u2022 White cultural supremacy: promotion of a white ethnostate, xenophobic attitudes, nostalgia for times of segregation",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extremism, Terrorism and Organized Crime",
                "sec_num": "4.3.1"
            },
            {
                "text": "\u2022 Holocaust denial, propagation of Jewish conspiracy theories",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extremism, Terrorism and Organized Crime",
                "sec_num": "4.3.1"
            },
            {
                "text": "\u2022 Recruitment or requests for financial support for WSE ideology, incitement of extreme physical fitness as a readiness measure for race-driven conflict",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extremism, Terrorism and Organized Crime",
                "sec_num": "4.3.1"
            },
            {
                "text": "Simply stated, Misinformation is false or misleading information. It may be spread by users who are unaware of its credibility and lack a deliberate intent to harm. Disinformation, a subset of misinformation, refers to the knowing spread of misinformation. The intent behind disinformation is malicious, such as to damage the credibility of a person or organization, or to gain political or financial advantage. Types of Misinformation include fake news, false rumors, conspiracy theories, hoaxes, and opinion spam. Increasingly more forms of misinformation are disallowed on many online platforms, including:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Misinformation",
                "sec_num": "4.3.2"
            },
            {
                "text": "\u2022 Medically unproven health claims that create risk to public health and safety, including the promotion of false cures, incorrect information about public health or emergencies",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Misinformation",
                "sec_num": "4.3.2"
            },
            {
                "text": "\u2022 False or misleading content about members of protected or vulnerable groups",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Misinformation",
                "sec_num": "4.3.2"
            },
            {
                "text": "\u2022 False or misleading content that compromises the integrity of an election, or civic participation in an election",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Misinformation",
                "sec_num": "4.3.2"
            },
            {
                "text": "\u2022 Conspiracy theories",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Misinformation",
                "sec_num": "4.3.2"
            },
            {
                "text": "\u2022 Denial of a well-documented event",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Misinformation",
                "sec_num": "4.3.2"
            },
            {
                "text": "\u2022 Opinion spam, fabricated product reviews",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Misinformation",
                "sec_num": "4.3.2"
            },
            {
                "text": "\u2022 Removal of factual information with intent to erode trust or inflict harm, such as the omission of date, time or context",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Misinformation",
                "sec_num": "4.3.2"
            },
            {
                "text": "\u2022 Manipulation of visual or audio content with the intent to deceive",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Misinformation",
                "sec_num": "4.3.2"
            },
            {
                "text": "The spread of misinformation poses risks to society, erodes trust, hurts decision-making abilities, and may even lead to harmful global health or political events. Misinformation that may result in physical harm, civil unrest or health crises should be considered severe.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Misinformation",
                "sec_num": "4.3.2"
            },
            {
                "text": "In order to provide safe online spaces, content created by users seeking to benefit by causing harm to others financially, sexually or physically is not permitted within digital communities. Forms of Exploitation include Adult Sexual Services, Child Sexual Abuse Material and Scams.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Exploitation",
                "sec_num": "4.4"
            },
            {
                "text": "Certain forms of sexual solicitation and commerce cross over into illegal behavior that exploit often vulnerable participants, and are thus treated as a type of severe abuse. Adult Sexual Services includes:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Adult Sexual Services",
                "sec_num": "4.4.1"
            },
            {
                "text": "\u2022 Promotion or solicitation of illegal sexual services such as prostitution, escort services, paid sexual fetish/domination services and sensual massages",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Adult Sexual Services",
                "sec_num": "4.4.1"
            },
            {
                "text": "\u2022 Organization of human trafficking",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Adult Sexual Services",
                "sec_num": "4.4.1"
            },
            {
                "text": "\u2022 Recruitment for live sex performances, sex chat",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Adult Sexual Services",
                "sec_num": "4.4.1"
            },
            {
                "text": "Child Sexual Abuse Material (CSAM), sometimes referred to as \"child pornography,\" is defined as content involving sexual abuse and exploitation of anyone under the age of eighteen. Materials included in the definition of CSAM have expanded beyond sexual images involving minors to include exploitative text content as well. CSAM is a severe form of abuse that includes:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Child Sexual Abuse Material",
                "sec_num": "4.4.2"
            },
            {
                "text": "\u2022 Images and videos which depict minors in a pornographic, sexually suggestive, or sexually violent manner, including illustrated or digitally altered pornography that depicts minors (e.g. lolicon, shotacon, or cub)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Child Sexual Abuse Material",
                "sec_num": "4.4.2"
            },
            {
                "text": "\u2022 Sharing adult pornography or CSAM with a minor",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Child Sexual Abuse Material",
                "sec_num": "4.4.2"
            },
            {
                "text": "\u2022 Grooming of minors (the development of relationships of trust with the intent to sexually exploit)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Child Sexual Abuse Material",
                "sec_num": "4.4.2"
            },
            {
                "text": "\u2022 Sexual remarks directed at minors",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Child Sexual Abuse Material",
                "sec_num": "4.4.2"
            },
            {
                "text": "\u2022 Arranging real-world sexual encounters or direct solicitation of sexual material from a minor",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Child Sexual Abuse Material",
                "sec_num": "4.4.2"
            },
            {
                "text": "\u2022 Providing advice for or advocacy of child sexual abuse",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Child Sexual Abuse Material",
                "sec_num": "4.4.2"
            },
            {
                "text": "Online scams are attempts to trick a person into providing funds or sensitive information using deceptive or invasive techniques. The perpetrator of a scam may attempt to build insincere relationships over the course of a conversation or misrepresent themselves as someone with skill or authority. Types of Scams that are commonly prohibited from digital communities include:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Scams",
                "sec_num": "4.4.3"
            },
            {
                "text": "\u2022 Attempts to trick users into sending money or sharing personal information (e.g. phishing)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Scams",
                "sec_num": "4.4.3"
            },
            {
                "text": "\u2022 Promise of funds in return for a smaller initial payment via wire transfer, gift cards, or prepaid debit card (e.g. money-flipping)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Scams",
                "sec_num": "4.4.3"
            },
            {
                "text": "\u2022 Offers promising cash or gifts, such as lottery scams",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Scams",
                "sec_num": "4.4.3"
            },
            {
                "text": "\u2022 Romantic and military impersonation",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Scams",
                "sec_num": "4.4.3"
            },
            {
                "text": "\u2022 Promises of debt relief or credit repair",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Scams",
                "sec_num": "4.4.3"
            },
            {
                "text": "\u2022 Recruitment into pyramid schemes",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Scams",
                "sec_num": "4.4.3"
            },
            {
                "text": "Upon careful synthesis of content policies, human rights treaties and recommendations from experts in physical and psychological harm, we have presented a typology of harmful content along with a set of best practices for developing precise definitions of types. In the future, we plan to report on the impact of how the proposed definitions impact the quality of datasets and models built using them, and to share public datasets based on this typology that may be used by the research community. We have published the typology at https: //gitlab.com/sentropy-technologies/ typology-of-online-harm and encourage those who study online abuse to contribute.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions and Future Work",
                "sec_num": "5"
            },
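            {
                "text": "To illustrate how the typology might be consumed when constructing datasets, the sketch below encodes the classes defined in Section 4 as a machine-readable structure. This is a minimal, hypothetical Python encoding of our own devising, not the format of the published typology: the class and category names follow the prose of Section 4 (the violent-threats class is labeled \"Violence\" here for convenience), while the field layout and default-severity flags are assumptions summarizing the severity discussions above.\n\n# Hypothetical encoding of the typology; names follow Section 4, layout is illustrative.\nfrom dataclasses import dataclass\n\n@dataclass(frozen=True)\nclass AbuseClass:\n    name: str      # class name as defined in Section 4\n    category: str  # top-level grouping (Sections 4.1-4.4)\n    severe: bool   # True when the class is treated as severe abuse by default\n\nTYPOLOGY = [\n    AbuseClass(\"Doxing\", \"Person-to-Person Abuse\", True),\n    AbuseClass(\"Identity Attack\", \"Person-to-Person Abuse\", True),\n    AbuseClass(\"Identity Misrepresentation\", \"Person-to-Person Abuse\", False),\n    AbuseClass(\"Insult\", \"Person-to-Person Abuse\", False),\n    AbuseClass(\"Sexual Aggression\", \"Person-to-Person Abuse\", False),  # a defined subset is severe\n    AbuseClass(\"Violence\", \"Person-to-Person Abuse\", False),  # severe when the threat is credible\n    AbuseClass(\"Eating Disorder Promotion\", \"Self-Inflicted Harm\", True),\n    AbuseClass(\"Self-Harm\", \"Self-Inflicted Harm\", False),  # suicidal content is severe\n    AbuseClass(\"Extremism, Terrorism and Organized Crime\", \"Ideological Harm\", True),\n    AbuseClass(\"Misinformation\", \"Ideological Harm\", False),  # severe when it risks physical harm or crises\n    AbuseClass(\"Adult Sexual Services\", \"Exploitation\", True),\n    AbuseClass(\"Child Sexual Abuse Material\", \"Exploitation\", True),\n    AbuseClass(\"Scams\", \"Exploitation\", True),  # involves criminal behavior, cf. the severity criteria in 4.1.4\n]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions and Future Work",
                "sec_num": "5"
            },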
            {
                "text": "Krippendorff (2004) suggests that for annotations to be considered reliable, a minimum score of 0.80 is desirable, with 0.667 being the lowest conceivable limit",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "https://discord.com/guidelines 3 https://www.facebook.com/communitystandards/ 4 https://policy.pinterest.com/en/community-guidelines",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "https://www.reddithelp.com/hc/en-us/sections/ 360008810092-Account-and-Community-Restrictions 6 https://help.twitter.com/en/rules-and-policies 7 https://support.google.com/youtube/topic/2803176? hl=en&reftopic = 6151248 8 https://www.ohchr.org/en/professionalinterest/pages/ ccpr.aspx",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "https://www.congress.gov/116/bills/s894/BILLS-116s894is.xml",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "We wish to thank Cindy Wang, Taylor Rhyne, Bertie Vidgen and the WOAH reviewers for providing detailed feedback on this work.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Content and jurisdiction program: Operational approaches, norms, criteria, mechanisms",
                "authors": [],
                "year": 2019,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Internet and Jurisdiction Policy Network. 2019. Con- tent and jurisdiction program: Operational ap- proaches, norms, criteria, mechanisms.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Automatic Identification and Classification of Misogynistic Language on Twitter",
                "authors": [
                    {
                        "first": "Maria",
                        "middle": [],
                        "last": "Anzovino",
                        "suffix": ""
                    },
                    {
                        "first": "Elisabetta",
                        "middle": [],
                        "last": "Fersini",
                        "suffix": ""
                    },
                    {
                        "first": "Paolo",
                        "middle": [],
                        "last": "Rosso",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "57--64",
                "other_ids": {
                    "DOI": [
                        "10.1007/978-3-319-91947-8_6"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Maria Anzovino, Elisabetta Fersini, and Paolo Rosso. 2018. Automatic Identification and Classification of Misogynistic Language on Twitter, pages 57-64.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Proposals for Improved Regulation of Harmful Online Content",
                "authors": [
                    {
                        "first": "Susan",
                        "middle": [],
                        "last": "Benesch",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Susan Benesch. 2020. Proposals for Improved Regula- tion of Harmful Online Content.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Machine classification and analysis of suiciderelated communication on twitter",
                "authors": [
                    {
                        "first": "Pete",
                        "middle": [],
                        "last": "Burnan",
                        "suffix": ""
                    },
                    {
                        "first": "Walter",
                        "middle": [],
                        "last": "Colombo",
                        "suffix": ""
                    },
                    {
                        "first": "Jonathan",
                        "middle": [],
                        "last": "Scourfield",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of the 26th ACM Conference on Hypertext & Social Media, HT '15",
                "volume": "",
                "issue": "",
                "pages": "75--84",
                "other_ids": {
                    "DOI": [
                        "10.1145/2700171.2791023"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Pete Burnan, Walter Colombo, and Jonathan Scourfield. 2015. Machine classification and analysis of suicide- related communication on twitter. In Proceedings of the 26th ACM Conference on Hypertext & Social Media, HT '15, page 75-84, New York, NY, USA. Association for Computing Machinery.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Latent suicide risk detection on microblog via suicideoriented word embeddings and layered attention",
                "authors": [
                    {
                        "first": "Lei",
                        "middle": [],
                        "last": "Cao",
                        "suffix": ""
                    },
                    {
                        "first": "Huijun",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "Ling",
                        "middle": [],
                        "last": "Feng",
                        "suffix": ""
                    },
                    {
                        "first": "Zihan",
                        "middle": [],
                        "last": "Wei",
                        "suffix": ""
                    },
                    {
                        "first": "Xin",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "Ningyun",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    },
                    {
                        "first": "Xiaohao",
                        "middle": [],
                        "last": "He",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
                "volume": "",
                "issue": "",
                "pages": "1718--1728",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/D19-1181"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Lei Cao, Huijun Zhang, Ling Feng, Zihan Wei, Xin Wang, Ningyun Li, and Xiaohao He. 2019. La- tent suicide risk detection on microblog via suicide- oriented word embeddings and layered attention. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 1718- 1728, Hong Kong, China. Association for Computa- tional Linguistics.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "this post will just get taken down\": Characterizing removed pro-eating disorder social media content",
                "authors": [
                    {
                        "first": "Stevie",
                        "middle": [],
                        "last": "Chancellor",
                        "suffix": ""
                    },
                    {
                        "first": "Zhiyuan",
                        "middle": [
                            "(Jerry)"
                        ],
                        "last": "Lin",
                        "suffix": ""
                    },
                    {
                        "first": "Munmun",
                        "middle": [
                            "De"
                        ],
                        "last": "Choudhury",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16",
                "volume": "",
                "issue": "",
                "pages": "1157--1162",
                "other_ids": {
                    "DOI": [
                        "10.1145/2858036.2858248"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Stevie Chancellor, Zhiyuan (Jerry) Lin, and Munmun De Choudhury. 2016. \"this post will just get taken down\": Characterizing removed pro-eating disorder social media content. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, page 1157-1162, New York, NY, USA. Association for Computing Machinery.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Automated hate speech detection and the problem of offensive language",
                "authors": [
                    {
                        "first": "Thomas",
                        "middle": [],
                        "last": "Davidson",
                        "suffix": ""
                    },
                    {
                        "first": "Dana",
                        "middle": [],
                        "last": "Warmsley",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Macy",
                        "suffix": ""
                    },
                    {
                        "first": "Ingmar",
                        "middle": [],
                        "last": "Weber",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the Eleventh International AAAI Conference on Web and Social Media",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the Eleventh International AAAI Conference on Web and Social Media.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Leveraging publicly available data to discern patterns of human trafficking activity",
                "authors": [
                    {
                        "first": "Artur",
                        "middle": [],
                        "last": "Dubrawski",
                        "suffix": ""
                    },
                    {
                        "first": "Kyle",
                        "middle": [],
                        "last": "Miller",
                        "suffix": ""
                    },
                    {
                        "first": "Matthew",
                        "middle": [],
                        "last": "Barnes",
                        "suffix": ""
                    },
                    {
                        "first": "Benedikt",
                        "middle": [],
                        "last": "Boecking",
                        "suffix": ""
                    },
                    {
                        "first": "Emily",
                        "middle": [],
                        "last": "Kennedy",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "In Journal of Human Trafficking",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Artur Dubrawski, Kyle Miller, Matthew Barnes, Benedikt Boecking, and Emily Kennedy. 2015. Leveraging publicly available data to discern pat- terns of human trafficking activity. In Journal of Human Trafficking.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Detecting insults in social commentary",
                "authors": [
                    {
                        "first": "",
                        "middle": [],
                        "last": "Kaggle",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kaggle. 2012. Detecting insults in social commentary.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Toxic comment classification challenge",
                "authors": [
                    {
                        "first": "",
                        "middle": [],
                        "last": "Kaggle",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kaggle. 2018. Toxic comment classification challenge.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Content analysis: An introduction to its methodology",
                "authors": [
                    {
                        "first": "Klaus",
                        "middle": [],
                        "last": "Krippendorff",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Klaus Krippendorff. 2004. Content analysis: An intro- duction to its methodology. Sage.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Detecting child sexual abuse material: A comprehensive survey",
                "authors": [
                    {
                        "first": "Hee-Eun",
                        "middle": [],
                        "last": "Lee",
                        "suffix": ""
                    },
                    {
                        "first": "Tatiana",
                        "middle": [],
                        "last": "Ermakova",
                        "suffix": ""
                    },
                    {
                        "first": "Vasilis",
                        "middle": [],
                        "last": "Ververis",
                        "suffix": ""
                    },
                    {
                        "first": "Benjamin",
                        "middle": [],
                        "last": "Fabian",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Forensic Science International: Digital Investigation",
                "volume": "34",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "DOI": [
                        "10.1016/j.fsidi.2020.301022"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Hee-Eun Lee, Tatiana Ermakova, Vasilis Ververis, and Benjamin Fabian. 2020. Detecting child sexual abuse material: A comprehensive survey\". Foren- sic Science International: Digital Investigation, 34:301022.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Measuring the reliability of hate speech annotations: The case of the european refugee crisis",
                "authors": [
                    {
                        "first": "Bj\u00f6rn",
                        "middle": [],
                        "last": "Ross",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Rist",
                        "suffix": ""
                    },
                    {
                        "first": "Guillermo",
                        "middle": [],
                        "last": "Carbonell",
                        "suffix": ""
                    },
                    {
                        "first": "Benjamin",
                        "middle": [],
                        "last": "Cabrera",
                        "suffix": ""
                    },
                    {
                        "first": "Nils",
                        "middle": [],
                        "last": "Kurowsky",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Wojatzki",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the Workshop on Natural Language Processing for ComputerMediated Communication",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "DOI": [
                        "10.17185/duepublico/42132"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Bj\u00f6rn Ross, Michael Rist, Guillermo Carbonell, Ben- jamin Cabrera, Nils Kurowsky, and Michael Wo- jatzki. 2017. Measuring the reliability of hate speech annotations: The case of the european refugee crisis. In Proceedings of the Workshop on Natural Language Processing for ComputerMedi- ated Communication.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "A survey on hate speech detection using natural language processing",
                "authors": [
                    {
                        "first": "Anna",
                        "middle": [],
                        "last": "Schmidt",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Wiegand",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the Fifth International workshop on natural language processing for social media",
                "volume": "",
                "issue": "",
                "pages": "1--10",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language pro- cessing. In Proceedings of the Fifth International workshop on natural language processing for social media, pages 1-10.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Fifteen minutes of unwanted fame: Detecting and characterizing doxing",
                "authors": [
                    {
                        "first": "Peter",
                        "middle": [],
                        "last": "Snyder",
                        "suffix": ""
                    },
                    {
                        "first": "Periwinkle",
                        "middle": [],
                        "last": "Doerfler",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Kanich",
                        "suffix": ""
                    },
                    {
                        "first": "Damon",
                        "middle": [],
                        "last": "Mccoy",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the 2017 Internet Measurement Conference",
                "volume": "",
                "issue": "",
                "pages": "432--444",
                "other_ids": {
                    "DOI": [
                        "10.1145/3131365.3131385"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Peter Snyder, Periwinkle Doerfler, Chris Kanich, and Damon McCoy. 2017. Fifteen minutes of unwanted fame: Detecting and characterizing doxing. In Pro- ceedings of the 2017 Internet Measurement Confer- ence, page 432-444, New York, NY, USA. Associa- tion for Computing Machinery.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "The revised conflict tactics scales (cts2): Development and preliminary psychometric data",
                "authors": [
                    {
                        "first": "Murray",
                        "middle": [
                            "A"
                        ],
                        "last": "Straus",
                        "suffix": ""
                    },
                    {
                        "first": "Sherry",
                        "middle": [
                            "L"
                        ],
                        "last": "Hamby",
                        "suffix": ""
                    },
                    {
                        "first": "Sue",
                        "middle": [],
                        "last": "Boney-Mccoy",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [
                            "B"
                        ],
                        "last": "Sugarman",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Journal of Family Issues",
                "volume": "17",
                "issue": "3",
                "pages": "283--316",
                "other_ids": {
                    "DOI": [
                        "10.1177/019251396017003001"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Murray A. Straus, Sherry L. Hamby, Sue Boney- McCoy, and David B. Sugarman. 1996. The revised conflict tactics scales (cts2): Development and pre- liminary psychometric data. Journal of Family Is- sues, 17(3):283-316.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Motivations, methods and metrics of misinformation detection: An nlp perspective",
                "authors": [
                    {
                        "first": "Qi",
                        "middle": [],
                        "last": "Su",
                        "suffix": ""
                    },
                    {
                        "first": "Mingyu",
                        "middle": [],
                        "last": "Wan",
                        "suffix": ""
                    },
                    {
                        "first": "Xiaoqian",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    },
                    {
                        "first": "Chu-Ren",
                        "middle": [],
                        "last": "Huang",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Natural Language Processing Research",
                "volume": "1",
                "issue": "",
                "pages": "1--13",
                "other_ids": {
                    "DOI": [
                        "10.2991/nlpr.d.200522.001"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Qi Su, Mingyu Wan, Xiaoqian Liu, and Chu-Ren Huang. 2020. Motivations, methods and metrics of misinformation detection: An nlp perspective. Nat- ural Language Processing Research, 1:1-13.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Combating human trafficking with deep multimodal models",
                "authors": [
                    {
                        "first": "Edmund",
                        "middle": [],
                        "last": "Tong",
                        "suffix": ""
                    },
                    {
                        "first": "Amir",
                        "middle": [],
                        "last": "Zadeh",
                        "suffix": ""
                    },
                    {
                        "first": "Cara",
                        "middle": [],
                        "last": "Jones",
                        "suffix": ""
                    },
                    {
                        "first": "Louis-Philippe",
                        "middle": [],
                        "last": "Morency",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Edmund Tong, Amir Zadeh, Cara Jones, and Louis- Philippe Morency. 2017. Combating human traf- ficking with deep multimodal models. CoRR, abs/1705.02735.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Guy De Pauw, Walter Daelemans, and V\u00e9ronique Hoste. 2015. Detection and fine-grained classification of cyberbullying events",
                "authors": [
                    {
                        "first": "Cynthia",
                        "middle": [],
                        "last": "Van Hee",
                        "suffix": ""
                    },
                    {
                        "first": "Els",
                        "middle": [],
                        "last": "Lefever",
                        "suffix": ""
                    },
                    {
                        "first": "Ben",
                        "middle": [],
                        "last": "Verhoeven",
                        "suffix": ""
                    },
                    {
                        "first": "Julie",
                        "middle": [],
                        "last": "Mennes",
                        "suffix": ""
                    },
                    {
                        "first": "Bart",
                        "middle": [],
                        "last": "Desmet",
                        "suffix": ""
                    },
                    {
                        "first": "Guy",
                        "middle": [],
                        "last": "De Pauw",
                        "suffix": ""
                    },
                    {
                        "first": "Walter",
                        "middle": [],
                        "last": "Daelemans",
                        "suffix": ""
                    },
                    {
                        "first": "V\u00e9ronique",
                        "middle": [],
                        "last": "Hoste",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "International Conference Recent Advances in Natural Language Processing (RANLP)",
                "volume": "",
                "issue": "",
                "pages": "672--680",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Cynthia Van Hee, Els Lefever, Ben Verhoeven, Julie Mennes, Bart Desmet, Guy De Pauw, Walter Daele- mans, and V\u00e9ronique Hoste. 2015. Detection and fine-grained classification of cyberbullying events. In International Conference Recent Advances in Nat- ural Language Processing (RANLP), pages 672- 680.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Directions in abusive language training data: Garbage in, garbage out. ArXiv",
                "authors": [
                    {
                        "first": "Bertie",
                        "middle": [],
                        "last": "Vidgen",
                        "suffix": ""
                    },
                    {
                        "first": "Leon",
                        "middle": [],
                        "last": "Derczynski",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Bertie Vidgen and Leon Derczynski. 2020. Direc- tions in abusive language training data: Garbage in, garbage out. ArXiv, abs/2004.01670.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Challenges and frontiers in abusive content detection",
                "authors": [
                    {
                        "first": "Bertie",
                        "middle": [],
                        "last": "Vidgen",
                        "suffix": ""
                    },
                    {
                        "first": "Alex",
                        "middle": [],
                        "last": "Harris",
                        "suffix": ""
                    },
                    {
                        "first": "Dong",
                        "middle": [],
                        "last": "Nguyen",
                        "suffix": ""
                    },
                    {
                        "first": "Rebekah",
                        "middle": [],
                        "last": "Tromble",
                        "suffix": ""
                    },
                    {
                        "first": "Scott",
                        "middle": [],
                        "last": "Hale",
                        "suffix": ""
                    },
                    {
                        "first": "Helen",
                        "middle": [],
                        "last": "Margetts",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Bertie Vidgen, Alex Harris, Dong Nguyen, Rebekah Tromble, Scott Hale, and Helen Margetts. 2019. Challenges and frontiers in abusive content detec- tion. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Detecting and characterizing eating-disorder communities on social media",
                "authors": [
                    {
                        "first": "Tao",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "Markus",
                        "middle": [],
                        "last": "Brede",
                        "suffix": ""
                    },
                    {
                        "first": "Antonella",
                        "middle": [],
                        "last": "Ianni",
                        "suffix": ""
                    },
                    {
                        "first": "Emmanouil",
                        "middle": [],
                        "last": "Mentzakis",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the Tenth ACM International conference on web search and data mining",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "DOI": [
                        "10.1145/3018661.3018706"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Tao Wang, Markus Brede, Antonella Ianni, and Em- manouil Mentzakis. 2017. Detecting and character- izing eating-disorder communities on social media. In Proceedings of the Tenth ACM International con- ference on web search and data mining.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Are you a racist or am i seeing things? annotator influence on hate speech detection on twitter",
                "authors": [
                    {
                        "first": "Zeerak",
                        "middle": [],
                        "last": "Waseem",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of the First Workshop on NLP and Computational Social Science",
                "volume": "",
                "issue": "",
                "pages": "138--142",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/W16-5618"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Zeerak Waseem. 2016. Are you a racist or am i seeing things? annotator influence on hate speech detection on twitter. In Proceedings of the First Workshop on NLP and Computational Social Science, pages 138- 142.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "Understanding abuse: A typology of abusive language detection subtasks",
                "authors": [
                    {
                        "first": "Zeerak",
                        "middle": [],
                        "last": "Waseem",
                        "suffix": ""
                    },
                    {
                        "first": "Thomas",
                        "middle": [],
                        "last": "Davidson",
                        "suffix": ""
                    },
                    {
                        "first": "Dana",
                        "middle": [],
                        "last": "Warmsley",
                        "suffix": ""
                    },
                    {
                        "first": "Ingmar",
                        "middle": [],
                        "last": "Weber",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the First Workshop on Abusive Language Online",
                "volume": "",
                "issue": "",
                "pages": "78--84",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/W17-3012"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. In Proceedings of the First Workshop on Abusive Lan- guage Online, pages 78-84. Association for Compu- tational Linguistics.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "Hateful symbols or hateful people? predictive features for hate speech detection on twitter",
                "authors": [
                    {
                        "first": "Zeerak",
                        "middle": [],
                        "last": "Waseem",
                        "suffix": ""
                    },
                    {
                        "first": "Dirk",
                        "middle": [],
                        "last": "Hovy",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of the NAACL Student Research Workshop",
                "volume": "",
                "issue": "",
                "pages": "88--93",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/N16-2013"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Zeerak Waseem and Dirk Hovy. 2016. Hateful sym- bols or hateful people? predictive features for hate speech detection on twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF25": {
                "ref_id": "b25",
                "title": "Detection of Abusive Language: the Problem of Biased Datasets",
                "authors": [
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Wiegand",
                        "suffix": ""
                    },
                    {
                        "first": "Josef",
                        "middle": [],
                        "last": "Ruppenhofer",
                        "suffix": ""
                    },
                    {
                        "first": "Thomas",
                        "middle": [],
                        "last": "Kleinbauer",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
                "volume": "1",
                "issue": "",
                "pages": "602--608",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/N19-1060"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Michael Wiegand, Josef Ruppenhofer, and Thomas Kleinbauer. 2019. Detection of Abusive Language: the Problem of Biased Datasets. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 602-608, Minneapolis, Minnesota. Association for Computational Linguis- tics.",
                "links": null
            },
            "BIBREF26": {
                "ref_id": "b26",
                "title": "Women's Media Center",
                "authors": [],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Women's Media Center. Online abuse 101.",
                "links": null
            },
            "BIBREF27": {
                "ref_id": "b27",
                "title": "Ex machina: Personal attacks seen at scale",
                "authors": [
                    {
                        "first": "Ellery",
                        "middle": [],
                        "last": "Wulczyn",
                        "suffix": ""
                    },
                    {
                        "first": "Nithum",
                        "middle": [],
                        "last": "Thain",
                        "suffix": ""
                    },
                    {
                        "first": "Lucas",
                        "middle": [],
                        "last": "Dixon",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee",
                "volume": "",
                "issue": "",
                "pages": "1391--1399",
                "other_ids": {
                    "DOI": [
                        "10.1145/3038912.3052591"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In Pro- ceedings of the 26th International Conference on World Wide Web, WWW '17, page 1391-1399, Re- public and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "uris": null,
                "type_str": "figure",
                "text": "A Typology of Harmful Content do not proactively remove harassing content, such as onion sites, torrents, IRC, and anonymous text sharing websites such as pastebin.com, 4chan and 8chan. Doxing can lead to another form of harm known as SWATing, in which someone calls law enforcement with false reports of violence at an address in order to cause harm at the target's residence (e.g. a SWAT team kicking in their door).",
                "num": null
            },
            "TABREF0": {
                "html": null,
                "type_str": "table",
                "text": "Identity Attack and Insult should be considered non-examples of Identity Misrepresentation.",
                "content": "<table><tr><td>Positive criteria defined for</td></tr><tr><td>\u2022 Statements about protected or vulnerable</td></tr><tr><td>groups presented as declarative truth without</td></tr><tr><td>supporting evidence</td></tr><tr><td>\u2022 Microaggressions, subtle expressions of bias</td></tr><tr><td>towards a protected or vulnerable group</td></tr><tr><td>\u2022 Intent to spread fear of protected or vulnerable</td></tr><tr><td>groups, without calls for violence</td></tr></table>",
                "num": null
            }
        }
    }
}