Hetan07 committed on
Commit
0e929cb
1 Parent(s): d8be960

Upload 10 files

csvs/X_test.csv ADDED
The diff for this file is too large to render. See raw diff
 
csvs/X_train.csv ADDED
The diff for this file is too large to render. See raw diff
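Although the X_train.csv and X_test.csv diffs are not rendered here, the deployment code added later in this commit (src/deployment_utils.py) indexes them by the columns first_party, second_party and Facts. A minimal sketch of loading these feature splits with pandas, assuming they sit under csvs/ as uploaded in this commit and carry an unnamed index column; the project's own read_data() helper in utils.py is not part of this view, so this is only an illustration:

import pandas as pd

# Hypothetical loader for the feature splits added in this commit.
# Column names are taken from how src/deployment_utils.py indexes X_test.
X_train = pd.read_csv("csvs/X_train.csv", index_col=0)
X_test = pd.read_csv("csvs/X_test.csv", index_col=0)
print(X_test[["first_party", "second_party", "Facts"]].head())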
 
csvs/y_test.csv ADDED
@@ -0,0 +1,694 @@
1
+ ,winner_index
2
+ 397,1
3
+ 559,1
4
+ 401,0
5
+ 923,0
6
+ 334,1
7
+ 1726,0
8
+ 1653,0
9
+ 1905,0
10
+ 1899,0
11
+ 1455,0
12
+ 855,1
13
+ 548,1
14
+ 1666,0
15
+ 1116,1
16
+ 344,0
17
+ 1277,1
18
+ 410,0
19
+ 1174,1
20
+ 1308,0
21
+ 2188,0
22
+ 2324,0
23
+ 222,1
24
+ 1783,0
25
+ 1395,0
26
+ 888,1
27
+ 1275,1
28
+ 999,0
29
+ 978,0
30
+ 175,1
31
+ 94,0
32
+ 542,0
33
+ 341,0
34
+ 1833,0
35
+ 1027,0
36
+ 679,1
37
+ 711,1
38
+ 306,0
39
+ 393,1
40
+ 1764,0
41
+ 1204,1
42
+ 1310,1
43
+ 326,0
44
+ 414,0
45
+ 180,1
46
+ 208,0
47
+ 1009,0
48
+ 753,1
49
+ 589,0
50
+ 312,0
51
+ 336,0
52
+ 219,0
53
+ 477,0
54
+ 355,0
55
+ 644,0
56
+ 1059,0
57
+ 757,1
58
+ 450,0
59
+ 1279,0
60
+ 1304,1
61
+ 2250,0
62
+ 1124,0
63
+ 133,1
64
+ 1031,0
65
+ 1024,1
66
+ 1818,0
67
+ 1305,0
68
+ 150,1
69
+ 177,0
70
+ 122,0
71
+ 990,1
72
+ 489,0
73
+ 1692,0
74
+ 1106,1
75
+ 1623,0
76
+ 680,1
77
+ 1155,0
78
+ 103,0
79
+ 494,0
80
+ 964,1
81
+ 2338,0
82
+ 603,0
83
+ 1646,0
84
+ 1415,0
85
+ 800,0
86
+ 2161,0
87
+ 1159,1
88
+ 584,0
89
+ 1228,0
90
+ 659,0
91
+ 1645,0
92
+ 612,0
93
+ 510,1
94
+ 1838,0
95
+ 885,1
96
+ 1374,0
97
+ 621,1
98
+ 83,1
99
+ 541,0
100
+ 844,1
101
+ 2237,0
102
+ 598,1
103
+ 58,1
104
+ 522,0
105
+ 1307,0
106
+ 2045,0
107
+ 463,0
108
+ 1048,0
109
+ 1003,1
110
+ 149,1
111
+ 1177,0
112
+ 556,1
113
+ 1758,0
114
+ 965,0
115
+ 361,0
116
+ 2179,0
117
+ 2361,0
118
+ 10,0
119
+ 301,1
120
+ 1857,0
121
+ 230,0
122
+ 113,0
123
+ 1516,0
124
+ 450,1
125
+ 435,0
126
+ 1738,0
127
+ 697,1
128
+ 1007,1
129
+ 488,1
130
+ 474,1
131
+ 2344,0
132
+ 975,0
133
+ 318,1
134
+ 266,0
135
+ 17,1
136
+ 1328,1
137
+ 483,0
138
+ 661,0
139
+ 683,1
140
+ 27,0
141
+ 2317,0
142
+ 1605,0
143
+ 142,0
144
+ 666,0
145
+ 2038,0
146
+ 592,0
147
+ 648,0
148
+ 1409,0
149
+ 634,1
150
+ 359,1
151
+ 549,0
152
+ 626,1
153
+ 1824,0
154
+ 889,0
155
+ 746,1
156
+ 62,1
157
+ 97,1
158
+ 2352,0
159
+ 391,1
160
+ 1503,0
161
+ 1517,0
162
+ 886,0
163
+ 2068,0
164
+ 828,1
165
+ 1377,0
166
+ 145,1
167
+ 1449,0
168
+ 1802,0
169
+ 167,0
170
+ 537,1
171
+ 843,0
172
+ 825,1
173
+ 1909,0
174
+ 1250,1
175
+ 1797,0
176
+ 536,1
177
+ 91,1
178
+ 1106,0
179
+ 996,1
180
+ 840,0
181
+ 716,1
182
+ 1198,0
183
+ 227,0
184
+ 1386,0
185
+ 801,1
186
+ 430,0
187
+ 1695,0
188
+ 328,1
189
+ 1337,1
190
+ 849,0
191
+ 1220,1
192
+ 1815,0
193
+ 913,1
194
+ 650,1
195
+ 1072,0
196
+ 1023,0
197
+ 636,1
198
+ 106,0
199
+ 473,0
200
+ 660,1
201
+ 1398,0
202
+ 510,0
203
+ 181,0
204
+ 882,0
205
+ 1385,0
206
+ 1298,1
207
+ 796,0
208
+ 387,0
209
+ 431,1
210
+ 1271,1
211
+ 1332,1
212
+ 639,0
213
+ 472,0
214
+ 545,1
215
+ 65,1
216
+ 2232,0
217
+ 397,0
218
+ 214,1
219
+ 688,1
220
+ 356,0
221
+ 405,1
222
+ 1348,1
223
+ 18,1
224
+ 1830,0
225
+ 739,1
226
+ 1788,0
227
+ 1435,0
228
+ 544,0
229
+ 1750,0
230
+ 663,1
231
+ 892,0
232
+ 476,1
233
+ 770,0
234
+ 2249,0
235
+ 2139,0
236
+ 1123,1
237
+ 1326,1
238
+ 1107,0
239
+ 1349,1
240
+ 816,0
241
+ 265,0
242
+ 1640,0
243
+ 1234,0
244
+ 553,1
245
+ 1933,0
246
+ 1921,0
247
+ 909,1
248
+ 691,0
249
+ 772,0
250
+ 872,0
251
+ 675,1
252
+ 1160,1
253
+ 585,0
254
+ 1664,0
255
+ 150,0
256
+ 1053,1
257
+ 700,1
258
+ 617,0
259
+ 1892,0
260
+ 1249,1
261
+ 205,0
262
+ 470,0
263
+ 348,1
264
+ 582,1
265
+ 1002,0
266
+ 1554,0
267
+ 290,0
268
+ 502,1
269
+ 130,0
270
+ 2046,0
271
+ 471,1
272
+ 635,0
273
+ 987,0
274
+ 493,0
275
+ 1242,1
276
+ 38,1
277
+ 910,1
278
+ 1609,0
279
+ 85,1
280
+ 760,1
281
+ 244,0
282
+ 1762,0
283
+ 622,0
284
+ 420,0
285
+ 420,1
286
+ 1212,0
287
+ 2111,0
288
+ 1082,0
289
+ 1059,1
290
+ 1081,1
291
+ 533,1
292
+ 2365,0
293
+ 466,0
294
+ 898,1
295
+ 705,1
296
+ 631,1
297
+ 1300,1
298
+ 1443,0
299
+ 800,1
300
+ 984,0
301
+ 1615,0
302
+ 1260,0
303
+ 1672,0
304
+ 1636,0
305
+ 582,0
306
+ 1330,1
307
+ 61,0
308
+ 1010,1
309
+ 2299,0
310
+ 1796,0
311
+ 907,1
312
+ 586,1
313
+ 1032,1
314
+ 1613,0
315
+ 950,0
316
+ 249,1
317
+ 1948,0
318
+ 132,1
319
+ 1011,0
320
+ 1224,1
321
+ 1352,0
322
+ 308,1
323
+ 1751,0
324
+ 724,0
325
+ 766,1
326
+ 1316,0
327
+ 821,0
328
+ 721,0
329
+ 982,1
330
+ 941,1
331
+ 1025,0
332
+ 1603,0
333
+ 1014,1
334
+ 989,0
335
+ 123,0
336
+ 2222,0
337
+ 390,1
338
+ 565,1
339
+ 293,0
340
+ 333,1
341
+ 2115,0
342
+ 216,0
343
+ 778,1
344
+ 1073,1
345
+ 812,0
346
+ 1264,0
347
+ 758,1
348
+ 1223,0
349
+ 1877,0
350
+ 577,1
351
+ 104,1
352
+ 91,0
353
+ 1233,0
354
+ 245,1
355
+ 1262,1
356
+ 1188,1
357
+ 1208,1
358
+ 956,1
359
+ 1644,0
360
+ 802,0
361
+ 1383,0
362
+ 1649,0
363
+ 780,0
364
+ 1050,0
365
+ 1343,1
366
+ 1424,0
367
+ 116,1
368
+ 1835,0
369
+ 191,1
370
+ 475,0
371
+ 1224,0
372
+ 1820,0
373
+ 23,0
374
+ 1626,0
375
+ 2274,0
376
+ 381,0
377
+ 757,0
378
+ 752,0
379
+ 335,0
380
+ 475,1
381
+ 608,1
382
+ 895,1
383
+ 1241,1
384
+ 192,0
385
+ 925,0
386
+ 696,1
387
+ 764,1
388
+ 432,1
389
+ 899,1
390
+ 1154,0
391
+ 677,0
392
+ 507,1
393
+ 2180,0
394
+ 1390,0
395
+ 1648,0
396
+ 335,1
397
+ 24,0
398
+ 1722,0
399
+ 600,1
400
+ 295,0
401
+ 1437,0
402
+ 283,0
403
+ 592,1
404
+ 1963,0
405
+ 546,1
406
+ 2112,0
407
+ 942,1
408
+ 543,0
409
+ 2351,0
410
+ 60,1
411
+ 924,1
412
+ 195,0
413
+ 1075,0
414
+ 404,1
415
+ 161,0
416
+ 1915,0
417
+ 1241,0
418
+ 2138,0
419
+ 39,1
420
+ 376,0
421
+ 462,1
422
+ 520,1
423
+ 166,1
424
+ 1199,0
425
+ 1902,0
426
+ 918,0
427
+ 1425,0
428
+ 1019,1
429
+ 570,1
430
+ 347,0
431
+ 842,1
432
+ 751,0
433
+ 1885,0
434
+ 664,0
435
+ 1454,0
436
+ 707,0
437
+ 683,0
438
+ 1289,1
439
+ 47,1
440
+ 198,0
441
+ 44,0
442
+ 1427,0
443
+ 1043,0
444
+ 24,1
445
+ 973,0
446
+ 1684,0
447
+ 828,0
448
+ 197,0
449
+ 1812,0
450
+ 2193,0
451
+ 1366,0
452
+ 623,0
453
+ 1141,1
454
+ 995,0
455
+ 2113,0
456
+ 1157,1
457
+ 784,0
458
+ 1755,0
459
+ 444,1
460
+ 2205,0
461
+ 723,1
462
+ 916,0
463
+ 1391,0
464
+ 417,0
465
+ 377,0
466
+ 550,1
467
+ 1048,1
468
+ 261,1
469
+ 1679,0
470
+ 197,1
471
+ 1209,1
472
+ 86,0
473
+ 1102,1
474
+ 94,1
475
+ 110,0
476
+ 144,0
477
+ 803,0
478
+ 193,1
479
+ 337,0
480
+ 316,0
481
+ 1924,0
482
+ 1290,1
483
+ 370,1
484
+ 853,0
485
+ 933,1
486
+ 1898,0
487
+ 265,1
488
+ 662,1
489
+ 839,1
490
+ 1451,0
491
+ 671,0
492
+ 552,0
493
+ 1506,0
494
+ 1715,0
495
+ 110,1
496
+ 1988,0
497
+ 2320,0
498
+ 945,0
499
+ 1232,0
500
+ 681,0
501
+ 1292,0
502
+ 2006,0
503
+ 643,1
504
+ 1638,0
505
+ 1096,1
506
+ 917,1
507
+ 2177,0
508
+ 1011,1
509
+ 355,1
510
+ 1,1
511
+ 1057,0
512
+ 418,0
513
+ 1149,0
514
+ 563,0
515
+ 899,0
516
+ 1151,1
517
+ 799,1
518
+ 462,0
519
+ 327,1
520
+ 1142,1
521
+ 1358,0
522
+ 948,0
523
+ 1064,1
524
+ 131,0
525
+ 1279,1
526
+ 1564,0
527
+ 112,1
528
+ 1005,0
529
+ 1682,0
530
+ 918,1
531
+ 793,0
532
+ 32,1
533
+ 647,1
534
+ 424,1
535
+ 1070,0
536
+ 1996,0
537
+ 1159,0
538
+ 275,1
539
+ 490,1
540
+ 2163,0
541
+ 736,1
542
+ 973,1
543
+ 2321,0
544
+ 601,1
545
+ 901,0
546
+ 2085,0
547
+ 293,1
548
+ 1459,0
549
+ 1244,1
550
+ 905,0
551
+ 935,1
552
+ 887,1
553
+ 95,1
554
+ 108,1
555
+ 1044,0
556
+ 479,0
557
+ 370,0
558
+ 1126,0
559
+ 255,1
560
+ 1254,0
561
+ 169,1
562
+ 28,0
563
+ 216,1
564
+ 863,0
565
+ 1965,0
566
+ 581,0
567
+ 486,0
568
+ 163,0
569
+ 1038,0
570
+ 2042,0
571
+ 130,1
572
+ 1229,1
573
+ 1851,0
574
+ 891,1
575
+ 761,1
576
+ 136,1
577
+ 868,1
578
+ 1195,1
579
+ 1978,0
580
+ 497,0
581
+ 817,0
582
+ 517,0
583
+ 262,0
584
+ 1147,0
585
+ 446,0
586
+ 964,0
587
+ 314,0
588
+ 1181,1
589
+ 250,1
590
+ 1207,0
591
+ 823,1
592
+ 187,1
593
+ 726,1
594
+ 886,1
595
+ 1180,0
596
+ 542,1
597
+ 1122,1
598
+ 228,1
599
+ 1346,1
600
+ 2013,0
601
+ 569,0
602
+ 336,1
603
+ 2167,0
604
+ 1987,0
605
+ 354,1
606
+ 607,1
607
+ 550,0
608
+ 2131,0
609
+ 678,0
610
+ 2063,0
611
+ 170,0
612
+ 1201,1
613
+ 332,0
614
+ 806,1
615
+ 1476,0
616
+ 195,1
617
+ 818,0
618
+ 1276,1
619
+ 2175,0
620
+ 622,1
621
+ 141,1
622
+ 140,0
623
+ 549,1
624
+ 576,0
625
+ 30,0
626
+ 718,0
627
+ 1577,0
628
+ 1100,0
629
+ 1592,0
630
+ 2101,0
631
+ 2089,0
632
+ 753,0
633
+ 921,0
634
+ 1145,0
635
+ 612,1
636
+ 2062,0
637
+ 1217,0
638
+ 534,0
639
+ 400,1
640
+ 766,0
641
+ 491,0
642
+ 1128,1
643
+ 1512,0
644
+ 1163,1
645
+ 485,1
646
+ 1022,0
647
+ 70,1
648
+ 1373,0
649
+ 1060,0
650
+ 867,1
651
+ 893,0
652
+ 2255,0
653
+ 776,0
654
+ 1826,0
655
+ 271,1
656
+ 2316,0
657
+ 1556,0
658
+ 878,0
659
+ 2066,0
660
+ 1148,1
661
+ 1662,0
662
+ 174,1
663
+ 1051,1
664
+ 548,0
665
+ 455,0
666
+ 628,0
667
+ 685,1
668
+ 833,1
669
+ 730,1
670
+ 2226,0
671
+ 433,1
672
+ 992,1
673
+ 1055,1
674
+ 1487,0
675
+ 1321,0
676
+ 1823,0
677
+ 1678,0
678
+ 1509,0
679
+ 131,1
680
+ 323,1
681
+ 2069,0
682
+ 1282,0
683
+ 457,0
684
+ 1008,0
685
+ 129,0
686
+ 1121,0
687
+ 42,1
688
+ 203,0
689
+ 156,1
690
+ 1054,0
691
+ 539,1
692
+ 1119,0
693
+ 732,0
694
+ 1819,0
csvs/y_train.csv ADDED
@@ -0,0 +1,2772 @@
1
+ ,winner_index
2
+ 182,0
3
+ 1245,0
4
+ 880,0
5
+ 170,1
6
+ 1171,1
7
+ 940,0
8
+ 16,0
9
+ 605,0
10
+ 2256,0
11
+ 2367,0
12
+ 1744,0
13
+ 1560,0
14
+ 985,1
15
+ 1945,0
16
+ 926,1
17
+ 595,0
18
+ 346,1
19
+ 855,0
20
+ 102,0
21
+ 1472,0
22
+ 488,0
23
+ 786,0
24
+ 2105,0
25
+ 1514,0
26
+ 380,1
27
+ 2130,0
28
+ 432,0
29
+ 1273,0
30
+ 852,0
31
+ 386,1
32
+ 272,1
33
+ 181,1
34
+ 392,1
35
+ 851,1
36
+ 2238,0
37
+ 427,0
38
+ 1119,1
39
+ 572,1
40
+ 806,0
41
+ 530,1
42
+ 405,0
43
+ 1416,0
44
+ 241,1
45
+ 1031,1
46
+ 2363,0
47
+ 689,0
48
+ 304,1
49
+ 1005,1
50
+ 25,0
51
+ 2332,0
52
+ 896,0
53
+ 179,1
54
+ 1242,0
55
+ 2372,0
56
+ 1711,0
57
+ 1128,0
58
+ 344,1
59
+ 715,1
60
+ 1521,0
61
+ 2356,0
62
+ 2257,0
63
+ 1298,0
64
+ 1290,0
65
+ 175,0
66
+ 456,1
67
+ 495,1
68
+ 613,0
69
+ 1326,0
70
+ 183,0
71
+ 627,1
72
+ 2199,0
73
+ 606,0
74
+ 1445,0
75
+ 77,1
76
+ 1544,0
77
+ 1110,1
78
+ 1862,0
79
+ 1716,0
80
+ 378,0
81
+ 1082,1
82
+ 1158,0
83
+ 1094,1
84
+ 1868,0
85
+ 937,1
86
+ 2306,0
87
+ 890,1
88
+ 426,0
89
+ 1276,0
90
+ 1072,1
91
+ 2296,0
92
+ 1074,1
93
+ 1701,0
94
+ 211,1
95
+ 93,0
96
+ 604,0
97
+ 111,1
98
+ 1342,1
99
+ 48,1
100
+ 690,1
101
+ 1177,1
102
+ 254,1
103
+ 1798,0
104
+ 2230,0
105
+ 619,0
106
+ 545,0
107
+ 1479,0
108
+ 1185,0
109
+ 771,1
110
+ 1175,0
111
+ 1651,0
112
+ 214,0
113
+ 1907,0
114
+ 2044,0
115
+ 1465,0
116
+ 812,1
117
+ 1077,1
118
+ 791,0
119
+ 200,1
120
+ 847,0
121
+ 1339,1
122
+ 1101,1
123
+ 192,1
124
+ 1872,0
125
+ 429,0
126
+ 271,0
127
+ 2135,0
128
+ 1187,0
129
+ 1247,0
130
+ 299,0
131
+ 976,1
132
+ 831,0
133
+ 652,1
134
+ 865,1
135
+ 190,1
136
+ 515,1
137
+ 371,1
138
+ 2005,0
139
+ 632,1
140
+ 1317,1
141
+ 425,1
142
+ 76,0
143
+ 299,1
144
+ 1036,0
145
+ 1084,0
146
+ 281,0
147
+ 783,1
148
+ 1207,1
149
+ 1220,0
150
+ 237,0
151
+ 762,0
152
+ 1922,0
153
+ 670,1
154
+ 1853,0
155
+ 126,1
156
+ 2221,0
157
+ 229,1
158
+ 1080,1
159
+ 2265,0
160
+ 813,1
161
+ 1211,1
162
+ 1113,0
163
+ 1105,0
164
+ 303,1
165
+ 834,1
166
+ 1328,0
167
+ 748,0
168
+ 151,1
169
+ 1315,1
170
+ 47,0
171
+ 290,1
172
+ 229,0
173
+ 672,0
174
+ 714,1
175
+ 1375,0
176
+ 1297,1
177
+ 623,1
178
+ 352,1
179
+ 460,1
180
+ 1144,1
181
+ 1143,0
182
+ 566,0
183
+ 539,0
184
+ 1295,1
185
+ 1655,0
186
+ 939,1
187
+ 1482,0
188
+ 9,1
189
+ 374,0
190
+ 1076,1
191
+ 1208,0
192
+ 1357,0
193
+ 1389,0
194
+ 88,0
195
+ 870,1
196
+ 2247,0
197
+ 698,1
198
+ 903,1
199
+ 915,0
200
+ 1874,0
201
+ 1292,1
202
+ 59,0
203
+ 526,0
204
+ 1263,0
205
+ 638,1
206
+ 669,1
207
+ 1026,1
208
+ 890,0
209
+ 2110,0
210
+ 668,0
211
+ 1941,0
212
+ 1105,1
213
+ 738,1
214
+ 2092,0
215
+ 931,0
216
+ 1401,0
217
+ 1345,0
218
+ 929,0
219
+ 933,0
220
+ 2165,0
221
+ 781,1
222
+ 136,0
223
+ 11,0
224
+ 2362,0
225
+ 1500,0
226
+ 802,1
227
+ 1402,0
228
+ 407,1
229
+ 445,0
230
+ 1189,1
231
+ 1675,0
232
+ 848,0
233
+ 1203,0
234
+ 785,0
235
+ 1066,0
236
+ 749,0
237
+ 1318,0
238
+ 980,1
239
+ 1234,1
240
+ 698,0
241
+ 2326,0
242
+ 1502,0
243
+ 505,0
244
+ 1840,0
245
+ 2,1
246
+ 844,0
247
+ 167,1
248
+ 528,1
249
+ 11,1
250
+ 762,1
251
+ 805,0
252
+ 1747,0
253
+ 430,1
254
+ 411,1
255
+ 2169,0
256
+ 113,1
257
+ 219,1
258
+ 1404,0
259
+ 647,0
260
+ 657,0
261
+ 798,0
262
+ 1100,1
263
+ 620,1
264
+ 1121,1
265
+ 2305,0
266
+ 1896,0
267
+ 1746,0
268
+ 1710,0
269
+ 1569,0
270
+ 740,0
271
+ 1227,1
272
+ 1089,1
273
+ 525,1
274
+ 55,1
275
+ 630,0
276
+ 562,0
277
+ 1299,0
278
+ 1611,0
279
+ 747,1
280
+ 36,0
281
+ 2330,0
282
+ 653,1
283
+ 2129,0
284
+ 765,0
285
+ 56,1
286
+ 991,1
287
+ 452,1
288
+ 1221,1
289
+ 362,0
290
+ 1083,1
291
+ 460,0
292
+ 1900,0
293
+ 2301,0
294
+ 1414,0
295
+ 1017,1
296
+ 2314,0
297
+ 308,0
298
+ 1870,0
299
+ 316,1
300
+ 518,0
301
+ 349,0
302
+ 1136,0
303
+ 217,1
304
+ 2244,0
305
+ 614,1
306
+ 1844,0
307
+ 1299,1
308
+ 343,1
309
+ 959,0
310
+ 1541,0
311
+ 930,1
312
+ 1343,0
313
+ 120,1
314
+ 2280,0
315
+ 83,0
316
+ 296,1
317
+ 841,0
318
+ 7,1
319
+ 365,0
320
+ 21,0
321
+ 1205,1
322
+ 1096,0
323
+ 115,0
324
+ 936,1
325
+ 87,0
326
+ 2026,0
327
+ 442,0
328
+ 160,1
329
+ 307,1
330
+ 1156,0
331
+ 1773,0
332
+ 534,1
333
+ 570,0
334
+ 599,0
335
+ 803,1
336
+ 1494,0
337
+ 578,0
338
+ 883,0
339
+ 950,1
340
+ 2036,0
341
+ 203,1
342
+ 2100,0
343
+ 13,1
344
+ 1702,0
345
+ 1230,0
346
+ 1349,0
347
+ 274,0
348
+ 673,1
349
+ 569,1
350
+ 248,0
351
+ 2349,0
352
+ 69,1
353
+ 1432,0
354
+ 459,0
355
+ 836,1
356
+ 385,0
357
+ 1323,1
358
+ 1302,0
359
+ 1951,0
360
+ 1341,1
361
+ 291,1
362
+ 1335,1
363
+ 161,1
364
+ 2166,0
365
+ 1114,1
366
+ 252,1
367
+ 423,0
368
+ 1452,0
369
+ 1186,0
370
+ 583,1
371
+ 1889,0
372
+ 579,1
373
+ 324,0
374
+ 139,1
375
+ 1691,0
376
+ 1034,1
377
+ 342,1
378
+ 374,1
379
+ 790,1
380
+ 147,1
381
+ 748,1
382
+ 600,0
383
+ 1098,1
384
+ 1768,0
385
+ 317,0
386
+ 81,0
387
+ 1340,0
388
+ 472,1
389
+ 934,1
390
+ 822,0
391
+ 1331,1
392
+ 1035,1
393
+ 1319,0
394
+ 1243,0
395
+ 1260,1
396
+ 0,1
397
+ 1674,0
398
+ 257,1
399
+ 2334,0
400
+ 1338,1
401
+ 1405,0
402
+ 1928,0
403
+ 1620,0
404
+ 715,0
405
+ 2182,0
406
+ 1595,0
407
+ 1216,1
408
+ 453,1
409
+ 1308,1
410
+ 624,0
411
+ 1051,0
412
+ 71,0
413
+ 22,1
414
+ 1527,0
415
+ 1867,0
416
+ 282,0
417
+ 338,1
418
+ 320,1
419
+ 10,1
420
+ 41,1
421
+ 961,1
422
+ 1728,0
423
+ 561,0
424
+ 744,1
425
+ 535,0
426
+ 1047,1
427
+ 575,1
428
+ 1303,0
429
+ 168,1
430
+ 1420,0
431
+ 1665,0
432
+ 1515,0
433
+ 1311,1
434
+ 2359,0
435
+ 1351,0
436
+ 788,1
437
+ 1274,1
438
+ 684,1
439
+ 1736,0
440
+ 674,0
441
+ 1686,0
442
+ 164,0
443
+ 1132,0
444
+ 621,0
445
+ 109,0
446
+ 6,1
447
+ 373,1
448
+ 1008,1
449
+ 1929,0
450
+ 1287,0
451
+ 568,0
452
+ 652,0
453
+ 4,0
454
+ 172,1
455
+ 312,1
456
+ 2376,0
457
+ 85,0
458
+ 300,0
459
+ 1075,1
460
+ 787,0
461
+ 14,1
462
+ 489,1
463
+ 407,0
464
+ 369,1
465
+ 275,0
466
+ 2055,0
467
+ 1103,0
468
+ 314,1
469
+ 487,1
470
+ 135,1
471
+ 134,1
472
+ 1122,0
473
+ 1286,0
474
+ 2033,0
475
+ 866,1
476
+ 478,1
477
+ 722,1
478
+ 792,0
479
+ 95,0
480
+ 454,0
481
+ 313,1
482
+ 597,0
483
+ 343,0
484
+ 541,1
485
+ 2346,0
486
+ 2212,0
487
+ 349,1
488
+ 1323,0
489
+ 104,0
490
+ 2070,0
491
+ 808,0
492
+ 815,1
493
+ 1990,0
494
+ 1583,0
495
+ 201,1
496
+ 1050,1
497
+ 834,0
498
+ 218,0
499
+ 602,1
500
+ 2194,0
501
+ 580,0
502
+ 100,1
503
+ 1488,0
504
+ 1265,1
505
+ 850,1
506
+ 46,1
507
+ 231,0
508
+ 2076,0
509
+ 2151,0
510
+ 125,1
511
+ 1071,0
512
+ 419,1
513
+ 1334,0
514
+ 1344,1
515
+ 775,1
516
+ 1152,0
517
+ 709,1
518
+ 1204,0
519
+ 1164,0
520
+ 609,1
521
+ 419,0
522
+ 944,0
523
+ 2303,0
524
+ 1297,0
525
+ 1799,0
526
+ 526,1
527
+ 353,0
528
+ 2302,0
529
+ 889,1
530
+ 1980,0
531
+ 1238,0
532
+ 225,1
533
+ 468,1
534
+ 1060,1
535
+ 287,1
536
+ 1765,0
537
+ 1629,0
538
+ 559,0
539
+ 642,0
540
+ 2001,0
541
+ 740,1
542
+ 463,1
543
+ 1198,1
544
+ 1680,0
545
+ 210,0
546
+ 1117,0
547
+ 1992,0
548
+ 1189,0
549
+ 1269,0
550
+ 728,1
551
+ 297,1
552
+ 239,1
553
+ 2174,0
554
+ 159,1
555
+ 879,1
556
+ 2127,0
557
+ 1384,0
558
+ 1800,0
559
+ 72,1
560
+ 975,1
561
+ 272,0
562
+ 339,1
563
+ 1372,0
564
+ 1307,1
565
+ 1133,0
566
+ 691,1
567
+ 731,0
568
+ 74,1
569
+ 2266,0
570
+ 158,0
571
+ 790,0
572
+ 938,1
573
+ 1559,0
574
+ 491,1
575
+ 473,1
576
+ 1045,1
577
+ 732,1
578
+ 1091,1
579
+ 536,0
580
+ 1301,0
581
+ 2019,0
582
+ 1182,0
583
+ 661,1
584
+ 1071,1
585
+ 189,0
586
+ 501,0
587
+ 745,1
588
+ 1125,1
589
+ 1704,0
590
+ 341,1
591
+ 815,0
592
+ 233,1
593
+ 840,1
594
+ 809,0
595
+ 1120,1
596
+ 589,1
597
+ 1192,0
598
+ 1673,0
599
+ 2197,0
600
+ 369,0
601
+ 767,0
602
+ 687,0
603
+ 241,0
604
+ 402,0
605
+ 651,1
606
+ 618,0
607
+ 963,1
608
+ 551,0
609
+ 671,1
610
+ 915,1
611
+ 597,1
612
+ 1314,1
613
+ 587,0
614
+ 1203,1
615
+ 152,1
616
+ 994,1
617
+ 767,1
618
+ 2243,0
619
+ 295,1
620
+ 364,1
621
+ 2297,0
622
+ 1995,0
623
+ 702,1
624
+ 1832,0
625
+ 317,1
626
+ 1120,0
627
+ 1086,0
628
+ 673,0
629
+ 777,0
630
+ 1423,0
631
+ 1473,0
632
+ 1137,1
633
+ 1376,0
634
+ 754,1
635
+ 284,1
636
+ 1312,1
637
+ 846,0
638
+ 669,0
639
+ 832,0
640
+ 884,1
641
+ 1316,1
642
+ 34,0
643
+ 1249,0
644
+ 2210,0
645
+ 321,1
646
+ 1251,1
647
+ 268,0
648
+ 41,0
649
+ 464,0
650
+ 69,0
651
+ 992,0
652
+ 306,1
653
+ 213,0
654
+ 108,0
655
+ 215,0
656
+ 512,1
657
+ 874,1
658
+ 846,1
659
+ 28,1
660
+ 1604,0
661
+ 1520,0
662
+ 499,0
663
+ 449,0
664
+ 116,0
665
+ 1040,1
666
+ 1092,1
667
+ 1761,0
668
+ 1484,0
669
+ 760,0
670
+ 1140,1
671
+ 395,1
672
+ 1142,0
673
+ 1586,0
674
+ 1092,0
675
+ 300,1
676
+ 2192,0
677
+ 830,1
678
+ 1748,0
679
+ 269,1
680
+ 1294,1
681
+ 1231,0
682
+ 701,1
683
+ 326,1
684
+ 1439,0
685
+ 928,1
686
+ 398,0
687
+ 145,0
688
+ 1670,0
689
+ 428,1
690
+ 2253,0
691
+ 1009,1
692
+ 986,0
693
+ 270,0
694
+ 1382,0
695
+ 924,0
696
+ 204,1
697
+ 540,0
698
+ 1491,0
699
+ 1522,0
700
+ 1113,1
701
+ 199,1
702
+ 298,0
703
+ 838,0
704
+ 2366,0
705
+ 302,0
706
+ 982,0
707
+ 1123,0
708
+ 1183,1
709
+ 60,0
710
+ 1133,1
711
+ 1061,0
712
+ 664,1
713
+ 712,1
714
+ 1813,0
715
+ 820,0
716
+ 1743,0
717
+ 977,0
718
+ 909,0
719
+ 689,1
720
+ 759,0
721
+ 2375,0
722
+ 761,0
723
+ 2128,0
724
+ 309,1
725
+ 120,0
726
+ 1314,0
727
+ 988,0
728
+ 1669,0
729
+ 1162,1
730
+ 1272,1
731
+ 1007,0
732
+ 1959,0
733
+ 1057,1
734
+ 1707,0
735
+ 580,1
736
+ 1387,0
737
+ 1066,1
738
+ 557,1
739
+ 499,1
740
+ 1440,0
741
+ 854,1
742
+ 252,0
743
+ 1135,1
744
+ 372,0
745
+ 2189,0
746
+ 1184,1
747
+ 794,0
748
+ 198,1
749
+ 1296,0
750
+ 1821,0
751
+ 330,1
752
+ 1218,1
753
+ 826,0
754
+ 1336,0
755
+ 123,1
756
+ 2010,0
757
+ 1270,0
758
+ 1413,0
759
+ 358,0
760
+ 665,1
761
+ 396,0
762
+ 367,1
763
+ 752,1
764
+ 1214,0
765
+ 416,1
766
+ 1789,0
767
+ 2067,0
768
+ 1193,1
769
+ 322,0
770
+ 1016,1
771
+ 2154,0
772
+ 1767,0
773
+ 494,1
774
+ 19,1
775
+ 332,1
776
+ 35,0
777
+ 1632,0
778
+ 376,1
779
+ 66,1
780
+ 144,1
781
+ 72,0
782
+ 959,1
783
+ 649,1
784
+ 658,0
785
+ 263,0
786
+ 2261,0
787
+ 946,1
788
+ 1330,0
789
+ 1042,1
790
+ 1588,0
791
+ 1518,0
792
+ 2248,0
793
+ 1230,1
794
+ 1849,0
795
+ 1817,0
796
+ 1699,0
797
+ 2358,0
798
+ 1185,1
799
+ 769,0
800
+ 1232,1
801
+ 484,1
802
+ 40,1
803
+ 1277,0
804
+ 829,1
805
+ 637,1
806
+ 1190,0
807
+ 483,1
808
+ 1667,0
809
+ 9,0
810
+ 1759,0
811
+ 368,0
812
+ 2288,0
813
+ 2118,0
814
+ 721,1
815
+ 138,0
816
+ 117,1
817
+ 103,1
818
+ 1035,0
819
+ 1852,0
820
+ 373,0
821
+ 2381,0
822
+ 2204,0
823
+ 1477,0
824
+ 48,0
825
+ 533,0
826
+ 1194,0
827
+ 1055,0
828
+ 124,0
829
+ 1288,1
830
+ 1829,0
831
+ 400,0
832
+ 1197,0
833
+ 8,0
834
+ 59,1
835
+ 1319,1
836
+ 717,1
837
+ 994,0
838
+ 410,1
839
+ 255,0
840
+ 1860,0
841
+ 506,0
842
+ 2152,0
843
+ 1769,0
844
+ 598,0
845
+ 5,0
846
+ 2245,0
847
+ 755,0
848
+ 1331,0
849
+ 188,1
850
+ 1447,0
851
+ 2201,0
852
+ 1069,1
853
+ 289,1
854
+ 1339,0
855
+ 206,1
856
+ 1890,0
857
+ 1078,1
858
+ 504,1
859
+ 337,1
860
+ 1348,0
861
+ 1049,1
862
+ 603,1
863
+ 1912,0
864
+ 979,0
865
+ 477,1
866
+ 222,0
867
+ 972,1
868
+ 981,1
869
+ 795,1
870
+ 171,0
871
+ 16,1
872
+ 594,1
873
+ 121,1
874
+ 202,1
875
+ 2336,0
876
+ 1070,1
877
+ 1265,0
878
+ 604,1
879
+ 646,1
880
+ 2347,0
881
+ 2094,0
882
+ 560,1
883
+ 496,0
884
+ 239,0
885
+ 1456,0
886
+ 242,0
887
+ 897,0
888
+ 448,0
889
+ 2061,0
890
+ 679,0
891
+ 532,0
892
+ 1727,0
893
+ 1286,1
894
+ 653,0
895
+ 1850,0
896
+ 1193,0
897
+ 854,0
898
+ 771,0
899
+ 606,1
900
+ 1396,0
901
+ 641,0
902
+ 493,1
903
+ 960,0
904
+ 2034,0
905
+ 651,0
906
+ 451,0
907
+ 2383,0
908
+ 1263,1
909
+ 1165,1
910
+ 1153,1
911
+ 84,0
912
+ 996,0
913
+ 1236,1
914
+ 945,1
915
+ 976,0
916
+ 1094,0
917
+ 877,0
918
+ 2025,0
919
+ 1039,1
920
+ 857,0
921
+ 1267,1
922
+ 1280,1
923
+ 1088,0
924
+ 1116,0
925
+ 157,1
926
+ 555,1
927
+ 962,0
928
+ 285,0
929
+ 1197,1
930
+ 246,0
931
+ 263,1
932
+ 983,1
933
+ 619,1
934
+ 968,1
935
+ 1309,0
936
+ 484,0
937
+ 869,0
938
+ 643,0
939
+ 1322,1
940
+ 979,1
941
+ 756,1
942
+ 1252,0
943
+ 734,0
944
+ 870,0
945
+ 133,0
946
+ 1010,0
947
+ 223,1
948
+ 1671,0
949
+ 93,1
950
+ 1942,0
951
+ 969,0
952
+ 687,1
953
+ 1099,1
954
+ 1548,0
955
+ 860,0
956
+ 511,1
957
+ 554,1
958
+ 447,1
959
+ 620,0
960
+ 640,1
961
+ 587,1
962
+ 642,1
963
+ 1169,1
964
+ 258,0
965
+ 1471,0
966
+ 882,1
967
+ 930,0
968
+ 2185,0
969
+ 843,1
970
+ 413,1
971
+ 632,0
972
+ 640,0
973
+ 1906,0
974
+ 8,1
975
+ 1327,1
976
+ 1199,1
977
+ 50,1
978
+ 234,0
979
+ 584,1
980
+ 1213,0
981
+ 328,0
982
+ 1076,0
983
+ 1991,0
984
+ 794,1
985
+ 568,1
986
+ 565,0
987
+ 1555,0
988
+ 92,1
989
+ 920,1
990
+ 1805,0
991
+ 923,1
992
+ 324,1
993
+ 346,0
994
+ 856,0
995
+ 12,0
996
+ 1079,1
997
+ 274,1
998
+ 2208,0
999
+ 814,1
1000
+ 1792,0
1001
+ 1054,1
1002
+ 2155,0
1003
+ 2339,0
1004
+ 881,1
1005
+ 1033,0
1006
+ 363,0
1007
+ 393,0
1008
+ 151,0
1009
+ 1621,0
1010
+ 82,0
1011
+ 440,1
1012
+ 871,1
1013
+ 1244,0
1014
+ 787,1
1015
+ 452,0
1016
+ 2126,0
1017
+ 1700,0
1018
+ 743,0
1019
+ 100,0
1020
+ 1315,0
1021
+ 381,1
1022
+ 954,1
1023
+ 395,0
1024
+ 1790,0
1025
+ 19,0
1026
+ 1139,1
1027
+ 861,1
1028
+ 294,1
1029
+ 1780,0
1030
+ 119,0
1031
+ 1154,1
1032
+ 137,1
1033
+ 2254,0
1034
+ 1172,1
1035
+ 207,1
1036
+ 563,1
1037
+ 43,0
1038
+ 1140,0
1039
+ 112,0
1040
+ 2279,0
1041
+ 1178,1
1042
+ 881,0
1043
+ 1597,0
1044
+ 596,0
1045
+ 283,1
1046
+ 680,0
1047
+ 322,1
1048
+ 537,0
1049
+ 1097,1
1050
+ 564,1
1051
+ 558,1
1052
+ 146,0
1053
+ 210,1
1054
+ 968,0
1055
+ 282,1
1056
+ 281,1
1057
+ 29,1
1058
+ 525,0
1059
+ 538,1
1060
+ 339,0
1061
+ 2145,0
1062
+ 758,0
1063
+ 518,1
1064
+ 1062,0
1065
+ 591,1
1066
+ 177,1
1067
+ 1723,0
1068
+ 152,0
1069
+ 1001,1
1070
+ 2158,0
1071
+ 345,1
1072
+ 659,1
1073
+ 81,1
1074
+ 325,0
1075
+ 628,1
1076
+ 402,1
1077
+ 1356,0
1078
+ 98,0
1079
+ 1056,1
1080
+ 1210,1
1081
+ 1148,0
1082
+ 3,1
1083
+ 686,1
1084
+ 1782,0
1085
+ 56,0
1086
+ 522,1
1087
+ 1030,0
1088
+ 1791,0
1089
+ 971,1
1090
+ 32,0
1091
+ 978,1
1092
+ 588,1
1093
+ 678,1
1094
+ 2292,0
1095
+ 386,0
1096
+ 737,0
1097
+ 1925,0
1098
+ 609,0
1099
+ 364,0
1100
+ 763,0
1101
+ 645,0
1102
+ 1847,0
1103
+ 1104,1
1104
+ 1461,0
1105
+ 137,0
1106
+ 240,0
1107
+ 674,1
1108
+ 601,0
1109
+ 2088,0
1110
+ 155,1
1111
+ 938,0
1112
+ 907,0
1113
+ 74,0
1114
+ 911,1
1115
+ 974,0
1116
+ 742,1
1117
+ 983,0
1118
+ 940,1
1119
+ 347,1
1120
+ 492,1
1121
+ 80,0
1122
+ 1231,1
1123
+ 1724,0
1124
+ 782,1
1125
+ 1495,0
1126
+ 2200,0
1127
+ 51,0
1128
+ 751,1
1129
+ 191,0
1130
+ 1422,0
1131
+ 496,1
1132
+ 974,1
1133
+ 805,1
1134
+ 1657,0
1135
+ 433,0
1136
+ 438,0
1137
+ 285,1
1138
+ 1897,0
1139
+ 1175,1
1140
+ 1320,0
1141
+ 2331,0
1142
+ 1546,0
1143
+ 780,1
1144
+ 820,1
1145
+ 1266,0
1146
+ 977,1
1147
+ 1496,0
1148
+ 1763,0
1149
+ 1842,0
1150
+ 1144,0
1151
+ 1547,0
1152
+ 228,0
1153
+ 1708,0
1154
+ 577,0
1155
+ 401,1
1156
+ 418,1
1157
+ 511,0
1158
+ 590,1
1159
+ 1013,1
1160
+ 567,0
1161
+ 470,1
1162
+ 227,1
1163
+ 436,1
1164
+ 1047,0
1165
+ 822,1
1166
+ 117,0
1167
+ 29,0
1168
+ 997,0
1169
+ 1093,0
1170
+ 1090,1
1171
+ 624,1
1172
+ 164,1
1173
+ 2378,0
1174
+ 1084,1
1175
+ 75,1
1176
+ 793,1
1177
+ 910,0
1178
+ 676,0
1179
+ 908,0
1180
+ 2029,0
1181
+ 92,0
1182
+ 38,0
1183
+ 505,1
1184
+ 409,1
1185
+ 1388,0
1186
+ 2122,0
1187
+ 1997,0
1188
+ 441,0
1189
+ 134,0
1190
+ 107,0
1191
+ 1026,0
1192
+ 236,1
1193
+ 261,0
1194
+ 1602,0
1195
+ 2195,0
1196
+ 1937,0
1197
+ 2121,0
1198
+ 627,0
1199
+ 697,0
1200
+ 1340,1
1201
+ 516,1
1202
+ 467,1
1203
+ 1173,0
1204
+ 1448,0
1205
+ 2065,0
1206
+ 456,0
1207
+ 1474,0
1208
+ 2322,0
1209
+ 1274,0
1210
+ 1784,0
1211
+ 773,1
1212
+ 818,1
1213
+ 153,1
1214
+ 53,0
1215
+ 1109,1
1216
+ 2350,0
1217
+ 1578,0
1218
+ 1858,0
1219
+ 321,0
1220
+ 986,1
1221
+ 180,0
1222
+ 1115,0
1223
+ 1873,0
1224
+ 1869,0
1225
+ 416,0
1226
+ 187,0
1227
+ 819,1
1228
+ 220,0
1229
+ 156,0
1230
+ 1827,0
1231
+ 685,0
1232
+ 1255,1
1233
+ 952,1
1234
+ 2341,0
1235
+ 595,1
1236
+ 509,0
1237
+ 251,0
1238
+ 2172,0
1239
+ 1934,0
1240
+ 118,1
1241
+ 521,0
1242
+ 1237,1
1243
+ 807,0
1244
+ 345,0
1245
+ 457,1
1246
+ 955,1
1247
+ 633,0
1248
+ 330,0
1249
+ 423,1
1250
+ 21,1
1251
+ 875,1
1252
+ 280,1
1253
+ 1165,0
1254
+ 70,0
1255
+ 404,0
1256
+ 389,1
1257
+ 1012,1
1258
+ 929,1
1259
+ 1378,0
1260
+ 1025,1
1261
+ 1453,0
1262
+ 730,0
1263
+ 1342,0
1264
+ 486,1
1265
+ 1540,0
1266
+ 528,0
1267
+ 1146,0
1268
+ 1291,1
1269
+ 148,1
1270
+ 778,0
1271
+ 1151,0
1272
+ 675,0
1273
+ 823,0
1274
+ 2187,0
1275
+ 498,0
1276
+ 2134,0
1277
+ 1004,0
1278
+ 1248,1
1279
+ 1273,1
1280
+ 912,0
1281
+ 984,1
1282
+ 615,1
1283
+ 1013,0
1284
+ 1760,0
1285
+ 469,1
1286
+ 2325,0
1287
+ 1311,0
1288
+ 958,0
1289
+ 194,0
1290
+ 811,1
1291
+ 736,0
1292
+ 1734,0
1293
+ 842,0
1294
+ 426,1
1295
+ 1163,0
1296
+ 845,0
1297
+ 1660,0
1298
+ 1866,0
1299
+ 629,1
1300
+ 1238,1
1301
+ 849,1
1302
+ 467,0
1303
+ 741,0
1304
+ 1324,1
1305
+ 749,1
1306
+ 247,1
1307
+ 1668,0
1308
+ 25,1
1309
+ 185,0
1310
+ 1584,0
1311
+ 1338,0
1312
+ 223,0
1313
+ 657,1
1314
+ 440,0
1315
+ 558,0
1316
+ 1063,0
1317
+ 792,1
1318
+ 1772,0
1319
+ 121,0
1320
+ 727,0
1321
+ 527,1
1322
+ 1164,1
1323
+ 764,0
1324
+ 278,0
1325
+ 773,0
1326
+ 1168,1
1327
+ 1859,0
1328
+ 1046,1
1329
+ 926,0
1330
+ 1019,0
1331
+ 52,0
1332
+ 1044,1
1333
+ 1136,1
1334
+ 1910,0
1335
+ 277,0
1336
+ 305,0
1337
+ 523,0
1338
+ 922,1
1339
+ 2304,0
1340
+ 63,0
1341
+ 211,0
1342
+ 1549,0
1343
+ 2020,0
1344
+ 1394,0
1345
+ 396,1
1346
+ 919,0
1347
+ 50,0
1348
+ 1450,0
1349
+ 831,1
1350
+ 240,1
1351
+ 1310,0
1352
+ 958,1
1353
+ 58,0
1354
+ 856,1
1355
+ 478,0
1356
+ 636,0
1357
+ 1137,0
1358
+ 357,1
1359
+ 1081,0
1360
+ 953,1
1361
+ 1696,0
1362
+ 1806,0
1363
+ 279,1
1364
+ 713,1
1365
+ 2364,0
1366
+ 114,0
1367
+ 190,0
1368
+ 371,0
1369
+ 221,0
1370
+ 720,1
1371
+ 1508,0
1372
+ 310,0
1373
+ 903,0
1374
+ 795,0
1375
+ 1102,0
1376
+ 2272,0
1377
+ 479,1
1378
+ 1186,1
1379
+ 585,1
1380
+ 64,1
1381
+ 967,1
1382
+ 1434,0
1383
+ 1579,0
1384
+ 1111,1
1385
+ 616,1
1386
+ 1777,0
1387
+ 993,0
1388
+ 873,0
1389
+ 774,1
1390
+ 1157,0
1391
+ 824,1
1392
+ 1919,0
1393
+ 608,0
1394
+ 917,0
1395
+ 1037,0
1396
+ 1324,0
1397
+ 379,0
1398
+ 1368,0
1399
+ 635,1
1400
+ 30,1
1401
+ 1718,0
1402
+ 231,1
1403
+ 1103,1
1404
+ 4,1
1405
+ 417,1
1406
+ 2043,0
1407
+ 68,1
1408
+ 1347,1
1409
+ 1836,0
1410
+ 481,1
1411
+ 1596,0
1412
+ 1468,0
1413
+ 807,1
1414
+ 315,1
1415
+ 1807,0
1416
+ 599,1
1417
+ 1257,1
1418
+ 797,1
1419
+ 412,0
1420
+ 246,1
1421
+ 943,1
1422
+ 1262,0
1423
+ 124,1
1424
+ 724,1
1425
+ 1463,0
1426
+ 260,0
1427
+ 562,1
1428
+ 799,0
1429
+ 768,1
1430
+ 487,0
1431
+ 637,0
1432
+ 193,0
1433
+ 1018,1
1434
+ 1865,0
1435
+ 273,0
1436
+ 579,0
1437
+ 209,0
1438
+ 704,0
1439
+ 1064,0
1440
+ 808,1
1441
+ 1083,0
1442
+ 226,0
1443
+ 398,1
1444
+ 524,1
1445
+ 2337,0
1446
+ 891,0
1447
+ 1272,0
1448
+ 894,0
1449
+ 517,1
1450
+ 1293,0
1451
+ 705,0
1452
+ 630,1
1453
+ 862,0
1454
+ 1162,0
1455
+ 33,1
1456
+ 183,1
1457
+ 2176,0
1458
+ 277,1
1459
+ 1705,0
1460
+ 1141,0
1461
+ 1616,0
1462
+ 914,1
1463
+ 1221,0
1464
+ 1936,0
1465
+ 435,1
1466
+ 865,0
1467
+ 276,1
1468
+ 1891,0
1469
+ 1268,1
1470
+ 1226,1
1471
+ 304,0
1472
+ 40,0
1473
+ 377,1
1474
+ 1091,0
1475
+ 57,0
1476
+ 200,0
1477
+ 1267,0
1478
+ 789,1
1479
+ 206,0
1480
+ 34,1
1481
+ 972,0
1482
+ 2353,0
1483
+ 232,1
1484
+ 1161,1
1485
+ 535,1
1486
+ 1637,0
1487
+ 1127,0
1488
+ 2223,0
1489
+ 242,1
1490
+ 46,0
1491
+ 1253,1
1492
+ 22,0
1493
+ 357,0
1494
+ 439,0
1495
+ 1030,1
1496
+ 876,0
1497
+ 1462,0
1498
+ 153,0
1499
+ 235,0
1500
+ 1253,0
1501
+ 2214,0
1502
+ 251,1
1503
+ 1561,0
1504
+ 714,0
1505
+ 677,1
1506
+ 311,0
1507
+ 1256,0
1508
+ 922,0
1509
+ 1087,1
1510
+ 353,1
1511
+ 519,0
1512
+ 1489,0
1513
+ 1015,1
1514
+ 1334,1
1515
+ 1278,1
1516
+ 1721,0
1517
+ 2098,0
1518
+ 221,1
1519
+ 931,1
1520
+ 1557,0
1521
+ 2258,0
1522
+ 1893,0
1523
+ 905,1
1524
+ 504,0
1525
+ 1181,0
1526
+ 770,1
1527
+ 80,1
1528
+ 2096,0
1529
+ 1725,0
1530
+ 1022,1
1531
+ 1127,1
1532
+ 1379,0
1533
+ 508,1
1534
+ 1598,0
1535
+ 2109,0
1536
+ 1720,0
1537
+ 1730,0
1538
+ 531,0
1539
+ 15,1
1540
+ 1418,0
1541
+ 273,1
1542
+ 1861,0
1543
+ 135,0
1544
+ 1475,0
1545
+ 699,0
1546
+ 553,0
1547
+ 852,1
1548
+ 725,1
1549
+ 500,1
1550
+ 1828,0
1551
+ 692,0
1552
+ 1261,1
1553
+ 524,0
1554
+ 1179,0
1555
+ 1940,0
1556
+ 540,1
1557
+ 1275,0
1558
+ 2227,0
1559
+ 90,1
1560
+ 320,0
1561
+ 706,1
1562
+ 362,1
1563
+ 173,1
1564
+ 634,0
1565
+ 1932,0
1566
+ 965,1
1567
+ 365,1
1568
+ 302,1
1569
+ 458,0
1570
+ 105,1
1571
+ 98,1
1572
+ 1571,0
1573
+ 127,0
1574
+ 2183,0
1575
+ 1694,0
1576
+ 682,0
1577
+ 387,1
1578
+ 264,1
1579
+ 482,1
1580
+ 551,1
1581
+ 1108,0
1582
+ 745,0
1583
+ 937,0
1584
+ 2116,0
1585
+ 313,0
1586
+ 168,0
1587
+ 719,0
1588
+ 1753,0
1589
+ 804,1
1590
+ 1947,0
1591
+ 1000,0
1592
+ 1131,1
1593
+ 935,0
1594
+ 1863,0
1595
+ 1337,0
1596
+ 530,0
1597
+ 20,1
1598
+ 1313,1
1599
+ 660,0
1600
+ 6,0
1601
+ 114,1
1602
+ 443,0
1603
+ 2354,0
1604
+ 1085,0
1605
+ 1717,0
1606
+ 692,1
1607
+ 1303,1
1608
+ 906,0
1609
+ 1261,0
1610
+ 552,1
1611
+ 394,0
1612
+ 1171,0
1613
+ 2072,0
1614
+ 613,1
1615
+ 832,1
1616
+ 1962,0
1617
+ 1531,0
1618
+ 1179,1
1619
+ 1888,0
1620
+ 663,0
1621
+ 1871,0
1622
+ 267,0
1623
+ 329,0
1624
+ 1430,0
1625
+ 2164,0
1626
+ 1733,0
1627
+ 1069,0
1628
+ 465,1
1629
+ 1095,1
1630
+ 2269,0
1631
+ 658,1
1632
+ 1110,0
1633
+ 1642,0
1634
+ 813,0
1635
+ 106,1
1636
+ 111,0
1637
+ 471,0
1638
+ 292,1
1639
+ 358,1
1640
+ 189,1
1641
+ 655,1
1642
+ 2310,0
1643
+ 1300,0
1644
+ 414,1
1645
+ 1098,0
1646
+ 403,1
1647
+ 1572,0
1648
+ 1317,0
1649
+ 35,1
1650
+ 876,1
1651
+ 1880,0
1652
+ 1970,0
1653
+ 896,1
1654
+ 2015,0
1655
+ 160,0
1656
+ 1243,1
1657
+ 1714,0
1658
+ 2294,0
1659
+ 694,0
1660
+ 1228,1
1661
+ 97,0
1662
+ 165,0
1663
+ 722,0
1664
+ 521,1
1665
+ 1222,0
1666
+ 469,0
1667
+ 811,0
1668
+ 971,0
1669
+ 115,1
1670
+ 248,1
1671
+ 363,1
1672
+ 79,0
1673
+ 1259,0
1674
+ 1505,0
1675
+ 835,1
1676
+ 512,0
1677
+ 243,0
1678
+ 564,0
1679
+ 196,1
1680
+ 830,0
1681
+ 1235,0
1682
+ 1894,0
1683
+ 508,0
1684
+ 15,0
1685
+ 89,1
1686
+ 5,1
1687
+ 1344,0
1688
+ 1169,0
1689
+ 1226,0
1690
+ 948,1
1691
+ 1438,0
1692
+ 383,0
1693
+ 2379,0
1694
+ 1345,1
1695
+ 1271,0
1696
+ 1538,0
1697
+ 2267,0
1698
+ 729,1
1699
+ 185,1
1700
+ 1729,0
1701
+ 763,1
1702
+ 701,0
1703
+ 633,1
1704
+ 1381,0
1705
+ 932,1
1706
+ 1576,0
1707
+ 1166,0
1708
+ 162,1
1709
+ 234,1
1710
+ 949,0
1711
+ 610,0
1712
+ 2097,0
1713
+ 1499,0
1714
+ 1464,0
1715
+ 1028,0
1716
+ 389,0
1717
+ 904,1
1718
+ 1467,0
1719
+ 1406,0
1720
+ 1073,0
1721
+ 1511,0
1722
+ 1436,0
1723
+ 999,1
1724
+ 503,1
1725
+ 574,1
1726
+ 706,0
1727
+ 182,1
1728
+ 1138,1
1729
+ 708,1
1730
+ 503,0
1731
+ 946,0
1732
+ 2309,0
1733
+ 372,1
1734
+ 224,1
1735
+ 872,1
1736
+ 566,1
1737
+ 1021,1
1738
+ 814,0
1739
+ 571,1
1740
+ 785,1
1741
+ 73,1
1742
+ 719,1
1743
+ 1843,0
1744
+ 288,1
1745
+ 1614,0
1746
+ 1112,0
1747
+ 1017,0
1748
+ 1093,1
1749
+ 2251,0
1750
+ 54,0
1751
+ 631,0
1752
+ 662,0
1753
+ 385,1
1754
+ 742,0
1755
+ 178,1
1756
+ 796,1
1757
+ 1681,0
1758
+ 449,1
1759
+ 2233,0
1760
+ 1795,0
1761
+ 2114,0
1762
+ 801,0
1763
+ 887,0
1764
+ 142,1
1765
+ 906,1
1766
+ 66,0
1767
+ 406,1
1768
+ 1690,0
1769
+ 829,0
1770
+ 73,0
1771
+ 238,1
1772
+ 1856,0
1773
+ 176,0
1774
+ 149,0
1775
+ 1216,0
1776
+ 908,1
1777
+ 951,1
1778
+ 927,0
1779
+ 1513,0
1780
+ 1712,0
1781
+ 702,0
1782
+ 1129,0
1783
+ 1256,1
1784
+ 919,1
1785
+ 1884,0
1786
+ 311,1
1787
+ 36,1
1788
+ 1794,0
1789
+ 2093,0
1790
+ 82,1
1791
+ 1018,0
1792
+ 2027,0
1793
+ 1065,1
1794
+ 1639,0
1795
+ 809,1
1796
+ 1318,1
1797
+ 90,0
1798
+ 1074,0
1799
+ 2342,0
1800
+ 1305,1
1801
+ 947,1
1802
+ 1168,0
1803
+ 67,1
1804
+ 270,1
1805
+ 388,0
1806
+ 1306,1
1807
+ 1040,0
1808
+ 352,0
1809
+ 453,0
1810
+ 1545,0
1811
+ 2207,0
1812
+ 645,1
1813
+ 1046,0
1814
+ 857,1
1815
+ 644,1
1816
+ 1519,0
1817
+ 513,0
1818
+ 2173,0
1819
+ 268,1
1820
+ 718,1
1821
+ 109,1
1822
+ 693,0
1823
+ 759,1
1824
+ 997,1
1825
+ 1285,1
1826
+ 1068,1
1827
+ 2217,0
1828
+ 1289,0
1829
+ 1079,0
1830
+ 789,0
1831
+ 45,1
1832
+ 867,0
1833
+ 1446,0
1834
+ 1581,0
1835
+ 529,1
1836
+ 2106,0
1837
+ 1735,0
1838
+ 2078,0
1839
+ 879,0
1840
+ 501,1
1841
+ 1240,0
1842
+ 947,0
1843
+ 1052,0
1844
+ 2278,0
1845
+ 2368,0
1846
+ 1787,0
1847
+ 1202,0
1848
+ 746,0
1849
+ 1039,0
1850
+ 781,0
1851
+ 303,0
1852
+ 1214,1
1853
+ 573,1
1854
+ 391,0
1855
+ 735,0
1856
+ 625,0
1857
+ 1215,1
1858
+ 1042,0
1859
+ 147,0
1860
+ 14,0
1861
+ 1294,0
1862
+ 1628,0
1863
+ 256,0
1864
+ 1080,0
1865
+ 1312,0
1866
+ 694,1
1867
+ 1213,1
1868
+ 78,0
1869
+ 286,0
1870
+ 827,1
1871
+ 894,1
1872
+ 141,0
1873
+ 1283,0
1874
+ 348,0
1875
+ 1115,1
1876
+ 1052,1
1877
+ 557,0
1878
+ 1770,0
1879
+ 1259,1
1880
+ 447,0
1881
+ 53,1
1882
+ 862,1
1883
+ 1037,1
1884
+ 775,0
1885
+ 1268,0
1886
+ 704,1
1887
+ 1347,0
1888
+ 860,1
1889
+ 2017,0
1890
+ 233,0
1891
+ 1781,0
1892
+ 1329,1
1893
+ 86,1
1894
+ 354,0
1895
+ 824,0
1896
+ 1683,0
1897
+ 31,1
1898
+ 1063,1
1899
+ 388,1
1900
+ 1886,0
1901
+ 331,1
1902
+ 359,0
1903
+ 37,1
1904
+ 1134,1
1905
+ 279,0
1906
+ 1745,0
1907
+ 375,1
1908
+ 962,1
1909
+ 422,1
1910
+ 51,1
1911
+ 2328,0
1912
+ 1269,1
1913
+ 1421,0
1914
+ 171,1
1915
+ 895,0
1916
+ 126,0
1917
+ 1804,0
1918
+ 1709,0
1919
+ 1689,0
1920
+ 309,0
1921
+ 425,0
1922
+ 1528,0
1923
+ 1341,0
1924
+ 237,1
1925
+ 399,0
1926
+ 1001,0
1927
+ 838,1
1928
+ 1779,0
1929
+ 1650,0
1930
+ 755,1
1931
+ 27,1
1932
+ 179,0
1933
+ 1000,1
1934
+ 2246,0
1935
+ 939,0
1936
+ 514,1
1937
+ 1219,1
1938
+ 868,0
1939
+ 1219,0
1940
+ 444,0
1941
+ 667,0
1942
+ 2240,0
1943
+ 538,0
1944
+ 738,0
1945
+ 774,0
1946
+ 1333,1
1947
+ 1192,1
1948
+ 1132,1
1949
+ 378,1
1950
+ 817,1
1951
+ 1108,1
1952
+ 243,1
1953
+ 2,0
1954
+ 709,0
1955
+ 626,0
1956
+ 1191,0
1957
+ 654,0
1958
+ 641,1
1959
+ 988,1
1960
+ 783,0
1961
+ 101,0
1962
+ 3,0
1963
+ 991,0
1964
+ 415,0
1965
+ 128,1
1966
+ 2360,0
1967
+ 96,1
1968
+ 476,0
1969
+ 1336,1
1970
+ 2198,0
1971
+ 1460,0
1972
+ 207,0
1973
+ 1215,0
1974
+ 1062,1
1975
+ 399,1
1976
+ 519,1
1977
+ 1776,0
1978
+ 966,1
1979
+ 253,1
1980
+ 1251,0
1981
+ 1156,1
1982
+ 1532,0
1983
+ 1139,0
1984
+ 1756,0
1985
+ 765,1
1986
+ 458,1
1987
+ 2073,0
1988
+ 1935,0
1989
+ 2235,0
1990
+ 208,1
1991
+ 514,0
1992
+ 436,0
1993
+ 607,0
1994
+ 199,0
1995
+ 379,1
1996
+ 43,1
1997
+ 756,0
1998
+ 57,1
1999
+ 583,0
2000
+ 184,1
2001
+ 885,0
2002
+ 88,1
2003
+ 1183,0
2004
+ 656,0
2005
+ 1147,1
2006
+ 52,1
2007
+ 638,0
2008
+ 733,0
2009
+ 1523,0
2010
+ 438,1
2011
+ 1,0
2012
+ 727,1
2013
+ 1411,0
2014
+ 1876,0
2015
+ 750,1
2016
+ 2079,0
2017
+ 826,1
2018
+ 1652,0
2019
+ 1786,0
2020
+ 970,1
2021
+ 1301,1
2022
+ 520,0
2023
+ 654,1
2024
+ 864,0
2025
+ 368,1
2026
+ 1400,0
2027
+ 1403,0
2028
+ 2150,0
2029
+ 1255,0
2030
+ 1367,0
2031
+ 837,0
2032
+ 485,0
2033
+ 1043,1
2034
+ 1118,1
2035
+ 1254,1
2036
+ 490,0
2037
+ 575,0
2038
+ 1808,0
2039
+ 695,0
2040
+ 712,0
2041
+ 2119,0
2042
+ 735,1
2043
+ 2148,0
2044
+ 264,0
2045
+ 1107,1
2046
+ 901,1
2047
+ 2124,0
2048
+ 893,1
2049
+ 646,0
2050
+ 611,0
2051
+ 779,1
2052
+ 1170,0
2053
+ 816,1
2054
+ 1067,1
2055
+ 31,0
2056
+ 523,1
2057
+ 648,1
2058
+ 315,0
2059
+ 639,1
2060
+ 1600,0
2061
+ 42,0
2062
+ 1217,1
2063
+ 71,1
2064
+ 1841,0
2065
+ 278,1
2066
+ 1155,1
2067
+ 148,0
2068
+ 1485,0
2069
+ 1248,0
2070
+ 544,1
2071
+ 1200,0
2072
+ 176,1
2073
+ 1056,0
2074
+ 682,1
2075
+ 1355,0
2076
+ 629,0
2077
+ 1053,0
2078
+ 739,0
2079
+ 162,0
2080
+ 713,0
2081
+ 1350,0
2082
+ 672,1
2083
+ 230,1
2084
+ 2335,0
2085
+ 1239,1
2086
+ 2216,0
2087
+ 1247,1
2088
+ 119,1
2089
+ 1809,0
2090
+ 1166,1
2091
+ 37,0
2092
+ 1280,0
2093
+ 1361,0
2094
+ 726,0
2095
+ 26,0
2096
+ 782,0
2097
+ 361,1
2098
+ 209,1
2099
+ 186,1
2100
+ 107,1
2101
+ 841,1
2102
+ 429,1
2103
+ 1195,0
2104
+ 1167,0
2105
+ 301,0
2106
+ 1911,0
2107
+ 340,1
2108
+ 1304,0
2109
+ 819,0
2110
+ 850,0
2111
+ 1601,0
2112
+ 625,1
2113
+ 1101,0
2114
+ 89,0
2115
+ 2081,0
2116
+ 235,1
2117
+ 1201,0
2118
+ 75,0
2119
+ 710,0
2120
+ 434,1
2121
+ 334,0
2122
+ 1737,0
2123
+ 1065,0
2124
+ 1284,0
2125
+ 276,0
2126
+ 1239,0
2127
+ 138,1
2128
+ 1176,0
2129
+ 392,0
2130
+ 250,0
2131
+ 836,0
2132
+ 260,1
2133
+ 443,1
2134
+ 827,0
2135
+ 725,0
2136
+ 409,0
2137
+ 605,1
2138
+ 1854,0
2139
+ 1364,0
2140
+ 78,1
2141
+ 717,0
2142
+ 329,1
2143
+ 105,0
2144
+ 262,1
2145
+ 912,1
2146
+ 173,0
2147
+ 1752,0
2148
+ 96,0
2149
+ 1146,1
2150
+ 427,1
2151
+ 350,1
2152
+ 2307,0
2153
+ 1481,0
2154
+ 1058,0
2155
+ 547,1
2156
+ 731,1
2157
+ 1321,1
2158
+ 943,0
2159
+ 1624,0
2160
+ 259,0
2161
+ 1209,0
2162
+ 1130,1
2163
+ 1302,1
2164
+ 1117,1
2165
+ 1875,0
2166
+ 555,0
2167
+ 1478,0
2168
+ 422,0
2169
+ 1145,1
2170
+ 1740,0
2171
+ 556,0
2172
+ 424,0
2173
+ 1433,0
2174
+ 2048,0
2175
+ 2051,0
2176
+ 421,1
2177
+ 1282,1
2178
+ 1966,0
2179
+ 1058,1
2180
+ 1233,1
2181
+ 474,0
2182
+ 54,1
2183
+ 169,0
2184
+ 498,1
2185
+ 1196,0
2186
+ 1617,0
2187
+ 143,0
2188
+ 1089,0
2189
+ 847,1
2190
+ 1566,0
2191
+ 681,1
2192
+ 737,1
2193
+ 383,1
2194
+ 1293,1
2195
+ 703,1
2196
+ 1878,0
2197
+ 350,0
2198
+ 1237,0
2199
+ 247,0
2200
+ 966,0
2201
+ 129,1
2202
+ 459,1
2203
+ 699,1
2204
+ 617,1
2205
+ 573,0
2206
+ 188,0
2207
+ 1469,0
2208
+ 952,0
2209
+ 99,0
2210
+ 869,1
2211
+ 1125,0
2212
+ 356,1
2213
+ 1426,0
2214
+ 1731,0
2215
+ 810,1
2216
+ 1090,0
2217
+ 981,0
2218
+ 921,1
2219
+ 289,0
2220
+ 284,0
2221
+ 690,0
2222
+ 1006,1
2223
+ 955,0
2224
+ 1114,0
2225
+ 2084,0
2226
+ 408,0
2227
+ 1258,1
2228
+ 1086,1
2229
+ 825,0
2230
+ 492,0
2231
+ 615,0
2232
+ 1831,0
2233
+ 1882,0
2234
+ 1172,0
2235
+ 961,0
2236
+ 581,1
2237
+ 554,0
2238
+ 1839,0
2239
+ 1174,0
2240
+ 772,1
2241
+ 1619,0
2242
+ 2086,0
2243
+ 754,0
2244
+ 2140,0
2245
+ 1176,1
2246
+ 244,1
2247
+ 26,1
2248
+ 729,0
2249
+ 1138,0
2250
+ 985,0
2251
+ 382,1
2252
+ 1417,0
2253
+ 1641,0
2254
+ 1585,0
2255
+ 960,1
2256
+ 411,0
2257
+ 437,0
2258
+ 1848,0
2259
+ 351,1
2260
+ 217,0
2261
+ 2108,0
2262
+ 1134,0
2263
+ 710,1
2264
+ 695,1
2265
+ 1283,1
2266
+ 2149,0
2267
+ 441,1
2268
+ 1130,0
2269
+ 1457,0
2270
+ 578,1
2271
+ 655,0
2272
+ 1281,0
2273
+ 561,1
2274
+ 506,1
2275
+ 1322,0
2276
+ 527,0
2277
+ 1562,0
2278
+ 927,1
2279
+ 1143,1
2280
+ 256,1
2281
+ 1676,0
2282
+ 500,0
2283
+ 2181,0
2284
+ 1739,0
2285
+ 970,0
2286
+ 360,0
2287
+ 76,1
2288
+ 898,0
2289
+ 1325,0
2290
+ 157,0
2291
+ 741,1
2292
+ 616,0
2293
+ 166,0
2294
+ 925,1
2295
+ 2060,0
2296
+ 226,1
2297
+ 1006,0
2298
+ 1150,1
2299
+ 1225,0
2300
+ 1023,1
2301
+ 1419,0
2302
+ 12,1
2303
+ 1410,0
2304
+ 904,0
2305
+ 1498,0
2306
+ 1741,0
2307
+ 408,1
2308
+ 1029,1
2309
+ 1846,0
2310
+ 1524,0
2311
+ 224,0
2312
+ 225,0
2313
+ 366,1
2314
+ 454,1
2315
+ 333,0
2316
+ 1020,0
2317
+ 1041,0
2318
+ 723,0
2319
+ 1654,0
2320
+ 254,0
2321
+ 49,1
2322
+ 139,0
2323
+ 63,1
2324
+ 415,1
2325
+ 128,0
2326
+ 384,1
2327
+ 1360,0
2328
+ 1775,0
2329
+ 543,1
2330
+ 204,0
2331
+ 351,0
2332
+ 1033,1
2333
+ 1480,0
2334
+ 2117,0
2335
+ 1917,0
2336
+ 920,0
2337
+ 236,0
2338
+ 464,1
2339
+ 998,1
2340
+ 1210,0
2341
+ 1504,0
2342
+ 708,0
2343
+ 1291,0
2344
+ 804,0
2345
+ 1466,0
2346
+ 786,1
2347
+ 461,0
2348
+ 1635,0
2349
+ 913,0
2350
+ 1160,0
2351
+ 750,0
2352
+ 611,1
2353
+ 861,0
2354
+ 1270,1
2355
+ 2050,0
2356
+ 186,0
2357
+ 547,0
2358
+ 911,0
2359
+ 859,1
2360
+ 297,0
2361
+ 45,0
2362
+ 1266,1
2363
+ 1003,0
2364
+ 158,1
2365
+ 1246,1
2366
+ 269,0
2367
+ 874,0
2368
+ 1264,1
2369
+ 2080,0
2370
+ 934,0
2371
+ 1112,1
2372
+ 1067,0
2373
+ 1822,0
2374
+ 213,1
2375
+ 67,0
2376
+ 1693,0
2377
+ 1014,0
2378
+ 1002,1
2379
+ 439,1
2380
+ 588,0
2381
+ 532,1
2382
+ 784,1
2383
+ 1354,0
2384
+ 839,0
2385
+ 442,1
2386
+ 413,0
2387
+ 892,1
2388
+ 1346,0
2389
+ 883,1
2390
+ 319,0
2391
+ 1587,0
2392
+ 259,1
2393
+ 2091,0
2394
+ 602,0
2395
+ 791,1
2396
+ 184,0
2397
+ 1399,0
2398
+ 178,0
2399
+ 707,1
2400
+ 375,0
2401
+ 1565,0
2402
+ 461,1
2403
+ 99,1
2404
+ 1184,0
2405
+ 2270,0
2406
+ 1327,0
2407
+ 676,1
2408
+ 212,0
2409
+ 437,1
2410
+ 858,1
2411
+ 1281,1
2412
+ 1016,0
2413
+ 1223,1
2414
+ 610,1
2415
+ 49,0
2416
+ 1313,0
2417
+ 574,0
2418
+ 928,0
2419
+ 1973,0
2420
+ 665,0
2421
+ 502,0
2422
+ 1296,1
2423
+ 1887,0
2424
+ 1149,1
2425
+ 866,0
2426
+ 65,0
2427
+ 342,0
2428
+ 967,0
2429
+ 412,1
2430
+ 1061,1
2431
+ 1803,0
2432
+ 666,1
2433
+ 325,1
2434
+ 1024,0
2435
+ 596,1
2436
+ 1816,0
2437
+ 1814,0
2438
+ 1020,1
2439
+ 2357,0
2440
+ 1287,1
2441
+ 1881,0
2442
+ 163,1
2443
+ 743,1
2444
+ 2058,0
2445
+ 1393,0
2446
+ 668,1
2447
+ 693,1
2448
+ 837,1
2449
+ 1397,0
2450
+ 294,0
2451
+ 1658,0
2452
+ 902,1
2453
+ 747,0
2454
+ 884,0
2455
+ 296,0
2456
+ 859,0
2457
+ 1021,0
2458
+ 481,0
2459
+ 1634,0
2460
+ 2011,0
2461
+ 1252,1
2462
+ 989,1
2463
+ 258,1
2464
+ 916,1
2465
+ 2168,0
2466
+ 1246,0
2467
+ 1392,0
2468
+ 788,0
2469
+ 465,0
2470
+ 288,0
2471
+ 969,1
2472
+ 776,1
2473
+ 319,1
2474
+ 1837,0
2475
+ 1362,0
2476
+ 875,0
2477
+ 593,1
2478
+ 2024,0
2479
+ 1537,0
2480
+ 323,0
2481
+ 995,1
2482
+ 567,1
2483
+ 1235,1
2484
+ 531,1
2485
+ 2008,0
2486
+ 146,1
2487
+ 331,0
2488
+ 215,1
2489
+ 218,1
2490
+ 1938,0
2491
+ 1698,0
2492
+ 993,1
2493
+ 1202,1
2494
+ 1309,1
2495
+ 266,1
2496
+ 390,0
2497
+ 1697,0
2498
+ 1153,0
2499
+ 863,1
2500
+ 716,0
2501
+ 403,0
2502
+ 125,0
2503
+ 1845,0
2504
+ 1152,1
2505
+ 1212,1
2506
+ 446,1
2507
+ 2037,0
2508
+ 1194,1
2509
+ 1236,0
2510
+ 1004,1
2511
+ 2298,0
2512
+ 797,0
2513
+ 1486,0
2514
+ 1353,0
2515
+ 1359,0
2516
+ 1335,0
2517
+ 667,1
2518
+ 1182,1
2519
+ 1329,0
2520
+ 1068,0
2521
+ 594,0
2522
+ 1109,0
2523
+ 87,1
2524
+ 1492,0
2525
+ 873,1
2526
+ 507,0
2527
+ 650,0
2528
+ 286,1
2529
+ 1104,0
2530
+ 39,0
2531
+ 858,0
2532
+ 1663,0
2533
+ 298,1
2534
+ 777,1
2535
+ 703,0
2536
+ 769,1
2537
+ 318,0
2538
+ 1245,1
2539
+ 1412,0
2540
+ 590,0
2541
+ 618,1
2542
+ 576,1
2543
+ 428,0
2544
+ 1190,1
2545
+ 2206,0
2546
+ 1369,0
2547
+ 340,0
2548
+ 1483,0
2549
+ 61,1
2550
+ 1325,1
2551
+ 572,0
2552
+ 1187,1
2553
+ 2056,0
2554
+ 1968,0
2555
+ 1225,1
2556
+ 851,0
2557
+ 1825,0
2558
+ 591,0
2559
+ 810,0
2560
+ 845,1
2561
+ 1099,0
2562
+ 744,0
2563
+ 380,0
2564
+ 2157,0
2565
+ 1507,0
2566
+ 1173,1
2567
+ 670,0
2568
+ 154,1
2569
+ 212,1
2570
+ 1931,0
2571
+ 1732,0
2572
+ 7,0
2573
+ 990,0
2574
+ 1028,1
2575
+ 1087,0
2576
+ 84,1
2577
+ 1793,0
2578
+ 1129,1
2579
+ 1126,1
2580
+ 220,1
2581
+ 733,1
2582
+ 384,0
2583
+ 1429,0
2584
+ 122,1
2585
+ 394,1
2586
+ 2103,0
2587
+ 1771,0
2588
+ 174,0
2589
+ 33,0
2590
+ 1200,1
2591
+ 310,1
2592
+ 1036,1
2593
+ 688,0
2594
+ 2133,0
2595
+ 902,0
2596
+ 779,0
2597
+ 1879,0
2598
+ 1332,0
2599
+ 17,0
2600
+ 1170,1
2601
+ 1295,0
2602
+ 159,0
2603
+ 900,1
2604
+ 1883,0
2605
+ 434,0
2606
+ 878,1
2607
+ 2035,0
2608
+ 2377,0
2609
+ 1643,0
2610
+ 2315,0
2611
+ 686,0
2612
+ 448,1
2613
+ 62,0
2614
+ 102,1
2615
+ 68,0
2616
+ 2277,0
2617
+ 0,0
2618
+ 238,0
2619
+ 1188,0
2620
+ 431,0
2621
+ 194,1
2622
+ 202,0
2623
+ 44,1
2624
+ 421,0
2625
+ 466,1
2626
+ 1493,0
2627
+ 546,0
2628
+ 1754,0
2629
+ 1627,0
2630
+ 64,0
2631
+ 482,0
2632
+ 253,0
2633
+ 2123,0
2634
+ 1470,0
2635
+ 720,0
2636
+ 2219,0
2637
+ 649,0
2638
+ 1180,1
2639
+ 305,1
2640
+ 516,0
2641
+ 132,0
2642
+ 864,1
2643
+ 140,1
2644
+ 1774,0
2645
+ 2159,0
2646
+ 165,1
2647
+ 13,0
2648
+ 1196,1
2649
+ 451,1
2650
+ 980,0
2651
+ 954,0
2652
+ 1778,0
2653
+ 480,0
2654
+ 1012,0
2655
+ 280,0
2656
+ 205,1
2657
+ 366,0
2658
+ 529,0
2659
+ 1029,0
2660
+ 560,0
2661
+ 1045,0
2662
+ 1038,1
2663
+ 1218,0
2664
+ 1284,1
2665
+ 292,0
2666
+ 1749,0
2667
+ 1811,0
2668
+ 914,0
2669
+ 2064,0
2670
+ 232,0
2671
+ 1085,1
2672
+ 480,1
2673
+ 1135,0
2674
+ 656,1
2675
+ 307,0
2676
+ 1536,0
2677
+ 728,0
2678
+ 711,0
2679
+ 897,1
2680
+ 513,1
2681
+ 1229,0
2682
+ 1111,0
2683
+ 291,0
2684
+ 1431,0
2685
+ 118,0
2686
+ 172,0
2687
+ 1161,0
2688
+ 497,1
2689
+ 2271,0
2690
+ 257,0
2691
+ 1258,0
2692
+ 998,0
2693
+ 1097,0
2694
+ 1206,1
2695
+ 1550,0
2696
+ 445,1
2697
+ 509,1
2698
+ 287,0
2699
+ 79,1
2700
+ 833,0
2701
+ 360,1
2702
+ 101,1
2703
+ 23,1
2704
+ 1370,0
2705
+ 143,1
2706
+ 1027,1
2707
+ 871,0
2708
+ 1785,0
2709
+ 55,0
2710
+ 944,1
2711
+ 880,1
2712
+ 1158,1
2713
+ 942,0
2714
+ 2178,0
2715
+ 1320,1
2716
+ 515,0
2717
+ 1088,1
2718
+ 1371,0
2719
+ 957,1
2720
+ 1191,1
2721
+ 1407,0
2722
+ 835,0
2723
+ 155,0
2724
+ 1049,0
2725
+ 987,1
2726
+ 696,0
2727
+ 2313,0
2728
+ 468,0
2729
+ 382,0
2730
+ 821,1
2731
+ 455,1
2732
+ 1124,1
2733
+ 1250,0
2734
+ 949,1
2735
+ 798,1
2736
+ 957,0
2737
+ 1041,1
2738
+ 1706,0
2739
+ 267,1
2740
+ 1444,0
2741
+ 1118,0
2742
+ 1408,0
2743
+ 77,0
2744
+ 1757,0
2745
+ 2132,0
2746
+ 495,0
2747
+ 1510,0
2748
+ 853,1
2749
+ 1222,1
2750
+ 586,0
2751
+ 614,0
2752
+ 1742,0
2753
+ 734,1
2754
+ 2142,0
2755
+ 245,0
2756
+ 877,1
2757
+ 888,0
2758
+ 1257,0
2759
+ 700,0
2760
+ 249,0
2761
+ 2370,0
2762
+ 1497,0
2763
+ 1240,1
2764
+ 1015,0
2765
+ 127,1
2766
+ 848,1
2767
+ 1167,1
2768
+ 1206,0
2769
+ 1864,0
2770
+ 941,0
2771
+ 2141,0
2772
+ 2283,0
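Both label files above share the same layout: an unnamed index column followed by winner_index, where 0 means the first party (petitioner) wins and 1 means the second party (respondent) wins, matching the docstring of generate_random_sample() further down. A minimal sketch of reading the labels back, assuming the csvs/ paths from this commit; the real read_data() in utils.py is not shown here:

import pandas as pd

# Hypothetical label loader; 0 = petitioner wins, 1 = respondent wins.
y_train = pd.read_csv("csvs/y_train.csv", index_col=0)
y_test = pd.read_csv("csvs/y_test.csv", index_col=0)
print(y_test["winner_index"].value_counts())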
dataset/task1_data.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:046121ecaa796a0d453ce75820b5b6d53d468a03b7352074029504c9f96e3c32
+ size 4611306
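The three lines above are a Git LFS pointer, not the pickle itself: only the object hash (oid) and size (4,611,306 bytes, roughly 4.6 MB) live in the repository, and the real file has to be fetched before it can be used. A minimal sketch of loading it once the LFS object has been pulled; what task1_data.pkl actually contains is not shown in this commit:

import pickle

# Assumes the LFS object behind dataset/task1_data.pkl has already been
# downloaded (e.g. by cloning with Git LFS installed).
with open("dataset/task1_data.pkl", "rb") as f:
    task1_data = pickle.load(f)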
src/deployment_utils.py ADDED
@@ -0,0 +1,607 @@
+ # global
+ from typing import Tuple, List
+ import re
+ import numpy as np
+ import pandas as pd
+
+ import tensorflow as tf
+ from tensorflow import keras
+ from keras.utils import pad_sequences
+ from keras.preprocessing.text import Tokenizer
+ from gensim.models.doc2vec import Doc2Vec
+
+ import transformers
+ from transformers import pipeline, BertTokenizer
+
+ import fasttext
+
+ # local
+ from preprocessing import Preprocessor
+ from utils import read_data
+
+
+ # read data
+ X_train, X_test, y_train, y_test = read_data()
+
+ # instantiate preprocessor object
+ preprocessor = Preprocessor()
+
+ # load models
+ doc2vec_model_embeddings = Doc2Vec.load(
+     "F:/Graduation Project/Project/models/best_doc2vec_embeddings")
+ doc2vec_model = keras.models.load_model(
+     "F:/Graduation Project/Project/models/best_doc2vec_model.h5")
+ tfidf_model = keras.models.load_model(
+     "F:/Graduation Project/Project/models/best_tfidf_model.h5")
+ cnn_model = keras.models.load_model(
+     "F:/Graduation Project/Project/models/best_cnn_model.h5")
+ glove_model = keras.models.load_model(
+     "F:/Graduation Project/Project/models/best_glove_model.h5")
+ lstm_model = keras.models.load_model(
+     "F:/Graduation Project/Project/models/best_lstm_model.h5")
+ bert_model = keras.models.load_model(
+     "F:/Graduation Project/Project/models/best_bert_model.h5", custom_objects={"TFBertModel": transformers.TFBertModel})
+ fasttext_model = fasttext.load_model(
+     "F:/Graduation Project/Project/models/best_fasttext_model.bin")
+ summarization_model = pipeline(
+     "summarization", model="facebook/bart-large-cnn")
+
+
+ def extract_case_information(case_content: str):
+     """Split a raw case string into its petitioner, respondent, and facts.
+
+     Assumes `case_content` holds three lines of the form "petitioner:...",
+     "respondent:..." and "facts:...".
+     """
+     content_list = case_content.split("\n")
+     petitioner = re.findall(r"petitioner:(.+)", content_list[0])[0]
+     respondent = re.findall(r"respondent:(.+)", content_list[1])[0]
+     facts = re.findall(r"facts:(.+)", content_list[2])[0]
+
+     return petitioner, respondent, facts
+
+
+ def generate_random_sample() -> Tuple[str, str, str, int]:
+     """
+     Fetch a random case from `X_test` for testing.
+
+     Returns:
+     --------
+     A tuple containing the following:
+     - petitioner : str
+         Contains the petitioner name.
+     - respondent : str
+         Contains the respondent name.
+     - facts : str
+         Contains the case facts.
+     - label : int
+         Represents the winning index (0 = petitioner, 1 = respondent).
+     """
+
+     random_idx = np.random.randint(low=0, high=len(X_test))
+
+     petitioner = X_test["first_party"].iloc[random_idx]
+     respondent = X_test["second_party"].iloc[random_idx]
+     facts = X_test["Facts"].iloc[random_idx]
+     label = y_test.iloc[random_idx][0]
+
+     return petitioner, respondent, facts, label
+
+
87
+ def generate_highlighted_words(facts: str, petitioner_words: List[str], respondent_words: List[str]):
88
+ """
89
+ Highlight `petitioner_words` and `respondent_words` for model
90
+ interpretation.
91
+
92
+ Parameters:
93
+ -----------
94
+ - facts : str
95
+ Facts of a specific case.
96
+ - petitioner_words : List[str]
97
+ Contains all words that model pays attention
98
+ to be a petetioner words.
99
+ - respondent_words : List[str]
100
+ Contains all words that model pays attention
101
+ to be a respondent words.
102
+
103
+ Returns:
104
+ --------
105
+ - rendered_text : str
106
+ Contains the `facts` but with adding
107
+ highlighting mechanism to visualize it using CSS in HTML format.
108
+
109
+ Example:
110
+ --------
111
+ >>> facts_ = 'Mohammed shot Aly after a hot negotiation happened between
112
+ ... them about the profits of their company'
113
+ >>> petitioner_words_ = ['shot', 'hot']
114
+ >>> respondent_words_ = ['profits']
115
+ >>> generate_highlighted_words(facts_, petitioner_words_, respondent_words_)
116
+
117
+ >>> output:
118
+ <div class='text-facts'> Mohammed <span class='highlight-petitioner'>shot</span>
119
+ Aly after a <span class='highlight-petitioner'>hot</span> negotiation happened
120
+ between them about <span class='highlight-respondent'>profits</span> of their
121
+ company </div>
122
+ """
123
+
124
+ rendered_text = '<div class="text-facts"> '
125
+
126
+ for word in facts.split():
127
+ if word in petitioner_words:
128
+ highlight_word = ' <span class="highlight-petitioner"> ' + word + " </span> "
129
+ rendered_text += highlight_word
130
+
131
+ elif word in respondent_words:
132
+ highlight_word = ' <span class="highlight-respondent"> ' + word + " </span> "
133
+ rendered_text += highlight_word
134
+
135
+ else:
136
+ rendered_text += " " + word
137
+
138
+ rendered_text += " </div>"
139
+
140
+ return rendered_text
141
+
142
+
143
+ class VectorizerGenerator:
144
+ """Responsible for creation and generation of tokenizers and text
145
+ vectorizers for JudgerAI's models"""
146
+
147
+ def __init__(self) -> None:
148
+ pass
149
+
150
+ def generate_tf_idf_vectorizer(self) -> keras.layers.TextVectorization:
151
+ """
152
+ Generating best text vectorizer of the tf-idf model (3rd combination).
153
+
154
+ Returns:
155
+ -------
156
+ - text_vectorizer : keras.layers.TextVectorization
157
+ Represents the case facts' vectorizer that converts case facts to
158
+ numerical tensors.
159
+ """
160
+
161
+ first_party_names = X_train["first_party"]
162
+ second_party_names = X_train["second_party"]
163
+ facts = X_train["Facts"]
164
+
165
+ anonymized_facts = preprocessor.anonymize_data(
166
+ first_party_names, second_party_names, facts)
167
+
168
+ text_vectorizer, _ = preprocessor.convert_text_to_vectors_tf_idf(
169
+ anonymized_facts)
170
+
171
+ return text_vectorizer
172
+
173
+ def generate_cnn_vectorizer(self) -> keras.layers.TextVectorization:
174
+ """
175
+ Generating best text vectorizer of the cnn model (2nd combination).
176
+
177
+ Returns:
178
+ -------
179
+ - text_vectorizer : keras.layers.TextVectorization
180
+ Represents the case facts' vectorizer that converts case facts to
181
+ numerical tensors.
182
+ """
183
+
184
+ balanced_df = preprocessor.balance_data(X_train["Facts"], y_train)
185
+ X_train_balanced = balanced_df["Facts"]
186
+
187
+ text_vectorizer, _ = preprocessor.convert_text_to_vectors_cnn(
188
+ X_train_balanced)
189
+
190
+ return text_vectorizer
191
+
192
+ def generate_glove_tokenizer(self) -> keras.preprocessing.text.Tokenizer:
193
+ """
194
+ Generating best glove tokenizer of the GloVe model (2nd combination).
195
+
196
+ Returns:
197
+ -------
198
+ - glove_tokenizer : keras.preprocessing.text.Tokenizer
199
+ Represents the case facts' tokenizer that converts case facts to
200
+ numerical tensors.
201
+ """
202
+
203
+ balanced_df = preprocessor.balance_data(X_train["Facts"], y_train)
204
+ X_train_balanced = balanced_df["Facts"]
205
+
206
+ glove_tokenizer, _ = preprocessor.convert_text_to_vectors_glove(
207
+ X_train_balanced)
208
+
209
+ return glove_tokenizer
210
+
211
+ def generate_lstm_tokenizer(self) -> keras.preprocessing.text.Tokenizer:
212
+ """
213
+ Generating best text tokenizer of the LSTM model (1st combination).
214
+
215
+ Returns:
216
+ -------
217
+ - lstm_tokenizer : keras.preprocessing.text.Tokenizer
218
+ Represents the case facts' tokenizer that converts case facts to
219
+ numerical tensors.
220
+ """
221
+
222
+ lstm_tokenizer = Tokenizer(num_words=18430)
223
+ lstm_tokenizer.fit_on_texts(X_train["Facts"])
224
+
225
+ return lstm_tokenizer
226
+
227
+ def generate_bert_tokenizer(self) -> transformers.BertTokenizer:
228
+ """
229
+ Generating best bert tokenizer of the BERT model (1st combination).
230
+
231
+ Returns:
232
+ -------
233
+ - bert_tokenizer : transformers.BertTokenizer
234
+ Represents the case facts' tokenizer that converts case facts to
235
+ input ids tensors.
236
+ """
237
+
238
+ bert_tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
239
+ return bert_tokenizer
240
+
241
+
242
+ class DataPreparator:
243
+ """Responsible for preparing the case facts aka converting case facts to
244
+ numerical vectors using `VectorizerGenerator` object."""
245
+
246
+ def __init__(self) -> None:
247
+ self.vectorizer_generator = VectorizerGenerator()
248
+
249
+ def prepare_doc2vec(self, facts: str) -> pd.DataFrame:
250
+ """
251
+ Responsible for converting `facts` string to numerical vector
252
+ using `doc2vec_model_embeddings`.
253
+
254
+ Parameters:
255
+ ----------
256
+ - facts : str
257
+ Represents the case facts.
258
+
259
+ Returns:
260
+ -------
261
+ - facts_vector : pd.DataFrame
262
+ A row DataFrame represents the 50-d vector of the `facts`.
263
+ """
264
+
265
+ facts = pd.Series(facts)
266
+ facts_processed = preprocessor.preprocess_data(facts)
267
+ facts_vectors = preprocessor.convert_text_to_vectors_doc2vec(
268
+ facts_processed, train=False, embeddings_doc2vec=doc2vec_model_embeddings)
269
+
270
+ return facts_vectors
271
+
272
+ def _anonymize_facts(self, first_party_name: str, second_party_name: str, facts: str) -> str:
273
+ """
274
+ Anonymize case `facts` by replacing `first_party_name` & `second_party_name` with
275
+ generic tag "__PARTY__".
276
+
277
+ Parameters:
278
+ -----------
279
+ - first_party_name : str
280
+ Represents the petitioner name.
281
+ - second_party_name : str
282
+ Represents the respondent name.
283
+ - facts : str
284
+ Represents the case facts.
285
+
286
+ Returns:
287
+ -------
288
+ - anonymized_facts : str
289
+ Represents `facts` after anonymization.
290
+ """
291
+
292
+ anonymized_facts = preprocessor._anonymize_case_facts(
293
+ first_party_name, second_party_name, facts)
294
+
295
+ return anonymized_facts
296
+
297
+ def prepare_tf_idf(self, anonymized_facts: str) -> tf.Tensor:
298
+ """
299
+ Responsible for converting `facts` string to numerical vector
300
+ using tf-idf `vectorizer_generator` in the 3rd combination.
301
+
302
+ Parameters:
303
+ -----------
304
+ - anonymized_facts : str
305
+ Represents the case facts after anonymization.
306
+
307
+ Returns:
308
+ -------
309
+ - facts_vector : tf.Tensor
310
+ A Tensor of 10000-d represents `facts`.
311
+ """
312
+
313
+ anonymized_facts = pd.Series(anonymized_facts)
314
+ tf_idf_vectorizer = self.vectorizer_generator.generate_tf_idf_vectorizer()
315
+
316
+ facts_vector = preprocessor.convert_text_to_vectors_tf_idf(
317
+ anonymized_facts, train=False, text_vectorizer=tf_idf_vectorizer)
318
+
319
+ return facts_vector
320
+
321
+ def prepare_cnn(self, facts: str) -> tf.Tensor:
322
+ """
323
+ Responsible for converting `facts` string to numerical vector
324
+ using cnn `vectorizer_generator` in the 2nd combination.
325
+
326
+ Parameters:
327
+ -----------
328
+ - facts : str
329
+ Represents the case facts.
330
+
331
+ Returns:
332
+ -------
333
+ - facts_vector : tf.Tensor
334
+ A Tensor of 2000-d represents `facts`.
335
+ """
336
+ facts = pd.Series(facts)
337
+
338
+ cnn_vectorizer = self.vectorizer_generator.generate_cnn_vectorizer()
339
+
340
+ facts_vector = preprocessor.convert_text_to_vectors_cnn(
341
+ facts, train=False, text_vectorizer=cnn_vectorizer)
342
+
343
+ return facts_vector
344
+
345
+ def prepare_glove(self, facts: str) -> np.ndarray:
346
+ """
347
+ Responsible for converting `facts` string to numerical vector
348
+ using glove `vectorizer_generator` in the 2nd combination.
349
+
350
+ Parameters:
351
+ -----------
352
+ - facts : str
353
+ Represents the case facts.
354
+
355
+ Returns:
356
+ -------
357
+ - facts_vector : np.ndarray
358
+ An np.ndarray of 50-d represents `facts`.
359
+ """
360
+
361
+ facts = pd.Series(facts)
362
+
363
+ glove_tokenizer = self.vectorizer_generator.generate_glove_tokenizer()
364
+
365
+ facts_vector = preprocessor.convert_text_to_vectors_glove(
366
+ facts, train=False, glove_tokenizer=glove_tokenizer)
367
+
368
+ return facts_vector
369
+
370
+ def prepare_lstm(self, facts: str) -> np.ndarray:
371
+ """
372
+ Responsible for converting `facts` string to numerical vector
373
+ using lstm `vectorizer_generator` in the 1st combination.
374
+
375
+ Parameters:
376
+ -----------
377
+ - facts : str
378
+ Represents the case facts.
379
+
380
+ Returns:
381
+ -------
382
+ - facts_vector_padded : np.ndarray
383
+ An np.ndarray of 974-d represents `facts`.
384
+ """
385
+
386
+ facts = pd.Series(facts)
387
+ lstm_tokenizer = self.vectorizer_generator.generate_lstm_tokenizer()
388
+ facts_vector = lstm_tokenizer.texts_to_sequences(facts)
389
+ facts_vector_padded = pad_sequences(facts_vector, 974)
390
+
391
+ return facts_vector_padded
392
+
393
+ def prepare_bert(self, facts: str) -> tf.Tensor:
394
+ """
395
+ Responsible for converting `facts` string to numerical vector
396
+ using bert `vectorizer_generator` in the 1st combination.
397
+
398
+ Parameters:
399
+ -----------
400
+ - facts : str
401
+ Represents the case facts.
402
+
403
+ Returns:
404
+ -------
405
+ - tf.Tensor
406
+ A tf.Tensor of 256-d represents `facts` input ids.
407
+ """
408
+
409
+ bert_tokenizer = self.vectorizer_generator.generate_bert_tokenizer()
410
+ facts_vector_dict = bert_tokenizer.encode_plus(
411
+ facts,
412
+ max_length=256,
413
+ truncation=True,
414
+ padding='max_length',
415
+ add_special_tokens=True,
416
+ return_tensors='tf'
417
+ )
418
+
419
+ return facts_vector_dict["input_ids"]
420
+
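+ # Shape note for the BERT preparation above: with max_length=256, padding='max_length',
+ # and return_tensors='tf', `encode_plus` returns a dict whose "input_ids" entry is a
+ # tf.Tensor of shape (1, 256); that single-row tensor is what `bert_model` consumes.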
421
+
422
+ class Predictor:
423
+ """Responsible for get predictions of JudgerAIs' models"""
424
+
425
+ def __init__(self) -> None:
426
+ self.data_preparator = DataPreparator()
427
+
428
+ def predict_doc2vec(self, facts: str) -> np.ndarray:
429
+ """
430
+ Get prediction of `facts` using `doc2vec_model`.
431
+
432
+ Parameters:
433
+ ----------
434
+ - facts : str
435
+ Represents the case facts.
436
+
437
+ Returns:
438
+ --------
439
+ - pet_res_scores : np.ndarray
440
+ An array contains 2 elements, one for probability of petitioner winning
441
+ and the second for the probability of respondent winning.
442
+ """
443
+
444
+ facts_vector = self.data_preparator.prepare_doc2vec(facts)
445
+ predictions = doc2vec_model.predict(facts_vector)
446
+
447
+ pet_res_scores = []
448
+ for i in predictions:
449
+ temp = i[0]
450
+ pet_res_scores.append(np.array([1 - temp, temp]))
451
+
452
+ return np.array(pet_res_scores)
453
+
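+ # The score construction above treats the model's sigmoid output as the probability of
+ # the respondent (winner index 1) winning; e.g. a raw prediction of 0.8 becomes
+ # [1 - 0.8, 0.8] = [0.2, 0.8], i.e. 20% petitioner vs. 80% respondent. The same
+ # [1 - p, p] mapping is reused by the other predict_* methods below.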
454
+ def predict_tf_idf(self, anonymized_facts: str) -> np.ndarray:
455
+ """
456
+ Get prediction of `facts` using `tfidf_model`.
457
+
458
+ Parameters:
459
+ -----------
460
+ - anonymized_facts : str
461
+ Represents the case facts after anonymization.
462
+
463
+ Returns:
464
+ --------
465
+ - pet_res_scores : np.ndarray
466
+ An array contains 2 elements, one for probability of petitioner winning
467
+ and the second for the probability of respondent winning.
468
+ """
469
+
470
+ facts_vector = self.data_preparator.prepare_tf_idf(anonymized_facts)
471
+ predictions = tfidf_model.predict(facts_vector)
472
+
473
+ pet_res_scores = []
474
+ for i in predictions:
475
+ temp = i[0]
476
+ pet_res_scores.append(np.array([1 - temp, temp]))
477
+
478
+ return np.array(pet_res_scores)
479
+
480
+ def predict_cnn(self, facts: str) -> np.ndarray:
481
+ """
482
+ Get prediction of `facts` using `cnn_model`.
483
+
484
+ Parameters:
485
+ ----------
486
+ - facts : str
487
+ Represents the case facts.
488
+
489
+ Returns:
490
+ --------
491
+ - pet_res_scores : np.ndarray
492
+ An array contains 2 elements, one for probability of petitioner winning
493
+ and the second for the probability of respondent winning.
494
+ """
495
+
496
+ facts_vector = self.data_preparator.prepare_cnn(facts)
497
+ predictions = cnn_model.predict(facts_vector)
498
+
499
+ pet_res_scores = []
500
+ for i in predictions:
501
+ temp = i[0]
502
+ pet_res_scores.append(np.array([1 - temp, temp]))
503
+
504
+ return np.array(pet_res_scores)
505
+
506
+ def predict_glove(self, facts: str) -> np.ndarray:
507
+ """
508
+ Get prediction of `facts` using `glove_model`.
509
+
510
+ Parameters:
511
+ ----------
512
+ - facts : str
513
+ Represents the case facts.
514
+
515
+ Returns:
516
+ --------
517
+ - pet_res_scores : np.ndarray
518
+ An array contains 2 elements, one for probability of petitioner winning
519
+ and the second for the probability of respondent winning.
520
+ """
521
+
522
+ facts_vector = self.data_preparator.prepare_glove(facts)
523
+ predictions = glove_model.predict(facts_vector)
524
+
525
+ pet_res_scores = []
526
+ for i in predictions:
527
+ temp = i[0]
528
+ pet_res_scores.append(np.array([1 - temp, temp]))
529
+
530
+ return np.array(pet_res_scores)
531
+
532
+ def predict_lstm(self, facts: str) -> np.ndarray:
533
+ """
534
+ Get prediction of `facts` using `lstm_model`.
535
+
536
+ Parameters:
537
+ ----------
538
+ - facts : str
539
+ Represents the case facts.
540
+
541
+ Returns:
542
+ --------
543
+ - pet_res_scores : np.ndarray
544
+ An array contains 2 elements, one for probability of petitioner winning
545
+ and the second for the probability of respondent winning.
546
+ """
547
+
548
+ facts_vector = self.data_preparator.prepare_lstm(facts)
549
+ predictions = lstm_model.predict(facts_vector)
550
+
551
+ pet_res_scores = []
552
+ for i in predictions:
553
+ temp = i[0]
554
+ pet_res_scores.append(np.array([1 - temp, temp]))
555
+
556
+ return np.array(pet_res_scores)
557
+
558
+ def predict_bert(self, facts: str) -> np.ndarray:
559
+ """
560
+ Get prediction of `facts` using `bert_model`.
561
+
562
+ Parameters:
563
+ ----------
564
+ - facts : str
565
+ Represents the case facts.
566
+
567
+ Returns:
568
+ --------
569
+ - predictions : np.ndarray
570
+ An array contains 2 elements, one for probability of petitioner winning
571
+ and the second for the probability of respondent winning.
572
+ """
573
+
574
+ facts_vector = self.data_preparator.prepare_bert(facts)
575
+ predictions = bert_model.predict(facts_vector)
576
+
577
+ return predictions
578
+
579
+ def predict_fasttext(self, facts: str) -> np.ndarray:
580
+ """
581
+ Get prediction of `facts` using `fasttext`.
582
+
583
+ Parameters:
584
+ ----------
585
+ - facts : str
586
+ Represents the case facts.
587
+
588
+ Returns:
589
+ --------
590
+ - pet_res_scores : np.ndarray
591
+ An array contains 2 elements, one for probability of petitioner winning
592
+ and the second for the probability of respondent winning.
593
+ """
594
+
595
+ prediction = fasttext_model.predict(facts)[1]
596
+ prediction = np.array([prediction])
597
+
598
+ pet_res_scores = []
599
+ for i in prediction:
600
+ temp = i[0]
601
+ pet_res_scores.append(np.array([1 - temp, temp]))
602
+
603
+ return np.array(pet_res_scores)
604
+
605
+ def summarize_facts(self, facts: str) -> str:
606
+ """Summarize `facts` with the BART summarization pipeline and return the summary text."""
+ summarized_case_facts = summarization_model(facts)[0]['summary_text']
607
+ return summarized_case_facts
src/plotting.py ADDED
@@ -0,0 +1,230 @@
1
+ from typing import List
2
+
3
+ import numpy as np
4
+ import pandas as pd
5
+ import matplotlib.pyplot as plt
6
+ import seaborn as sn
7
+
8
+ from sklearn.metrics import auc
9
+ from sklearn.metrics import roc_curve
10
+ from sklearn.metrics import classification_report
11
+ from sklearn.metrics import confusion_matrix
12
+
13
+ from tensorflow import keras
14
+
15
+
16
+ class PlottingManager:
17
+ """Responsible for providing plots & visualization for the models."""
18
+
19
+ def __init__(self) -> None:
20
+ """Define style for visualizations."""
21
+ plt.style.use("seaborn")
22
+
23
+ def plot_subplots_curve(
24
+ self,
25
+ training_measure: List[List[float]],
26
+ validation_measure: List[List[float]],
27
+ title: str,
28
+ train_color: str = "orangered",
29
+ validation_color: str = "dodgerblue",
30
+ ) -> None:
31
+ """
32
+ Plotting subplots of the elements of `training_measure` vs. `validation_measure`.
33
+
34
+ Parameters:
35
+ ------------
36
+ - training_measure : List[List[float]]
37
+ A `k` by `num_epochs` list contains the trained measure whether it's loss or
38
+ accuracy for each fold.
39
+ - validation_measure : List[List[float]]
40
+ A `k` by `num_epochs` list contains the validation measure whether it's loss
41
+ or accuracy for each fold.
42
+ - title : str
43
+ Represents the title of the plot.
44
+ - train_color : str, optional
45
+ Represents the graph color for the `training_measure`. (Default is "orangered").
46
+ - validation_color : str, optional
47
+ Represents the graph color for the `validation_measure`. (Default is "dodgerblue").
48
+ """
49
+
50
+ plt.figure(figsize=(12, 8))
51
+
52
+ for i in range(len(training_measure)):
53
+ plt.subplot(2, 2, i + 1)
54
+ plt.plot(training_measure[i], c=train_color)
55
+ plt.plot(validation_measure[i], c=validation_color)
56
+ plt.title("Fold " + str(i + 1))
57
+
58
+ plt.suptitle(title)
59
+ plt.show()
60
+
61
+ def plot_heatmap(
62
+ self, measure: List[List[float]], title: str, cmap: str = "coolwarm"
63
+ ) -> None:
64
+ """
65
+ Plotting a heatmap of the values in `measure`.
66
+
67
+ Parameters:
68
+ ------------
69
+ - measure : List[List[float]]
70
+ A `k` by `num_epochs` list contains the measure whether it's loss
71
+ or accuracy for each fold.
72
+ - title : str
73
+ Title of the plot.
74
+ - cmap : str, optional
75
+ Color map of the plot (default is "coolwarm").
76
+ """
77
+
78
+ # transpose the array to make it `num_epochs` by `k`
79
+ values_array = np.array(measure).T
80
+ df_cm = pd.DataFrame(
81
+ values_array,
82
+ range(1, values_array.shape[0] + 1),
83
+ ["fold " + str(i + 1) for i in range(4)],
84
+ )
85
+
86
+ plt.figure(figsize=(10, 8))
87
+ plt.title(
88
+ title + " Throughout " + str(values_array.shape[1]) + " Folds", pad=20
89
+ )
90
+ sn.heatmap(df_cm, annot=True, cmap=cmap, annot_kws={"size": 10})
91
+ plt.show()
92
+
93
+ def plot_average_curves(
94
+ self,
95
+ title: str,
96
+ x: List[float],
97
+ y: List[float],
98
+ x_label: str,
99
+ y_label: str,
100
+ train_color: str = "orangered",
101
+ validation_color: str = "dodgerblue",
102
+ ) -> None:
103
+ """
104
+ Plotting the curves of `x` against `y`, where x and y are training and validation
105
+ measures (loss or accuracy).
106
+
107
+ Parameters:
108
+ ------------
109
+ - title : str
110
+ Title of the plot.
111
+ - x : List[float]
112
+ Training measure of the models (loss or accuracy).
113
+ - y : List[float]
114
+ Validation measure of the models (loss or accuracy).
115
+ - x_label : str
116
+ Label of the training measure to put it in plot legend.
117
+ - y_label : str
118
+ Label of the validation measure to put it in plot legend.
119
+ - train_color : str, optional
120
+ Color of the training plot (default is "orangered").
121
+ - validation_color : str, optional
122
+ Color of the validation plot (default is "dodgerblue").
123
+ """
124
+
125
+ plt.title(title, pad=20)
126
+ plt.plot(x, c=train_color, label=x_label)
127
+ plt.plot(y, c=validation_color, label=y_label)
128
+ plt.legend()
129
+ plt.show()
130
+
131
+ def plot_roc_curve(
132
+ self,
133
+ all_models: List[keras.models.Sequential],
134
+ X_test: pd.DataFrame,
135
+ y_test: pd.Series,
136
+ ) -> None:
137
+ """
138
+ Plotting the AUC-ROC curve of all the passed models in `all_models`.
139
+
140
+ Parameters:
141
+ ------------
142
+ - all_models : List[keras.models.Sequential]
143
+ Contains all trained models, number of models equals number of
144
+ `k` fold cross-validation.
145
+ - X_test : pd.DataFrame
146
+ Contains the testing vectors.
147
+ - y_test : pd.Series
148
+ Contains the testing labels.
149
+ """
150
+
151
+ plt.figure(figsize=(12, 8))
152
+ for i, model in enumerate(all_models):
153
+ y_pred = model.predict(X_test).ravel()
154
+ fpr, tpr, _ = roc_curve(y_test, y_pred)
155
+ auc_curve = auc(fpr, tpr)
156
+ plt.subplot(2, 2, i + 1)
157
+ plt.plot([0, 1], [0, 1], color="dodgerblue", linestyle="--")
158
+ plt.plot(
159
+ fpr,
160
+ tpr,
161
+ color="orangered",
162
+ label=f"Fold {str(i+1)} (area = {auc_curve:.3f})",
163
+ )
164
+ plt.legend(loc="best")
165
+ plt.title(f"Fold {str(i+1)}")
166
+
167
+ plt.suptitle("AUC-ROC curves")
168
+ plt.show()
169
+
170
+ def plot_classification_report(
171
+ self, model: keras.models.Sequential, X_test: pd.DataFrame, y_test: pd.Series
172
+ ) -> str:
173
+ """
174
+ Plotting the classification report of the passed `model`.
175
+
176
+ Parameters:
177
+ ------------
178
+ - model : keras.models.Sequential
179
+ The trained model that will be evaluated.
180
+ - X_test : pd.DataFrame
181
+ Contains the testing vectors.
182
+ - y_test : pd.Series
183
+ Contains the testing labels.
184
+
185
+ Returns:
186
+ --------
187
+ - str | dict: The classification report for the given model and testing data.
188
+ It returns a string if `output_format` is set to 'str', and returns
189
+ a dictionary if `output_format` is set to 'dict'.
190
+ """
191
+
192
+ y_pred = model.predict(X_test).ravel()
193
+ preds = np.where(y_pred > 0.5, 1, 0)
194
+ cls_report = classification_report(y_test, preds)
195
+
196
+ return cls_report
197
+
198
+ def plot_confusion_matrix(
199
+ self,
200
+ all_models: List[keras.models.Sequential],
201
+ X_test: pd.DataFrame,
202
+ y_test: pd.Series,
203
+ ) -> None:
204
+ """
205
+ Plotting the confusion matrix of each model in `all_models`.
206
+
207
+ Parameters:
208
+ ------------
209
+ - all_models: list[keras.models.Sequential]
210
+ Contains all trained models, number of models equals
211
+ number of `k` fold cross-validation.
212
+ - X_test: pd.DataFrame
213
+ Contains the testing vectors.
214
+ - y_test: pd.Series
215
+ Contains the testing labels.
216
+ """
217
+
218
+ _, axes = plt.subplots(nrows=2, ncols=2, figsize=(12, 8))
219
+
220
+ for i, (model, ax) in enumerate(zip(all_models, axes.flatten())):
221
+ y_pred = model.predict(X_test).ravel()
222
+ preds = np.where(y_pred > 0.5, 1, 0)
223
+
224
+ conf_matrix = confusion_matrix(y_test, preds)
225
+ sn.heatmap(conf_matrix, annot=True, ax=ax)
226
+ ax.set_title(f"Fold {i+1}")
227
+
228
+ plt.suptitle("Confusion Matrices")
229
+ plt.tight_layout()
230
+ plt.show()
src/preprocessing.py ADDED
@@ -0,0 +1,591 @@
1
+ # global
2
+ import string
3
+ from typing import List, Tuple
4
+
5
+ import numpy as np
6
+ import pandas as pd
7
+
8
+ import re
9
+ import nltk
10
+
11
+ from sklearn.utils import resample
12
+
13
+ from gensim.models.doc2vec import Doc2Vec, TaggedDocument
14
+ from nltk.tokenize import RegexpTokenizer
15
+
16
+ import tensorflow as tf
17
+ from keras.layers import TextVectorization
18
+ from keras.preprocessing.text import Tokenizer
19
+ from keras.utils import pad_sequences
20
+
21
+ # local
22
+ from utils import Doc2VecModel
23
+
24
+
25
+ punct = string.punctuation
26
+ stemmer = nltk.stem.PorterStemmer()
27
+ eng_stopwords = nltk.corpus.stopwords.words("english")
28
+
29
+
30
+ class Preprocessor:
31
+ """Responsible for preprocessing case facts."""
32
+
33
+ def __init__(self) -> None:
34
+ pass
35
+
36
+ def _nltk_tokenizer(self, text: str) -> List[str]:
37
+ """
38
+ Tokenize a given `text` using the RegexpTokenizer from the nltk library.
39
+
40
+ Parameters:
41
+ -----------
42
+ - text : str
43
+ A string containing the text to be tokenized.
44
+
45
+ Returns:
46
+ --------
47
+ - tokens : List[str]
48
+ A list of tokens generated by the tokenizer.
49
+ """
50
+
51
+ tokenizer = RegexpTokenizer(r"\w+")
52
+ tokens = tokenizer.tokenize(text)
53
+
54
+ return tokens
55
+
56
+ def _tokenize_text(self, text_column: pd.Series) -> pd.Series:
57
+ """Splitting `text_column` into tokens.
58
+
59
+ Parameters:
60
+ ------------
61
+ - text_column : pd.Series
62
+ Contains text that needs to be tokenized.
63
+
64
+ Returns:
65
+ --------
66
+ - tokenized_text : pd.Series
67
+ Contains tokenized version of `text_column`.
68
+ """
69
+
70
+ tokenized_text = text_column.apply(self._nltk_tokenizer)
71
+ return tokenized_text
72
+
73
+ def _convert_to_tagged_document(
74
+ self, text_column: pd.Series
75
+ ) -> Tuple[List[str], List[TaggedDocument]]:
76
+ """
77
+ Convert the tokenized `text_column` to TaggedDocuments.
78
+
79
+ Parameters:
80
+ ------------
81
+ - text_column : pd.Series
82
+ Contains the list of tokens of each fact.
83
+
84
+ Returns:
85
+ --------
86
+ A tuple containing the following items:
87
+ - tokens_list : list[str]
88
+ Contains all tokens of each case in the `text_column`.
89
+ - tagged_docs : list[TaggedDocument]
90
+ Contains TaggedDocument object for each case.
91
+ """
92
+
93
+ tokens_list = text_column.to_list()
94
+ tagged_docs = [TaggedDocument(t, [str(i)])
95
+ for i, t in enumerate(tokens_list)]
96
+
97
+ return tokens_list, tagged_docs
98
+
99
+ def _vectorize_text(
100
+ self, doc2vec_model: Doc2Vec, df: pd.Series, tokens_list: List[str]
101
+ ) -> pd.DataFrame:
102
+ """
103
+ Convert values of `tokens_list` to a vector.
104
+
105
+ Parameters:
106
+ -----------
107
+ - doc2vec_model : Doc2Vec
108
+ Trained Doc2Vec model.
109
+ - df : pd.Series
110
+ Used only to get its indices for the newly generated dataframe.
111
+ - tokens_list : List[str]
112
+ Contains all tokens of each case.
113
+
114
+ Returns:
115
+ --------
116
+ - text_vectors_df : pd.DataFrame
117
+ Contains the vector representation for each case.
118
+ """
119
+
120
+ text_vectors = [doc2vec_model.infer_vector(doc) for doc in tokens_list]
121
+ text_vectors_df = pd.DataFrame(text_vectors, index=df.index)
122
+
123
+ return text_vectors_df
124
+
125
+ def _anonymize_case_facts(
126
+ self, first_party_name: str, second_party_name: str, facts: str
127
+ ) -> str:
128
+ """
129
+ Anonymize case facts by replacing its party names with "_PARTY_" tag.
130
+
131
+ Parameters:
132
+ ------------
133
+ - first_party_name : str
134
+ Represents first party name or petitioner name.
135
+ - second_party_name : str
136
+ Represents second party name or respondent name.
137
+ - facts : str
138
+ Represents case facts.
139
+
140
+ Returns:
141
+ --------
142
+ - anonymized_facts : str
143
+ An anonymized version of `facts`.
144
+ """
145
+
146
+ # remove any commas and any non alphabet characters
147
+ first_party_name = re.sub(r"[\,+]", " ", first_party_name)
148
+ first_party_name = re.sub(r"[^a-zA-Z]", " ", first_party_name)
149
+
150
+ second_party_name = re.sub(r"[\,+]", " ", second_party_name)
151
+ second_party_name = re.sub(r"[^a-zA-Z]", " ", second_party_name)
152
+
153
+ for name in first_party_name.split():
154
+ facts = re.sub(name, " _PARTY_ ", facts)
155
+
156
+ for name in second_party_name.split():
157
+ facts = re.sub(name, " _PARTY_ ", facts)
158
+
159
+ # replace any consecutive _PARTY_ tags with only one _PARTY_ tag.
160
+ regex_continous_tags = r"(_PARTY_\s+){2,}"
161
+ anonymized_facts = re.sub(regex_continous_tags, " _PARTY_ ", facts)
162
+ # remove any consecutive spaces
163
+ anonymized_facts = re.sub(r"\s+", " ", anonymized_facts)
164
+
165
+ return anonymized_facts
166
+
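+ # A small worked example of the anonymization above (hypothetical parties and facts):
+ # first_party_name = "John Smith", second_party_name = "Acme Corp.",
+ # facts = "Smith sued Acme after the contract was cancelled."
+ # -> "_PARTY_ sued _PARTY_ after the contract was cancelled." (up to surrounding spaces)
+ # Every token of either party name becomes _PARTY_, runs of consecutive tags collapse
+ # into one, and repeated whitespace is squeezed to single spaces.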
167
+ def _preprocess_text(self, text: str) -> str:
168
+ """
169
+ Preprocessing & cleaning `text` including:
170
+ - lowercasing
171
+ - removing quotation marks
172
+ - removing digits
173
+ - removing punctuation
174
+ - removing brackets, braces, and parentheses
175
+ - removing stopwords
176
+ - stemming tokens
177
+
178
+ Parameters:
179
+ ------------
180
+ - text : str
181
+ Text need to be processed (cleaned).
182
+
183
+ Returns:
184
+ --------
185
+ - processed_text : str
186
+ A preprocessed version of `text`.
187
+ """
188
+
189
+ text = text.lower()
190
+ # remove quotation marks
191
+ text = re.sub(r"\'", "", text)
192
+ # remove digits
193
+ text = re.sub(r"\d+", "", text)
194
+ # remove punctuation but with keeping '_' letter
195
+ text = "".join([ch for ch in text if (ch == "_") or (ch not in punct)])
196
+ # remove brackets, braces, and parentheses
197
+ text = re.sub(r"[\[\]\(\)\{\}]+", " ", text)
198
+ tokens = nltk.word_tokenize(text)
199
+ # remove stopwords and stemming tokens
200
+ tokens = [stemmer.stem(token)
201
+ for token in tokens if token not in eng_stopwords]
202
+ # convert tokens back to string
203
+ processed_text = " ".join(tokens)
204
+
205
+ return processed_text
206
+
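+ # Illustrative before/after for the cleaning above (hypothetical sentence):
+ # "The Court affirmed 3 of the rulings (in 1999)."
+ # lowercased, digits/punctuation/brackets stripped -> "the court affirmed of the rulings in"
+ # stopwords dropped and Porter-stemmed -> roughly "court affirm rule"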
207
+ def convert_text_to_vectors_doc2vec(
208
+ self,
209
+ text_column: pd.Series,
210
+ train: bool = True,
211
+ embeddings_doc2vec: Doc2Vec = None,
212
+ ) -> Tuple[Doc2Vec, pd.DataFrame] | pd.DataFrame:
213
+ """
214
+ Converting `text_column` to vectors using `Doc2Vec` model
215
+
216
+ Parameters:
217
+ ------------
218
+ - text_column : pd.Series
219
+ Contains the case facts.
220
+ - train : bool, optional
221
+ Defines whether the model will be trained or not. (if True, Doc2Vec will be trained |
222
+ else, Doc2Vec will use the passed `embeddings_doc2vec`). (Default is True).
223
+ - embeddings_doc2vec : Doc2Vec, optional
224
+ Trained Doc2Vec model will be used for generating embeddings of `text_column` if
225
+ `train` is False. (Default is None).
226
+
227
+ Returns:
228
+ --------
229
+ 1. A tuple contains the following:
230
+ - embeddings_doc2vec : Doc2Vec
231
+ Trained Doc2Vec model.
232
+ - text_vectors_df : pd.DataFrame
233
+ A DataFrame contains `text_column` vectors if `train` is True.
234
+
235
+ 2. text_vectors_df : pd.DataFrame
236
+ A DataFrame contains `text_column` vectors if `train` is False.
237
+
238
+ Raises:
239
+ -------
240
+ - AssertionError
241
+ If train is False and `embeddings_doc2vec` is None.
242
+ - AssertionError
243
+ If train is False and `embedding_doc2vec` is not an instance of Doc2Vec
244
+ """
245
+
246
+ tokenized_text = self._tokenize_text(text_column)
247
+ tokens_list, tagged_docs = self._convert_to_tagged_document(
248
+ tokenized_text)
249
+
250
+ if train:
251
+ doc2vec_model = Doc2VecModel()
252
+ embeddings_doc2vec = doc2vec_model.train_doc2vec_embeddings_model(
253
+ tagged_docs
254
+ )
255
+ text_vectors_df = self._vectorize_text(
256
+ embeddings_doc2vec, text_column, tokens_list
257
+ )
258
+ return embeddings_doc2vec, text_vectors_df
259
+
260
+ assert (
261
+ embeddings_doc2vec is not None
262
+ ), "`embedding_doc2vec` argument must be not None."
263
+ assert isinstance(
264
+ embeddings_doc2vec, Doc2Vec
265
+ ), "`embedding_doc2vec` argument must be an instance of Doc2Vec to infer vectors."
266
+ text_vectors_df = self._vectorize_text(
267
+ embeddings_doc2vec, text_column, tokens_list
268
+ )
269
+
270
+ return text_vectors_df
271
+
272
+ def convert_text_to_vectors_tf_idf(
273
+ self,
274
+ text_column: pd.Series,
275
+ ngrams: int = 2,
276
+ max_tokens: int = 10000,
277
+ output_mode: str = "tf-idf",
278
+ train: bool = True,
279
+ text_vectorizer: TextVectorization = None,
280
+ ) -> Tuple[TextVectorization, tf.Tensor] | tf.Tensor:
281
+ """
282
+ Converting `text_column` to vectors using `TextVectorization` layer.
283
+
284
+ Parameters:
285
+ ------------
286
+ - text_column : pd.Series
287
+ Contains the case facts.
288
+ - ngrams : int, optional
289
+ Defines the number of n-gram (Default is 2).
290
+ - max_tokens : int, optional
291
+ Defines the number of max_tokens of `text_vectorizer` (Default is 10,000).
292
+ - output_mode : str, optional
293
+ Represents the output vectors type whether it is "tfi-df" or "binary" or "count"
294
+ (Default is "tf-idf").
295
+ - train : bool, optional
296
+ Defines whether the model will be trained or not. (if True, TextVectorization
297
+ will be trained, else, TextVectorization will use the passed `text_vectorizer`).
298
+ (Default is True).
299
+ - text_vectorizer : TextVectorization, optional
300
+ Trained TextVectorization layer will be used for generating embeddings of
301
+ `text_column` if `train` is False. (Default is None).
302
+
303
+ Returns:
304
+ --------
305
+ - if `train` == True:
306
+ A tuple contains the following:
307
+ - text_vectorizer : TextVectorization
308
+ Trained TextVectorization layer.
309
+ - text_vectors : tf.Tensor
310
+ A Tensor contains `text_column` training vectors.
311
+ - otherwise:
312
+ text_vectors : tf.Tensor
313
+ A Tensor contains `text_column` testing vectors.
314
+
315
+ Raises:
316
+ -------
317
+ - AssertionError
318
+ If train is False and `text_vectorizer` is None.
319
+ - AssertionError
320
+ If train is False and `text_vectorizer` is not an instance of TextVectorization.
321
+ """
322
+
323
+ if train:
324
+ text_vectorizer = TextVectorization(
325
+ ngrams=ngrams, max_tokens=max_tokens, output_mode=output_mode
326
+ )
327
+ text_vectorizer.adapt(text_column)
328
+ text_vectors = text_vectorizer(text_column)
329
+
330
+ return text_vectorizer, text_vectors
331
+
332
+ assert (
333
+ text_vectorizer is not None
334
+ ), "`text_vectorizer` argument must be not None."
335
+ assert isinstance(
336
+ text_vectorizer, TextVectorization
337
+ ), "`text_vectorizer` argument must be an instance of TextVectorization to infer vectors."
338
+ text_vectors = text_vectorizer(text_column)
339
+
340
+ return text_vectors
341
+
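+ # Typical call pattern for the TF-IDF vectorizer above (sketch; `train_facts` and
+ # `test_facts` are hypothetical pd.Series of anonymized case facts):
+ # prep = Preprocessor()
+ # text_vectorizer, train_vectors = prep.convert_text_to_vectors_tf_idf(train_facts)
+ # test_vectors = prep.convert_text_to_vectors_tf_idf(
+ #     test_facts, train=False, text_vectorizer=text_vectorizer)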
342
+ def convert_text_to_vectors_cnn(
343
+ self,
344
+ text_column: pd.Series,
345
+ max_tokens: int = 2000,
346
+ output_sequence_length: int = 500,
347
+ output_mode: str = "int",
348
+ train: bool = True,
349
+ text_vectorizer: TextVectorization = None,
350
+ ) -> Tuple[TextVectorization, tf.Tensor] | tf.Tensor:
351
+ """
352
+ Converting `text_column` to vectors using `TextVectorization` layer.
353
+
354
+ Parameters:
355
+ ------------
356
+ - text_column : pd.Series
357
+ Contains the case facts.
358
+ - max_tokens : int, optional
359
+ Defines the number of max_tokens of `text_vectorizer` (Default is 2000).
360
+ - output_sequence_length : int, optional
361
+ Represents the dimensions of the output vector (Default is 500).
362
+ - output_mode : str, optional
363
+ Represents the output vectors type whether it is "int" or "binary" or "tfi-df".
364
+ - train : bool, optional
365
+ Defines whether the model will be trained or not. (if True,
366
+ TextVectorization will be trained | else, TextVectorization will used the
367
+ passed `text_vectorizer`). (Default is True).
368
+ - text_vectorizer : TextVectorization, optional
369
+ Trained TextVectorization layer will be used for generating embeddings of
370
+ `text_column` if `train` is False. (Default is None).
371
+
372
+ Returns:
373
+ --------
374
+ - if `train` == True:
375
+ A tuple contains the following:
376
+ - text_vectorizer : TextVectorization
377
+ Trained TextVectorization layer.
378
+ - text_vectors : tf.Tensor
379
+ A Tensor contains `text_column` training vectors.
380
+ - otherwise:
381
+ text_vectors : tf.Tensor
382
+ A Tensor contains `text_column` testing vectors.
383
+
384
+ Raises:
385
+ -------
386
+ - AssertionError
387
+ If train is False and `text_vectorizer` is None.
388
+ - AssertionError
389
+ If train is False and `text_vectorizer` is not an instance of TextVectorization.
390
+ """
391
+
392
+ if train:
393
+ text_vectorizer = TextVectorization(
394
+ max_tokens=max_tokens,
395
+ output_mode=output_mode,
396
+ output_sequence_length=output_sequence_length,
397
+ )
398
+ text_vectorizer.adapt(text_column)
399
+ text_vectors = text_vectorizer(text_column)
400
+ return text_vectorizer, text_vectors
401
+
402
+ assert (
403
+ text_vectorizer is not None
404
+ ), "`text_vectorizer` argument must be not None."
405
+ assert isinstance(
406
+ text_vectorizer, TextVectorization
407
+ ), "`text_vectorizer` argument must be an instance of TextVectorization to infer vectors."
408
+ text_vectors = text_vectorizer(text_column)
409
+
410
+ return text_vectors
411
+
412
+ def convert_text_to_vectors_glove(
413
+ self,
414
+ text_column: pd.Series,
415
+ train: bool = True,
416
+ glove_tokenizer: Tokenizer = None,
417
+ vocab_size: int = 1000,
418
+ oov_token: str = "<OOV>",
419
+ max_length: int = 50,
420
+ padding_type: str = "post",
421
+ truncation_type: str = "post",
422
+ ) -> Tuple[Tokenizer, np.ndarray] | np.ndarray:
423
+ """
424
+ Converting `text_column` to vectors using `glove_tokenizer`.
425
+
426
+ Parameters:
427
+ ------------
428
+ - text_column : pd.Series
429
+ Contains the case facts.
430
+ - train : bool, optional
431
+ Defines whether the model will be trained or not. (if True,
432
+ Tokenizer will be trained | else, Tokenizer will use the
433
+ passed `glove_tokenizer`). (Default is True).
434
+ - glove_tokenizer : Tokenizer, optional
435
+ Trained Tokenizer layer will be used for generating embeddings of
436
+ `text_column` if `train` is False. (Default is None).
437
+ - vocab_size : int, optional
438
+ Represents the number of supported vocabulary of the Tokenizer,
439
+ any token not in this vocabulary will be treated as an out-of-vocabulary
440
+ token(OOV). (Default is 1000).
441
+ - oov_token : str, optional
442
+ Represents the token of an out-of-vocabulary token (Default is "<OOV>").
443
+ - max_length : int, optional
444
+ Defines the output vector's dimension. (Default is 50).
445
+ - padding_type : str, optional
446
+ Defines the padding type of the vectors, if the vector size is less than
447
+ `max_length`, the rest of the `max_length` will be padded with 0 (Default is "post").
448
+ - truncation_type : str, optional
449
+ Defines the truncation type of the vectors, if the vector size is more than
450
+ `max_length`, the extra of the `max_length` will be truncated (Default is "post").
451
+
452
+ Returns:
453
+ --------
454
+ - if `train` == True:
455
+ A tuple contains the following:
456
+ - glove_tokenizer : Tokenizer
457
+ Trained Tokenizer layer.
458
+ - text_padded : np.ndarray
459
+ An array contains `text_column` vectors.
460
+ - otherwise:
461
+ text_padded : np.ndarray
462
+ An array contains `text_column` vectors.
463
+
464
+ Raises:
465
+ -------
466
+ - AssertionError
467
+ If train is False and `glove_tokenizer` is None.
468
+ - AssertionError
469
+ If train is False and `glove_tokenizer` is not instance of Tokenizer.
470
+ """
471
+
472
+ if train:
473
+ glove_tokenizer = Tokenizer(
474
+ num_words=vocab_size, oov_token=oov_token)
475
+ glove_tokenizer.fit_on_texts(text_column)
476
+ text_sequences = glove_tokenizer.texts_to_sequences(text_column)
477
+ text_padded = pad_sequences(
478
+ text_sequences,
479
+ maxlen=max_length,
480
+ padding=padding_type,
481
+ truncating=truncation_type,
482
+ )
483
+
484
+ return glove_tokenizer, text_padded
485
+
486
+ assert (
487
+ glove_tokenizer is not None
488
+ ), "`glove_tokenizer` argument must be not None."
489
+ assert isinstance(
490
+ glove_tokenizer, Tokenizer
491
+ ), "`glove_tokenizer` argument must be an instance of Tokenizer."
492
+ text_sequences = glove_tokenizer.texts_to_sequences(text_column)
493
+ text_padded = pad_sequences(
494
+ text_sequences,
495
+ maxlen=max_length,
496
+ padding=padding_type,
497
+ truncating=truncation_type,
498
+ )
499
+
500
+ return text_padded
501
+
502
+ def balance_data(self, X_train: pd.Series, y_train: pd.Series) -> pd.DataFrame:
503
+ """
504
+ Balancing `X_train` and `y_train` to distribute the targets in `y_train` equally.
505
+
506
+ Parameters:
507
+ ------------
508
+ - X_train : pd.Series
509
+ Contains the training case facts.
510
+ - y_train : pd.Series
511
+ Contains the training targets.
512
+
513
+ Returns:
514
+ --------
515
+ - shuffled_balanced_df : pd.DataFrame
516
+ Contains the new balanced dataframe with shuffled indices.
517
+ """
518
+
519
+ df = pd.concat([X_train, y_train], axis=1)
520
+
521
+ first_party = df[df["winner_index"] == 0]
522
+ second_party = df[df["winner_index"] == 1]
523
+
524
+ upsample_second_party = resample(
525
+ second_party, replace=True, n_samples=len(first_party), random_state=42
526
+ )
527
+
528
+ upsample_df = pd.concat([upsample_second_party, first_party])
529
+
530
+ shuffled_indices = np.arange(upsample_df.shape[0])
531
+ np.random.shuffle(shuffled_indices)
532
+
533
+ shuffled_balanced_df = upsample_df.iloc[shuffled_indices, :]
534
+
535
+ return shuffled_balanced_df
536
+
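+ # Sketch of the effect of `balance_data` on a hypothetical imbalanced split: if `y_train`
+ # holds 1,200 cases with winner_index 0 and 800 with winner_index 1, the winner_index 1
+ # rows are resampled with replacement up to 1,200, giving a shuffled DataFrame of 2,400
+ # rows in which both classes are equally represented.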
537
+ def anonymize_data(
538
+ self,
539
+ first_party_names: pd.Series,
540
+ second_party_names: pd.Series,
541
+ text_column: pd.Series,
542
+ ) -> pd.Series:
543
+ """
544
+ Anonymize `text_column` by replacing `first_party_names` and
545
+ `second_party_names` with the "_PARTY_" tag.
546
+
547
+ Parameters:
548
+ ------------
549
+ - first_party_names : pd.Series
550
+ Contains all first party names needed to be anonymized.
551
+ - second_party_names : pd.Series
552
+ Contains all second party names needed to be anonymized.
553
+ - text_column : pd.Series
554
+ Contains all texts needed to be anonymized.
555
+
556
+ Returns:
557
+ --------
558
+ - all_anonymized_facts : pd.Series
559
+ Contains anonymized version of `text_column`.
560
+ """
561
+
562
+ all_anonymized_facts = []
563
+
564
+ for i in range(text_column.shape[0]):
565
+ facts = text_column.iloc[i]
566
+ first_party_name = first_party_names.iloc[i]
567
+ second_party_name = second_party_names.iloc[i]
568
+ anonymized_facts = self._anonymize_case_facts(
569
+ first_party_name, second_party_name, facts
570
+ )
571
+ all_anonymized_facts.append(anonymized_facts)
572
+
573
+ return pd.Series(all_anonymized_facts)
574
+
575
+ def preprocess_data(self, text_column: pd.Series) -> pd.Series:
576
+ """
577
+ Preprocessing & cleaning all texts in `text_column`.
578
+
579
+ Parameters:
580
+ ------------
581
+ - text_column : pd.Series
582
+ Contains all case facts.
583
+
584
+ Returns:
585
+ --------
586
+ - preprocessed_text : pd.Series
587
+ Contains all texts after being processed.
588
+ """
589
+
590
+ preprocessed_text = text_column.apply(self._preprocess_text)
591
+ return preprocessed_text
src/style.css ADDED
@@ -0,0 +1,94 @@
1
+ @import url('https://fonts.googleapis.com/css2?family=Cairo:wght@300;400;500;600;700;800&display=swap');
2
+
3
+ * {
4
+ font-family: 'Cairo', sans-serif !important;
5
+ }
6
+
7
+ /* title */
8
+ .e16nr0p30 {
9
+ font-weight: 700;
10
+ font-size: 30px;
11
+ }
12
+
13
+ /* buttons */
14
+ .edgvbvh10,
15
+ .edgvbvh5 {
16
+ width: 100%;
17
+ height: 40px;
18
+ background-color: #4756ff;
19
+ color: #fff;
20
+ transition: 0.4s;
21
+ border: none;
22
+ }
23
+
24
+ .edgvbvh10:hover,
25
+ .edgvbvh5:hover {
26
+ background-color: #3747fd;
27
+ color: #fff;
28
+ border: none;
29
+ }
30
+
31
+ .edgvbvh10:focus,
32
+ .edgvbvh5:focus {
33
+ background-color: #3747fd;
34
+ color: #fff !important;
35
+ box-shadow: none;
36
+ border: none;
37
+ }
38
+
39
+ /* header */
40
+ .row_heading {
41
+ font-size: 14px;
42
+ }
43
+
44
+ /* spinner */
45
+ .css-1y04v0k.e17lx80j1,
46
+ .css-p6380s.e17lx80j1 {
47
+ margin: 0px;
48
+ border-color: #34e27f #b3b3b333 #cacaca33 !important;
49
+ -webkit-box-flex: 0;
50
+ flex-grow: 0;
51
+ flex-shrink: 0;
52
+ }
53
+
54
+ /* inputs styling */
55
+ .st-bf {
56
+ transition: 0.8s;
57
+ border: none !important;
58
+ }
59
+
60
+ .st-bf:hover {
61
+ box-shadow: 0 0 0 4px #dbdbdb !important;
62
+ }
63
+
64
+ /* text stylings */
65
+ .highlight-petitioner {
66
+ border-radius: 0.4rem;
67
+ background-color: rgba(253, 231, 142, 0.4);
68
+ color: #ffd061;
69
+ padding: 1px 5px;
70
+ margin-top: 10px;
71
+ margin-right: 5px;
72
+ }
73
+
74
+ .highlight-respondent {
75
+ border-radius: 0.4rem;
76
+ background-color: rgba(78, 170, 255, 0.2);
77
+ color: #6195ff;
78
+ padding: 1px 5px;
79
+ margin-top: 10px;
80
+ margin-right: 5px;
81
+ }
82
+
83
+ .bold-text {
84
+ font-weight: 700 !important;
85
+ }
86
+
87
+ .text-facts {
88
+ line-height: 40px;
89
+ }
90
+
91
+ /* footer */
92
+ footer {
93
+ display: none !important;
94
+ }
src/utils.py ADDED
@@ -0,0 +1,389 @@
1
+ from typing import Callable, List, Tuple
2
+
3
+ import numpy as np
4
+ import pandas as pd
5
+
6
+ from gensim.models.doc2vec import Doc2Vec, TaggedDocument
7
+
8
+ import tensorflow as tf
9
+ from tensorflow import keras
10
+ from keras.preprocessing.text import Tokenizer
11
+
12
+
13
+ def read_data(filepath="../csvs/"):
14
+ """
15
+ Reading CSV files of the dataset.
16
+
17
+ Parameters:
18
+ ----------
19
+ - filepath : str
20
+ Defines the path that contains the CSV files.
21
+
22
+ Returns:
23
+ --------
24
+ A tuple contains the following:
25
+ - X_train : pd.DataFrame
26
+ - X_test : pd.DataFrame
27
+ - y_train : pd.Series
28
+ - y_test : pd.Series
29
+ """
30
+
31
+ X_train = pd.read_csv(filepath + "X_train.csv")
32
+ X_train = X_train.iloc[:, 1:]
33
+
34
+ X_test = pd.read_csv(filepath + "X_test.csv")
35
+ X_test = X_test.iloc[:, 1:]
36
+
37
+ y_train = pd.read_csv(filepath + "y_train.csv")
38
+ y_train = y_train.iloc[:, 1:]
39
+
40
+ y_test = pd.read_csv(filepath + "y_test.csv")
41
+ y_test = y_test.iloc[:, 1:]
42
+
43
+ return X_train, X_test, y_train, y_test
44
+
45
+
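+ # Layout assumption behind the slicing above: each CSV's first column is an unnamed
+ # pandas index that `iloc[:, 1:]` drops, so X_train/X_test keep the party-name and facts
+ # columns while y_train/y_test reduce to the single `winner_index` column.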
46
+ def train_model(
47
+ model_building_func: Callable[[], keras.models.Sequential],
48
+ X_train_vectors: pd.DataFrame | np.ndarray | tf.Tensor,
49
+ y_train: pd.Series,
50
+ k: int = 4,
51
+ num_epochs: int = 30,
52
+ batch_size: int = 64,
53
+ ) -> Tuple[
54
+ List[keras.models.Sequential],
55
+ List[List[float]],
56
+ List[List[float]],
57
+ List[List[float]],
58
+ List[List[float]],
59
+ ]:
60
+ """
61
+ Trains a model on `X_train_vectors` and `y_train` using k-fold cross-validation.
62
+
63
+ Parameters:
64
+ -----------
65
+ - model_building_func : Callable[[], tf.keras.models.Sequential]
66
+ A function that builds and compiles a Keras Sequential model.
67
+ - X_train_vectors : pd.DataFrame
68
+ The training input data.
69
+ - y_train : pd.Series
70
+ The training target data.
71
+ - k : int, optional
72
+ The number of folds for cross-validation (default is 4).
73
+ - num_epochs : int, optional
74
+ The number of epochs to train for (default is 30).
75
+ - batch_size : int, optional
76
+ The batch size to use during training (default is 64).
77
+
78
+ Returns:
79
+ --------
80
+ A tuple containing the following items:
81
+ - all_models : List[keras.models.Sequential]
82
+ A list of `k` trained models.
83
+ - all_losses : List[List[float]]
84
+ A `k` by `num_epochs` list containing the training losses for each fold.
85
+ - all_val_losses : List[List[float]]
86
+ A `k` by `num_epochs` list containing the validation losses for each fold.
87
+ - all_acc : List[List[float]]
88
+ A `k` by `num_epochs` list containing the training accuracies for each fold.
89
+ - all_val_acc : List[List[float]]
90
+ A `k` by `num_epochs` list containing the validation accuracies for each fold.
91
+ """
92
+
93
+ num_validation_samples = len(X_train_vectors) // k
94
+
95
+ all_models = []
96
+ all_losses = []
97
+ all_val_losses = []
98
+ all_accuracies = []
99
+ all_val_accuracies = []
100
+
101
+ for fold in range(k):
102
+ print(f"fold: {fold+1}")
103
+ validation_data = X_train_vectors[
104
+ num_validation_samples * fold : num_validation_samples * (fold + 1)
105
+ ]
106
+ validation_targets = y_train[
107
+ num_validation_samples * fold : num_validation_samples * (fold + 1)
108
+ ]
109
+
110
+ training_data = np.concatenate(
111
+ [
112
+ X_train_vectors[: num_validation_samples * fold],
113
+ X_train_vectors[num_validation_samples * (fold + 1) :],
114
+ ]
115
+ )
116
+ training_targets = np.concatenate(
117
+ [
118
+ y_train[: num_validation_samples * fold],
119
+ y_train[num_validation_samples * (fold + 1) :],
120
+ ]
121
+ )
122
+
123
+ model = model_building_func()
124
+ history = model.fit(
125
+ training_data,
126
+ training_targets,
127
+ validation_data=(validation_data, validation_targets),
128
+ epochs=num_epochs,
129
+ batch_size=batch_size,
130
+ )
131
+
132
+ all_models.append(model)
133
+ all_losses.append(history.history["loss"])
134
+ all_val_losses.append(history.history["val_loss"])
135
+ all_accuracies.append(history.history["accuracy"])
136
+ all_val_accuracies.append(history.history["val_accuracy"])
137
+
138
+ return (all_models, all_losses, all_val_losses, all_accuracies, all_val_accuracies)
139
+
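+ # Hypothetical usage of the k-fold helper above, assuming `build_tfidf_model` is a function
+ # that returns a freshly compiled keras Sequential model:
+ # models, losses, val_losses, accs, val_accs = train_model(
+ #     build_tfidf_model, X_train_vectors, y_train, k=4, num_epochs=30, batch_size=64)
+ # With k=4, each fold holds out len(X_train_vectors) // 4 samples for validation and
+ # trains a new model on the remaining three quarters.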
140
+
141
+ def print_testing_loss_accuracy(
142
+ all_models: List[keras.models.Sequential],
143
+ X_test_vectors: pd.DataFrame | np.ndarray | tf.Tensor,
144
+ y_test: pd.Series,
145
+ ) -> None:
146
+ """
147
+ Displaying testing loss and testing accuracy of each model in `all_models`,
148
+ and displaying their average.
149
+
150
+ Parameters:
151
+ ------------
152
+ - all_models : List[keras.models.Sequential]
153
+ A list of size `k` contains trained models.
154
+ - X_test_vectors : pd.DataFrame
155
+ Contains testing vectors.
156
+ - y_test : pd.Series
157
+ Contains testing labels.
158
+ """
159
+
160
+ sum_testing_losses = 0.0
161
+ sum_testing_accuracies = 0.0
162
+
163
+ for i, model in enumerate(all_models):
164
+ print(f"model: {i+1}")
165
+ loss_accuracy = model.evaluate(X_test_vectors, y_test, verbose=1)
166
+ sum_testing_losses += loss_accuracy[0]
167
+ sum_testing_accuracies += loss_accuracy[1]
168
+ print("====" * 20)
169
+
170
+ num_models = len(all_models)
171
+ avg_testing_loss = sum_testing_losses / num_models
172
+ avg_testing_acc = sum_testing_accuracies / num_models
173
+ print(f"average testing loss: {avg_testing_loss:.3f}")
174
+ print(f"average testing accuracy: {avg_testing_acc:.3f}")
175
+
176
+
177
+ def calculate_average_measures(
178
+ all_losses: list[list[float]],
179
+ all_val_losses: list[list[float]],
180
+ all_accuracies: list[list[float]],
181
+ all_val_accuracies: list[list[float]],
182
+ ) -> Tuple[
184
+ List[float],
185
+ List[float],
186
+ List[float],
187
+ List[float],
188
+ ]:
189
+ """
190
+ Calculate the average measures of cross-validated results.
191
+
192
+ Parameters:
193
+ ------------
194
+ - all_losses : List[List[float]]
195
+ A `k` by `num_epochs` list contains the values of training losses.
196
+ - all_val_losses : List[List[float]]
197
+ A `k` by `num_epochs` list contains the values of validation losses.
198
+ - all_accuracies : List[List[float]]
199
+ A `k` by `num_epochs` list contains the values of training accuracies.
200
+ - all_val_accuracies : List[List[float]]
201
+ A `k` by `num_epochs` list contains the values of validation accuracies.
202
+
203
+ Returns:
204
+ --------
205
+ A tuple containing the following items:
206
+ - avg_loss_hist : List[float]
207
+ A list of length `num_epochs` contains the average of training losses.
208
+ - avg_val_loss_hist : List[float]
209
+ A list of length `num_epochs` contains the average of validaton losses.
210
+ - avg_acc_hist : List[float]
211
+ A list of length `num_epochs` contains the average of training accuracies.
212
+ - avg_val_acc_hist : List[float]
213
+ A list of length `num_epochs` contains the average of validation accuracies.
214
+ """
215
+
216
+ num_epochs = len(all_losses[0])
217
+ avg_loss_hist = [np.mean([x[i] for x in all_losses]) for i in range(num_epochs)]
218
+ avg_val_loss_hist = [
219
+ np.mean([x[i] for x in all_val_losses]) for i in range(num_epochs)
220
+ ]
221
+ avg_acc_hist = [np.mean([x[i] for x in all_accuracies]) for i in range(num_epochs)]
222
+ avg_val_acc_hist = [
223
+ np.mean([x[i] for x in all_val_accuracies]) for i in range(num_epochs)
224
+ ]
225
+
226
+ return (avg_loss_hist, avg_val_loss_hist, avg_acc_hist, avg_val_acc_hist)
227
+
228
+
229
+ class Doc2VecModel:
230
+ """Responsible of creating, initializing, and training Doc2Vec embeddings model."""
231
+
232
+ def __init__(self, vector_size=50, min_count=2, epochs=100, dm=1, window=5) -> None:
233
+ """
234
+ Initialize a Doc2Vec model.
235
+
236
+ Parameters:
237
+ ------------
238
+ - vector_size : int, optional
239
+ Dimensionality of the feature vectors (Default is 50).
240
+ - min_count : int, optional
241
+ Ignores all words with total frequency lower than this (Default is 2).
242
+ - epochs : int, optional
243
+ Represents the number of training epochs (Default is 100).
244
+ - dm : int, optional
245
+ Defines the training algorithm. If `dm=1`, 'distributed memory' (PV-DM) is used.
246
+ Otherwise, `distributed bag of words` (PV-DBOW) is employed (Default is 1).
247
+ - window : int, optional
248
+ The maximum distance between the current and predicted word within a
249
+ sentence (Default is 5).
250
+ """
251
+
252
+ self.doc2vec_model = Doc2Vec(
253
+ vector_size=vector_size,
254
+ min_count=min_count,
255
+ epochs=epochs,
256
+ dm=dm,
257
+ seed=865,
258
+ window=window,
259
+ )
260
+
261
+ def train_doc2vec_embeddings_model(
262
+ self, tagged_docs_train: List[TaggedDocument]
263
+ ) -> Doc2Vec:
264
+ """
265
+ Train Doc2Vec model on `tagged_docs_train`.
266
+
267
+ Parameters:
268
+ ------------
269
+ - tagged_docs_train : list[TaggedDocument]
270
+ Contains the required format of training Doc2Vec model.
271
+
272
+ Returns:
273
+ --------
274
+ - doc2vec_model : Doc2Vec
275
+ The trained Doc2Vec model.
276
+ """
277
+
278
+ self.doc2vec_model.build_vocab(tagged_docs_train)
279
+ self.doc2vec_model.train(
280
+ tagged_docs_train,
281
+ total_examples=self.doc2vec_model.corpus_count,
282
+ epochs=self.doc2vec_model.epochs,
283
+ )
284
+
285
+ return self.doc2vec_model
286
+
287
+
288
+ class GloveModel:
289
+ """Responsible for creating and generating the glove embedding layer"""
290
+
291
+ def __init__(self) -> None:
292
+ pass
293
+
294
+ def _generate_glove_embedding_index(
295
+ self, glove_file_path: str = "GloVe/glove.6B.50d.txt"
296
+ ) -> dict:
297
+ """
298
+ Responsible for generating glove embedding index.
299
+
300
+ Parameters:
301
+ ------------
302
+ - glove_file_path : str
303
+ Defines the path of the pretrained GloVe embeddings text file
304
+ (Default is "GloVe/glove.6B.50d.txt").
305
+
306
+ Returns:
307
+ --------
308
+ - embedding_index : dict
309
+ Contains each word as a key, and its coefficients as a value.
310
+ """
311
+
312
+ embeddings_index = {}
313
+ with open(glove_file_path, encoding="utf8") as f:
314
+ for line in f:
315
+ values = line.split()
316
+ word = values[0]
317
+ coefs = np.asarray(values[1:], dtype="float32")
318
+ embeddings_index[word] = coefs
319
+
320
+ return embeddings_index
321
+
322
+ def _generate_glove_embedding_matrix(
323
+ self, word_index: dict, embedding_index: dict, max_length: int
324
+ ) -> np.ndarray:
325
+ """
326
+ Generating embedding matrix of each word in `word_index`.
327
+
328
+ Parameters:
329
+ -----------
330
+ - word_index : dict
331
+ Contains words as keys with their indices as values.
332
+ - embedding_index : dict
333
+ Contains each word as a key, and its coefficients as a value.
334
+ - max_length : int
335
+ Defines the size of the embedding vector of each word in the
336
+ embedding matrix.
337
+
338
+ Returns:
339
+ --------
340
+ - embedding_matrix : np.ndarray
341
+ Contains all embedding vectors for each word in `word_index`.
342
+ """
343
+
344
+ embedding_matrix = np.zeros((len(word_index) + 1, max_length))
345
+
346
+ for word, i in word_index.items():
347
+ embedding_vector = embedding_index.get(word)
348
+ if embedding_vector is not None:
349
+ embedding_matrix[i] = embedding_vector
350
+
351
+ return embedding_matrix
352
+
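+ # Sketch of the matrix above: if `word_index` maps "court" -> 7 and the GloVe file has a
+ # 50-d vector for "court", row 7 of `embedding_matrix` holds that vector; words missing
+ # from GloVe keep their all-zero row, and row 0 stays zero since Tokenizer indices start at 1.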
353
+ def generate_glove_embedding_layer(
354
+ self, glove_tokenizer: Tokenizer, max_length: int = 50
355
+ ) -> keras.layers.Embedding:
356
+ """
357
+ Create GloVe embedding layer for later usage in the neural network.
358
+
359
+ Parameters:
360
+ ----------
361
+ - glove_tokenizer : Tokenizer
362
+ Trained tokenizer on training data to extract word index from it.
363
+ - max_length : int, optional
364
+ Defines the maximum length of the output embedding vector for
365
+ each word. (Default is 50).
366
+
367
+ Returns:
368
+ --------
369
+ - embedding_layer : keras.layers.Embedding
370
+ An embedding layer of size `word index + 1` by `max_length` with
371
+ trained weights that can be used as a vectorizer of case facts.
372
+ """
373
+
374
+ word_index = glove_tokenizer.word_index
375
+
376
+ embedding_index = self._generate_glove_embedding_index()
377
+ embedding_matrix = self._generate_glove_embedding_matrix(
378
+ word_index, embedding_index, max_length
379
+ )
380
+
381
+ embedding_layer = keras.layers.Embedding(
382
+ len(word_index) + 1,
383
+ max_length,
384
+ weights=[embedding_matrix],
385
+ input_length=max_length,
386
+ trainable=False,
387
+ )
388
+
389
+ return embedding_layer