{
    "paper_id": "2020",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T02:00:01.687705Z"
    },
    "title": "What's The Latest? A Question-driven News Chatbot",
    "authors": [
        {
            "first": "Philippe",
            "middle": [],
            "last": "Laban",
            "suffix": "",
            "affiliation": {},
            "email": ""
        },
        {
            "first": "John",
            "middle": [],
            "last": "Canny",
            "suffix": "",
            "affiliation": {},
            "email": "canny@berkeley.edu"
        },
        {
            "first": "Marti",
            "middle": [
                "A"
            ],
            "last": "Hearst",
            "suffix": "",
            "affiliation": {},
            "email": "hearst@berkeley.edu"
        },
        {
            "first": "U",
            "middle": [
                "C"
            ],
            "last": "Berkeley",
            "suffix": "",
            "affiliation": {},
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "This work describes an automatic news chatbot that draws content from a diverse set of news articles and creates conversations with a user about the news. Key components of the system include the automatic organization of news articles into topical chatrooms, integration of automatically generated questions into the conversation, and a novel method for choosing which questions to present which avoids repetitive suggestions. We describe the algorithmic framework and present the results of a usability study that shows that news readers using the system successfully engage in multi-turn conversations about specific news stories.",
    "pdf_parse": {
        "paper_id": "2020",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "This work describes an automatic news chatbot that draws content from a diverse set of news articles and creates conversations with a user about the news. Key components of the system include the automatic organization of news articles into topical chatrooms, integration of automatically generated questions into the conversation, and a novel method for choosing which questions to present which avoids repetitive suggestions. We describe the algorithmic framework and present the results of a usability study that shows that news readers using the system successfully engage in multi-turn conversations about specific news stories.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Chatbots offer the ability for interactive information access, which could be of great value in the news domain. As a user reads through news content, interaction could enable them to ask clarifying questions and go in depth on selected subjects. Current news chatbots have minimal capabilities, with content hand-crafted by members of news organizations, and cannot accept free-form questions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "To address this need, we design a new approach to interacting with large news collections. We designed, built, and evaluated a fully automated news chatbot that bases its content on a stream of news articles from a diverse set of English news sources. This in itself is a novel contribution.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Our second contribution is with respect to the scoping of the chatbot conversation. The system organizes the news articles into chatrooms, each revolving around a story, which is a set of automatically grouped news articles about a topic (e.g., articles related to Brexit).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The third contribution is a method to keep track of the state of the conversation to avoid repetition of information. For each news story, we first generate a set of essential questions and link each question with content that answers it. The motivating idea is: two pieces of content are redundant if they answer the same questions. As the user reads content, the system tracks which questions are answered (directly or indirectly) with the content read so far, and which remain unanswered. We evaluate the system through a usability study.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The remainder of this paper is structured as follows. Section 2 describes the system and the content sources, Section 3 describes the algorithm for keeping track of the conversation state, Section 4 provides the results of a usability study evaluation and Section 5 presents relevant prior work.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The system is publicly available at https://newslens.berkeley.edu/ and a demonstration video is available at this link: https://www.youtube.com/watch?v=eze9hpEPUgo.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "This section describes the components of the chatbot: the content source, the user interface, the supported user actions and the computed system answers. Appendix A lists library and data resources used in the system.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "System Description",
                "sec_num": "2"
            },
            {
                "text": "We form the content for the chatbot from a set of news sources. We have collected an average of 2,000 news articles per day from 20 international news sources starting in 2010. The news articles are clustered into stories: groups of news articles about a similar evolving topic, and each story is automatically named (Laban and Hearst, 2017). Some of the top stories at the time of writing are shown in Figure 1(a).",
                "cite_spans": [
                    {
                        "start": 317,
                        "end": 341,
                        "text": "(Laban and Hearst, 2017)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 404,
                        "end": 412,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Content Sources",
                "sec_num": "2.1"
            },
            {
                "text": "The chatbot supports information-seeking: the user is seeking information and the system delivers information in the form of news content. The homepage (Figure 1(a)) lists the most active stories, and a user can select a story to enter its respective chatroom (Figure 1(b)). The separation into story-specific rooms achieves two objectives:",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 154,
                        "end": 166,
                        "text": "(Figure 1(a)",
                        "ref_id": "FIGREF0"
                    },
                    {
                        "start": 263,
                        "end": 275,
                        "text": "(Figure 1(b)",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "User Interface",
                "sec_num": "2.2"
            },
            {
                "text": "(1) clarity for the user, who can leave a chatroom and later return to the same conversation, and (2) a limited scope for each dialogue, which is helpful from both a usability and a technical standpoint, as it reduces ambiguity and search scope. For example, answering a question like \"What is the total cost to insurers so far?\" is easier when the scope is known to be the Australia Fires story rather than all of the news.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "User Interface",
                "sec_num": "2.2"
            },
            {
                "text": "Articles in a story are grouped into events, corresponding to an action that occurred in a particular time and place. For each event, the system forms an event message by combining the event's news article headlines generated by an abstractive summarizer model (Laban et al., 2020) .",
                "cite_spans": [
                    {
                        "start": 261,
                        "end": 281,
                        "text": "(Laban et al., 2020)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "User Interface",
                "sec_num": "2.2"
            },
            {
                "text": "Zone 2 in Figure 1(b) gives an example of an event message. The event messages form a chronological timeline in the story.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 10,
                        "end": 21,
                        "text": "Figure 1(b)",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "User Interface",
                "sec_num": "2.2"
            },
            {
                "text": "Because of the difference in respective roles, we expect user messages to be shorter than system responses, which we aim to be around 30 words.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "User Interface",
                "sec_num": "2.2"
            },
            {
                "text": "During the conversation, the user can choose among different kinds of actions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "User Actions",
                "sec_num": "2.3"
            },
            {
                "text": "Explore the event timeline. A chatroom conversation starts with the system showing the two most recent event messages of the story (Figure 1(b) ). These messages give minimal context to the user necessary to start a conversation. When the event timeline holds more than two events, a \"See previous events\" button is added at the top of the conversation, allowing the user to go further back in the event timeline of the story.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 131,
                        "end": 143,
                        "text": "(Figure 1(b)",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "User Actions",
                "sec_num": "2.3"
            },
            {
                "text": "Clarify a concept. The user can ask a clarification question regarding a person or organization (e.g., Who is Dennis Muilenburg?), a place (e.g., Where is Lebanon?) or an acronym (e.g., What does NATO stand for?). For a predetermined list of such questions, the system checks whether an appropriate Wikipedia entry exists and, if so, responds with content from that entry. Ask an open-ended question. A text box (Zone 4 in Figure 1(b)) can be used to ask any free-form question about the story. A Q&A system described in Section 3 attempts to find the answer in any paragraph of any news article of the story. If the Q&A system is sufficiently confident that at least one paragraph contains an answer to the question, the chatbot answers using one of those paragraphs, and the selected answer span is bolded in the system reply. Figure 1(c) shows several Q&A exchanges.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 391,
                        "end": 399,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    },
                    {
                        "start": 815,
                        "end": 823,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "User Actions",
                "sec_num": "2.3"
            },
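The answer-selection step described above can be sketched as follows. This is a minimal illustration rather than the exact implementation: the checkpoint name (deepset/roberta-base-squad2) and the confidence threshold are assumptions standing in for the Roberta-based Q&A model and cut-off used by the system.

```python
# Sketch: answer a free-form question by scanning every paragraph of a story
# with an extractive Q&A model and keeping the most confident answer.
# Model checkpoint and threshold are illustrative assumptions.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
CONFIDENCE_THRESHOLD = 0.5  # assumed cut-off below which the bot declines to answer


def answer_question(question, story_paragraphs):
    """Return (paragraph, answer_span) from the best-scoring paragraph, or None."""
    best = None
    for paragraph in story_paragraphs:
        result = qa(question=question, context=paragraph)
        if best is None or result["score"] > best["score"]:
            best = {"paragraph": paragraph, **result}
    if best is not None and best["score"] >= CONFIDENCE_THRESHOLD:
        # The chatbot replies with the paragraph and bolds best["answer"] in it.
        return best["paragraph"], best["answer"]
    return None
```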
            {
                "text": "Select a recommended question. A list of three questions generated by the algorithm described in Section 3 is suggested to the user at the bottom of the conversation (Zone 3 in Figure 1(b)).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 177,
                        "end": 185,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "User Actions",
                "sec_num": "2.3"
            },
            {
                "text": "Clicking on a recommended question corresponds to asking that question in free-form. However, because recommended questions are known in advance, we pre-compute their answers to minimize user waiting times.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "User Actions",
                "sec_num": "2.3"
            },
            {
                "text": "One key problem in dialogue systems is that of keeping track of conveyed information, and avoiding repetition in system replies (see example in Figure 2 ). This problem is amplified in the news setting, where different news organizations cover content redundantly.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 144,
                        "end": 152,
                        "text": "Figure 2",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Conversation State",
                "sec_num": "3"
            },
            {
                "text": "We propose a solution that takes advantage of a Question and Answer (Q&A) system. As noted above, the motivating idea is that two pieces of content are redundant if they answer the same questions. In the example of Figure 2 , both system messages answer the same set of questions, namely: \"When did the fires start?\", \"How many people have died?\" and \"How many hectares have burned?\", and can therefore be considered redundant. Our procedure to track the knowledge state of a news conversation consists of the following steps: (1) generate candidate questions spanning the knowledge in the story, (2) build a graph connecting paragraphs with questions they answer, (3) during a conversation, use the graph to track what questions have been answered already, and avoid using paragraphs that do not answer new questions.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 215,
                        "end": 223,
                        "text": "Figure 2",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Conversation State",
                "sec_num": "3"
            },
            {
                "text": "Question Candidate Generation. We fine-tune a GPT2 language model (Radford et al., 2019) on the task of question generation using the SQuAD 2.0 dataset (Rajpurkar et al., 2018) . At training, the model reads a paragraph from the training set, and learns to generate a question associated with the paragraph. For each paragraph in each article of the story (the paragraph set), we use beam search to generate K candidate questions. In our experience, using a large beam size (K=20) is important, as one paragraph can yield several valid questions. Beam search enforces exploration, with the first step of beam search often containing several interrogative words (what, where...).",
                "cite_spans": [
                    {
                        "start": 66,
                        "end": 88,
                        "text": "(Radford et al., 2019)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 152,
                        "end": 176,
                        "text": "(Rajpurkar et al., 2018)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conversation State",
                "sec_num": "3"
            },
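A minimal sketch of this question-candidate generation step with the Transformers library; the checkpoint path and the paragraph-to-question prompt format are illustrative assumptions, since the paper only states that GPT-2 was fine-tuned for question generation on SQuAD 2.0.

```python
# Sketch: generate K candidate questions per paragraph with beam search.
# "path/to/qgen-gpt2" and the "<paragraph> <SEP> <question>" format assumed to
# have been used at fine-tuning time are not the paper's exact setup.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("path/to/qgen-gpt2")  # hypothetical checkpoint
model = GPT2LMHeadModel.from_pretrained("path/to/qgen-gpt2")


def generate_questions(paragraph, k=20, max_question_tokens=20):
    prompt = paragraph + " <SEP> "  # assumed separator between context and question
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=800)
    outputs = model.generate(
        **inputs,
        num_beams=k,                 # large beam (K=20) so one paragraph yields many questions
        num_return_sequences=k,
        max_new_tokens=max_question_tokens,
        early_stopping=True,
    )
    prompt_len = inputs["input_ids"].shape[1]
    return [
        tokenizer.decode(out[prompt_len:], skip_special_tokens=True).strip()
        for out in outputs
    ]
```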
            {
                "text": "For a given paragraph, we reduce the set of questions by deduplicating questions that are lexically close (differ by at most 2 words), and removing questions that are too long (>12 words) or too short (<5 words).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conversation State",
                "sec_num": "3"
            },
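The filtering just described can be sketched as follows; the paper does not specify the lexical-distance measure, so a simple symmetric bag-of-words difference is assumed here as one reading of "differ by at most 2 words".

```python
# Sketch: prune a paragraph's candidate questions.
# Drop questions that are too short (<5 words) or too long (>12 words), and
# collapse near-duplicates that differ by at most 2 words.
from collections import Counter


def word_difference(q1, q2):
    """Symmetric bag-of-words difference between two questions (assumed metric)."""
    c1, c2 = Counter(q1.lower().split()), Counter(q2.lower().split())
    return sum(((c1 - c2) + (c2 - c1)).values())


def filter_questions(questions, min_len=5, max_len=12, max_diff=2):
    kept = []
    for q in questions:
        n_words = len(q.split())
        if n_words < min_len or n_words > max_len:
            continue
        if any(word_difference(q, k) <= max_diff for k in kept):
            continue  # lexically too close to a question already kept
        kept.append(q)
    return kept
```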
            {
                "text": "Building the P/Q graph. We train a standard Q&A model, a Roberta model (Liu et al., 2019) finetuned on SQuAD 2.0 (Rajpurkar et al., 2018) , and use this model to build a paragraph / question bipartite graph (P/Q graph). In the P/Q graph, we connect any paragraph (P node), with a question (Q node), if the Q&A model is confident that paragraph P answers question Q. An example bipartite graph obtained is illustrated in Figure 3 , with the question set on the left, the paragraph set on the right, and edges between them representing model confidence about the answer.",
                "cite_spans": [
                    {
                        "start": 71,
                        "end": 89,
                        "text": "(Liu et al., 2019)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 113,
                        "end": 137,
                        "text": "(Rajpurkar et al., 2018)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 420,
                        "end": 428,
                        "text": "Figure 3",
                        "ref_id": "FIGREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Conversation State",
                "sec_num": "3"
            },
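A minimal sketch of the P/Q graph construction, under the same assumptions as the earlier Q&A sketch (illustrative checkpoint, hand-chosen answerability threshold), storing the bipartite graph as adjacency sets.

```python
# Sketch: build the bipartite paragraph/question (P/Q) graph. An edge (p, q) is
# added when the Q&A model is confident that paragraph p answers question q.
# Checkpoint name and threshold are assumptions.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
ANSWER_THRESHOLD = 0.5


def build_pq_graph(paragraphs, questions):
    """Return adjacency sets p_to_q and q_to_p for the P/Q graph."""
    p_to_q = {i: set() for i in range(len(paragraphs))}
    q_to_p = {q: set() for q in questions}
    for i, paragraph in enumerate(paragraphs):
        for q in questions:
            if qa(question=q, context=paragraph)["score"] >= ANSWER_THRESHOLD:
                p_to_q[i].add(q)
                q_to_p[q].add(i)
    return p_to_q, q_to_p
```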
            {
                "text": "Because we used a large beam-size when generating the questions, we perform a pruning step on the questions set. Our pruning procedure is based on the realization that two questions are redundant if they connect to the same subset of paragraphs (they cover the same content). Our objective is to find the smallest set of questions that cover all paragraphs. This problem can be formulated as a standard graph theory problem known as the set cover problem, and we use a standard heuristic algorithm (Caprara et al., 1999) . After pruning, we obtain a final P/Q graph, a subgraph of the original consisting only of the covering set questions.",
                "cite_spans": [
                    {
                        "start": 498,
                        "end": 520,
                        "text": "(Caprara et al., 1999)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conversation State",
                "sec_num": "3"
            },
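The system relies on a standard heuristic via the SetCoverPy package (Appendix A); the plain-Python greedy set-cover sketch below is an illustrative equivalent of that pruning step, not the exact algorithm of Caprara et al.

```python
# Sketch: greedy set cover over the P/Q graph. Repeatedly keep the question
# that answers the most not-yet-covered paragraphs until every paragraph that
# is answerable by some question is covered.
def greedy_question_cover(q_to_p):
    """q_to_p maps each question to the set of paragraph ids that answer it."""
    uncovered = set().union(*q_to_p.values()) if q_to_p else set()
    chosen = []
    while uncovered:
        best_q = max(q_to_p, key=lambda q: len(q_to_p[q] & uncovered))
        gain = q_to_p[best_q] & uncovered
        if not gain:
            break
        chosen.append(best_q)
        uncovered -= gain
    return chosen  # covering question set; the pruned P/Q graph keeps only these
```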
            {
                "text": "The P/Q graph embodies interesting properties. First, the degree of a question node measures how often a question is answered by distinct paragraphs, providing a measure of the question's importance to the story. The degree of a paragraph node indicates how many distinct questions it answers, an estimate of its relevance to a potential reader. Finally, the graph can be used to measure question relatedness: if two questions have non-empty neighboring sets (i.e., some paragraphs answer both questions), they are likely to be related questions, which can be used as a way to suggest follow-up questions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conversation State",
                "sec_num": "3"
            },
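These properties reduce to node degrees and shared neighborhoods in the P/Q graph; a small sketch, using the adjacency-set representation assumed above:

```python
# Sketch: graph statistics over the pruned P/Q graph (adjacency-set form).
def question_importance(q, q_to_p):
    return len(q_to_p[q])            # number of distinct paragraphs answering q


def paragraph_relevance(p, p_to_q):
    return len(p_to_q[p])            # number of distinct questions p answers


def related_questions(q, q_to_p):
    # Questions sharing at least one answering paragraph with q (follow-up candidates).
    return [q2 for q2 in q_to_p if q2 != q and q_to_p[q] & q_to_p[q2]]
```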
            {
                "text": "Using the P/Q graph. At the start of a conversation, no question is answered, since no paragraph has been shown to the user. Therefore, the system initializes a blank P/Q graph (left graph in Figure 3 ). As the system reveals paragraphs in the conversation, they are marked as read in the P/Q graph (shaded blue paragraphs in the right graph of Figure 3) . According to our Q&A model, any question connected to a read paragraph is answered, so we mark all neighbors of read paragraphs as answered questions (shaded blue questions on the right graph of Figure 3) . At any stage in the conversation, if a paragraph is connected to only answered questions, it is deemed uninformative, as it will not reveal the answer to a new question.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 192,
                        "end": 200,
                        "text": "Figure 3",
                        "ref_id": "FIGREF3"
                    },
                    {
                        "start": 345,
                        "end": 354,
                        "text": "Figure 3)",
                        "ref_id": "FIGREF3"
                    },
                    {
                        "start": 552,
                        "end": 561,
                        "text": "Figure 3)",
                        "ref_id": "FIGREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Conversation State",
                "sec_num": "3"
            },
            {
                "text": "As the conversation moves along, more paragraphs are read, increasing the number of answered questions, which in turn, increases the number of uninformative paragraphs. We program the system to prioritize paragraphs that answer the most unanswered questions, and disregard uninformative paragraphs. We further use the P/Q graph to recommend questions to the user. We select unanswered questions and prioritize questions connected to more unread paragraphs, recommending questions three at a time.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conversation State",
                "sec_num": "3"
            },
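A minimal sketch of this conversation-state tracking and selection logic, again assuming the adjacency-set representation of the P/Q graph; the class and method names are illustrative, not the system's actual API.

```python
# Sketch: track conversation state with the P/Q graph. Paragraphs shown to the
# user mark their neighboring questions as answered; a paragraph whose questions
# are all answered is uninformative and is never shown.
class ConversationState:
    def __init__(self, p_to_q, q_to_p):
        self.p_to_q, self.q_to_p = p_to_q, q_to_p
        self.read_paragraphs = set()
        self.answered_questions = set()

    def mark_read(self, p):
        self.read_paragraphs.add(p)
        self.answered_questions |= self.p_to_q[p]

    def next_paragraph(self):
        """Unread paragraph answering the most still-unanswered questions, or None."""
        gains = {
            p: len(self.p_to_q[p] - self.answered_questions)
            for p in self.p_to_q
            if p not in self.read_paragraphs
        }
        gains = {p: n for p, n in gains.items() if n > 0}  # drop uninformative paragraphs
        return max(gains, key=gains.get) if gains else None

    def recommend_questions(self, k=3):
        """Unanswered questions connected to the most unread paragraphs."""
        unanswered = [q for q in self.q_to_p if q not in self.answered_questions]
        unanswered.sort(key=lambda q: len(self.q_to_p[q] - self.read_paragraphs), reverse=True)
        return unanswered[:k]
```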
            {
                "text": "We conducted a usability study in which participants were assigned randomly to one of three configurations:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Study Results",
                "sec_num": "4"
            },
            {
                "text": "\u2022 TOPQR: the recommended questions are the most informative according to the algorithm in Section 3 (N=18),",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Study Results",
                "sec_num": "4"
            },
            {
                "text": "\u2022 RANDQR: the recommended questions are randomly sampled from the questions TOPQR would not select (however, near duplicates will appear in this set) (N=16),",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Study Results",
                "sec_num": "4"
            },
            {
                "text": "\u2022 NOQR: No questions are recommended, and the Question Recommendation module (Zone 3 in Figure 1(b) ) is hidden (N=22).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 88,
                        "end": 99,
                        "text": "Figure 1(b)",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Study Results",
                "sec_num": "4"
            },
            {
                "text": "These are contrasted in order to test (a) whether showing automatically generated questions is beneficial to news readers, and (b) how the question-tracking algorithm compares to a similar question recommendation method with no conversation state.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Study Results",
                "sec_num": "4"
            },
            {
                "text": "We used Amazon Mechanical Turk to recruit participants, restricting the task to workers in English-speaking countries who had previously completed 1,500 tasks (HITs) and had an acceptance rate of at least 97%. Each participant was paid a flat rate of $2.50, with the study lasting a total of 15 minutes. During the study, the participants first walked through an introduction to the system, then read the news for 8 minutes, and finally completed a short survey.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Study Setup",
                "sec_num": "4.1"
            },
            {
                "text": "During the eight minutes of news reading, participants were requested to select at least 2 stories to read from a list of the 20 most recently active news stories. 1 The participants were prompted to choose stories they were interested in. The survey consisted of two sections: a satisfaction section, and a section for general free-form feedback. The satisfaction of the participants was surveyed using the standard Questionnaire for User Interaction Satisfaction (QUIS) (Norman et al., 1998). QUIS is a series of questions about the usability of the system (ease of use, learning curve, clarity of error messages, etc.) answered on a 7-point Likert scale. We modify QUIS by adding two questions regarding questions and answers: \"Are suggested questions clear?\" and \"Are answers to questions informative?\" A total of fifty-six participants completed the study. We report on the usage of the system, the QUIS satisfaction results and textual comments from the participants.",
                "cite_spans": [
                    {
                        "start": 472,
                        "end": 493,
                        "text": "(Norman et al., 1998)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Study Setup",
                "sec_num": "4.1"
            },
            {
                "text": "We observed that participants in the QR-enabled interfaces (TOPQR and RANDQR) had longer conversations than those in the NOQR setting, with an average chatroom conversation length of 24.9 messages in the TOPQR setting. Although the average conversation length was longer in TOPQR than in RANDQR, the difference was not statistically significant.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Usage statistics",
                "sec_num": "4.2"
            },
            {
                "text": "This increase in conversation length is mostly due to the use of recommended questions, which are convenient to click on. Indeed, users clicked on 8.2 questions on average in RANDQR and 11.9 in TOPQR. NOQR participants wrote on average 2.2 of their own questions, which was not statistically higher than TOPQR (1.5) and RANDQR (1.1), showing that seeing recommended questions did not prevent participants from asking their own questions. When measuring the latency of system answers to participant questions, we observe that the average wait time in the TOPQR (1.84 seconds) and RANDQR (1.88 seconds) settings is significantly lower than in NOQR (4.51 seconds). This speedup is due to our ability to pre-compute answers to recommended questions, an additional benefit of the QR graph pre-computation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Usage statistics",
                "sec_num": "4.2"
            },
            {
                "text": "Overall, the systems with question recommendation enabled (TOPQR and RANDQR) obtained higher average satisfaction on most measures than the NOQR setting. That said, statistical significance was only observed in 4 cases between TOPQR and NOQR, with participants judging the TOPQR interface to be more stimulating and satisfying.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "QUIS Satisfaction Scores",
                "sec_num": "4.3"
            },
            {
                "text": "Although not statistically significant, participants rated the suggested questions for TOPQR almost 1 point higher than RANDQR, providing some evidence that incorporating past viewed information into question selection is beneficial.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "QUIS Satisfaction Scores",
                "sec_num": "4.3"
            },
            {
                "text": "Participants judged the answers to be more informative in the TOPQR setting. We interpret this as evidence that the QR module helps teach users what types of questions the system can answer, enabling them to get better answers. Several NOQR participants asked \"What can I ask?\" or equivalent.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "QUIS Satisfaction Scores",
                "sec_num": "4.3"
            },
            {
                "text": "Thirty-four of the fifty-six participants opted to give general feedback via an open ended text box. We tagged the responses into major themes:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Qualitative Feedback",
                "sec_num": "4.4"
            },
            {
                "text": "1. 19 participants (7 TOPQR, 7 RANDQR, 5 NOQR) expressed interest in the system (e.g., I enjoyed trying this system out. I particularly liked that stories are drawn from various sources.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Qualitative Feedback",
                "sec_num": "4.4"
            },
            {
                "text": "2. 11 participants (4, 3, 4) mentioned the system did not correctly reply to questions asked (e.g., Some of the questions kind of weren't answered exactly, especially in the libya article),",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Qualitative Feedback",
                "sec_num": "4.4"
            },
            {
                "text": "3. 10 participants (2, 3, 5) found an aspect of the interface confusing (e.g., This system has potential, but as of right now it seems too overloaded and hard to sort through.)",
                "cite_spans": [
                    {
                        "start": 19,
                        "end": 22,
                        "text": "(2,",
                        "ref_id": null
                    },
                    {
                        "start": 23,
                        "end": 25,
                        "text": "3,",
                        "ref_id": null
                    },
                    {
                        "start": 26,
                        "end": 28,
                        "text": "5)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Qualitative Feedback",
                "sec_num": "4.4"
            },
            {
                "text": "4. 6 participants (4, 2, 0) thought the questions were useful (e.g., I especially like the questions at the bottom. Sometimes it helps to remember some basic facts or deepen your understanding)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Qualitative Feedback",
                "sec_num": "4.4"
            },
            {
                "text": "The most commonly mentioned limitation was Q&A related errors, a limitation we hope to mitigate as automated Q&A continues progressing.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Qualitative Feedback",
                "sec_num": "4.4"
            },
            {
                "text": "News Chatbots. Several news agencies have ventured into the space of dialogue interfaces as a way to attract new audiences. These chatbots are often manually curated for the dialogue medium, and advanced NLP machinery such as Q&A systems is not incorporated into the chatbot.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "5"
            },
            {
                "text": "On BBC's Messenger chatbot 2, a user can enter search queries, such as \"latest news\" or \"Brexit news\", and obtain a list of the latest BBC articles matching the search criteria. In the chatbot produced by Quartz 3, journalists hand-craft news stories in the form of pre-written dialogues (aka choose-your-own adventure). At each turn, the user can choose from a list of replies, deciding which track of the dialogue-article is followed. CNN 4 has also experimented with choose-your-own-adventure articles, with the added ability for small talk.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "5"
            },
            {
                "text": "Relevant Q&A datasets. NewsQA (Trischler et al., 2017) collected a dataset by having a crowdworker read the summary of a news article and ask a follow-up question. Subsequent crowd-workers answered the question or marked it as not-answerable.",
                "cite_spans": [
                    {
                        "start": 30,
                        "end": 54,
                        "text": "(Trischler et al., 2017)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "5"
            },
            {
                "text": "NewsQA's objective was to collect a dataset, whereas we focus on building a usable dialogue interface for the news with a Q&A component.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "5"
            },
            {
                "text": "CoQA (Reddy et al., 2019) and Quac (Choi et al., 2018) are two datasets collected for questions answering in the context of a dialogue. For both datasets, two crowd-workers (a student and a teacher) have a conversation about a piece of text (hidden to the student in Quac). The student must ask questions of the teacher, and the teacher answers using extracts of the document. In our system, the questions asked by the user are answered automatically, introducing potential errors, and the user can choose to ask questions or not.",
                "cite_spans": [
                    {
                        "start": 5,
                        "end": 25,
                        "text": "(Reddy et al., 2019)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 35,
                        "end": 54,
                        "text": "(Choi et al., 2018)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "5"
            },
            {
                "text": "In this work, the focus is not on the collection of naturally occurring questions, but in putting a Q&A system in use in a news dialogue system, and observing the extent of its use.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "5"
            },
            {
                "text": "Question Generation (QG) has become an active area for text generation. A common approach is to use a sequence-to-sequence model (Du et al., 2017), encoding the paragraph (or context), an optional target answer (answer-aware (Sun et al., 2018)), and decoding a paired question. This common approach focuses on the generation of a single question, from a single piece of context, often a paragraph. We argue that our framing of the QG problem as the generation of a series of questions spanning several (possibly redundant) documents is a novel task. Krishna and Iyyer (2019) build a hierarchy of questions generated for a single document; the document is then reorganized into a \"Squashed\" document, where paragraphs and questions are interleaved. Because our approach is based on using multiple documents as the source, compiling all questions into a single document would make it too long to read, so we opt for a chatbot.",
                "cite_spans": [
                    {
                        "start": 129,
                        "end": 146,
                        "text": "(Du et al., 2017)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 226,
                        "end": 244,
                        "text": "(Sun et al., 2018)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 551,
                        "end": 575,
                        "text": "Krishna and Iyyer (2019)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "5"
            },
            {
                "text": "During the usability study, we obtained direct and indirect feedback from our users, and we summarize limitations that could be addressed in the system.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": "6"
            },
            {
                "text": "Inability to Handle Small Talk. 4 participants attempted to have small talk with the chatbot (e.g. asking \"how are you\"). The system most often responded inadequately, saying it did not understand the request. Future work may include gently directing users who engage in small talk to a chitchat-style interface.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": "6"
            },
            {
                "text": "Inaccurate Q&A system. 32% of the participants mentioned that answers are often off-track or irrelevant. This suggests that further improvements in Q&A systems are needed.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": "6"
            },
            {
                "text": "Dealing with errors. Within the current framework, errors are bound to happen, and easing the user's path to recovery could improve the user experience.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": "6"
            },
            {
                "text": "We presented a fully automated news chatbot system, which leverages an average of 2,000 news articles a day from a diverse set of sources to build chatrooms for important news stories. In each room, the system takes note of generated questions that have already been answered, to minimize repetition of information to the news reader.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "A usability study reveals that when the chatbot recommends questions, news readers tend to have longer conversations, with an average of 24 messages exchanged. These conversations consist of a combination of recommended and user-created questions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "We manually removed news stories that were predominantly about politics, to avoid heated political questions, which were not under study here.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "https://www.messenger.com/t/BBCPolitics 3 https://www.messenger.com/t/quartznews 4 https://www.messenger.com/t/cnn",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "We would like to thank Ruchir Baronia for early prototyping and the ACL reviewers for their helpful comments. This work was supported by a Bloomberg Data Science grant. We also gratefully acknowledge support received from an Amazon Web Services Machine Learning Research Award and an NVIDIA Corporation GPU grant.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            },
            {
                "text": "The libraries and data sources used in the described system are as follows: Transformers library 5, used to train the GPT2-based Question Generation model and the Roberta-based Q&A model. spaCy library 6, used to do named-entity extraction, phrase and keyword extraction. Wikidata 7, for entity linking and collection of textual content of relevant Wikipedia pages used in special-case questions. MongoDB 8 and Flask 9, for storing and serving the content to the user. SetCoverPy 10, for its implementation of standard set cover algorithms in Python. List of news sources present in the dataset used by the system, in alphabetical order: Aa.com.tr, Afp.com, Aljazeera.com, Allafrica.com, Apnews.com, Bbc.co.uk, Bloomberg.com, Chicagotribune.com, Chinadaily.com.cn, Cnet.com, Cnn.com, Foxnews.com, France24.com, Independent.co.uk, Indiatimes.com, Latimes.com, Mercopress.com, Middleeasteye.net, Nytimes.com, Reuters.com, Rt.com, Techcrunch.com, Telegraph.co.uk, Theguardian.com, Washingtonpost.com 5 https://github.com/huggingface/transformers 6 https://github.com/explosion/spaCy 7 https://www.wikidata.org/ 8 https://www.mongodb.com/ 9 https://flask.palletsprojects.com/en/1.1.x/ 10 https://github.com/guangtunbenzhu/SetCoverPy",
                "cite_spans": [
                    {
                        "start": 676,
                        "end": 687,
                        "text": "Apnews.com,",
                        "ref_id": null
                    },
                    {
                        "start": 688,
                        "end": 698,
                        "text": "Bbc.co.uk,",
                        "ref_id": null
                    },
                    {
                        "start": 699,
                        "end": 713,
                        "text": "Bloomberg.com,",
                        "ref_id": null
                    },
                    {
                        "start": 714,
                        "end": 733,
                        "text": "Chicagotribune.com,",
                        "ref_id": null
                    },
                    {
                        "start": 734,
                        "end": 752,
                        "text": "Chinadaily.com.cn,",
                        "ref_id": null
                    },
                    {
                        "start": 753,
                        "end": 762,
                        "text": "Cnet.com,",
                        "ref_id": null
                    },
                    {
                        "start": 763,
                        "end": 771,
                        "text": "Cnn.com,",
                        "ref_id": null
                    },
                    {
                        "start": 772,
                        "end": 784,
                        "text": "Foxnews.com,",
                        "ref_id": null
                    },
                    {
                        "start": 785,
                        "end": 798,
                        "text": "France24.com,",
                        "ref_id": null
                    },
                    {
                        "start": 799,
                        "end": 817,
                        "text": "Independent.co.uk,",
                        "ref_id": null
                    },
                    {
                        "start": 818,
                        "end": 833,
                        "text": "Indiatimes.com,",
                        "ref_id": null
                    },
                    {
                        "start": 834,
                        "end": 846,
                        "text": "Latimes.com,",
                        "ref_id": null
                    },
                    {
                        "start": 847,
                        "end": 862,
                        "text": "Mercopress.com,",
                        "ref_id": null
                    },
                    {
                        "start": 863,
                        "end": 881,
                        "text": "Middleeasteye.net,",
                        "ref_id": null
                    },
                    {
                        "start": 882,
                        "end": 894,
                        "text": "Nytimes.com,",
                        "ref_id": null
                    },
                    {
                        "start": 895,
                        "end": 907,
                        "text": "Reuters.com,",
                        "ref_id": null
                    },
                    {
                        "start": 908,
                        "end": 915,
                        "text": "Rt.com,",
                        "ref_id": null
                    },
                    {
                        "start": 916,
                        "end": 931,
                        "text": "Techcrunch.com,",
                        "ref_id": null
                    },
                    {
                        "start": 932,
                        "end": 948,
                        "text": "Telegraph.co.uk,",
                        "ref_id": null
                    },
                    {
                        "start": 949,
                        "end": 965,
                        "text": "Theguardian.com,",
                        "ref_id": null
                    },
                    {
                        "start": 966,
                        "end": 986,
                        "text": "Washingtonpost.com 5",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Resources Used",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "A heuristic method for the set covering problem",
                "authors": [
                    {
                        "first": "Alberto",
                        "middle": [],
                        "last": "Caprara",
                        "suffix": ""
                    },
                    {
                        "first": "Matteo",
                        "middle": [],
                        "last": "Fischetti",
                        "suffix": ""
                    },
                    {
                        "first": "Paolo",
                        "middle": [],
                        "last": "Toth",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Operations research",
                "volume": "47",
                "issue": "5",
                "pages": "730--743",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Alberto Caprara, Matteo Fischetti, and Paolo Toth. 1999. A heuristic method for the set covering prob- lem. Operations research, 47(5):730-743.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Quac: Question answering in context",
                "authors": [
                    {
                        "first": "Eunsol",
                        "middle": [],
                        "last": "Choi",
                        "suffix": ""
                    },
                    {
                        "first": "He",
                        "middle": [],
                        "last": "He",
                        "suffix": ""
                    },
                    {
                        "first": "Mohit",
                        "middle": [],
                        "last": "Iyyer",
                        "suffix": ""
                    },
                    {
                        "first": "Mark",
                        "middle": [],
                        "last": "Yatskar",
                        "suffix": ""
                    },
                    {
                        "first": "Wentau",
                        "middle": [],
                        "last": "Yih",
                        "suffix": ""
                    },
                    {
                        "first": "Yejin",
                        "middle": [],
                        "last": "Choi",
                        "suffix": ""
                    },
                    {
                        "first": "Percy",
                        "middle": [],
                        "last": "Liang",
                        "suffix": ""
                    },
                    {
                        "first": "Luke",
                        "middle": [],
                        "last": "Zettlemoyer",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "2174--2184",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen- tau Yih, Yejin Choi, Percy Liang, and Luke Zettle- moyer. 2018. Quac: Question answering in context. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 2174-2184.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Learning to ask: Neural question generation for reading comprehension",
                "authors": [
                    {
                        "first": "Xinya",
                        "middle": [],
                        "last": "Du",
                        "suffix": ""
                    },
                    {
                        "first": "Junru",
                        "middle": [],
                        "last": "Shao",
                        "suffix": ""
                    },
                    {
                        "first": "Claire",
                        "middle": [],
                        "last": "Cardie",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
                "volume": "1",
                "issue": "",
                "pages": "1342--1352",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Xinya Du, Junru Shao, and Claire Cardie. 2017. Learn- ing to ask: Neural question generation for reading comprehension. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1342- 1352.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Generating question-answer hierarchies",
                "authors": [
                    {
                        "first": "Kalpesh",
                        "middle": [],
                        "last": "Krishna",
                        "suffix": ""
                    },
                    {
                        "first": "Mohit",
                        "middle": [],
                        "last": "Iyyer",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "ACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kalpesh Krishna and Mohit Iyyer. 2019. Generating question-answer hierarchies. In ACL.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "newslens: building and visualizing long-ranging news stories",
                "authors": [
                    {
                        "first": "Philippe",
                        "middle": [],
                        "last": "Laban",
                        "suffix": ""
                    },
                    {
                        "first": "Marti",
                        "middle": [
                            "A"
                        ],
                        "last": "Hearst",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the Events and Stories in the News Workshop",
                "volume": "",
                "issue": "",
                "pages": "1--9",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philippe Laban and Marti A Hearst. 2017. newslens: building and visualizing long-ranging news stories. In Proceedings of the Events and Stories in the News Workshop, pages 1-9.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "The summary loop: Learning to write abstractive summaries without examples",
                "authors": [
                    {
                        "first": "Philippe",
                        "middle": [],
                        "last": "Laban",
                        "suffix": ""
                    },
                    {
                        "first": "Andrew",
                        "middle": [],
                        "last": "Hsi",
                        "suffix": ""
                    },
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Canny",
                        "suffix": ""
                    },
                    {
                        "first": "Marti",
                        "middle": [
                            "A"
                        ],
                        "last": "Hearst",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
                "volume": "1",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philippe Laban, Andrew Hsi, John Canny, and Marti A Hearst. 2020. The summary loop: Learning to write abstractive summaries without examples. In Pro- ceedings of the 58th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers). To appear.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Roberta: A robustly optimized bert pretraining approach",
                "authors": [
                    {
                        "first": "Yinhan",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    },
                    {
                        "first": "Myle",
                        "middle": [],
                        "last": "Ott",
                        "suffix": ""
                    },
                    {
                        "first": "Naman",
                        "middle": [],
                        "last": "Goyal",
                        "suffix": ""
                    },
                    {
                        "first": "Jingfei",
                        "middle": [],
                        "last": "Du",
                        "suffix": ""
                    },
                    {
                        "first": "Mandar",
                        "middle": [],
                        "last": "Joshi",
                        "suffix": ""
                    },
                    {
                        "first": "Danqi",
                        "middle": [],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "Omer",
                        "middle": [],
                        "last": "Levy",
                        "suffix": ""
                    },
                    {
                        "first": "Mike",
                        "middle": [],
                        "last": "Lewis",
                        "suffix": ""
                    },
                    {
                        "first": "Luke",
                        "middle": [],
                        "last": "Zettlemoyer",
                        "suffix": ""
                    },
                    {
                        "first": "Veselin",
                        "middle": [],
                        "last": "Stoyanov",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1907.11692"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Questionnaire for user interaction satisfaction",
                "authors": [
                    {
                        "first": "Kent",
                        "middle": [
                            "L"
                        ],
                        "last": "Norman",
                        "suffix": ""
                    },
                    {
                        "first": "Ben",
                        "middle": [],
                        "last": "Shneiderman",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Harper",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Slaughter",
                        "suffix": ""
                    }
                ],
                "year": 1989,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kent L Norman, Ben Shneiderman, B Harper, and L Slaughter. 1998. Questionnaire for user interac- tion satisfaction. University of Maryland (Norman, 1989) Dispon\u00edvel em.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Language models are unsupervised multitask learners",
                "authors": [
                    {
                        "first": "Alec",
                        "middle": [],
                        "last": "Radford",
                        "suffix": ""
                    },
                    {
                        "first": "Jeffrey",
                        "middle": [],
                        "last": "Wu",
                        "suffix": ""
                    },
                    {
                        "first": "Rewon",
                        "middle": [],
                        "last": "Child",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Luan",
                        "suffix": ""
                    },
                    {
                        "first": "Dario",
                        "middle": [],
                        "last": "Amodei",
                        "suffix": ""
                    },
                    {
                        "first": "Ilya",
                        "middle": [],
                        "last": "Sutskever",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Know what you don't know: Unanswerable questions for squad",
                "authors": [
                    {
                        "first": "Pranav",
                        "middle": [],
                        "last": "Rajpurkar",
                        "suffix": ""
                    },
                    {
                        "first": "Robin",
                        "middle": [],
                        "last": "Jia",
                        "suffix": ""
                    },
                    {
                        "first": "Percy",
                        "middle": [],
                        "last": "Liang",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
                "volume": "2",
                "issue": "",
                "pages": "784--789",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable ques- tions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), pages 784-789.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Coqa: A conversational question answering challenge",
                "authors": [
                    {
                        "first": "Siva",
                        "middle": [],
                        "last": "Reddy",
                        "suffix": ""
                    },
                    {
                        "first": "Danqi",
                        "middle": [],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "Christopher D",
                        "middle": [],
                        "last": "Manning",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Transactions of the Association for Computational Linguistics",
                "volume": "7",
                "issue": "",
                "pages": "249--266",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Siva Reddy, Danqi Chen, and Christopher D Manning. 2019. Coqa: A conversational question answering challenge. Transactions of the Association for Com- putational Linguistics, 7:249-266.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Answer-focused and position-aware neural question generation",
                "authors": [
                    {
                        "first": "Xingwu",
                        "middle": [],
                        "last": "Sun",
                        "suffix": ""
                    },
                    {
                        "first": "Jing",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    },
                    {
                        "first": "Yajuan",
                        "middle": [],
                        "last": "Lyu",
                        "suffix": ""
                    },
                    {
                        "first": "Wei",
                        "middle": [],
                        "last": "He",
                        "suffix": ""
                    },
                    {
                        "first": "Yanjun",
                        "middle": [],
                        "last": "Ma",
                        "suffix": ""
                    },
                    {
                        "first": "Shi",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "3930--3939",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Xingwu Sun, Jing Liu, Yajuan Lyu, Wei He, Yanjun Ma, and Shi Wang. 2018. Answer-focused and position-aware neural question generation. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 3930- 3939.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Newsqa: A machine comprehension dataset",
                "authors": [
                    {
                        "first": "Adam",
                        "middle": [],
                        "last": "Trischler",
                        "suffix": ""
                    },
                    {
                        "first": "Tong",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "Xingdi",
                        "middle": [],
                        "last": "Yuan",
                        "suffix": ""
                    },
                    {
                        "first": "Justin",
                        "middle": [],
                        "last": "Harris",
                        "suffix": ""
                    },
                    {
                        "first": "Alessandro",
                        "middle": [],
                        "last": "Sordoni",
                        "suffix": ""
                    },
                    {
                        "first": "Philip",
                        "middle": [],
                        "last": "Bachman",
                        "suffix": ""
                    },
                    {
                        "first": "Kaheer",
                        "middle": [],
                        "last": "Suleman",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP",
                "volume": "",
                "issue": "",
                "pages": "191--200",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2017. Newsqa: A machine compre- hension dataset. In Proceedings of the 2nd Work- shop on Representation Learning for NLP, pages 191-200.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "num": null,
                "uris": null,
                "type_str": "figure",
                "text": "Screenshots of the news chatbot (a) Homepage lists most recently active chatrooms (Zone 1 is an example chatroom) (b) Newly opened chatroom: Zone 2 is an event message, Zone 3 the Question Recommendation module, and Zone 4 a text input for user-initiated questions. Event messages are created via abstractive summarization. (c) Conversation continuation with Q&A examples. Sentences shown are extracted from original articles, whose sources are shown. Answers to questions are bolded."
            },
            "FIGREF1": {
                "num": null,
                "uris": null,
                "type_str": "figure",
                "text": "October in Australia, fires scorched more than 10.3 million hectares and 27 people have been killed what else should I knowThe fires, which have been raging since October, have killed at least 24 people and burned 10 million hectares you said that already..."
            },
            "FIGREF2": {
                "num": null,
                "uris": null,
                "type_str": "figure",
                "text": "Example of repetition from the system. Repeating facts with different language is undesirable in a news chatbot. We introduce a novel question tracking method that attempts to minimize repetition. first two paragraphs of the Wikipedia page. For geographical entities, the system additionally responds with a geographic map when possible."
            },
            "FIGREF3": {
                "num": null,
                "uris": null,
                "type_str": "figure",
                "text": "Conversation state is tracked with the P/Q graph. As the conversation advances, the system keeps track of answered questions. Any paragraph that does not answer a new question is discarded. Questions that are not answered yet are recommended."
            },
            "TABREF2": {
                "html": null,
                "content": "<table><tr><td>Likert values on</td></tr></table>",
                "text": "QUIS satisfaction results.",
                "num": null,
                "type_str": "table"
            }
        }
    }
}