{
    "paper_id": "A00-1014",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T01:11:51.622924Z"
    },
    "title": "MIMIC: An Adaptive Mixed Initiative Spoken Dialogue System for Information Queries",
    "authors": [
        {
            "first": "Jennifer",
            "middle": [],
            "last": "Chu-Carroll",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Lucent Technologies Bell Laboratories",
                "location": {
                    "addrLine": "600 Mountain Avenue Murray Hill",
                    "postCode": "07974",
                    "region": "NJ",
                    "country": "U.S.A"
                }
            },
            "email": "jencc@research.bell-labs.corn"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "This paper describes MIMIC, an adaptive mixed initiative spoken dialogue system that provides movie showtime information. MIMIC improves upon previous dialogue systems in two respects. First, it employs initiative-oriented strategy adaptation to automatically adapt response generation strategies based on the cumulative effect of information dynamically extracted from user utterances during the dialogue. Second, MIMIC's dialogue management architecture decouples its initiative module from the goal and response strategy selection processes, providing a general framework for developing spoken dialogue systems with different adaptation behavior.",
    "pdf_parse": {
        "paper_id": "A00-1014",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "This paper describes MIMIC, an adaptive mixed initiative spoken dialogue system that provides movie showtime information. MIMIC improves upon previous dialogue systems in two respects. First, it employs initiative-oriented strategy adaptation to automatically adapt response generation strategies based on the cumulative effect of information dynamically extracted from user utterances during the dialogue. Second, MIMIC's dialogue management architecture decouples its initiative module from the goal and response strategy selection processes, providing a general framework for developing spoken dialogue systems with different adaptation behavior.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "In recent years, speech and natural language technologies have matured enough to enable the development of spoken dialogue systems in limited domains. Most existing systems employ dialogue strategies pre-specified during the design phase of the dialogue manager without taking into account characteristics of actual dialogue interactions. More specifically, mixed initiative systems typically employ rules that specify conditions (generally based on local dialogue context) under which initiative may shift from one agent to the other. Previous research, on the other hand, has shown that changes in initiative strategies in human-human dialogues can be dynamically modeled in terms of characteristics of the user and of the on-going dialogue (Chu-Carroll and Brown, 1998) and that adaptability of initiative strategies in dialogue systems leads to better system performance (Litman and Pan, 1999) . However, no previous dialogue system takes into account these dialogue characteristics or allows for initiative-oriented adaptation of dialogue strategies.",
                "cite_spans": [
                    {
                        "start": 743,
                        "end": 772,
                        "text": "(Chu-Carroll and Brown, 1998)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 875,
                        "end": 897,
                        "text": "(Litman and Pan, 1999)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this paper, we describe MIMIC, a voice-enabled telephone-based dialogue system that provides movie showtime information, emphasizing its dialogue management aspects. MIMIC improves upon previous systems along two dimensions. First, MIMIC automatically adapts dialogue strategies based on participant roles, characteristics of the current utterance, and dialogue history. This automatic adaptation allows appropriate dialogue strategies to be employed based on both local dialogue context and dialogue history, and has been shown to result in significantly better performance than non-adaptive systems. Second, MIMIC employs an initiative module that is decoupled from the goal selection process in the dialogue manager, while allowing the outcome of both components to jointly determine the strategies chosen for response generation. As a result, MIMIC can exhibit drastically different dialogue behavior with very minor adjustments to parameters in the initiative module, allowing for rapid development and comparison of experimental prototypes and resulting in general and portable dialogue systems.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In naturally occurring human-human dialogues, speakers often adopt different dialogue strategies based on hearer characteristics, dialogue history, etc. For instance, the speaker may provide more guidance if the hearer is having difficulty making progress toward task completion, while taking a more passive approach when the hearer is an expert in the domain. Our main goal is to enable a spoken dialogue system to simulate such human behavior by dynamically adapting dialogue strategies during an interaction based on information that can be automatically detected from the dialogue. Figure 1 shows an excerpt from a dialogue between MIMIC and an actual user where the user is attempting to find the times at which the movie Analyze This playing at theaters in Montclair. S and U indicate system and user utterances, respectively, and the italicized utterances are the output of our automatic speech recognizer. In addition, each system turn is annotated with its task and dialogue initiative holders, where task initiative tracks the lead in the process toward achieving the dialogue participants' domain goal, while dialogue initiative models the lead in determining the current discourse focus (Chu-Carroll and Brown, 1998) . In our information query application domain, the system has task (and thus dialogue) initiative if its utterances provide helpful guidance toward achieving the user's domain goal, as in utterances (6) and 7where MIMIC provided valid response choices to its query intending to solicit a theater name, while the system has dialogue but not task initiative if its utterances only specify the current discourse goal, as in utterance (4). i This dialogue illustrates several features of our adaptive mixed initiative dialogue manager. First, MIMIC automatically adapted the initiative distribution based on information extracted from user utterances and dialogue history. 
More specifically, MIMIC took over task initiative in utterance (6), after failing to obtain a valid answer to its query soliciting a missing theater name in (4). It retained task initiative until utterance (12), after the user implicitly indicated her intention to take over task initiative by providing a fully-specified query (utterance (11)) to a limited prompt (utterance (10)). Second, initiative distribution may affect the strategies MIMIC selects to achieve its goals. For instance, in the context of soliciting missing information, when MIMIC did not have task initiative, a simple information-seeking query was generated (utterance (4)). On the other hand, when MIMIC had task initiative, additional guidance was provided (in the form of valid response choices in utterance (6)), which helped the user successfully respond to the system's query. In the context of prompting the user for a new query, when MIMIC had task initiative, a limited prompt was selected to better constrain the user's response (utterance 10), while an open-ended prompt was generated to allow the user to take control of the problem-solving process otherwise (utterances (1) and (13)).",
                "cite_spans": [
                    {
                        "start": 1199,
                        "end": 1228,
                        "text": "(Chu-Carroll and Brown, 1998)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 586,
                        "end": 594,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Motivation",
                "sec_num": "2.1"
            },
            {
                "text": "In the next section, we briefly review a framework for dynamic initiative modeling. In Section 3, we discuss how this framework was incorporated into the dialogue management component of a spoken dialogue system to allow for automatic adaptation of dialogue strategies. Finally, we outline experiments evaluating the resulting system and show that MIMIC's automatic adaptation capabilities resulted in better system performance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Motivation",
                "sec_num": "2.1"
            },
            {
                "text": "In previous work, we proposed a framework for modeling initiative during dialogue interaction (Chu-Carroll and Brown, 1998 ). This framework predicts task and dialogue initiative holders on a turn-by-turn basis in humanhuman dialogues based on participant roles (such as each dialogue agent's level of expertise and the role that she plays in the application domain), cues observed in the current dialogue turn, and dialogue history. More specifically, we utilize the Dempster-Shafer theory (Shafer, 1976; Gordon and Shortliffe, 1984) , and represent the current initiative distribution as two basic probability assignments (bpas) which indicate the amount of support for each dialogue participant having the task and dialogue initiatives. For instance, the bpa mt-cur({S}) =",
                "cite_spans": [
                    {
                        "start": 94,
                        "end": 122,
                        "text": "(Chu-Carroll and Brown, 1998",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 491,
                        "end": 505,
                        "text": "(Shafer, 1976;",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 506,
                        "end": 534,
                        "text": "Gordon and Shortliffe, 1984)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "An Evidential Framework for Modeling Initiative",
                "sec_num": "2.2"
            },
            {
                "text": "l Although the dialogues we collected in our experiments (Section 5) include cases in which MIMIC has neither initiative, such cases are rare in this application domain, and will not be discussed further in this paper. 0.3, mt-c~,r({U}) = 0.7 indicates that, with all evidence taken into account, there is more support (to the degree 0.7) for the user having task initiative in the current turn than for the system. At the end of each turn, the bpas are updated based on the effects that cues observed during that turn have on changing them, and the new bpas are used to predict the next task and dialogue initiative holders.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "An Evidential Framework for Modeling Initiative",
                "sec_num": "2.2"
            },
            {
                "text": "In this framework, cues that affect initiative distribution include NoNewlnfo, triggered when the speaker simply repeats or rephrases an earlier utterance, implicitly suggesting that the speaker may want to give up initiative, AmbiguousActions, triggered when the speaker proposes an action that is ambiguous in the application domain, potentially prompting the hearer to take over initiative to resolve the detected ambiguity, etc. The effects that each cue has on changing the current bpas are also represented as bpas, which were determined by an iterative training procedure using a corpus of transcribed dialogues where each turn was annotated with the task/dialogue initiative holders and the observed cues. The bpas for the next turn are computed by combining the bpas representing the current initiative distribution and the bpas representing the effects of cues observed during the current turn, using Dempster's combination rule (Gordon and Shortliffe, 1984) . The task and dialogue initiative holders are then predicted based on the new bpas. This framework was evaluated using annotated dialogues from four task-oriented domains, achieving, on average, a correct prediction rate of 97% and 88% for task and dialogue initiative holders, respectively. In Section 3.2, we discuss how this predictive model is converted into a generative model by enabling the system to automatically detect cues that were previously labelled manually. We further discuss how the model is used by the dialogue manager for dynamic dialogue strategy adaptation.",
                "cite_spans": [
                    {
                        "start": 939,
                        "end": 968,
                        "text": "(Gordon and Shortliffe, 1984)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "An Evidential Framework for Modeling Initiative",
                "sec_num": "2.2"
            },
            {
                "text": "MIMIC is a telephone-based dialogue system that provides movie showtime information. It consists of the following main components, implemented on a distributed, client-server architecture (Zhou et al., 1997 ):",
                "cite_spans": [
                    {
                        "start": 188,
                        "end": 206,
                        "text": "(Zhou et al., 1997",
                        "ref_id": "BIBREF23"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "MIMIC: Mixed Initiative Movie Information Consultant",
                "sec_num": "3"
            },
            {
                "text": "and hang-ups, and enables streaming of audio data on channels of a telephony board.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Telephony server: this component detects rings",
                "sec_num": null
            },
            {
                "text": "Speech recognizer: the recognizer receives audio data from the telephony server and generates the word string hypothesis that best matches the audio input. We used the Lucent Automatic Speech Recognizer (Reichl and Chou, 1998; Ortmanns et al., 1999) , configured to use class-based probabilistic ngram language models to allow for rapid updates of movie/theater/town names.",
                "cite_spans": [
                    {
                        "start": 203,
                        "end": 226,
                        "text": "(Reichl and Chou, 1998;",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 227,
                        "end": 249,
                        "text": "Ortmanns et al., 1999)",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Telephony server: this component detects rings",
                "sec_num": null
            },
            {
                "text": "(1) S: Hello, this is MIMIC, the movie information system. 2How can I help you? 3 are carried out by this component: 1) semantic interpretation, which constructs frame-based semantic representations from user utterances, 2) dialogue management, where response strategies are selected based on the semantic representation of the user's utterance, system's domain knowledge, and initiative distribution, and 3) utterance generation, where utterance templates are chosen and instantiated to realize the selected response strategies. These three tasks will be discussed in further detail in the rest of this section.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "_ql:l Utterance",
                "sec_num": null
            },
            {
                "text": "4. Text-to-speech engine: the TTS system receives the word string comprising the system's response from the dialogue component and converts the text into speech for output over the telephone. We used the Bell Labs TTS system (Sproat, 1998) , which in addition to converting plain text into speech, accepts text strings annotated to override default pitch height, accent placement, speaking rate, etc. 2",
                "cite_spans": [
                    {
                        "start": 225,
                        "end": 239,
                        "text": "(Sproat, 1998)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "_ql:l Utterance",
                "sec_num": null
            },
            {
                "text": "MIMIC utilizes a non-recursive frame-based semantic representation commonly used in spoken dialogue systems (e.g. (Seneff et al., 1991; Lamel, 1998) ), which represents an utterance as a set of attribute-value pairs. MIMIC's semantic representation is constructed by first extracting, for each attribute, a set of keywords from the user utterance. Using a vector-based topic identification process (Salton, 1971; Chu-Carroll and Carpenter, 1999) , these keywords are used to determine a set of likely values (including null) for that attribute. Next, the utterance is interpreted with respect to the dialogue history and the system's domain knowledge. This allows MIMIC to handle elliptical sentences and anaphoric references, as well as automatically infer missing values and detect inconsistencies in the current representation.",
                "cite_spans": [
                    {
                        "start": 114,
                        "end": 135,
                        "text": "(Seneff et al., 1991;",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 136,
                        "end": 148,
                        "text": "Lamel, 1998)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 398,
                        "end": 412,
                        "text": "(Salton, 1971;",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 413,
                        "end": 445,
                        "text": "Chu-Carroll and Carpenter, 1999)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic Interpretation",
                "sec_num": "3.1"
            },
            {
                "text": "This semantic representation allows for decoupling of domain-dependent task specifications and domain-independent dialogue management strategies. Each query type is specified by a template indicating, for each attribute, whether a value must, must not, or can optionally be provided in order for a query to be considered well-formed. Figure 2(b) shows that to solicit movie showtime information (question type when), a movie name and a theater name must be provided, whereas a town may optionally be provided. These specifications are determined based on domain semantics, and must be reconstructed when porting the system to a new domain.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 334,
                        "end": 345,
                        "text": "Figure 2(b)",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Semantic Interpretation",
                "sec_num": "3.1"
            },
            {
                "text": "Given a semantic representation, the dialogue history and the system's domain knowledge, the dialogue manager selects a set of strategies that guides MIMIC's response generation process. This task is carried out by three subprocesses: 1) initiative modeling, which determines the initiative distribution for the system's dialogue turn, 2) goal selection, which identifies a goal that MIMIC's response attempts to achieve, and 3) strategy selection, which chooses, based on the initiative distribution, a set of dialogue acts that MIMIC will adopt in its attempt to realize the selected goal.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dialogue Management",
                "sec_num": "3.2"
            },
            {
                "text": "MIMIC's initiative module determines the task and dialogue initiative holders for each system turn in order to enable dynamic strategy adaptation. It automatically detects cues triggered during the current user turn, and combines the effects of these cues with the current initiative distribution to determine the initiative holders for the system's turn.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Initiative Modeling",
                "sec_num": "3.2.1"
            },
            {
                "text": "The cues and the bpas representing their effects are largely based on a subset of those described in (Chu-Carroll and Brown, 1998) , 3 as shown in Figures 3(a) and 3(b). Figure 3(a) shows that observation of TakeOverTask supports a task initiative shift to the speaker to the degree .35. The remaining support is assigned to O, the set of all possible conclusions (i.e., {speaker,hearer}), indicating that to the degree .65, observation of this cue does not commit to identifying which dialogue participant should have task initiative in the next dialogue turn.",
                "cite_spans": [
                    {
                        "start": 101,
                        "end": 130,
                        "text": "(Chu-Carroll and Brown, 1998)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 147,
                        "end": 159,
                        "text": "Figures 3(a)",
                        "ref_id": null
                    },
                    {
                        "start": 170,
                        "end": 181,
                        "text": "Figure 3(a)",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Cue Detection",
                "sec_num": null
            },
            {
                "text": "The cues used in MIMIC are classified into two categories, discourse cues and analytical cues, based on the types of knowledge needed to detect them: I. Discourse cues, which can be detected by considering the semantic representation of the current utterance and dialogue history:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cue Detection",
                "sec_num": null
            },
            {
                "text": "\u2022 TakeOverTask, an implicit indication that the user wants to take control of the problemsolving process, triggered when the user provides more information than the discourse expectation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cue Detection",
                "sec_num": null
            },
            {
                "text": "3We selected only those cues that can be automatically detected in a spoken dialogue system with speech recognition errors and limited semantic interpretation capabilities.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cue Detection",
                "sec_num": null
            },
            {
                "text": "\u2022 NoNewlnfo, an indication that the user is unable to make progress toward task completion, triggered when the semantic representations of two consecutive user turns are identical (a result of the user not knowing what to say or the speech recognizer failing to recognize the user utterances).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cue Detection",
                "sec_num": null
            },
            {
                "text": "2. Analytical cues, which can only be detected by taking into account MIMIC's domain knowledge:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cue Detection",
                "sec_num": null
            },
            {
                "text": "\u2022 lnvalidAction, an indication that the user made an invalid assumption about the domain, triggered when the system database lookup based on the user's query returns null.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cue Detection",
                "sec_num": null
            },
            {
                "text": "\u2022 lnvalidActionResolved, triggered when the previous invalid assumption is corrected.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cue Detection",
                "sec_num": null
            },
            {
                "text": "\u2022 AmbiguousAction, an indication that the user query is not well-formed, triggered when a mandatory attribute is unspecified or when more than one value is specified for an attribute.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cue Detection",
                "sec_num": null
            },
            {
                "text": "\u2022 AmbiguousActionResolved, triggered when the attribute in question is uniquely instantiated.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cue Detection",
                "sec_num": null
            },
            {
                "text": "To determine the initiative distribution, the bpas representing the effects of cues detected in the current user utterance are instantiated (i.e., speaker~hearer in Figure 3 are instantiated as system~user accordingly). These effects are then interpreted with respect to the current initiative distribution by applying Dempster's combination rule (Gordon and Shortliffe, 1984) to the bpas representing the current initiative distribution and the instantiated bpas. This results in two new bpas representing the task and dialogue initiative distributions for the system's turn. The dialogue participant with the greater degree of support for having the task/dialogue initiative in these bpas is the task/dialogue initiative holder for the system's turn 4 (see Section 4 for an example).",
                "cite_spans": [
                    {
                        "start": 347,
                        "end": 376,
                        "text": "(Gordon and Shortliffe, 1984)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 165,
                        "end": 173,
                        "text": "Figure 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Computing Initiative Distribution",
                "sec_num": null
            },
            {
                "text": "The goal selection module selects a goal that MIMIC attempts to achieve in its response by utilizing information from analytical cue detection as shown in Figure 4 . MIMIC's goals focus on two aspects of cooperative dialogue interaction: 1) initiating subdialogues to resolve anomalies that occur during the dialogue by attempting to instantiate an unspecified attribute, constraining an attribute for which multiple values have been specified, or correcting an invalid assumption in the case of invalid van Beeket al., 1993; Raskutti and Zukerman, 1993; Qu and Beale, 1999) , and 2) providing answers to well-formed queries (steps 9-11).",
                "cite_spans": [
                    {
                        "start": 504,
                        "end": 525,
                        "text": "van Beeket al., 1993;",
                        "ref_id": null
                    },
                    {
                        "start": 526,
                        "end": 554,
                        "text": "Raskutti and Zukerman, 1993;",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 555,
                        "end": 574,
                        "text": "Qu and Beale, 1999)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 155,
                        "end": 163,
                        "text": "Figure 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Goal Selection",
                "sec_num": "3.2.2"
            },
            {
                "text": "Previous work has argued that initiative affects the degree of control an agent has in the dialogue interaction (Whittaker and Stenton, 1988; Walker and Whittaker, 1990; Chu-Carroll and Brown, 1998) . Thus, a cooperative system may adopt different strategies to achieve the same goal depending on the initiative distribution. Since task initiative models contribution to domain/problemsolving goals, while dialogue initiative affects the cur-5An alternative strategy to step (4) is to perform a database lookup based on the ambiguous query and summarize the results (Litman et al., 1998 ), which we leave for future work. rent discourse goal, we developed alternative strategies for achieving the goals in Figure 4 based on initiative distribution, as shown in Table 1 .",
                "cite_spans": [
                    {
                        "start": 112,
                        "end": 141,
                        "text": "(Whittaker and Stenton, 1988;",
                        "ref_id": "BIBREF22"
                    },
                    {
                        "start": 142,
                        "end": 169,
                        "text": "Walker and Whittaker, 1990;",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 170,
                        "end": 198,
                        "text": "Chu-Carroll and Brown, 1998)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 566,
                        "end": 586,
                        "text": "(Litman et al., 1998",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 706,
                        "end": 714,
                        "text": "Figure 4",
                        "ref_id": null
                    },
                    {
                        "start": 761,
                        "end": 768,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Strategy Selection",
                "sec_num": "3.2.3"
            },
            {
                "text": "The strategies employed when MIMIC has only dialogue initiative are similar to the mixed initiative dialogue strategies employed by many existing spoken dialogue systems (e.g., (Bennacef et al., 1996; Stent et al., 1999) ). To instantiate an attribute, MIMIC adopts the lnfoSeek dialogue act to solicit the missing information. In contrast, when MIMIC has both initiatives, it plays a more active role by presenting the user with additional information comprising valid instantiations of the attribute (GiveOptions). Given an invalid query, MIMIC notifies the user of the failed query and provides an openended prompt when it only has dialogue initiative. When MIMIC has both initiatives, however, in addition to No-tifyFailure, it suggests an alternative close to the user's original query and provides a limited prompt. Finally, when MIMIC has neither initiative, it simply adopts No-tifyFailure, allowing the user to determine the next discourse goal.",
                "cite_spans": [
                    {
                        "start": 177,
                        "end": 200,
                        "text": "(Bennacef et al., 1996;",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 201,
                        "end": 220,
                        "text": "Stent et al., 1999)",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Strategy Selection",
                "sec_num": "3.2.3"
            },
            {
                "text": "MIMIC employs a simple template-driven utterance generation approach. Templates are associated with dialogue acts as shown in Table 2 .6 The generation component receives from the dialogue manager the selected dialogue acts and the parameters needed to instantiate the templates. It then generates the system response, which is sent to the TTS module for spoken output synthesis. 6In most cases, there is a one-to-one-mapping between dialogue acts and templates. The exceptions are Answer, NotifyFailure, and SuggestAlternative, whose templates vary based on the question type. Template \"Did you say < valuel > .... or < valuen >.9\" \"Uh-huh.\" \"Choices for < attribute > are < valuex > ... < value, >7 \"What < attribute > would you like?\" E.g., \"< movie > is playing at < theater > at < time1 > ... < time,, >\" \"Can I help you with anything elseT' \"Please say the name of the movie or theater or town you would like information about.\" E.g., \"< movie > is not playing at < theater >. E.g., \"< movie > is playing at < alternativetheater > at < timex > ... < timen >\" ",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 126,
                        "end": 133,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Utterance Generation",
                "sec_num": "3.3"
            },
            {
                "text": "To illustrate MIMIC's adaptation capabilities, we return to the dialogue in Figure 1 , which is repeated in Figure 5 and annotated with the cues detected in each user turn (in boldfaced italics) and the dialogue acts employed for response generation in each system turn (in boldface).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 76,
                        "end": 84,
                        "text": "Figure 1",
                        "ref_id": null
                    },
                    {
                        "start": 108,
                        "end": 116,
                        "text": "Figure 5",
                        "ref_id": "FIGREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Examples",
                "sec_num": "4"
            },
            {
                "text": "The bpas representing the initiative distribution for utterance 3 The cue AmbiguousAction is detected in utterance (3) because the mandatory attribute theater was not specified and cannot be inferred (since the town of Montclair has multiple theaters). The bpas representing its effect are instantiated as follows (Figure 3 The updated bpas indicate that MIMIC should have dialogue but not task initiative when attempting to resolve the detected ambiguity in utterance (4).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 314,
                        "end": 323,
                        "text": "(Figure 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Examples",
                "sec_num": "4"
            },
            {
                "text": "MIMIC selects Instantiate as its goal to be achieved (Figure 4) , which, based on the initiative distribution, leads it to select the InfoSeek action (Table I) and generate the query \"What theater would you like?\"",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 53,
                        "end": 63,
                        "text": "(Figure 4)",
                        "ref_id": null
                    },
                    {
                        "start": 150,
                        "end": 159,
                        "text": "(Table I)",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Examples",
                "sec_num": "4"
            },
            {
                "text": "The user's response in (5) again triggers Ambiguous-Action, as well as NoNewlnfo since the semantic representations of (3) and (5) are identical, given the dialogue context. When the effects of these cues are taken into account, we have the following initiative distribution for utterance (6): mt-(6)({S}) = 0.62, mt_(6)({U}) = 0.38; md-(6)({S}) = 0.96, rnd_(6)({V}) = 0.04.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Examples",
                "sec_num": "4"
            },
            {
                "text": "Although Instaatiate is again selected as the goal, MIMIC now has both task and dialogue initiatives; thus it selects both GiveOptions and lnfoSeek to achieve this goal and generates utterances (6) and (7). The additional information, in the form of valid theater choices, helps the user provide the missing value in (8), allowing MIMIC to answer the query in (9) and prompt for the next query. However, despite the limited prompt, the user provides a well-formed query in (11), triggering TakeOverTask. Thus, MIMIC answers the query and switches to an open-ended prompt in (13), relinquishing task initiative to the user.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Examples",
                "sec_num": "4"
            },
            {
                "text": "In addition to its automatic adaptation capabilities, another advantage of MIMIC is the ease of modifying its adaptation behavior, enabled by the decoupling of the initiative module from the goal and strategy selection processes. For instance, a system-initiative version of MIMIC can be achieved by setting the initial bpas as follows: mt-initial({S}) = 1; md--initial({S}) -~ 1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Examples",
                "sec_num": "4"
            },
            {
                "text": "(1) S: Hello, this is MIMIC, the movie information system. [Answer]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Examples",
                "sec_num": "4"
            },
            {
                "text": "[OpenPrompt] This is because in the Dempster-Shafer theory, if the initial bpas or the bpas for a cue provide definite evidence for drawing a certain conclusion, then no subsequent cue has any effect on changing that conclusion. Thus, MIMIC will retain both initiatives throughout the dialogue. Alternatively, versions of MIMIC with different adaptation behavior can be achieved by tailoring the initial bpas and/or the bpas for each cue based on the application. For instance, for an electronic sales agent, the effect oflnvalidAction can be increased so that when the user orders an out-of-stock item, the system will always take over task initiative and suggest an alternative item.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Examples",
                "sec_num": "4"
            },
            {
                "text": "We conducted two experiments to evaluate MIMIC's automatic adaptation capabilities. We compared MIMIC with two control systems: MIMIC-SI, a system-initiative version of MIMIC in which the system retains both initiatives throughout the dialogue, and MIMIC-MI, a nonadaptive mixed-initiative version of MIMIC that resembles the behavior of many existing dialogue systems. In this section we summarize these experiments and their results. A companion paper describes the evaluation process and results in further detail (Chu-Carroll and Nickerson, 2000) . Each experiment involved eight users interacting with MIMIC and MIMIC-SI or MIMIC-MI to perform a set of tasks, each requiring the user to obtain specific movie information. User satisfaction was assessed by asking the subjects to fill out a questionnaire after interacting with each version of the system. Furthermore, a number of performance features, largely based on the PARADISE dialogue evaluation scheme (Walker et al., 1997) , were automatically logged, derived, or manually annotated. In addition, we logged the cues automatically detected in each user utterance, as well as the initiative distribution for each turn and the dialogue acts selected to generate each system response.",
                "cite_spans": [
                    {
                        "start": 517,
                        "end": 550,
                        "text": "(Chu-Carroll and Nickerson, 2000)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 964,
                        "end": 985,
                        "text": "(Walker et al., 1997)",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "System Evaluation",
                "sec_num": "5"
            },
            {
                "text": "The features gathered from the dialogue interactions were analyzed along three dimensions: system performance, discourse features (in terms of characteristics of the resulting dialogues, such as the cues detected in user utterances), and initiative distribution. Our results show that MIMIC's adaptation capabilities 1) led to better system performance in terms of user satisfaction, dialogue efficiency (shorter dialogues), and dialogue quality (fewer ASR timeouts), and 2) better matched user expectations (by giving up task initiative when the user intends to have control of the dialogue interaction) and more efficiently resolved dialogue anomalies (by taking over task initiative to provide guidance when no progress is made in the dialogue, or to constrain user utterances when ASR performance is poor).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "System Evaluation",
                "sec_num": "5"
            },
            {
                "text": "In this paper, we discussed MIMIC, an adaptive mixedinitiative spoken dialogue system. MIMIC's automatic adaptation capabilities allow it to employ appropriate strategies based on the cumulative effect of information dynamically extracted from user utterances during dialogue interactions, enabling MIMIC to provide more cooperative and satisfactory responses than existing nonadaptive systems. Furthermore, MIMIC was implemented as a general framework for information query systems by decoupling its initiative module from the goal selection process, while allowing the outcome of both processes to jointly determine the response strategies employed. This feature enables easy modification to MIMIC's adaptation behavior, thus allowing the framework to be used for rapid development and comparisons of experimental prototypes of spoken dialogue systems.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "6"
            },
            {
                "text": "See(Nakatani and Chu-Carroll, 2000) for how MIMIC's dialoguelevel knowledge is used to override default prosodic assignments for concept-to-speech generation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "The author would like to thank Egbert Ammicht, Antoine Saad, Qiru Zhou, Wolfgang Reichl, and Stefan Ortmanns for their help on system integration and on ASR/telephony server development, Jill Nickerson for conducting the evaluation experiments, and Bob Carpenter, Diane Litman, Christine Nakatani, and Jill Nickerson for their comments on an earlier draft of this paper.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Dialog in the RAILTEL telephone-based system",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Bennacef",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Devillers",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Rosset",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Lamel",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Proceedings of the 4th International Conference on Spoken Language Processing",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "S. Bennacef, L. Devillers, S. Rosset, and L. Lamel. 1996. Dialog in the RAILTEL telephone-based sys- tem. In Proceedings of the 4th International Confer- ence on Spoken Language Processing.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "An evidential model for tracking initiative in collaborative dialogue interactions",
                "authors": [
                    {
                        "first": "Jennifer",
                        "middle": [],
                        "last": "Chu",
                        "suffix": ""
                    },
                    {
                        "first": "-",
                        "middle": [],
                        "last": "Carroll",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [
                            "K"
                        ],
                        "last": "Brown",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "User Modeling and User-Adapted Interaction",
                "volume": "8",
                "issue": "3-4",
                "pages": "215--253",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jennifer Chu-Carroll and Michael K. Brown. 1998. An evidential model for tracking initiative in collabora- tive dialogue interactions. User Modeling and User- Adapted Interaction, 8(3-4):215-253.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Vectorbased natural language call routing",
                "authors": [
                    {
                        "first": "Jennifer",
                        "middle": [],
                        "last": "Chu",
                        "suffix": ""
                    },
                    {
                        "first": "-",
                        "middle": [],
                        "last": "Carroll",
                        "suffix": ""
                    },
                    {
                        "first": "Bob",
                        "middle": [],
                        "last": "Carpenter",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Computational Linguistics",
                "volume": "25",
                "issue": "3",
                "pages": "361--388",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jennifer Chu-Carroll and Bob Carpenter. 1999. Vector- based natural language call routing. Computational Linguistics, 25(3):361-388.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Evaluating automatic dialogue strategy adaptation for a spoken dialogue system",
                "authors": [
                    {
                        "first": "Jennifer",
                        "middle": [],
                        "last": "Chu",
                        "suffix": ""
                    },
                    {
                        "first": "-",
                        "middle": [],
                        "last": "Carroll",
                        "suffix": ""
                    },
                    {
                        "first": "Jill",
                        "middle": [
                            "S"
                        ],
                        "last": "Nickerson",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of the 1st Conference of the North American Chapter of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jennifer Chu-Carroll and Jill S. Nickerson. 2000. Evalu- ating automatic dialogue strategy adaptation for a spo- ken dialogue system. In Proceedings of the 1st Con- ference of the North American Chapter of the Associ- ation for Computational Linguistics. To appear.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "The Dempster-Shafer theory of evidence",
                "authors": [
                    {
                        "first": "Jean",
                        "middle": [],
                        "last": "Gordon",
                        "suffix": ""
                    },
                    {
                        "first": "Edward",
                        "middle": [
                            "H"
                        ],
                        "last": "Shortliffe",
                        "suffix": ""
                    }
                ],
                "year": 1984,
                "venue": "Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project",
                "volume": "13",
                "issue": "",
                "pages": "272--292",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jean Gordon and Edward H. Shortliffe. 1984. The Dempster-Shafer theory of evidence. In Bruce Buchanan and Edward Shortliffe, editors, Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project, chapter 13, pages 272-292. Addison-Wesley.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Spoken language dialog system development and evaluation at LIMSI",
                "authors": [
                    {
                        "first": "Lori",
                        "middle": [],
                        "last": "Lamel",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Proceedings of the International Symposium on Spoken Dialogue",
                "volume": "",
                "issue": "",
                "pages": "9--17",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lori Lamel. 1998. Spoken language dialog system de- velopment and evaluation at LIMSI. In Proceedings of the International Symposium on Spoken Dialogue, pages 9-17.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Empirically evaluating an adaptable spoken dialogue system",
                "authors": [
                    {
                        "first": "Diane",
                        "middle": [
                            "J"
                        ],
                        "last": "Litman",
                        "suffix": ""
                    },
                    {
                        "first": "Shimei",
                        "middle": [],
                        "last": "Pan",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Proceedings of the 7th International Conference on User Modeling",
                "volume": "",
                "issue": "",
                "pages": "55--64",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Diane J. Litman and Shimei Pan. 1999. Empirically evaluating an adaptable spoken dialogue system. In Proceedings of the 7th International Conference on User Modeling, pages 55-64.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Evaluating response strategies in a web-based spoken dialogue agent",
                "authors": [
                    {
                        "first": "Diane",
                        "middle": [
                            "J"
                        ],
                        "last": "Litman",
                        "suffix": ""
                    },
                    {
                        "first": "Shimei",
                        "middle": [],
                        "last": "Pan",
                        "suffix": ""
                    },
                    {
                        "first": "Marilyn",
                        "middle": [
                            "A"
                        ],
                        "last": "Walker",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Diane J. Litman, Shimei Pan, and Marilyn A. Walker. 1998. Evaluating response strategies in a web-based spoken dialogue agent. In Proceedings of the 36th",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Annual Meeting of the Association for Computational Linguistics",
                "authors": [],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "780--786",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 780-786.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Using dialogue representations for concept-to-speech generation",
                "authors": [
                    {
                        "first": "Christine",
                        "middle": [
                            "H"
                        ],
                        "last": "Nakatani",
                        "suffix": ""
                    },
                    {
                        "first": "Jennifer",
                        "middle": [],
                        "last": "Chu-Carroll",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of the ANLP-NAACL Workshop on Conversational Systems",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Christine H. Nakatani and Jennifer Chu-Carroll. 2000. Using dialogue representations for concept-to-speech generation. In Proceedings of the ANLP-NAACL Workshop on Conversational Systems.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "An efficient decoding method for real time speech recognition",
                "authors": [
                    {
                        "first": "Stefan",
                        "middle": [],
                        "last": "Ortmanns",
                        "suffix": ""
                    },
                    {
                        "first": "Wolfgang",
                        "middle": [],
                        "last": "Reichl",
                        "suffix": ""
                    },
                    {
                        "first": "Wu",
                        "middle": [],
                        "last": "Chou",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Proceedings of the 5th European Conference on Speech Communication and Technology",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Stefan Ortmanns, Wolfgang Reichl, and Wu Chou. 1999. An efficient decoding method for real time speech recognition. In Proceedings of the 5th European Con- ference on Speech Communication and Technology.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "A constraint-based model for cooperative response generation in information dialogues",
                "authors": [
                    {
                        "first": "Yan",
                        "middle": [],
                        "last": "Qu",
                        "suffix": ""
                    },
                    {
                        "first": "Steve",
                        "middle": [],
                        "last": "Beale",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Proceedings of the Sixteenth National Conference on Artificial Intelligence",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yan Qu and Steve Beale. 1999. A constraint-based model for cooperative response generation in informa- tion dialogues. In Proceedings of the Sixteenth Na- tional Conference on Artificial Intelligence.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Eliciting additional information during cooperative consultations",
                "authors": [
                    {
                        "first": "Bhavani",
                        "middle": [],
                        "last": "Raskutti",
                        "suffix": ""
                    },
                    {
                        "first": "Ingrid",
                        "middle": [],
                        "last": "Zukerman",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "Proceedings of the 15th Annual Meeting of the Cognitive Science Society",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Bhavani Raskutti and Ingrid Zukerman. 1993. Elicit- ing additional information during cooperative consul- tations. In Proceedings of the 15th Annual Meeting of the Cognitive Science Society.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Decision tree state tying based on segmental clustering for acoustic modeling",
                "authors": [
                    {
                        "first": "Wolfgang",
                        "middle": [],
                        "last": "Reichl",
                        "suffix": ""
                    },
                    {
                        "first": "Wu",
                        "middle": [],
                        "last": "Chou",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Proceedings of the International Conference on Acoustics, Speech, and Signal Processing",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Wolfgang Reichl and Wu Chou. 1998. Decision tree state tying based on segmental clustering for acoustic modeling. In Proceedings of the International Confer- ence on Acoustics, Speech, and Signal Processing.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "The SMART Retrieval System",
                "authors": [
                    {
                        "first": "Gerard",
                        "middle": [],
                        "last": "Salton",
                        "suffix": ""
                    }
                ],
                "year": 1971,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gerald Salton. 1971. The SMART Retrieval System. Prentice Hall, Inc.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Development and preliminary evaluation of the MIT ATIS system",
                "authors": [
                    {
                        "first": "Stephanie",
                        "middle": [],
                        "last": "Seneff",
                        "suffix": ""
                    },
                    {
                        "first": "James",
                        "middle": [],
                        "last": "Glass",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Goddeau",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Goodine",
                        "suffix": ""
                    },
                    {
                        "first": "Lynette",
                        "middle": [],
                        "last": "Hirschman",
                        "suffix": ""
                    },
                    {
                        "first": "Hong",
                        "middle": [],
                        "last": "Leung",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Phillips",
                        "suffix": ""
                    },
                    {
                        "first": "Joseph",
                        "middle": [],
                        "last": "Polifroni",
                        "suffix": ""
                    },
                    {
                        "first": "Victor",
                        "middle": [],
                        "last": "Zue",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "Proceedings of the DARPA Speech and Natural Language Workshop",
                "volume": "",
                "issue": "",
                "pages": "88--93",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Stephanie Seneff, James Glass, David Goddeau, David Goodine, Lynette Hirschman, Hong Leung, Michael Phillips, Joseph Polifroni, and Victor Zue. 1991. De- velopment and preliminary evaluation of the MIT ATIS system. In Proceedings of the DARPA Speech and Natural Language Workshop, pages 88-93.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "A Mathematical Theory of Evidence",
                "authors": [
                    {
                        "first": "Glenn",
                        "middle": [],
                        "last": "Shafer",
                        "suffix": ""
                    }
                ],
                "year": 1976,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Glenn Shafer. 1976. A Mathematical Theory of Evi- dence. Princeton University Press.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Multilingual Text-to-Speech Synthesis: The Bell Labs Approach",
                "authors": [],
                "year": 1998,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Richard Sproat, editor. 1998. Multilingual Text-to- Speech Synthesis: The Bell Labs Approach. Kluwer, Boston, MA.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "The CommandTalk spoken dialogue system",
                "authors": [
                    {
                        "first": "Amanda",
                        "middle": [],
                        "last": "Stent",
                        "suffix": ""
                    },
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Dowding",
                        "suffix": ""
                    },
                    {
                        "first": "Jean",
                        "middle": [
                            "Mark"
                        ],
                        "last": "Gawron",
                        "suffix": ""
                    },
                    {
                        "first": "Elizabeth",
                        "middle": [
                            "Owen"
                        ],
                        "last": "Bratt",
                        "suffix": ""
                    },
                    {
                        "first": "Robert",
                        "middle": [],
                        "last": "Moore",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "183--190",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Amanda Stent, John Dowding, Jean Mark Gawron, Eliz- abeth Owen Bratt, and Robert Moore. 1999. The CommandTalk spoken dialogue system. In Proceed- ings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 183-190.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "From plan critiquing to clarification dialogue for cooperative response generation",
                "authors": [
                    {
                        "first": "Peter",
                        "middle": [],
                        "last": "van Beek",
                        "suffix": ""
                    },
                    {
                        "first": "Robin",
                        "middle": [],
                        "last": "Cohen",
                        "suffix": ""
                    },
                    {
                        "first": "Ken",
                        "middle": [],
                        "last": "Schmidt",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "Computational Intelligence",
                "volume": "9",
                "issue": "2",
                "pages": "132--154",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Peter van Beek, Robin Cohen, and Ken Schmidt. 1993. From plan critiquing to clarification dialogue for co- operative response generation. Computational Intelli- gence, 9(2):132-154.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Mixed initiative in dialogue: An investigation into discourse segmentation",
                "authors": [
                    {
                        "first": "Marilyn",
                        "middle": [],
                        "last": "Walker",
                        "suffix": ""
                    },
                    {
                        "first": "Steve",
                        "middle": [],
                        "last": "Whittaker",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "70--78",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Marilyn Walker and Steve Whittaker. 1990. Mixed ini- tiative in dialogue: An investigation into discourse segmentation. In Proceedings of the 28th Annual Meeting of the Association for Computational Lin- guistics, pages 70-78.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "PARADISE: A framework for evaluating spoken dialogue agents",
                "authors": [
                    {
                        "first": "Marilyn",
                        "middle": [
                            "A"
                        ],
                        "last": "Walker",
                        "suffix": ""
                    },
                    {
                        "first": "Diane",
                        "middle": [
                            "J"
                        ],
                        "last": "Litman",
                        "suffix": ""
                    },
                    {
                        "first": "Candace",
                        "middle": [
                            "A"
                        ],
                        "last": "Kamm",
                        "suffix": ""
                    },
                    {
                        "first": "Alicia",
                        "middle": [],
                        "last": "Abella",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "271--280",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Marilyn A. Walker, Diane J. Litman, Candance A. Kamm, and Alicia Abella. 1997. PARADISE: A framework for evaluating spoken dialogue agents. In Proceedings of the 35th Annual Meeting of the Associ- ation for Computational Linguistics, pages 271-280.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Cues and control in expert-client dialogues",
                "authors": [
                    {
                        "first": "Steve",
                        "middle": [],
                        "last": "Whittaker",
                        "suffix": ""
                    },
                    {
                        "first": "Phil",
                        "middle": [],
                        "last": "Stenton",
                        "suffix": ""
                    }
                ],
                "year": 1988,
                "venue": "Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "123--130",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Steve Whittaker and Phil Stenton. 1988. Cues and con- trol in expert-client dialogues. In Proceedings of the 26th Annual Meeting of the Association for Computa- tional Linguistics, pages 123-130.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "Speech technology integration and research platform: A system study",
                "authors": [
                    {
                        "first": "Qiru",
                        "middle": [],
                        "last": "Zhou",
                        "suffix": ""
                    },
                    {
                        "first": "Chin-Hui",
                        "middle": [],
                        "last": "Lee",
                        "suffix": ""
                    },
                    {
                        "first": "Wu",
                        "middle": [],
                        "last": "Chou",
                        "suffix": ""
                    },
                    {
                        "first": "Andrew",
                        "middle": [],
                        "last": "Pargellis",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Proceedings of the 5th European Conference on Speech Communication and Technology",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Qiru Zhou, Chin-Hui Lee, Wu Chou, and Andrew Pargel- lis. 1997. Speech technology integration and research platform: A system study. In Proceedings of the 5th European Conference on Speech Communication and Technology.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "uris": null,
                "type_str": "figure",
                "num": null,
                "text": "Figure 2(a) shows the frame-based semantic representation for the utterance \"What time is Analyze This playing in Montclair?\""
            },
            "FIGREF1": {
                "uris": null,
                "type_str": "figure",
                "num": null,
                "text": "4In practice, this is the preferred initiative holder, since practical reasons may prevent the dialogue participant from actually holding the initiative. For instance, if having task initiative dictates inclusion of additional helpful information, this can only be realized if MIMIC's knowledge base provides such information. mt-tot({speaker}) = 0.35; mt-tot(O) = 0.65; mt-nni({hearer}) = 0.35; mt-nni(O) = 0.65; mt-ia({hearer}) = 0.35; mt-ia(O) = 0.65; mt-iar({hearer}) = 0.35; mt-iar(O) = 0.65; mt-aa({hearer}) = 0.35; mt-aa(O) = 0.65; md-tot({speaker}) = 0.35; md-tot(O) = 0.65; md-nni({hearer}) = 0.35; md-nni(O) = 0.65; md-ia({hearer}) = 0.7; md-ia(O) = 0.3; md-iar({hearer}) = 0.7; md-iar(O) = 0.3; md-aa({hearer}) = 0.7; md-aa(O) = 0.3"
            },
            "FIGREF2": {
                "uris": null,
                "type_str": "figure",
                "num": null,
                "text": "Based on MIMIC's role as an information provider, the initial bpas are mt-(3)({S}) = 0.3, mt-(3)({U}) = 0.7; md-(3)({S}) = 0.6, md-(3)({U}) = 0.4."
            },
            "FIGREF3": {
                "uris": null,
                "type_str": "figure",
                "num": null,
                "text": "mt-aa({S}) = 0.35, mt-aa(O) = 0.65; md-aa({S}) = 0.7, md-aa(O) = 0.3. Combining the current bpas with the effects of the observed cue, we obtain the following new bpas: mt-(4)({S}) = 0.4, mt-(4)({U}) = 0.6; md-(4)({S}) = 0.83, md-(4)({U}) = 0.17."
            },
            "FIGREF4": {
                "uris": null,
                "type_str": "figure",
                "num": null,
                "text": "Annotated Dialogue Shown in Figure 1"
            },
            "TABREF2": {
                "type_str": "table",
                "text": "Mappings Between Dialogue Acts and Utterance Templates",
                "content": "<table/>",
                "html": null,
                "num": null
            }
        }
    }
}