{
    "paper_id": "A00-1010",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T01:12:13.798898Z"
    },
    "title": "TALK'N'TRAVEL: A CONVERSATIONAL SYSTEM FOR AIR TRAVEL PLANNING",
    "authors": [
        {
            "first": "David",
            "middle": [],
            "last": "Stallard",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "BBN Technologies",
                "location": {
                    "addrLine": "GTE 70 Fawcett St",
                    "settlement": "Cambridge",
                    "region": "MA",
                    "country": "USA"
                }
            },
            "email": "stallard@bbn.com"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "We describe Talk'n'Travel, a spoken dialogue language system for making air travel plans over the telephone. Talk'n'Travel is a fully conversational, mixed-initiative system that allows the user to specify the constraints on his travel plan in arbitrary order, ask questions, etc., in general spoken English. The system operates according to a plan-based agenda mechanism, rather than a finite state network, and attempts to negotiate with the user when not all of his constraints can be met.",
    "pdf_parse": {
        "paper_id": "A00-1010",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "We describe Talk'n'Travel, a spoken dialogue language system for making air travel plans over the telephone. Talk'n'Travel is a fully conversational, mixed-initiative system that allows the user to specify the constraints on his travel plan in arbitrary order, ask questions, etc., in general spoken English. The system operates according to a plan-based agenda mechanism, rather than a finite state network, and attempts to negotiate with the user when not all of his constraints can be met.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "This paper describes Talk'n'Travel, a spoken language dialogue system for making complex air travel plans over the telephone. Talk'n'Travel is a research prototype system sponsored under the DARPA Communicator program (MITRE, 1999) . Some other systems in the program are Ward and Pellom (1999) , Seneff and Polifroni (2000) and . The common task of this program is a mixed-initiative dialogue over the telephone, in which the user plans a multi-city trip by air, including all flights, hotels, and rental cars, all in conversational English over the telephone.",
                "cite_spans": [
                    {
                        "start": 218,
                        "end": 231,
                        "text": "(MITRE, 1999)",
                        "ref_id": null
                    },
                    {
                        "start": 272,
                        "end": 294,
                        "text": "Ward and Pellom (1999)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 297,
                        "end": 324,
                        "text": "Seneff and Polifroni (2000)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "The Communicator common task presents special challenges. It is a complex task with many subtasks, including the booking of each flight, hotel, and car reservation. Because the number of legs of the trip may be arbitrary, the number of such subtasks is not known in advance. Furthermore, the user has complete freedom to say anything at any time. His utterances can affect just the current subtask, or multiple subtasks at once (\"I want to go from Denver to Chicago and then to San Diego\"). He can go back and change the specifications for completed subtasks. And there are important constraints, such as temporal relationships between flights, that must be maintained for the solution to the whole task to be coherent.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "In order to meet this challenge, we have sought to develop dialogue techniques for Talk'n'Travel that go beyond the rigid systemdirected style of familiar IVR systems. Talk'n'Travel is instead a mixed initiative system that allows the user to specify constraints on his travel plan in arbitrary order. At any point in the dialogue, the user can supply information other than what the system is currently prompting for, change his mind about information he has previously given and even ask questions himself. The system also tries to be helpful, eliciting constraints from the user when necessary. Furthermore, if at any point the constraints the user has specified cannot all be met, the system steps in and offers a relaxation of them in an attempt to negotiate a partial solution with the user.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "The next section gives a brief overview of the system. Relevant components are discussed in subsequent sections.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "The system consists of the following modules: speech recognizer, language understander, dialogue manager, state manager, language generator, and speech synthesizer. The modules interact with each other via the central hub module of the Communicator Common Architecture.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "I System Overview",
                "sec_num": null
            },
            {
                "text": "The speech recognizer is the Byblos system (Nguyen, 1995) . It uses an acoustic model trained from the Macrophone telephone corpus, and a bigram/trigram language model trained from -40K utterances derived from various sources, including data collected under the previous ATIS program (Dahl et al, 1994) .",
                "cite_spans": [
                    {
                        "start": 43,
                        "end": 57,
                        "text": "(Nguyen, 1995)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 284,
                        "end": 302,
                        "text": "(Dahl et al, 1994)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "I System Overview",
                "sec_num": null
            },
            {
                "text": "The speech synthesizer is Lucent's commercial system.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "I System Overview",
                "sec_num": null
            },
            {
                "text": "Synthesizer and recognizer both interface to the telephone via Dialogics telephony board. The database is currently a frozen snapshot of actual flights between 40 different US cities (we are currently engaged in interfacing to a commercial air travel website). The various language components are written in Java. The complete system runs on Windows NT, and is compliant with the DARPA Communicator Common architecture.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "I System Overview",
                "sec_num": null
            },
            {
                "text": "The present paper is concerned with the dialogue and discourse management, language generation and language understanding components. In the remainder of the paper, we present more detailed discussion of these components, beginning with the language understander in Section 2. Section 3 discusses the discourse and dialogue components, and Section 4, the language generator.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "I System Overview",
                "sec_num": null
            },
            {
                "text": "Semantic frames have proven useful as a meaning representation for many applications. Their simplicity and useful computational properties have often been seen as more important than their limitations in expressive power, especially in simpler domains.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Meaning Representation",
                "sec_num": "2.1"
            },
            {
                "text": "Even in such domains, however, flames still have some shortcomings. While most naturally representing equalities between slot and filler, flames have a harder time with inequalities, such as 'the departure time is before 10 AM', or 'the airline is not Delta'. These require the slot-filler to be some sort of predicate, interval, or set object, at a cost to simplicity uniformity. Other problematic cases include n-ary relations ('3 miles from Denver'), and disjunctions of properties on different slots.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Meaning Representation",
                "sec_num": "2.1"
            },
            {
                "text": "In our Talk'n'Travel work, we have developed a meaning representation formalism called path constraints, which overcomes these problems, while retaining the computational advantages that made frames attractive in the first place. A path constraint is an expression of the form :",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Meaning Representation",
                "sec_num": "2.1"
            },
            {
                "text": "(<path> <relation> <arguments>*)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Meaning Representation",
                "sec_num": "2.1"
            },
            {
                "text": "The path is a compositional chain of one or more attributes, and relations are 1-place or higher predicates, whose first argument is implicitly the path. The relation is followed by zero or more other arguments. In the simplest case, path constraints can be thought of as flattenings of a tree of frames. The following represents the constraint that the departure time of the first leg of the itinerary is the city Boston :",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Meaning Representation",
                "sec_num": "2.1"
            },
            {
                "text": "Because this syntax generalizes to any relation, however, the constraint \"departing before 10 AM\" can be represented in a syntactically equivalent way:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LEGS.0.ORIG_CITY EQ BoSToN",
                "sec_num": null
            },
            {
                "text": "LEGS.0.DEPART_TIME LT 1000",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LEGS.0.ORIG_CITY EQ BoSToN",
                "sec_num": null
            },
            {
                "text": "Because the number of arguments is arbitrary, it is equally straightforward to represent a oneplace property like \"x is nonstop\" and a three place predicate like \"x is 10 miles from Denver\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LEGS.0.ORIG_CITY EQ BoSToN",
                "sec_num": null
            },
            {
                "text": "Like flames, path constraints have a fixed format that is indexed in a computationally useful way, and are simpler than logical forms. Unlike flames, however, path constraints can be combined in arbitrary conjunctions, disjunctions, and negations, even across different paths. Path constraint meaning representations are also flat lists of constraints rather than trees, making matching rules, etc, easier to write for them.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LEGS.0.ORIG_CITY EQ BoSToN",
                "sec_num": null
            },
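As an illustrative aside (not the paper's implementation), the path-constraint representation described above can be sketched as a small record plus a flat list of constraints; the names PathConstraint and constraints_on below are hypothetical.

```python
# Minimal sketch of a path constraint (<path> <relation> <arguments>*) and of a
# task state as a flat list of constraints. Names are hypothetical.
from dataclasses import dataclass
from typing import Any, List, Tuple

@dataclass(frozen=True)
class PathConstraint:
    path: str              # compositional attribute chain, e.g. "LEGS.0.ORIG_CITY"
    relation: str          # "EQ", "LT", ...; the path is implicitly the first argument
    args: Tuple[Any, ...]  # zero or more further arguments

state: List[PathConstraint] = [
    PathConstraint("LEGS.0.ORIG_CITY", "EQ", ("BOSTON",)),
    PathConstraint("LEGS.0.DEPART_TIME", "LT", (1000,)),  # "departing before 10 AM"
]

def constraints_on(prefix: str, s: List[PathConstraint]) -> List[PathConstraint]:
    """All constraints whose path starts with the given prefix, e.g. one leg."""
    return [c for c in s if c.path.startswith(prefix)]

print(constraints_on("LEGS.0.", state))
```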
            {
                "text": "Language understanding in Talk'n'Travel is carried out using a system called GEM (for Generative Extraction Model). GEM (Miller, 1998) is a probabilistic semantic grammar that is an outgrowth of the work on the HUM system (Miller, 1996) , but uses hand-specified knowledge in addition to probability. The handspecified knowledge is quite simple, and is expressed by a two-level semantic dictionary. In the first level, the entries map alternative word strings to a single word class. For example, the following entry maps several alternative forms to the word class DEPART:",
                "cite_spans": [
                    {
                        "start": 120,
                        "end": 134,
                        "text": "(Miller, 1998)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 222,
                        "end": 236,
                        "text": "(Miller, 1996)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The GEM Understanding System",
                "sec_num": "2.2"
            },
            {
                "text": "Leave, depart, get out of => DEPART",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The GEM Understanding System",
                "sec_num": "2.2"
            },
            {
                "text": "In the second level, entries map sequences of word classes to constraints:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The GEM Understanding System",
                "sec_num": "2.2"
            },
            {
                "text": "Name: DepartCity 1 Head: DEPART Classes: [DEPART FROM CITY] Meaning: (DEST_CITY EQ <CITY>)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The GEM Understanding System",
                "sec_num": "2.2"
            },
            {
                "text": "The \"head\" feature allows the entry to pass one of its constituent word classes up to a higher level pattern, allowing the given pattern to be a constituent of others.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The GEM Understanding System",
                "sec_num": "2.2"
            },
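The two-level dictionary can be illustrated with a toy lookup like the one below; this is only a non-probabilistic sketch under assumed names (WORD_CLASSES, PATTERNS, interpret), not the GEM system itself.

```python
# Toy, non-probabilistic sketch of the two-level semantic dictionary described
# above. The real GEM system compiles such entries into a probabilistic network.

# Level 1: word strings -> word classes
WORD_CLASSES = {"leave": "DEPART", "depart": "DEPART",
                "from": "FROM", "boston": "CITY", "denver": "CITY"}

# Level 2: word-class sequences -> constraint templates
PATTERNS = [{"name": "DepartCity", "head": "DEPART",
             "classes": ["DEPART", "FROM", "CITY"],
             "meaning": ("ORIG_CITY", "EQ")}]

def interpret(tokens):
    classes = [WORD_CLASSES.get(t.lower(), t.upper()) for t in tokens]
    for p in PATTERNS:
        if classes == p["classes"]:
            path, relation = p["meaning"]
            value = tokens[classes.index("CITY")].upper()
            return (path, relation, value)
    return None

print(interpret(["depart", "from", "Boston"]))  # ('ORIG_CITY', 'EQ', 'BOSTON')
```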
            {
                "text": "The dictionary entries generate a probabilistic recursive transition network (PRTN), whose specific structure is determined by dictionary entries. Paths through this network correspond one-to-one with parse trees, so that given a path, there is exactly one corresponding tree. The probabilities for the arcs in this network can be estimated from training data using the EM (Expectation-Maximization) procedure.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The GEM Understanding System",
                "sec_num": "2.2"
            },
            {
                "text": "GEM also includes a noise state to which arbitrary input between patterns can be mapped, making the system quite robust to ill-formed input. There is no separate phase for handling ungrammatical input, nor any distinction between grammatical and ungrammatical input.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The GEM Understanding System",
                "sec_num": "2.2"
            },
            {
                "text": "A key feature of the Communicator task is that the user can say anything at any time, adding or changing information at will. He may add new subtasks (e.g. trip legs) or modifying existing ones. A conventional dialogue state network approach would be therefore infeasible, as the network would be almost unboundedly large and complex.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discourse and Dialogue Processing",
                "sec_num": "3"
            },
            {
                "text": "A signifigant additional problem is that changes need not be monotonic. In particular, when changing his mind, or correcting the system's misinterpretations, the user may delete subtask structures altogether, as in the subdialog:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discourse and Dialogue Processing",
                "sec_num": "3"
            },
            {
                "text": "S: What day are you returning to Chicago? U: No, I don't want a return flight.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discourse and Dialogue Processing",
                "sec_num": "3"
            },
            {
                "text": "Because they take information away rather than add it, scenarios like this one make it problematic to view discourse processing as producing a contextualized, or \"thick frame\", version of the user's utterance. In our system, therefore, we have chosen a somewhat different approach.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discourse and Dialogue Processing",
                "sec_num": "3"
            },
            {
                "text": "The discourse processor, called the state manager, computes the most likely new task state, based on the user's input and the current task state. It also computes a discourse event, representing its interpretation of what happened in the conversation as a result of the user's utterance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discourse and Dialogue Processing",
                "sec_num": "3"
            },
            {
                "text": "The dialogue manager is a separate module, as has no state managing responsibilities at all. Rather, it simply computes the next action to take, based on its current goal agenda, the discourse event returned by the state manager, and the new state. This design has the advantage of making the dialogue manager considerably simpler. The discourse event also becomes available to convey to the user as confirmation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discourse and Dialogue Processing",
                "sec_num": "3"
            },
            {
                "text": "We discuss these two modules in more detail below.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discourse and Dialogue Processing",
                "sec_num": "3"
            },
            {
                "text": "The state manager is responsible for computing and maintaining the current task state. The task state is simply the set of path constraints which currently constrain the user's itinerary. Also included in the task state are the history of user and system utterances, and the current subtask and object in focus, if any. At any of these steps, zero or more alternative new states can result, and are fed to the next step. If zero states result at any step, the new meaning representation is rejected, and another one requested from the understander. If no more hypotheses are available, the entire utterance is rejected, and a DONT_UNDERSTAND event is returned to the dialogue manager.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "State Manager",
                "sec_num": "3.1"
            },
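A control-flow sketch of this stepwise processing is shown below; apply_steps and the score argument are hypothetical names, and the individual steps correspond to Steps 1 through 6 and the final rank-ordering described in the following paragraphs.

```python
# Hypothetical control-flow sketch: each step maps a candidate state to zero or
# more new candidates; if every candidate dies at some step, the meaning
# hypothesis is rejected and the caller falls back to another hypothesis
# (or ultimately to a DONT_UNDERSTAND event).

def apply_steps(current_state, meaning, steps, score):
    candidates = [current_state]
    for step in steps:
        next_candidates = []
        for state in candidates:
            next_candidates.extend(step(state, meaning))  # zero or more results
        if not next_candidates:
            return None          # reject this meaning representation
        candidates = next_candidates
    return max(candidates, key=score)  # final step: pragmatic rank-ordering

# Trivial demo with a single pass-through step:
print(apply_steps({"LEGS.0.ORIG_CITY": "BOSTON"}, None,
                  steps=[lambda s, m: [s]], score=lambda s: 0))
```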
            {
                "text": "Step 1 resolves ellipses. Ellipses include both short responses like \"Boston\" and yes/no responses. In this step, a complete meaning representation such as '(ORIQCITY EQ BOSTON)' is generated based on the system's prompt and the input meaning. The hypothesis is rejected if this cannot be done.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "State Manager",
                "sec_num": "3.1"
            },
            {
                "text": "Step 2 matches the input meaning to one or more of the subtasks of the problem. For the Communicator problem, the subtasks are legs of the user's itinerary, and matching is done based on cities mentioned in the input meaning. The default is the subtask currently in focus in the dialogue.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "State Manager",
                "sec_num": "3.1"
            },
            {
                "text": "A match to a subtask is represented by adding the prefix for the subtask to the path of the constraint. For example, \"I want to arrive in Denver by 4 PM\" and then continue on to Chicago would be : Step 3, local ambiguities are expanded into their different possibilities. These include partially specified times such as \"2 o'clock\"",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "State Manager",
                "sec_num": "3.1"
            },
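As a hypothetical illustration of Step 2 (not the system's code), matching a constraint to a leg amounts to prefixing its path with that leg's index; the attribute names below are illustrative.

```python
# Hypothetical sketch of Step 2: attach an input constraint to a matched leg by
# prefixing the subtask path, e.g. "LEGS.1.".

def attach_to_leg(constraint, leg_index):
    path, relation, args = constraint
    return (f"LEGS.{leg_index}.{path}", relation, args)

print(attach_to_leg(("DEST_CITY", "EQ", ("DENVER",)), 0))
# ('LEGS.0.DEST_CITY', 'EQ', ('DENVER',))
print(attach_to_leg(("DEST_CITY", "EQ", ("CHICAGO",)), 1))
# ('LEGS.1.DEST_CITY', 'EQ', ('CHICAGO',))
```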
            {
                "text": "Step 4 applies inference and coherency rules. These rules will vary from application to application. They are written in the path constraint formalism, augmented with variables that can range over attributes and other values.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "State Manager",
                "sec_num": "3.1"
            },
            {
                "text": "The following is an example, representing the constraint a flight leg cannot be scheduled to depart until after the preceding flight arrives:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "State Manager",
                "sec_num": "3.1"
            },
            {
                "text": "LEGS.$N.ARRIVE LT LEGS. $N+ 1 .DEPART",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "State Manager",
                "sec_num": "3.1"
            },
            {
                "text": "States that violate coherency constraints are discarded.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "State Manager",
                "sec_num": "3.1"
            },
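A minimal sketch of checking the rule above over a candidate state follows; the dict-based state and the function name coherent are assumptions for illustration only.

```python
# Hypothetical check of LEGS.$N.ARRIVE LT LEGS.$N+1.DEPART over a candidate
# state, here a plain dict from path to value for illustration.

def coherent(state, num_legs):
    for n in range(num_legs - 1):
        arrive = state.get(f"LEGS.{n}.ARRIVE")
        depart = state.get(f"LEGS.{n + 1}.DEPART")
        if arrive is not None and depart is not None and not arrive < depart:
            return False  # states violating the rule are discarded
    return True

print(coherent({"LEGS.0.ARRIVE": 1600, "LEGS.1.DEPART": 1700}, 2))  # True
print(coherent({"LEGS.0.ARRIVE": 1800, "LEGS.1.DEPART": 1700}, 2))  # False
```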
            {
                "text": "Step 5 computes the set of objects in the database that satisfy the constraints on the current subtask. This set will be empty when the constraints are not all satisfiable, in which case the relaxation of Step 6 is invoked. This relaxation is a best-first search for the satisfiable subset of the constraints that are deemed closest to what the user originally wanted. Alternative relaxations are scored according to a sum of penalty scores for each relaxed constraint, derived from earlier work by Stallard (1995) . The penalty score is the sum of two terms: one for the relative importance of the attribute concerned (e.g. relaxations of DEPART_DATE are penalised more than relaxations of AIRLINE) and the other for the nearness of the satisfiers to the original constraint (relevant for number-like attributes like departure time).",
                "cite_spans": [
                    {
                        "start": 499,
                        "end": 514,
                        "text": "Stallard (1995)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "State Manager",
                "sec_num": "3.1"
            },
            {
                "text": "The latter allows the system to give credit to solutions that are near fits to the user's goals, even if they relax strongly desired constraints. For example, suppose the user has expressed a desire to fly on Delta and arrive by 3 PM, while the system is only able to find a flight on Delta that arrives at 3:15 PM. In this case, this flight, which meets one constraint and almost meets the other, may well satisfy the user more than a flight on a different airline that happens to meet the time constraint exactly.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "State Manager",
                "sec_num": "3.1"
            },
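The scoring idea can be sketched as below; the weights, the nearness scaling, and the function relaxation_penalty are made-up illustrations of the two-term penalty, not the actual scores of Stallard (1995).

```python
# Hypothetical two-term penalty for a candidate relaxation: an importance term
# per relaxed attribute plus a nearness term for number-like values. All
# weights here are invented for illustration.

ATTRIBUTE_IMPORTANCE = {"DEPART_DATE": 10.0, "AIRLINE": 5.0, "ARRIVE_TIME": 3.0}

def relaxation_penalty(relaxed):
    """relaxed: list of (attribute, requested_value, offered_value) tuples."""
    total = 0.0
    for attribute, wanted, offered in relaxed:
        total += ATTRIBUTE_IMPORTANCE.get(attribute, 1.0)
        if isinstance(wanted, (int, float)):
            total += abs(offered - wanted) / 100.0  # nearness term
    return total

# With these weights, arriving at 3:15 instead of 3:00 on the requested airline
# is penalized less than switching airlines:
print(relaxation_penalty([("ARRIVE_TIME", 1500, 1515)]))     # 3.15
print(relaxation_penalty([("AIRLINE", "DELTA", "UNITED")]))  # 5.0
```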
            {
                "text": "In the final step, the alternative new states are rank-ordered according to a pragmatic score, and the highest-scoring alternative is chosen. The pragmatic score is computed based on a number of factors, including the plausibility of disambiguated times and whether or not the state interpreted the user as responding to the system prompt.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "State Manager",
                "sec_num": "3.1"
            },
            {
                "text": "The appropriate discourse event is then deterministicaUy computed and returned. There are several types of discourse event. The most common is UPDATE, which specifies the constraints that have been added, removed, or relaxed. Another type is REPEAT, which is generated when the user has simply repeated constraints the system already knows. Other types include QUESTION, TIMEOUT, and DONT UNDERSTAND.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "State Manager",
                "sec_num": "3.1"
            },
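The discourse event can be pictured as a small record such as the following; the field names are hypothetical, not the system's data structures.

```python
# Hypothetical shape of a discourse event; field names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DiscourseEvent:
    kind: str  # "UPDATE", "REPEAT", "QUESTION", "TIMEOUT", or "DONT_UNDERSTAND"
    added: List[Tuple] = field(default_factory=list)    # constraints added
    removed: List[Tuple] = field(default_factory=list)  # constraints removed/relaxed

event = DiscourseEvent("UPDATE", added=[("LEGS.0.ORIG_CITY", "EQ", "BOSTON")])
print(event)
```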
            {
                "text": "Upon receiving the new discourse event from the state manager, the dialogue manager determines what next action to take. Actions can be external, such as speaking to the user or asking him a question, or internal, such as querying the database or other elements of the system state. The current action is determined by consulting a stack-based agenda of goals and actions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dialogue Manager",
                "sec_num": "3.1"
            },
            {
                "text": "The agenda stack is in turn determined by an application-dependent library of plans. Plans are tree structures whose root is the name of the goal the plan is designed to solve, and whose leaves are either other goal names or actions. An example of a plan is the following: The system begins the interaction with the highlevel goal START on its stack. At each step, the system examines the top of its goal stack and either executes it if it is an action suitable for execution, or replaces it on the stack with its plan steps if it is a goal.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dialogue Manager",
                "sec_num": "3.1"
            },
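A purely hypothetical sketch of the expand-or-execute loop over the agenda stack follows; the goal names and plan library below are invented, not the system's.

```python
# Hypothetical sketch of the agenda loop: a goal on top of the stack is replaced
# by its plan steps; an action is executed. Success/relevancy re-checking after
# execution is omitted here for brevity.

PLANS = {
    "START": ["BOOK_ITINERARY", "WRAP_UP"],
    "BOOK_ITINERARY": ["GET_ORIG_CITY", "GET_DEST_CITY", "GET_DATE"],
}

def run(agenda):
    while agenda:
        top = agenda[-1]
        if top in PLANS:                        # a goal: expand it
            agenda.pop()
            agenda.extend(reversed(PLANS[top]))
        else:                                   # an action: execute it
            print("executing", top)
            agenda.pop()

run(["START"])
# executing GET_ORIG_CITY ... GET_DEST_CITY ... GET_DATE ... WRAP_UP
```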
            {
                "text": "Actions are objects with success and relevancy predicates and an execute method, somewhat similar to the \"handlers\" of . An action has an underlying goal, such as finding out the user's constraints on some attribute. The action's success predicate will return true if this underlying goal has been achieved, and its relevancy predicate will return true if it is still relevant to the current situation. Before carrying out an action, the dialogue manager first checks to see if its success predicate returns false and its relevancy predicate returns true. If either condition is not met, the action is popped off the stack and disposed of without being executed. Otherwise, the action's execute method is invoked.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dialogue Manager",
                "sec_num": "3.1"
            },
            {
                "text": "The system includes a set of actions that are built in, and may be parameterized for each each domain. For example, the action type ELICIT is parameterized by an attribute A, a path prefix P, and verbalization string S. Its success predicate returns true if the path 'P.A' is constrained in the current state. Its execute method generates a meaning frame that is passed to the language generator, ultimately prompting the user with a question such as \"What city are you flying to?\"",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dialogue Manager",
                "sec_num": "3.1"
            },
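An ELICIT-style action might look like the sketch below; the class name, the state representation, and printing in place of calling the language generator are all assumptions for illustration.

```python
# Hypothetical ELICIT-style action: success means the path P.A is already
# constrained; execute() would normally send a meaning frame to the generator.

class ElicitAction:
    def __init__(self, attribute, prefix, prompt):
        self.attribute, self.prefix, self.prompt = attribute, prefix, prompt

    def succeeded(self, state):               # state: list of (path, rel, args)
        path = f"{self.prefix}.{self.attribute}"
        return any(c[0] == path for c in state)

    def relevant(self, state):
        return True                           # simplified

    def execute(self, state):
        print(self.prompt)

action = ElicitAction("DEST_CITY", "LEGS.0", "What city are you flying to?")
state = [("LEGS.0.ORIG_CITY", "EQ", ("BOSTON",))]
if not action.succeeded(state) and action.relevant(state):
    action.execute(state)                     # prompts for the destination city
```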
            {
                "text": "Once an action's execute method is invoked, it remains on the stack for the next cycle, where it is tested again for success and relevancy. In this case, if the success condition is met -that is, if the user did indeed reply with a specification of his destination city -the action is popped off the stack.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dialogue Manager",
                "sec_num": "3.1"
            },
            {
                "text": "If the system did not receive this information, either because the user made a stipulation about some different attribute, asked a question, or simply was not understood, the action remains on the stack to be executed again. Of course, the user may have already specified the destination city in a previous utterance. In this case, the action is already satisfied, and is not executed.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dialogue Manager",
                "sec_num": "3.1"
            },
            {
                "text": "In this way, the user has flexibility in how he actually carries out the dialogue.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dialogue Manager",
                "sec_num": "3.1"
            },
            {
                "text": "In certain situations, other goals and actions may be pushed onto the stack, temporarily interrupting the execution of the current plan. For example, the user himself may ask a question. In this case, an action to answer the question is created, and pushed onto the stack. The dialogue manager then executes this action to answer the user's question before continuing on with the plan. Or the state manager may generate a clarification question, which the dialogue manager seeks to have the user answer.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dialogue Manager",
                "sec_num": "3.1"
            },
            {
                "text": "Actions can also have a set of conditional branchings that are tested after the action is executed. If present, these determine the next action to execute or goal to work on. For example, the action that asks the user \"Do you want a return flight to X?\" specifies the branch to be taken when the user replies in the negative. This branch includes an action that asks the user \"Is Y your final destination?\", an action that is executed if the user did not specify an additional destination along with his negative reply.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dialogue Manager",
                "sec_num": "3.1"
            },
            {
                "text": "Unlike the approach taken by Ward and Pellom (1999) , which seeks to avoid scripting entirely by driving the dialogue off the current status of the itinerary, the Talk'n'Travel dialogue manager thus seeks to allow partially scripted dialogue where appropriate to the situation.",
                "cite_spans": [
                    {
                        "start": 29,
                        "end": 51,
                        "text": "Ward and Pellom (1999)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dialogue Manager",
                "sec_num": "3.1"
            },
            {
                "text": "The language generator takes a meaning frame from the dialogue manager, and generates a text string in English for it. It uses a set of patternbased rules that map constraints into alternative syntactic realisations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Language Generation",
                "sec_num": "4"
            },
            {
                "text": "For example, the following rule allows a constraint on departure time to be realized as \"leave at 3 PM\" or \"3 PM flight\":",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Language Generation",
                "sec_num": "4"
            },
            {
                "text": "LEG.$N.DEPART_TIME EQ $X =~ [leave at $X], [nom-comp $X]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Language Generation",
                "sec_num": "4"
            },
            {
                "text": "Different realization rules can be selected for depending upon whether the constraint is to be realized as an assertion or as a description. The generation algorithm assembles the selected realizations for each constraint into a simplified syntax tree, selecting appropriate inflections of verb and noun heads as it does so. Terminal values in constraints are realized as type-specific nominals, such as \"3 PM\" or \"Delta\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Language Generation",
                "sec_num": "4"
            },
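A toy version of such a realization rule is sketched below; the rule table, the mode names, and the realize function are illustrative assumptions, not the system's generator.

```python
# Toy sketch of a pattern-based realization rule: a DEPART_TIME constraint can
# surface as a verb phrase ("leave at 3 PM") or a nominal modifier ("3 PM flight").

RULES = {
    ("DEPART_TIME", "EQ"): {
        "assertion":   lambda x: f"leave at {x}",
        "description": lambda x: f"{x} flight",
    },
}

def realize(constraint, mode):
    path, relation, value = constraint
    attribute = path.split(".")[-1]            # strip the LEG.$N prefix
    return RULES[(attribute, relation)][mode](value)

print(realize(("LEG.0.DEPART_TIME", "EQ", "3 PM"), "assertion"))    # leave at 3 PM
print(realize(("LEG.0.DEPART_TIME", "EQ", "3 PM"), "description"))  # 3 PM flight
```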
            {
                "text": "A crucial feature of the generation process is that it adds to each prompt a paraphrase of the most recent discourse event, corresponding to what the system thinks the user just said. This helps keep the conversation grounded in terms of mutual understanding between the participants.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Language Generation",
                "sec_num": "4"
            },
            {
                "text": "The following is an example dialog with the system: ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Example Scenario",
                "sec_num": "5"
            },
            {
                "text": "The Talk'n'Travel system described here was successfully demonstrated at the DARPA Communicator Compare and Contrast Workshop in June 1999. We are currently collecting data with test subjects and are using the results to improve the system's performance in all areas, in preparation for the forthcoming common evaluation of Communicator systems in June 2000. 8 of the subjects were successful. Of successful sessions, the average duration was 387 seconds, with a minimum of 272 and a maximum of 578. The average number of user utterances was 25, with a minimum of 18 and a maximum of 37. The word error rate of the recognizer was 11.8%.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Current Status and Conclusions",
                "sec_num": "6"
            },
            {
                "text": "The primary cause of failure to complete the scenario, as well as excessive time spent on completing it, was corruption of the discourse state due to recognition or interpretation errors. While the system informs the user of the change in state after every utterance, the user was not always successful in correcting it when it made errors, and sometimes the user did not even notice when the system had made an error. If the user is not attentive at the time, or happens not to understand what the synthesizer said, there is no implicit way for him to find out afterwards what the system thinks his constraints are.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Current Status and Conclusions",
                "sec_num": "6"
            },
            {
                "text": "While preliminary, these results point to two directions for future work. One is that the system needs to be better able to recognize and deal with problem situations in which the dialogue is not advancing. The other is that the system needs to be more communicative about its current understanding of the user's goals, even at points in the dialogue at which it might be assumed that user and system were in agreement.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Current Status and Conclusions",
                "sec_num": "6"
            }
        ],
        "back_matter": [
            {
                "text": "This work was sponsored by DARPA and monitored by SPAWAR Systems Center under Contract No. N66001-99-D-8615.To determine the performance of the system, we ran an informal experiment in which 11 different subjects called into the system and attempted to use it to solve a travel problem. None of the subjects were system developers. Each subject had a single session in which he was given a three-city trip to plan, including dates of travel, constraints on departure and arrival times, airline preferences.The author wishes to thank Scott Miller for the use of his GEM system.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgements",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "DARPA Communicator homepage",
                "authors": [],
                "year": 1999,
                "venue": "MITRE",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "MITRE (1999) DARPA Communicator homepage http://fofoca.mitre.org].",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "The CU Communicator System",
                "authors": [
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Ward",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Pellom",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "IEEE Workshop on Automatic Speech Recognition and Understanding",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ward W., and Pellom, B. (1999) The CU Communicator System. In 1999 IEEE Workshop on Automatic Speech Recognition and Understanding, Keystone, Colorado.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "The Generative Extraction Model",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Miller",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Miller S. (1998) The Generative Extraction Model. Unpublished manuscript.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Expanding the scope of the ATIS task",
                "authors": [
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Dahl",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Bates",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Brown",
                        "suffix": ""
                    },
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Fisher",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Hunicke-Smith",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Pallet",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Pao",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Rudnicky",
                        "suffix": ""
                    },
                    {
                        "first": "Shriberg",
                        "middle": [
                            "E"
                        ],
                        "last": "",
                        "suffix": ""
                    }
                ],
                "year": 1994,
                "venue": "Proceedings of the ARPA Spoken Language Technology Workshop",
                "volume": "",
                "issue": "",
                "pages": "3--8",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Dahl D., Bates M., Brown M., Fisher, W. Hunicke- Smith K., Pallet D., Pao C., Rudnicky A., and Shriberg E. (1994) Expanding the scope of the ATIS task. In Proceedings of the ARPA Spoken Language Technology Workshop, Plainsboro, NJ., pp 3-8.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "A schema-based approach to dialog control",
                "authors": [
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Constantinides",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Hansma",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Tchou",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Rudnicky",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Proceedings oflCSLP",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Constantinides P., Hansma S., Tchou C. and Rudnicky, A. (1999) A schema-based approach to dialog control. Proceedings oflCSLP, Paper 637.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Creating natural dialogs in the Carnegie Mellon Communicator system",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Rudnicky",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Thayer",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Constantinides",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Tchou",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Shern",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Lenzo",
                        "suffix": ""
                    },
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Xu",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Oh",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Proceedings of Eurospeech",
                "volume": "4",
                "issue": "",
                "pages": "1531--1534",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Rudnicky A., Thayer, E., Constantinides P., Tchou C., Shern, R., Lenzo K., Xu W., Oh A. (1999) Creating natural dialogs in the Carnegie Mellon Communicator system. Proceedings of Eurospeech, 1999, Vol 4, pp. 1531-1534",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "An agenda-based dialog management architecture for soken language systems",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Rudnicky",
                        "suffix": ""
                    },
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Xu",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "IEEE Workshop on Automatic Speech Recognition and Understanding",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Rudnicky A., and Xu W. (1999) An agenda-based dialog management architecture for soken language systems. In 1999 IEEE Workshop on Automatic Speech Recognition and Understanding, Keystone, Colorado.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Dialogue Management in the Mercury Flight Reservation System",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Seneff",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Polifroni",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "ANLP Conversational Systems Workshop",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Seneff S., and Polifroni, J. (2000) Dialogue Management in the Mercury Flight Reservation System. ANLP Conversational Systems Workshop.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "The 1994 BBN/BYBLOS Speech Recognition System",
                "authors": [
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Nguyen",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Anastasakos",
                        "suffix": ""
                    },
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Kubala",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Lapre",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Makhoul",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Schwartz",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Yuan",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Zavaliagkos",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Zhao",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "Proc of ARPA Spoken Language Systems Technology Workshop",
                "volume": "",
                "issue": "",
                "pages": "77--81",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Nguyen L., Anastasakos T., Kubala F., LaPre C., Makhoul J., Schwartz R., Yuan N., Zavaliagkos G., and Zhao Y. (1995) The 1994 BBN/BYBLOS Speech Recognition System, In Proc of ARPA Spoken Language Systems Technology Workshop, Austin, Texas, pp. 77-81.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "The Initial Implementation of the BBN ATIS4 Dialog System",
                "authors": [
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Stallard",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "Proc of ARPA Spoken Language Systems Technology Workshop",
                "volume": "",
                "issue": "",
                "pages": "208--211",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Stallard D. (1995) The Initial Implementation of the BBN ATIS4 Dialog System, In Proc of ARPA Spoken Language Systems Technology Workshop, Austin, Texas, pp. 208-211.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "i996) A Fully Statistical Approach to Natural Language Interfaces",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Miller",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Stallard",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "Proc of the 34 th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Miller S. and Stallard D. (i996) A Fully Statistical Approach to Natural Language Interfaces, In Proc of the 34 th Annual Meeting of the Association for Computational Linguistics, Santa Cruz, California.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "num": null,
                "type_str": "figure",
                "text": "Welcome to Talk'n'Travel. How can I help you? U: I need a flight from Boston to Denver tomorrow. S: On Thursday, November 18, flying from Boston to Denver. What time do you want to leave? U: I want to leave before 10 AM. S: Leaving before 10 AM. Do you have an airline preference? U: Continental. S: Continental. I'm sorry, there are no flights on Continental leaving before 10 AM. I can get you a flight on Continental leaving at 10:10 AM instead. Would that be OK? U: Yes. S: OK, I've got you on Continental flight 429, departing Boston at 10:10 AM, arriving at Denver at 3:44 PM. Do you want to book a return flight from Denver to Boston? U: Yes, on Monday. S: Alright, we'll add a flight on Monday, November 22, from Boston to Denver to your itinerary. I have a Continental flight at 7 AM. Is that OK? U. Anything later? S: Leaving after 7 AM. There are no flights on Continental after 7 AM. I can get you a flight after 7 AM on Frontier, Northwest, and United instead. Would that be OK? U: How about United. S: United. I have a United flight at 8:01 AM. Is that flight OK? U: Yes. S: OK, I've got you on United flight 1762, departing Denver at 8:01 AM, arriving at Boston at 1:42 PM.",
                "uris": null
            }
        }
    }
}