{
    "paper_id": "W02-0111",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T05:13:28.074715Z"
    },
    "title": "Lexicalized Grammar 101",
    "authors": [
        {
            "first": "Matthew",
            "middle": [],
            "last": "Stone",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "the State University of New Jersey Piscataway",
                "location": {
                    "postCode": "08854-8019",
                    "region": "NJ",
                    "country": "USA"
                }
            },
            "email": "mdstone@cs.rutgers.edu"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "This paper presents a simple and versatile tree-rewriting lexicalized grammar formalism, TAGLET, that provides an effective scaffold for introducing advanced topics in a survey course on natural language processing (NLP). Students who implement a strong competence TAGLET parser and generator simultaneously get experience with central computer science ideas and develop an effective starting point for their own subsequent projects in data-intensive and interactive NLP.",
    "pdf_parse": {
        "paper_id": "W02-0111",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "This paper presents a simple and versatile tree-rewriting lexicalized grammar formalism, TAGLET, that provides an effective scaffold for introducing advanced topics in a survey course on natural language processing (NLP). Students who implement a strong competence TAGLET parser and generator simultaneously get experience with central computer science ideas and develop an effective starting point for their own subsequent projects in data-intensive and interactive NLP.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "This paper is particularly addressed to readers at institutions whose resources and organization rule out extensive formal course-work in natural language processing (NLP). This is typical at universities in North America. In such places, NLP teaching must be ambitious but focused; courses must quickly acquaint a broad range of students to the essential concepts of the field and sell them on its current research opportunities and challenges. This paper presents one resource that may help. Specifically, I outline a simple and versatile lexicalized formalism for natural language syntax, semantics and pragmatics, called TAGLET, and draw on my experience with CS 533 (NLP) at Rutgers to motivate the potential role for TAGLET in a broad NLP class whose emphasis is to introduce topics of current research. Notes, assignments and implementations for TAGLET are available on the web. I begin in Section 2 by describing CS 533situating the course within the university and outlining its topics, audience and goals. I then describe the specific goals for teaching and implementing grammar formalisms within such a course, in Section 3. Section 4 gives an informal overview of TAGLET, and the algorithms, specifications and assignments that fit TAGLET into a broad general NLP class.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In brief, TAGLET is a context-free tree-rewriting formalism, defined by the usual complementation operation and the simplest imaginable modification operation. By implementing a strong competence TAGLET parser and generator students simultaneously get experience with central computer science ideas-data structures, unification, recursion and abstraction-and develop an effective starting point for their own subsequent projects. Two noteworthy directions are the construction of interactive applications, where TAGLET's relatively scalable and reversible processing lets students easily explore cutting-edge issues in dialogue semantics and pragmatics, and the development of linguistic specifications, where TAGLET's ability to lexicalize tree-bank parses introduces a modern perspective of linguistic intuitions and annotations as programs. Section 5 briefly summarizes the advantages of TAGLET over the many alternative formalisms that are available; an appendix to the paper provides more extensive technical details. abilistic and decision-theoretic modeling (including statistical classification, hidden Markov models and Markov decision processes) from the graduate-level AI foundations class. They might take NLP as a preliminary to research in dialogue systems or in learning for language and information-or simply to fulfill the breadth requirement of MS and PhD degrees.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Students from a number of other departments frequently get involved in natural language research, however, and are also welcome in 533; on average, only about half the students in 533 come from computer science. Students from the linguistics department frequently undertake computational work as a way of exploring practical learnability as a constraint on universal grammar, or practical reasoning as a constraint on formal semantics and pragmatics. The course also attracts students from Rutgers's library and information science department, its primary locus for research in information retrieval and human-computer interaction. Ambitious undergraduates can also take 533 their senior year; most participate in the interdisciplinary cognitive science undergraduate major. 533 is the only computational course in natural language at Rutgers.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Overall, the course is structured into three modules, each of which represents about fifteen hours of in-class lecture time.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The first module gives a general overview of language use and dialogue applications. Lectures follow (Clark, 1996) , but instill the practical methodology for specifying and constructing knowledgebased systems, in the style of (Brachman et al., 1990) , into the treatment of communication. Concurrently, students explore precise descriptions of their intuitions about language and communication through a series of short homework exercises.",
                "cite_spans": [
                    {
                        "start": 101,
                        "end": 114,
                        "text": "(Clark, 1996)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 227,
                        "end": 250,
                        "text": "(Brachman et al., 1990)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The second module focuses on general techniques for linguistic representation and implementation, using TAGLET. With an extended TAGLET project, conveniently implemented in stages, we use basic tree operations to introduce Prolog programming, including data structures, recursion and abstraction much as outlined in (Sterling and Shapiro, 1994) ; then we write a simple chart parser with incremental interpretation, and a simple communicative-intent generator scaled down after (Stone et al., 2001) .",
                "cite_spans": [
                    {
                        "start": 316,
                        "end": 344,
                        "text": "(Sterling and Shapiro, 1994)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 478,
                        "end": 498,
                        "text": "(Stone et al., 2001)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The third module explores the distinctive problems of specific applications in NLP, including spo-ken dialogue systems, information retrieval and text classification, spelling correction and shallow tagging applications, and machine translation. Jurafsky and Martin (2000) is our source-book. Concurrently, students pursue a final project, singly or in crossdisciplinary teams, involving a more substantial and potentially innovative implementation.",
                "cite_spans": [
                    {
                        "start": 246,
                        "end": 272,
                        "text": "Jurafsky and Martin (2000)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In its overall structure, the course seems quite successful. The initial emphasis on clarifying intuitions about communication puts students on an even footing, as it highlights important ideas about language use without too much dependence on specialized training in language or computation. By the end of the class, students are able to build on the more specifically computational material to come up with substantial and interesting final projects. In Spring 2002 (the first time this version of 533 was taught), some students looked at utterance interpretation, response generation and graphics generation in dialogue interaction; explored statistical methods for word-sense disambiguation, summarization and generation; and quantified the potential impact of NLP techniques on information tasks. Many of these results represented fruitful collaborations between students from different departments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Naturally, there is always room for improvement, and the course is evolving. My presentation of TAGLET here, for example, represents as much a project for the next run of 533 as a report of this year's materials; in many respects, TAGLET actually emerged during the semester as a dynamic reaction to the requirements and opportunities of a sixweek module on general techniques for linguistic representation and implementation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In a survey course for a broad, research-oriented audience, like CS 533 at Rutgers, a module on linguistic representation must orient itself to central ideas about computation. 533 may be the first and last place linguistics or information science students encounter concepts of specification, abstraction, complexity and search in class-work. The students who attack interdisciplinary research with success will be the ones who internalize and draw on these concepts, not those who merely hack proficiently. At the same time, computer scientists also can benefit from an emphasis on computational fundamentals; it means that they are building on and reinforcing their expertise in computation in exploring its application to language. Nevertheless, NLP is not compiler construction. Programming assignments should always underline a worthwhile linguistic lesson, not indulge in implementation for its own sake. This perspective suggests a number of desiderata for the grammar formalism for a survey course in NLP.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Language and Computation in NLP",
                "sec_num": "3"
            },
            {
                "text": "Tree rewriting. Students need to master recursive data-structures and programming. NLP directs our attention to the recursive structures of linguistic syntax. In fact, by adopting a grammar formalism whose primitives operate on these structures as firstclass objects, we can introduce a rich set of relatively straightforward operations to implement, and motivate them by their role in subsequent programs.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Language and Computation in NLP",
                "sec_num": "3"
            },
            {
                "text": "Lexicalization. Students need to distinguish between specification and implementation, and to understand the barriers of abstraction that underlie the distinction. Lexicalized grammars come with a ready notion of abstraction. From the outside, abstractly, a lexicalized grammar analyzes each sentence as a simple combination of atomic elements from a lexicon of options. Simultaneously, a concrete implementation can assign complex structures to the atomic elements (elementary trees) and implement complex combinatory operations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Language and Computation in NLP",
                "sec_num": "3"
            },
            {
                "text": "Strong competence implementation. Students need to understand how natural language must and does respond to the practical logic of physical realization, like all AI (Agre, 1997) . Mechanisms that use grammars face inherent computational problems and natural grammars in particular must respond to these problems: students should undertake implementations which directly realize the operations of the grammar in parsing and generation. But these must be effective programs that students can build on-our time and interest is too scarce for extensive reimplementations.",
                "cite_spans": [
                    {
                        "start": 165,
                        "end": 177,
                        "text": "(Agre, 1997)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Language and Computation in NLP",
                "sec_num": "3"
            },
            {
                "text": "Simplicity. Where possible, linguistic proposals should translate readily to the formalism. At the same time, students should be able to adapt aspects of the formalism to explore their own judgments and ideas. Where possible, students should get intuitive and satisfying results from straightforward algorithms implemented with minimal bookkeeping and case analysis. At the same time, there is no reason why the formalism should not offer opportunities for meaningful optimization.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Language and Computation in NLP",
                "sec_num": "3"
            },
            {
                "text": "We cannot expect any formalism to fare perfectly by all these criteria-if any does, it is a deep fact about natural language! Still, it is worth remarking just how badly these criteria judge traditional unification-based context-free grammars (CFGs), as presented in say (Pereira and Shieber, 1987) . Datastructures are an afterthought in CFGs; CFGs cannot in principle be lexicalized; and, whatever their merits in parsing or recognition, CFGs set up a positively abysmal search space for meaningful generation tasks.",
                "cite_spans": [
                    {
                        "start": 271,
                        "end": 298,
                        "text": "(Pereira and Shieber, 1987)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Language and Computation in NLP",
                "sec_num": "3"
            },
            {
                "text": "TAGLET 1 is my response to the objectives motivated in Section 2 and outlined in Section 3. TAGLET represents my way of distilling the essential linguistic and computational insights of lexicalized tree-adjoining grammar-LTAG (Joshi et al., 1975; Schabes, 1990 )-into a form that students can easily realize in end-to-end implementations.",
                "cite_spans": [
                    {
                        "start": 226,
                        "end": 246,
                        "text": "(Joshi et al., 1975;",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 247,
                        "end": 260,
                        "text": "Schabes, 1990",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "TAGLET",
                "sec_num": "4"
            },
            {
                "text": "Like LTAG, TAGLET analyzes sentences as a complex of atomic elements combined by two kinds of operations, complementation and modification. Abstractly, complementation combines a head with an argument which is syntactically obligatory and semantically dependent on the head. Abstractly, modification combines a head with an adjunct which is syntactically optional and need not involve any special semantic dependence. Crucially for generation, in a derivation, modification and complementation operations can apply to a head in any order, often yielding identical structures in surface syntax. This means the generator can provide required material first, then elaborate it, enabling use of grammar in high-level tasks such as the planning of referring expressions or the \"aggregation\" of related semantic material into a single complex sentence.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview",
                "sec_num": "4.1"
            },
            {
                "text": "Concretely, TAGLET operations are implemented by operations that rewrite trees. Each lexical element is associated with a fragmentary phrase- (Rambow et al., 1995) ; sister-adjunction just adds the modifier subtree as a child of an existing node in the head tree-either on the left of the head (forward sisteradjunction) as in Figure 2 , or on the right of the head (backward sister-adjunction). I describe TAGLET formally in Appendix A.",
                "cite_spans": [
                    {
                        "start": 142,
                        "end": 163,
                        "text": "(Rambow et al., 1995)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 327,
                        "end": 335,
                        "text": "Figure 2",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Overview",
                "sec_num": "4.1"
            },
            {
                "text": "TAGLET is equivalent in weak generative power to context-free grammar. That is, any language defined by a TAGLET also has a CFG, and any language defined by a CFG also has a TAGLET. On the other hand context-free languages can have derivations in which all lexical items are arbitrarily far from the root; TAGLET derived structures always have an anchor whose path to the root of the sentence has a fixed length given by a grammatical element. See Appendix B. The restriction seems of little linguistic significance, since any tree-bank parse induces a unique TAGLET grammar once you label which child of each node is the head, which are complements and which are modifiers. Indeed, since TAGLET thus induces bigram dependency structures from trees, this invites the estimation of probability distributions on TAGLET derivations based on observed bigram dependencies; see (Chiang, 2000) .",
                "cite_spans": [
                    {
                        "start": 872,
                        "end": 886,
                        "text": "(Chiang, 2000)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview",
                "sec_num": "4.1"
            },
            {
                "text": "To implement an effective TAGLET generator, you can perform a greedy head-first search of derivations guided by heuristic progress toward achieving communicative goals (Stone et al., 2001 ). Meanwhile, because TAGLET is context-free, you can easily write a CKY-style dynamic programming parser that stores structures recognized for spans of text in a chart, and iteratively combines structures in adjacent spans until the analyses span the entire sentence. (More complexity would be required for multiply-anchored trees, as they induce discontinuous constituents.) The simple requirement that operations never apply inside complements or modifiers, and apply left-to-right within a head, suffices to avoid spurious ambiguity. See Appendix C.",
                "cite_spans": [
                    {
                        "start": 168,
                        "end": 187,
                        "text": "(Stone et al., 2001",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview",
                "sec_num": "4.1"
            },
            {
                "text": "With TAGLET, two kinds of examples are instructive: those where TAGLET can mirror TAG, and those where it cannot. For the first case, consider an analysis of Chris loves Sandy madly by the trees of The feature values will be preserved by further steps of derivation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Examples",
                "sec_num": "4.2"
            },
            {
                "text": "Semantics and pragmatics are crucial to NLP. TAGLET lets students explore meaty issues in semantics and pragmatics, using the unification-based semantics proposed in (Stone and Doran, 1997) . We view constituents as referential, or better, indexical; we link elementary trees with constraints on these indices and conjoin the constraints in the meaning of a compound structure. This example shows how the strategy depends on a rich ontology: The example also shows how the strategy lets us quickly implement, say, the constraint-satisfaction approaches to reference resolution or the planrecognition approaches to discourse integration described in (Stone and Webber, 1998) .",
                "cite_spans": [
                    {
                        "start": 166,
                        "end": 189,
                        "text": "(Stone and Doran, 1997)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 649,
                        "end": 673,
                        "text": "(Stone and Webber, 1998)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Building on TAGLET",
                "sec_num": "4.3"
            },
            {
                "text": "Here is a plan for a six-week TAGLET module. The first two weeks introduce data structures and recursive programming in Prolog, with examples drawn from phrase structure trees and syntactic combination; and discuss dynamic-programming parsers, with an aside on convenient implementation using Prolog assertion. As homework, students implement simple tree operations, and build up to definitions of substitution and modification for parsing and generation; they use these combinatory operations to write a CKY TAGLET parser.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lectures and Assignments",
                "sec_num": "4.4"
            },
            {
                "text": "The next two weeks begin with lectures on the lexicon, emphasizing abstraction on the computational side and the idiosyncrasy of lexical syntax and the indexicality of lexical semantics on the linguistic side; and continue with lectures on semantics and interpretation. Meanwhile, students add reference resolution to the parser, and implement routines to construct grammars from tree-bank parses.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lectures and Assignments",
                "sec_num": "4.4"
            },
            {
                "text": "The final two weeks cover generation as problemsolving, and search through the grammar. Students reuse the grammar and interpretation model they already have to construct a generator.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lectures and Assignments",
                "sec_num": "4.4"
            },
            {
                "text": "Important as they are, lexicalized grammars can be forbidding. Versions of TAG and combinatory categorial grammars (CCG) (Steedman, 2000) , as presented in the literature, require complex bookkeeping for effective computation. When I wrote a CCG parser as an undergraduate, it took me a whole semester to get an implemented handle on the metatheory that governs the interaction of (crossing) composition or type-raising with spurious ambiguity; I still have never written a TAG parser or a CCG generator. Variants of TAG like TIG (Schabes and Waters, 1995) or D-Tree grammars (Rambow et al., 1995) are motivated by linguistic or formal considerations rather than pedagogical or computational ones. Other formalisms come with linguistic assumptions that are hard to manage. Link grammar (Sleator and Temperley, 1993) and other pure dependency formalisms can make it difficult to explore rich hierarchical syntax and the flexibility of modification; HPSG (Pollard and Sag, 1994) comes with a commitment to its complex, rather bewildering regime for formalizing linguistic information as feature structures. Of course, you probably could refine any of these theories to a simple core-and would get something very like TAGLET.",
                "cite_spans": [
                    {
                        "start": 121,
                        "end": 137,
                        "text": "(Steedman, 2000)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 530,
                        "end": 556,
                        "text": "(Schabes and Waters, 1995)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 576,
                        "end": 597,
                        "text": "(Rambow et al., 1995)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 786,
                        "end": 815,
                        "text": "(Sleator and Temperley, 1993)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 953,
                        "end": 976,
                        "text": "(Pollard and Sag, 1994)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "5"
            },
            {
                "text": "I strongly believe that this distillation is worth the trouble, because lexicalization ties grammar formalisms so closely to the motivations for studying language in the first place. For linguistics, this philosophy invites a fine-grained description of sen-tence syntax, in which researchers document the diversity of linguistic constructions within and across languages, and at the same time uncover important generalizations among them. For computation, this philosophy suggests a particularly concrete approach to language processing, in which the information a system maintains and the decisions it takes ultimately always just concern words. In taking TAGLET as a starting point for teaching implementation in NLP, I aim to expose a broad range of students to a lexicalized approach to the cognitive science of human language that respects and integrates both linguistic and computational advantages.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "5"
            },
            {
                "text": "Each node in a TAGLET derived tree T is first contributed by a specific TAGLET element, and so indirectly by a particular anchor. Accordingly, we can construct a lexicalized derivation tree corresponding to T . Nodes in the derivation tree are labeled by the elements used in deriving T . An edge leads from parent E to child E if T includes a step of derivation in which E is substituted or sister-adjoined at a node first contributed by E. To make the derivation unambiguous, we record the address of the node in E at which the operation applies, and we order the edges in the derivation tree in the same order that the corresponding operations are applied in T . For Figure 3 , we have:",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 670,
                        "end": 678,
                        "text": "Figure 3",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "B Properties",
                "sec_num": null
            },
            {
                "text": "\u03b1 2 :loves\u00a8\u00a8\u00a8\u00a8\u00a8\u00a8\u00a8r r r r r r r r \u03b1 1 :Chris (0) \u03b1 3 :Sandy (1.1) \u03b2 \u2190 4 :madly (1.1) Let L be a CFL.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "B Properties",
                "sec_num": null
            },
            {
                "text": "Then there is a grammar G for L in Greibach normal form (Hopcroft et al., 2000) , where each production has the form",
                "cite_spans": [
                    {
                        "start": 56,
                        "end": 79,
                        "text": "(Hopcroft et al., 2000)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "B Properties",
                "sec_num": null
            },
            {
                "text": "A \u2192 xB 1 ... B n where x \u2208 V T and B i \u2208 V N .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "B Properties",
                "sec_num": null
            },
            {
                "text": "For each such production, create the TAGLET element which allows complementation with a tree as below:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "B Properties",
                "sec_num": null
            },
            {
                "text": "\u00c4\u00a8r r r x B 1 B n",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "B Properties",
                "sec_num": null
            },
            {
                "text": "An easy induction transforms any derivation in G to a derivation in this TAGLET grammar, and vice versa. So both generate the same language L.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "B Properties",
                "sec_num": null
            },
            {
                "text": "Conversely, we can build a CFG for a TAGLET by creating nonterminals and productions for each node in a TAGLET elementary structure, taking into account the possibilities for optional premodification and postmodification as well as complementation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "B Properties",
                "sec_num": null
            },
            {
                "text": "Suppose we make a bottom-up traversal of a TAGLET derivation tree to construct the derived tree. After we finish with each node (and all its children), we obtain a subtree of the final derived tree. This subtree represents a complete constituent that must appear as a subsequence of the final sentence. A CKY TAGLET parser just reproduces this hierarchical discovery of constituents, by adding completed constituents for complements and modifiers into an open constituent for a head.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "C Parsing",
                "sec_num": null
            },
            {
                "text": "The only trick is to preserve linear order; this means adding each new complement and modifier at a possible \"next place\", without skipping past missing complements or slipping under existing modifiers. To do that, we only apply operations that add completed constituents T 2 along what is known as the frontier of the head tree T 1 , further away from the head than previously incorporated material. This concept, though complex, is essential in any account of incremental structure-building. To avoid spurious ambiguities, we also require that operations to the left frontier must precede operations to the right frontier. This gives a relation COMBINE(T 1 , T 2 , T 3 ).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "C Parsing",
                "sec_num": null
            },
            {
                "text": "The parser analyses a string of length N using a dynamic-programming procedure to enumerate all the analyses that span contiguous substrings, shortest substrings first. We write T \u2208 (i, j) to indicate that object T spans position i to j. The start of the string is position 0; the end is position N. So we have: for word w \u2208 (i, i + 1), T with anchor w add T \u2208 (i, i + 1) for k \u2190 2 up to N for i \u2190 k \u2212 2 down to 0 for j \u2190 i + 1 up to k \u2212 1 for T 1 \u2208 (i, j) and T 2 \u2208 ( j, k) for T 3 with COMBINE(T 1 , T 2 , T 3 ) add T 3 \u2208 (i, k)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "C Parsing",
                "sec_num": null
            },
            {
                "text": "Now, any parser that delivers possible analyses exhaustively will be prohibitively expensive in the worst-case; analyses of ambiguities multiply exponentially. At the cost of a strong-competence implementation, one can imagine avoiding the complexity by maintaining TAGLET derivation forests. This enables O(N 3 ) recognition, since TAGLET parsing operations apply within spans of the spine of single elementary trees and therefore the number of COMBINE results for T 1 and T 2 is independent of N.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "C Parsing",
                "sec_num": null
            },
            {
                "text": "CS 533NLP at Rutgers is taught as part of the graduate artificial intelligence (AI) sequence in the computer science department. As a prerequisite, computer science students are expected to be familiar with prob-July 2002, pp. 77-84. Association for Computational Linguistics. Natural Language Processing and Computational Linguistics, Philadelphia, Proceedings of the Workshop on Effective Tools and Methodologies for Teaching",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "If the acronym must stand for something, \"Tree Assembly Grammar for LExicalized Teaching\" will do.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "Thanks to the students of CS 533 and four anonymous reviewers for helping to disabuse me of numerous preconceptions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            },
            {
                "text": "I define TAGLET in terms of primitive trees. The definitions require a set V T of terminal categories, corresponding to our lexical items, and a disjoint set V N of nonterminal categories, corresponding to constituent categories. TAGLET uses trees labeled by these categories both as representations of the syntactic structure of sentences and as representations of the grammatical properties of words:\u2022 A syntactic tree is a tree whose nodes are each assigned a unique label in V N \u222a V T , such that only leaf nodes are assigned a label in V T .\u2022 A lexical tree is a syntactic tree in which exactly one node, called the anchor, is assigned a label in V T . The path through such a tree from the root to the anchor is called the spine.A primitive tree is lexical tree in which every leaf is the child of a node on the spine. See Figures 3 and 4 . A TAGLET element is a pair T, O consisting of primitive tree together with the specification of the operation for the tree; the allowable operations are complementation, indicated by \u03b1; premodification at a specified category C \u2208 V N , indicated by \u03b2 \u2192 (C) and postmodification at a specified category C \u2208 V N , indicated by \u03b2 \u2190 (C).Formally, then, a TAGLET grammar is a tuple N gives the set of nonterminal categories, and \u0393 gives a set of TAGLET elements for V T and V N . Given a TAGLET grammar G, the set of derived trees for G is defined as the smallest set closed under the following operations:derived tree for G.\u2022 (Substitution) Suppose T, O is a derived tree for G where T contains leaf node n with label C \u2208 V N ; and suppose T , \u03b1 is a derived tree for G where the root of T also has label C. Then T , O is a derived tree for G where T is obtained from T by identifying node n with the root of T .\u2022 (Premodification) Suppose T, O is a derived tree for G where T contains node n with label C \u2208 V N , and suppose T , \u03b2 \u2192 (C) is a derived tree for G. Then T , O is a derived tree for G where T is obtained from T by adding T as the first child of node n.\u2022 (Postmodification) Suppose T, O is a derived tree for G where T contains node n with label C \u2208 V N , and suppose T , \u03b2 \u2190 (C) is a derived tree for G. Then T , O is a derived tree for G where T is obtained from T by adding T as the last child of node n.A derivation for G is a derived tree T, \u03b1 for G, in which all the leaves of T are elements of V T . The yield of a derivation T, \u03b1 is the string consisting of the leaves of T in order. A string \u03c3 is in the language generated by G just in case \u03c3 is the yield of some derivation for G.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 831,
                        "end": 846,
                        "text": "Figures 3 and 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "A Definitions",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Computation and Human Experience",
                "authors": [
                    {
                        "first": "Philip",
                        "middle": [
                            "E"
                        ],
                        "last": "Agre",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philip E. Agre. 1997. Computation and Human Experi- ence. Cambridge.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Living with CLASSIC: when and how to use a KL-ONE-like language",
                "authors": [
                    {
                        "first": "Ronald",
                        "middle": [],
                        "last": "Brachman",
                        "suffix": ""
                    },
                    {
                        "first": "Deborah",
                        "middle": [],
                        "last": "Mcguinness",
                        "suffix": ""
                    },
                    {
                        "first": "Peter",
                        "middle": [
                            "Patel"
                        ],
                        "last": "Schneider",
                        "suffix": ""
                    },
                    {
                        "first": "Lori",
                        "middle": [
                            "Alperin"
                        ],
                        "last": "Resnick",
                        "suffix": ""
                    },
                    {
                        "first": "Alexander",
                        "middle": [],
                        "last": "Borgida",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ronald Brachman, Deborah McGuinness, Peter Pa- tel Schneider, Lori Alperin Resnick, and Alexander Borgida. 1990. Living with CLASSIC: when and how to use a KL-ONE-like language. In J. Sowa, editor, Principles of Semantic Networks. Morgan Kaufmann.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Statistical parsing with an automatically-extracted tree adjoining grammar",
                "authors": [
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Chiang",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "ACL",
                "volume": "",
                "issue": "",
                "pages": "456--463",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David Chiang. 2000. Statistical parsing with an automatically-extracted tree adjoining grammar. In ACL, pages 456-463.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Using Language",
                "authors": [
                    {
                        "first": "Herbert",
                        "middle": [
                            "H."
                        ],
                        "last": "Clark",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Herbert H. Clark. 1996. Using Language. Cambridge University Press, Cambridge, UK.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Introduction to automata theory, languages and computation",
                "authors": [
                    {
                        "first": "John",
                        "middle": [
                            "E"
                        ],
                        "last": "Hopcroft",
                        "suffix": ""
                    },
                    {
                        "first": "Rajeev",
                        "middle": [],
                        "last": "Motwani",
                        "suffix": ""
                    },
                    {
                        "first": "Jeffrey",
                        "middle": [
                            "D"
                        ],
                        "last": "Ullman",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ull- man. 2000. Introduction to automata theory, lan- guages and computation. Addison-Wesley, second edition.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Tree adjunct grammars",
                "authors": [
                    {
                        "first": "Aravind",
                        "middle": [
                            "K."
                        ],
                        "last": "Joshi",
                        "suffix": ""
                    },
                    {
                        "first": "L.",
                        "middle": [],
                        "last": "Levy",
                        "suffix": ""
                    },
                    {
                        "first": "M.",
                        "middle": [],
                        "last": "Takahashi",
                        "suffix": ""
                    }
                ],
                "year": 1975,
                "venue": "Journal of the Computer and System Sciences",
                "volume": "10",
                "issue": "",
                "pages": "136--163",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Aravind K. Joshi, L. Levy, and M. Takahashi. 1975. Tree adjunct grammars. Journal of the Computer and Sys- tem Sciences, 10:136-163.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Speech and Language Processing: An introduction to natural language processing, computational linguistics and speech recognition",
                "authors": [
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Jurafsky",
                        "suffix": ""
                    },
                    {
                        "first": "James",
                        "middle": [
                            "H"
                        ],
                        "last": "Martin",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Daniel Jurafsky and James H. Martin. 2000. Speech and Language Processing: An introduction to nat- ural language processing, computational linguistics and speech recognition. Prentice-Hall.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Prolog and Natural Language Analysis. CSLI, Stanford CA",
                "authors": [
                    {
                        "first": "Fernando",
                        "middle": [
                            "C",
                            "N"
                        ],
                        "last": "Pereira",
                        "suffix": ""
                    },
                    {
                        "first": "Stuart",
                        "middle": [
                            "M"
                        ],
                        "last": "Shieber",
                        "suffix": ""
                    }
                ],
                "year": 1987,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Fernando C. N. Pereira and Stuart M. Shieber. 1987. Prolog and Natural Language Analysis. CSLI, Stan- ford CA.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Head-Driven Phrase Structure Grammar",
                "authors": [
                    {
                        "first": "Carl",
                        "middle": [],
                        "last": "Pollard",
                        "suffix": ""
                    },
                    {
                        "first": "Ivan",
                        "middle": [
                            "A"
                        ],
                        "last": "Sag",
                        "suffix": ""
                    }
                ],
                "year": 1994,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Carl Pollard and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press, Chicago.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "D-Tree grammars",
                "authors": [
                    {
                        "first": "Owen",
                        "middle": [],
                        "last": "Rambow",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Vijay-Shanker",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Weir",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "ACL",
                "volume": "",
                "issue": "",
                "pages": "151--158",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Owen Rambow, K. Vijay-Shanker, and David Weir. 1995. D-Tree grammars. In ACL, pages 151-158.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Treeinsertion grammar: A cubic-time parsable formalism that lexicalizes context-free grammar without changing the trees produced",
                "authors": [
                    {
                        "first": "Yves",
                        "middle": [],
                        "last": "Schabes",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [
                            "C"
                        ],
                        "last": "Waters",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "Computational Linguistics",
                "volume": "21",
                "issue": "",
                "pages": "479--513",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yves Schabes and Richard C. Waters. 1995. Tree- insertion grammar: A cubic-time parsable formalism that lexicalizes context-free grammar without chang- ing the trees produced. Computational Linguistics, 21:479-513.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Mathematical and Computational Aspects of Lexicalized Grammars",
                "authors": [
                    {
                        "first": "Yves",
                        "middle": [],
                        "last": "Schabes",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yves Schabes. 1990. Mathematical and Computational Aspects of Lexicalized Grammars. Ph.D. thesis, Com- puter Science Department, University of Pennsylva- nia.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Parsing English with a link grammar",
                "authors": [
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Sleator",
                        "suffix": ""
                    },
                    {
                        "first": "Davy",
                        "middle": [],
                        "last": "Temperley",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "Third International Workshop on Parsing Technologies",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Daniel Sleator and Davy Temperley. 1993. Parsing English with a link grammar. In Third International Workshop on Parsing Technologies.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "The Syntactic Process. MIT",
                "authors": [
                    {
                        "first": "Mark",
                        "middle": [],
                        "last": "Steedman",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mark Steedman. 2000. The Syntactic Process. MIT.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "The Art of Prolog",
                "authors": [
                    {
                        "first": "Leon",
                        "middle": [],
                        "last": "Sterling",
                        "suffix": ""
                    },
                    {
                        "first": "Ehud",
                        "middle": [],
                        "last": "Shapiro",
                        "suffix": ""
                    }
                ],
                "year": 1994,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Leon Sterling and Ehud Shapiro. 1994. The Art of Pro- log. MIT, second edition.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Sentence planning as description using tree-adjoining grammar",
                "authors": [
                    {
                        "first": "Matthew",
                        "middle": [],
                        "last": "Stone",
                        "suffix": ""
                    },
                    {
                        "first": "Christine",
                        "middle": [],
                        "last": "Doran",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Proceedings of ACL",
                "volume": "",
                "issue": "",
                "pages": "198--205",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Matthew Stone and Christine Doran. 1997. Sentence planning as description using tree-adjoining grammar. In Proceedings of ACL, pages 198-205.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Textual economy through close coupling of syntax and semantics",
                "authors": [
                    {
                        "first": "Matthew",
                        "middle": [],
                        "last": "Stone",
                        "suffix": ""
                    },
                    {
                        "first": "Bonnie",
                        "middle": [],
                        "last": "Webber",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Proceedings of International Natural Language Generation Workshop",
                "volume": "",
                "issue": "",
                "pages": "178--187",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Matthew Stone and Bonnie Webber. 1998. Textual economy through close coupling of syntax and seman- tics. In Proceedings of International Natural Lan- guage Generation Workshop, pages 178-187.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Microplanning with communicative intentions: The SPUD system. Under review",
                "authors": [
                    {
                        "first": "Matthew",
                        "middle": [],
                        "last": "Stone",
                        "suffix": ""
                    },
                    {
                        "first": "Christine",
                        "middle": [],
                        "last": "Doran",
                        "suffix": ""
                    },
                    {
                        "first": "Bonnie",
                        "middle": [],
                        "last": "Webber",
                        "suffix": ""
                    },
                    {
                        "first": "Tonia",
                        "middle": [],
                        "last": "Bleam",
                        "suffix": ""
                    },
                    {
                        "first": "Martha",
                        "middle": [],
                        "last": "Palmer",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Matthew Stone, Christine Doran, Bonnie Webber, Tonia Bleam, and Martha Palmer. 2001. Microplanning with communicative intentions: The SPUD system. Under review.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "text": "Forward sister-adjunction (modification.) structure tree containing a distinguished word called the anchor. For complementation, TAGLET adopts TAG's substitution operation; substitution replaces a leaf node in the head tree with the phrase structure tree associated with the complement. See Figure 1. For modification, TAGLET adopts the the sister-adjunction operation defined in",
                "num": null,
                "uris": null,
                "type_str": "figure"
            },
            "FIGREF1": {
                "text": "Parallel analysis in TAGLET and TAG.",
                "num": null,
                "uris": null,
                "type_str": "figure"
            },
            "FIGREF2": {
                "text": "case, consider the embedded question who Chris thinks Sandy likes. The usual TAG analysis uses the full power of adjunction. TAGLET requires the use of one of the familiar context-free filler-gap analyses, as perhaps that suggested by the trees inFigure 4, and their composition: TAGLET requires a gap-threading analysis of extraction (or another context-free analysis). syntactic features amounts to an intermediate case. In TAGLET derivations (unlike in TAG) nodes accrete children during the course of a derivation but are never rewritten or split. Thus, we can decorate any TAGLET node with a single set of syntactic features that is preserved throughout the derivation. Consider the trees for he knows below: these trees combine, we can immediately unify the number Y of the verb with the pronoun's singular; we can immediately unify the case X of the pronoun with the nominative assigned by the verb:",
                "num": null,
                "uris": null,
                "type_str": "figure"
            },
            "FIGREF3": {
                "text": ") \u2227 sandy(s) \u2227 love(e, c, s) \u2227 mad(e)",
                "num": null,
                "uris": null,
                "type_str": "figure"
            }
        }
    }
}