{
    "paper_id": "2020",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T07:12:34.100216Z"
    },
    "title": "Natural Language Response Generation from SQL with Generalization and Back-translation",
    "authors": [
        {
            "first": "Saptarashmi",
            "middle": [],
            "last": "Bandyopadhyay",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Maryland",
                "location": {
                    "postCode": "20742",
                    "settlement": "College Park College Park",
                    "region": "MD"
                }
            },
            "email": ""
        },
        {
            "first": "Tianyang",
            "middle": [],
            "last": "Zhao",
            "suffix": "",
            "affiliation": {},
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "Generation of natural language responses to the queries of structured language like SQL is very challenging as it requires generalization to new domains and the ability to answer ambiguous queries among other issues. We have participated in the CoSQL shared task organized in the IntEx-SemPar workshop at EMNLP 2020. We have trained a number of Neural Machine Translation (NMT) models to efficiently generate the natural language responses from SQL. Our shuffled backtranslation model has led to a BLEU score of 7.47 on the unknown test dataset. In this paper, we will discuss our methodologies to approach the problem and future directions to improve the quality of the generated natural language responses.",
    "pdf_parse": {
        "paper_id": "2020",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "Generation of natural language responses to the queries of structured language like SQL is very challenging as it requires generalization to new domains and the ability to answer ambiguous queries among other issues. We have participated in the CoSQL shared task organized in the IntEx-SemPar workshop at EMNLP 2020. We have trained a number of Neural Machine Translation (NMT) models to efficiently generate the natural language responses from SQL. Our shuffled backtranslation model has led to a BLEU score of 7.47 on the unknown test dataset. In this paper, we will discuss our methodologies to approach the problem and future directions to improve the quality of the generated natural language responses.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Natural language interfaces to databases (NLIDB) has been the focus of many research works, including a shared track on the Conversational text-to-SQL Challenge at EMNLP-IntexSemPar 2020 (Yu et al., 2019) . We have focused on the second task, natural language response generation from SQL queries and execution results.",
                "cite_spans": [
                    {
                        "start": 41,
                        "end": 48,
                        "text": "(NLIDB)",
                        "ref_id": null
                    },
                    {
                        "start": 187,
                        "end": 204,
                        "text": "(Yu et al., 2019)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "For example, when the SQL query \"SELECT dorm name FROM dorm\" is present, a possible response by the system could be \"This is the list of the names of all the dorms\". The ideal responses should demonstrate the results of the query, present the logical relationship between the query objects and the results, and be free from any grammatical error. Another challenge for this task is that the system needs to be able to generalize and do well on the SQL queries and the database schema which it has never seen before.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Many existing papers focus on text to SQL generation like Shin (2019) and Zhong et al. (2017) which emphasize self-attention and reinforcementlearning-based approaches. The problem of generating natural language responses from SQL is that this specific area is relatively under-researched, but we have tried to come up with probable solutions in this shared task. Gray et al. (1997) inspired us to generalize SQL keywords for better response generation with improvement in generalization. We have employed back-translation, used by Sennrich et al. (2015) and Hoang et al. (2018) , in order to increase the BLEU score. We were also motivated by the linguistic generalization results pointed out by Bandyopadhyay (2019) and Bandyopadhyay (2020) where the lemma and the Part-of-Speech tag are added to the natural language dataset for better generalization. Although we did not include it in our final model due to challenges in removing the linguistic factors, this approach offers a potential future in the generalization of the generated natural language responses.",
                "cite_spans": [
                    {
                        "start": 58,
                        "end": 69,
                        "text": "Shin (2019)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 74,
                        "end": 93,
                        "text": "Zhong et al. (2017)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 364,
                        "end": 382,
                        "text": "Gray et al. (1997)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 532,
                        "end": 554,
                        "text": "Sennrich et al. (2015)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 559,
                        "end": 578,
                        "text": "Hoang et al. (2018)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 722,
                        "end": 742,
                        "text": "Bandyopadhyay (2020)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Works",
                "sec_num": "2"
            },
            {
                "text": "We decided to take the Neural Machine Translation (NMT) approach, where the SQL queries with the execution results are regarded as the source, and the natural language, more specifically English, responses are seen as the target. We chose Seq2seq as our baseline model. After several attempts of training and parameter tuning, we were able to obtain a baseline BLEU score.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pre-processing Methods",
                "sec_num": "3"
            },
            {
                "text": "In order to further improve the BLEU score, first, we came up with the idea of SQL keyword generalization. SQL keyword generalization is a preprocessing method we applied to the input data (i.e. the SQL queries with the execution results). We first put the common SQL keywords into different groups based on their characteristics. Table 1 shows our choices of grouping. Then, we substituted each of those keywords in the input data to the newly purposed, generalized name according to the group we put the keyword in. More specifically, UNION, INTERSECTION, and EXCEPT are substituted as SET because these three keywords are set operations. AND and OR are substituted as LOGIC because they are logic operators. One thing worth noting is that although AND in SQL is not only a logic operator as it can also be used to join tables, the phrase \"JOIN . . . ON . . .\" is primarily used for this particular purpose. EXISTS, UNIQUE, and IN are substituted as NEST because these keywords are followed by one or multiple nested queries. ANY and ALL are substituted as RANGE since they are followed by a sub-query that will return a range of values, and an operator such as > is usually in front of ANY and ALL to compare with those values returned by the sub-query. AVG, COUNT, SUM, MAX, and MIN are substituted as AGG since all these keywords are aggregate operators.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 331,
                        "end": 338,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Pre-processing Methods",
                "sec_num": "3"
            },
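            {
                "text": "To make the substitution concrete, the following is a minimal Python sketch of this pre-processing step, assuming whitespace-tokenized queries; the grouping dictionary mirrors Table 1, and the function name is our own illustration rather than code from our system:\n\n# Sketch of SQL keyword generalization; GROUPS mirrors Table 1\n# (INTERSECT is the SQL spelling of the INTERSECTION group member).\nGROUPS = {\n    'UNION': 'SET', 'INTERSECT': 'SET', 'EXCEPT': 'SET',\n    'AND': 'LOGIC', 'OR': 'LOGIC',\n    'EXISTS': 'NEST', 'UNIQUE': 'NEST', 'IN': 'NEST',\n    'ANY': 'RANGE', 'ALL': 'RANGE',\n    'AVG': 'AGG', 'COUNT': 'AGG', 'SUM': 'AGG', 'MAX': 'AGG', 'MIN': 'AGG',\n}\n\ndef generalize(query):\n    # Replace each grouped keyword; GROUP BY, HAVING, and the\n    # comparison operators are deliberately left unchanged.\n    return ' '.join(GROUPS.get(tok.upper(), tok) for tok in query.split())",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pre-processing Methods",
                "sec_num": "3"
            },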
            {
                "text": "The remaining common SQL keywords are difficult to be grouped with other ones. For example, GROUP BY and HAVING have distinct meanings and work differently as they are followed by nonidentical elements. GROUP BY is followed by a \"grouping-list\", usually an attribute of a table, while HAVING is followed by a \"group-qualification\", usually a comparison involving an operator. Therefore, those keywords are kept as they are in the input data. Moreover, the operators are also not generalized since >, \u2265, <, \u2264 are used to compare numerical values only, while = and = are used to compare non-numerical values as well, like strings.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pre-processing Methods",
                "sec_num": "3"
            },
            {
                "text": "Overall, the reason we applied this SQL keyword generalization pre-processing is to avoid situations where certain common keywords are seen only for a few times or even never seen in the training data set, then the trained model would react poorly to those keywords in the test data set by pulling words from the vocabulary almost randomly.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pre-processing Methods",
                "sec_num": "3"
            },
            {
                "text": "Another idea we utilized to improve the BLEU score is the iterative back-translation as described in Shin (2019) and Zhong et al. (2017) .",
                "cite_spans": [
                    {
                        "start": 101,
                        "end": 112,
                        "text": "Shin (2019)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 117,
                        "end": 136,
                        "text": "Zhong et al. (2017)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Shuffled Back-Translation",
                "sec_num": "4"
            },
            {
                "text": "Back-translation is a simple way of adding synthetic data to the training model by training a targetto-source model, then generating a synthetic source dataset using a monolingual corpus on the target side. The synthetic source dataset and the provided target dataset are augmented to the training datasets to re-train the model. Since no monolingual corpus was provided in our case, we split the original dataset. To address any potential bias, we shuffled the dataset before splitting so that the created monolingual dataset is free from bias.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Shuffled Back-Translation",
                "sec_num": "4"
            },
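            {
                "text": "As a minimal sketch of this procedure (our own illustration: back_translate stands in for a decoding run of the trained target-to-source model, and the 50/50 split fraction is an assumption rather than a reported setting):\n\nimport random\n\ndef make_augmented_data(src, tgt, back_translate, mono_fraction=0.5, seed=0):\n    # Shuffle the parallel data first so the held-out 'monolingual'\n    # portion carries no ordering bias.\n    pairs = list(zip(src, tgt))\n    random.Random(seed).shuffle(pairs)\n    k = int(len(pairs) * mono_fraction)\n    mono_tgt = [t for _, t in pairs[:k]]  # treated as target-side monolingual data\n    synth_src = [back_translate(t) for t in mono_tgt]\n    # Augment: original parallel data plus (synthetic source, real target) pairs.\n    return src + synth_src, tgt + mono_tgt",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Shuffled Back-Translation",
                "sec_num": "4"
            },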
            {
                "text": "We also tried a variant of back translation called cyclic translation. The idea simply repeats the step of back-translation. After generating the synthetic source dataset from the provided target dataset, that dataset is used as input to the baseline source-totarget model to generate the synthetic target dataset. The synthetic source dataset and synthetic target dataset are augmented to the training datasets to train the model once again.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Shuffled Back-Translation",
                "sec_num": "4"
            },
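            {
                "text": "The cyclic variant can be sketched analogously (again our own illustration; the translate_* callables stand in for decoding runs of the two trained models):\n\ndef cyclic_augment(src, tgt, translate_tgt_to_src, translate_src_to_tgt):\n    # Back-translation step: synthesize sources from the provided targets.\n    synth_src = [translate_tgt_to_src(t) for t in tgt]\n    # Forward step: push the synthetic sources through the baseline\n    # source-to-target model to obtain synthetic targets.\n    synth_tgt = [translate_src_to_tgt(s) for s in synth_src]\n    # Both synthetic sides are added to the training data.\n    return src + synth_src, tgt + synth_tgt",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Shuffled Back-Translation",
                "sec_num": "4"
            },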
            {
                "text": "The shuffled back-translated model with a high drop-out rate and more number of training steps led to the highest BLEU score on the development dataset as reported in Section 5.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Shuffled Back-Translation",
                "sec_num": "4"
            },
            {
                "text": "A lot of diverse models have been trained for our experiments as enumerated below which have been labeled as follows: 8. Cyclic-translation with SQL keyword generalization and true-cased input (higher dropout and more training steps) (Model 8) 9. Shuffled back-translation with SQL keyword generalization and true-cased input and dropout rate = 0.5 (Model 9) These models have been described in the previous sections. All the notable results are shown in Table  2 . We began our experiment by tuning the hyperparameters of the Seq2seq model in Tensorflow NMT. After repeated experimentation, we selected the parameters for our baseline training model (Model 1) as follows: The other parameters are set to the default Tensorflow NMT values.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 455,
                        "end": 463,
                        "text": "Table  2",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Experiment and Results",
                "sec_num": "5"
            },
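            {
                "text": "For concreteness, the reported settings correspond roughly to the following tensorflow/nmt command-line flags (a sketch assuming the flag names of the public Tensorflow NMT reference implementation; data paths and unlisted flags are omitted, and the dropout and step counts are the pre-increase values reported below):\n\n# Baseline (Model 1) settings expressed as tensorflow/nmt flags.\nbaseline_flags = [\n    '--encoder_type=bi',        # bi-directional encoder\n    '--src_max_len=60',         # source sequence length\n    '--tgt_max_len=60',         # target sequence length\n    '--optimizer=adam',\n    '--learning_rate=0.001',    # initial learning rate\n    '--decay_scheme=luong10',   # learning rate decay scheme\n    '--num_train_steps=12000',  # raised to 20000 in the final model\n    '--dropout=0.4',            # raised to 0.5 in the final model\n]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment and Results",
                "sec_num": "5"
            },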
            {
                "text": "Then, we came up with the idea of SQL keyword generalization and implemented this idea. It turned out to be wonderful and improved the BLEU score significantly (from 7.60 to 9.72). Next, we focused on other possible pre-processing techniques that we could apply. We initially were considering four methods: tokenization, true-casing, linguistic factorization, and byte pair encoding. According to our testing, byte pair encoding, and the combination of these two methods degraded the BLEU score. Linguistic factorization led to high BLEU scores but the removal of the linguistic factors from the generated response again reduced the BLEU score. Tokenization also degrades the performance of the model. After carefully observing the given dataset, we found that it has already been tokenized, so further tokenization is unnecessary. In the end, SQL keyword generalization and true-casing are the two pre-processing techniques that we apply to the model. Afterwards, we started to think about the steps in the training process that we could improve. We implemented back-translation, and it increased the BLEU score. However, we found this method is likely to introduce an overfitting issue. To be more specific, since we were not given any test data or any dataset analogous to a monolingual corpus, we split the given ground truth file for the development set into two files and used them (one as our \"development\" ground truth and the other as our \"test\" ground truth) for the external evaluation during the training. The model achieved a much higher BLEU score on our \"development\" ground truth than previously recorded but the BLEU score on our \"test\" ground truth decreased in comparison to that previously recorded.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment and Results",
                "sec_num": "5"
            },
            {
                "text": "Then, we came up with three ways to deal with this issue. The first one was the cyclic translation where no extra data (i.e. the monolingual data) is introduced in the training. This new way of training did help with the overfitting issue with a higher BLEU score on our created \"test\" dataset but failed to improve the BLEU score on the given development set. The second way was to shuffle the monolingual data used in the back-translation. It solved the overfitting issue but did not achieve a higher BLEU score on the development data either. The last way was to change the values for certain hyper-parameters. For instance, we increased the dropout rate from 0.4 to 0.5 to strengthen regularization. Accordingly, we also increased the number of training steps from 12000 to 20000. We applied the hyper-parameter changes to all three training methods, the original back-translation, cyclic translation, and shuffled back-translation. In the end, the shuffled back-translation model with the new hyperparameter settings and the two pre-processing practices achieved the highest BLEU score on the development set.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment and Results",
                "sec_num": "5"
            },
            {
                "text": "Our submitted shuffled back-translation with a drop-out rate of 0.5 and 20000 training steps on Tensorflow NMT gives a BLEU score of 7.47 on the unknown testing dataset and a BLEU score of 12.12 on the development dataset. A further conclusion can be drawn once the Grammar and the Logical Consistency Rate (LCR) scores are released by the organizers. It can be observed that shuffled back-translation with a higher drop-out rate gave a high BLEU score on the development dataset compared to the baseline or the back-translated model with a lower drop-out rate. This suggests that the shuffling of the dataset before back-translation can potentially address the issue of any bias in the datasets. The improved results with increased dropout suggest that regularization has been effective in this experimental setting. The idea of cyclic translation deserves further exploration. Generalization may be improved on the natural language responses by developing an improved variant of the linguistic factoring approach. The collection of additional training data can also be useful to increase the BLEU score on the unknown test dataset.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Factored neural machine translation at loresmt 2019",
                "authors": [
                    {
                        "first": "Saptarashmi",
                        "middle": [],
                        "last": "Bandyopadhyay",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the 2nd Workshop on Technologies for MT of Low Resource Languages",
                "volume": "",
                "issue": "",
                "pages": "68--71",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Saptarashmi Bandyopadhyay. 2019. Factored neural machine translation at loresmt 2019. In Proceedings of the 2nd Workshop on Technologies for MT of Low Resource Languages, pages 68-71.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Factored neural machine translation on low resource languages in the covid-19 crisis",
                "authors": [
                    {
                        "first": "Saptarashmi",
                        "middle": [],
                        "last": "Bandyopadhyay",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Saptarashmi Bandyopadhyay. 2020. Factored neural machine translation on low resource languages in the covid-19 crisis.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Data cube: A relational aggregation operator generalizing group-by, cross-tab, and sub-totals. Data mining and knowledge discovery",
                "authors": [
                    {
                        "first": "Jim",
                        "middle": [],
                        "last": "Gray",
                        "suffix": ""
                    },
                    {
                        "first": "Surajit",
                        "middle": [],
                        "last": "Chaudhuri",
                        "suffix": ""
                    },
                    {
                        "first": "Adam",
                        "middle": [],
                        "last": "Bosworth",
                        "suffix": ""
                    },
                    {
                        "first": "Andrew",
                        "middle": [],
                        "last": "Layman",
                        "suffix": ""
                    },
                    {
                        "first": "Don",
                        "middle": [],
                        "last": "Reichart",
                        "suffix": ""
                    },
                    {
                        "first": "Murali",
                        "middle": [],
                        "last": "Venkatrao",
                        "suffix": ""
                    },
                    {
                        "first": "Frank",
                        "middle": [],
                        "last": "Pellow",
                        "suffix": ""
                    },
                    {
                        "first": "Hamid",
                        "middle": [],
                        "last": "Pirahesh",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "",
                "volume": "1",
                "issue": "",
                "pages": "29--53",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jim Gray, Surajit Chaudhuri, Adam Bosworth, Andrew Layman, Don Reichart, Murali Venkatrao, Frank Pellow, and Hamid Pirahesh. 1997. Data cube: A re- lational aggregation operator generalizing group-by, cross-tab, and sub-totals. Data mining and knowl- edge discovery, 1(1):29-53.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Iterative backtranslation for neural machine translation",
                "authors": [
                    {
                        "first": "Duy",
                        "middle": [],
                        "last": "Vu Cong",
                        "suffix": ""
                    },
                    {
                        "first": "Philipp",
                        "middle": [],
                        "last": "Hoang",
                        "suffix": ""
                    },
                    {
                        "first": "Gholamreza",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    },
                    {
                        "first": "Trevor",
                        "middle": [],
                        "last": "Haffari",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Cohn",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation",
                "volume": "",
                "issue": "",
                "pages": "18--24",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative back- translation for neural machine translation. In Pro- ceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 18-24.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Improving neural machine translation models with monolingual data",
                "authors": [
                    {
                        "first": "Rico",
                        "middle": [],
                        "last": "Sennrich",
                        "suffix": ""
                    },
                    {
                        "first": "Barry",
                        "middle": [],
                        "last": "Haddow",
                        "suffix": ""
                    },
                    {
                        "first": "Alexandra",
                        "middle": [],
                        "last": "Birch",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1511.06709"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Encoding database schemas with relation-aware self-attention for text-to-sql parsers",
                "authors": [
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Shin",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1906.11790"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Richard Shin. 2019. Encoding database schemas with relation-aware self-attention for text-to-sql parsers. arXiv preprint arXiv:1906.11790.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Cosql: A conversational text-to-sql challenge towards cross-domain natural language interfaces to databases",
                "authors": [
                    {
                        "first": "Tao",
                        "middle": [],
                        "last": "Yu",
                        "suffix": ""
                    },
                    {
                        "first": "Rui",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "He",
                        "middle": [
                            "Yang"
                        ],
                        "last": "Er",
                        "suffix": ""
                    },
                    {
                        "first": "Suyi",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    },
                    {
                        "first": "Eric",
                        "middle": [],
                        "last": "Xue",
                        "suffix": ""
                    },
                    {
                        "first": "Bo",
                        "middle": [],
                        "last": "Pang",
                        "suffix": ""
                    },
                    {
                        "first": "Xi",
                        "middle": [
                            "Victoria"
                        ],
                        "last": "Lin",
                        "suffix": ""
                    },
                    {
                        "first": "Yi",
                        "middle": [
                            "Chern"
                        ],
                        "last": "Tan",
                        "suffix": ""
                    },
                    {
                        "first": "Tianze",
                        "middle": [],
                        "last": "Shi",
                        "suffix": ""
                    },
                    {
                        "first": "Zihan",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1909.05378"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Tao Yu, Rui Zhang, He Yang Er, Suyi Li, Eric Xue, Bo Pang, Xi Victoria Lin, Yi Chern Tan, Tianze Shi, Zihan Li, et al. 2019. Cosql: A conversational text-to-sql challenge towards cross-domain natural language interfaces to databases. arXiv preprint arXiv:1909.05378.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Time expression analysis and recognition using syntactic token types and general heuristic rules",
                "authors": [
                    {
                        "first": "Xiaoshi",
                        "middle": [],
                        "last": "Zhong",
                        "suffix": ""
                    },
                    {
                        "first": "Aixin",
                        "middle": [],
                        "last": "Sun",
                        "suffix": ""
                    },
                    {
                        "first": "Erik",
                        "middle": [],
                        "last": "Cambria",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
                "volume": "1",
                "issue": "",
                "pages": "420--429",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/P17-1039"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Xiaoshi Zhong, Aixin Sun, and Erik Cambria. 2017. Time expression analysis and recognition using syn- tactic token types and general heuristic rules. In Pro- ceedings of the 55th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 420-429, Vancouver, Canada. Asso- ciation for Computational Linguistics.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "num": null,
                "text": "layered bi-directional encoder 2. Source and target sequence length of 60 3. Adam optimizer 4. 0.001 as the initial learning rate 5. luong10 learning rate decay scheme as described",
                "uris": null,
                "type_str": "figure"
            },
            "TABREF1": {
                "content": "<table/>",
                "type_str": "table",
                "num": null,
                "html": null,
                "text": "The grouped SQL keywords and their substitutions."
            },
            "TABREF3": {
                "content": "<table><tr><td>4. Back-translation with SQL keyword general-</td></tr><tr><td>ization and true-cased input (Model 4)</td></tr><tr><td>5. Cyclic-translation with SQL keyword gener-</td></tr><tr><td>alization and true-cased input (Model 5)</td></tr><tr><td>6. Shuffled back-translation with SQL keyword</td></tr><tr><td>generalization and true-cased input (Model 6)</td></tr><tr><td>7. Back-translation with SQL keyword general-</td></tr><tr><td>ization and true-cased input (higher dropout</td></tr><tr><td>and more training steps) (Model 7)</td></tr></table>",
                "type_str": "table",
                "num": null,
                "html": null,
                "text": "Cross validation results with different models."
            }
        }
    }
}