{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:35:11.638501Z"
},
"title": "Cloze Evaluation for Deeper Understanding of Commonsense Stories in Indonesian",
"authors": [
{
"first": "Fajri",
"middle": [],
"last": "Koto",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {}
},
"email": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {}
},
"email": ""
},
{
"first": "Jey",
"middle": [
"Han"
],
"last": "Lau",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {}
},
"email": "jeyhan.lau@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Story comprehension that involves complex causal and temporal relations is a critical task in NLP, but previous studies have focused predominantly on English, leaving open the question of how the findings generalize to other languages, such as Indonesian. In this paper, we follow the Story Cloze Test framework of Mostafazadeh et al. (2016) in evaluating story understanding in Indonesian, by constructing a four-sentence story with one correct ending and one incorrect ending. To investigate commonsense knowledge acquisition in language models, we experimented with: (1) a classification task to predict the correct ending; and (2) a generation task to complete the story with a single sentence. We investigate these tasks in two settings: (i) monolingual training and (ii) zero-shot cross-lingual transfer between Indonesian and English.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Story comprehension that involves complex causal and temporal relations is a critical task in NLP, but previous studies have focused predominantly on English, leaving open the question of how the findings generalize to other languages, such as Indonesian. In this paper, we follow the Story Cloze Test framework of Mostafazadeh et al. (2016) in evaluating story understanding in Indonesian, by constructing a four-sentence story with one correct ending and one incorrect ending. To investigate commonsense knowledge acquisition in language models, we experimented with: (1) a classification task to predict the correct ending; and (2) a generation task to complete the story with a single sentence. We investigate these tasks in two settings: (i) monolingual training and (ii) zero-shot cross-lingual transfer between Indonesian and English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Commonsense reasoning is a key component of natural language understanding (NLU), which previous work (Charniak, 1972; Mueller, 2004; Mostafazadeh et al., 2016; Chen et al., 2019 ) has attempted to model through tasks such as story comprehension. While humans can easily comprehend temporal and causal relations to understand a story narrative, machines tend to struggle due to implicit information and story premises. Often, world knowledge such as social conventions, the laws of nature, and common logic are required to connect the premises to draw appropriate conclusions or closure (Shoham, 1990; Ponti et al., 2020) . Mostafazadeh et al. (2016) and Sharma et al. (2018) introduced the Story Cloze Test framework to empirically evaluate commonsense reasoning, based on English short stories about daily-life events. The task is to choose the correct ending of a four-sentence story based on a two-way multiple choice. Mostafazadeh et al. (2016) published 3,700 data pairs, and the dataset has been used to model commonsense reasoning (Schwartz et al., 2017; Liu et al., 2018; Sap et al., 2019; Chen et al., 2019; Li et al., 2019) and perform discourse probing of pretrained language models (Koto et al., 2021) .",
"cite_spans": [
{
"start": 102,
"end": 118,
"text": "(Charniak, 1972;",
"ref_id": "BIBREF2"
},
{
"start": 119,
"end": 133,
"text": "Mueller, 2004;",
"ref_id": "BIBREF19"
},
{
"start": 134,
"end": 160,
"text": "Mostafazadeh et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 161,
"end": 178,
"text": "Chen et al., 2019",
"ref_id": "BIBREF3"
},
{
"start": 587,
"end": 601,
"text": "(Shoham, 1990;",
"ref_id": "BIBREF26"
},
{
"start": 602,
"end": 621,
"text": "Ponti et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 624,
"end": 650,
"text": "Mostafazadeh et al. (2016)",
"ref_id": "BIBREF18"
},
{
"start": 655,
"end": 675,
"text": "Sharma et al. (2018)",
"ref_id": "BIBREF25"
},
{
"start": 923,
"end": 949,
"text": "Mostafazadeh et al. (2016)",
"ref_id": "BIBREF18"
},
{
"start": 1039,
"end": 1062,
"text": "(Schwartz et al., 2017;",
"ref_id": "BIBREF24"
},
{
"start": 1063,
"end": 1080,
"text": "Liu et al., 2018;",
"ref_id": "BIBREF16"
},
{
"start": 1081,
"end": 1098,
"text": "Sap et al., 2019;",
"ref_id": "BIBREF23"
},
{
"start": 1099,
"end": 1117,
"text": "Chen et al., 2019;",
"ref_id": "BIBREF3"
},
{
"start": 1118,
"end": 1134,
"text": "Li et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 1195,
"end": 1214,
"text": "(Koto et al., 2021)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There is a lack of research modeling story comprehension in languages beyond English. Ponti et al. (2020) argued that current progress over English may not generalize to other languages because of its Anglocentric bias both linguistically, and also in terms of cultural and social conventions (Thomas, 1983) . Motivated by this, we explore commonsense reasoning in Indonesian by constructing a dataset based on the framework of Mostafazadeh et al. (2016) .",
"cite_spans": [
{
"start": 77,
"end": 105,
"text": "English. Ponti et al. (2020)",
"ref_id": null
},
{
"start": 293,
"end": 307,
"text": "(Thomas, 1983)",
"ref_id": "BIBREF27"
},
{
"start": 428,
"end": 454,
"text": "Mostafazadeh et al. (2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "XCOPA (Ponti et al., 2020) is perhaps the most closely-related work to ours, wherein 600 instances of the COPA dataset (Roemmele et al., 2011) were manually translated into 11 languages, including Indonesian.",
"cite_spans": [
{
"start": 6,
"end": 26,
"text": "(Ponti et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 119,
"end": 142,
"text": "(Roemmele et al., 2011)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "COPA is an open-domain commonsense causal reasoning task that consists of two-sentence pairs, and does not include complex narrative comprehension. Moreover, the translation approach also has its own limitations, in entrenching Anglocentric social contexts in other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To summarize, we introduce the first Story Cloze Test in Indonesian, and perform preliminary studies based on: (1) a classification task to predict the correct ending (Li et al., 2019) ; and (2) a single-sentence generation task to complete the story (Guan et al., 2019; Huang et al., 2021) . We perform these two tasks in two settings: (1) monolingual training, and (2) zero-shot cross-lingual transfer, between Indonesian and English. Our data and code are available at https://github.com/fajri91/ IndoCloze.",
"cite_spans": [
{
"start": 167,
"end": 184,
"text": "(Li et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 251,
"end": 270,
"text": "(Guan et al., 2019;",
"ref_id": "BIBREF7"
},
{
"start": 271,
"end": 290,
"text": "Huang et al., 2021)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Following Mostafazadeh et al. (2016) , we construct an Indonesian Story Cloze Test dataset. Each instance consists of a four-sentence premise, and two candidates for the fifth sentence: an appropriate 8 0 5 10 15 20 25 30 35 40 Number of words 0 500 1000 1500 2000 2500 Frequency sentence1 sentence2 sentence3 sentence4 correct and inappropriate ending. Similar to Mostafazadeh et al. (2016) and Sharma et al. (2018) , our corpus consists of daily-life events, but in Indonesian contexts (e.g. locations, places, names, food, culture). Data creation. We hired seven Indonesian university students to each write 500 short stories over a period of one month. As part of the recruitment, candidates were provided with story requirements and several examples, 1 and asked to write a 5sentence story, as well as an inappropriate fifth sentence. From ten applicants, we hired the seven best candidates based on their submitted stories. After one month, four workers completed the job and were paid Rp 750,000. 2 The three who did not complete the task were paid a prorated salary, based on the number of completed stories. This resulted in a dataset of 2,335 stories (see Table 2 for examples).",
"cite_spans": [
{
"start": 10,
"end": 36,
"text": "Mostafazadeh et al. (2016)",
"ref_id": "BIBREF18"
},
{
"start": 397,
"end": 423,
"text": "Mostafazadeh et al. (2016)",
"ref_id": "BIBREF18"
},
{
"start": 428,
"end": 448,
"text": "Sharma et al. (2018)",
"ref_id": "BIBREF25"
},
{
"start": 1036,
"end": 1037,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1198,
"end": 1205,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset Construction",
"sec_num": "2"
},
{
"text": "Quality control. We additionally assessed the dataset by employing two Indonesian university students that were not involved in the data construction. 3 Based on 100 random samples, we asked each worker to choose the correct fifth sentence for a given four-sentence premise, and found that both workers achieved 99% accuracy. 4 Data statistics. Our corpus contains 14,010 sentences and 106,479 words. In Figure 1 , we observe that word counts in each sentence position are somewhat similar, with a median sentence length of 5-10 words.",
"cite_spans": [
{
"start": 151,
"end": 152,
"text": "3",
"ref_id": null
},
{
"start": 326,
"end": 327,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 404,
"end": 412,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Dataset Construction",
"sec_num": "2"
},
{
"text": "We used an IndoBERT model (Koto et al., 2020) to train POS and NER models, based on the datasets of Dinakaramani et al. (2014) and Gultom and Wibowo (2017) , resp., and used them to predict VERB, PERSON, LOCATION, and ORGANIZATION tags. 5 First, we found that the dataset contains 21,447 VERB tokens (3,723 unique tokens), with the top-3 most frequent verbs having a frequency of 2% (see Figure 2 in Appendix). We also observe that PERSON, LOCATION, and ORGANIZATION NEs are mostly local Indonesian expressions, with common PERSON names being Reno and Mamat, and organization names being KAI and Bobo, as captured in Table 1 . Additionally, we found that the top-5 most frequent bigrams and trigrams have a frequency of less than 0.3%, demonstrating the lexical diversity of our stories, even though the dataset was created by a small number of workers (Table 3) .",
"cite_spans": [
{
"start": 26,
"end": 45,
"text": "(Koto et al., 2020)",
"ref_id": "BIBREF12"
},
{
"start": 100,
"end": 126,
"text": "Dinakaramani et al. (2014)",
"ref_id": "BIBREF5"
},
{
"start": 131,
"end": 155,
"text": "Gultom and Wibowo (2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 388,
"end": 396,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 617,
"end": 624,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 853,
"end": 862,
"text": "(Table 3)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Dataset Construction",
"sec_num": "2"
},
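{
"text": "To make the statistics above concrete, the following is a minimal sketch (not the authors' released code) of how verb and named-entity frequencies can be tabulated once every token carries a predicted POS and NER tag; the (token, pos, ne) input format is an assumption for illustration.\n\nfrom collections import Counter\n\ndef tag_statistics(tagged_sentences, top_k=5):\n    # tagged_sentences: list of sentences, each a list of (token, pos_tag, ne_tag) triples\n    verbs, entities = Counter(), Counter()\n    for sent in tagged_sentences:\n        for token, pos, ne in sent:\n            if pos == 'VERB':\n                verbs[token.lower()] += 1\n            if ne in {'PERSON', 'LOCATION', 'ORGANIZATION'}:\n                entities[(ne, token)] += 1\n    total = sum(verbs.values())\n    # relative frequency of the most common verbs\n    top_verbs = [(w, c / total) for w, c in verbs.most_common(top_k)]\n    return top_verbs, entities.most_common(top_k)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Construction",
"sec_num": "2"
},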
{
"text": "Similar to Bhagavatula et al. (2020) experiments in English commonsense reasoning, we conducted two tasks: (1) a classification task to predict the correct ending; and (2) a single-sentence generation task to complete the story. We perform these two tasks in two settings: (1) monolingual training, and (2) zero-shot cross-lingual transfer, between Indonesian and English. The data split is presented in Table 4 .",
"cite_spans": [
{
"start": 11,
"end": 36,
"text": "Bhagavatula et al. (2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 404,
"end": 411,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
{
"text": "Following Mostafazadeh et al. 2016, we evaluate the classification task based on accuracy, defined as #correct #testcases . Models are tuned based on the development set, and results are averaged over three runs. We experiment with the following four models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "3.1"
},
{
"text": "n-gram overlap: We select candidate with the highest ROUGE-1 (F1; Lin (2004)), computed between the premise and ending.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "3.1"
},
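{
"text": "As a concrete illustration (a minimal sketch, not the authors' released implementation), the n-gram overlap baseline can be realized by computing unigram F1 (ROUGE-1) between the concatenated premise and each candidate ending, and selecting the higher-scoring ending.\n\nfrom collections import Counter\n\ndef rouge1_f1(candidate, reference):\n    # unigram overlap F1 between two whitespace-tokenized texts\n    cand, ref = Counter(candidate.lower().split()), Counter(reference.lower().split())\n    overlap = sum((cand & ref).values())\n    if overlap == 0:\n        return 0.0\n    precision = overlap / sum(cand.values())\n    recall = overlap / sum(ref.values())\n    return 2 * precision * recall / (precision + recall)\n\ndef choose_ending(premise_sentences, endings):\n    # pick the ending with the highest ROUGE-1 F1 against the four-sentence premise\n    premise = ' '.join(premise_sentences)\n    return max(endings, key=lambda e: rouge1_f1(e, premise))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "3.1"
},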
{
"text": "fastText-based similarity: We pick the candidate with the highest cosine similarity, computed",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "3.1"
},
{
"text": "Sepulang sekolah, Rani dan Rina mengunjungi toko komik. Komik kesukaan mereka terbit hari ini. Masing-masing membayar sepuluh ribu rupiah. Setelah membayar, mereka berdua pulang ke rumah",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context",
"sec_num": null
},
{
"text": "After school, Rani and Rina visit a comic shop. Their favorite comic will be published today. Each of them paid ten thousand rupiah. After paying, the two of them went home.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context",
"sec_num": null
},
{
"text": "Mereka membaca komik itu bersama-sama di rumah. They read the comic together at home.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Right ending",
"sec_num": null
},
{
"text": "Komik itu mereka robek jadi dua bagian. They tore the comic into two parts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wrong ending",
"sec_num": null
},
{
"text": "Hari ini langit sangat mendung. Gemuruh sudah terdengar sejak pagi. Diprediksi hujan akan segera turun. Aku bergegas berangkat kerja karena takut kehujanan.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context",
"sec_num": null
},
{
"text": "Today the sky is very cloudy. There has been thunder since morning. It is predicted that rain will fall soon. I rush to work to avoid the rain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context",
"sec_num": null
},
{
"text": "Aku membawa jas hujan. I take a raincoat.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Right ending",
"sec_num": null
},
{
"text": "Sebelum berangkat, aku menjemur pakaian di halaman rumah Before leaving, I hang my washing outdoors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wrong ending",
"sec_num": null
},
{
"text": "Boni punya 5 balon. Balon ini dibelikan oleh ayah di Jalan Margonda. Semua balon Boni berwarna berbeda. 2 balon berwarna merah dan biru.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context",
"sec_num": null
},
{
"text": "Boni has 5 balloons. These balloons were bought by his father at Jalan Margonda. All Boni's balloons are different colours. Two of the balloons are red and blue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context",
"sec_num": null
},
{
"text": "Yang lain berwarna putih, hitam, dan kuning The others are white, black and yellow.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Right ending",
"sec_num": null
},
{
"text": "Sedangkan ketiga lainnya berwarna merah muda.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wrong ending",
"sec_num": null
},
{
"text": "While the other three are pink. between the premise and ending based on 300d Indonesian fastText (Bojanowski et al., 2017) . Hierarchical BiLSTM: We use a two-level 200d BiLSTM, using the first to encode a single sentence with 300d fastText as input. We perform average pooling to obtain a sentence representation, and apply the second BiLSTM across all sentences. We concatenate the last hidden state of the two LSTMs, and perform binary classification using a sigmoid function (see Appendix for hyper-parameters).",
"cite_spans": [
{
"start": 97,
"end": 122,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Wrong ending",
"sec_num": null
},
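{
"text": "A minimal sketch of the fastText-based similarity baseline described above (not the authors' released code; the model file cc.id.300.bin is an assumed path for a pretrained 300d Indonesian fastText model): average the word vectors of the premise and of each candidate ending, then pick the ending with the higher cosine similarity.\n\nimport numpy as np\nimport fasttext\n\nft = fasttext.load_model('cc.id.300.bin')  # assumed pretrained Indonesian fastText model\n\ndef embed(text):\n    # mean of 300d fastText word vectors as a simple sentence representation\n    vecs = [ft.get_word_vector(w) for w in text.lower().split()]\n    return np.mean(vecs, axis=0)\n\ndef cosine(a, b):\n    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))\n\ndef choose_ending(premise_sentences, endings):\n    p = embed(' '.join(premise_sentences))\n    return max(endings, key=lambda e: cosine(embed(e), p))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "3.1"
},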
{
"text": "Pretrained Language Models: We fine-tune MBERT (Devlin et al., 2019) and INDOBERT (Koto et al., 2020) by concatenating the premise and ending sentence, and use [CLS] for classification (see Appendix for hyper-parameters). 6 For classification, we first evaluate the difficulty of our dataset by predicting the fifth sentence based on a different combination of premises as context. For zero-shot cross-lingual transfer, we use the English corpus of Mostafazadeh et al. (2016) , and also use translations from Google Translate. 7",
"cite_spans": [
{
"start": 47,
"end": 68,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 82,
"end": 101,
"text": "(Koto et al., 2020)",
"ref_id": "BIBREF12"
},
{
"start": 222,
"end": 223,
"text": "6",
"ref_id": null
},
{
"start": 449,
"end": 475,
"text": "Mostafazadeh et al. (2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Wrong ending",
"sec_num": null
},
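{
"text": "A hedged sketch of this setup using the Huggingface framework mentioned in the footnotes (the checkpoint name is illustrative, not necessarily the exact one used): the premise and a candidate ending are concatenated as a sentence pair, the [CLS] representation is scored, and at test time the higher-scoring ending is chosen.\n\nimport torch\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\n\nname = 'bert-base-multilingual-cased'  # illustrative; INDOBERT would be loaded the same way\ntok = AutoTokenizer.from_pretrained(name)\nmodel = AutoModelForSequenceClassification.from_pretrained(name, num_labels=1)\n\ndef score(premise, ending):\n    # encode 'premise [SEP] ending' and return the single [CLS]-based logit\n    enc = tok(premise, ending, truncation=True, return_tensors='pt')\n    with torch.no_grad():\n        return model(**enc).logits.squeeze().item()\n\ndef choose_ending(premise_sentences, endings):\n    premise = ' '.join(premise_sentences)\n    return max(endings, key=lambda e: score(premise, e))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "3.1"
},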
{
"text": "We use the four-sentence premise as input, and train MBART to generate the fifth sentence for both English and Indonesian. For English, we use the 45K stories of Mostafazadeh et al. (2016) as the training set (see Table 4 ) and perform zero-shot cross-lingual transfer in both language directions (see Appendix for hyper-parameters).",
"cite_spans": [
{
"start": 162,
"end": 188,
"text": "Mostafazadeh et al. (2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 214,
"end": 221,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Generation",
"sec_num": "3.2"
},
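{
"text": "A minimal sketch of the generation setup (the checkpoint name and language codes are assumptions for illustration, not necessarily those used by the authors): the four-sentence premise is the source text and MBART generates a single fifth sentence.\n\nfrom transformers import MBartForConditionalGeneration, MBartTokenizer\n\nname = 'facebook/mbart-large-cc25'  # assumed multilingual MBART checkpoint\ntok = MBartTokenizer.from_pretrained(name, src_lang='id_ID', tgt_lang='id_ID')\nmodel = MBartForConditionalGeneration.from_pretrained(name)\n\ndef generate_ending(premise_sentences):\n    # concatenate the premise, then decode the fifth sentence with beam search\n    enc = tok(' '.join(premise_sentences), return_tensors='pt', truncation=True, max_length=200)\n    out = model.generate(**enc, num_beams=5, max_length=50,\n                         decoder_start_token_id=tok.lang_code_to_id['id_ID'])\n    return tok.batch_decode(out, skip_special_tokens=True)[0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "3.2"
},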
{
"text": "For automatic evaluation we use ROUGE-L (Lin, 2004) , BLEU-4 (Papineni et al., 2002) , ME-TEOR (Lavie and Agarwal, 2007) , and BERTScore (Zhang et al., 2020). For Indonesian, we also conducted manual evaluation using 4 models \u00d7 50 randomly-sampled test instances, including gold sentences and predicted sentences, trained on the EN, ID, and EN+ID datasets. We asked two native speakers to read the premise and then examine whether the fifth sentence is coherent Indonesian text, does not contain repetition, follows commonsense, contains natural or unnatural code-switching (in the case there is code-switching), and the overall story has good narrative flow. 8",
"cite_spans": [
{
"start": 40,
"end": 51,
"text": "(Lin, 2004)",
"ref_id": "BIBREF15"
},
{
"start": 61,
"end": 84,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF20"
},
{
"start": 95,
"end": 120,
"text": "(Lavie and Agarwal, 2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "3.2"
},
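{
"text": "A hedged sketch of how the automatic metrics can be computed for the generated endings (the library choices here are assumptions; the appendix only specifies bert-base-multilingual-cased for BERTScore): sentence-level BLEU-4 via NLTK and BERTScore via the bert-score package.\n\nfrom nltk.translate.bleu_score import sentence_bleu, SmoothingFunction\nfrom bert_score import score as bert_score\n\ndef evaluate(predictions, references):\n    # corpus-averaged sentence-level BLEU-4 with smoothing\n    smooth = SmoothingFunction().method1\n    bleu4 = sum(sentence_bleu([r.split()], p.split(), smoothing_function=smooth)\n                for p, r in zip(predictions, references)) / len(predictions)\n    # BERTScore F1 with the multilingual BERT model mentioned in the appendix\n    _, _, f1 = bert_score(predictions, references, model_type='bert-base-multilingual-cased')\n    return {'BLEU-4': bleu4, 'BERTScore-F1': f1.mean().item()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "3.2"
},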
{
"text": "Classification. In Table 5 , we find that a 1sentence premise (s 4 ) is inadequate to comprehend the narrative of the story. We also observe that the n-gram method performs at near-random (52.9%), while fastText also struggles at 62.6% accuracy. Table 8 : Manual evaluation of the generation task for 50 randomly Indonesian samples, in terms of whether the fifth-sentence: A: does not contain repetition; B: follows commonsense; C: is fluent Indonesian; D: has good narrative flow. The presented scores are aggregated across two annotators (in %). The Kappa scores for each category range between 0.4-0.8 (see Appendix).",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 26,
"text": "Table 5",
"ref_id": null
},
{
"start": 246,
"end": 253,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4"
},
{
"text": "racy. Compared to the English Story Cloze Test, our corpus is arguably harder, as Li et al. (2019) reported BERT accuracies of 78% and 88.1% in the English corpus when using None and s 1 \u2192 s 4 as the premise. We acknowledge that there is a spurious correlation of sentence-5 candidates with the commonsense labels, indicated by INDOBERT accuracy of 76.1% when having context of None. This phenomenon is worse in the English dataset (Mostafazadeh et al., 2016) where the BERT accuracy of using context of None is 88.1% (Li et al., 2019) .",
"cite_spans": [
{
"start": 82,
"end": 98,
"text": "Li et al. (2019)",
"ref_id": "BIBREF14"
},
{
"start": 432,
"end": 459,
"text": "(Mostafazadeh et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 518,
"end": 535,
"text": "(Li et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4"
},
{
"text": "In Table 6 , we use MBERT to examine commonsense reasoning crosslingually between English (EN) and Indonesian (ID). To simplify, we use L1\u2192L2 to denote training in language L1 and testing in L2. First, we observe that combining EN and ID training worsens commonsense reasoning in both English and Indonesian. Applying zeroshot learning (i.e. EN\u2192ID and ID\u2192EN) achieves mixed results, and ID\u2192EN has worse cross-lingual transfer than EN\u2192ID in terms of performance gap over monolingual training. We argue this is because: (1) English is the dominant language in MBERT training, and (2) our ID corpus contains contexts that are less universal (e.g. nasi padang 9 vs. hamburger).",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4"
},
{
"text": "To further observe whether the transferability is affected by factors beyond language, we translate the training data with Google Translate. In Table 6 , EN denotes the English translation of the Indonesian training set, and ID vice versa. Surprisingly, we found that ID \u2192ID has worse performance than EN\u2192ID, while EN \u2192EN improves slightly over ID\u2192EN. This suggests that translating the training set to the test language is ineffective, and actually hurts performance for the ID test set. To further explore this effect, we asked two expert workers to evaluate 100 random sentences in the Google Translate output for EN-ID and ID-EN, and found quality in both translation directions to be high, with very little difference in terms of adequacy and fluency (4.5-4.6 out of 5). 10 Generation. In Table 7 , we observe that training using EN achieves the best performance across the automatic metrics on both the EN and ID test sets, with the one exception of BERTScore for EN+ID\u2192ID. 11 However, in the manual evaluation of Indonesian (Table 8) , we observe a different trend, in that training using the EN data tends to generate repetitive fifth sentences. Based on the manual evaluation, the best results are using ID and EN+ID as the training data, where the models do not suffer from repetition, generate fluent Indonesian, with similar acceptability in terms of commonsense reasoning.",
"cite_spans": [
{
"start": 776,
"end": 778,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 144,
"end": 151,
"text": "Table 6",
"ref_id": "TABREF6"
},
{
"start": 794,
"end": 801,
"text": "Table 7",
"ref_id": "TABREF7"
},
{
"start": 1031,
"end": 1040,
"text": "(Table 8)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4"
},
{
"text": "Although zero-shot cross-lingual transfer of EN\u2192ID suffers from repetition, we notice that MBART is capable of generating plausibly codemixed sentences made up of Indonesian and English (Gardner-Chloros et al., 2009) . Based on our manual evaluation on the same 50 Indonesian test set, we found that 41% of generated fifth sentences contain code-mixing, of which 75% are naturalistic (see Table 9 for examples).",
"cite_spans": [
{
"start": 186,
"end": 216,
"text": "(Gardner-Chloros et al., 2009)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 389,
"end": 396,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4"
},
{
"text": "In this paper, we introduced the first Indonesian story cloze dataset, and performed preliminary analysis in classification and generation settings in two scenarios: monolingual training and zeroshot cross-lingual transfer between Indonesian and 9 Indonesian cuisine. 10 Please see Appendix for the adequacy and fluency scores (including Pearson correlations) of each translation system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "11 EN+ID means that we train the model in a pipeline, using EN first, then ID.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Natural code-mixing sentence Now Armend memiliki printer di rumahnya (Now Armend has a printer in his house)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The only time Livia keluar kamar, adalah ketika ia sedang tidur The only time Livia left the room is when she sleeps Unnatural code-mixing sentence He Hendrik ditangkap oleh Polda (He Hendrik is arrested by the local police)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Shearing her teeth ketika diminta untuk menyanyi paling keras! (Shearing her teeth when she is asked to sing loudly!) Table 9 : Example of code-mixing sentence, generated by MBART when trained on the EN dataset. Red font denotes English words.",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 125,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "English. From both experiments, we found that the cross-lingual transfer of commonsense from English to Indonesian does not perform well, motivating the construction of commonsense reasoning resources in different languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We paid our expert workers fairly, based on the monthly minimum wage in Indonesia. All workers were made aware that the submitted stories would be distributed, and used for research purposes. No sensitive information about the workers will be released.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "6"
},
{
"text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "6"
},
{
"text": "Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In ICLR 2020 : Eighth International Conference on Learning Representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "6"
},
{
"text": "A Training Configurations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "6"
},
{
"text": "For LSTM, we set the maximum token for each sentence to be 30, and train the model for 100 epochs with early stopping (patience = 20), a batch size of 20, Adam optimizer, and a learning rate of 0.01. For pretrained-language model, we set the maximum token to be 450 and 50 for the premise and ending sentence, respectively, and train the model for 20 epochs with early stopping (patience = 5), a batch size of 40, Adam optimizer, an initial learning rate of 5e-5, and warm-up of 10% of the total steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Classification",
"sec_num": null
},
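{
"text": "A hedged sketch of the pretrained-model fine-tuning configuration above, expressed with Huggingface TrainingArguments (the output directory name is a placeholder; early stopping with patience 5 would be added via a separate callback).\n\nfrom transformers import TrainingArguments\n\nargs = TrainingArguments(\n    output_dir='indocloze-classifier',   # placeholder path\n    num_train_epochs=20,\n    per_device_train_batch_size=40,\n    learning_rate=5e-5,\n    warmup_ratio=0.1,                     # warm-up over 10% of the total steps\n    evaluation_strategy='epoch',\n    save_strategy='epoch',\n    load_best_model_at_end=True,          # pairs with an early-stopping callback (patience = 5)\n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Classification",
"sec_num": null
},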
{
"text": "To train the sentence-5 generation task, we set the maximum length of tokens to be 200 and 50 for the input and target text, respectively. We train the models on 4\u00d7V100 32GB GPUs for 60 epochs with an initial learning rate of 1e-4 (Adam optimizer). We use a total batch size of 320 (20 x 4 GPUs x gradient accumulation of 4), a warmup of 10% of total steps, and save checkpoints for every 500 steps. We also compute ROUGE scores (R1) to pick the best checkpoint based on the development set. For calculating BERTScore we use bert-base-multilingual-cased based on layer suggested by Zhang et al. (2020) .",
"cite_spans": [
{
"start": 582,
"end": 601,
"text": "Zhang et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Generation",
"sec_num": null
},
{
"text": "mem akai (wea r) m em ilih (ch oo se ) m en de ng ar (l is te n) m e n a n g is (c ry ) m e n u n g g u (w a it ) b e r u s a h a ( t r y ) k e lu a r ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Additional Data Statistics",
"sec_num": null
},
{
"text": "We further analyze false positive (FP) and true positive (TP) of INDOBERT by considering 1) whether the story contains temporal and causal relations; and 2) the number of premise sentences that are minimally required to entail the right ending. 12 We randomly selected 50 samples from each FP and TP sets, and found that 60% of FP samples have temporal relations while TP has lower percentage (56%). On the other hand, causal relations tends to be correctly predicted, with proportion 88% and 94% for FP and TP, respectively. Lastly, we found that FP samples have a higher average of minimally-required premise: 2.8 (out of 4), while TP samples are only 2.1. Table 11 : Classification task: We randomly sample 100 sentences (of stories) and use Google Translate to obtain the translation. We ask two expert workers to evaluate adequacy and fluency of EN-ID and ID-EN translation (Koehn and Monz, 2006) . Scores reflect the average of two annotations, ranging between 1-5.",
"cite_spans": [
{
"start": 245,
"end": 247,
"text": "12",
"ref_id": null
},
{
"start": 879,
"end": 901,
"text": "(Koehn and Monz, 2006)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 659,
"end": 667,
"text": "Table 11",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "C Analysis on Classification Task: FP and TP Samples",
"sec_num": null
},
{
"text": "Buatlah sebuah cerita pendek dengan 5 kalimat!",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Interview Questions",
"sec_num": null
},
{
"text": "Cerita pendek yang kami maksud terdiri dari 4 kalimat dan 2 kalimat penutup. Satu kalimat penutup merupakan kalimat yang sesuai dengan logika manusia berdasarkan 4 kalimat premise (sesuai dengan commonsense), sedangan 1 kalimat penutup lainnya merupakan kalimat yang tidak sesuai dengan logika manusia (commonsense).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Interview Questions",
"sec_num": null
},
{
"text": "==== Contoh STORY-1 ==== Make a short story with 5 sentences!",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Interview Questions",
"sec_num": null
},
{
"text": "The short story consists of 4 sentences and 2 ending sentences. One ending sentence is a sentence that is in accordance with human logic based on 4 premise sentences (follows the commonsense), while the other one is a sentence that is not in accordance with human logic (do not follow the commonsense).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Interview Questions",
"sec_num": null
},
{
"text": "==== Example-1 ==== 1. Grandma really likes watching soap operas.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Interview Questions",
"sec_num": null
},
{
"text": "2. Every evening after evening prayer she sits in front of the television for 3 hours. 3. Sometimes she muttered because she was annoyed to see the antagonist. 4. Often, she is accompanied by her husband when watching soap operas Correct ending (5): For my grandmother, soap operas are a good entertainment at night Incorrect ending (5): Grandma really wants to be a soap opera actor and will shoot tomorrow ==== Example-2 ==== 1. Pak Miskin has 3 children 2. Sinta, the first child is in grade 6. 3. The second child named Heru is 4 years old 4. The youngest child is Cahyono Correct ending (5): He is still 10 months old Incorrect ending (5): Cahyono is in grade 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Interview Questions",
"sec_num": null
},
{
"text": "Figure 3: Interview question that is used in the hiring of story writers. The second row is the English translation (for illustration).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Interview Questions",
"sec_num": null
},
{
"text": "See Appendix for more details.2 The monthly minimum wage in Indonesia is around Rp 4,000,000, and the workload to write 500 short stories equates to roughly 5-days of full-time work.3 We paid Rp 150,000 to each.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The two candidate fifth sentences (the correct and incorrect endings) are shuffled for each story.5 The POS and NER models have accuracies of 96.8% and 90.1%, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use the Huggingface Pytorch framework for finetuning(Wolf et al., 2019). 7 https://translate.google.com/; accessed on April 2021.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Sentence can be in any position.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are grateful to the anonymous reviewers for their helpful feedback and suggestions. The first author is supported by the Australia Awards Scholarship (AAS), funded by the Department of Foreign Affairs and Trade (DFAT), Australia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "It has been fifteen years that Jerry has not visited his elementary school. Today he is visiting his school to invite his teachers to his wedding. He feels so happy meeting with his former teachers. Those teachers are no longer as young as fifteen years ago.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "Even so, they still remember Jerry.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold:",
"sec_num": null
},
{
"text": "Jerry feels that he has lost his school.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EN model:",
"sec_num": null
},
{
"text": "Jerry is very happy with his teachers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ID model:",
"sec_num": null
},
{
"text": "Jerry is very proud of his primary school. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EN+ID model:",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Abductive commonsense reasoning",
"authors": [
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Ronan Le Bras",
"suffix": ""
},
{
"first": "Keisuke",
"middle": [],
"last": "Malaviya",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Han- nah Rashkin, Doug Downey, Wen-tau Yih, and Yejin Choi. 2020. Abductive commonsense reasoning. In ICLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00051"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Toward a model of children's story comprehension",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak. 1972. Toward a model of children's story comprehension. Ph.D. thesis, Massachusetts Institute of Technology.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Incorporating structured commonsense knowledge in story completion",
"authors": [
{
"first": "Jiaao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jianshu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhou",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "6244--6251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaao Chen, Jianshu Chen, and Zhou Yu. 2019. In- corporating structured commonsense knowledge in story completion. In Proceedings of the AAAI Con- ference on Artificial Intelligence, pages 6244-6251.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Designing an Indonesian part of speech tagset and manually tagged Indonesian corpus",
"authors": [
{
"first": "Arawinda",
"middle": [],
"last": "Dinakaramani",
"suffix": ""
},
{
"first": "Fam",
"middle": [],
"last": "Rashel",
"suffix": ""
},
{
"first": "Andry",
"middle": [],
"last": "Luthfi",
"suffix": ""
},
{
"first": "Ruli",
"middle": [],
"last": "Manurung",
"suffix": ""
}
],
"year": 2014,
"venue": "2014 International Conference on Asian Language Processing (IALP)",
"volume": "",
"issue": "",
"pages": "66--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arawinda Dinakaramani, Fam Rashel, Andry Luthfi, and Ruli Manurung. 2014. Designing an Indone- sian part of speech tagset and manually tagged In- donesian corpus. In 2014 International Conference on Asian Language Processing (IALP), pages 66-69. IEEE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Code-switching",
"authors": [
{
"first": "Penelope",
"middle": [],
"last": "Gardner-Chloros",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Penelope Gardner-Chloros et al. 2009. Code-switching. Cambridge university press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Story ending generation with incremental encoding and commonsense knowledge",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Guan",
"suffix": ""
},
{
"first": "Yansen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jian Guan, Yansen Wang, and Minlie Huang. 2019. Story ending generation with incremental encoding and commonsense knowledge. In AAAI.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic open domain information extraction from Indonesian text",
"authors": [
{
"first": "Yohanes",
"middle": [],
"last": "Gultom",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wahyu Catur Wibowo",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 International Workshop on Big Data and Information Security (IWBIS)",
"volume": "",
"issue": "",
"pages": "23--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yohanes Gultom and Wahyu Catur Wibowo. 2017. Au- tomatic open domain information extraction from In- donesian text. In 2017 International Workshop on Big Data and Information Security (IWBIS), pages 23-30.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Story ending generation with multi-level graph convolutional networks over dependency trees",
"authors": [
{
"first": "Qingbao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Linzhang",
"middle": [],
"last": "Mo",
"suffix": ""
},
{
"first": "Pijian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Qingguang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jielong",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ho Fung Leung",
"suffix": ""
}
],
"year": 2021,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qingbao Huang, Linzhang Mo, Pijian Li, Yi Cai, Qing- guang Liu, Jielong Wei, Qing Li, and Ho fung Le- ung. 2021. Story ending generation with multi-level graph convolutional networks over dependency trees. In AAAI.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Manual and automatic evaluation of machine translation between European languages",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings on the Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "102--121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Christof Monz. 2006. Manual and automatic evaluation of machine translation between European languages. In Proceedings on the Work- shop on Statistical Machine Translation, pages 102- 121, New York City. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Discourse probing of pretrained language models",
"authors": [
{
"first": "Fajri",
"middle": [],
"last": "Koto",
"suffix": ""
},
{
"first": "Jey",
"middle": [
"Han"
],
"last": "Lau",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "3849--3864",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.301"
]
},
"num": null,
"urls": [],
"raw_text": "Fajri Koto, Jey Han Lau, and Timothy Baldwin. 2021. Discourse probing of pretrained language models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 3849-3864, Online. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "IndoLEM and IndoBERT: A benchmark dataset and pre-trained language model for Indonesian NLP",
"authors": [
{
"first": "Fajri",
"middle": [],
"last": "Koto",
"suffix": ""
},
{
"first": "Afshin",
"middle": [],
"last": "Rahimi",
"suffix": ""
},
{
"first": "Jey",
"middle": [
"Han"
],
"last": "Lau",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "757--770",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.66"
]
},
"num": null,
"urls": [],
"raw_text": "Fajri Koto, Afshin Rahimi, Jey Han Lau, and Timothy Baldwin. 2020. IndoLEM and IndoBERT: A bench- mark dataset and pre-trained language model for In- donesian NLP. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 757-770, Barcelona, Spain (Online). Interna- tional Committee on Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "Abhaya",
"middle": [],
"last": "Agarwal",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Second Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "228--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alon Lavie and Abhaya Agarwal. 2007. METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In Proceed- ings of the Second Workshop on Statistical Machine Translation, pages 228-231, Prague, Czech Repub- lic. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Story ending prediction by transferable bert",
"authors": [
{
"first": "Zhongyang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1800--1806",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongyang Li, Xiao Ding, and Ting Liu. 2019. Story ending prediction by transferable bert. In Proceed- ings of the Twenty-Eighth International Joint Con- ference on Artificial Intelligence, pages 1800-1806.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Narrative modeling with memory chains and semantic supervision",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "278--284",
"other_ids": {
"DOI": [
"10.18653/v1/P18-2045"
]
},
"num": null,
"urls": [],
"raw_text": "Fei Liu, Trevor Cohn, and Timothy Baldwin. 2018. Narrative modeling with memory chains and seman- tic supervision. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 278- 284, Melbourne, Australia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Multilingual denoising pre-training for neural machine translation",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "726--742",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00343"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. Transac- tions of the Association for Computational Linguis- tics, 8:726-742.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A corpus and cloze evaluation for deeper understanding of commonsense stories",
"authors": [
{
"first": "Nasrin",
"middle": [],
"last": "Mostafazadeh",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
},
{
"first": "Pushmeet",
"middle": [],
"last": "Kohli",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "839--849",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1098"
]
},
"num": null,
"urls": [],
"raw_text": "Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A cor- pus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839-849, San Diego, California. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Understanding script-based stories using commonsense reasoning",
"authors": [
{
"first": "T",
"middle": [],
"last": "Erik",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mueller",
"suffix": ""
}
],
"year": 2004,
"venue": "Cognitive Systems Research",
"volume": "5",
"issue": "4",
"pages": "307--340",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik T Mueller. 2004. Understanding script-based sto- ries using commonsense reasoning. Cognitive Sys- tems Research, 5(4):307-340.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "XCOPA: A multilingual dataset for causal commonsense reasoning",
"authors": [
{
"first": "Goran",
"middle": [],
"last": "Edoardo Maria Ponti",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Qianchu",
"middle": [],
"last": "Majewska",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2362--2376",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.185"
]
},
"num": null,
"urls": [],
"raw_text": "Edoardo Maria Ponti, Goran Glava\u0161, Olga Majewska, Qianchu Liu, Ivan Vuli\u0107, and Anna Korhonen. 2020. XCOPA: A multilingual dataset for causal common- sense reasoning. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 2362-2376, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Choice of plausible alternatives: An evaluation of commonsense causal reasoning",
"authors": [
{
"first": "Melissa",
"middle": [],
"last": "Roemmele",
"suffix": ""
},
{
"first": "Andrew S",
"middle": [],
"last": "Cosmin Adrian Bejan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gordon",
"suffix": ""
}
],
"year": 2011,
"venue": "AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning",
"volume": "",
"issue": "",
"pages": "90--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melissa Roemmele, Cosmin Adrian Bejan, and An- drew S Gordon. 2011. Choice of plausible alterna- tives: An evaluation of commonsense causal reason- ing. In AAAI Spring Symposium: Logical Formal- izations of Commonsense Reasoning, pages 90-95.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Atomic: An atlas of machine commonsense for ifthen reasoning",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Ronan Le Bras",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Allaway",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Lourie",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Roof",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3027--3035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Ronan Le Bras, Emily Allaway, Chan- dra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019. Atomic: An atlas of machine commonsense for if- then reasoning. In Proceedings of the AAAI Confer- ence on Artificial Intelligence, pages 3027-3035.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Story cloze task: UW NLP system",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Konstas",
"suffix": ""
},
{
"first": "Leila",
"middle": [],
"last": "Zilles",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics",
"volume": "",
"issue": "",
"pages": "52--55",
"other_ids": {
"DOI": [
"10.18653/v1/W17-0907"
]
},
"num": null,
"urls": [],
"raw_text": "Roy Schwartz, Maarten Sap, Ioannis Konstas, Leila Zilles, Yejin Choi, and Noah A. Smith. 2017. Story cloze task: UW NLP system. In Proceedings of the 2nd Workshop on Linking Models of Lexical, Sen- tential and Discourse-level Semantics, pages 52-55, Valencia, Spain. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Tackling the story ending biases in the story cloze test",
"authors": [
{
"first": "Rishi",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
},
{
"first": "Omid",
"middle": [],
"last": "Bakhshandeh",
"suffix": ""
},
{
"first": "Nasrin",
"middle": [],
"last": "Mostafazadeh",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "752--757",
"other_ids": {
"DOI": [
"10.18653/v1/P18-2119"
]
},
"num": null,
"urls": [],
"raw_text": "Rishi Sharma, James Allen, Omid Bakhshandeh, and Nasrin Mostafazadeh. 2018. Tackling the story end- ing biases in the story cloze test. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), pages 752-757, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Nonmonotonic reasoning and causation",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Shoham",
"suffix": ""
}
],
"year": 1990,
"venue": "Cognitive Science",
"volume": "14",
"issue": "2",
"pages": "213--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Shoham. 1990. Nonmonotonic reasoning and causation. Cognitive Science, 14(2):213-252.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Cross-cultural pragmatic failure",
"authors": [
{
"first": "Jenny",
"middle": [],
"last": "Thomas",
"suffix": ""
}
],
"year": 1983,
"venue": "Applied linguistics",
"volume": "4",
"issue": "",
"pages": "91--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Thomas. 1983. Cross-cultural pragmatic failure. Applied linguistics, 4(2):91-112.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.03771"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. arXiv preprint arXiv:1910.03771.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "ending incorrect ending Number of words in each sentence position.",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "( g o o u t ) b e rj a la n (w a lk ) b e rt e m u (m e e t) du du k (s it) me ma sak (co ok) menuju (lead up) mulai (start) me mb ant u (he lp) m em ak an (e at ) m a su k (g e t in ) m e m u tu s k a n (d e c id e ) lu p a ( fo r g e t ) ik u t ( f o ll o w ) ti d u r (s le e p ) ta h u (k n o w ) ke m ba li (g o ba ck ) m en ga m bil (ta ke ) bera da (exis t) melakukan (do) berh asil (suc ceed ) be laj ar (st ud y) m em ba w a (b ri ng ) m e n c a ri (l o o k fo r) m e n o n to n (w a tc h ): 0 Distribution of top-50 verbs in our corpus.",
"uris": null,
"num": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"text": "",
"num": null,
"content": "<table/>"
},
"TABREF2": {
"type_str": "table",
"html": null,
"text": "Three example Story Cloze Test instances, with an English translation for illustrative purposes.",
"num": null,
"content": "<table><tr><td>Bigram (#unique: 59,256)</td><td>Freq (%)</td></tr><tr><td>pergi ke (go to) tidak bisa (can not) hari ini (today) teman temannya (his/her friends) tidak pernah (never)</td><td>0.30 0.29 0.27 0.25 0.25</td></tr><tr><td>Trigram (#unique: 72,443)</td><td>Freq (%)</td></tr><tr><td>oleh karena itu (therefore/thus) pulang ke rumah (go home) dengan teman temannya (with his/her friends) maka dari itu (therefore/thus) dan teman temannya (and his/her friends)</td><td>0.04 0.04 0.03 0.03 0.03</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"html": null,
"text": "Top-5 bigrams and trigrams.",
"num": null,
"content": "<table><tr><td>Task</td><td>EN</td><td>ID (ours)</td></tr><tr><td>Classification</td><td colspan=\"2\">1,683 / 188 / 1,871 1,000 / 200 / 1,135</td></tr><tr><td>Generation</td><td colspan=\"2\">45,496 / 1,871 / 1,871 1,000 / 200 / 1,135</td></tr></table>"
},
"TABREF4": {
"type_str": "table",
"html": null,
"text": "Data distribution of train/development/test set. The English dataset is from Mostafazadeh et al. (2016).",
"num": null,
"content": "<table/>"
},
"TABREF5": {
"type_str": "table",
"html": null,
"text": "Table 5: Test classification accuracy (%) based on different contexts (s i indicates i-th sentence). Human accuracy is 99 (from 100 samples).",
"num": null,
"content": "<table><tr><td/><td colspan=\"3\">Context n-gram fastText</td><td>LSTM</td><td>MBERT</td><td>INDOBERT</td></tr><tr><td/><td>None s 4 s 3 \u2192 s 4 s 2 \u2192 s 4 s 1 \u2192 s 4</td><td>-40.2 49.5 52.9 52.8</td><td>-58.9 62.3 62.5 62.6</td><td colspan=\"2\">68.4 \u00b1 1.5 75.7 \u00b1 0.9 68.8 \u00b1 1.9 77.1 \u00b1 1.4 69.5 \u00b1 0.5 77.3 \u00b1 1.5 68.6 \u00b1 0.9 77.8 \u00b1 0.9 70.0 \u00b1 2.1 78.2 \u00b1 1.4</td><td>76.1 \u00b1 3.4 78.1 \u00b1 0.3 76.0 \u00b1 7.8 75.4 \u00b1 0.9 81.0 \u00b1 2.1</td></tr><tr><td>Train</td><td colspan=\"3\">Test (EN) Test (ID)</td><td/></tr><tr><td colspan=\"4\">EN ID EN+ID EN ID EN+EN 82.9 \u00b1 0.3 75.7 \u00b1 1.5 81.9 \u00b1 0.5 71.3 \u00b1 2.3 68.1 \u00b1 1.9 78.2 \u00b1 1.4 81.7 \u00b1 1.0 76.8 \u00b1 1.1 69.2 \u00b1 1.5 75.6 \u00b1 0.6 78.0 \u00b1 0.9 69.6 \u00b1 0.4 ID+ID 78.6 \u00b1 0.6 76.2 \u00b1 0.6</td><td/></tr></table>"
},
"TABREF6": {
"type_str": "table",
"html": null,
"text": "",
"num": null,
"content": "<table><tr><td>Train</td><td>R-L</td><td>Test (EN) B M</td><td>BS</td><td>R-L</td><td>Test (ID) B M</td><td>BS</td></tr><tr><td>EN ID</td><td colspan=\"3\">20.4 6.9 9.2 75.2 8.5 4.5 4.0 70.3</td><td colspan=\"3\">19.2 6.6 8.2 73.8 17.6 6.2 7.6 74.4</td></tr><tr><td colspan=\"4\">EN+ID 13.6 5.2 6.3 72.4</td><td colspan=\"3\">18.6 6.4 8.0 74.7</td></tr></table>"
},
"TABREF7": {
"type_str": "table",
"html": null,
"text": "",
"num": null,
"content": "<table/>"
},
"TABREF10": {
"type_str": "table",
"html": null,
"text": "Generation",
"num": null,
"content": "<table><tr><td colspan=\"5\">task: Kappa scores (inter-annotator agreement) of manual evaluation for 4 mod-els \u00d7 50 randomly sampled Indonesian test. We eval-uate whether the fifth-sentence: A: does not contain repetition; B: follows commonsense; C: is a fluent In-donesian; D: has a good flow; E: has natural English code-switching; and F: has unnatural English code-switching.</td></tr><tr><td>Aspect</td><td colspan=\"4\">EN-ID Adequacy Fluency Adequacy Fluency ID-EN</td></tr><tr><td>Pearson</td><td>0.55</td><td>0.56</td><td>0.39</td><td>0.37</td></tr><tr><td>Score</td><td>4.47</td><td>4.57</td><td>4.60</td><td>4.58</td></tr></table>"
}
}
}
}