Dataset schema (four columns per record):
- question_id — string, length 40 to 40 (a 40-character hexadecimal identifier)
- question — string, length 4 to 171
- answer — sequence of answer strings
- evidence — sequence of evidence-passage lists, one list per answer
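Each record below follows this schema: an identifier, the question text, a list of answer strings (the literal string "Unanswerable" when no answer exists), and a list of evidence-passage lists. The following is a minimal sketch of loading and inspecting one record with the Hugging Face `datasets` library; the repository id `org/qa-with-evidence` is a hypothetical placeholder, not the real dataset name.

```python
from datasets import load_dataset

# Hypothetical repository id -- substitute the actual dataset path.
ds = load_dataset("org/qa-with-evidence", split="train")

record = ds[0]
print(record["question_id"])  # 40-character hex identifier
print(record["question"])     # question text, 4-171 characters
print(record["answer"])       # list of answer strings; ["Unanswerable"] when no answer
print(record["evidence"])     # list of evidence-passage lists, one per answer
```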
be08ef81c3cfaaaf35c7414397a1871611f1a7fd
Which are the state-of-the-art models?
[ "WMD, VSM, PV-DTW, PV-TED" ]
[ [ "This section reports the results of the experiments conducted on two data sets for evaluating the performances of wDTW and wTED against other baseline methods.", "We denote the following distance/similarity measures.", "WMD: The Word Mover's Distance introduced in Section SECREF1 . WMD adapts the earth mover's distance to the space of documents.", "VSM: The similarity measure introduced in Section UID12 .", "PV-DTW: PV-DTW is the same as Algorithm SECREF21 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0 where INLINEFORM1 is the PV embedding of paragraph INLINEFORM2 .", "PV-TED: PV-TED is the same as Algorithm SECREF23 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0 ." ] ]
dc57ae854d78aa5d5e8c979826d3e2524d4e9165
Why is being feature-engineering free an advantage?
[ "Unanswerable" ]
[ [] ]
18412237f7faafc6befe975d5bcd348e2b499b55
Where did this model place in the final evaluation of the shared task?
[ "$4th$" ]
[ [ "For the shared task submission, we use the predictions of BERT-1M-MT5 as our first submitted system. Then, we concatenate our DEV and TRAIN data to compose a new training set (thus using all the training data released by organizers) to re-train BERT-1M-MT5 and BERT-MT6 with the same parameters. We use the predictions of these two models as our second and third submissions. Our second submission obtains 82.4 $F_1$ on the official test set, and ranks $4th$ on this shared task." ] ]
02945c85d6cc4cdd1757b2f2bfa5e92ee4ed14a0
What in-domain data is used to continue pre-training?
[ "dialectal tweet data" ]
[ [ "We view different varieties of Arabic as different domains, and hence introduce a simple, yet effective, `in-domain' training measure where we further pre-train BERT on a dataset closer to task domain (in that it involves dialectal tweet data)." ] ]
6e51af9088c390829703c6fa966e98c3a53114c1
What dialect is used in the Google BERT model and what is used in the task data?
[ "Modern Standard Arabic (MSA), MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine" ]
[ [ "Another problem we face is that the BERT model released by Google is trained only on Arabic Wikipedia, which is almost exclusively Modern Standard Arabic (MSA). This introduces a language variety mismatch due to the irony data involving a number of dialects that come from the Twitter domain. To mitigate this issue, we further pre-train BERT on an in-house dialectal Twitter dataset, showing the utility of this measure. To summarize, we make the following contributions:", "The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e. targeted political figures or events) and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants for “irony\"). Duplicates, retweets, and non-intelligible tweets are removed by organizers. Tweets involve both MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine." ] ]
07ee4e0277ad1083270131d32a71c3fe062a916d
What are the tasks used in the multi-task learning setup?
[ "Author profiling and deception detection in Arabic, LAMA+DINA Emotion detection, Sentiment analysis in Arabic tweets" ]
[ [ "Our multi-task BERT models involve six different Arabic classification tasks. We briefly introduce the data for these tasks here:", "Author profiling and deception detection in Arabic (APDA). BIBREF9 . From APDA, we only use the corpus of author profiling (which includes the three profiling tasks of age, gender, and variety). The organizers of APDA provide 225,000 tweets as training data. Each tweet is labelled with three tags (one for each task). To develop our models, we split the training data into 90% training set ($n$=202,500 tweets) and 10% development set ($n$=22,500 tweets). With regard to age, authors consider tweets of three classes: {Under 25, Between 25 and 34, and Above 35}. For the Arabic varieties, they consider the following fifteen classes: {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. Gender is labeled as a binary task with {male,female} tags.", "LAMA+DINA Emotion detection. Alhuzali et al. BIBREF10 introduce LAMA, a dataset for Arabic emotion detection. They use a first-person seed phrase approach and extend work by Abdul-Mageed et al. BIBREF11 for emotion data collection from 6 to 8 emotion categories (i.e. anger, anticipation, disgust, fear, joy, sadness, surprise and trust). We use the combined LAMA+DINA corpus. It is split by the authors as 189,902 tweets training set, 910 as development, and 941 as test. In our experiment, we use only the training set for out MTL experiments.", "Sentiment analysis in Arabic tweets. This dataset is a shared task on Kaggle by Motaz Saad . The corpus contains 58,751 Arabic tweets (46,940 training, and 11,811 test). The tweets are annotated with positive and negative labels based on an emoji lexicon." ] ]
bfce2afe7a4b71f9127d4f9ef479a0bfb16eaf76
What human evaluation metrics were used in the paper?
[ "rating questions on a scale of 1-5 based on fluency of language used and relevance of the question to the context" ]
[ [ "For the human evaluation, we follow the standard approach in evaluating machine translation systems BIBREF23 , as used for question generation by BIBREF9 . We asked three workers to rate 300 generated questions between 1 (poor) and 5 (good) on two separate criteria: the fluency of the language used, and the relevance of the question to the context document and answer." ] ]
dfbab3cd991f86d998223726617d61113caa6193
For the purposes of this paper, how is something determined to be domain specific knowledge?
[ "reviews under distinct product categories are considered specific domain knowledge" ]
[ [ "Amazon Reviews Dataset BIBREF24 is a large dataset with millions of reviews from different product categories. For our experiments, we consider a subset of 20000 reviews from the domains Cell Phones and Accessories(C), Clothing and Shoes(S), Home and Kitchen(H) and Tools and Home Improvement(T). Out of 20000 reviews, 10000 are positive and 10000 are negative. We use 12800 reviews for training, 3200 reviews for validation and 4000 reviews for testing from each domain." ] ]
df510c85c277afc67799abcb503caa248c448ad2
Does the fact that GCNs can perform well on this tell us that the task is simpler than previously thought?
[ "No" ]
[ [ "We see that gated architectures almost always outperform recurrent, attention and linear models BoW, TFIDF, PV. This is largely because while training and testing on same domains, these models especially recurrent and attention based may perform better. However, for Domain Adaptation, as they lack gated structure which is trained in parallel to learn importance, their performance on target domain is poor as compared to gated architectures. As gated architectures are based on convolutions, they exploit parallelization to give significant boost in time complexity as compared to other models. This is depicted in Table 1 .", "We find that gated architectures vastly outperform non gated CNN model. The effectiveness of gated architectures rely on the idea of training a gate with sole purpose of identifying a weightage. In the task of sentiment analysis this weightage corresponds to what weights will lead to a decrement in final loss or in other words, most accurate prediction of sentiment. In doing so, the gate architecture learns which words or n-grams contribute to the sentiment the most, these words or n-grams often co-relate with domain independent words. On the other hand the gate gives less weightage to n-grams which are largely either specific to domain or function word chunks which contribute negligible to the overall sentiment. This is what makes gated architectures effective at Domain Adaptation." ] ]
d95180d72d329a27ddf2fd5cc6919f469632a895
Are there conceptual benefits to using GCNs over more complex architectures like attention?
[ "Yes" ]
[ [ "The proposed model architecture is shown in the Figure 1 . Recurrent Neural Networks like LSTM, GRU update their weights at every timestep sequentially and hence lack parallelization over inputs in training. In case of attention based models, the attention layer has to wait for outputs from all timesteps. Hence, these models fail to take the advantage of parallelism either. Since, proposed model is based on convolution layers and gated mechanism, it can be parallelized efficiently. The convolution layers learn higher level representations for the source domain. The gated mechanism learn the domain agnostic representations. They together control the information that has to flow through further fully connected output layer after max pooling.", "We find that gated architectures vastly outperform non gated CNN model. The effectiveness of gated architectures rely on the idea of training a gate with sole purpose of identifying a weightage. In the task of sentiment analysis this weightage corresponds to what weights will lead to a decrement in final loss or in other words, most accurate prediction of sentiment. In doing so, the gate architecture learns which words or n-grams contribute to the sentiment the most, these words or n-grams often co-relate with domain independent words. On the other hand the gate gives less weightage to n-grams which are largely either specific to domain or function word chunks which contribute negligible to the overall sentiment. This is what makes gated architectures effective at Domain Adaptation.", "We see that gated architectures almost always outperform recurrent, attention and linear models BoW, TFIDF, PV. This is largely because while training and testing on same domains, these models especially recurrent and attention based may perform better. However, for Domain Adaptation, as they lack gated structure which is trained in parallel to learn importance, their performance on target domain is poor as compared to gated architectures. As gated architectures are based on convolutions, they exploit parallelization to give significant boost in time complexity as compared to other models. This is depicted in Table 1 ." ] ]
e196e2ce72eb8b2d50732c26e9bf346df6643f69
Do they evaluate only on English?
[ "Yes" ]
[ [ "We test our proposed approach for binary classification on either sarcasm or irony, on seven benchmark datasets retrieved from different media sources. Below we describe each dataset, please see Table TABREF1 below for a summary.", "Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag.", "Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol.", "Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 .", "In Table TABREF1 , we see a notable difference in terms of size among the Twitter datasets. Given this circumstance, and in light of the findings by BIBREF18 , we are interested in studying how the addition of external soft-annotated data impacts on the performance. Thus, in addition to the datasets introduced before, we use two corpora for augmentation purposes. The first dataset was collected using the Twitter API, targeting tweets with the hashtags #sarcasm or #irony, resulting on a total of 180,000 and 45,000 tweets respectively. On the other hand, to obtain non-sarcastic and non-ironic tweets, we relied on the SemEval 2018 Task 1 dataset BIBREF25 . To augment each dataset with our external data, we first filter out tweets that are not in English using language guessing systems. We later extract all the hashtags in each target dataset and proceed to augment only using those external tweets that contain any of these hashtags. This allows us to, for each class, add a total of 36,835 tweets for the Ptáček corpus, 8,095 for the Riloff corpus and 26,168 for the SemEval-2018 corpus." ] ]
46570c8faaeefecc8232cfc2faab0005faaba35f
What are the 7 different datasets?
[ "SemEval 2018 Task 3, BIBREF20, BIBREF4, SARC 2.0, SARC 2.0 pol, Sarcasm Corpus V1 (SC-V1), Sarcasm Corpus V2 (SC-V2)" ]
[ [ "Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag.", "Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol.", "Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 ." ] ]
982d375378238d0adbc9a4c987d633ed16b7f98f
What are the three different sources of data?
[ "Twitter, Reddit, Online Dialogues" ]
[ [ "We test our proposed approach for binary classification on either sarcasm or irony, on seven benchmark datasets retrieved from different media sources. Below we describe each dataset, please see Table TABREF1 below for a summary.", "Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol.", "Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag.", "Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 ." ] ]
bbdb2942dc6de3d384e3a1b705af996a5341031b
What type of model are the ELMo representations used in?
[ "A bi-LSTM with max-pooling on top of it" ]
[ [ "The usage of a purely character-based input would allow us to directly recover and model these features. Consequently, our architecture is based on Embeddings from Language Model or ELMo BIBREF10 . The ELMo layer allows to recover a rich 1,024-dimensional dense vector for each word. Using CNNs, each vector is built upon the characters that compose the underlying words. As ELMo also contains a deep bi-directional LSTM on top of this character-derived vectors, each word-level embedding contains contextual information from their surroundings. Concretely, we use a pre-trained ELMo model, obtained using the 1 Billion Word Benchmark which contains about 800M tokens of news crawl data from WMT 2011 BIBREF24 .", "Subsequently, the contextualized embeddings are passed on to a BiLSTM with 2,048 hidden units. We aggregate the LSTM hidden states using max-pooling, which in our preliminary experiments offered us better results, and feed the resulting vector to a 2-layer feed-forward network, where each layer has 512 units. The output of this is then fed to the final layer of the model, which performs the binary classification." ] ]
4ec538e114356f72ef82f001549accefaf85e99c
Which morphosyntactic features are thought to indicate irony or sarcasm?
[ "all caps, quotation marks, emoticons, emojis, hashtags" ]
[ [ "On the other hand, deep models for irony and sarcasm detection, which are currently offer state-of-the-art performance, have exploited sequential neural networks such as LSTMs and GRUs BIBREF15 , BIBREF23 on top of distributed word representations. Recently, in addition to using a sequential model, BIBREF14 proposed to use intra-attention to compare elements in a sequence against themselves. This allowed the model to better capture word-to-word level interactions that could also be useful for detecting sarcasm, such as the incongruity phenomenon BIBREF3 . Despite this, all models in the literature rely on word-level representations, which keeps the models from being able to easily capture some of the lexical and morpho-syntactic cues known to denote irony, such as all caps, quotation marks and emoticons, and in Twitter, also emojis and hashtags." ] ]
40a45d59a2ef7a67c8ab0f2b2d5b43fc85b85498
Is the model evaluated on other datasets?
[ "No" ]
[ [ "We evaluated SDNet on CoQA dataset, which improves the previous state-of-the-art model's result by 1.6% (from 75.0% to 76.6%) overall $F_1$ score. The ensemble model further increase the $F_1$ score to $79.3\\%$ . Moreover, SDNet is the first model ever to pass $80\\%$ on CoQA's in-domain dataset." ] ]
b29b5c39575454da9566b3dd27707fced8c6f4a1
Does the model incorporate coreference and entailment?
[ "As the question has integrated previous utterances, the model needs to directly relate previously mentioned concept with the current question. This is helpful for concept carry-over and coreference resolution." ]
[ [ "Self-Attention on Question. As the question has integrated previous utterances, the model needs to directly relate previously mentioned concept with the current question. This is helpful for concept carry-over and coreference resolution. We thus employ self-attention on question. The formula is the same as word-level attention, except that we are attending a question to itself: $\\lbrace {u}_i^Q\\rbrace _{i=1}^n=\\mbox{Attn}(\\lbrace {h}_i^{Q,K+1}\\rbrace _{i=1}^n, \\lbrace {h}_i^{Q,K+1}\\rbrace _{i=1}^n, \\lbrace {h}_i^{Q,K+1}\\rbrace _{i=1}^n)$ . The final question representation is thus $\\lbrace {u}_i^Q\\rbrace _{i=1}^n$ ." ] ]
4040f5c9f365f9bc80b56dce944ada85bb8b4ab4
Is the incorporation of context separately evaluated?
[ "No" ]
[ [] ]
7dce1b64c0040500951c864fce93d1ad7a1809bc
Which frozen acoustic model do they use?
[ "a masking speech enhancement model BIBREF11, BIBREF12, BIBREF13" ]
[ [ "Several recent works have investigated jointly training the acoustic model with a masking speech enhancement model BIBREF11, BIBREF12, BIBREF13, but these works did not evaluate their system on speech enhancement metrics. Indeed, our internal experiments show that without access to the clean data, joint training severely harms performance on these metrics." ] ]
e1b36927114969f3b759cba056cfb3756de474e4
By how much does using phonetic feedback improve state-of-the-art systems?
[ "Improved AECNN-T by 2.1 and AECNN-T-SM BY 0.9" ]
[ [ "In addition to the setting without any parallel data, we show results given parallel data. In Table TABREF10 we demonstrate that training the AECNN framework with mimic loss improves intelligibility over both the model trained with only time-domain loss (AECNN-T), as well as the model trained with both time-domain and spectral-domain losses (AECNN-T-SM). We only see a small improvement in the SI-SDR, likely due to the fact that the mimic loss technique is designed to improve the recognizablity of the results. In fact, seeing any improvement in SI-SDR at all is a surprising result." ] ]
186ccc18c6361904bee0d58196e341a719fb31c2
What features are used?
[ "Sociodemographics: gender, age, marital status, etc., Past medical history: number of previous admissions, history of suicidality, average length of stay (up until that admission), etc., Information from the current admission: length of stay (LOS), suicidal risk, number and length of notes, time of discharge, evaluation scores, etc." ]
[ [ "45 clinically interpretable features per admission were extracted as inputs to the readmission risk classifier. These features can be grouped into three categories (See Table TABREF5 for complete list of features):", "Sociodemographics: gender, age, marital status, etc.", "Past medical history: number of previous admissions, history of suicidality, average length of stay (up until that admission), etc.", "Information from the current admission: length of stay (LOS), suicidal risk, number and length of notes, time of discharge, evaluation scores, etc.", "The Current Admission feature group has the most number of features, with 29 features included in this group alone. These features can be further stratified into two groups: `structured' clinical features and `unstructured' clinical features.", "Feature Extraction ::: Structured Features", "Structure features are features that were identified on the EHR using regular expression matching and include rating scores that have been reported in the psychiatric literature as correlated with increased readmission risk, such as Global Assessment of Functioning, Insight and Compliance:", "Global Assessment of Functioning (GAF): The psychosocial functioning of the patient ranging from 100 (extremely high functioning) to 1 (severely impaired) BIBREF13.", "Insight: The degree to which the patient recognizes and accepts his/her illness (either Good, Fair or Poor).", "Compliance: The ability of the patient to comply with medication and to follow medical advice (either Yes, Partial, or None).", "These features are widely-used in clinical practice and evaluate the general state and prognosis of the patient during the patient's evaluation.", "Feature Extraction ::: Unstructured Features", "Unstructured features aim to capture the state of the patient in relation to seven risk factor domains (Appearance, Thought Process, Thought Content, Interpersonal, Substance Use, Occupation, and Mood) from the free-text narratives on the EHR. These seven domains have been identified as associated with readmission risk in prior work BIBREF14.", "These unstructured features include: 1) the relative number of sentences in the admission notes that involve each risk factor domain (out of total number of sentences within the admission) and 2) clinical sentiment scores for each of these risk factor domains, i.e. sentiment scores that evaluate the patient’s psychosocial functioning level (positive, negative, or neutral) with respect to each of these risk factor domain." ] ]
fd5412e2784acefb50afc3bfae1e087580b90ab9
Do they compare to previous models?
[ "Yes" ]
[ [ "To systematically evaluate the importance of the clinical sentiment values extracted from the free text in EHRs, we first build a baseline model using the structured features, which are similar to prior studies on readmission risk prediction BIBREF6. We then compare two models incorporating the unstructured features. In the \"Baseline+Domain Sentences\" model, we consider whether adding the counts of sentences per EHR that involve each of the seven risk factor domains as identified by our topic extraction model improved the model performance. In the \"Baseline+Clinical Sentiment\" model, we evaluate whether adding clinical sentiment scores for each risk factor domain improved the model performance. We also experimented with combining both sets of features and found no additional improvement." ] ]
c7f087c78768d5c6f3ff26921858186d627fd4fd
How do they incorporate sentiment analysis?
[ "features per admission were extracted as inputs to the readmission risk classifier" ]
[ [ "These unstructured features include: 1) the relative number of sentences in the admission notes that involve each risk factor domain (out of total number of sentences within the admission) and 2) clinical sentiment scores for each of these risk factor domains, i.e. sentiment scores that evaluate the patient’s psychosocial functioning level (positive, negative, or neutral) with respect to each of these risk factor domain.", "These sentiment scores were automatically obtained through the topic extraction and sentiment analysis pipeline introduced in our prior work BIBREF15 and pretrained on in-house psychiatric EHR text. In our paper we also showed that this automatic pipeline achieves reasonably strong F-scores, with an overall performance of 0.828 F1 for the topic extraction component and 0.5 F1 on the clinical sentiment component.", "45 clinically interpretable features per admission were extracted as inputs to the readmission risk classifier. These features can be grouped into three categories (See Table TABREF5 for complete list of features):" ] ]
82596190560dc2e2ced2131779730f40a3f3eb8c
What is the dataset used?
[ "EHRs of 183 psychosis patients from McLean Psychiatric Hospital in Belmont, MA" ]
[ [ "The corpus consists of a collection of 2,346 clinical notes (admission notes, progress notes, and discharge summaries), which amounts to 2,372,323 tokens in total (an average of 1,011 tokens per note). All the notes were written in English and extracted from the EHRs of 183 psychosis patients from McLean Psychiatric Hospital in Belmont, MA, all of whom had in their history at least one instance of 30-day readmission." ] ]
345f65eaff1610deecb02ff785198aa531648e75
How do they extract topics?
[ " automatically obtained through the topic extraction and sentiment analysis pipeline introduced in our prior work BIBREF15" ]
[ [ "These sentiment scores were automatically obtained through the topic extraction and sentiment analysis pipeline introduced in our prior work BIBREF15 and pretrained on in-house psychiatric EHR text. In our paper we also showed that this automatic pipeline achieves reasonably strong F-scores, with an overall performance of 0.828 F1 for the topic extraction component and 0.5 F1 on the clinical sentiment component." ] ]
51d03f0741b72ae242c380266acd2321baf43444
How does this compare to simple interpolation between a word-level and a character-level language model?
[ "Unanswerable" ]
[ [] ]
96c20af8bbef435d0d534d10c42ae15ff2f926f8
What translationese effects are seen in the analysis?
[ "potentially indicating a shining through effect, explicitation effect" ]
[ [ "Notably, automatic translations of TED talks contain more words than the corresponding reference translation, which means that machine-translated texts of this type have also more potential tokens to enter in a coreference relation, and potentially indicating a shining through effect. The same does not happen with the news test set.", "We also characterised the originals and translations according to coreference features such as total number of chains and mentions, average chain length and size of the longest chain. We see how NMT translations increase the number of mentions about $30\\%$ with respect to human references showing even a more marked explicitation effect than human translations do. As future work, we consider a more detailed comparison of the human and machine translations, and analyse the purpose of the additional mentions added by the NMT systems. It would be also interesting to evaluate of the quality of the automatically computed coreferences chains used for S3." ] ]
9544cc0244db480217ce9174aa13f1bf09ba0d94
What languages are seen in the news and TED datasets?
[ "English, German" ]
[ [ "As one of our aims is to compare coreference chain properties in automatic translation with those of the source texts and human reference, we derive data from ParCorFull, an English-German corpus annotated with full coreference chains BIBREF46. The corpus contains ca. 160.7 thousand tokens manually annotated with about 14.9 thousand mentions and 4.7 thousand coreference chains. For our analysis, we select a portion of English news texts and TED talks from ParCorFull and translate them with the three NMT systems described in SECREF4 above. As texts considerably differ in their length, we select 17 news texts (494 sentences) and four TED talks (518 sentences). The size (in tokens) of the total data set under analysis – source (src) and human translations (ref) from ParCorFull and the automatic translations produced within this study (S1, S2 and S3) are presented in Table TABREF20." ] ]
c97a4a1c0e3d00137a9ae8d6fbb809ba6492991d
How are the coreference chain translations evaluated?
[ "Unanswerable" ]
[ [] ]
3758669426e8fb55a4102564cf05f2864275041b
How are the (possibly incorrect) coreference chains in the MT outputs annotated?
[ "allows the annotator to define each markable as a certain mention type (pronoun, NP, VP or clause), The mentions referring to the same discourse item are linked between each other., chain members are annotated for their correctness" ]
[ [ "The English sources and their corresponding human translations into German were already manually annotated for coreference chains. We follow the same scheme as BIBREF47 to annotate the MT outputs with coreference chains. This scheme allows the annotator to define each markable as a certain mention type (pronoun, NP, VP or clause). The mentions can be defined further in terms of their cohesive function (antecedent, anaphoric, cataphoric, comparative, substitution, ellipsis, apposition). Antecedents can either be marked as simple or split or as entity or event. The annotation scheme also includes pronoun type (personal, possessive, demonstrative, reflexive, relative) and modifier types of NPs (possessive, demonstrative, definite article, or none for proper names), see BIBREF46 for details. The mentions referring to the same discourse item are linked between each other. We use the annotation tool MMAX2 BIBREF48 which was also used for the annotation of ParCorFull.", "In the next step, chain members are annotated for their correctness. For the incorrect translations of mentions, we include the following error categories: gender, number, case, ambiguous and other. The latter category is open, which means that the annotators can add their own error types during the annotation process. With this, the final typology of errors also considered wrong named entity, wrong word, missing word, wrong syntactic structure, spelling error and addressee reference." ] ]
1ebd6f703458eb6690421398c79abf3fc114148f
Which three neural machine translation systems are analyzed?
[ "first two systems are transformer models trained on different amounts of data, The third system includes a modification to consider the information of full coreference chains" ]
[ [ "We train three systems (S1, S2 and S3) with the corpora summarised in Table TABREF5. The first two systems are transformer models trained on different amounts of data (6M vs. 18M parallel sentences as seen in the Table). The third system includes a modification to consider the information of full coreference chains throughout a document augmenting the sentence to be translated with this information and it is trained with the same amount of sentence pairs as S1. A variant of the S3 system participated in the news machine translation of the shared task held at WMT 2019 BIBREF43." ] ]
15a1df59ed20aa415a4daf0acb256747f6766f77
Which coreference phenomena are analyzed?
[ "shining through, explicitation" ]
[ [ "Therefore, in our analysis, we focus on the chain features related to the phenomena of shining through and explicitation. These features include number of mentions, number of chains, average chain length and the longest chain size. Machine-translated texts are compared to their sources and the corresponding human translations in terms of these features. We expect to find shining through and explicitation effects in automatic translations." ] ]
b124137e62178a2bd3b5570d73b1652dfefa2457
What new interesting tasks can be solved based on the uncanny semantic structures of the embedding space?
[ " analogy query, analogy browsing" ]
[ [ "Based on our theoretical results, we design a general framework for data exploration on scholarly data by semantic queries on knowledge graph embedding space. The main component in this framework is the conversion between the data exploration tasks and the semantic queries. We first outline the semantic query solutions to some traditional data exploration tasks, such as similar paper prediction and similar author prediction. We then propose a group of new interesting tasks, such as analogy query and analogy browsing, and discuss how they can be used in modern digital libraries." ] ]
c6aa8a02597fea802890945f0b4be8d631e4d5cd
What are the uncanny semantic structures of the embedding space?
[ "Semantic similarity structure, Semantic direction structure" ]
[ [ "We mainly concern with the two following structures of the embedding space.", "Semantic similarity structure: Semantically similar entities are close to each other in the embedding space, and vice versa. This structure can be identified by a vector similarity measure, such as the dot product between two embedding vectors. The similarity between two embedding vectors is computed as:", "Semantic direction structure: There exist semantic directions in the embedding space, by which only one semantic aspect changes while all other aspects stay the same. It can be identified by a vector difference, such as the subtraction between two embedding vectors. The semantic direction between two embedding vectors is computed as:" ] ]
bfad30f51ce3deea8a178944fa4c6e8acdd83a48
What is the general framework for data exploration by semantic queries?
[ "three main components, namely data processing, task processing, and query processing" ]
[ [ "Given the theoretical results, here we design a general framework for scholarly data exploration by using semantic queries on knowledge graph embedding space. Figure FIGREF19 shows the architecture of the proposed framework. There are three main components, namely data processing, task processing, and query processing." ] ]
dd9883f4adf7be072d314d7ed13fe4518c5500e0
What data exploration is supported by the analysis of these semantic structures?
[ "Task processing: converting data exploration tasks to algebraic operations on the embedding space, Query processing: executing semantic query on the embedding space and return results" ]
[ [ "Task processing: converting data exploration tasks to algebraic operations on the embedding space by following task-specific conversion templates. Some important tasks and their conversion templates are discussed in Section SECREF5.", "Query processing: executing semantic query on the embedding space and return results. Note that the algebraic operations on embedding vectors are linear and can be performed in parallel. Therefore, the semantic query is efficient." ] ]
81669c550d32d756f516dab5d2b76ff5f21c0f36
what are the existing models they compared with?
[ "Syn Dep, OpenIE, SRL, BiDAF, QANet, BERT, NAQANet, NAQANet+" ]
[ [ "Experiments ::: Baselines", "For comparison, we select several public models as baselines including semantic parsing models:", "BiDAF BIBREF3, an MRC model which utilizes a bi-directional attention flow network to encode the question and passage;", "QANet BIBREF12, which utilizes convolutions and self-attentions as the building blocks of encoders to represent the question and passage;", "BERT BIBREF23, a pre-trained bidirectional Transformer-based language model which achieves state-of-the-art performance on lots of public MRC datasets recently;", "and numerical MRC models:", "NAQANet BIBREF6, a numerical version of QANet model.", "NAQANet+, an enhanced version of NAQANet implemented by ourselves, which further considers real number (e.g. “2.5”), richer arithmetic expression, data augmentation, etc. The enhancements are also used in our NumNet model and the details are given in the Appendix.", "Syn Dep BIBREF6, the neural semantic parsing model (KDG) BIBREF22 with Stanford dependencies based sentence representations;", "OpenIE BIBREF6, KDG with open information extraction based sentence representations;", "SRL BIBREF6, KDG with semantic role labeling based sentence representations;", "and traditional MRC models:" ] ]
b0b1ff2d6515fb40d74a4538614a0db537e020ea
Do they report results only on English data?
[ "Unanswerable" ]
[ [] ]
4266aacb575b4be7dbcdb8616766324f8790763c
What conclusions do the authors draw from their detailed analyses?
[ "neural network-based models can outperform feature-based models with wide margins, contextualized representation learning can boost performance of NN models" ]
[ [ "We established strong baselines for two story narrative understanding datasets: CaTeRS and RED. We have shown that neural network-based models can outperform feature-based models with wide margins, and we conducted an ablation study to show that contextualized representation learning can boost performance of NN models. Further research can focus on more systematic study or build stronger NN models over the same datasets used in this work. Exploring possibilities to directly apply temporal relation extraction to enhance performance of story generation systems is another promising research direction." ] ]
191107cd112f7ee6d19c1dc43177e6899452a2c7
Do the BERT-based embeddings improve results?
[ "Yes" ]
[ [ "Table TABREF25 contains the best hyper-parameters and Table TABREF26 contains micro-average F1 scores for both datasets on dev and test sets. We only consider positive pairs, i.e. correct predictions on NONE pairs are excluded for evaluation. In general, the baseline model CAEVO is outperformed by both NN models, and NN model with BERT embedding achieves the greatest performance. We now provide more detailed analysis and discussion for each dataset." ] ]
b0dca7b74934f51ff3da0c074ad659c25d84174d
What were the traditional linguistic feature-based models?
[ "CAEVO" ]
[ [ "The series of TempEval competitions BIBREF21 , BIBREF22 , BIBREF23 have attracted many research interests in predicting event temporal relations. Early attempts by BIBREF24 , BIBREF21 , BIBREF25 , BIBREF26 only use pair-wise classification models. State-of-the-art local methods, such as ClearTK BIBREF27 , UTTime BIBREF28 , and NavyTime BIBREF29 improve on earlier work by feature engineering with linguistic and syntactic rules. As we mention in the Section 2, CAEVO is the current state-of-the-art system for feature-based temporal event relation extraction BIBREF10 . It's widely used as the baseline for evaluating TB-Dense data. We adopt it as our baseline for evaluating CaTeRS and RED datasets. Additionally, several models BramsenDLB2006, ChambersJ2008, DoLuRo12, NingWuRo18, P18-1212 have successfully incorporated global inference to impose global prediction consistency such as temporal transitivity." ] ]
601e58a3d2c03a0b4cd627c81c6228a714e43903
What type of baseline are established for the two datasets?
[ "CAEVO" ]
[ [ "The series of TempEval competitions BIBREF21 , BIBREF22 , BIBREF23 have attracted many research interests in predicting event temporal relations. Early attempts by BIBREF24 , BIBREF21 , BIBREF25 , BIBREF26 only use pair-wise classification models. State-of-the-art local methods, such as ClearTK BIBREF27 , UTTime BIBREF28 , and NavyTime BIBREF29 improve on earlier work by feature engineering with linguistic and syntactic rules. As we mention in the Section 2, CAEVO is the current state-of-the-art system for feature-based temporal event relation extraction BIBREF10 . It's widely used as the baseline for evaluating TB-Dense data. We adopt it as our baseline for evaluating CaTeRS and RED datasets. Additionally, several models BramsenDLB2006, ChambersJ2008, DoLuRo12, NingWuRo18, P18-1212 have successfully incorporated global inference to impose global prediction consistency such as temporal transitivity." ] ]
a0fbf90ceb520626b80ff0f9160b3cd5029585a5
What model achieves state of the art performance on this task?
[ "BIBREF16" ]
[ [ "Multitask learning, on the other hand, does not appear to have any positive impact on performance. Comparing the two CNN models, the addition of multitask learning actually appears to impair performance, with MultitaskCNN doing worse than BasicCNN in all three metrics. The difference is smaller when comparing BasicDCGAN and MultitaskDCGAN, and may not be enough to decidedly conclude that the use of multitask learning has a net negative impact there, but certainly there is no indication of a net positive impact. The observed performance of both the BasicDCGAN and MultitaskDCGAN using 3-classes is comparable to the state-of-the-art, with 49.80% compared to 49.99% reported in BIBREF16 . It needs to be noted that in BIBREF16 data from the test speaker's session partner was utilized in the training of the model. Our models in contrast are trained on only four of the five sessions as discussed in SECREF5 . Further, the here presented models are trained on the raw spectrograms of the audio and no feature extraction was employed whatsoever. This representation learning approach is employed in order to allow the DCGAN component of the model to train on vast amounts of unsupervised speech data." ] ]
e8ca81d5b36952259ef3e0dbeac7b3a622eabe8e
Which multitask annotated corpus is used?
[ "IEMOCAP" ]
[ [ "Due to the semi-supervised nature of the proposed Multitask DCGAN model, we utilize both labeled and unlabeled data. For the unlabeled data, we use audio from the AMI BIBREF8 and IEMOCAP BIBREF7 datasets. For the labeled data, we use audio from the IEMOCAP dataset, which comes with labels for activation and valence, both measured on a 5-point Likert scale from three distinct annotators. Although IEMOCAP provides per-word activation and valence labels, in practice these labels do not generally change over time in a given audio file, and so for simplicity we label each audio clip with the average valence and activation. Since valence and activation are both measured on a 5-point scale, the labels are encoded in 5-element one-hot vectors. For instance, a valence of 5 is represented with the vector INLINEFORM0 . The one-hot encoding can be thought of as a probability distribution representing the likelihood of the correct label being some particular value. Thus, in cases where the annotators disagree on the valence or activation label, this can be represented by assigning probabilities to multiple positions in the label vector. For instance, a label of 4.5 conceptually means that the “correct” valence is either 4 or 5 with equal probability, so the corresponding vector would be INLINEFORM1 . These “fuzzy labels” have been shown to improve classification performance in a number of applications BIBREF14 , BIBREF15 . It should be noted here that we had generally greater success with this fuzzy label method than training the neural network model on the valence label directly, i.e. classification task vs. regression." ] ]
e75685ef5f58027be44f42f30cb3988b509b2768
What are the tasks in the multitask learning setup?
[ "set of related tasks are learned (e.g., emotional activation), primary task (e.g., emotional valence)" ]
[ [ "Within this work, we particularly target emotional valence as the primary task, as it has been shown to be the most challenging emotional dimension for acoustic analyses in a number of studies BIBREF10 , BIBREF11 . Apart from solely targeting valence classification, we further investigate the principle of multitask learning. In multitask learning, a set of related tasks are learned (e.g., emotional activation), along with a primary task (e.g., emotional valence); both tasks share parts of the network topology and are hence jointly trained, as depicted in Figure FIGREF4 . It is expected that data for the secondary task models information, which would also be discriminative in learning the primary task. In fact, this approach has been shown to improve generalizability across corpora BIBREF12 ." ] ]
1df24849e50fcf22f0855e0c0937c1288450ed5c
What are the subtle changes in voice which have been previously overshadowed?
[ "Unanswerable" ]
[ [] ]
859e0bed084f47796417656d7a68849eb9cb324f
how are rare words defined?
[ "low-frequency words" ]
[ [ "In our experiments, the short list is determined according to the word frequency. Concretely, we sort the vocabulary according to the word frequency from high to low. A frequency filter ratio INLINEFORM0 is set to filter out the low-frequency words (rare words) from the lookup table. For example, INLINEFORM1 =0.9 means the least frequent 10% words are replaced with the default UNK notation." ] ]
04e90c93d046cd89acef5a7c58952f54de689103
which public datasets were used?
[ "CMRC-2017, People's Daily (PD), Children Fairy Tales (CFT) , Children's Book Test (CBT)" ]
[ [ "To verify the effectiveness of our proposed model, we conduct multiple experiments on three Chinese Machine Reading Comprehension datasets, namely CMRC-2017 BIBREF17 , People's Daily (PD) and Children Fairy Tales (CFT) BIBREF2 . In these datasets, a story containing consecutive sentences is formed as the Document and one of the sentences is either automatically or manually selected as the Query where one token is replaced by a placeholder to indicate the answer to fill in. Table TABREF8 gives data statistics. Different from the current cloze-style datasets for English reading comprehension, such as CBT, Daily Mail and CNN BIBREF0 , the three Chinese datasets do not provide candidate answers. Thus, the model has to find the correct answer from the entire document.", "Besides, we also use the Children's Book Test (CBT) dataset BIBREF1 to test the generalization ability in multi-lingual case. We only focus on subsets where the answer is either a common noun (CN) or NE which is more challenging since the answer is likely to be rare words. We evaluate all the models in terms of accuracy, which is the standard evaluation metric for this task." ] ]
f513e27db363c28d19a29e01f758437d7477eb24
what are the baselines?
[ "AS Reader, GA Reader, CAS Reader" ]
[ [ "Table TABREF17 shows our results on CMRC-2017 dataset, which shows that our SAW Reader (mul) outperforms all other single models on the test set, with 7.57% improvements compared with Attention Sum Reader (AS Reader) baseline. Although WHU's model achieves the best besides our model on the valid set with only 0.75% below ours, their result on the test set is lower than ours by 2.27%, indicating our model has a satisfactory generalization ability." ] ]
eb5ed1dd26fd9adb587d29225c7951a476c6ec28
What are the results of the experiment?
[ "They were able to create a language model from the dataset, but did not test." ]
[ [ "The BD-4SK-ASR Dataset ::: The Language Model", "We created the language from the transcriptions. The model was created using CMUSphinx in which (fixed) discount mass is 0.5, and backoffs are computed using the ratio method. The model includes 283 unigrams, 5337 bigrams, and 6935 trigrams." ] ]
0828cfcf0e9e02834cc5f279a98e277d9138ffd9
How was the dataset collected?
[ "extracted text from Sorani Kurdish books of primary school and randomly created sentences" ]
[ [ "To develop the dataset, we extracted 200 sentences from Sorani Kurdish books of grades one to three of the primary school in the Kurdistan Region of Iraq. We randomly created 2000 sentences from the extracted sentences." ] ]
7b2de0109b68f78afa9e6190c82ca9ffaf62f9bd
What is the size of the dataset?
[ "2000 sentences" ]
[ [ "To develop the dataset, we extracted 200 sentences from Sorani Kurdish books of grades one to three of the primary school in the Kurdistan Region of Iraq. We randomly created 2000 sentences from the extracted sentences." ] ]
482ac96ff675975227b6d7058b9b87aeab6f81fe
How many different subjects does the dataset contain?
[ "Unanswerable" ]
[ [] ]
3f3c09c1fd542c1d9acf197957c66b79ea1baf6e
How many annotators participated?
[ "1" ]
[ [ "Two thousand narration files were created. We used Audacity to record the narrations. We used a normal laptop in a quiet room and minimized the background noise. However, we could not manage to avoid the noise of the fan of the laptop. A single speaker narrated the 2000 sentences, which took several days. We set the Audacity software to have a sampling rate of 16, 16-bit bit rate, and a mono (single) channel. The noise reduction db was set to 6, the sensitivity to 4.00, and the frequency smoothing to 0." ] ]
0a82534ec6e294ab952103f11f56fd99137adc1f
How long is the dataset?
[ "2000" ]
[ [ "The corpus includes 2000 sentences. Theses sentence are random renderings of 200 sentences, which we have taken from Sorani Kurdish books of the grades one to three of the primary school in the Kurdistan Region of Iraq. The reason that we have taken only 200 sentences is to have a smaller dictionary and also to increase the repetition of each word in the narrated speech. We transformed the corpus sentences, which are in Persian-Arabic script, into the format which complies with the suggested phones for the related Sorani letters (see Section SECREF6)." ] ]
938688871913862c9f8a28b42165237b7324e0de
Do the authors mention any possible confounds in their study?
[ "Yes" ]
[ [ "On Twitter we can see results that are consistent with the RCV results for the left-to-center political spectrum. The exception, which clearly stands out, is the right-wing groups ENL and EFDD that seem to be the most cohesive ones. This is the direct opposite of what was observed in the RCV data. We speculate that this phenomenon can be attributed to the fact that European right-wing groups, on a European but also on a national level, rely to a large degree on social media to spread their narratives critical of European integration. We observed the same phenomenon recently during the Brexit campaign BIBREF38 . Along our interpretation the Brexit was “won” to some extent due to these social media activities, which are practically non-existent among the pro-EU political groups. The fact that ENL and EFDD are the least cohesive groups in the European Parliament can be attributed to their political focus. It seems more important for the group to agree on its anti-EU stance and to call for independence and sovereignty, and much less important to agree on other issues put forward in the parliament." ] ]
4170ed011b02663f5b1b1a3c1f0415b7abfaa85d
What is the relationship between the co-voting and retweeting patterns?
[ "we observe a positive correlation between retweeting and co-voting, strongest positive correlations are in the areas Area of freedom, security and justice, External relations of the Union, and Internal markets, Weaker, but still positive, correlations are observed in the areas Economic, social and territorial cohesion, European citizenship, and State and evolution of the Union, significantly negative coefficient, is the area Economic and monetary system" ]
[ [ "Overall, we observe a positive correlation between retweeting and co-voting, which is significantly different from zero. The strongest positive correlations are in the areas Area of freedom, security and justice, External relations of the Union, and Internal markets. Weaker, but still positive, correlations are observed in the areas Economic, social and territorial cohesion, European citizenship, and State and evolution of the Union. The only exception, with a significantly negative coefficient, is the area Economic and monetary system. This implies that in the area Economic and monetary system we observe a significant deviation from the usual co-voting patterns. Results from section “sec:coalitionpolicy”, confirm that this is indeed the case. Especially noteworthy are the coalitions between GUE-NGL and Greens-EFA on the left wing, and EFDD and ENL on the right wing. In the section “sec:coalitionpolicy” we interpret these results as a combination of Euroscepticism on both sides, motivated on the left by a skeptical attitude towards the market orientation of the EU, and on the right by a reluctance to give up national sovereignty." ] ]
fd08dc218effecbe5137a7e3b73d9e5e37ace9c1
Does the analysis find that coalitions are formed in the same way for different policy areas?
[ "No" ]
[ [ "In the area of Economic and monetary system we see a strong cooperation between EPP and S&D (Fig FIGREF42 A), which is on a par with the cohesion of the most cohesive groups (GUE-NGL, S&D, Greens-EFA, ALDE, and EPP), and is above the cohesion of the other groups. As pointed out in the section “sec:coalitionpolicy”, there is a strong separation in two blocks between supporters and opponents of European integration, which is even more clearly observed in Fig FIGREF42 B. On one hand, we observe cooperation between S&D, ALDE, EPP, and ECR, and on the other, cooperation between GUE-NGL, Greens-EFA, EFDD, ENL, and NI. This division in blocks is seen again in Fig FIGREF42 C, which shows the strongest disagreements. Here, we observe two blocks composed of S&D, EPP, and ALDE on one hand, and GUE-NGL, EFDD, ENL, and NI on the other, which are in strong opposition to each other.", "In the area of State and Evolution of the Union we again observe a strong division in two blocks (see Fig FIGREF42 E). This is different to the Economic and monetary system, however, where we observe a far-left and far-right cooperation, where the division is along the traditional left-right axis.", "The patterns of coalitions forming on Twitter closely resemble those in the European Parliament. In Fig FIGREF42 J we see that the strongest degrees of cooperation on Twitter are within the groups. The only group with low cohesion is the NI, whose members have only seven retweets between them. The coalitions on Twitter follow the seating order in the European Parliament remarkably well (see Fig FIGREF42 K). What is striking is that the same blocks form on the left, center, and on the center-right, both in the European Parliament and on Twitter. The largest difference between the coalitions in the European Parliament and on Twitter is on the far-right, where we observe ENL and NI as isolated blocks." ] ]
a85c2510f25c7152940b5ac4333a80e0f91ade6e
What insights does the analysis give about the cohesion of political groups in the European parliament?
[ "Greens-EFA, S&D, and EPP exhibit the highest cohesion, non-aligned members NI have the lowest cohesion, followed by EFDD and ENL, two methods disagree is the level of cohesion of GUE-NGL" ]
[ [ "As with INLINEFORM0 , Greens-EFA, S&D, and EPP exhibit the highest cohesion, even though their ranking is permuted when compared to the ranking obtained with INLINEFORM1 . At the other end of the scale, we observe the same situation as with INLINEFORM2 . The non-aligned members NI have the lowest cohesion, followed by EFDD and ENL.", "The only place where the two methods disagree is the level of cohesion of GUE-NGL. The Alpha attributes GUE-NGL a rather high level of cohesion, on a par with ALDE, whereas the ERGM attributes them a much lower cohesion. The reason for this difference is the relatively high abstention rate of GUE-NGL. Whereas the overall fraction of non-attending and abstaining MEPs across all RCVs and all political groups is 25%, the GUE-NGL abstention rate is 34%. This is reflected in an above average cohesion by INLINEFORM0 where only yes/no votes are considered, and in a relatively lower, below average cohesion by ERGM. In the later case, the non-attendance is interpreted as a non-cohesive voting of a political groups as a whole." ] ]
fa572f1f3f3ce6e1f9f4c9530456329ffc2677ca
Do they authors account for differences in usage of Twitter amongst MPs into their model?
[ "No" ]
[ [ "The retweeting behavior of MEPs is captured by their retweet network. Each MEP active on Twitter is a node in this network. An edge in the network between two MEPs exists when one MEP retweeted the other. The weight of the edge is the number of retweets between the two MEPs. The resulting retweet network is an undirected, weighted network.", "We measure the cohesion of a political group INLINEFORM0 as the average retweets, i.e., the ratio of the number of retweets between the MEPs in the group INLINEFORM1 to the number of MEPs in the group INLINEFORM2 . The higher the ratio, the more each MEP (on average) retweets the MEPs from the same political group, hence, the higher the cohesion of the political group. The definition of the average retweeets ( INLINEFORM3 ) of a group INLINEFORM4 is: INLINEFORM5" ] ]
5e057e115f8976bf9fe70ab5321af72eb4b4c0fc
Did the authors examine if any of the MEPs used the disclaimer that retweeting does not imply endorsement on their twitter profile?
[ "No" ]
[ [] ]
d824f837d8bc17f399e9b8ce8b30795944df0d51
How do they show their model discovers underlying syntactic structure?
[ "By visualizing syntactic distance estimated by the parsing network" ]
[ [ "In Figure FIGREF32 , we visualize the syntactic distance estimated by the Parsing Network, while reading three different sequences from the PTB test set. We observe that the syntactic distance tends to be higher between the last character of a word and a space, which is a reasonable breakpoint to separate between words. In other words, if the model sees a space, it will attend on all previous step. If the model sees a letter, it will attend no further then the last space step. The model autonomously discovered to avoid inter-word attention connection, and use the hidden states of space (separator) tokens to summarize previous information. This is strong proof that the model can understand the latent structure of data. As a result our model achieve state-of-the-art performance and significantly outperform baseline models. It is worth noting that HM-LSTM BIBREF6 also unsupervisedly induce similar structure from data. But discrete operations in HM-LSTM make their training procedure more complicated then ours." ] ]
2ff3898fbb5954aa82dd2f60b37dd303449c81ba
Which dataset do they experiment with?
[ "Penn Treebank, Text8, WSJ10" ]
[ [ "From a character-level view, natural language is a discrete sequence of data, where discrete symbols form a distinct and shallow tree structure: the sentence is the root, words are children of the root, and characters are leafs. However, compared to word-level language modeling, character-level language modeling requires the model to handle longer-term dependencies. We evaluate a character-level variant of our proposed language model over a preprocessed version of the Penn Treebank (PTB) and Text8 datasets.", "The unsupervised constituency parsing task compares hte tree structure inferred by the model with those annotated by human experts. The experiment is performed on WSJ10 dataset. WSJ10 is the 7422 sentences in the Penn Treebank Wall Street Journal section which contained 10 words or less after the removal of punctuation and null elements. Evaluation was done by seeing whether proposed constituent spans are also in the Treebank parse, measuring unlabeled F1 ( INLINEFORM0 ) of unlabeled constituent precision and recall. Constituents which could not be gotten wrong (those of span one and those spanning entire sentences) were discarded. Given the mechanism discussed in Section SECREF14 , our model generates a binary tree. Although standard constituency parsing tree is not limited to binary tree. Previous unsupervised constituency parsing model also generate binary trees BIBREF11 , BIBREF13 . Our model is compared with the several baseline methods, that are explained in Appendix ." ] ]
3070d6d6a52aa070f0c0a7b4de8abddd3da4f056
How do they measure performance of language model tasks?
[ "BPC, Perplexity" ]
[ [ "In Table TABREF39 , our results are comparable to the state-of-the-art methods. Since we do not have the same computational resource used in BIBREF50 to tune hyper-parameters at large scale, we expect that our model could achieve better performance after an aggressive hyperparameter tuning process. As shown in Table TABREF42 , our method outperform baseline methods. It is worth noticing that the continuous cache pointer can also be applied to output of our Predict Network without modification. Visualizations of tree structure generated from learned PTB language model are included in Appendix . In Table TABREF40 , we show the value of test perplexity for different variants of PRPN, each variant remove part of the model. By removing Parsing Network, we observe a significant drop of performance. This stands as empirical evidence regarding the benefit of having structure information to control attention.", "Word-level Language Model" ] ]
ee9b95d773e060dced08705db8d79a0a6ef353da
How are content clusters used to improve the prediction of incident severity?
[ "they are used as additional features in a supervised classification task" ]
[ [ "As a further application of our work, we have carried out a supervised classification task aimed at predicting the degree of harm of an incident directly from the text and the hand-coded features (e.g., external category, medical specialty, location). A one-hot encoding is applied to turn these categorical values into numerical ones. We also checked if using our unsupervised content-driven cluster labels as additional features can improve the performance of the supervised classification." ] ]
dbdf13cb4faa1785bdee90734f6c16380459520b
What cluster identification method is used in this paper?
[ "A combination of Minimum spanning trees, K-Nearest Neighbors and Markov Stability BIBREF15, BIBREF16, BIBREF17, BIBREF18" ]
[ [ "The trained Doc2Vec model is subsequently used to infer high-dimensional vector descriptions for the text of each document in our target analysis set. We then compute a matrix containing all the pairwise (cosine) similarities between the Doc2Vec document vectors. This similarity matrix can be thought of as the adjacency matrix of a full, weighted graph with documents as nodes and edges weighted by their similarity. We sparsify this graph to the union of a minimum spanning tree and a k-Nearest Neighbors (MST-kNN) graph BIBREF14, a geometric construction that removes less important similarities but preserves global connectivity for the graph and, hence, for the dataset. The MST-kNN graph is then analysed with Markov Stability BIBREF15, BIBREF16, BIBREF17, BIBREF18, a multi-resolution graph partitioning method that identifies relevant subgraphs (i.e., clusters of documents) at different levels of granularity. MS uses a diffusive process on the graph to reveal the multiscale organisation at different resolutions without the need to choose a priori the number or type of clusters." ] ]
73e715e485942859e1db75bfb5f35f1d5eb79d2e
How can a neural model be used for a retrieval if the input is the entire Wikipedia?
[ "Using the answer labels in the training set, we can find appropriate articles that include the information requested in the question." ]
[ [ "The learning model for retrieval is trained by an oracle constructed using distant supervision. Using the answer labels in the training set, we can find appropriate articles that include the information requested in the question. For example, for x_to_movie question type, the answer movie articles are the correct articles to be retrieved. On the other hand, for questions in movie_to_x type, the movie in the question should be retrieved. Having collected the labels, we train a retrieval model for classifying a question and article pair as relevant or not relevant.", "Figure 5 gives an overview of the model, which uses a Word Level Attention (WLA) mechanism. First, the question and article are embedded into vector sequences, using the same method as the comprehension model. We do not use anonymization here, to retain simplicity. Otherwise, the anonymization procedure would have to be repeated several times for a potentially large collection of documents. These vector sequences are next fed to a Bi-GRU, to produce the outputs $v$ (for the question) and $H_c$ (for the document) similar to the previous section." ] ]
12391aab31c899bac0ecd7238c111cb73723a6b7
Which algorithm is used in the UDS-DFKI system?
[ "Our transference model extends the original transformer model to multi-encoder based transformer architecture. The transformer architecture BIBREF12 is built solely upon such attention mechanisms completely replacing recurrence and convolutions. " ]
[ [ "Our transference model extends the original transformer model to multi-encoder based transformer architecture. The transformer architecture BIBREF12 is built solely upon such attention mechanisms completely replacing recurrence and convolutions. The transformer uses positional encoding to encode the input and output sequences, and computes both self- and cross-attention through so-called multi-head attentions, which are facilitated by parallelization. We use multi-head attention to jointly attend to information at different positions from different representation subspaces." ] ]
8b43201e7e648c670c02e16ba189230820879228
Does the use of out-of-domain data improve the performance of the method?
[ "No" ]
[ [ "Our system was ranked second in the competition only 0.3 BLEU points behind the winning team UPC-TALP. The relative low BLEU and high TER scores obtained by all teams are due to out-of-domain data provided in the competition which made the task equally challenging to all participants.", "This paper presented the UDS-DFKI system submitted to the Similar Language Translation shared task at WMT 2019. We presented the results obtained by our system in translating from Czech to Polish. Our system achieved competitive performance ranking second among ten teams in the competition in terms of BLEU score. The fact that out-of-domain data was provided by the organizers resulted in a challenging but interesting scenario for all participants." ] ]
5d5a571ff04a5fdd656ca87f6525a60e917d6558
Do they impose any grammatical constraints over the generated output?
[ "No" ]
[ [ "In the TCM prescription generation task, the textual symptom descriptions can be seen as the question and the aim of the task is to produce a set of TCM herbs that form a prescription as the answer to the question. However, the set of herbs is different from the textual answers to a question in the QA task. A difference that is most evident is that there will not be any duplication of herbs in the prescription. However, the basic seq2seq model sometimes produces the same herb tokens repeatedly when applied to the TCM prescription generation task. This phenomenon can hurt the performance of recall rate even after applying a post-process to eliminate repetitions. Because in a limited length of the prescription , the model would produce the same token over and over again, rather than real and novel ones. Furthermore, the basic seq2seq assumes a strict order between generated tokens, but in reality, we should not severely punish the model when it predicts the correct tokens in the wrong order. In this paper, we explore to automatically generate TCM prescriptions based on textual symptoms. We propose a soft seq2seq model with coverage mechanism and a novel soft loss function. The coverage mechanism is designed to make the model aware of the herbs that have already been generated while the soft loss function is to relieve the side effect of strict order assumption. In the experiment results, our proposed model beats all the baselines in professional evaluations, and we observe a large increase in both the recall rate and the F1 score compared with the basic seq2seq model." ] ]
3c362bfa11c60bad6c7ea83f8753d427cda77de0
Why did they think this was a good idea?
[ "They think it will help human TCM practitioners make prescriptions." ]
[ [ "During the long history of TCM, there has been a number of therapy records or treatment guidelines in the TCM classics composed by outstanding TCM researchers and practitioners. In real life, TCM practitioners often take these classical records for reference when prescribing for the patient, which inspires us to design a model that can automatically generate prescriptions by learning from these classics. It also needs to be noted that due to the issues in actual practice, the objective of this work is to generate candidate prescriptions to facilitate the prescribing procedure instead of substituting the human practitioners completely. An example of TCM prescription is shown in Table 1 . The herbs in the prescription are organized in a weak order. By “weak order”, we mean that the effect of the herbs are not influenced by the order. However, the order of the herbs reflects the way of thinking when constructing the prescription. Therefore, the herbs are connected to each other, and the most important ones are usually listed first." ] ]
e78a47aec37d9a3bec5a18706b0a462c148c118b
How many languages are included in the tweets?
[ "Unanswerable" ]
[ [] ]
351510da69ab6879df5ff5c7c5f49a8a7aea4632
What languages are explored?
[ "Unanswerable" ]
[ [] ]
d43e868cae91b3dc393c05c55da0754b0fb3a46a
Which countries did they look at?
[ "Unanswerable" ]
[ [] ]
fd8b6723ad5f52770bec9009e45f860f4a8c4321
What QA models were used?
[ "A pointer network decodes the answer from a bidirectional LSTM with attention flow layer and self-matching layer, whose inputs come from word and character embeddings of the query and input text fed through a highway layer." ]
[ [ "The input of our model are the words in the input text $x[1], ... , x[n]$ and query $q[1], ... , q[n]$ . We concatenate pre-trained word embeddings from GloVe BIBREF40 and character embeddings trained by CharCNN BIBREF41 to represent input words. The $2d$ -dimension embedding vectors of input text $x_1, ... , x_n$ and query $q_1, ... , q_n$ are then fed into a Highway Layer BIBREF42 to improve the capability of word embeddings and character embeddings as", "$$\\begin{split} g_t &= {\\rm sigmoid}(W_gx_t+b_g) \\\\ s_t &= {\\rm relu } (W_xx_t+b_x) \\\\ u_t &= g_t \\odot s_t + (1 - g_t) \\odot x_t~. \\end{split}$$ (Eq. 18)", "Here $W_g, W_x \\in \\mathbb {R}^{d \\times 2d}$ and $b_g, b_x \\in \\mathbb {R}^d$ are trainable weights, $u_t$ is a $d$ -dimension vector. The function relu is the rectified linear units BIBREF43 and $\\odot $ is element-wise multiply over two vectors. The same Highway Layer is applied to $q_t$ and produces $v_t$ .", "Next, $u_t$ and $v_t$ are fed into a Bi-Directional Long Short-Term Memory Network (BiLSTM) BIBREF44 respectively in order to model the temporal interactions between sequence words:", "Here we obtain $\\mathbf {U} = [u_1^{^{\\prime }}, ... , u_n^{^{\\prime }}] \\in \\mathbb {R}^{2d \\times n}$ and $\\mathbf {V} = [v_1^{^{\\prime }}, ... , v_m^{^{\\prime }}] \\in \\mathbb {R}^{2d \\times m}$ . Then we feed $\\mathbf {U}$ and $\\mathbf {V}$ into the attention flow layer BIBREF27 to model the interactions between the input text and query. We obtain the $8d$ -dimension query-aware context embedding vectors $h_1, ... , h_n$ as the result.", "After modeling interactions between the input text and queries, we need to enhance the interactions within the input text words themselves especially for the longer text in IE settings. Therefore, we introduce Self-Matching Layer BIBREF29 in our model as", "$$\\begin{split} o_t &= {\\rm BiLSTM}(o_{t-1}, [h_t, c_t]) \\\\ s_j^t &= w^T {\\rm tanh}(W_hh_j+\\tilde{W_h}h_t)\\\\ \\alpha _i^t &= {\\rm exp}(s_i^t)/\\Sigma _{j=1}^n{\\rm exp}(s_j^t)\\\\ c_t &= \\Sigma _{i=1}^n\\alpha _i^th_i ~. \\end{split}$$ (Eq. 20)", "Here $W_h, \\tilde{W_h} \\in \\mathbb {R}^{d \\times 8d}$ and $w \\in \\mathbb {R}^d$ are trainable weights, $[h, c]$ is vector concatenation across row. Besides, $\\alpha _i^t$ is the attention weight from the $t^{th}$ word to the $i^{th}$ word and $c_t$ is the enhanced contextual embeddings over the $t^{th}$ word in the input text. We obtain the $2d$ -dimension query-aware and self-enhanced embeddings of input text after this step. Finally we feed the embeddings $\\mathbf {O} = [o_1, ... , o_n]$ into a Pointer Network BIBREF39 to decode the answer sequence as", "$$\\begin{split} p_t &= {\\rm LSTM}(p_{t-1}, c_t) \\\\ s_j^t &= w^T {\\rm tanh}(W_oo_j+W_pp_{t-1})\\\\ \\beta _i^t &= {\\rm exp}(s_i^t)/\\Sigma _{j=1}^n{\\rm exp}(s_j^t)\\\\ c_t &= \\Sigma _{i=1}^n\\beta _i^to_i~. \\end{split}$$ (Eq. 21)", "Here $\\beta _{n+1}^t$ denotes the probability of generating the “ ${\\rm eos}$ ” symbol since the decoder also needs to determine when to stop. Therefore, the probability of generating the answer sequence $\\textbf {a}$ is as follows", "$${\\rm P}(\\textbf {a}|\\mathbf {O}) = \\prod _t {\\rm P}(a^t | a^1, ... , a^{t-1}, \\mathbf {O})~.$$ (Eq. 23)" ] ]
4ce3a6632e7d86d29a42bd1fcf325114b3c11d46
Can this approach model n-ary relations?
[ "No" ]
[ [ "For the future work, we plan to solve the triples with multiple entities as the second entity, which is excluded from problem scope in this paper. Besides, processing longer documents and improving the quality of our benchmark are all challenging problems as we mentioned previously. We hope this work can provide new thoughts for the area of information extraction.", "The input of QA4IE is a document $D$ with an existing knowledge base $K$ and the output is a set of relation triples $R = \\lbrace e_i, r_{ij}, e_j\\rbrace $ in $D$ where $e_i$ and $e_j$ are two individual entities and $r_{ij}$ is their relation. We ignore the adverbials and only consider the entity pairs and their relations as in standard RE settings. Note that we process the entire document as a whole instead of processing individual sentences separately as in previous systems. As shown in Figure 1 , our QA4IE framework consists of four key steps:" ] ]
e7c0cdc05b48889905cc03215d1993ab94fb6eaa
Was this benchmark automatically created from an existing dataset?
[ "No" ]
[ [ "As mentioned above, step 1, 2 and 4 in the QA4IE framework can be solved by existing work. Therefore, in this paper, we mainly focus on step 3. According to the recent progress in QA and MRC, deep neural networks are very good at solving this kind of problem with a large-scale dataset to train the network. However, all previous IE benchmarks BIBREF18 are too small to train neural network models typically used in QA, and thus we need to build a large benchmark. Inspired by WikiReading BIBREF33 , a recent large-scale QA benchmark over Wikipedia, we find that the articles in Wikipedia together with the high quality triples in knowledge bases such as Wikidata BIBREF34 and DBpedia can form the supervision we need. Therefore, we build a large scale benchmark named QA4IE benchmark which consists of 293K Wikipedia articles and 2M golden relation triples with 636 different relation types.", "Incorporating DBpedia. Unlike WikiData, DBpedia is constructed automatically without human verification. Relations and properties in DBpedia are coarse and noisy. Thus we fix the existing 636 relation types in WikiData and build a projection from DBpedia relations to these 636 relation types. We manually find 148 relations which can be projected to a WikiData relation out of 2064 DBpedia relations. Then we gather all the DBpedia triples with the first entity is corresponding to one of the above 3.5M articles and the relation is one of the projected 148 relations. After the same clipping process as above and removing the repetitive triples, we obtain 394K additional triples in 302K existing Wikipedia articles." ] ]
99760276cfd699e55b827ceeb653b31b043b9ceb
How does morphological analysis differ from morphological inflection?
[ "Morphological analysis is the task of creating a morphosyntactic description for a given word, inflectional realization is framed as a mapping from the pairing of a lemma with a set of morphological tags to the corresponding word form" ]
[ [ "Inflectional realization defines the inflected forms of a lexeme/lemma. As a computational task, often referred to as simply “morphological inflection,\" inflectional realization is framed as a mapping from the pairing of a lemma with a set of morphological tags to the corresponding word form. For example, the inflectional realization of SJQ Chatino verb forms entails a mapping of the pairing of the lemma lyu1 `fall' with the tag-set 1;SG;PROG to the word form nlyon32.", "Morphological analysis is the task of creating a morphosyntactic description for a given word. It can be framed in a context-agnostic manner (as in our case) or within a given context, as for instance for the SIGMORPHON 2019 second shared task BIBREF11. We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags, implemented in DyNet. The baseline results are shown in Table . The exact-match accuracy of 67% is lower than the average accuracy that context-aware systems can achieve, and it highlights the challenge that the complexity of the tonal system of SJQ Chatino can pose." ] ]
247e1fe052230458ce11b98e3637acf0b86795cd
What was the criterion used for selecting the lemmata?
[ "Unanswerable" ]
[ [] ]
79cfd1b82c72d18e2279792c66a042c0e9dfa6b7
What are the architectures used for the three tasks?
[ "DyNet" ]
[ [ "Morphological inflection has been thoroughly studied in monolingual high resource settings, especially through the recent SIGMORPHON challenges BIBREF8, BIBREF9, BIBREF10, with the latest iteration focusing more on low-resource settings, utilizing cross-lingual transfer BIBREF11. We use the guidelines of the state-of-the-art approach of BIBREF12 that achieved the highest inflection accuracy in the latest SIGMORPHON 2019 morphological inflection shared task. Our models are implemented in DyNet BIBREF13.", "Morphological analysis is the task of creating a morphosyntactic description for a given word. It can be framed in a context-agnostic manner (as in our case) or within a given context, as for instance for the SIGMORPHON 2019 second shared task BIBREF11. We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags, implemented in DyNet. The baseline results are shown in Table . The exact-match accuracy of 67% is lower than the average accuracy that context-aware systems can achieve, and it highlights the challenge that the complexity of the tonal system of SJQ Chatino can pose.", "Lemmatization is the task of retrieving the underlying lemma from which an inflected form was derived. Although in some languages the lemma is distinct from all forms, in SJQ Chatino the lemma is defined as the completive third-person singular form. As a computational task, lemmatization entails producing the lemma given an inflected form (and possibly, given a set of morphological tags describing the input form). Popular approaches tackle it as a character-level edit sequence generation task BIBREF15, or as a character-level sequence-to-sequence task BIBREF16. For our baseline lemmatization systems we follow the latter approach. We trained a character level encoder-decoder model, similar to the above-mentioned inflection system, implemented in DyNet." ] ]
9e1bf306658ef2972159643fdaf149c569db524b
Which language family does Chatino belong to?
[ "the Otomanguean language family" ]
[ [ "Chatino is a group of languages spoken in Oaxaca, Mexico. Together with the Zapotec language group, the Chatino languages form the Zapotecan branch of the Otomanguean language family. There are three main Chatino languages: Zenzontepec Chatino (ZEN, ISO 639-2 code czn), Tataltepec Chatino (TAT, cta), and Eastern Chatino (ISO 639-2 ctp, cya, ctz, and cly) (E.Cruz 2011 and Campbell 2011). San Juan Quiahije Chatino (SJQ), the language of the focus of this study, belongs to Eastern Chatino, and is used by about 3000 speakers." ] ]
25b24ab1248f14a621686a57555189acc1afd49c
What system is used as baseline?
[ "DyNet" ]
[ [ "Morphological inflection has been thoroughly studied in monolingual high resource settings, especially through the recent SIGMORPHON challenges BIBREF8, BIBREF9, BIBREF10, with the latest iteration focusing more on low-resource settings, utilizing cross-lingual transfer BIBREF11. We use the guidelines of the state-of-the-art approach of BIBREF12 that achieved the highest inflection accuracy in the latest SIGMORPHON 2019 morphological inflection shared task. Our models are implemented in DyNet BIBREF13.", "Morphological analysis is the task of creating a morphosyntactic description for a given word. It can be framed in a context-agnostic manner (as in our case) or within a given context, as for instance for the SIGMORPHON 2019 second shared task BIBREF11. We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags, implemented in DyNet. The baseline results are shown in Table . The exact-match accuracy of 67% is lower than the average accuracy that context-aware systems can achieve, and it highlights the challenge that the complexity of the tonal system of SJQ Chatino can pose.", "Lemmatization is the task of retrieving the underlying lemma from which an inflected form was derived. Although in some languages the lemma is distinct from all forms, in SJQ Chatino the lemma is defined as the completive third-person singular form. As a computational task, lemmatization entails producing the lemma given an inflected form (and possibly, given a set of morphological tags describing the input form). Popular approaches tackle it as a character-level edit sequence generation task BIBREF15, or as a character-level sequence-to-sequence task BIBREF16. For our baseline lemmatization systems we follow the latter approach. We trained a character level encoder-decoder model, similar to the above-mentioned inflection system, implemented in DyNet." ] ]
8486e06c03f82ebd48c7cfbaffaa76e8b899eea5
How was annotation done?
[ " hand-curated collection of complete inflection tables for 198 lemmata" ]
[ [ "We provide a hand-curated collection of complete inflection tables for 198 lemmata. The morphological tags follow the guidelines of the UniMorph schema BIBREF6, BIBREF7, in order to allow for the potential of cross-lingual transfer learning, and they are tagged with respect to:", "Person: first (1), second (2), and third (3)", "Number: singular (SG) ad plural (PL)", "Inclusivity (only applicable to first person plural verbs: inclusive (INCL) and exclusive (EXCL)", "Aspect/mood: completive (CPL), progressive (PROG), potential (POT), and habitual (HAB)." ] ]
27f575e90487ef68298cfb6452683bb977e39e43
How was the data collected?
[ "Unanswerable" ]
[ [] ]
157b9f6f8fb5d370fa23df31de24ae7efb75d6f3
How do their results compare against other competitors in the PAN 2017 shared task on Author Profiling?
[ "They achieved best result in the PAN 2017 shared task with accuracy for Variety prediction task 0.0013 more than the 2nd best baseline, accuracy for Gender prediction task 0.0029 more than 2nd best baseline and accuracy for Joint prediction task 0.0101 more than the 2nd best baseline" ]
[ [ "For the final evaluation we submitted our system, N-GrAM, as described in Section 2. Overall, N-GrAM came first in the shared task, with a score of 0.8253 for gender 0.9184 for variety, a joint score of 0.8361 and an average score of 0.8599 (final rankings were taken from this average score BIBREF0 ). For the global scores, all languages are combined. We present finer-grained scores showing the breakdown per language in Table TABREF24 . We compare our gender and variety accuracies against the LDR-baseline BIBREF10 , a low dimensionality representation especially tailored to language variety identification, provided by the organisers. The final column, + 2nd shows the difference between N-GrAM and that achieved by the second-highest ranked system (excluding the baseline)." ] ]
9bcc1df7ad103c7a21d69761c452ad3cd2951bda
On which task does do model do worst?
[ "Gender prediction task" ]
[ [ "N-GrAM ranked first in all cases except for the language variety task. In this case, the baseline was the top-ranked system, and ours was second by a small margin. Our system significantly out-performed the baseline on the joint task, as the baseline scored significantly lower for the gender task than for the variety task." ] ]
8427988488b5ecdbe4b57b3813b3f981b07f53a5
On which task does do model do best?
[ "Variety prediction task" ]
[ [ "N-GrAM ranked first in all cases except for the language variety task. In this case, the baseline was the top-ranked system, and ours was second by a small margin. Our system significantly out-performed the baseline on the joint task, as the baseline scored significantly lower for the gender task than for the variety task." ] ]
3604c4fba0a82d7139efd5ced47612c90bd10601
Is their implementation on CNN-DSA compared to GPU implementation in terms of power consumption, accuracy and speed?
[ "No" ]
[ [] ]
38e2f07ba965b676a99be06e8872dade7c04722a
Does this implementation on CNN-DSA lead to diminishing of performance?
[ "Unanswerable" ]
[ [] ]
931a2a13a1f6a8d9107d26811089bdccc39b0800
How is Super Character method modified to handle tabular data also?
[ "simply split the image into two parts. One for the text input, and the other for the tabular data" ]
[ [ "For multi-modal sentiment analysis, we can simply split the image into two parts. One for the text input, and the other for the tabular data. Such that both can be embedded into the Super Characters image. The CNN accelerator chip comes together with a Model Development Kit (MDK) for CNN model training, which feeds the two-dimensional Super Characters images into MDK and then obtain the fixed point model. Then, using the Software Development Kit (SDK) to load the model into the chip and send command to the CNN accelerator chip, such as to read an image, or to forward pass the image through the network to get the inference result. The advantage of using the CNN accelerator is low-power, it consumes only 300mw for an input of size 3x224x224 RGB image at the speed of 140fps. Compared with other models using GPU or FPGA, this solution implement the heavy-lifting DNN computations in the CNN accelerator chip, and the host computer is only responsible for memory read/write to generate the designed Super Character image. This has shown good result on system implementations for NLP applications BIBREF9." ] ]
8c981f8b992cb583e598f71741c322f522c6d2ad
How are the substitution rules built?
[ "from the Database of Modern Icelandic Inflection (DMII) BIBREF1" ]
[ [ "In this paper, we describe and evaluate Nefnir BIBREF0 , a new open source lemmatizer for Icelandic. Nefnir uses suffix substitution rules derived (learned) from the Database of Modern Icelandic Inflection (DMII) BIBREF1 , which contains over 5.8 million inflectional forms." ] ]
16f33de90b76975a99572e0684632d5aedbd957c
Which dataset do they use?
[ "a reference corpus of 21,093 tokens and their correct lemmas" ]
[ [ "We have evaluated the output of Nefnir against a reference corpus of 21,093 tokens and their correct lemmas.", "Samples for the reference corpus were extracted from two larger corpora, in order to obtain a diverse vocabulary:" ] ]
d0b005cb7ed6d4c307745096b2ed8762612480d2
What baseline is used to compare the experimental results against?
[ "Transformer generation model" ]
[ [ "Each of the methods we explore improve in % gendered words, % male bias, and F1 over the baseline Transformer generation model, but we find combining all methods in one – the ALL model is the most advantageous. While ALL has more data than CDA and CT, more data alone is not enough — the Positive-Bias Data Collection model does not achieve as good results. Both the CT and ALL models benefit from knowing the data split ($\\text{F}^{0}\\text{M}^{0}$, for example), and both models yield a genderedness ratio closest to ground truth." ] ]
9d9b11f86a96c6d3dd862453bf240d6e018e75af
How does counterfactual data augmentation aim to tackle bias?
[ "The training dataset is augmented by swapping all gendered words by their other gender counterparts" ]
[ [ "One of the solutions that has been proposed for mitigating gender bias on the word embedding level is Counterfactual Data Augmentation (CDA) BIBREF25. We apply this method by augmenting our dataset with a copy of every dialogue with gendered words swapped using the gendered word pair list provided by BIBREF21. For example, all instances of grandmother are swapped with grandfather." ] ]
415f35adb0ef746883fb9c33aa53b79cc4e723c3
In the targeted data collection approach, what type of data is targetted?
[ "Gendered characters in the dataset" ]
[ [ "There are a larger number of male-gendered character personas than female-gendered character personas (see Section SECREF2), so we balance existing personas using gender-swapping. For every gendered character in the dataset, we ask annotators to create a new character with a persona of the opposite gender that is otherwise identical except for referring nouns or pronouns. Additionally, we ask annotators to swap the gender of any characters that are referred to in the persona text for a given character." ] ]
52f1a91f546b8a25a5d72325c503ec8f9c72de23
Which language models do they compare against?
[ "RNNLM BIBREF11" ]
[ [ "We compare against the following document representation baselines: bag-of-words (BoW); Denoising Autoencoders (DEA) BIBREF14 , a representation learned from reconstructing original document INLINEFORM0 using corrupted one INLINEFORM1 . SDAs have been shown to be the state-of-the-art for sentiment analysis tasks BIBREF29 . We used Kullback-Liebler divergence as the reconstruction error and an affine encoder. To scale up the algorithm to large vocabulary, we only take into account the non-zero elements of INLINEFORM2 in the reconstruction error and employed negative sampling for the remainings; Word2Vec BIBREF1 +IDF, a representation generated through weighted average of word vectors learned using Word2Vec; Doc2Vec BIBREF2 ; Skip-thought Vectors BIBREF16 , a generic, distributed sentence encoder that extends the Word2Vec skip-gram model to sentence level. It has been shown to produce highly generic sentence representations that apply to various natural language processing tasks. We also include RNNLM BIBREF11 , a recurrent neural network based language model in the comparison. In the semantic relatedness task, we further compare to LSTM-based methods BIBREF18 that have been reported on this dataset." ] ]
bb5697cf352dd608edf119ca9b82a6b7e51c8d21
Is their approach similar to making an averaged weighted sum of word vectors, where weights reflect word frequencies?
[ "Different from their work, we choose to corrupt the original document by randomly removing significant portion of words, and represent the document using only the embeddings of the words remained." ]
[ [ "Similar to Word2Vec or Paragraph Vectors, Doc2VecC consists of an input layer, a projection layer as well as an output layer to predict the target word, “ceremony” in this example. The embeddings of neighboring words (“opening”, “for”, “the”) provide local context while the vector representation of the entire document (shown in grey) serves as the global context. In contrast to Paragraph Vectors, which directly learns a unique vector for each document, Doc2VecC represents each document as an average of the embeddings of words randomly sampled from the document (“performance” at position INLINEFORM0 , “praised” at position INLINEFORM1 , and “brazil” at position INLINEFORM2 ). BIBREF25 also proposed the idea of using average of word embeddings to represent the global context of a document. Different from their work, we choose to corrupt the original document by randomly removing significant portion of words, and represent the document using only the embeddings of the words remained. This corruption mechanism offers us great speedup during training as it significantly reduces the number of parameters to update in back propagation. At the same time, as we are going to detail in the next section, it introduces a special form of regularization, which brings great performance improvement." ] ]
98785bf06e60fcf0a6fe8921edab6190d0c2cec1
How do they determine which words are informative?
[ "Informative are those that will not be suppressed by regularization performed." ]
[ [ "Data dependent regularization. As explained in Section SECREF15 , the corruption introduced in Doc2VecC acts as a data-dependent regularization that suppresses the embeddings of frequent but uninformative words. Here we conduct an experiment to exam the effect. We used a cutoff of 100 in this experiment. Table TABREF24 lists the words having the smallest INLINEFORM0 norm of embeddings found by different algorithms. The number inside the parenthesis after each word is the number of times this word appears in the learning set. In word2Vec or Paragraph Vectors, the least frequent words have embeddings that are close to zero, despite some of them being indicative of sentiment such as debacle, bliss and shabby. In contrast, Doc2VecC manages to clamp down the representation of words frequently appear in the training set, but are uninformative, such as symbols and stop words." ] ]
9846f84747b89f5c692665c4ea7111671ad9839a
What is their best performance on the largest language direction dataset?
[ "Unanswerable" ]
[ [] ]
eecf62e18a790bcfdd8a56f0c4f498927ff2fb47
How does soft contextual data augmentation work?
[ "softly augments a randomly chosen word in a sentence by its contextual mixture of multiple related words, replacing the one-hot representation of a word by a distribution provided by a language model over the vocabulary" ]
[ [ "While data augmentation is an important trick to boost the accuracy of deep learning methods in computer vision tasks, its study in natural language tasks is relatively limited. SCA BIBREF5 softly augments a randomly chosen word in a sentence by its contextual mixture of multiple related words, i.e., replacing the one-hot representation of a word by a distribution provided by a language model over the vocabulary. It was applied in Russian$\\rightarrow $English translation in our submitted systems." ] ]