question_id: string (length 40)
question: string (length 4-171)
answer: sequence
evidence: sequence
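The fields above describe one question-answering record per entry: a 40-character question id, the question text, an answer list, and a nested evidence list of supporting paragraphs. As a minimal sketch only, assuming the data is hosted as a Hugging Face dataset (the id "org/qa-with-evidence" below is a placeholder, not the real repository name), records with this layout could be loaded and inspected as follows:

from datasets import load_dataset

# Hypothetical dataset id -- the actual repository name is not given in this preview.
ds = load_dataset("org/qa-with-evidence", split="train")

for record in ds.select(range(3)):
    # question_id: 40-character hash identifying the question
    # question:    free-form question text (4-171 characters)
    # answer:      sequence of answer strings (often a single element)
    # evidence:    sequence of lists of supporting paragraphs (may be empty)
    print(record["question_id"], record["question"])
    answers = record["answer"]
    print("  answer:", answers[0] if answers else "<none>")
    paragraphs = [p for group in record["evidence"] for p in group]
    print("  evidence paragraphs:", len(paragraphs))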
73d657d6faed0c11c65b1ab60e553db57f4971ca
Do they compare their neural network against any other model?
[ "No" ]
[ [] ]
9ef182b61461d0d8b6feb1d6174796ccde290a15
Do they annotate their own dataset or use an existing one?
[ "Use an existing one" ]
[ [ "For our experiments, we select a dataset whose utterances have been correctly synchronised at recording time. This allows us to control how the model is trained and verify its performance using ground truth synchronisation offsets. We use UltraSuite: a repository of ultrasound and acoustic data gathered from child speech therapy sessions BIBREF15 . We used all three datasets from the repository: UXTD (recorded with typically developing children), and UXSSD and UPX (recorded with children with speech sound disorders). In total, the dataset contains 13,815 spoken utterances from 86 speakers, corresponding to 35.9 hours of recordings. The utterances have been categorised by the type of task the child was given, and are labelled as: Words (A), Non-words (B), Sentence (C), Articulatory (D), Non-speech (E), or Conversations (F). See BIBREF15 for details." ] ]
f6f8054f327a2c084a73faca16cf24a180c094ae
Does their neural network predict a single offset in a recording?
[ "Yes" ]
[ [ "Given a pair of ultrasound and audio segments we can calculate the distance between them using our model. To predict the synchronisation offset for an utterance, we consider a discretised set of candidate offsets, calculate the average distance for each across utterance segments, and select the one with the minimum average distance. The candidate set is independent of the model, and is chosen based on task knowledge (Section SECREF5 )." ] ]
b8f711179a468fec9a0d8a961fb0f51894af4b31
What kind of neural network architecture do they use?
[ "CNN" ]
[ [ "We adopt the approach in BIBREF4 , modifying it to synchronise audio with UTI data. Our model, UltraSync, consists of two streams: the first takes as input a short segment of ultrasound and the second takes as input the corresponding audio. Both inputs are high-dimensional and are of different sizes. The objective is to learn a mapping from the inputs to a pair of low-dimensional vectors of the same length, such that the Euclidean distance between the two vectors is small when they correlate and large otherwise BIBREF21 , BIBREF22 . This model can be viewed as an extension of a siamese neural network BIBREF23 but with two asymmetrical streams and no shared parameters. Figure FIGREF1 illustrates the main architecture. The visual data INLINEFORM0 (ultrasound) and audio data INLINEFORM1 (MFCC), which have different shapes, are mapped to low dimensional embeddings INLINEFORM2 (visual) and INLINEFORM3 (audio) of the same size: DISPLAYFORM0" ] ]
3bf429633ecbbfec3d7ffbcfa61fa90440cc918b
How are aspects identified in aspect extraction?
[ "apply an ensemble of deep learning and linguistics t" ]
[ [ "Most of the previous works in aspect term extraction have either used conditional random fields (CRFs) BIBREF9 , BIBREF10 or linguistic patterns BIBREF7 , BIBREF11 . Both of these approaches have their own limitations: CRF is a linear model, so it needs a large number of features to work well; linguistic patterns need to be crafted by hand, and they crucially depend on the grammatical accuracy of the sentences. In this chapter, we apply an ensemble of deep learning and linguistics to tackle both the problem of aspect extraction and subjectivity detection." ] ]
94e0cf44345800ef46a8c7d52902f074a1139e1a
What web and user-generated NER datasets are used for the analysis?
[ "MUC, CoNLL, ACE, OntoNotes, MSM, Ritter, UMBC" ]
[ [ "Since the goal of this study is to compare NER performance on corpora from diverse domains and genres, seven benchmark NER corpora are included, spanning newswire, broadcast conversation, Web content, and social media (see Table 1 for details). These datasets were chosen such that they have been annotated with the same or very similar entity classes, in particular, names of people, locations, and organisations. Thus corpora including only domain-specific entities (e.g. biomedical corpora) were excluded. The choice of corpora was also motivated by their chronological age; we wanted to ensure a good temporal spread, in order to study possible effects of entity drift over time." ] ]
ad67ca844c63bf8ac9fdd0fa5f58c5a438f16211
Which unlabeled data do they pretrain with?
[ "1000 hours of WSJ audio data" ]
[ [ "We consider pre-training on the audio data (without labels) of WSJ, part of clean Librispeech (about 80h) and full Librispeech as well as a combination of all datasets (§ SECREF7 ). For the pre-training experiments we feed the output of the context network to the acoustic model, instead of log-mel filterbank features.", "Our experimental results on the WSJ benchmark demonstrate that pre-trained representations estimated on about 1,000 hours of unlabeled speech can substantially improve a character-based ASR system and outperform the best character-based result in the literature, Deep Speech 2. On the TIMIT task, pre-training enables us to match the best reported result in the literature. In a simulated low-resource setup with only eight hours of transcriped audio data, reduces WER by up to 32% compared to a baseline model that relies on labeled data only (§ SECREF3 & § SECREF4 )." ] ]
12eaaf3b6ebc51846448c6e1ad210dbef7d25a96
How many convolutional layers does their model have?
[ "wav2vec has 12 convolutional layers" ]
[ [ "Given raw audio samples INLINEFORM0 , we apply the encoder network INLINEFORM1 which we parameterize as a five-layer convolutional network similar to BIBREF15 . Alternatively, one could use other architectures such as the trainable frontend of BIBREF24 amongst others. The encoder layers have kernel sizes INLINEFORM2 and strides INLINEFORM3 . The output of the encoder is a low frequency feature representation INLINEFORM4 which encodes about 30ms of 16KHz of audio and the striding results in representation INLINEFORM5 every 10ms.", "Next, we apply the context network INLINEFORM0 to the output of the encoder network to mix multiple latent representations INLINEFORM1 into a single contextualized tensor INLINEFORM2 for a receptive field size INLINEFORM3 . The context network has seven layers and each layer has kernel size three and stride one. The total receptive field of the context network is about 180ms." ] ]
828615a874512844ede9d7f7d92bdc48bb48b18d
Do they explore how much traning data is needed for which magnitude of improvement for WER?
[ "Yes" ]
[ [ "What is the impact of pre-trained representations with less transcribed data? In order to get a better understanding of this, we train acoustic models with different amounts of labeled training data and measure accuracy with and without pre-trained representations (log-mel filterbanks). The pre-trained representations are trained on the full Librispeech corpus and we measure accuracy in terms of WER when decoding with a 4-gram language model. Figure shows that pre-training reduces WER by 32% on nov93dev when only about eight hours of transcribed data is available. Pre-training only on the audio data of WSJ ( WSJ) performs worse compared to the much larger Librispeech ( Libri). This further confirms that pre-training on more data is crucial to good performance." ] ]
a43c400ae37a8705ff2effb4828f4b0b177a74c4
How are character representations from various languages joint?
[ "shared character embeddings for taggers in both languages together through optimization of a joint loss function" ]
[ [ "Our formulation of transfer learning builds on work in multi-task learning BIBREF15 , BIBREF9 . We treat each individual language as a task and train a joint model for all the tasks. We first discuss the current state of the art in morphological tagging: a character-level recurrent neural network. After that, we explore three augmentations to the architecture that allow for the transfer learning scenario. All of our proposals force the embedding of the characters for both the source and the target language to share the same vector space, but involve different mechanisms, by which the model may learn language-specific features.", "Cross-lingual morphological tagging may be formulated as a multi-task learning problem. We seek to learn a set of shared character embeddings for taggers in both languages together through optimization of a joint loss function that combines the high-resource tagger and the low-resource one. The first loss function we consider is the following:" ] ]
4056ee2fd7a0a0f444275e627bb881134a1c2a10
On which dataset is the experiment conducted?
[ "We use the morphological tagging datasets provided by the Universal Dependencies (UD) treebanks (the concatenation of the $4^\\text{th}$ and $6^\\text{th}$ columns of the file format) BIBREF13 . " ]
[ [ "We use the morphological tagging datasets provided by the Universal Dependencies (UD) treebanks (the concatenation of the $4^\\text{th}$ and $6^\\text{th}$ columns of the file format) BIBREF13 . We list the size of the training, development and test splits of the UD treebanks we used in tab:lang-size. Also, we list the number of unique morphological tags in each language in tab:num-tags, which serves as an approximate measure of the morphological complexity each language exhibits. Crucially, the data are annotated in a cross-linguistically consistent manner, such that words in the different languages that have the same syntacto-semantic function have the same bundle of tags (see sec:morpho-tagging for a discussion). Potentially, further gains would be possible by using a more universal scheme, e.g., the UniMorph scheme." ] ]
f6496b8d09911cdf3a9b72aec0b0be6232a6dba1
Do they train their own RE model?
[ "Yes" ]
[ [ "First we compare our neural network English RE models with the state-of-the-art RE models on the ACE05 English data. The ACE05 English data can be divided to 6 different domains: broadcast conversation (bc), broadcast news (bn), telephone conversation (cts), newswire (nw), usenet (un) and webblogs (wl). We apply the same data split in BIBREF31, BIBREF30, BIBREF6, which uses news (the union of bn and nw) as the training set, a half of bc as the development set and the remaining data as the test set.", "We learn the model parameters using Adam BIBREF32. We apply dropout BIBREF33 to the hidden layers to reduce overfitting. The development set is used for tuning the model hyperparameters and for early stopping.", "We apply model ensemble to further improve the accuracy of the Bi-LSTM model. We train 5 Bi-LSTM English RE models initiated with different random seeds, apply the 5 models on the target languages, and combine the outputs by selecting the relation type labels with the highest probabilities among the 5 models. This Ensemble approach improves the single model by 0.6-1.9 $F_1$ points, except for Arabic." ] ]
5c90e1ed208911dbcae7e760a553e912f8c237a5
How big are the datasets?
[ "In-house dataset consists of 3716 documents \nACE05 dataset consists of 1635 documents" ]
[ [ "Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).", "The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical)." ] ]
3c3b4797e2b21e2c31cf117ad9e52f327721790f
What languages do they experiment on?
[ "English, German, Spanish, Italian, Japanese and Portuguese, English, Arabic and Chinese" ]
[ [ "Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).", "The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical)." ] ]
a7d72f308444616a0befc8db7ad388b3216e2143
What datasets are used?
[ "in-house dataset, ACE05 dataset " ]
[ [ "Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).", "The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical).", "In this section, we evaluate the performance of the proposed cross-lingual RE approach on both in-house dataset and the ACE (Automatic Content Extraction) 2005 multilingual dataset BIBREF11." ] ]
dfb0351e8fa62ceb51ce77b0f607885523d1b8e8
How better does auto-completion perform when using both language and vision than only language?
[ "Unanswerable" ]
[ [] ]
a130aa735de3b65c71f27018f20d3c068bafb826
How big is data provided by this research?
[ "16k images and 740k corresponding region descriptions" ]
[ [ "For training, we aggregated (query, image) pairs using the region descriptions from the VG dataset and referring expressions from the ReferIt dataset. Our VG training set consists of 85% of the data: 16k images and 740k corresponding region descriptions. The Referit training data consists of 9k images and 54k referring expressions." ] ]
0c1663a7f7750b399f40ef7b4bf19d5c598890ff
How they complete a user query prefix conditioned upon an image?
[ "we replace user embeddings with a low-dimensional image representation" ]
[ [ "To adapt the FactorCell BIBREF4 for our purposes, we replace user embeddings with a low-dimensional image representation. Thus, we are able to modify each query completion to be personalized to a specific image representation. We extract features from an input image using a CNN pretrained on ImageNet, retraining only the last two fully connected layers. The image feature vector is fed into the FactorCell through the adaptation matrix. We perform beam search over the sequence of predicted characters to chose the optimal completion for the given prefix." ] ]
aa800b424db77e634e82680f804894bfa37f2a34
Did the collection process use a WoZ method?
[ "No" ]
[ [ "This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans.", "While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort." ] ]
fbd47705262bfa0a2ba1440a2589152def64cbbd
By how much did their model outperform the baseline?
[ "increasing accuracy by 35% and 25% in comparison to the Baseline and Ablation models, respectively, over INLINEFORM0 increase in EM and GM between our model and the next best two models" ]
[ [ "First, we can observe that the final model “Ours with Mask and Ordered Triplets” outperforms the Baseline and Ablation models on all metrics in previously seen environments. The difference in performance is particularly evident for the Exact Match and Goal Match metrics, with our model increasing accuracy by 35% and 25% in comparison to the Baseline and Ablation models, respectively. These results suggest that providing the behavioral navigation graph to the model and allowing it to process this information as a knowledge base in an end-to-end fashion is beneficial.", "Lastly, it is worth noting that our proposed model (last row of Table TABREF28 ) outperforms all other models in previously seen environments. In particular, we obtain over INLINEFORM0 increase in EM and GM between our model and the next best two models." ] ]
51aaec4c511d96ef5f5c8bae3d5d856d8bc288d3
What baselines did they compare their model with?
[ "the baseline where path generation uses a standard sequence-to-sequence model augmented with attention mechanism and path verification uses depth-first search" ]
[ [ "The baseline approach is based on BIBREF20 . It divides the task of interpreting commands for behavioral navigation into two steps: path generation, and path verification. For path generation, this baseline uses a standard sequence-to-sequence model augmented with an attention mechanism, similar to BIBREF23 , BIBREF6 . For path verification, the baseline uses depth-first search to find a route in the graph that matches the sequence of predicted behaviors. If no route matches perfectly, the baseline changes up to three behaviors in the predicted sequence to try to turn it into a valid path." ] ]
3aee5c856e0ee608a7664289ffdd11455d153234
What was the performance of their model?
[ "For test-repeated set, EM score of 61.17, F1 of 93.54, ED of 0.75 and GM of 61.36. For test-new set, EM score of 41.71, F1 of 91.02, ED of 1.22 and GM of 41.81" ]
[ [] ]
f42d470384ca63a8e106c7caf1cb59c7b92dbc27
What evaluation metrics are used?
[ "exact match, f1 score, edit distance and goal match" ]
[ [ "We compare the performance of translation approaches based on four metrics:", "[align=left,leftmargin=0em,labelsep=0.4em,font=]", "As in BIBREF20 , EM is 1 if a predicted plan matches exactly the ground truth; otherwise it is 0.", "The harmonic average of the precision and recall over all the test set BIBREF26 .", "The minimum number of insertions, deletions or swap operations required to transform a predicted sequence of behaviors into the ground truth sequence BIBREF27 .", "GM is 1 if a predicted plan reaches the ground truth destination (even if the full sequence of behaviors does not match exactly the ground truth). Otherwise, GM is 0." ] ]
29bdd1fb20d013b23b3962a065de3a564b14f0fb
Did the authors use a crowdsourcing platform?
[ "Yes" ]
[ [ "This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans." ] ]
25b2ae2d86b74ea69b09c140a41593c00c47a82b
How were the navigation instructions collected?
[ "using Amazon Mechanical Turk using simulated environments with topological maps" ]
[ [ "This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans.", "We created a new dataset for the problem of following navigation instructions under the behavioral navigation framework of BIBREF5 . This dataset was created using Amazon Mechanical Turk and 100 maps of simulated indoor environments, each with 6 to 65 rooms. To the best of our knowledge, this is the first benchmark for comparing translation models in the context of behavioral robot navigation.", "As shown in Table TABREF16 , the dataset consists of 8066 pairs of free-form natural language instructions and navigation plans for training. This training data was collected from 88 unique simulated environments, totaling 6064 distinct navigation plans (2002 plans have two different navigation instructions each; the rest has one). The dataset contains two test set variants:", "While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort." ] ]
fd7f13b63f6ba674f5d5447b6114a201fe3137cb
What language is the experiment done in?
[ "english language" ]
[ [ "While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort." ] ]
c82e945b43b2e61c8ea567727e239662309e9508
What additional features are proposed for future work?
[ "distinguishing between clinically positive and negative phenomena within each risk factor domain and accounting for structured data collected on the target cohort" ]
[ [ "Our current feature set for training a machine learning classifier is relatively small, consisting of paragraph domain scores, bag-of-words, length of stay, and number of previous admissions, but we intend to factor in many additional features that extend beyond the scope of the present study. These include a deeper analysis of clinical narratives in EHRs: our next task will be to extend our EHR data pipeline by distinguishing between clinically positive and negative phenomena within each risk factor domain. This will involve a series of annotation tasks that will allow us to generate lexicon-based and corpus-based sentiment analysis tools. We can then use these clinical sentiment scores to generate a gradient of patient improvement or deterioration over time." ] ]
fbee81a9d90ff23603ee4f5986f9e8c0eb035b52
What are their initial results on this task?
[ "Achieved the highest per-domain scores on Substance (F1 ≈ 0.8) and the lowest scores on Interpersonal and Mood (F1 ≈ 0.5), and show consistency in per-domain performance rankings between MLP and RBF models." ]
[ [] ]
39cf0b3974e8a19f3745ad0bcd1e916bf20eeab8
What datasets did the authors use?
[ " a corpus of discharge summaries, admission notes, individual encounter notes, and other clinical notes from 220 patients in the OnTrackTM program at McLean Hospital, an additional data set for training our vector space model, comprised of EHR texts queried from the Research Patient Data Registry (RPDR)" ]
[ [ "Our target data set consists of a corpus of discharge summaries, admission notes, individual encounter notes, and other clinical notes from 220 patients in the OnTrackTM program at McLean Hospital. OnTrackTM is an outpatient program, focusing on treating adults ages 18 to 30 who are experiencing their first episodes of psychosis. The length of time in the program varies depending on patient improvement and insurance coverage, with an average of two to three years. The program focuses primarily on early intervention via individual therapy, group therapy, medication evaluation, and medication management. See Table TABREF2 for a demographic breakdown of the 220 patients, for which we have so far extracted approximately 240,000 total EHR paragraphs spanning from 2011 to 2014 using Meditech, the software employed by McLean for storing and organizing EHR data.", "These patients are part of a larger research cohort of approximately 1,800 psychosis patients, which will allow us to connect the results of this EHR study with other ongoing research studies incorporating genetic, cognitive, neurobiological, and functional outcome data from this cohort.", "We also use an additional data set for training our vector space model, comprised of EHR texts queried from the Research Patient Data Registry (RPDR), a centralized regional data repository of clinical data from all institutions in the Partners HealthCare network. These records are highly comparable in style and vocabulary to our target data set. The corpus consists of discharge summaries, encounter notes, and visit notes from approximately 30,000 patients admitted to the system's hospitals with psychiatric diagnoses and symptoms. This breadth of data captures a wide range of clinical narratives, creating a comprehensive foundation for topic extraction." ] ]
1f6180bba0bc657c773bd3e4269f87540a520ead
How many linguistic and semantic features are learned?
[ "Unanswerable" ]
[ [] ]
57388bf2693d71eb966d42fa58ab66d7f595e55f
How is morphology knowledge implemented in the method?
[ "A BPE model is applied to the stem after morpheme segmentation." ]
[ [ "The problem with morpheme segmentation is that the vocabulary of stem units is still very large, which leads to many rare and unknown words at the training time. The problem with BPE is that it do not consider the morpheme boundaries inside words, which might cause a loss of morphological properties and semantic information. Hence, on the analyses of the above popular word segmentation methods, we propose the morphologically motivated segmentation strategy that combines the morpheme segmentation and BPE for further improving the translation performance of NMT.", "Compared with the sentence of word surface forms, the corresponding sentence of stem units only contains the structure information without considering morphological information, which can make better generalization over inflectional variants of the same word and reduce data sparseness BIBREF8. Therefore, we learn a BPE model on the stem units in the training corpus rather than the words, and then apply it on the stem unit of each word after morpheme segmentation." ] ]
47796c7f0a7de76ccb97ccbd43dc851bb8a613d5
How does the word segmentation method work?
[ "morpheme segmentation BIBREF4 and Byte Pair Encoding (BPE) BIBREF5, Zemberek, BIBREF12" ]
[ [ "We will elaborate two popular word segmentation methods and our newly proposed segmentation strategies in this section. The two popular segmentation methods are morpheme segmentation BIBREF4 and Byte Pair Encoding (BPE) BIBREF5. After word segmentation, we additionally add an specific symbol behind each separated subword unit, which aims to assist the NMT model to identify the morpheme boundaries and capture the semantic information effectively. The sentence examples with different segmentation strategies for Turkish-English machine translation task are shown in Table 1.", "We utilize the Zemberek with a morphological disambiguation tool to segment the Turkish words into morpheme units, and utilize the morphology analysis tool BIBREF12 to segment the Uyghur words into morpheme units. We employ the python toolkits of jieba for Chinese word segmentation. We apply BPE on the target-side words and we set the number of merge operations to 35K for Chinese and 30K for English and we set the maximum sentence length to 150 tokens. The training corpus statistics of Turkish-English and Uyghur-Chinese machine translation tasks are shown in Table 2 and Table 3 respectively." ] ]
9d5153a7553b7113716420a6ddceb59f877eb617
Is the word segmentation method independently evaluated?
[ "No" ]
[ [] ]
55c840a2f1f663ab2bff984ae71501b17429d0c0
Do they normalize the calculated intermediate output hypotheses to compensate for the incompleteness?
[ "Unanswerable" ]
[ [] ]
fa5357c56ea80a21a7ca88a80f21711c5431042c
How many layers do they use in their best performing network?
[ "36" ]
[ [ "Table TABREF20 shows results for Librispeech with SpecAugment. We test both CTC and CE/hybrid systems. There are consistent gains first from iterated loss, and then from multiple feature presentation. We also run additional CTC experiments with 36 layers Transformer (total parameters $\\pm $120 millions). The baseline with 36 layers has the same performance with 24 layers, but by adding the proposed methods, the 36 layer performance improved to give the best results. This shows that our proposed methods can improve even very deep models." ] ]
35915166ab2fd3d39c0297c427d4ac00e8083066
Do they just sum up all the loses the calculate to end up with one single loss?
[ "No" ]
[ [ "We have found it beneficial to apply the loss function at several intermediate layers of the network. Suppose there are $M$ total layers, and define a subset of these layers at which to apply the loss function: $K = \\lbrace k_1, k_2, ..., k_L\\rbrace \\subseteq \\lbrace 1,..,M-1\\rbrace $. The total objective function is then defined as", "where $Z_{k_l}$ is the $k_l$-th Transformer layer activations, $Y$ is the ground-truth transcription for CTC and context dependent states for hybrid ASR, and $Loss(P, Y)$ can be defined as CTC objective BIBREF11 or cross entropy for hybrid ASR. The coefficient $\\lambda $ scales the auxiliary loss and we set $\\lambda = 0.3$ based on our preliminary experiments. We illustrate the auxiliary prediction and loss in Figure FIGREF6." ] ]
e6c872fea474ea96ca2553f7e9d5875df4ef55cd
Does their model take more time to train than regular transformer models?
[ "Unanswerable" ]
[ [] ]
fc29bb14f251f18862c100e0d3cd1396e8f2c3a1
Are agglutinative languages used in the prediction of both prefixing and suffixing languages?
[ "Yes" ]
[ [ "Spanish (SPA), in contrast, is morphologically rich, and disposes of much larger verbal paradigms than English. Like English, it is a suffixing language, and it additionally makes use of internal stem changes (e.g., o $\\rightarrow $ ue).", "Since English and Spanish are both Indo-European languages, and, thus, relatively similar, we further add a third, unrelated target language. We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing.", "Analyzing next the errors concerning affixes, we find that models pretrained on HUN, ITA, DEU, and FRA (in that order) commit the fewest errors. This supports two of our previous hypotheses: First, given that ITA and FRA are both from the same language family as SPA, relatedness seems to be benficial for learning of the second language. Second, the system pretrained on HUN performing well suggests again that a source language with an agglutinative, as opposed to a fusional, morphology seems to be beneficial as well.", "In Table TABREF39, the errors for Zulu are shown, and Table TABREF19 reveals the relative performance for different source languages: TUR $>$ HUN $>$ DEU $>$ ITA $>$ FRA $>$ NAV $>$ EUS $>$ QVH. Again, TUR and HUN obtain high accuracy, which is an additional indicator for our hypothesis that a source language with an agglutinative morphology facilitates learning of inflection in another language." ] ]
f3e96c5487d87557a661a65395b0162033dc05b3
What is an example of a prefixing language?
[ "Zulu" ]
[ [ "Since English and Spanish are both Indo-European languages, and, thus, relatively similar, we further add a third, unrelated target language. We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing." ] ]
74db8301d42c7e7936eb09b2171cd857744c52eb
How is the performance on the task evaluated?
[ "Comparison of test accuracies of neural network models on an inflection task and qualitative analysis of the errors" ]
[ [ "Neural network models, and specifically sequence-to-sequence models, have pushed the state of the art for morphological inflection – the task of learning a mapping from lemmata to their inflected forms – in the last years BIBREF6. Thus, in this work, we experiment on such models, asking not what they learn, but, motivated by the respective research on human subjects, the related question of how what they learn depends on their prior knowledge. We manually investigate the errors made by artificial neural networks for morphological inflection in a target language after pretraining on different source languages. We aim at finding answers to two main questions: (i) Do errors systematically differ between source languages? (ii) Do these differences seem explainable, given the properties of the source and target languages? In other words, we are interested in exploring if and how L2 acquisition of morphological inflection depends on the L1, i.e., the \"native language\", in neural network models.", "For our qualitative analysis, we make use of the validation set. Therefore, we show validation set accuracies in Table TABREF19 for comparison. As we can see, the results are similar to the test set results for all language combinations. We manually annotate the outputs for the first 75 development examples for each source–target language combination. All found errors are categorized as belonging to one of the following categories." ] ]
587885bc86543b8f8b134c20e2c62f6251195571
What are the tree target languages studied in the paper?
[ "English, Spanish and Zulu" ]
[ [ "To this goal, we select a diverse set of eight source languages from different language families – Basque, French, German, Hungarian, Italian, Navajo, Turkish, and Quechua – and three target languages – English, Spanish and Zulu. We pretrain a neural sequence-to-sequence architecture on each of the source languages and then fine-tune the resulting models on small datasets in each of the target languages. Analyzing the errors made by the systems, we find that (i) source and target language being closely related simplifies the successful learning of inflection in the target language, (ii) the task is harder to learn in a prefixing language if the source language is suffixing – as well as the other way around, and (iii) a source language which exhibits an agglutinative morphology simplifies learning of a second language's inflectional morphology." ] ]
b72264a73eea36c828e7de778a8b4599a5d02b39
Is the model evaluated against any baseline?
[ "No" ]
[ [ "The experiment settings from this paper and evaluation protocol for the Mboshi corpus (Boundary F-scores using the ZRC speech reference) are the same from BIBREF8. Table presents the results for bilingual UWS and multilingual leveraging. For the former, we reach our best result by using as aligned information the French, the original aligned language for this dataset. Languages closely related to French (Spanish and Portuguese) ranked better, while our worst result used German. English also performs notably well in our experiments. We believe this is due to the statistics features of the resulting text. We observe in Table that the English portion of the dataset contains the smallest vocabulary among all languages. Since we train our systems in very low-resource settings, vocabulary-related features can impact greatly the system's capacity to language-model, and consequently the final quality of the produced alignments. Even in high-resource settings, it was already attested that some languages are more difficult to model than others BIBREF9." ] ]
24cc1586e5411a7f8574796d3c576b7c677d6e21
Does the paper report the accuracy of the model?
[ "No" ]
[ [ "The experiment settings from this paper and evaluation protocol for the Mboshi corpus (Boundary F-scores using the ZRC speech reference) are the same from BIBREF8. Table presents the results for bilingual UWS and multilingual leveraging. For the former, we reach our best result by using as aligned information the French, the original aligned language for this dataset. Languages closely related to French (Spanish and Portuguese) ranked better, while our worst result used German. English also performs notably well in our experiments. We believe this is due to the statistics features of the resulting text. We observe in Table that the English portion of the dataset contains the smallest vocabulary among all languages. Since we train our systems in very low-resource settings, vocabulary-related features can impact greatly the system's capacity to language-model, and consequently the final quality of the produced alignments. Even in high-resource settings, it was already attested that some languages are more difficult to model than others BIBREF9." ] ]
db291d734524fa51fb314628b64ebe1bac7f7e1e
How is the performance of the model evaluated?
[ "The experiment settings from this paper and evaluation protocol for the Mboshi corpus (Boundary F-scores using the ZRC speech reference) are the same from BIBREF8., For the multilingual selection experiments, we experimented combining the languages from top to bottom as they appear Table (ranked by performance; e.g. 1-3 means the combination of FR(1), EN(2) and PT(3)). , Lastly, following the methodology from BIBREF8, we extract the most confident alignments (in terms of ANE) discovered by the bilingual models." ]
[ [ "The experiment settings from this paper and evaluation protocol for the Mboshi corpus (Boundary F-scores using the ZRC speech reference) are the same from BIBREF8. Table presents the results for bilingual UWS and multilingual leveraging. For the former, we reach our best result by using as aligned information the French, the original aligned language for this dataset. Languages closely related to French (Spanish and Portuguese) ranked better, while our worst result used German. English also performs notably well in our experiments. We believe this is due to the statistics features of the resulting text. We observe in Table that the English portion of the dataset contains the smallest vocabulary among all languages. Since we train our systems in very low-resource settings, vocabulary-related features can impact greatly the system's capacity to language-model, and consequently the final quality of the produced alignments. Even in high-resource settings, it was already attested that some languages are more difficult to model than others BIBREF9.", "For the multilingual selection experiments, we experimented combining the languages from top to bottom as they appear Table (ranked by performance; e.g. 1-3 means the combination of FR(1), EN(2) and PT(3)). We observe that the performance improvement is smaller than the one observed in previous work BIBREF10, which we attribute to the fact that our dataset was artificially augmented. This could result in the available multilingual form of supervision not being as rich as in a manually generated dataset. Finally, the best boundary segmentation result is obtained by performing multilingual voting with all the languages and an agreement of 50%, which indicates that the information learned by different languages will provide additional complementary evidence.", "Lastly, following the methodology from BIBREF8, we extract the most confident alignments (in terms of ANE) discovered by the bilingual models. Table presents the top 10 most confident (discovered type, translation) pairs. Looking at the pairs the bilingual models are most confident about, we observe there are some types discovered by all the bilingual models (e.g. Mboshi word itua, and the concatenation oboá+ngá). However, the models still differ for most of their alignments in the table. This hints that while a portion of the lexicon might be captured independently of the language used, other structures might be more dependent of the chosen language. On this note, BIBREF11 suggests the notion of word cannot always be meaningfully defined cross-linguistically." ] ]
85abd60094c92eb16f39f861c6de8c2064807d02
What are the different bilingual models employed?
[ " Neural Machine Translation (NMT) models are trained between language pairs, using as source language the translation (word-level) and as target" ]
[ [ "We use the bilingual neural-based Unsupervised Word Segmentation (UWS) approach from BIBREF6 to discover words in Mboshi. In this approach, Neural Machine Translation (NMT) models are trained between language pairs, using as source language the translation (word-level) and as target, the language to document (unsegmented phonemic sequence). Due to the attention mechanism present in these networks BIBREF7, posterior to training, it is possible to retrieve soft-alignment probability matrices between source and target sequences. These matrices give us sentence-level source-to-target alignment information, and by using it for clustering neighbor phonemes aligned to the same translation word, we are able to create segmentation in the target side. The product of this approach is a set of (discovered-units, translation words) pairs.", "In this work we apply two simple methods for including multilingual information into the bilingual models from BIBREF6. The first one, Multilingual Voting, consists of merging the information learned by models trained with different language pairs by performing a voting over the final discovered boundaries. The voting is performed by applying an agreement threshold $T$ over the output boundaries. This threshold balances between accepting all boundaries from all the bilingual models (zero agreement) and accepting only input boundaries discovered by all these models (total agreement). The second method is ANE Selection. For every language pair and aligned sentence in the dataset, a soft-alignment probability matrix is generated. We use Average Normalized Entropy (ANE) BIBREF8 computed over these matrices for selecting the most confident one for segmenting each phoneme sequence. This exploits the idea that models trained on different language pairs will have language-related behavior, thus differing on the resulting alignment and segmentation over the same phoneme sequence.", "Lastly, following the methodology from BIBREF8, we extract the most confident alignments (in terms of ANE) discovered by the bilingual models. Table presents the top 10 most confident (discovered type, translation) pairs. Looking at the pairs the bilingual models are most confident about, we observe there are some types discovered by all the bilingual models (e.g. Mboshi word itua, and the concatenation oboá+ngá). However, the models still differ for most of their alignments in the table. This hints that while a portion of the lexicon might be captured independently of the language used, other structures might be more dependent of the chosen language. On this note, BIBREF11 suggests the notion of word cannot always be meaningfully defined cross-linguistically." ] ]
50f09a044f0c0795cc636c3e25a4f7c3231fb08d
How does the well-resourced language impact the quality of the output?
[ "Results show that similar languages rank better in terms of segmentation performance, and that by combining the information learned by different models, segmentation is further improved." ]
[ [ "In this work we train bilingual UWS models using the endangered language Mboshi as target and different well-resourced languages as aligned information. Results show that similar languages rank better in terms of segmentation performance, and that by combining the information learned by different models, segmentation is further improved. This might be due to the different language-dependent structures that are captured by using more than one language. Lastly, we extend the bilingual Mboshi-French parallel corpus, creating a multilingual corpus for the endangered language Mboshi that we make available to the community." ] ]
26b5c090f72f6d51e5d9af2e470d06b2d7fc4a98
what are the baselines?
[ " 4-layer encoder, 4-layer decoder, residual-connected model, with embedding and hidden size set as 256" ]
[ [ "As the baseline model (BASE-4L) for IWSLT14 German-English and Turkish-English, we use a 4-layer encoder, 4-layer decoder, residual-connected model, with embedding and hidden size set as 256 by default. As a comparison, we design a densely connected model with same number of layers, but the hidden size is set as 128 in order to keep the model size consistent. The models adopting DenseAtt-1, DenseAtt-2 are named as DenseNMT-4L-1 and DenseNMT-4L-2 respectively. In order to check the effect of dense connections on deeper models, we also construct a series of 8-layer models. We set the hidden number to be 192, such that both 4-layer models and 8-layer models have similar number of parameters. For dense structured models, we set the dimension of hidden states to be 96." ] ]
8c0621016e96d86a7063cb0c9ec20c76a2dba678
did they outperform previous methods?
[ "Yes" ]
[ [ "Table TABREF32 shows the results for De-En, Tr-En, Tr-En-morph datasets, where the best accuracy for models with the same depth and of similar sizes are marked in boldface. In almost all genres, DenseNMT models are significantly better than the baselines. With embedding size 256, where all models achieve their best scores, DenseNMT outperforms baselines by 0.7-1.0 BLEU on De-En, 0.5-1.3 BLEU on Tr-En, 0.8-1.5 BLEU on Tr-En-morph. We observe significant gain using other embedding sizes as well." ] ]
f1214a05cc0e6d870c789aed24a8d4c768e1db2f
what language pairs are explored?
[ "German-English, Turkish-English, English-German" ]
[ [ "We use three datasets for our experiments: IWSLT14 German-English, Turkish-English, and WMT14 English-German." ] ]
41d3ab045ef8e52e4bbe5418096551a22c5e9c43
what datasets were used?
[ "IWSLT14 German-English, IWSLT14 Turkish-English, WMT14 English-German" ]
[ [ "We use three datasets for our experiments: IWSLT14 German-English, Turkish-English, and WMT14 English-German." ] ]
62736ad71c76a20aee8e003c462869bab4ab4b1e
How is order of binomials tracked across time?
[ "draw our data from news publications, wine reviews, and Reddit, develop new metrics for the agreement of binomial orderings across communities and the movement of binomial orderings over time, develop a null model to determine how much variation in binomial orderings we might expect across communities and across time" ]
[ [ "The present work: Binomials in large-scale online text. In this work, we use data from large-scale Internet text corpora to study binomials at a massive scale, drawing on text created by millions of users. Our approach is more wholesale than prior work - we focus on all binomials of sufficient frequency, without first restricting to small samples of binomials that might be frozen. We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time. Furthermore, the subject matter on Reddit leads to many lists about people and organizations that lets us study orderings of proper names — a key setting for word ordering which has been difficult to study by other means.", "We also address temporal and community structure in collections of binomials. While it has been recognized that the orderings of binomials may change over time or between communities BIBREF5, BIBREF10, BIBREF1, BIBREF13, BIBREF14, BIBREF15, there has been little analysis of this change. We develop new metrics for the agreement of binomial orderings across communities and the movement of binomial orderings over time. Using subreddits as communities, these metrics reveal variations in orderings, some of which suggest cultural change influencing language. For example, in one community, we find that over a period of 10 years, the binomial `son and daughter' went from nearly frozen to appearing in that order only 64% of the time.", "While these changes do happen, they are generally quite rare. Most binomials — frozen or not — are ordered in one way about the same percentage of the time, regardless of community or the year. We develop a null model to determine how much variation in binomial orderings we might expect across communities and across time, if binomial orderings were randomly ordered according to global asymmetry values. We find that there is less variation across time and communities in the data compared to this model, implying that binomial orderings are indeed remarkably stable." ] ]
aaf50a6a9f449389ef212d25d0fae59c10b0df92
What types of various community texts have been investigated for exploring global structure of binomials?
[ "news publications, wine reviews, and Reddit" ]
[ [ "The present work: Binomials in large-scale online text. In this work, we use data from large-scale Internet text corpora to study binomials at a massive scale, drawing on text created by millions of users. Our approach is more wholesale than prior work - we focus on all binomials of sufficient frequency, without first restricting to small samples of binomials that might be frozen. We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time. Furthermore, the subject matter on Reddit leads to many lists about people and organizations that lets us study orderings of proper names — a key setting for word ordering which has been difficult to study by other means." ] ]
a1917232441890a89b9a268ad8f987184fa50f7a
Are there any new finding in analasys of trinomials that was not present binomials?
[ "Trinomials are likely to appear in exactly one order" ]
[ [ "Finally, we expand our work to include multinomials, which are lists of more than two words. There already appears to be more structure in trinomials (lists of three) compared to binomials. Trinomials are likely to appear in exactly one order, and when they appear in more than one order the last word is almost always the same across all instances. For instance, in one section of our Reddit data, `Fraud, Waste, and Abuse' appears 34 times, and `Waste, Fraud, and Abuse' appears 20 times. This could point to, for example, recency principles being more important in lists of three than in lists of two. While multinomials were in principle part of the scope of past research in this area, they were difficult to study in smaller corpora, suggesting another benefit of working at our current scale." ] ]
574f17134e4dd041c357ebb75a7ef590da294d22
What new model is proposed for binomial lists?
[ "null model " ]
[ [ "While these changes do happen, they are generally quite rare. Most binomials — frozen or not — are ordered in one way about the same percentage of the time, regardless of community or the year. We develop a null model to determine how much variation in binomial orderings we might expect across communities and across time, if binomial orderings were randomly ordered according to global asymmetry values. We find that there is less variation across time and communities in the data compared to this model, implying that binomial orderings are indeed remarkably stable." ] ]
41fd359b8c1402b31b6f5efd660143d1414783a0
How was performance of previously proposed rules at very large scale?
[ " close to random," ]
[ [ "Basic Features. We first use predictors based on rules that have previously been proposed in the literature: word length, number of phonemes, number of syllables, alphabetical order, and frequency. We collect all binomials but make predictions only on binomials appearing at least 30 times total, stratified by subreddit. However, none of these features appear to be particularly predictive across the board (Table TABREF15). A simple linear regression model predicts close to random, which bolsters the evidence that these classical rules for frozen binomials are not predictive for general binomials." ] ]
d216d715ec27ee2d4949f9e908895a18fb3238e2
What previously proposed rules for predicting binoial ordering are used?
[ "word length, number of phonemes, number of syllables, alphabetical order, and frequency" ]
[ [ "Basic Features. We first use predictors based on rules that have previously been proposed in the literature: word length, number of phonemes, number of syllables, alphabetical order, and frequency. We collect all binomials but make predictions only on binomials appearing at least 30 times total, stratified by subreddit. However, none of these features appear to be particularly predictive across the board (Table TABREF15). A simple linear regression model predicts close to random, which bolsters the evidence that these classical rules for frozen binomials are not predictive for general binomials." ] ]
ba973b13f26cd5eb1da54663c0a72842681d5bf5
What online text resources are used to test binomial lists?
[ "news publications, wine reviews, and Reddit" ]
[ [ "The present work: Binomials in large-scale online text. In this work, we use data from large-scale Internet text corpora to study binomials at a massive scale, drawing on text created by millions of users. Our approach is more wholesale than prior work - we focus on all binomials of sufficient frequency, without first restricting to small samples of binomials that might be frozen. We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time. Furthermore, the subject matter on Reddit leads to many lists about people and organizations that lets us study orderings of proper names — a key setting for word ordering which has been difficult to study by other means." ] ]
508580af51483b5fb0df2630e8ea726ff08d537b
How do they model a city description using embeddings?
[ "We experiment with three different pretrained representations: ELMo BIBREF5 , BERT BIBREF6 , and GloVe BIBREF18 . To produce a single city embedding, we compute the TF-IDF weighted element-wise mean of the token-level representations. For all pretrained methods, we additionally reduce the dimensionality of the city embeddings to 40 using PCA for increased compatibility with our clustering algorithm." ]
[ [ "While each of the city descriptions is relatively short, Calvino's writing is filled with rare words, complex syntactic structures, and figurative language. Capturing the essential components of each city in a single vector is thus not as simple as it is with more standard forms of text. Nevertheless, we hope that representations from language models trained over billions of words of text can extract some meaningful semantics from these descriptions. We experiment with three different pretrained representations: ELMo BIBREF5 , BERT BIBREF6 , and GloVe BIBREF18 . To produce a single city embedding, we compute the TF-IDF weighted element-wise mean of the token-level representations. For all pretrained methods, we additionally reduce the dimensionality of the city embeddings to 40 using PCA for increased compatibility with our clustering algorithm." ] ]
89d1687270654979c53d0d0e6a845cdc89414c67
How do they obtain human judgements?
[ "Using crowdsourcing " ]
[ [ "As the book is too small to train any models, we leverage recent advances in large-scale language model-based representations BIBREF5 , BIBREF6 to compute a representation of each city. We feed these representations into a clustering algorithm that produces exactly eleven clusters of five cities each and evaluate them against both Calvino's original labels and crowdsourced human judgments. While the overall correlation with Calvino's labels is low, both computers and humans can reliably identify some thematic groups associated with concrete objects." ] ]
fc6cfac99636adda28654e1e19931c7394d76c7c
Which clustering method do they use to cluster city description embeddings?
[ " We devise a simple clustering algorithm to approximate this process. First, we initialize with random cluster assignments and define “cluster strength” to be the relative difference between “intra-group” Euclidean distance and “inter-group” Euclidean distance. Then, we iteratively propose random exchanges of memberships, only accepting these proposals when the cluster strength increases, until convergence. " ]
[ [ "Given 55 city representations, how do we group them into eleven clusters of five cities each? Initially, we experimented with a graph-based community detection algorithm that maximizes cluster modularity BIBREF20 , but we found no simple way to constrain this method to produce a specific number of equally-sized clusters. The brute force approach of enumerating all possible cluster assignments is intractable given the large search space ( INLINEFORM0 possible assignments). We devise a simple clustering algorithm to approximate this process. First, we initialize with random cluster assignments and define “cluster strength” to be the relative difference between “intra-group” Euclidean distance and “inter-group” Euclidean distance. Then, we iteratively propose random exchanges of memberships, only accepting these proposals when the cluster strength increases, until convergence. To evaluate the quality of the computationally-derived clusters against those of Calvino, we measure cluster purity BIBREF21 : given a set of predicted clusters INLINEFORM1 and ground-truth clusters INLINEFORM2 that both partition a set of INLINEFORM3 data points, INLINEFORM4" ] ]
ed7a3e7fc1672f85a768613e7d1b419475950ab4
Does this approach perform better in the multi-domain or single-domain setting?
[ "single-domain setting" ]
[ [] ]
72ceeb58e783e3981055c70a3483ea706511fac3
What are the performance metrics used?
[ "joint goal accuracy" ]
[ [ "As a convention, the metric of joint goal accuracy is used to compare our model to previous work. The joint goal accuracy only regards the model making a successful belief state prediction if all of the slots and values predicted are exactly matched with the labels provided. This metric gives a strict measurement that tells how often the DST module will not propagate errors to the downstream modules in a dialogue system. In this work, the model with the highest joint accuracy on the validation set is evaluated on the test set for the test joint accuracy measurement." ] ]
9bfa46ad55136f2a365e090ce585fc012495393c
Which datasets are used to evaluate performance?
[ "the single domain dataset, WoZ2.0 , the multi-domain dataset, MultiWoZ" ]
[ [ "We first test our model on the single domain dataset, WoZ2.0 BIBREF19 . It consists of 1,200 dialogues from the restaurant reservation domain with three pre-defined slots: food, price range, and area. Since the name slot rarely occurs in the dataset, it is not included in our experiments, following previous literature BIBREF3 , BIBREF20 . Our model is also tested on the multi-domain dataset, MultiWoZ BIBREF9 . It has a more complex ontology with 7 domains and 25 predefined slots. Since the combined slot-value pairs representation of the belief states has to be applied for the model with $O(n)$ ITC, the total number of slots is 35. The statistics of these two datsets are shown in Table 2 ." ] ]
42812113ec720b560eb9463ff5e74df8764d1bff
How does the automatic theorem prover infer the relation?
[ "Unanswerable" ]
[ [] ]
4f4892f753b1d9c5e5e74c7c94d8c9b6ef523e7b
If these models can learn first-order logic on an artificial language, why can't they learn it for natural language?
[ "Unanswerable" ]
[ [] ]
f258ada8577bb71873581820a94695f4a2c223b3
How many samples did they generate for the artificial language?
[ "70,000" ]
[ [ "In a new experiment, we train the models on pairs of sentences with length 5, 7 or 8, and test on pairs of sentences with lengths 6 or 9. As before, the training and test sets contain some 30,000 and 5,000 sentence pairs, respectively. Results are shown in Table UID19 ." ] ]
05bb75a1e1202850efa9191d6901de0a34744af0
Which of their training domains improves performance the most?
[ "documents from the CommonCrawl dataset that has the most overlapping n-grams with the question" ]
[ [ "As previous systems collect relevant data from knowledge bases after observing questions during evaluation BIBREF24 , BIBREF25 , we also explore using this option. Namely, we build a customized text corpus based on questions in commonsense reasoning tasks. It is important to note that this does not include the answers and therefore does not provide supervision to our resolvers. In particular, we aggregate documents from the CommonCrawl dataset that has the most overlapping n-grams with the questions. The score for each document is a weighted sum of $F_1(n)$ scores when counting overlapping n-grams: $Similarity\\_Score_{document} = \\frac{\\sum _{n=1}^4nF_1(n)}{\\sum _{n=1}^4n}$", "The top 0.1% of highest ranked documents is chosen as our new training corpus. Details of the ranking is shown in Figure 2 . This procedure resulted in nearly 1,000,000 documents, with the highest ranking document having a score of $8\\times 10^{-2}$ , still relatively small to a perfect score of $1.0$ . We name this dataset STORIES since most of the constituent documents take the form of a story with long chain of coherent events.", "Figure 5 -left and middle show that STORIES always yield the highest accuracy for both types of input processing. We next rank the text corpora based on ensemble performance for more reliable results. Namely, we compare the previous ensemble of 10 models against the same set of models trained on each single text corpus. This time, the original ensemble trained on a diverse set of text corpora outperforms all other single-corpus ensembles including STORIES. This highlights the important role of diversity in training data for commonsense reasoning accuracy of the final system." ] ]
770aeff30846cd3d0d5963f527691f3685e8af02
Do they fine-tune their model on the end task?
[ "Yes" ]
[ [ "As previous systems collect relevant data from knowledge bases after observing questions during evaluation BIBREF24 , BIBREF25 , we also explore using this option. Namely, we build a customized text corpus based on questions in commonsense reasoning tasks. It is important to note that this does not include the answers and therefore does not provide supervision to our resolvers. In particular, we aggregate documents from the CommonCrawl dataset that has the most overlapping n-grams with the questions. The score for each document is a weighted sum of $F_1(n)$ scores when counting overlapping n-grams: $Similarity\\_Score_{document} = \\frac{\\sum _{n=1}^4nF_1(n)}{\\sum _{n=1}^4n}$" ] ]
f7817b949605fb04b1e4fec9dd9ca8804fb92ae9
Why doesn't the approach developed for English work on other languages?
[ "Because, unlike other languages, English does not mark grammatical genders" ]
[ [ "To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English BIBREF5 , BIBREF6 , BIBREF7 . Yet, gender stereotypes also exist in other languages because they are a function of society, not of grammar. Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement BIBREF8 . In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns. This means that if the gender of one word changes, the others have to be updated to match. As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped BIBREF9 , will yield ungrammatical sentences. Consider the Spanish phrase el ingeniero experto (the skilled engineer). Replacing ingeniero with ingeniera is insufficient—el must also be replaced with la and experto with experta." ] ]
8255f74cae1352e5acb2144fb857758dda69be02
How do they measure grammaticality?
[ "by calculating log ratio of grammatical phrase over ungrammatical phrase" ]
[ [ "Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase): DISPLAYFORM0" ] ]
db62d5d83ec187063b57425affe73fef8733dd28
Which model do they use to convert between masculine-inflected and feminine-inflected sentences?
[ "Markov random field with an optional neural parameterization" ]
[ [ "We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages. To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns. To the best of our knowledge, this task has not been studied previously. As a result, there is no existing annotated corpus of paired sentences that can be used as “ground truth.” Despite this limitation, we evaluated our approach both intrinsically and extrinsically, achieving promising results. For example, we demonstrated that our approach reduces gender stereotyping in neural language models. Finally, we also identified avenues for future work, such as the inclusion of co-reference information." ] ]
946676f1a836ea2d6fe98cb4cfc26b9f4f81984d
What is the performance achieved by the model described in the paper?
[ "Unanswerable" ]
[ [] ]
3b090b416c4ad7d9b5b05df10c5e7770a4590f6a
What is the best performance achieved by supervised models?
[ "Unanswerable" ]
[ [] ]
a1e07c7563ad038ee2a7de5093ea08efdd6077d4
What is the size of the datasets employed?
[ "(about 4 million sentences, 138 million word tokens), one trained on the Billion Word benchmark" ]
[ [ "are trained to output the probability distribution of the upcoming word given a context, without explicitly representing the structure of the context BIBREF9, BIBREF10. We trained two two-layer recurrent neural language models with long short-term memory architecture BIBREF11 on a relatively small corpus. The first model, referred as `LSTM (PTB)' in the following sections, was trained on the sentences from Penn Treebank BIBREF12. The second model, referred as `LSTM (FTB)', was trained on the sentences from French Treebank BIBREF13. We set the size of input word embedding and LSTM hidden layer of both models as 256.", "We also compare LSTM language models trained on large corpora. We incorporate two pretrained English language models: one trained on the Billion Word benchmark (referred as `LSTM (1B)') from BIBREF14, and the other trained on English Wikipedia (referred as `LSTM (enWiki)') from BIBREF3. For French, we trained a large LSTM language model (referred as `LSTM (frWaC)') on a random subset (about 4 million sentences, 138 million word tokens) of the frWaC dataset BIBREF15. We set the size of the input embeddings and hidden layers to 400 for the LSTM (frWaC) model since it is trained on a large dataset." ] ]
a1c4f9e8661d4d488b8684f055e0ee0e2275f767
What are the baseline models?
[ "Recurrent Neural Network (RNN), ActionLSTM, Generative Recurrent Neural Network Grammars (RNNG)" ]
[ [ "Methods ::: Models Tested ::: Recurrent Neural Network (RNN) Language Models", "are trained to output the probability distribution of the upcoming word given a context, without explicitly representing the structure of the context BIBREF9, BIBREF10. We trained two two-layer recurrent neural language models with long short-term memory architecture BIBREF11 on a relatively small corpus. The first model, referred as `LSTM (PTB)' in the following sections, was trained on the sentences from Penn Treebank BIBREF12. The second model, referred as `LSTM (FTB)', was trained on the sentences from French Treebank BIBREF13. We set the size of input word embedding and LSTM hidden layer of both models as 256.", "Methods ::: Models Tested ::: ActionLSTM", "models the linearized bracketed tree structure of a sentence by learning to predict the next action required to construct a phrase-structure parse BIBREF16. The action space consists of three possibilities: open a new non-terminal node and opening bracket; generate a terminal node; and close a bracket. To compute surprisal values for a given token, we approximate $P(w_i|w_{1\\cdots i-1)}$ by marginalizing over the most-likely partial parses found by word-synchronous beam search BIBREF17.", "Methods ::: Models Tested ::: Generative Recurrent Neural Network Grammars (RNNG)", "jointly model the word sequence as well as the underlying syntactic structure BIBREF18. Following BIBREF19, we estimate surprisal using word-synchronous beam search BIBREF17. We use the same hyper-parameter settings as BIBREF18." ] ]
c5171daf82107fce0f285fa18f19e91fbd1215c5
What evaluation metrics are used?
[ "the evaluation metrics include BLEU and ROUGE (1, 2, L) scores" ]
[ [ "The probability of activating inter-layer and inner-layer teacher forcing is set to 0.5, the probability of teacher forcing is attenuated every epoch, and the decaying ratio is 0.9. The models are trained for 20 training epochs without early stop; when curriculum learning is applied, only the first layer is trained during first five epochs, the second decoder layer starts to be trained at the sixth epoch, and so on. To evaluate the quality of the generated sequences regarding both precision and recall, the evaluation metrics include BLEU and ROUGE (1, 2, L) scores with multiple references BIBREF22 ." ] ]
baeb6785077931e842079e9d0c9c9040947ffa4e
What datasets did they use?
[ "The E2E NLG challenge dataset BIBREF21" ]
[ [ "The E2E NLG challenge dataset BIBREF21 is utilized in our experiments, which is a crowd-sourced dataset of 50k instances in the restaurant domain. Our models are trained on the official training set and verified on the official testing set. As shown in Figure FIGREF2 , the inputs are semantic frames containing specific slots and corresponding values, and the outputs are the associated natural language utterances with the given semantics. For example, a semantic frame with the slot-value pairs “name[Bibimbap House], food[English], priceRange[moderate], area [riverside], near [Clare Hall]” corresponds to the target sentence “Bibimbap House is a moderately priced restaurant who's main cuisine is English food. You will find this local gem near Clare Hall in the Riverside area.”." ] ]
bb570d4a1b814f508a07e74baac735bf6ca0f040
Why does their model do better than prior models?
[ "better sentence pair representations" ]
[ [ "Without ELMo (the same setting as 4-th row in Table 3 ), our data settings is the same as qin-EtAl:2017:Long whose performance was state-of-the-art and will be compared directly. We see that even without the pre-trained ELMo encoder, our performance is better, which is mostly attributed to our better sentence pair representations." ] ]
1771a55236823ed44d3ee537de2e85465bf03eaf
What is the difference in recall score between the systems?
[ "Between the model and Stanford, Spacy and Flair the differences are 42.91, 25.03, 69.8 with Traditional NERs as reference and 49.88, 43.36, 62.43 with Wikipedia titles as reference." ]
[ [] ]
1d74fd1d38a5532d20ffae4abbadaeda225b6932
What is their f1 score and recall?
[ "F1 score and Recall are 68.66, 80.08 with Traditional NERs as reference and 59.56, 69.76 with Wikipedia titles as reference." ]
[ [] ]
da8bda963f179f5517a864943dc0ee71249ee1ce
How many layers does their system have?
[ "4 layers" ]
[ [] ]
5c059a13d59947f30877bed7d0180cca20a83284
Which news corpus is used?
[ "Unanswerable" ]
[ [] ]
a1885f807753cff7a59f69b5cf6d0fdef8484057
How large is the dataset they used?
[ "English wikipedia dataset has more than 18 million, a dump of 15 million English news articles " ]
[ [ "We need good amount of data to try deep learning state-of-the-art algorithms. There are lot of open datasets available for names, locations, organisations, but not for topics as defined in Abstract above. Also defining and inferring topics is an individual preference and there are no fix set of rules for its definition. But according to our definition, we can use wikipedia titles as our target topics. English wikipedia dataset has more than 18 million titles if we consider all versions of them till now. We had to clean up the titles to remove junk titles as wikipedia title almost contains all the words we use daily. To remove such titles, we deployed simple rules as follows -", "After doing some more cleaning we were left with 10 million titles. We have a dump of 15 million English news articles published in past 4 years. Further, we reduced number of articles by removing duplicate and near similar articles. We used our pre-trained doc2vec models and cosine similarity to detect almost similar news articles. Then selected minimum articles required to cover all possible 2-grams to 5-grams. This step is done to save some training time without loosing accuracy. Do note that, in future we are planning to use whole dataset and hope to see gains in F1 and Recall further. But as per manual inspection, our dataset contains enough variations of sentences with rich vocabulary which contains names of celebrities, politicians, local authorities, national/local organisations and almost all locations, India and International, mentioned in the news text, in last 4 years." ] ]
c2553166463b7b5ae4d9786f0446eb06a90af458
Which coreference resolution systems are tested?
[ "the BIBREF5 sieve system from the rule-based paradigm (referred to as RULE), BIBREF6 from the statistical paradigm (STAT), and the BIBREF11 deep reinforcement system from the neural paradigm (NEURAL)." ]
[ [ "In this work, we evaluate three publicly-available off-the-shelf coreference resolution systems, representing three different machine learning paradigms: rule-based systems, feature-driven statistical systems, and neural systems.", "We evaluate examples of each of the three coreference system architectures described in \"Coreference Systems\" : the BIBREF5 sieve system from the rule-based paradigm (referred to as RULE), BIBREF6 from the statistical paradigm (STAT), and the BIBREF11 deep reinforcement system from the neural paradigm (NEURAL)." ] ]
cc9f0ac8ead575a9b485a51ddc06b9ecb2e2a44d
How big is the improvement in performance of the proposed model over the state of the art?
[ "Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively." ]
[ [ "We consider three models as our baselines. SyntaxSQL-con and CD-Seq2Seq are two strong baselines introduced in the SParC dataset paper BIBREF2. SyntaxSQL-con employs a BiLSTM model to encode dialogue history upon the SyntaxSQLNet model (analogous to our Turn) BIBREF23, while CD-Seq2Seq is adapted from BIBREF4 for cross-domain settings (analogous to our Turn+Tree Copy). EditSQL BIBREF5 is a STOA baseline which mainly makes use of SQL attention and token-level copy (analogous to our Turn+SQL Attn+Action Copy).", "Taking Concat as a representative, we compare the performance of our model with other models, as shown in Table TABREF34. As illustrated, our model outperforms baselines by a large margin with or without BERT, achieving new SOTA performances on both datasets. Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively." ] ]
69e678666d11731c9bfa99953e2cd5a5d11a4d4f
What two large datasets are used for evaluation?
[ "SParC BIBREF2 and CoSQL BIBREF6" ]
[ [ "In this paper, we try to fulfill the above insufficiency via an exploratory study on real-world semantic parsing in context. Concretely, we present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it. Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods. Our best model achieves state-of-the-art (SOTA) performances on both datasets with significant improvements. Furthermore, we summarize and generalize the most frequent contextual phenomena, with a fine-grained analysis on representative models. Through the analysis, we obtain some interesting findings, which may benefit the community on the potential research directions. We will open-source our code and materials to facilitate future work upon acceptance." ] ]
471d624498ab48549ce492ada9e6129da05debac
What context modelling methods are evaluated?
[ "Concat\nTurn\nGate\nAction Copy\nTree Copy\nSQL Attn\nConcat + Action Copy\nConcat + Tree Copy\nConcat + SQL Attn\nTurn + Action Copy\nTurn + Tree Copy\nTurn + SQL Attn\nTurn + SQL Attn + Action Copy" ]
[ [ "To conduct a thorough comparison, we evaluate 13 different context modeling methods upon the same parser, including 6 methods introduced in Section SECREF2 and 7 selective combinations of them (e.g., Concat+Action Copy). The experimental results are presented in Figure FIGREF37. Taken as a whole, it is very surprising to observe that none of these methods can be consistently superior to the others. The experimental results on BERT-based models show the same trend. Diving deep into the methods only using recent questions as context, we observe that Concat and Turn perform competitively, outperforming Gate by a large margin. With respect to the methods only using precedent SQL as context, Action Copy significantly surpasses Tree Copy and SQL Attn in all metrics. In addition, we observe that there is little difference in the performance of Action Copy and Concat, which implies that using precedent SQL as context gives almost the same effect with using recent questions. In terms of the combinations of different context modeling methods, they do not significantly improve the performance as we expected." ] ]
f858031ebe57b6139af46ee0f25c10870bb00c3c
What are the two datasets the models are tested on?
[ "SParC BIBREF2 and CoSQL BIBREF6" ]
[ [ "In this paper, we try to fulfill the above insufficiency via an exploratory study on real-world semantic parsing in context. Concretely, we present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it. Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods. Our best model achieves state-of-the-art (SOTA) performances on both datasets with significant improvements. Furthermore, we summarize and generalize the most frequent contextual phenomena, with a fine-grained analysis on representative models. Through the analysis, we obtain some interesting findings, which may benefit the community on the potential research directions. We will open-source our code and materials to facilitate future work upon acceptance." ] ]
1763a029daca7cab10f18634aba02a6bd1b6faa7
How big is the improvement over the state-of-the-art results?
[ "AGDT improves the performance by 2.4% and 1.6% in the “DS” part of the two dataset, Our AGDT surpasses GCAE by a very large margin (+11.4% and +4.9% respectively) on both datasets, In the “HDS” part, the AGDT model obtains +3.6% higher accuracy than GCAE on the restaurant domain and +4.2% higher accuracy on the laptop domain" ]
[ [ "Experiments ::: Main Results and Analysis ::: Aspect-Category Sentiment Analysis Task", "We present the overall performance of our model and baseline models in Table TABREF27. Results show that our AGDT outperforms all baseline models on both “restaurant-14” and “restaurant-large” datasets. ATAE-LSTM employs an aspect-weakly associative encoder to generate the aspect-specific sentence representation by simply concatenating the aspect, which is insufficient to exploit the given aspect. Although GCAE incorporates the gating mechanism to control the sentiment information flow according to the given aspect, the information flow is generated by an aspect-independent encoder. Compared with GCAE, our AGDT improves the performance by 2.4% and 1.6% in the “DS” part of the two dataset, respectively. These results demonstrate that our AGDT can sufficiently exploit the given aspect to generate the aspect-guided sentence representation, and thus conduct accurate sentiment prediction. Our model benefits from the following aspects. First, our AGDT utilizes an aspect-guided encoder, which leverages the given aspect to guide the sentence encoding from scratch and generates the aspect-guided representation. Second, the AGDT guarantees that the aspect-specific information has been fully embedded in the sentence representation via reconstructing the given aspect. Third, the given aspect embedding is concatenated on the aspect-guided sentence representation for final predictions.", "The “HDS”, which is designed to measure whether a model can detect different sentiment polarities in a sentence, consists of replicated sentences with different sentiments towards multiple aspects. Our AGDT surpasses GCAE by a very large margin (+11.4% and +4.9% respectively) on both datasets. This indicates that the given aspect information is very pivotal to the accurate sentiment prediction, especially when the sentence has different sentiment labels, which is consistent with existing work BIBREF2, BIBREF3, BIBREF4. Those results demonstrate the effectiveness of our model and suggest that our AGDT has better ability to distinguish the different sentiments of multiple aspects compared to GCAE.", "Experiments ::: Main Results and Analysis ::: Aspect-Term Sentiment Analysis Task", "In the “HDS” part, the AGDT model obtains +3.6% higher accuracy than GCAE on the restaurant domain and +4.2% higher accuracy on the laptop domain, which shows that our AGDT has stronger ability for the multi-sentiment problem against GCAE. These results further demonstrate that our model works well across tasks and datasets." ] ]
f9de9ddea0c70630b360167354004ab8cbfff041
Is the model evaluated against other Aspect-Based models?
[ "Yes" ]
[ [ "Experiments ::: Baselines", "To comprehensively evaluate our AGDT, we compare the AGDT with several competitive models." ] ]
fc8bc6a3c837a9d1c869b7ee90cf4e3c39bcd102
Is the baseline a non-hierarchical model like BERT?
[ "There were hierarchical and non-hierarchical baselines; BERT was one of those baselines" ]
[ [ "Our main results on the CNNDM dataset are shown in Table 1 , with abstractive models in the top block and extractive models in the bottom block. Pointer+Coverage BIBREF9 , Abstract-ML+RL BIBREF10 and DCA BIBREF42 are all sequence to sequence learning based models with copy and coverage modeling, reinforcement learning and deep communicating agents extensions. SentRewrite BIBREF26 and InconsisLoss BIBREF25 all try to decompose the word by word summary generation into sentence selection from document and “sentence” level summarization (or compression). Bottom-Up BIBREF27 generates summaries by combines a word prediction model with the decoder attention model. The extractive models are usually based on hierarchical encoders (SummaRuNNer; BIBREF3 and NeuSum; BIBREF11 ). They have been extended with reinforcement learning (Refresh; BIBREF4 and BanditSum; BIBREF20 ), Maximal Marginal Relevance (NeuSum-MMR; BIBREF21 ), latent variable modeling (LatentSum; BIBREF5 ) and syntactic compression (JECS; BIBREF38 ). Lead3 is a baseline which simply selects the first three sentences. Our model $\\text{\\sc Hibert}_S$ (in-domain), which only use one pre-training stage on the in-domain CNNDM training set, outperforms all of them and differences between them are all significant with a 0.95 confidence interval (estimated with the ROUGE script). Note that pre-training $\\text{\\sc Hibert}_S$ (in-domain) is very fast and it only takes around 30 minutes for one epoch on the CNNDM training set. Our models with two pre-training stages ( $\\text{\\sc Hibert}_S$ ) or larger size ( $\\text{\\sc Hibert}_M$ ) perform even better and $\\text{\\sc Hibert}_M$ outperforms BERT by 0.5 ROUGE. We also implemented two baselines. One is the hierarchical transformer summarization model (HeriTransfomer; described in \"Extractive Summarization\" ) without pre-training. Note the setting for HeriTransfomer is ( $L=4$ , $H=300$ and $A=4$ ) . We can see that the pre-training (details in Section \"Pre-training\" ) leads to a +1.25 ROUGE improvement. Another baseline is based on a pre-trained BERT BIBREF0 and finetuned on the CNNDM dataset. We used the $\\text{BERT}_{\\text{base}}$ model because our 16G RAM V100 GPU cannot fit $\\text{BERT}_{\\text{large}}$ for the summarization task even with batch size of 1. The positional embedding of BERT supports input length up to 512 words, we therefore split documents with more than 10 sentences into multiple blocks (each block with 10 sentences). We feed each block (the BOS and EOS tokens of each sentence are replaced with [CLS] and [SEP] tokens) into BERT and use the representation at [CLS] token to classify each sentence. Our model $\\text{\\sc Hibert}_S$1 outperforms BERT by 0.4 to 0.5 ROUGE despite with only half the number of model parameters ( $\\text{\\sc Hibert}_S$2 54.6M v.s. BERT 110M). Results on the NYT50 dataset show the similar trends (see Table 2 ). EXTRACTION is a extractive model based hierarchical LSTM and we use the numbers reported by xu:2019:arxiv. The improvement of $\\text{\\sc Hibert}_S$3 over the baseline without pre-training (HeriTransformer) becomes 2.0 ROUGE. $\\text{\\sc Hibert}_S$4 (in-domain), $\\text{\\sc Hibert}_S$5 (in-domain), $\\text{\\sc Hibert}_S$6 and $\\text{\\sc Hibert}_S$7 all outperform BERT significantly according to the ROUGE script." ] ]
58e65741184c81c9e7fe0ca15832df2d496beb6f
Do they build a model to recognize discourse relations on their dataset?
[ "No" ]
[ [] ]
269b05b74d5215b09c16e95a91ae50caedd9e2aa
Which inter-annotator metric do they use?
[ "agreement rates, Kappa value" ]
[ [ "We measured intra-annotator agreement between two annotators in three aspects: relations, senses, arguments. To be specific, the annotators’ consistency in annotating the type of a specific relation or sense and the position and scope of arguments are measured. To assess the consistency of annotations and also eliminate coincidental annotations, we used agreement rates, which is calculated by dividing the number of senses under each category where the annotators annotate consistently by the total number of each kind of sense. And considering the potential impact of unbalanced distribution of senses, we also used the Kappa value. And the final agreement study was carried out for the first 300 relations in our corpus. We obtained high agreement results and Kappa value for the discourse relation type and top-level senses ($\\ge {0.9} $ ). However, what we did was more than this, and we also achieved great results on the second-level and third-level senses for the sake of our self-demand for high-quality, finally achieving agreement of 0.85 and Kappa value of 0.83 for these two deeper levels of senses." ] ]
0d7f514f04150468b2d1de9174c12c28e52c5511
How high is the inter-annotator agreement?
[ "agreement of 0.85 and Kappa value of 0.83" ]
[ [ "We measured intra-annotator agreement between two annotators in three aspects: relations, senses, arguments. To be specific, the annotators’ consistency in annotating the type of a specific relation or sense and the position and scope of arguments are measured. To assess the consistency of annotations and also eliminate coincidental annotations, we used agreement rates, which is calculated by dividing the number of senses under each category where the annotators annotate consistently by the total number of each kind of sense. And considering the potential impact of unbalanced distribution of senses, we also used the Kappa value. And the final agreement study was carried out for the first 300 relations in our corpus. We obtained high agreement results and Kappa value for the discourse relation type and top-level senses ($\\ge {0.9} $ ). However, what we did was more than this, and we also achieved great results on the second-level and third-level senses for the sake of our self-demand for high-quality, finally achieving agreement of 0.85 and Kappa value of 0.83 for these two deeper levels of senses." ] ]
4d223225dbf84a80e2235448a4d7ba67bfb12490
How are resources adapted to properties of Chinese text?
[ "removing AltLexC and adding Progression into our sense hierarchy" ]
[ [ "In this paper, we describe our scheme and process in annotating shallow discourse relations using PDTB-style. In view of the differences between English and Chinese, we made adaptations for the PDTB-3 scheme such as removing AltLexC and adding Progression into our sense hierarchy. To ensure the annotation quality, we formulated detailed annotation criteria and quality assurance strategies. After serious training, we annotated 3212 discourse relations, and we achieved a satisfactory consistency of labelling with a Kappa value of greater than 0.85 for most of the indicators. Finally, we display our annotation results in which the distribution of discourse relations and senses differ from that in other corpora which annotate news report or newspaper texts. Our corpus contains more Contingency, Temporal and Comparison relations instead of being governed by Expansion." ] ]
ca26cfcc755f9d0641db0e4d88b4109b903dbb26
How much better are the results compared to baseline models?
[ "F1 score of best authors' model is 55.98 compared to BiLSTM and FastText that have F1 score slighlty higher than 46.61." ]
[ [ "We see that parent quality is a simple yet effective feature and SVM model with this feature can achieve significantly higher ($p<0.001$) F1 score ($46.61\\%$) than distance from the thesis and linguistic features. Claims with higher impact parents are more likely to be have higher impact. Similarity with the parent and thesis is not significantly better than the majority baseline. Although the BiLSTM model with attention and FastText baselines performs better than the SVM with distance from the thesis and linguistic features, it has similar performance to the parent quality baseline.", "We find that the flat representation of the context achieves the highest F1 score. It may be more difficult for the models with a larger number of parameters to perform better than the flat representation since the dataset is small. We also observe that modeling 3 claims on the argument path before the target claim achieves the best F1 score ($55.98\\%$)." ] ]
6cdd61ebf84aa742155f4554456cc3233b6ae2bf
What models that rely only on claim-specific linguistic features are used as baselines?
[ "SVM with RBF kernel" ]
[ [ "Similar to BIBREF9, we experiment with SVM with RBF kernel, with features that represent (1) the simple characteristics of the argument tree and (2) the linguistic characteristics of the claim." ] ]
8e8097cada29d89ca07166641c725e0f8fed6676
How is pragmatic and discourse context added to the dataset?
[ "While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument." ]
[ [ "Claims and impact votes. We collected 47,219 claims from kialo.com for 741 controversial topics and their corresponding impact votes. Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact. While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument. An interesting observation is that, in this dataset, the same claim can have different impact labels depending on the context in which it is presented." ] ]
951098f0b7169447695b47c142384f278f451a1e
What annotations are available in the dataset?
[ "5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact" ]
[ [ "Claims and impact votes. We collected 47,219 claims from kialo.com for 741 controversial topics and their corresponding impact votes. Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact. While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument. An interesting observation is that, in this dataset, the same claim can have different impact labels depending on the context in which it is presented." ] ]
07c59824f5e7c5399d15491da3543905cfa5f751
How big is the dataset used for training/testing?
[ "4,261 days for France and 4,748 for the UK" ]
[ [ "Our work aims at predicting time series using exclusively text. Therefore for both countries the inputs of all our models consist only of written daily weather reports. Under their raw shape, those reports take the form of PDF documents giving a short summary of the country's overall weather, accompanied by pressure, temperature, wind, etc. maps. Note that those reports are written a posteriori, although they could be written in a predictive fashion as well. The reports are published by Météo France and the Met Office, its British counterpart. They are publicly available on the respective websites of the organizations. Both corpora span on the same period as the corresponding time series and given their daily nature, it yields a total of 4,261 and 4,748 documents respectively. An excerpt for each language may be found in tables TABREF6 and TABREF7. The relevant text was extracted from the PDF documents using the Python library PyPDF2." ] ]