question_id: string (length 40)
question: string (length 4 to 171)
answer: sequence
evidence: sequence
96ee62407b1ca2a6538c218781e73e8fbf45094a
How many human subjects were used in the study?
[ "Unanswerable" ]
[ [] ]
ad0a7fe75db5553652cd25555c6980f497e08113
How does the model compute the likelihood of executing to the correct semantic denotation?
[ "By treating logical forms as a latent variable and training a discriminative log-linear model over logical form y given x." ]
[ [ "Since the training data consists only of utterance-denotation pairs, the ranker is trained to maximize the log-likelihood of the correct answer $z$ by treating logical forms as a latent variable:", "It is impractical to rely solely on a neural decoder to find the most likely logical form at run time in the weakly-supervised setting. One reason is that although the decoder utilizes global utterance features for generation, it cannot leverage global features of the logical form since a logical form is conditionally generated following a specific tree-traversal order. To this end, we follow previous work BIBREF21 and introduce a ranker to the system. The role of the ranker is to score the candidate logical forms generated by the parser; at test time, the logical form receiving the highest score will be used for execution. The ranker is a discriminative log-linear model over logical form $y$ given utterance $x$ :" ] ]
f268b70b08bd0436de5310e390ca5f38f7636612
Which conventional alignment models do they use as guidance?
[ "GIZA++ BIBREF3 or fast_align BIBREF4 " ]
[ [ "Inspired by the supervised reordering in conventional SMT, in this paper, we propose a Supervised Attention based NMT (SA-NMT) model. Specifically, similar to conventional SMT, we first run off-the-shelf aligners (GIZA++ BIBREF3 or fast_align BIBREF4 etc.) to obtain the alignment of the bilingual training corpus in advance. Then, treating this alignment result as the supervision of attention, we jointly learn attention and translation, both in supervised manners. Since the conventional aligners delivers higher quality alignment, it is expected that the alignment in the supervised attention NMT will be improved leading to better end-to-end translation performance. One advantage of the proposed SA-NMT is that it implements the supervision of attention as a regularization in the joint training objective (§3.2). Furthermore, since the supervision of attention lies in the middle of the entire network architecture rather than the top ( as in the supervision of translation (see Figure 1(b)), it serves to mitigate the vanishing gradient problem during the back-propagation BIBREF7 ." ] ]
7aae4533dbf097992f23fb2e0574ec5c891ca236
Which dataset do they use?
[ "BTEC corpus, the CSTAR03 and IWSLT04 held out sets, the NIST2008 Open Machine Translation Campaign" ]
[ [ "For the low resource translation task, we used the BTEC corpus as the training data, which consists of 30k sentence pairs with 0.27M Chinese words and 0.33M English words. As development and test sets, we used the CSTAR03 and IWSLT04 held out sets, respectively. We trained a 4-gram language model on the target side of training corpus for running Moses. For training all NMT systems, we employed the same settings as those in the large scale task, except that vocabulary size is 6000, batch size is 16, and the hyper-parameter INLINEFORM0 for SA-NMT.", "We used the data from the NIST2008 Open Machine Translation Campaign. The training data consisted of 1.8M sentence pairs, the development set was nist02 (878 sentences), and the test sets are were nist05 (1082 sentences), nist06 (1664 sentences) and nist08 (1357 sentences)." ] ]
c80669cb444a6ec6249b971213b0226f59940a82
On average, by how much do they reduce the diarization error?
[ "Unanswerable" ]
[ [] ]
10045d7dac063013a8447b5a4bc3a3c2f18f9e82
Do they compare their algorithm to voting without weights?
[ "No" ]
[ [] ]
4e4946c023211712c782637fcca523deb126e519
How do they assign weights between votes in their DOVER algorithm?
[ "Unanswerable" ]
[ [] ]
144714fe0d5a2bb7e21a7bf50df39d790ff12916
What are the state-of-the-art methods that the authors compare their work with?
[ "ISOT dataset: LLVM\nLiar dataset: Hybrid CNN and LSTM with attention" ]
[ [ "Table TABREF21 shows the performance of non-static capsule network for fake news detection in comparison to other methods. The accuracy of our model is 7.8% higher than the best result achieved by LSVM.", "As mentioned in Section SECREF13, the LIAR dataset is a multi-label dataset with short news statements. In comparison to the ISOT dataset, the classification task for this dataset is more challenging. We evaluate the proposed model while using different metadata, which is considered as speaker profiles. Table TABREF30 shows the performance of the capsule network for fake news detection by adding every metadata. The best result of the model is achieved by using history as metadata. The results show that this model can perform better than state-of-the-art baselines including hybrid CNN BIBREF15 and LSTM with attention BIBREF16 by 3.1% on the validation set and 1% on the test set." ] ]
f01aa192d97fa3cc36b6e316355dc5da0e9b97dc
What are the baselines model?
[ "(i) Uniform, (ii) SVR+W, (iii) SVR+O, (iv) C4.5SSL, (v) GLM" ]
[ [ "Results of Predictive Models. For the purpose of evaluation, we report the average results after 10-fold cross-validation. Here we consider five baselines to compare with GraLap: (i) Uniform: assign 3 to all the references assuming equal intensity, (ii) SVR+W: recently proposed Support Vector Regression (SVR) with the feature set mentioned in BIBREF4 , (iii) SVR+O: SVR model with our feature set, (iv) C4.5SSL: C4.5 semi-supervised algorithm with our feature set BIBREF23 , and (v) GLM: the traditional graph-based LP model with our feature set BIBREF9 . Three metrics are used to compare the results of the competing models with the annotated labels: Root Mean Square Error (RMSE), Pearson's correlation coefficient ( INLINEFORM0 ), and coefficient of determination ( INLINEFORM1 )." ] ]
3d583a0675ad34eb7a46767ef5eba5f0ea898aa9
What is the architecture of the model?
[ "LSTM" ]
[ [ "Training: The baseline model was trained using RNNLM BIBREF25 . Then, we trained our LSTM models with different hidden sizes [200, 500]. All LSTMs have 2 layers and unrolled for 35 steps. The embedding size is equal to the LSTM hidden size. A dropout regularization BIBREF26 was applied to the word embedding vector and POS tag embedding vector, and to the recurrent output BIBREF27 with values between [0.2, 0.4]. We used a batch size of 20 in the training. EOS tag was used to separate every sentence. We chose Stochastic Gradient Descent and started with a learning rate of 20 and if there was no improvement during the evaluation, we reduced the learning rate by a factor of 0.75. The gradient was clipped to a maximum of 0.25. For the multi-task learning, we used different loss weights hyper-parameters INLINEFORM0 in the range of [0.25, 0.5, 0.75]. We tuned our model with the development set and we evaluated our best model using the test set, taking perplexity as the final evaluation metric. Where the latter was calculated by taking the exponential of the error in the negative log-form. INLINEFORM1" ] ]
d7d41a1b8bbb1baece89b28962d23ee4457b9c3a
What languages are explored in the work?
[ "Mandarin, English" ]
[ [ "In this section, we present the experimental setting for this task", "Corpus: SEAME (South East Asia Mandarin-English), a conversational Mandarin-English code-switching speech corpus consists of spontaneously spoken interviews and conversations BIBREF8 . Our dataset (LDC2015S04) is the most updated version of the Linguistic Data Consortium (LDC) database. However, the statistics are not identical to BIBREF23 . The corpus consists of two phases. In Phase I, only selected audio segments were transcribed. In Phase II, most of the audio segments were transcribed. According to the authors, it was not possible to restore the original dataset. The authors only used Phase I corpus. Few speaker ids are not in the speaker list provided by the authors BIBREF23 . Therefore as a workaround, we added these ids to the train set. As our future reference, the recording lists are included in the supplementary material." ] ]
b458ebca72e3013da3b4064293a0a2b4b5ef1fa6
What is the state-of-the-art neural coreference resolution model?
[ "BIBREF2 , BIBREF1 " ]
[ [ "We use the English coreference resolution dataset from the CoNLL-2012 shared task BIBREF15 , the benchmark dataset for the training and evaluation of coreference resolution. The training dataset contains 2408 documents with 1.3 million words. We use two state-of-art neural coreference resolution models described by BIBREF2 and BIBREF1 . We report the average F1 value of standard MUC, B INLINEFORM0 and CEAF INLINEFORM1 metrics for the original test set." ] ]
1cbca15405632a2e9d0a7061855642d661e3b3a7
How much improvement do they get?
[ "Their GTRS approach got an improvement of 3.89% compared to SVM and 27.91% compared to Pawlak." ]
[ [] ]
018ef092ffc356a2c0e970ae64ad3c2cf8443288
How large is the dataset?
[ "8757 news records" ]
[ [ "There are 8757 news records in our preprocessed data set. We use Jenks natural breaks BIBREF24 to discretize continuous variables $S_{N\\!P}$ and $S_{Q\\!P}$ both into five categories denoted by nominal values from 0 to 4, where larger values still fall into bins with larger nominal value. Let $D_{N\\!P}$ and $D_{Q\\!P}$ denote the discretized variables $S_{N\\!P}$ and $S_{Q\\!P}$, respectively. We derived the information table that only contains discrete features from our original dataset. A fraction of the information table is shown in Table TABREF23." ] ]
de4e180f49ff187abc519d01eff14ebcd8149cad
What features do they extract?
[ "Inconsistency in Noun Phrase Structures, Inconsistency Between Clauses, Inconsistency Between Named Entities and Noun Phrases, Word Level Feature Using TF-IDF" ]
[ [ "Satirical news is not based on or does not aim to state the fact. Rather, it uses parody or humor to make statement, criticisms, or just amusements. In order to achieve such effect, contradictions are greatly utilized. Therefore, inconsistencies significantly exist in different parts of a satirical news tweet. In addition, there is a lack of entity or inconsistency between entities in news satire. We extracted these features at semantic level from different sub-structures of the news tweet. Different structural parts of the sentence are derived by part-of-speech tagging and named entity recognition by Flair. The inconsistencies in different structures are measured by cosine similarity of word phrases where words are represented by Glove word vectors. We explored three different aspects of inconsistency and designed metrics for their measurements. A word level feature using tf-idf BIBREF22 is added for robustness." ] ]
bdc1f37c8b5e96e3c29cc02dae4ce80087d83284
What do they use as a metric for finding hot spots in meetings?
[ "unweighted average recall (UAR) metric" ]
[ [ "In spite of the windowing approach, the class distribution is still skewed, and an accuracy metric would reflect the particular class distribution in our data set. Therefore, we adopt the unweighted average recall (UAR) metric commonly used in emotion classification research. UAR is a reweighted accuracy where the samples of both classes are weighted equally in aggregate. UAR thus simulates a uniform class distribution. To match the objective, our classifiers are trained on appropriately weighted training data. Note that chance performance for UAR is by definition 50%, making results more comparable across different data sets." ] ]
c54de73b36ab86534d18a295f3711591ce9e1784
Is this approach compared to some baseline?
[ "No" ]
[ [ "Table TABREF24 gives the UAR for each feature subset individually, for all features combined, and for a combination in which one feature subset in turn is left out. The one-feature-set-at-time results suggest that prosody, speech activity and words are of increasing importance in that order. The leave-one-out analysis agrees that the words are the most important (largest drop in accuracy when removed), but on that criterion the prosodic features are more important than speech-activity. The combination of all features is 0.4% absolute better than any other subset, showing that all feature subsets are partly complementary." ] ]
fdd9dea06550a2fd0df7a1e6a5109facf3601d76
How big is ICSI meeting corpus?
[ " 75 meetings and about 70 hours of real-time audio duration" ]
[ [ "The ICSI Meeting Corpus BIBREF11 is a collection of meeting recordings that has been thoroughly annotated, including annotations for involvement hot spots BIBREF12, linguistic utterance units, and word time boundaries based on forced alignment. The dataset is comprised of 75 meetings and about 70 hours of real-time audio duration, with 6 speakers per meeting on average. Most of the participants are well-acquainted and friendly with each other. Hot spots were originally annotated with 8 levels and degrees, ranging from `not hot' to `luke warm' to `hot +'. Every utterance was labeled with one of these discrete labels by a single annotator. Hightened involvement is rare, being marked on only 1% of utterances." ] ]
3786164eaf3965c11c9969c4463b8c3223627067
What annotations are available in ICSI meeting corpus?
[ "8 levels and degrees, ranging from `not hot' to `luke warm' to `hot +'. Every utterance was labeled with one of these discrete labels by a single annotator" ]
[ [ "The ICSI Meeting Corpus BIBREF11 is a collection of meeting recordings that has been thoroughly annotated, including annotations for involvement hot spots BIBREF12, linguistic utterance units, and word time boundaries based on forced alignment. The dataset is comprised of 75 meetings and about 70 hours of real-time audio duration, with 6 speakers per meeting on average. Most of the participants are well-acquainted and friendly with each other. Hot spots were originally annotated with 8 levels and degrees, ranging from `not hot' to `luke warm' to `hot +'. Every utterance was labeled with one of these discrete labels by a single annotator. Hightened involvement is rare, being marked on only 1% of utterances." ] ]
2fd8688c8f475ab43edaf5d189567f8799b018e1
Is such bias caused by bad annotation?
[ "No" ]
[ [ "Natural Language Inference (NLI) is often used to gauge a model's ability to understand a relationship between two texts BIBREF0 , BIBREF1 . In NLI, a model is tasked with determining whether a hypothesis (a woman is sleeping) would likely be inferred from a premise (a woman is talking on the phone). The development of new large-scale datasets has led to a flurry of various neural network architectures for solving NLI. However, recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts BIBREF2 , BIBREF3 , BIBREF4 . For instance, in some datasets, negation words like “not” and “nobody” are often associated with a relationship of contradiction. As a ramification of such biases, models may not generalize well to other datasets that contain different or no such biases." ] ]
b68d2549431c524a86a46c63960b3b283f61f445
How do they determine similar environments for fragments in their data augmentation scheme?
[ "fragments are interchangeable if they occur in at least one lexical environment that is exactly the same" ]
[ [ "The only remaining question is what makes two environments similar enough to infer the existence of a common category. There is, again, a large literature on this question (including the aforementioned language modeling, unsupervised parsing, and alignment work), but in the current work we will make use of a very simple criterion: fragments are interchangeable if they occur in at least one lexical environment that is exactly the same. Given a window size INLINEFORM0 , a sequence of INLINEFORM1 words INLINEFORM2 , and a fragment consisting of a set of INLINEFORM3 spans INLINEFORM4 , the environment is given by INLINEFORM5 , i.e. a INLINEFORM6 -word window around each span of the fragment." ] ]
7f5059b4b5e84b7705835887f02a51d4d016316a
Do they experiment with language modeling on large datasets?
[ "No" ]
[ [] ]
df79d04cc10a01d433bb558d5f8a51bfad29f46b
Which languages do they test on?
[ "Answer with content missing: (Applications section) We use Wikipedia articles\nin five languages\n(Kinyarwanda, Lao, Pashto, Tok Pisin, and a subset of English) as well as the Na dataset of Adams\net al. (2017).\nSelect:\nKinyarwanda, Lao, Pashto, Tok Pisin, and a subset of English" ]
[ [ "Space requirements might still be considerable (comparable to those used by n-gram language models), and similar tricks can be used to reduce memory usage BIBREF27 . The above pseudocode is agnostic with respect to the choice of fragmentation and environment functions; task-specific choices are described in more detail for each experiment below.", "Discussion" ] ]
182b6d77b51fa83102719a81862891f49c23a025
What limitations are mentioned?
[ "deciding publisher partisanship, risk annotator bias because of short description text provided to annotators" ]
[ [ "We identified some limitations during the process, which we describe in this section.", "When deciding publisher partisanship, the number of people from whom we computed the score was small. For example, de Stentor is estimated to reach 275K readers each day on its official website. Deciding the audience leaning from 55 samples was subject to sampling bias. Besides, the scores differ very little between publishers. None of the publishers had an absolute score higher than 1, meaning that even the most partisan publisher was only slightly partisan. Deciding which publishers we consider as partisan and which not is thus not very reliable.", "The article-level annotation task was not as well-defined as on a crowdsourcing platform. We included the questions as part of an existing survey and didn't want to create much burden to the annotators. Therefore, we did not provide long descriptive text that explained how a person should annotate an article. We thus run under the risk of annotator bias. This is one of the reasons for a low inter-rater agreement." ] ]
441886f0497dc84f46ed8c32e8fa32983b5db42e
What examples of applications are mentioned?
[ "partisan news detector" ]
[ [ "This dataset is aimed to contribute to developing a partisan news detector. There are several ways that the dataset can be used to devise the system. For example, it is possible to train the detector using publisher-level labels and test with article-level labels. It is also possible to use semi-supervised learning and treat the publisher-level part as unsupervised, or use only the article-level part. We also released the raw survey data so that new mechanisms to decide the article-level labels can be devised." ] ]
62afbf8b1090e56fdd2a2fa2bdb687c3995477f6
Did they crowdsource the annotations?
[ "Yes" ]
[ [ "To collect article-level labels, we utilized a platform in the company that has been used by the market research team to collect surveys from the subscribers of different news publishers. The survey works as follows: The user is first presented with a set of selected pages (usually 4 pages and around 20 articles) from the print paper the day before. The user can select an article each time that he or she has read, and answer some questions about it. We added 3 questions to the existing survey that asked the level of partisanship, the polarity of partisanship, and which pro- or anti- entities the article presents. We also asked the political standpoint of the user. The complete survey can be found in Appendices." ] ]
d3341eefe4188ee8a68914a2e8c9047334997e84
Why do they conclude that the usage of Gated-Attention provides no competitive advantage over concatenation in this setting?
[ "concatenation consistently outperforms the gated-attention mechanism for both training and testing instructions" ]
[ [ "We notice that the concatenation consistently outperforms the gated-attention mechanism for both training and testing instructions. We suspect that the gated-attention is useful in the scenarios where objects are described in terms of multiple attributes, but it has no to harming effect when it comes to the order connectors." ] ]
770b4ec5c9a9706fef89a9aae45bb3e713d6b8ee
What was the best team's system?
[ "Unanswerable" ]
[ [] ]
a379c380ac9f67f824506951444c873713405eed
What are the baselines?
[ "CNN, LSTM, BERT" ]
[ [] ]
334f90bb715d8950ead1be0742d46a3b889744e7
What semantic features help in detecting whether a piece of text is genuine or generated?
[ "No feature is given, only discussion that semantic features are use in practice and yet to be discovered how to embed that knowledge into statistical decision theory framework." ]
[ [ "Many practical fake news detection algorithms use a kind of semantic side information, such as whether the generated text is factually correct, in addition to its statistical properties. Although statistical side information would be straightforward to incorporate in the hypothesis testing framework, it remains to understand how to cast such semantic knowledge in a statistical decision theory framework." ] ]
53c8416f2983e07a7fa33bcb4c4281bbf49c8164
Which language models generate text that can be easier to classify as genuine or generated?
[ "Outputs from models that are better in the sense of cross-entropy or perplexity are harder to distinguish from authentic text." ]
[ [ "Suppose we are given a specific language model such as GPT-2 BIBREF6, GROVER BIBREF8, or CTRL BIBREF7, and it is characterized in terms of estimates of either cross-entropy $H(P,Q)$ or perplexity $\\mathrm {PPL}(P,Q)$.", "We can see directly that the Neyman-Pearson error of detection in the case of i.i.d. tokens is:", "and similar results hold for ergodic observations.", "Since we think of $H(P)$ as a constant, we observe that the error exponent for the decision problem is precisely an affine shift of the cross-entropy. Outputs from models that are better in the sense of cross-entropy or perplexity are harder to distinguish from authentic text.", "Thus we see that intuitive measures of generative text quality match a formal operational measure of indistinguishability that comes from the hypothesis testing limit." ] ]
5b2480c6533696271ae6d91f2abe1e3a25c4ae73
Is the assumption that natural language is stationary and ergodic valid?
[ "It is not completely valid for natural languages because of diversity of language - this is called smoothing requirement." ]
[ [ "Manning and Schütze argue that, even though not quite correct, language text can be modeled as stationary, ergodic random processes BIBREF29, an assumption that we follow. Moreover, given the diversity of language production, we assume this stationary ergodic random process with finite alphabet $\\mathcal {A}$ denoted $X = \\lbrace X_i, -\\infty < i < \\infty \\rbrace $ is non-null in the sense that always $P(x_{-m}^{-1}) > 0$ and", "This is sometimes called the smoothing requirement." ] ]
a516b37ad9d977cb9d4da3897f942c1c494405fe
Which models do they try out?
[ "DocQA, SAN, QANet, ASReader, LM, Random Guess" ]
[ [] ]
7f5ab9a53aef7ea1a1c2221967057ee71abb27cb
Do they compare the execution time of their model against other models?
[ "No" ]
[ [ "The base model composed of DSConv layers without grouping achieves the state-of-the-art accuracy of 96.6% on the Speech Commands test set. The low-parameter model with GDSConv achieves almost the same accuracy of 96.4% with only about half the parameters. This validates the effectiveness of GDSConv for model size reduction. Table TABREF15 lists these results in comparison with related work. Compared to the DSConv network in BIBREF1, our network is more efficient in terms of accuracy for a given parameter count. Their biggest model has a 1.2% lower accuracy than our base model while having about 4 times the parameters. Choi et al. BIBREF3 has the most competitive results while we are still able to improve upon their accuracy for a given number of parameters. They are using 1D convolution along the time dimension as well which may be evidence that this yields better performance for audio processing or at least KWS." ] ]
7fbbe191f4d877cc6af89c00fcfd5b5774d2a2bb
What is the memory footprint decrease of their model in comparison to other models?
[ "Unanswerable" ]
[ [] ]
f42e61f9ad06fb782d1574eb973c880add4f76d2
What architectural factors were investigated?
[ "type of recurrent unit, type of attention, choice of sequential vs. tree-based model structure" ]
[ [ "We find that all the factors we tested can qualitatively affect how a model generalizes on the question formation task. These factors are the type of recurrent unit, the type of attention, and the choice of sequential vs. tree-based model structure. Even though all these factors affected the model's decision between move-main and move-first, only the use of a tree-based model can be said to impart a hierarchical bias, since this was the only model type that chose a hierarchical generalization across both of our tasks. Specific findings that support these general conclusions include:" ] ]
f197e0f61f7980c64a76a3a9657762f1f0edb65b
Any other bias may be detected?
[ "Unanswerable" ]
[ [] ]
b5484a0f03d63d091398d3ce4f841a45062438a7
What is the meta-embedding method introduced in this paper?
[ "proposed method comprises of two steps: a neighbourhood reconstruction step (Section \"Nearest Neighbour Reconstruction\" ), and a projection step (Section \"Projection to Meta-Embedding Space\" ). In the reconstruction step, we represent the embedding of a word by the linearly weighted combination of the embeddings of its nearest neighbours in each source embedding space. " ]
[ [ "To overcome the above-mentioned challenges, we propose a locally-linear meta-embedding learning method that (a) requires only the words in the vocabulary of each source embedding, without having to predict embeddings for missing words, (b) can meta-embed source embeddings with different dimensionalities, (c) is sensitive to the diversity of the neighbourhoods of the source embeddings.", "Our proposed method comprises of two steps: a neighbourhood reconstruction step (Section \"Nearest Neighbour Reconstruction\" ), and a projection step (Section \"Projection to Meta-Embedding Space\" ). In the reconstruction step, we represent the embedding of a word by the linearly weighted combination of the embeddings of its nearest neighbours in each source embedding space. Although the number of words in the vocabulary of a particular source embedding can be potentially large, the consideration of nearest neighbours enables us to limit the representation to a handful of parameters per each word, not exceeding the neighbourhood size. The weights we learn are shared across different source embeddings, thereby incorporating the information from different source embeddings in the meta-embedding. Interestingly, vector concatenation, which has found to be an accurate meta-embedding method, can be derived as a special case of this reconstruction step." ] ]
18d8b52b4409c718bf1cc90ce9e013206034bbd9
How long are dialogue recordings used for evaluation?
[ "average 12.8 min per recording" ]
[ [ "We conducted our experiments on the CSJ BIBREF25, which is one of the most widely used evaluation sets for Japanese speech recognition. The CSJ consists of more than 600 hrs of Japanese recordings.", "While most of the content is lecture recordings by a single speaker, CSJ also contains 11.5 hrs of 54 dialogue recordings (average 12.8 min per recording) with two speakers, which were the main target of ASR and speaker diarization in this study. During the dialogue recordings, two speakers sat in two adjacent sound proof chambers divided by a glass window. They could talk with each other over voice connection through a headset for each speaker. Therefore, speech was recorded separately for each speaker, and we generated mixed monaural recordings by mixing the corresponding speeches of two speakers. When mixing two recordings, we did not apply any normalization of speech volume. Due to this recording procedure, we were able to use non-overlapped speech to evaluate the oracle WERs." ] ]
43d8057ff0d3f0c745a7164aed7ed146674630e0
What do the models that they compare predict?
[ "national dialects of English" ]
[ [ "The main set of experiments uses a Linear Support Vector Machine (Joachims, 1998) to classify dialects using CxG features. Parameters are tuned using the development data. Given the general robust performance of SVMs in the literature relative to other similar classifiers on variation tasks (c.f., Dunn, et al., 2016), we forego a systematic evaluation of classifiers.", "This paper has used data-driven language mapping to select national dialects of English to be included in a global dialect identification model. The main experiments have focused on a dynamic syntactic feature set, showing that it is possible to predict dialect membership within-domain with only a small loss of performance against lexical models. This work raises two remaining problems:" ] ]
ebb7313eee2ea447abc83cb08b658b57c7eaa600
What SMT models did they look at?
[ "automatic translator with Moses" ]
[ [ "To evaluate the usefulness of our corpus for SMT purposes, we used it to train an automatic translator with Moses BIBREF8 . We also trained an NMT model using the OpenNMT system BIBREF9 , and used the Google Translate Toolkit to produce state-of-the-art comparison results. The produced translations were evaluated according to the BLEU score BIBREF10 ." ] ]
df934aa1db09c14b3bf4bc617491264e2192390b
Which NMT models did they experiment with?
[ "2-layer LSTM model with 500 hidden units in both encoder and decoder" ]
[ [ "Prior to the MT experiments, sentences were randomly split in three disjoint datasets: training, development, and test. Approximately 13,000 sentences were allocated in the development and test sets, while the remaining was used for training. For the SMT experiment, we followed the instructions of Moses baseline system. For the NMT experiment, we used the Torch implementation to train a 2-layer LSTM model with 500 hidden units in both encoder and decoder, with 12 epochs. During translation, the option to replace UNK words by the word in the input language was used, since this is also the default in Moses." ] ]
346f10ddb34503dfba72b0e49afcdf6a08ecacfa
How big are the PIE datasets obtained from dictionaries?
[ "46 documents makes up our base corpus" ]
[ [ "We use only the written part of the BNC. From this, we extract a set of documents with the aim of having as much genre variation as possible. To achieve this, we select the first document in each genre, as defined by the classCode attribute (e.g. nonAc, commerce, letters). The resulting set of 46 documents makes up our base corpus. Note that these documents vary greatly in size, which means the resulting corpus is varied, but not balanced in terms of size (Table TABREF43). The documents are split across a development and test set, as specified at the end of Section SECREF46. We exclude documents with IDs starting with A0 from all annotation and evaluation procedures, as these were used during development of the extraction tool and annotation guidelines." ] ]
2480dfe2d996afef840a81bd920aeb9c26e5b31d
What complementary PIE extraction methods are used to increase reliability further?
[ "exact string matching, inflectional string matching" ]
[ [ "We experiment with two such combinations, by simply taking the union of the sets of extracted idioms of both systems, and filtering out duplicates. Results are shown in Table TABREF77. Both combinations show the expected effect: a clear gain in recall at a minimal loss in precision. Compared to the in-context-parsing-based system, the combination with exact string matching yields a gain in recall of over 6%, and the combination with inflectional string matching yields an even bigger gain of almost 8%, at precision losses of 0.6% and 0.8%, respectively. This indicates that the systems are very much complementary in the PIEs they extract. It also means that, when used in practice, combining inflectional string matching and parse-based extraction is the most reliable configuration." ] ]
0fec9da2bc80a12a7a6d6600b9ecf3e122732b60
Are PIEs extracted automatically subjected to human evaluation?
[ "Yes" ]
[ [ "For parser-based extraction, systems with and without in-context parsing, ignoring labels, and ignoring directionality are tested. For the three string-based extraction methods, varying numbers of intervening words and case sensitivity are evaluated. Evaluation is done using the development set, consisting of 22 documents and 1112 PIE candidates, and the test set, which consists of 23 documents and 1127 PIE candidates. For each method the best set of parameters and/or options is determined using the development set, after which the best variant by F1-score of each method is evaluated on the test set.", "Since these documents in the corpus are exhaustively annotated for PIEs (see Section SECREF40), we can calculate true and false positives, and false negatives, and thus precision, recall and F1-score. The exact spans are ignored, because the spans annotated in the evaluation corpus are not completely reliable. These were automatically generated during candidate extraction, as described in Section SECREF45. Rather, we count an extraction as a true positive if it finds the correct PIE type in the correct sentence." ] ]
5499527beadb7f5dd908bd659cad83d6a81119bd
What dictionaries are used for automatic extraction of PIEs?
[ "Wiktionary, Oxford Dictionary of English Idioms, UsingEnglish.com (UE), Sporleder corpus, VNC dataset, SemEval-2013 Task 5 dataset" ]
[ [ "We evaluate the quality of three idiom dictionaries by comparing them to each other and to three idiom corpora. Before we report on the comparison we first describe why we select and how we prepare these resources. We investigate the following six idiom resources:", "Wiktionary;", "the Oxford Dictionary of English Idioms (ODEI, BIBREF31);", "UsingEnglish.com (UE);", "the Sporleder corpus BIBREF10;", "the VNC dataset BIBREF9;", "There are four sizeable sense-annotated PIE corpora for English: the VNC-Tokens Dataset BIBREF9, the Gigaword dataset BIBREF14, the IDIX Corpus BIBREF10, and the SemEval-2013 Task 5 dataset BIBREF15. An overview of these corpora is presented in Table TABREF7." ] ]
191d4fe8a37611b2485e715bb55ff1a30038ad6a
Are experiments performed with any other pair of languages, and how did the proposed method perform compared to other models?
[ "No" ]
[ [ "We evaluate the proposed transfer learning techniques in two non-English language pairs of WMT 2019 news translation tasks: French$\\rightarrow $German and German$\\rightarrow $Czech." ] ]
6e76f114209f59b027ec3b3c8c9cdfc3e682589f
Is pivot language used in experiments English or some other language?
[ "Yes" ]
[ [ "Nonetheless, the main caveat of this basic pre-training is that the source encoder is trained to be used by an English decoder, while the target decoder is trained to use the outputs of an English encoder — not of a source encoder. In the following, we propose three techniques to mitigate the inconsistency of source$\\rightarrow $pivot and pivot$\\rightarrow $target pre-training stages. Note that these techniques are not exclusive and some of them can complement others for a better performance of the final model." ] ]
6583e8bfa7bcc3a792a90b30abb316e6d423f49b
What are the multilingual models that were outperformed in the performed experiments?
[ "Direct source$\\rightarrow $target: A standard NMT model trained on given source$\\rightarrow $target, Multilingual: A single, shared NMT model for multiple translation directions, Many-to-many: Trained for all possible directions among source, target, and pivot languages, Many-to-one: Trained for only the directions to target language" ]
[ [ "Baselines We thoroughly compare our approaches to the following baselines:", "Direct source$\\rightarrow $target: A standard NMT model trained on given source$\\rightarrow $target parallel data.", "Multilingual: A single, shared NMT model for multiple translation directions BIBREF6.", "Many-to-many: Trained for all possible directions among source, target, and pivot languages.", "Many-to-one: Trained for only the directions to target language, i.e., source$\\rightarrow $target and pivot$\\rightarrow $target, which tends to work better than many-to-many systems BIBREF27." ] ]
9a5d02062fa7eec7097f1dc1c38b5e6d5c82acdf
What are the common captioning metrics?
[ "the CIDEr-D BIBREF22 , SPICE BIBREF23 , BLEU BIBREF24 , METEOR BIBREF25 , and ROUGE-L BIBREF26 metrics" ]
[ [ "We trained and evaluated our algorithm on the Microsoft COCO (MS-COCO) 2014 Captions dataset BIBREF21 . We report results on the Karpathy validation and test splits BIBREF8 , which are commonly used in other image captioning publications. The dataset contains 113K training images with 5 human annotated captions for each image. The Karpathy test and validation sets contain 5K images each. We evaluate our models using the CIDEr-D BIBREF22 , SPICE BIBREF23 , BLEU BIBREF24 , METEOR BIBREF25 , and ROUGE-L BIBREF26 metrics. While it has been shown experimentally that BLEU and ROUGE have lower correlation with human judgments than the other metrics BIBREF23 , BIBREF22 , the common practice in the image captioning literature is to report all the mentioned metrics." ] ]
c38a48d65bb21c314194090d0cc3f1a45c549dd6
Which English domains do they evaluate on?
[ "Conll, Weblogs, Newsgroups, Reviews, Answers" ]
[ [ "We further evaluate our approach on our main evaluation corpus. The method is tested on both in-domain and out-of-domain parsing. Our DLM-based approach achieved large improvement on all five domains evaluated (Conll, Weblogs, Newsgroups, Reviews, Answers). We achieved the labelled and unlabelled improvements of up to 0.91% and 0.82% on Newsgroups domain. On average we achieved 0.6% gains for both labelled and unlabelled scores on four out-of-domain test sets. We also improved the in-domain accuracy by 0.36% (LAS) and 0.4% (UAS)." ] ]
5450f27ccc0406d3bffd08772d8b59004c2716da
What is the road exam metric?
[ "a new metric to reveal a model's robustness against exposure bias" ]
[ [ "In this paper, we adopt two simple strategies, multi-range reinforcing and multi-entropy sampling to overcome the reward sparseness during training. With the tricks applied, our model demonstrates a significant improvement over competing models. In addition, we propose road exam as a new metric to reveal a model's robustness against exposure bias." ] ]
12ac76b77f22ed3bcb6430bcd0b909441d79751b
What are the competing models?
[ "TEACHER FORCING (TF), SCHEDULED SAMPLING (SS), SEQGAN, RANKGAN, LEAKGAN." ]
[ [] ]
0038b073b7cca847033177024f9719c971692042
How is the input triple translated to a slot-filling task?
[ "The relation R(x,y) is mapped onto a question q whose answer is y" ]
[ [ "We show that it is possible to reduce relation extraction to the problem of answering simple reading comprehension questions. We map each relation type $R(x,y)$ to at least one parametrized natural-language question $q_x$ whose answer is $y$ . For example, the relation $educated\\_at(x,y)$ can be mapped to “Where did $x$ study?” and “Which university did $x$ graduate from?”. Given a particular entity $x$ (“Turing”) and a text that mentions $x$ (“Turing obtained his PhD from Princeton”), a non-null answer to any of these questions (“Princeton”) asserts the fact and also fills the slot $y$ . Figure 1 illustrates a few more examples." ] ]
ad6415f4351c44ffae237524696a3f76f383bfd5
Is the model compared against state-of-the-art models on these datasets?
[ "Yes" ]
[ [ "Deep convolutional neural networks (CNNs) with 2D convolutions and small kernels BIBREF1, have achieved state-of-the-art results for several speech recognition tasks BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. The accuracy of those models grows with their complexity, leading to redundant latent representations. Several approaches have been proposed in the literature to reduce this redundancy BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, and therefore to improve their efficiency.", "Models: Our baseline CNN model BIBREF21 consists of 15 convolutional and one fully-connected layer. We use $3\\times 3$ kernels throughout the network. We start with 64 output channels in the first layer and double them after 3 and 9 layers. We use batch normalization in every convolutional layer, and ReLU afterwards (unless a reverse order is noted). The initial learning rate is 0.001. We use early stopping for training." ] ]
e097c2ec6021b1c1195b953bf3e930374b74d8eb
How is octave convolution concept extended to multiple resolutions and octaves?
[ "The resolution of the low-frequency feature maps is reduced by an octave – height and width dimensions are divided by 2. In this work, we explore spatial reduction by up to 3 octaves – dividing by $2^t$, where $t=1,2,3$ – and for up to 4 groups. We refer to such a layer as a multi-octave convolutional (MultiOctConv) layer," ]
[ [ "An octave convolutional layer BIBREF0 factorizes the output feature maps of a convolutional layer into two groups. The resolution of the low-frequency feature maps is reduced by an octave – height and width dimensions are divided by 2. In this work, we explore spatial reduction by up to 3 octaves – dividing by $2^t$, where $t=1,2,3$ – and for up to 4 groups. We refer to such a layer as a multi-octave convolutional (MultiOctConv) layer, and an example with three groups and reductions of one and two octaves is depicted in Fig. FIGREF1." ] ]
320d72a9cd19b52c29dda9ddecd520c9938a717f
Does this paper address the variation among English dialects regarding these hedges?
[ "No" ]
[ [] ]
21cbcd24863211b02b436f21deaf02125f34da4c
On which dataset is the model trained?
[ "Couples Therapy Corpus (CoupTher) BIBREF21" ]
[ [ "For evaluating the proposed model on behavior related data, we employ the Couples Therapy Corpus (CoupTher) BIBREF21 and Cancer Couples Interaction Dataset (Cancer) BIBREF22. These are the targeted conditions under which a behavior-gated language model can offer improved performance.", "We utilize the Couple's Therapy Corpus as an in-domain experimental corpus since our behavior classification model is also trained on the same. The RNNLM architecture is similar to BIBREF1, but with hyperparameters optimized for the couple's corpus. The results are tabulated in Table TABREF16 in terms of perplexity. We find that the behavior gated language models yield lower perplexity compared to vanilla LSTM language model. A relative improvement of 2.43% is obtained with behavior gating on the couple's data." ] ]
37bc8763eb604c14871af71cba904b7b77b6e089
How is the module that analyzes behavioral state trained?
[ "pre-trained to identify the presence of behavior from a sequence of word using the Couples Therapy Corpus" ]
[ [ "The behavior model was implemented using an RNN with LSTM units and trained with the Couples Therapy Corpus. Out of the 33 behavioral codes included in the corpus we applied the behaviors Acceptance, Blame, Negativity, Positivity, and Sadness to train our models. This is motivated from previous works which showed good separability in these behaviors as well as being easy to interpret. The behavior model is pre-trained to identify the presence of each behavior from a sequence of words using a multi-label classification scheme. The pre-trained portion of the behavior model was implemented using a single layer RNN with LSTM units with dimension size 50." ] ]
a81941f933907e4eb848f8aa896c78c1157bff20
Can the model add new relations to the knowledge graph, or just new entities?
[ "The model does not add new relations to the knowledge graph." ]
[ [ "In ConMask, we use a similar idea to select the most related words given some relationship and mask irrelevant words by assigning a relationship-dependent similarity score to words in the given entity description. We formally define relationship-dependent content masking as:", "ConMask selects words that are related to the given relationship to mitigate the inclusion of irrelevant and noisy words. From the relevant text, ConMask then uses fully convolutional network (FCN) to extract word-based embeddings. Finally, it compares the extracted embeddings to existing entities in the KG to resolve a ranked list of target entities. The overall structure of ConMask is illustrated in Fig. 1 . Later subsections describe the model in detail." ] ]
252677c93feb2cb0379009b680f0b4562b064270
How large is the dataset?
[ "6,127 scientific entities, including 2,112 Process, 258 Method, 2,099 Material, and 1,658 Data entities" ]
[ [ "Table TABREF17 shows our annotated corpus characteristics. Our corpus comprises a total of 6,127 scientific entities, including 2,112 Process, 258 Method, 2,099 Material, and 1,658 Data entities. The number of entities per abstract directly correlates with the length of the abstracts (Pearson's R 0.97). Among the concepts, Process and Material directly correlate with abstract length (R 0.8 and 0.83, respectively), while Data has only a slight correlation (R 0.35) and Method has no correlation (R 0.02)." ] ]
fe6bb55b28f14ed8ac82c122681905397e31279d
Why is a Gaussian process an especially appropriate method for this classification problem?
[ "avoids the need for expensive cross-validation for hyperparameter selection" ]
[ [ "We use Gaussian Processes as this probabilistic kernelised framework avoids the need for expensive cross-validation for hyperparameter selection. Instead, the marginal likelihood of the data can be used for hyperparameter selection." ] ]
b3ac67232c8c7d5a759ae025aee85e9c838584eb
Do the authors do manual evaluation?
[ "No" ]
[ [ "We compare the performance of our model (Table 2 ) with traditional Bag of Words (BoW), TF-IDF, and n-grams features based classifiers. We also compare against averaged Skip-Gram BIBREF29 , Doc2Vec BIBREF30 , CNN BIBREF23 , Hierarchical Attention (HN-ATT) BIBREF24 and hierarchical network (HN) models. HN it is similar to our model HN-SA but without any self attention.", "Analysis: As is evident from the experiments on both the versions of SWBD, our model (HN-SA) outperforms traditional feature based topic spotting models and deep learning based document classification models. It is interesting to see that simple BoW and n-gram baselines are quite competitive and outperform some of the deep learning based document classification model. Similar observation has also been reported by BIBREF31 ( BIBREF31 ) for the task of sentiment analysis. The task of topic spotting is arguably more challenging than document classification. In the topic spotting task, the number of output classes (66/42 classes) is much more than those in document classification (5/6 classes), which is done mainly on the texts from customer reviews. Dialogues in SWBD have on an average 200 utterances and are much longer texts than customer reviews. Additionally, the number of dialogues available for training the model is significantly lesser than customer reviews. We further investigated the performance on SWBD2 by examining the confusion matrix of the model. Figure 2 shows the heatmap of the normalized confusion matrix of the model on SWBD2. For most of the classes the classifier is able to predict accurately. However, the model gets confused between the classes which are semantically close (w.r.t. terms used) to each other, for example, the model gets confused between pragmatically similar topics e.g. HOBBIES€™ vs €˜GARDENING€™, €˜MOVIES vs €˜TV PROGRAMS’, €˜RIGHT TO PRIVACY vs€˜ DRUG TESTING€™." ] ]
43878a6a8fc36aaae29d95815355aaa7d25c3b53
What datasets did they use?
[ "the personalized bAbI dialog dataset" ]
[ [ "Our experiments on a goal-oriented dialog corpus, the personalized bAbI dialog dataset, show that leveraging personal information can significantly improve the performance of dialog systems. The Personalized MemN2N outperforms current state-of-the-art methods with over 7% improvement in terms of per-response accuracy. A test with real human users also illustrates that the proposed model leads to better outcomes, including higher task completion rate and user satisfaction." ] ]
68ff2a14e6f0e115ef12c213cf852a35a4d73863
Do Twitter users tend to tweet about the DDoS attack when it occurs? How much data supports this assumption?
[ "The dataset contains about 590 tweets about DDoS attacks." ]
[ [ "In this subsection we discuss the experiment on the attack tweets found in the whole dataset. As stated in section 3.3, the whole dataset was divided into two parts. $D_a$ contained all of the tweets collected on the attack day of the five attacks mentioned in section 4.2. And $D_b$ contained all of the tweets collected before the five attacks. There are 1180 tweets in $D_a$ and 7979 tweets in $D_b$. The tweets on the attack days ($D_a$) are manually annotated and only 50 percent of those tweets are actually about a DDoS attack." ] ]
0b54032508c96ff3320c3db613aeb25d42d00490
What is the training and test data used?
[ "Tweets related to a Bank of America DDos attack were used as training data. The test datasets contain tweets related to attacks to Bank of America, PNC and Wells Fargo." ]
[ [ "We collected tweets related to five different DDoS attacks on three different American banks. For each attack, all the tweets containing the bank's name posted from one week before the attack until the attack day were collected. There are in total 35214 tweets in the dataset. Then the collected tweets were preprocessed as mentioned in the preprocessing section.", "Only the tweets from the Bank of America attack on 09/19/2012 were used in this experiment. The tweets before the attack day and on the attack day were used to train the two LDA models mentioned in the approach section.", "In this subsection we evaluate how good the model generalizes. To achieve that, the dataset is divided into two groups, one is about the attacks on Bank of America and the other group is about PNC and Wells Fargo. The only difference between this experiment and the experiment in section 4.4 is the dataset. In this experiment setting $D_a$ contains only the tweets collected on the days of attack on PNC and Wells Fargo. $D_b$ only contains the tweets collected before the Bank of America attack. There are 590 tweets in $D_a$ and 5229 tweets in $D_b$. In this experiment, we want to find out whether a model trained on Bank of America data can make good classification on PNC and Wells Fargo data." ] ]
86be8241737dd8f7b656a3af2cd17c8d54bf1553
Was performance of the weakly-supervised model compared to the performance of a supervised model?
[ "Yes" ]
[ [ "The precision when labeling the first x ranked tweets as attack tweet is shown in the figure FIGREF39. The x-axis is the number of ranked tweets treated as attack tweets. And the y-axis is the corresponding precision. The straight line in figures FIGREF39, FIGREF43 and FIGREF51 is the result of a supervised LDA algorithm which is used as a baseline. Supervised LDA achieved 96.44 percent precision with 10 fold cross validation." ] ]
a4422019d19f9c3d95ce8dc1d529bf3da5edcfb1
Do the tweets come from a specific region?
[ "No" ]
[ [ "Table TABREF14 presents the distribution of the tweets by country before and after the filtering process. A large portion of the samples is from India because the MeToo movement has peaked towards the end of 2018 in India. There are very few samples from Russia likely because of content moderation and regulations on social media usage in the country. Figure FIGREF15 gives a geographical distribution of the curated dataset." ] ]
bb169a0624aefe66d3b4b1116bbd152d54f9e31b
Did they experiment with the corpus?
[ "Yes" ]
[ [ "The corpus creation process involved a small number of people that have voluntarily joined the initiative, with the authors of this paper directing the work. Initially, we searched for NER resources in Romanian, and found none. Then we looked at English resources and read the in-depth ACE guide, out of which a 16-class draft evolved. We then identified a copy-right free text from which we hand-picked sentences to maximize the amount of entities while maintaining style balance. The annotation process was a trial-and-error, with cycles composed of annotation, discussing confusing entities, updating the annotation guide schematic and going through the corpus section again to correct entities following guide changes. The annotation process was done online, in BRAT. The actual annotation involved 4 people, has taken about 6 months (as work was volunteer-based, we could not have reached for 100% time commitment from the people involved), and followed the steps:" ] ]
0d7de323fd191a793858386d7eb8692cc924b432
What writing styles are present in the corpus?
[ "current news, historical news, free time, sports, juridical news pieces, personal adverts, editorials." ]
[ [] ]
ca8e023d142d89557714d67739e1df54d7e5ce4b
How did they determine the distinct classes?
[ "inspired by the OntoNotes5 corpus BIBREF7 as well as the ACE (Automatic Content Extraction) English Annotation Guidelines for Entities Version 6.6 2008.06.13 BIBREF8" ]
[ [ "The 16 classes are inspired by the OntoNotes5 corpus BIBREF7 as well as the ACE (Automatic Content Extraction) English Annotation Guidelines for Entities Version 6.6 2008.06.13 BIBREF8. Each class will be presented in detail, with examples, in the section SECREF3 A summary of available classes with word counts for each is available in table TABREF18." ] ]
3fddd9f6707b9e40e35518dae7f6da7c4cb77d16
Do they jointly tackle multiple tagging problems?
[ "No" ]
[ [ "We evaluate our method on three tagging tasks: POS tagging (Pos), morphological tagging (Morph) and supertagging (Stag).", "We select these tasks as examples for tagging applications because they differ strongly in tag set sizes. Generally, the Pos set sizes for all the languages are no more than 17 and Stag set sizes are around 200. When treating morphological features as a string (i.e. not splitting into key-value pairs), the sizes of the Morph tag sets range from about 100 up to 2000.", "The test results for the three tasks are shown in Table TABREF17 in three groups. The first group of seven columns are the results for Pos, where both LSTM and CNN have three variations of input features: word only ( INLINEFORM0 ), character only ( INLINEFORM1 ) and both ( INLINEFORM2 ). For Morph and Stag, we only use the INLINEFORM3 setting for both LSTM and CNN." ] ]
676c874266ee0388fe5b9a75e1006796c68c3c13
How many parameters does their CNN have?
[ "Unanswerable" ]
[ [] ]
fc54736e67f748f804e8f66b3aaaea7f5e55b209
How do they confirm that their model works well on out-of-vocabulary problems?
[ "conduct experiments using artificially constructed unnormalized text by corrupting words in the normal dev set" ]
[ [ "To test the robustness of the taggers against the OOV problem, we also conduct experiments using artificially constructed unnormalized text by corrupting words in the normal dev set. Again, the CNN tagger outperforms the two baselines by a very large margin." ] ]
a53683d1a0647c80a4398ff8f4a03e11c0929be2
What approach does this work propose for the new task?
[ "We propose a listening comprehension model for the task defined above, the Attention-based Multi-hop Recurrent Neural Network (AMRNN) framework, and show that this model is able to perform reasonably well for the task. In the proposed approach, the audio of the stories is first transcribed into text by ASR, and the proposed model is developed to process the transcriptions for selecting the correct answer out of 4 choices given the question. " ]
[ [ "We propose a listening comprehension model for the task defined above, the Attention-based Multi-hop Recurrent Neural Network (AMRNN) framework, and show that this model is able to perform reasonably well for the task. In the proposed approach, the audio of the stories is first transcribed into text by ASR, and the proposed model is developed to process the transcriptions for selecting the correct answer out of 4 choices given the question. The initial experiments showed that the proposed model achieves encouraging scores on the TOEFL listening comprehension test. The attention-mechanism proposed in this paper can be applied on either word or sentence levels. We found that sentence-level attention achieved better results on the manual transcriptions without ASR errors, but word-level attention outperformed the sentence-level on ASR transcriptions with errors." ] ]
0fd7d12711dfe0e35467a7ee6525127378a1bacb
What is the new task proposed in this work?
[ " listening comprehension task " ]
[ [ "With the popularity of shared videos, social networks, online course, etc, the quantity of multimedia or spoken content is growing much faster beyond what human beings can view or listen to. Accessing large collections of multimedia or spoken content is difficult and time-consuming for humans, even if these materials are more attractive for humans than plain text information. Hence, it will be great if the machine can automatically listen to and understand the spoken content, and even visualize the key information for humans. This paper presents an initial attempt towards the above goal: machine comprehension of spoken content. In an initial task, we wish the machine can listen to and understand an audio story, and answer the questions related to that audio content. TOEFL listening comprehension test is for human English learners whose native language is not English. This paper reports how today's machine can perform with such a test.", "The listening comprehension task considered here is highly related to Spoken Question Answering (SQA) BIBREF0 , BIBREF1 . In SQA, when the users enter questions in either text or spoken form, the machine needs to find the answer from some audio files. SQA usually worked with ASR transcripts of the spoken content, and used information retrieval (IR) techniques BIBREF2 or relied on knowledge bases BIBREF3 to find the proper answer. Sibyl BIBREF4 , a factoid SQA system, used some IR techniques and utilized several levels of linguistic information to deal with the task. Question Answering in Speech Transcripts (QAST) BIBREF5 , BIBREF6 , BIBREF7 has been a well-known evaluation program of SQA for years. However, most previous works on SQA mainly focused on factoid questions like “What is name of the highest mountain in Taiwan?”. Sometimes this kind of questions may be correctly answered by simply extracting the key terms from a properly chosen utterance without understanding the given spoken content. More difficult questions that cannot be answered without understanding the whole spoken content seemed rarely dealt with previously." ] ]
5dc2f79cd8078d5976f2df9ab128d4517e894257
Which news organisations are the headlines sourced from?
[ "BBC and CNN " ]
[ [ "Here, we outline the required steps for developing a knowledge graph of interlinked events. Figure FIGREF2 illustrates the high-level overview of the full pipeline. This pipeline contains the following main steps, to be discussed in detail later. (1) Collecting tweets from the stream of several news channels such as BBC and CNN on Twitter. (2) Agreeing upon background data model. (3) Event annotation potentially contains two subtasks (i) event recognition and (ii) event classification. (4) Entity/relation annotation possibly comprises a series of tasks as (i) entity recognition, (ii) entity linking, (iii) entity disambiguation, (iv) semantic role labeling of entities and (v) inferring implicit entities. (5) Interlinking events across time and media. (6) Publishing event knowledge graph based on the best practices of Linked Open Data." ] ]
4226a1830266ed5bde1b349205effafe7a0e2337
What meta-information is being transferred?
[ "high-order representation of a relation, loss gradient of relation meta" ]
[ [ "The relation-specific meta information is helpful in the following two perspectives: 1) transferring common relation information from observed triples to incomplete triples, 2) accelerating the learning process within one task by observing only a few instances. Thus we propose two kinds of relation-specific meta information: relation meta and gradient meta corresponding to afore mentioned two perspectives respectively. In our proposed framework MetaR, relation meta is the high-order representation of a relation connecting head and tail entities. Gradient meta is the loss gradient of relation meta which will be used to make a rapid update before transferring relation meta to incomplete triples during prediction." ] ]
5fb348b2d7b012123de93e79fd46a7182fd062bd
What datasets are used to evaluate the approach?
[ "NELL-One, Wiki-One" ]
[ [ "We use two datasets, NELL-One and Wiki-One which are constructed by BIBREF11 . NELL-One and Wiki-One are derived from NELL BIBREF2 and Wikidata BIBREF0 respectively. Furthermore, because these two benchmarks are firstly tested on GMatching which consider both learned embeddings and one-hop graph structures, a background graph is constructed with relations out of training/validation/test sets for obtaining the pre-train entity embeddings and providing the local graph for GMatching." ] ]
7ff48fe5b7bd6b56553caacc891ce3d7e0070440
Does their solution involve connecting images and text?
[ "Yes" ]
[ [ "Once the S-V-O is generated, Text2Visual provides users with visual components that convey the S-V-O text meanings." ] ]
54a2c08aa55c3db9b30ae2922c96528d3f4fc733
Which model do they use to generate key messages?
[ "ontology-based knowledge tree, heuristics-based, n-grams model" ]
[ [ "In order to find the object type, SimplerVoice, first, builds an ontology-based knowledge tree. Then, the system maps the object with a tree's leaf node based on the object's title. For instance, given the object's title as “Thomas' Plain Mini Bagels\", SimplerVoice automatically defines that the object category is “bagel\". Note that both the knowledge tree, and the mapping between object and object category are obtained based on text-based searching / crawling web, or through semantic webs' content. Figure FIGREF6 shows an example of the sub-tree for object category \"bagel\". While the mapped leaf node is the O in our S-V-O model, the parents nodes describe the more general object categories, and the neighbors indicate other objects' types which are similar to the input object. All the input object's type, the direct parents category, and the neighbors' are, then, put in the next step: generating verbs (V).", "We propose to use 2 methods to generate the suitable verbs for the target object: heuristics-based, and n-grams model. In detail, SimplerVoice has a set of rule-based heuristics for the objects. For instance, if the object belongs to a \"food | drink\" category, the verb is generated as \"eat | drink\". Another example is the retrieved \"play\" verb if input object falls into \"toy\" category. However, due to the complexity of object's type, heuristics-based approach might not cover all the contexts of object. As to solve this, an n-grams model is applied to generate a set of verbs for the target object. An n-gram is a contiguous sequence of n items from a given speech, or text string. N-grams model has been extensively used for various tasks in text mining, and natural language processing field BIBREF14 , BIBREF15 . Here, we use the Google Books n-grams database BIBREF16 , BIBREF17 to generate a set of verbs corresponding to the input object's usage. Given a noun, n-grams model can provide a set of words that have the highest frequency of appearance followed by the noun in the database of Google Books. For an example, \"eaten\", \"toasted\", \"are\", etc. are the words which are usually used with \"bagel\". To get the right verb form, after retrieving the words from n-grams model, SimplerVoice performs word stemming BIBREF18 on the n-grams' output." ] ]
ecb680d79e847beb7c1aa590d288a7313908d64a
What experiments do they perform to demonstrate that their approach leads to more accurate region-based representations?
[ " To test our proposed category induction model, we consider all BabelNet categories with fewer than 50 known instances. This is motivated by the view that conceptual neighborhood is mostly useful in cases where the number of known instances is small. For each of these categories, we split the set of known instances into 90% for training and 10% for testing." ]
[ [ "The central problem we consider is category induction: given some instances of a category, predict which other individuals are likely to be instances of that category. When enough instances are given, standard approaches such as the Gaussian classifier from Section UNKREF9, or even a simple SVM classifier, can perform well on this task. For many categories, however, we only have access to a few instances, either because the considered ontology is highly incomplete or because the considered category only has few actual instances. The main research question which we want to analyze is whether (predicted) conceptual neighborhood can help to obtain better category induction models in such cases. In Section SECREF16, we first provide more details about the experimental setting that we followed. Section SECREF23 then discusses our main quantitative results. Finally, in Section SECREF26 we present a qualitative analysis.", "As explained in Section SECREF3, we used BabelNet BIBREF29 as our reference taxonomy. BabelNet is a large-scale full-fledged taxonomy consisting of heterogeneous sources such as WordNet BIBREF36, Wikidata BIBREF37 and WiBi BIBREF38, making it suitable to test our hypothesis in a general setting.", "BabelNet category selection. To test our proposed category induction model, we consider all BabelNet categories with fewer than 50 known instances. This is motivated by the view that conceptual neighborhood is mostly useful in cases where the number of known instances is small. For each of these categories, we split the set of known instances into 90% for training and 10% for testing. To tune the prior probability $\\lambda _A$ for these categories, we hold out 10% from the training set as a validation set." ] ]
b622f57c4e429b458978cb8863978d7facab7cfe
How do they identify conceptual neighbours?
[ "Once this classifier has been trained, we can then use it to predict conceptual neighborhood for categories for which only few instances are known." ]
[ [ "We now consider the following problem: given two BabelNet categories $A$ and $B$, predict whether they are likely to be conceptual neighbors based on the sentences from a text corpus in which they are both mentioned. To train such a classifier, we use the distant supervision labels from Section SECREF8 as training data. Once this classifier has been trained, we can then use it to predict conceptual neighborhood for categories for which only few instances are known.", "To find sentences in which both $A$ and $B$ are mentioned, we rely on a disambiguated text corpus in which mentions of BabelNet categories are explicitly tagged. Such a disambiguated corpus can be automatically constructed, using methods such as the one proposed by BIBREF30 mancini-etal-2017-embedding, for instance. For each pair of candidate categories, we thus retrieve all sentences where they co-occur. Next, we represent each extracted sentence as a vector. To this end, we considered two possible strategies:" ] ]
f9c5799091e7e35a8133eee4d95004e1b35aea00
What experimental result led to the conclusion that reducing the number of layers of the decoder does not matter much?
[ "Exp. 5.1" ]
[ [ "Last, we analyze the importance of our second encoder ($enc_{src \\rightarrow mt}$), compared to the source encoder ($enc_{src}$) and the decoder ($dec_{pe}$), by reducing and expanding the amount of layers in the encoders and the decoder. Our standard setup, which we use for fine-tuning, ensembling etc., is fixed to 6-6-6 for $N_{src}$-$N_{mt}$-$N_{pe}$ (cf. Figure FIGREF1), where 6 is the value that was proposed by Vaswani:NIPS2017 for the base model. We investigate what happens in terms of APE performance if we change this setting to 6-6-4 and 6-4-6. To handle out-of-vocabulary words and reduce the vocabulary size, instead of considering words, we consider subword units BIBREF19 by using byte-pair encoding (BPE). In the preprocessing step, instead of learning an explicit mapping between BPEs in the $src$, $mt$ and $pe$, we define BPE tokens by jointly processing all triplets. Thus, $src$, $mt$ and $pe$ derive a single BPE vocabulary. Since $mt$ and $pe$ belong to the same language (German) and $src$ is a close language (English), they naturally share a good fraction of BPE tokens, which reduces the vocabulary size to 28k.", "The number of layers ($N_{src}$-$N_{mt}$-$N_{pe}$) in all encoders and the decoder for these results is fixed to 6-6-6. In Exp. 5.1, and 5.2 in Table TABREF5, we see the results of changing this setting to 6-6-4 and 6-4-6. This can be compared to the results of Exp. 2.3, since no fine-tuning or ensembling was performed for these three experiments. Exp. 5.1 shows that decreasing the number of layers on the decoder side does not hurt the performance. In fact, in the case of test2016, we got some improvement, while for test2017, the scores got slightly worse. In contrast, reducing the $enc_{src \\rightarrow mt}$ encoder block's depth (Exp. 5.2) does indeed reduce the performance for all four scores, showing the importance of this second encoder." ] ]
04012650a45d56c0013cf45fd9792f43916eaf83
How much is performance hurt when using too few layers in the encoder?
[ "comparing to the results from reducing the number of layers in the decoder, the BLEU score was 69.93 which is less than 1% in case of test2016 and in case of test2017 it was less by 0.2 %. In terms of TER it had higher score by 0.7 in case of test2016 and 0.1 in case of test2017. " ]
[ [ "The number of layers ($N_{src}$-$N_{mt}$-$N_{pe}$) in all encoders and the decoder for these results is fixed to 6-6-6. In Exp. 5.1, and 5.2 in Table TABREF5, we see the results of changing this setting to 6-6-4 and 6-4-6. This can be compared to the results of Exp. 2.3, since no fine-tuning or ensembling was performed for these three experiments. Exp. 5.1 shows that decreasing the number of layers on the decoder side does not hurt the performance. In fact, in the case of test2016, we got some improvement, while for test2017, the scores got slightly worse. In contrast, reducing the $enc_{src \\rightarrow mt}$ encoder block's depth (Exp. 5.2) does indeed reduce the performance for all four scores, showing the importance of this second encoder." ] ]
7889ec45b996be0b8bf7360d08f84daf3644f115
What was the previous state-of-the-art model for automatic post-editing?
[ "pal-EtAl:2018:WMT proposed an APE model that uses three self-attention-based encoders, tebbifakhr-EtAl:2018:WMT, the NMT-subtask winner of WMT 2018 ($wmt18^{nmt}_{best}$), employ sequence-level loss functions in order to avoid exposure bias during training and to be consistent with the automatic evaluation metrics., shin-lee:2018:WMT propose that each encoder has its own self-attention and feed-forward layer to process each input separately. , The APE PBSMT-subtask winner of WMT 2018 ($wmt18^{smt}_{best}$) BIBREF11 also presented another transformer-based multi-source APE which uses two encoders and stacks an additional cross-attention component for $src \\rightarrow pe$ above the previous cross-attention for $mt \\rightarrow pe$." ]
[ [ "Recently, in the WMT 2018 APE shared task, several adaptations of the transformer architecture have been presented for multi-source APE. pal-EtAl:2018:WMT proposed an APE model that uses three self-attention-based encoders. They introduce an additional joint encoder that attends over a combination of the two encoded sequences from $mt$ and $src$. tebbifakhr-EtAl:2018:WMT, the NMT-subtask winner of WMT 2018 ($wmt18^{nmt}_{best}$), employ sequence-level loss functions in order to avoid exposure bias during training and to be consistent with the automatic evaluation metrics. shin-lee:2018:WMT propose that each encoder has its own self-attention and feed-forward layer to process each input separately. On the decoder side, they add two additional multi-head attention layers, one for $src \\rightarrow mt$ and another for $src \\rightarrow pe$. Thereafter another multi-head attention between the output of those attention layers helps the decoder to capture common words in $mt$ which should remain in $pe$. The APE PBSMT-subtask winner of WMT 2018 ($wmt18^{smt}_{best}$) BIBREF11 also presented another transformer-based multi-source APE which uses two encoders and stacks an additional cross-attention component for $src \\rightarrow pe$ above the previous cross-attention for $mt \\rightarrow pe$. Comparing shin-lee:2018:WMT's approach with the winner system, there are only two differences in the architecture: (i) the cross-attention order of $src \\rightarrow mt$ and $src \\rightarrow pe$ in the decoder, and (ii) $wmt18^{smt}_{best}$ additionally shares parameters between two encoders." ] ]
41e300acec35252e23f239772cecadc0ea986071
What neural machine translation models can learn in terms of transfer learning?
[ "Multilingual Neural Machine Translation Models" ]
[ [ "Various multilingual extensions of NMT have already been proposed in the literature. The authors of BIBREF18 , BIBREF19 apply multitask learning to train models for multiple languages. Zoph and Knight BIBREF20 propose a multi-source model and BIBREF21 introduces a character-level encoder that is shared across several source languages. In our setup, we will follow the main idea proposed by Johnson et al. BIBREF22 . The authors of that paper suggest a simple addition by means of a language flag on the source language side (see Figure 2 ) to indicate the target language that needs to be produced by the decoder. This flag will be mapped on a dense vector representation and can be used to trigger the generation of the selected language. The authors of the paper argue that the model enables transfer learning and supports the translation between languages that are not explicitly available in training. This ability gives a hint of some kind of vector-based “interlingua”, which is precisely what we are looking for. However, the original paper only looks at a small number of languages and we will scale it up to a larger variation using significantly more languages to train on. More details will be given in the following section." ] ]
e70236c876c94dbecd9a665d9ba8cefe7301dcfd
Did they experiment on the proposed task?
[ "No" ]
[ [] ]
aa1f605619b2487cc914fc2594c8efe2598d8555
Is annotation done manually?
[ "Yes" ]
[ [ "The biggest difference between discourse parsing for well-written document and dialogues is that discourse relations can exist on two nonadjacent utterances in dialogues. When we annotate dialogues, we should read dialogues from begin to the end. For each utterance, we should find its one parent node at least from all its previous utterances. We assume that the discourse structure is a connected graph and no utterance is isolated.", "We propose three questions for eache dialogue and annotate the span of answers in the input dialogue. As we know, our dataset is the first corpus for multi-party dialogues reading comprehension.", "We construct following questions and answers for the dialogue in Example 1:", "Q1: When does Bdale leave?", "A1: Fri morning", "Q2: How to get people love Mark in Mjg59's opinion.", "A2: Hire people to work on reverse-engineering closed drivers.", "On the other hand, to improve the difficulty of the task, we propose $ \\frac{1}{6}$ to $ \\frac{1}{3}$ unanswerable questions in our dataset. We annotate unanswerable questions and their plausible answers (PA). Each plausible answer comes from the input dialogue, but is not the answer for the plausible question.", "Q1: Whis is the email of daniels?", "PA: +61 403 505 896" ] ]
9f2634c142dc4ad2c68135dbb393ecdfd23af13f
How large is the proposed dataset?
[ "we obtain 52,053 dialogues and 460,358 utterances" ]
[ [ "The discourse dependency structure of each multi-party dialogue can be regarded as a graph. To learn better graph representation of multi-party dialogues, we adopt the dialogues with 8-15 utterances and 3-7 speakers. To simplify the task, we filter the dialogues with long sentences (more than 20 words). Finally, we obtain 52,053 dialogues and 460,358 utterances." ] ]
77e57d19a0d48f46de8cbf857f5e5284bca0df2b
How large is the dataset?
[ "30M utterances" ]
[ [ "We collected Japanese fictional stories from the Web to construct the dataset. The dataset contains approximately 30M utterances of fictional characters. We separated the data into a 99%–1% split for training and testing. In Japanese, the function words at the end of the sentence often exhibit style (e.g., desu+wa, desu+ze;) therefore, we used an existing lexicon of multi-word functional expressions BIBREF14 . Overall, the vocabulary size $\\vert \\mathcal {V} \\vert $ was 100K." ] ]
50c8b821191339043306fd28e6cda2db400704f9
How is the dataset created?
[ "We collected Japanese fictional stories from the Web" ]
[ [ "We collected Japanese fictional stories from the Web to construct the dataset. The dataset contains approximately 30M utterances of fictional characters. We separated the data into a 99%–1% split for training and testing. In Japanese, the function words at the end of the sentence often exhibit style (e.g., desu+wa, desu+ze;) therefore, we used an existing lexicon of multi-word functional expressions BIBREF14 . Overall, the vocabulary size $\\vert \\mathcal {V} \\vert $ was 100K." ] ]
dee7383a92c78ea49859a2d5ff2a9d0a794c1f0f
What is binary variational dropout?
[ "the dropout technique of Gal & Ghahramani gal" ]
[ [ "We use the dropout technique of Gal & Ghahramani gal as a baseline because it is the most similar dropout technique to our approach and denote it VBD (variational binary dropout)." ] ]
a458c649a793588911cef4c421f95117d0b9c472
Which strategies show the most promise in deterring these attacks?
[ "At the moment defender can draw on methods from image area to text for improving the robustness of DNNs, e.g. adversarial training BIBREF107 , adding extra layer BIBREF113 , optimizing cross-entropy function BIBREF114 , BIBREF115 or weakening the transferability of adversarial examples." ]
[ [ "Appropriate future directions on adversarial attacks and defenses: As an attacker, designing universal perturbations to catch better adversarial examples can be taken into consideration like it works in image BIBREF29 . A universal adversarial perturbation on any text is able to make a model misbehave with high probability. Moreover, more wonderful universal perturbations can fool multi-models or any model on any text. On the other hand, the work of enhancing the transferability of adversarial examples is meaningful in more practical back-box attacks. On the contrary, defenders prefer to completely revamp this vulnerability in DNNs, but it is no less difficult than redesigning a network and is also a long and arduous task with the common efforts of many people. At the moment defender can draw on methods from image area to text for improving the robustness of DNNs, e.g. adversarial training BIBREF107 , adding extra layer BIBREF113 , optimizing cross-entropy function BIBREF114 , BIBREF115 or weakening the transferability of adversarial examples." ] ]
04cab3325e20c61f19846674bf9a2c46ea60c449
What are baseline models on WSJ eval92 and LibriSpeech test-clean?
[ "Wav2vec BIBREF22, a fully-supervised system using all labeled data" ]
[ [ "More recently, acoustic representation learning has drawn increasing attention BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23 in speech processing. For example, an autoregressive predictive coding model (APC) was proposed in BIBREF20 for unsupervised speech representation learning and was applied to phone classification and speaker verification. WaveNet auto-encoders BIBREF21 proposed contrastive predictive coding (CPC) to learn speech representations and was applied on unsupervised acoustic unit discovery task. Wav2vec BIBREF22 proposed a multi-layer convolutional neural network optimized via a noise contrastive binary classification and was applied to WSJ ASR tasks.", "Our experiments consisted of three different setups: 1) a fully-supervised system using all labeled data; 2) an SSL system using wav2vec features; 3) an SSL system using our proposed DeCoAR features. All models used were based on deep BLSTMs with the CTC loss criterion." ] ]
76c8aac84152fc4bbc0d5faa7b46e40438353e77
Do they use the same architecture as LSTMs and GRUs, just replacing the recurrent unit with the LAU?
[ "Yes" ]
[ [ "For all experiments, the dimensions of word embeddings and recurrent hidden states are both set to 512. The dimension of INLINEFORM0 is also of size 512. Note that our network is more narrow than most previous work where hidden states of dimmention 1024 is used. we initialize parameters by sampling each element from the Gaussian distribution with mean 0 and variance INLINEFORM1 ." ] ]
6916596253d67f74dba9222f48b9e8799581bad9
So this paper turns unstructured text inputs into parameters that GNNs can read?
[ "Yes" ]
[ [ "In this section, we will introduce the general framework of GP-GNNs. GP-GNNs first build a fully-connected graph $\\mathcal {G} = (\\mathcal {V}, \\mathcal {E})$ , where $\\mathcal {V}$ is the set of entities, and each edge $(v_i, v_j) \\in \\mathcal {E}, v_i, v_j \\in \\mathcal {V}$ corresponds to a sequence $s = x_0^{i,j}, x_1^{i,j}, \\dots , x_{l-1}^{i,j}$ extracted from the text. After that, GP-GNNs employ three modules including (1) encoding module, (2) propagation module and (3) classification module to proceed relational reasoning, as shown in Fig. 2 ." ] ]
7ccf2392422b44ede35a3fbd85bbb1da25adf795
What other models are compared to the Blending Game?
[ "Unanswerable" ]
[ [] ]
4d60e9494a412d581bd5e85f4e78881914085afc
What empirical data are the Blending Game predictions compared to?
[ "words length distribution, the frequency of use of the different forms and a measure for the combinatoriality" ]
[ [ "In this paper we have investigated duality of patterning at the lexicon level. We have quantified in particular the notions of combinatoriality and compositionality as observed in real languages as well as in a large-scale dataset produced in the framework of a web-based word association experiment BIBREF1 . We have paralleled this empirical analysis with a modeling scheme, the Blending Game, whose aim is that of identifying the main determinants for the emergence of duality of patterning in language. We analyzed the main properties of the lexicon emerged from the Blending Game as a function of the two parameters of the model, the graph connectivity $p_{link}$ and the memory scale $\\tau $ . We found that properties of the emerging lexicon related to the combinatoriality, namely the words length distribution, the frequency of use of the different forms and a measure for the combinatoriality itself, reflect both qualitatively and quantitatively the corresponding properties as measured in human languages, provided that the memory parameter $\\tau $ is sufficiently high, that is that a sufficiently high effort is required in order to understand and learn brand new forms. Conversely, the compositional properties of the lexicon are related to the parameter $p_{link}$ , that is a measure of the level of structure of the conceptual graph. For intermediate and low values of $p_{link}$ , semantic relations between objects are more differentiated with respect to the situation of a more dense graph, in which every object is related to anyone else, and compositionality is enhanced. In summary, while the graph connectivity strongly affects the compositionality of the lexicon, noise in communication strongly affects the combinatoriality of the lexicon." ] ]
cf63a4f9fe0f71779cf5a014807ae4528279c25a
How does the semi-automatic construction process work?
[ "Automatic transcription of 5000 tokens through sequential neural models trained on the annotated part of the corpus" ]
[ [ "In order to make the corpus collection easier and faster, we adopted a semi-automatic procedure based on sequential neural models BIBREF19, BIBREF20. Since transcribing Arabish into Arabic is by far the most important information to study the Arabish code-system, the semi-automatic procedure concerns only transcription from Arabish to Arabic script. In order to proceed, we used the first group of (roughly) 6,000 manually transcribed tokens as training and test data sets in a 10-fold cross validation setting with 9-1 proportions for training and test, respectively. As we explained in the previous section, French tokens were removed from the data. More precisely, whole sentences containing non-transcribable French tokens (code-switching) were removed from the data. Since at this level there is no way for predicting when a French word can be transcribed into Arabic and when it has to be left unchanged, French tokens create some noise for an automatic, probabilistic model. After removing sentences with French tokens, the data reduced to roughly 5,000 tokens. We chose this amount of tokens for annotation blocks in our incremental annotation procedure.", "We note that by combining sentence, paragraph and token index in the corpus, whole sentences can be reconstructed. However, from 5,000 tokens roughly 300 sentences could be reconstructed, which are far too few to be used for training a neural model. Instead, since tokens are transcribed at morpheme level, we split Arabish tokens into characters, and Arabic tokens into morphemes, and we treated each token itself as a sequence. Our model learns thus to map Arabish characters into Arabic morphemes.", "The 10-fold cross validation with this setting gave a token-level accuracy of roughly 71%. This result is not satisfactory on an absolute scale, however it is more than encouraging taking into account the small size of our data. This result means that less than 3 tokens, on average, out of 10, must be corrected to increase the size of our corpus. With this model we automatically transcribed into Arabic morphemes, roughly, 5,000 additional tokens, corresponding to the second annotation block. This can be manually annotated in at least 7,5 days, but thanks to the automatic annotation accuracy, it was manually corrected into 3 days. The accuracy of the model on the annotation of the second block was roughly 70%, which corresponds to the accuracy on the test set. The manually-corrected additional tokens were added to the training data of our neural model, and a new block was automatically annotated and manually corrected. Both accuracy on the test set and on the annotation block remained at around 70%. This is because the block added to the training data was significantly different from the previous and from the third. Adding the third block to the training data and annotating a fourth block with the new trained model gave in contrast an accuracy of roughly 80%. This incremental, semi-automatic transcription procedure is in progress for the remaining blocks, but it is clear that it will make the corpus annotation increasingly easier and faster as the amount of training data will grow up." ] ]