question_id (string, 40 chars) | question (string, 4–171 chars) | answer (sequence) | evidence (sequence) |
---|---|---|---|
2a1e6a69e06da2328fc73016ee057378821e0754 | How did they detect entity mentions? | [
"Exact matches to the entity string and predictions from a coreference resolution system"
] | [
[
"In an offline step, we organize the content of each training instance in a graph connecting mentions of candidate answers within and across supporting documents. For a given query $q = \\langle s, r, ? \\rangle $ , we identify mentions in $S_q$ of the entities in $C_q \\cup \\lbrace s\\rbrace $ and create one node per mention. This process is based on the following heuristic:",
"we consider mentions spans in $S_q$ exactly matching an element of $C_q \\cup \\lbrace s\\rbrace $ . Admittedly, this is a rather simple strategy which may suffer from low recall.",
"we use predictions from a coreference resolution system to add mentions of elements in $C_q \\cup \\lbrace s\\rbrace $ beyond exact matching (including both noun phrases and anaphoric pronouns). In particular, we use the end-to-end coreference resolution by BIBREF16 .",
"we discard mentions which are ambiguously resolved to multiple coreference chains; this may sacrifice recall, but avoids propagating ambiguity."
]
] |
63403ffc0232ff041f3da8fa6c30827cfd6404b7 | What is the metric used with WIKIHOP? | [
"Accuracy"
] | [
[]
] |
a25c1883f0a99d2b6471fed48c5121baccbbae82 | What performance does the Entity-GCN get on WIKIHOP? | [
"During testing: 67.6 for single model without coreference, 66.4 for single model with coreference, 71.2 for ensemble of 5 models"
] | [
[]
] |
a88f8cae1f59cdc4f1f645e496d6d2ac4d9fba1b | Do they evaluate only on English datasets? | [
"Unanswerable"
] | [
[]
] |
bea60603d78baeeb6df1afb53ed08d8296b42f1e | What baseline models are used? | [
"Unanswerable"
] | [
[]
] |
37db7ba2c155c2f89fc7fb51fffd7f193c103a34 | What classical machine learning algorithms are used? | [
"Support Vector Machine (SVM), Logistic regression (Log.Reg), Random Forest (RF), gradient boosting (XGB)"
] | [
[
"We ran three groups of experiments, to assess both the effectiveness of our approach when compared with the approaches we found in literature and its capability of extracting features that are relevant for sarcasm in a cross-domain scenario. In both cases, we denote with the word model one of the possible combinations of classic/statistical LSA and a classifier. The used classifiers are Support Vector Machine (SVM), Logistic regression (Log.Reg), Random Forest (RF) and gradient boosting (XGB)."
]
] |
c2cbc2637761a2c2cf50f5f8caa248814277430e | What are the different methods used for different corpora? | [
"Support Vector Machine (SVM), Logistic regression (Log.Reg), Random Forest (RF) and gradient boosting (XGB)"
] | [
[
"We ran three groups of experiments, to assess both the effectiveness of our approach when compared with the approaches we found in literature and its capability of extracting features that are relevant for sarcasm in a cross-domain scenario. In both cases, we denote with the word model one of the possible combinations of classic/statistical LSA and a classifier. The used classifiers are Support Vector Machine (SVM), Logistic regression (Log.Reg), Random Forest (RF) and gradient boosting (XGB).",
"For the first group of experiments, we evaluated the performance of each of our models in every corpus. We use 10-fold cross-validation and report the mean values of INLINEFORM0 -score, precision, and recall among all the folds. The proportion of the two classes in each fold is equal to the proportion in the whole corpus. Where applicable, we compare our results with existing results in the literature. Besides, we compare with the method presented in Poira et al. cambria2016."
]
] |
774ead7c642f9a6c59cfbf6994c07ce9c6789a35 | In which domains is sarcasm conveyed in different ways? | [
"Amazon reviews"
] | [
[
"We now discuss the relations among the results of the different experiments to gain some further insights into the sarcastic content of our corpora. From the in-corpus experiments, we obtain good results on SarcasmCorpus, which is the only corpus containing Amazon reviews. Unfortunately, when we train our models in a cross-corpora or all-corpora setting, our results drop dramatically, especially in the cross-corpora case. These results mean that the sarcasm in SarcasmCorpus is conveyed through features that are not present in the other corpora. This is especially true when considering that in the inter-corpora experiments, using SarcasmCorpus as a training set in all cases yields results that are only better than the ones obtained when using irony-context as a training set."
]
] |
d86c7faf5a61d73a19397a4afa2d53206839b8ad | What modalities are being used in different datasets? | [
"Language, Vision, Acoustic"
] | [
[
"All the datasets consist of videos where only one speaker is in front of the camera. The descriptors we used for each of the modalities are as follows:",
"Language All the datasets provide manual transcriptions. We use pre-trained word embeddings (glove.840B.300d) BIBREF25 to convert the transcripts of videos into a sequence of word vectors. The dimension of the word vectors is 300.",
"Vision Facet BIBREF26 is used to extract a set of features including per-frame basic and advanced emotions and facial action units as indicators of facial muscle movement.",
"Acoustic We use COVAREP BIBREF27 to extract low level acoustic features including 12 Mel-frequency cepstral coefficients (MFCCs), pitch tracking and voiced/unvoiced segmenting features, glottal source parameters, peak slope parameters and maxima dispersion quotients."
]
] |
082bc58e1a2a65fc1afec4064a51e4c785674fd7 | What is the difference between Long-short Term Hybrid Memory and LSTMs? | [
"Long-short Term Hybrid Memory (LSTHM) is an extension of the Long-short Term Memory (LSTM) "
] | [
[
"In this section we outline our pipeline for human communication comprehension: the Multi-attention Recurrent Network (MARN). MARN has two key components: Long-short Term Hybrid Memory and Multi-attention Block. Long-short Term Hybrid Memory (LSTHM) is an extension of the Long-short Term Memory (LSTM) by reformulating the memory component to carry hybrid information. LSTHM is intrinsically designed for multimodal setups and each modality is assigned a unique LSTHM. LSTHM has a hybrid memory that stores view-specific dynamics of its assigned modality and cross-view dynamics related to its assigned modality. The component that discovers cross-view dynamics across different modalities is called the Multi-attention Block (MAB). The MAB first uses information from hidden states of all LSTHMs at a timestep to regress coefficients to outline the multiple existing cross-view dynamics among them. It then weights the output dimensions based on these coefficients and learns a neural cross-view dynamics code for LSTHMs to update their hybrid memories. Figure 1 shows the overview of the MARN. MARN is differentiable end-to-end which allows the model to be learned efficiently using gradient decent approaches. In the next subsection, we first outline the Long-short Term Hybrid Memory. We then proceed to outline the Multi-attention Block and describe how the two components are integrated in the MARN.",
"Long-short Term Memory (LSTM) networks have been among the most successful models in learning from sequential data BIBREF21 . The most important component of the LSTM is a memory which stores a representation of its input through time. In the LSTHM model, we seek to build a memory mechanism for each modality which in addition to storing view-specific dynamics, is also able to store the cross-view dynamics that are important for that modality. This allows the memory to function in a hybrid manner."
]
] |
46563a1fb2c3e1b39a185e4cbb3ee1c80c8012b7 | Do they report results only on English data? | [
"Unanswerable"
] | [
[]
] |
6b7d76c1e1a2490beb69609ba5652476dde8831b | What provisional explanation do the authors give for the impact of document context? | [
"adding context causes speakers to focus on broader semantic and pragmatic issues of discourse coherence"
] | [
[
"This effect of context on human ratings is very similar to the one reported in BIBREF5 . They find that sentences rated as ill formed out of context are improved when they are presented in their document contexts. However the mean ratings for sentences judged to be highly acceptable out of context declined when assessed in context. BIBREF5 's linear regression chart for the correlation between out-of-context and in-context acceptability judgments looks remarkably like our Fig FIGREF15 . There is, then, a striking parallel in the compression pattern that context appears to exert on human judgments for two entirely different linguistic properties.",
"This pattern requires an explanation. BIBREF5 suggest that adding context causes speakers to focus on broader semantic and pragmatic issues of discourse coherence, rather than simply judging syntactic well formedness (measured as naturalness) when a sentence is considered in isolation. On this view, compression of rating results from a pressure to construct a plausible interpretation for any sentence within its context."
]
] |
37753fbffc06ce7de6ada80c89f1bf5f190bbd88 | What document context was added? | [
"Preceding and following sentence of each metaphor and paraphrase are added as document context"
] | [
[
"We extracted 200 sentence pairs from BIBREF3 's dataset and provided each pair with a document context consisting of a preceding and a following sentence, as in the following example."
]
] |
7ee29d657ccb8eb9d5ec64d4afc3ca8b5f3bcc9f | What were the results of the first experiment? | [
"Best performance achieved is 0.72 F1 score"
] | [
[
"We also observe that the best combination seems to consist in training our model on the original out-of-context dataset and testing it on the in-context pairs. In this configuration we reach an F-score (0.72) only slightly lower than the one reported in BIBREF3 (0.74), and we record the highest Pearson correlation, 0.3 (which is still not strong, compared to BIBREF3 's best run, 0.75). This result may partly be an artifact of the the larger amount of training data provided by the out-of-context pairs."
]
] |
b42323d60827ecf0d9e478c9a31f90940cfae975 | How big is the evaluated dataset? | [
"contains thousands of XML files, each of which are constructed by several records"
] | [
[
"The DDI corpus contains thousands of XML files, each of which are constructed by several records. For a sentence containing INLINEFORM0 drugs, there are INLINEFORM1 drug pairs. We replace the interested two drugs with “drug1” and “drug2” while the other drugs are replaced by “durg0”, as in BIBREF9 did. This step is called drug blinding. For example, the sentence in figure FIGREF5 generates 3 instances after drug blinding: “drug1: an increased risk of hepatitis has been reported to result from combined use of drug2 and drug0”, “drug1: an increased risk of hepatitis has been reported to result from combined use of drug0 and drug2”, “drug0: an increased risk of hepatitis has been reported to result from combined use of drug1 and drug2”. The drug blinded sentences are the instances that are fed to our model."
]
] |
1a69696034f70fb76cd7bb30494b2f5ab97e134d | By how much does their model outperform existing methods? | [
"Answer with content missing: (Table II) Proposed model has F1 score of 0.7220 compared to 0.7148 best state-state-of-the-art result."
] | [
[
"We compare our best F1 score with other state-of-the-art approaches in table TABREF39 , which shows our model has competitive advantage in dealing with drug-drug interaction extraction. The predictions confusion matrix is shown in table TABREF46 . The DDIs other than false being classified as false makes most of the classification error. It may perform better if a classifier which can tells true and false DDI apart is trained. We leave this two-stage classifier to our future work. Another phenomenon is that the “Int” type is often classified as “Effect”. The “Int” sentence describes there exists interaction between two drugs and this information implies the two drugs' combination will have good or bed effect. That's the reason why “Int” and “Effect” are often obfuscated."
]
] |
9a596bd3a1b504601d49c2bec92d1592d7635042 | What is the performance of their model? | [
"Answer with content missing: (Table II) Proposed model has F1 score of 0.7220."
] | [
[
"We compare our best F1 score with other state-of-the-art approaches in table TABREF39 , which shows our model has competitive advantage in dealing with drug-drug interaction extraction. The predictions confusion matrix is shown in table TABREF46 . The DDIs other than false being classified as false makes most of the classification error. It may perform better if a classifier which can tells true and false DDI apart is trained. We leave this two-stage classifier to our future work. Another phenomenon is that the “Int” type is often classified as “Effect”. The “Int” sentence describes there exists interaction between two drugs and this information implies the two drugs' combination will have good or bed effect. That's the reason why “Int” and “Effect” are often obfuscated."
]
] |
1ba28338d3f993674a19d2ee2ec35447e361505b | What are the existing methods mentioned in the paper? | [
"Chowdhury BIBREF14 and Thomas et al. BIBREF11, FBK-irst BIBREF10, Liu et al. BIBREF9, Sahu et al. BIBREF12"
] | [
[
"In DDI extraction task, NLP methods or machine learning approaches are proposed by most of the work. Chowdhury BIBREF14 and Thomas et al. BIBREF11 proposed methods that use linguistic phenomenons and two-stage SVM to classify DDIs. FBK-irst BIBREF10 is a follow-on work which applies kernel method to the existing model and outperforms it.",
"Neural network based approaches have been proposed by several works. Liu et al. BIBREF9 employ CNN for DDI extraction for the first time which outperforms the traditional machine learning based methods. Limited by the convolutional kernel size, the CNN can only extracted features of continuous 3 to 5 words rather than distant words. Liu et al. BIBREF8 proposed dependency-based CNN to handle distant but relevant words. Sahu et al. BIBREF12 proposed LSTM based DDI extraction approach and outperforms CNN based approach, since LSTM handles sentence as a sequence instead of slide windows. To conclude, Neural network based approaches have advantages of 1) less reliance on extra NLP toolkits, 2) simpler preprocessing procedure, 3) better performance than text analysis and machine learning methods."
]
] |
8ec94313ea908b6462e1f5ee809a977a7b6bdf01 | Does having constrained neural units imply word meanings are fixed across different context? | [
"No"
] | [
[
"In a natural language translation setting, suppose that an input word corresponds to a set of output tokens independently of its context. Even though this information might be useful to determine the syntax of the input utterance in the first place, the syntax does not determine this knowledge at all (by supposition). So, we can impose the constraint that our model's representation of the input's syntax cannot contain this context-invariant information. This regularization is strictly preferable to allowing all aspects of word meaning to propagate into the input's syntax representation. Without such a constraint, all inputs could, in principle, be given their own syntactic categories. This scenario is refuted by cognitive and neural theories. We incorporate the regularization with neural units that can separate representations of word meaning and arrangement."
]
] |
f052444f3b3bf61a3f226645278b780ebd7774db | Do they perform a quantitative analysis of their model displaying knowledge distortions? | [
"Yes"
] | [
[
"Additionally, we provide evidence that the model learns knowledge of a separation between syntax and the lexicon that is similar to that of a human. Figure FIGREF6 displays the learned $\\sigma (w)$ embeddings for some input words, across the domains. To avoid cherry-picking the results, we chose the input words arbitrarily, subject to the following constraint. We considered each word to typically have a different syntactic category than the other choices from that domain. This constraint was used to present a diverse selection of words. Table TABREF5 displays the output behavior of models that we damaged to resemble the damage that causes aphasia. To avoid cherry-picking the results, we arbitrarily chose an input for each domain, subject to the following constraint. The input is not in the train set and the undamaged LLA-LSTM model produces a translation that we judge to be correct. For all inputs that we chose, damage to the analog of Broca's area (the LSTMs) results in an output that describes content only if it is described by the input. However, the output does not show understanding of the input's syntax. In the naturalistic domains, damage to the analog of Wernicke's area (the Lexicon Unit) results in an output with incorrect content that would be acceptable if the input had different words but the same syntax. These knowledge distortions are precisely those that are expected in the respective human aphasics BIBREF0. We also provide corpus-level results from the damaged models by presenting mean precision on the test sets. Because the output languages in all of our domains use tokens to represent meanings in many cases, it is expected that the analog to Wernicke's area is responsible for maintaining a high precision."
]
] |
79ed71a3505cf6f5e8bf121fd7ec1518cab55cae | How do they damage different neural modules? | [
"Damage to neural modules is done by randomly initializing their weights, causing the loss of all learned information."
] | [
[]
] |
74eb363ce30c44d318078cc1a46f8decf7db3ade | Which weights from their model do they analyze? | [
"Unanswerable"
] | [
[]
] |
790ed4458a23aa23e2b5399259e50083c86d9e14 | Do all the instances contain code-switching? | [
"No"
] | [
[
"In this work, we use the datasets released by BIBREF1 and HEOT dataset provided by BIBREF0 . The datasets obtained pass through these steps of processing: (i) Removal of punctuatios, stopwords, URLs, numbers, emoticons, etc. This was then followed by transliteration using the Xlit-Crowd conversion dictionary and translation of each word to English using Hindi to English dictionary. To deal with the spelling variations, we manually added some common variations of popular Hinglish words. Final dictionary comprised of 7200 word pairs. Additionally, to deal with profane words, which are not present in Xlit-Crowd, we had to make a profanity dictionary (with 209 profane words) as well. Table TABREF3 gives some examples from the dictionary."
]
] |
562a2feeba9580f5435a94396f2a8751f79a4d5c | What embeddings do they use? | [
"Glove, Twitter word2vec"
] | [
[
"We tried Glove BIBREF2 and Twitter word2vec BIBREF3 code for training embeddings for the processed tweets. The embeddings were trained on both the datasets provided by BIBREF1 and HEOT. These embeddings help to learn distributed representations of tweets. After experimentation, we kept the size of embeddings fixed to 100."
]
] |
d4e78d3205e98cafdf93ddaa95627ea00b7c4d55 | Do they perform some annotation? | [
"No"
] | [
[
"In this work, we use the datasets released by BIBREF1 and HEOT dataset provided by BIBREF0 . The datasets obtained pass through these steps of processing: (i) Removal of punctuatios, stopwords, URLs, numbers, emoticons, etc. This was then followed by transliteration using the Xlit-Crowd conversion dictionary and translation of each word to English using Hindi to English dictionary. To deal with the spelling variations, we manually added some common variations of popular Hinglish words. Final dictionary comprised of 7200 word pairs. Additionally, to deal with profane words, which are not present in Xlit-Crowd, we had to make a profanity dictionary (with 209 profane words) as well. Table TABREF3 gives some examples from the dictionary.",
"Both the HEOT and BIBREF1 datasets contain tweets which are annotated in three categories: offensive, abusive and none (or benign). Some examples from the dataset are shown in Table TABREF4 . We use a LSTM based classifier model for training our model to classify these tweets into these three categories. An overview of the model is given in the Figure FIGREF12 . The model consists of one layer of LSTM followed by three dense layers. The LSTM layer uses a dropout value of 0.2. Categorical crossentropy loss was used for the last layer due to the presence of multiple classes. We use Adam optimizer along with L2 regularisation to prevent overfitting. As indicated by the Figure FIGREF12 , the model was initially trained on the dataset provided by BIBREF1 , and then re-trained on the HEOT dataset so as to benefit from the transfer of learned features in the last stage. The model hyperparameters were experimentally selected by trying out a large number of combinations through grid search."
]
] |
8b51db1f7a6293b5bd8be49386cce45989b8079f | Do they use dropout? | [
"Yes"
] | [
[
"Both the HEOT and BIBREF1 datasets contain tweets which are annotated in three categories: offensive, abusive and none (or benign). Some examples from the dataset are shown in Table TABREF4 . We use a LSTM based classifier model for training our model to classify these tweets into these three categories. An overview of the model is given in the Figure FIGREF12 . The model consists of one layer of LSTM followed by three dense layers. The LSTM layer uses a dropout value of 0.2. Categorical crossentropy loss was used for the last layer due to the presence of multiple classes. We use Adam optimizer along with L2 regularisation to prevent overfitting. As indicated by the Figure FIGREF12 , the model was initially trained on the dataset provided by BIBREF1 , and then re-trained on the HEOT dataset so as to benefit from the transfer of learned features in the last stage. The model hyperparameters were experimentally selected by trying out a large number of combinations through grid search."
]
] |
f37128d3533dabb58aadf2e8f9aa30f5bda81cd9 | What definition of hate speech do they use? | [
"Unanswerable"
] | [
[]
] |
0b9021cefca71081e617a362e7e3995c5f1d2a88 | What are the other models they compare to? | [
"CNN-C, CNN-W, CNN-Lex-C, CNN-Lex-W, Bi-LSTM-C , Bi-LSTM-W, Lex-rule, BOW"
] | [
[
"In this subsubsection, each of the three raw data sets (associated with their labels) shown in Table 1 is used. The clause data are not used. In other words, the training data used in this subsubsection are the same as those used in previous studies. For each data corpus, 1000 raw data samples are used as the test data, and the rest are used as the training data. The involved algorithms are detailed as follows.",
"CNN-C denotes the CNN with (Chinese) character embedding.",
"CNN-W denotes the CNN with (Chinese) word embedding.",
"CNN-Lex-C denotes the algorithm which also integrates polar words in CNN which is proposed by Shin et al. BIBREF24 . The (Chinese) character embedding is used.",
"CNN-Lex-W denotes the algorithm which also integrates polar words in CNN which is proposed by Shin et al. BIBREF24 . The (Chinese) word embedding is used.",
"Bi-LSTM-C denotes the BI-LSTM with (Chinese) character embedding.",
"Bi-LSTM-W denotes the Bi-LSTM with (Chinese) word embedding.",
"Lex-rule denotes the rule-based approach shows in Fig. 1. This approach is unsupervised.",
"BOW denotes the conventional algorithm which is based of bag-of-words features."
]
] |
6ad92aad46d2e52f4e7f3020723922255fd2b603 | What is the agreement value for each dataset? | [
"Unanswerable"
] | [
[]
] |
4fdc707fae5747fceae68199851e3c3186ab8307 | How many annotators participated? | [
"Unanswerable"
] | [
[]
] |
2d307b43746be9cedf897adac06d524419b0720b | How long are the datasets? | [
"Travel dataset contains 4100 raw samples, 11291 clauses, Hotel dataset contains 3825 raw samples, 11264 clauses, and the Mobile dataset contains 3483 raw samples and 8118 clauses"
] | [
[
"We compile three Chinese text corpora from online data for three domains, namely, “hotel\", “mobile phone (mobile)\", and “travel\". All texts are about user reviews. Each text sample collected is first partitioned into clauses according to Chinese tokens. Three clause sets are subsequently obtained from the three text corpora.",
"In the second stage, five users are invited to label each text sample in the three raw data sets. The average score of the five users on each sample is calculated. Samples with average scores located in [0.6, 1] are labeled as “positive\". Samples with average scores located in [0, 0.4] are labeled as “negative\". Others are labeled as “neutral\". The details of the labeling results are shown in Table 1."
]
] |
fe90eec1e3cdaa41d2da55864c86f6b6f042a56c | What are the sources of the data? | [
"User reviews written in Chinese collected online for hotel, mobile phone, and travel domains"
] | [
[
"We compile three Chinese text corpora from online data for three domains, namely, “hotel\", “mobile phone (mobile)\", and “travel\". All texts are about user reviews. Each text sample collected is first partitioned into clauses according to Chinese tokens. Three clause sets are subsequently obtained from the three text corpora."
]
] |
9d5df9022cc9eb04b9f5c5a9d8308a332ebdf50c | What is the new labeling strategy? | [
"They use a two-stage labeling strategy where in the first stage single annotators label a large number of short texts with relatively pure sentiment orientations and in the second stage multiple annotators label few text samples with mixed sentiment orientations"
] | [
[
"We address the above issues with a new methodology. First, we introduce a two-stage labeling strategy for sentiment texts. In the first stage, annotators are invited to label a large number of short texts with relatively pure sentiment orientations. Each sample is labeled by only one annotator. In the second stage, a relatively small number of text samples with mixed sentiment orientations are annotated, and each sample is labeled by multiple annotators. Second, we propose a two-level long short-term memory (LSTM) BIBREF4 network to achieve two-level feature representation and classify the sentiment orientations of a text sample to utilize two labeled data sets. Lastly, in the proposed two-level LSTM network, lexicon embedding is leveraged to incorporate linguistic features used in lexicon-based methods."
]
] |
dbf606cb6fc1d070418cc25e38ae57bbbb7087a0 | Which future direction in NLG are discussed? | [
"1) How to introduce unsupervised pre-training into NLG tasks with cross-modal context?, 2) How to design a generic pre-training algorithm to fit a wide range of NLG tasks?, 3) How to reduce the computing resources required for large-scale pre-training?, 4) What aspect of knowledge do the pre-trained models provide for better language generation?"
] | [
[
"The diversity of NLG applications poses challenges on the employment of unsupervised pre-training, yet it also raises more scientific questions for us to explore. In terms of the future development of this technology, we emphasize the importance of answering four questions: 1) How to introduce unsupervised pre-training into NLG tasks with cross-modal context? 2) How to design a generic pre-training algorithm to fit a wide range of NLG tasks? 3) How to reduce the computing resources required for large-scale pre-training? 4) What aspect of knowledge do the pre-trained models provide for better language generation?"
]
] |
9651fbd887439bf12590244c75e714f15f50f73d | What experimental phenomena are presented? | [
"The advantage of pre-training gradually diminishes with the increase of labeled data, Fixed representations yield better results than fine-tuning in some cases, pre-training the Seq2Seq encoder outperforms pre-training the decoder"
] | [
[
"Existing researches on unsupervised pre-training for NLG are conducted on various tasks for different purposes. Probing into the assorted empirical results may help us discover some interesting phenomenons:",
"The advantage of pre-training gradually diminishes with the increase of labeled data BIBREF14, BIBREF17, BIBREF18.",
"Fixed representations yield better results than fine-tuning in some cases BIBREF24.",
"Overall, pre-training the Seq2Seq encoder outperforms pre-training the decoder BIBREF24, BIBREF17, BIBREF15, BIBREF16."
]
] |
1fd969f53bc714d9b5e6604a7780cbd6b12fd616 | How strategy-based methods handle obstacles in NLG? | [
"fine-tuning schedules that elaborately design the control of learning rates for optimization, proxy tasks that leverage labeled data to help the pre-trained model better fit the target data distribution, knowledge distillation approaches that ditch the paradigm of initialization with pre-trained parameters by adopting the pre-trained model as a teacher network"
] | [
[
"In response to the above challenges, two lines of work are proposed by resorting to architecture-based and strategy-based solutions, respectively. Architecture-based methods either try to induce task-specific architecture during pre-training (task-specific methods), or aim at building a general pre-training architecture to fit all downstream tasks (task-agnostic methods). Strategy-based methods depart from the pre-training stage, seeking to take advantage of the pre-trained models during the process of target task learning. The approaches include fine-tuning schedules that elaborately design the control of learning rates for optimization, proxy tasks that leverage labeled data to help the pre-trained model better fit the target data distribution, and knowledge distillation approaches that ditch the paradigm of initialization with pre-trained parameters by adopting the pre-trained model as a teacher network."
]
] |
cd37ad149d500e1c7d2de9de1f4bae8dcc443a72 | How architecture-based method handle obstacles in NLG? | [
"task-specific architecture during pre-training (task-specific methods), aim at building a general pre-training architecture to fit all downstream tasks (task-agnostic methods)"
] | [
[
"In response to the above challenges, two lines of work are proposed by resorting to architecture-based and strategy-based solutions, respectively. Architecture-based methods either try to induce task-specific architecture during pre-training (task-specific methods), or aim at building a general pre-training architecture to fit all downstream tasks (task-agnostic methods). Strategy-based methods depart from the pre-training stage, seeking to take advantage of the pre-trained models during the process of target task learning. The approaches include fine-tuning schedules that elaborately design the control of learning rates for optimization, proxy tasks that leverage labeled data to help the pre-trained model better fit the target data distribution, and knowledge distillation approaches that ditch the paradigm of initialization with pre-trained parameters by adopting the pre-trained model as a teacher network."
]
] |
14eb2b89ba39e56c52954058b6b799a49d1b74bf | How are their changes evaluated? | [
"The changes are evaluated based on accuracy of intent and entity recognition on SNIPS dataset"
] | [
[
"To evaluate the performance of our approach, we used a subset of the SNIPS BIBREF12 dataset, which is readily available in RASA nlu format. Our training data consisted of 700 utterances, across 7 different intents (AddToPlaylist, BookRestaurant, GetWeather, PlayMusic, RateBook, SearchCreativeWork, and SearchScreeningEvent). In order to test our implementation of incremental components, we initially benchmarked their non-incremental counterparts, and used that as a baseline for the incremental versions (to treat the sium component as non-incremental, we simply applied all words in each utterance to it and obtained the distribution over intents after each full utterance had been processed).",
"We use accuracy of intent and entity recognition as our task and metric. To evaluate the components worked as intended, we then used the IncrementalInterpreter to parse the messages as individual ius. To ensure REVOKE worked as intended, we injected random incorrect words at a rate of 40%, followed by subsequent revokes, ensuring that an ADD followed by a revoke resulted in the same output as if the incorrect word had never been added. While we implemented both an update-incremental and a restart-incremental RASA nlu component, the results of the two cannot be directly compared for accuracy as the underlying models differ greatly (i.e., sium is generative, whereas Tensorflow Embedding is a discriminative neural network; moreover, sium was designed to work as a reference resolution component to physical objects, not abstract intents), nor are these results conducive to an argument of update- vs. restart-incremental approaches, as the underlying architecture of the models vary greatly."
]
] |
83f24e4bbf9de82d560cdde64b91d6d672def6bf | What baseline is used for the verb classification experiments? | [
"Unanswerable"
] | [
[]
] |
6b8a3100895f2192e08973006474428319dc298e | What clustering algorithm is used on top of the VerbNet-specialized representations? | [
"MNCut spectral clustering algorithm BIBREF58"
] | [
[
"Given the initial distributional or specialised collection of target language vectors INLINEFORM0 , we apply an off-the-shelf clustering algorithm on top of these vectors in order to group verbs into classes. Following prior work BIBREF56 , BIBREF57 , BIBREF17 , we employ the MNCut spectral clustering algorithm BIBREF58 , which has wide applicability in similar NLP tasks which involve high-dimensional feature spaces BIBREF59 , BIBREF60 , BIBREF18 . Again, following prior work BIBREF17 , BIBREF61 , we estimate the number of clusters INLINEFORM1 using the self-tuning method of Zelnik:2004nips. This algorithm finds the optimal number by minimising a cost function based on the eigenvector structure of the word similarity matrix. We refer the reader to the relevant literature for further details."
]
] |
daf624f7d1623ccd3facb1d93d4d9d616b3192f4 | How many words are translated between the cross-lingual translation pairs? | [
"Unanswerable"
] | [
[]
] |
74261f410882551491657d76db1f0f2798ac680f | What are the six target languages? | [
"Answer with content missing: (3 Experimental Setup) We experiment with six target languages: French (FR), Brazilian Portuguese (PT), Italian (IT), Polish (PL), Croatian (HR), and Finnish (FI)."
] | [
[
"Given the initial distributional or specialised collection of target language vectors INLINEFORM0 , we apply an off-the-shelf clustering algorithm on top of these vectors in order to group verbs into classes. Following prior work BIBREF56 , BIBREF57 , BIBREF17 , we employ the MNCut spectral clustering algorithm BIBREF58 , which has wide applicability in similar NLP tasks which involve high-dimensional feature spaces BIBREF59 , BIBREF60 , BIBREF18 . Again, following prior work BIBREF17 , BIBREF61 , we estimate the number of clusters INLINEFORM1 using the self-tuning method of Zelnik:2004nips. This algorithm finds the optimal number by minimising a cost function based on the eigenvector structure of the word similarity matrix. We refer the reader to the relevant literature for further details.",
"Results and Discussion"
]
] |
3d34a02ceebcc93ee79dc073c408651d25e538bc | what classifiers were used in this paper? | [
"Support Vector Machines (SVM) classifier"
] | [
[
"We feed the above-described hand-crafted features together with the task-specific embeddings learned by the deep neural neural network (a total of 1,892 attributes combined) into a Support Vector Machines (SVM) classifier BIBREF37 . SVMs have proven to perform well in different classification settings, including in the case of small and noisy datasets."
]
] |
96992460cfc5f0b8d065ee427067147293746b7a | what are their evaluation metrics? | [
"F1, accuracy"
] | [
[
"Evaluating the final model, we set as a baseline the prediction of the majority class, i.e., the fake news class. This baseline has an F1 of 41.59% and accuracy of 71.22%. The performance of the built models can be seen in Table TABREF19 . Another stable baseline, apart from just taking the majority class, is the TF.IDF bag-of-words approach, which sets a high bar for the general model score. We then observe how much the attention mechanism embeddings improve the score (AttNN). Finally, we add the hand-crafted features (Feats), which further improve the performance. From the results, we can conclude that both the attention-based task-specific embeddings and the manual features are important for the task of finding fake news."
]
] |
363ddc06db5720786ed440927d7fbb7d0a8078ae | what types of features were used? | [
"stylometric, lexical, grammatical, and semantic"
] | [
[
"We have presented the first attempt to solve the fake news problem for Bulgarian. Our method is purely text-based, and ignores the publication date and the source of the article. It combines task-specific embeddings, produced by a two-level attention-based deep neural network model, with manually crafted features (stylometric, lexical, grammatical, and semantic), into a kernel-based SVM classifier. We further produced and shared a number of relevant language resources for Bulgarian, which we created for solving the task."
]
] |
f12a282571f842b818d4bee86442751422b52337 | what lexical features did they experiment with? | [
"TF.IDF-based features"
] | [
[
"General lexical features are often used in natural language processing as they are somewhat task-independent and reasonably effective in terms of classification accuracy. In our experiments, we used TF.IDF-based features over the title and over the content of the article we wanted to classify. We had these features twice – once for the title and once for the the content of the article, as we wanted to have two different representations of the same article. Thus, we used a total of 1,100 TF.IDF-weighted features (800 content + 300 title), limiting the vocabulary to the top 800 and 300 words, respectively (which occurred in more than five articles). We should note that TF.IDF features should be used with caution as they may not remain relevant over time or in different contexts without retraining."
]
] |
5b1cd21936aeec85233c978ba8d7282931522a3a | what is the size of the dataset? | [
"The number of fake news that have a duplicate in the training dataset are 1018 whereas, the number of articles with genuine content that have a duplicate article in the training set is 322."
] | [
[
"In particular, the training dataset contains 434 unique articles with duplicates. These articles have three reposts each on average, with the most reposted article appearing 45 times. If we take into account the labels of the reposted articles, we can see that if an article is reposted, it is more likely to be fake news. The number of fake news that have a duplicate in the training dataset are 1018 whereas, the number of articles with genuine content that have a duplicate article in the training set is 322. We detect the duplicates based on their titles as far as they are distinctive enough and the content is sometimes slightly modified when reposted."
]
] |
964705a100e53a9181d1a5ac8150696de12ecaf0 | what datasets were used? | [
" training dataset contains 2,815 examples, 761 testing examples"
] | [
[
"The training dataset contains 2,815 examples, where 1,940 (i.e., 69%) are fake news and 1,968 (i.e., 70%) are click-baits; we further have 761 testing examples. However, there is 98% correlation between fake news and click-baits, i.e., a model trained on fake news would do well on click-baits and vice versa. Thus, below we focus on fake news detection only."
]
] |
f08a66665f01c91cb9dfe082e9d1015ecf3df71d | what are the three reasons everybody hates them? | [
"Unanswerable"
] | [
[]
] |
c65b6470b7ed0a035548cc08e0bc541c2c4a95a7 | How are seed dictionaries obtained by fully unsupervised methods? | [
"the latest CLWE developments almost exclusively focus on fully unsupervised approaches BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 : they fully abandon any source of (even weak) supervision and extract the initial seed dictionary by exploiting topological similarities between pre-trained monolingual embedding spaces"
] | [
[
"The landscape of CLWE methods has recently been dominated by the so-called projection-based methods BIBREF15 , BIBREF16 , BIBREF17 . They align two monolingual embedding spaces by learning a projection/mapping based on a training dictionary of translation pairs. Besides their simple conceptual design and competitive performance, their popularity originates from the fact that they rely on rather weak cross-lingual supervision. Originally, the seed dictionaries typically spanned several thousand word pairs BIBREF15 , BIBREF18 , BIBREF19 , but more recent work has shown that CLWEs can be induced with even weaker supervision from small dictionaries spanning several hundred pairs BIBREF20 , identical strings BIBREF21 , or even only shared numerals BIBREF22 ."
]
] |
6e2899c444baaeb0469599f65722780894f90f29 | How does BLI measure alignment quality? | [
"we use mean average precision (MAP) as the main evaluation metric"
] | [
[
"Evaluation Task. Our task is bilingual lexicon induction (BLI). It has become the de facto standard evaluation for projection-based CLWEs BIBREF16 , BIBREF17 . In short, after a shared CLWE space has been induced, the task is to retrieve target language translations for a test set of source language words. Its lightweight nature allows us to conduct a comprehensive evaluation across a large number of language pairs. Since BLI is cast as a ranking task, following glavas2019howto we use mean average precision (MAP) as the main evaluation metric: in our BLI setup with only one correct translation for each “query” word, MAP is equal to mean reciprocal rank (MRR)."
]
] |
896e99d7f8f957f6217185ff787e94f84c136087 | What methods were used for unsupervised CLWE? | [
"Unsupervised CLWEs. These methods first induce a seed dictionary $D^{(1)}$ leveraging only two unaligned monolingual spaces (C1). While the algorithms for unsupervised seed dictionary induction differ, they all strongly rely on the assumption of similar topological structure between the two pretrained monolingual spaces. Once the seed dictionary is obtained, the two-step iterative self-learning procedure (C2) takes place: 1) a dictionary $D^{(k)}$ is first used to learn the joint space $\\mathbf {Y}^{(k)} = \\mathbf {X{W}}^{(k)}_x \\cup \\mathbf {Z{W}}^{(k)}_z$ ; 2) the nearest neighbours in $\\mathbf {Y}^{(k)}$ then form the new dictionary $D^{(k+1)}$ . We illustrate the general structure in Figure 1 ."
] | [
[
"Taking the idea of reducing cross-lingual supervision to the extreme, the latest CLWE developments almost exclusively focus on fully unsupervised approaches BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 : they fully abandon any source of (even weak) supervision and extract the initial seed dictionary by exploiting topological similarities between pre-trained monolingual embedding spaces. Their modus operandi can roughly be described by three main components: C1) unsupervised extraction of a seed dictionary; C2) a self-learning procedure that iteratively refines the dictionary to learn projections of increasingly higher quality; and C3) a set of preprocessing and postprocessing steps (e.g., unit length normalization, mean centering, (de)whitening) BIBREF31 that make the entire learning process more robust.",
"Model Configurations. Note that C2 and C3 can be equally used on top of any (provided) seed lexicon (i.e., $D^{(1)}$ := $D_0$ ) to enable weakly supervised learning, as we propose here. In fact, the variations of the three key components, C1) seed lexicon, C2) self-learning, and C3) preprocessing and postprocessing, construct various model configurations which can be analyzed to probe the importance of each component in the CLWE induction process. A selection of representative configurations evaluated later in § \"Results and Discussion\" is summarized in Table 1 ."
]
] |
63c0128935446e26eacc7418edbd9f50cba74455 | What is the size of the released dataset? | [
"440 sentences, 2247 triples extracted from those sentences, and 11262 judgements on those triples."
] | [
[
"We used two different data sources in our evaluation. The first dataset (WIKI) was the same set of 200 sentences from Wikipedia used in BIBREF7 . These sentences were randomly selected by the creators of the dataset. This choice allows for a rough comparison between our results and theirs.",
"The second dataset (SCI) was a set of 220 sentences from the scientific literature. We sourced the sentences from the OA-STM corpus. This corpus is derived from the 10 most published in disciplines. It includes 11 articles each from the following domains: agriculture, astronomy, biology, chemistry, computer science, earth science, engineering, materials science, math, and medicine. The article text is made freely available and the corpus provides both an XML and a simple text version of each article.",
"We employed the following annotation process. Each OIE extractor was applied to both datasets with the settings described above. This resulted in the generation of triples for 199 of the 200 WIKI sentences and 206 of the 220 SCI sentences. That is there were some sentences in which no triples were extracted. We discuss later the sentences in which no triples were extracted. In total 2247 triples were extracted.",
"In total, 11262 judgements were obtained after running the annotation process. Every triple had at least 5 judgements from different annotators. All judgement data is made available. The proportion of overall agreement between annotators is 0.76 with a standard deviation of 0.25 on whether a triple is consequence of the given sentence. We also calculated inter-annotator agreement statistics. Using Krippendorff's alpha inter-annotator agreement was 0.44. This calculation was performed over all data and annotators as Krippendorff's alpha is designed to account for missing data and work across more than two annotators. Additionally, Fleiss' Kappa and Scott's pi were calculated pairwise between all annotators where there were overlapping ratings (i.e. raters had rated at least one triple in common). The average Fleiss's Kappa was 0.41 and the average of Scott's pi was 0.37. Using BIBREF14 as a guide, we interpret these statistics as suggesting there is moderate agreement between annotators and that agreement is above random chance. This moderate level of agreement is to be expected as the task itself can be difficult and requires judgement from the annotators at the margin."
]
] |
9a94dcee17cdb9a39d39977191e643adece58dfc | Were the OpenIE systems more accurate on some scientific disciplines than others? | [
"Unanswerable"
] | [
[]
] |
18e915b917c81056ceaaad5d6581781c0168dac9 | What is the most common error type? | [
"all annotators that a triple extraction was incorrect"
] | [
[
"We also looked at where there was complete agreement by all annotators that a triple extraction was incorrect. In total there were 138 of these triples originating from 76 unique sentences. There were several patterns that appeared in these sentences."
]
] |
9c68d6d5451395199ca08757157fbfea27f00f69 | Which OpenIE systems were used? | [
"OpenIE4 and MiniIE"
] | [
[
"We evaluate two OIE systems (i.e. extractors). The first, OpenIE 4 BIBREF5 , descends from two popular OIE systems OLLIE BIBREF10 and Reverb BIBREF10 . We view this as a baseline system. The second was MinIE BIBREF7 , which is reported as performing better than OLLIE, ClauseIE BIBREF9 and Stanford OIE BIBREF9 . MinIE focuses on the notion of minimization - producing compact extractions from sentences. In our experience using OIE on scientific text, we have found that these systems often produce overly specific extractions that do not provide the redundancy useful for downstream tasks. Hence, we thought this was a useful package to explore."
]
] |
372fbf2d120ca7a101f70d226057f9639bf1f9f2 | What is the role of crowd-sourcing? | [
"Crowd workers were asked to mark whether a triple was correct, namely, did the triple reflect the consequence of the sentence."
] | [
[
"The sentences and their corresponding triples were then divided. Each task contained 10 sentences and all of their unique corresponding triples from a particular OIE systems. Half of the ten sentences were randomly selected from SCI and the other half were randomly selected from WIKI. Crowd workers were asked to mark whether a triple was correct, namely, did the triple reflect the consequence of the sentence. Examples of correct and incorrect triples were provided. Complete labelling instructions and the presentation of the HITS can be found with the dataset. All triples were labelled by at least 5 workers."
]
] |
f14ff780c28addab1d738f676c4ec0b4106356b6 | How are meta vertices computed? | [
"Meta vertex candidate identification. Edit distance and word lengths distance are used to determine whether two words should be merged into a meta vertex (only if length distance threshold is met, the more expensive edit distance is computed)., The meta vertex creation. As common identifiers, we use the stemmed version of the original vertices and if there is more than one resulting stem, we select the vertex from the identified candidates that has the highest centrality value in the graph and its stemmed version is introduced as a novel vertex (meta vertex)."
] | [
[
"Even if the optional lemmatization step is applied, one can still aim at further reducing the graph complexity by merging similar vertices. This step is called meta vertex construction. The motivation can be explained by the fact, that even similar lemmas can be mapped to the same keyword (e.g., mechanic and mechanical; normal and abnormal). This step also captures spelling errors (similar vertices that will not be handled by lemmatization), spelling differences (e.g., British vs. American English), non-standard writing (e.g., in Twitter data), mistakes in lemmatization or unavailable or omitted lemmatization step.",
"The meta-vertex construction step works as follows. Let INLINEFORM0 represent the set of vertices, as defined above. A meta vertex INLINEFORM1 is comprised of a set of vertices that are elements of INLINEFORM2 , i.e. INLINEFORM3 . Let INLINEFORM4 denote the INLINEFORM5 -th meta vertex. We construct a given INLINEFORM6 so that for each INLINEFORM7 , INLINEFORM8 's initial edges (prior to merging it into a meta vertex) are rewired to the newly added INLINEFORM9 . Note that such edges connect to vertices which are not a part of INLINEFORM10 . Thus, both the number of vertices, as well as edges get reduced substantially. This feature is implemented via the following procedure:",
"Meta vertex candidate identification. Edit distance and word lengths distance are used to determine whether two words should be merged into a meta vertex (only if length distance threshold is met, the more expensive edit distance is computed).",
"The meta vertex creation. As common identifiers, we use the stemmed version of the original vertices and if there is more than one resulting stem, we select the vertex from the identified candidates that has the highest centrality value in the graph and its stemmed version is introduced as a novel vertex (meta vertex)."
]
] |
b799936d6580c0e95102027175d3fe184f0ee253 | How are graphs derived from a given text? | [
"The given corpus is traversed, and for each element INLINEFORM6 , its successor INLINEFORM7 , together with a given element, forms a directed edge INLINEFORM8 . Finally, such edges are weighted according to the number of times they appear in a given corpus. Thus the graph, constructed after traversing a given corpus, consists of all local neighborhoods (order one), merged into a single joint structure. Global contextual information is potentially kept intact (via weights), even though it needs to be detected via network analysis"
] | [
[
"In this work we consider directed graphs. Let INLINEFORM0 represent a graph comprised of a set of vertices INLINEFORM1 and a set of edges ( INLINEFORM2 ), which are ordered pairs. Further, each edge can have a real-valued weight assigned. Let INLINEFORM3 represent a document comprised of tokens INLINEFORM4 . The order in which tokens in text appear is known, thus INLINEFORM5 is a totally ordered set. A potential way of constructing a graph from a document is by simply observing word co-occurrences. When two words co-occur, they are used as an edge. However, such approaches do not take into account the sequence nature of the words, meaning that the order is lost. We attempt to take this aspect into account as follows. The given corpus is traversed, and for each element INLINEFORM6 , its successor INLINEFORM7 , together with a given element, forms a directed edge INLINEFORM8 . Finally, such edges are weighted according to the number of times they appear in a given corpus. Thus the graph, constructed after traversing a given corpus, consists of all local neighborhoods (order one), merged into a single joint structure. Global contextual information is potentially kept intact (via weights), even though it needs to be detected via network analysis as proposed next."
]
] |
568ce2f5355d009ec9bc1471fb5ea74655f7e554 | In what sense if the proposed method interpretable? | [
"Unanswerable"
] | [
[]
] |
c000a43aff3cb0ad1cee5379f9388531b5521e9a | how are the bidirectional lms obtained? | [
"They pre-train forward and backward LMs separately, remove top layer softmax, and concatenate to obtain the bidirectional LMs."
] | [
[
"In our final system, after pre-training the forward and backward LMs separately, we remove the top layer softmax and concatenate the forward and backward LM embeddings to form bidirectional LM embeddings, i.e., INLINEFORM0 . Note that in our formulation, the forward and backward LMs are independent, without any shared parameters."
]
] |
a5b67470a1c4779877f0d8b7724879bbb0a3b313 | what metrics are used in evaluation? | [
"micro-averaged F1"
] | [
[
"We evaluate our approach on two well benchmarked sequence tagging tasks, the CoNLL 2003 NER task BIBREF13 and the CoNLL 2000 Chunking task BIBREF14 . We report the official evaluation metric (micro-averaged INLINEFORM0 ). In both cases, we use the BIOES labeling scheme for the output tags, following previous work which showed it outperforms other options BIBREF15 . Following BIBREF8 , we use the Senna word embeddings BIBREF2 and pre-processed the text by lowercasing all tokens and replacing all digits with 0."
]
] |
12cfbaace49f9363fcc10989cf92a50dfe0a55ea | what results do they achieve? | [
"91.93% F1 score on CoNLL 2003 NER task and 96.37% F1 score on CoNLL 2000 Chunking task"
] | [
[
"Our main contribution is to show that the context sensitive representation captured in the LM embeddings is useful in the supervised sequence tagging setting. When we include the LM embeddings in our system overall performance increases from 90.87% to 91.93% INLINEFORM0 for the CoNLL 2003 NER task, a more then 1% absolute F1 increase, and a substantial improvement over the previous state of the art. We also establish a new state of the art result (96.37% INLINEFORM1 ) for the CoNLL 2000 Chunking task."
]
] |
4640793d82aa7db30ad7b88c0bf0a1030e636558 | what previous systems were compared to? | [
"Chiu and Nichols (2016), Lample et al. (2016), Ma and Hovy (2016), Yang et al. (2017), Hashimoto et al. (2016), Søgaard and Goldberg (2016) "
] | [
[
"Tables TABREF15 and TABREF16 compare results from TagLM with previously published state of the art results without additional labeled data or task specific gazetteers. Tables TABREF17 and TABREF18 compare results of TagLM to other systems that include additional labeled data or gazetteers. In both tasks, TagLM establishes a new state of the art using bidirectional LMs (the forward CNN-BIG-LSTM and the backward LSTM-2048-512)."
]
] |
a9c5252173d3df1c06c770c180a77520de68531b | what are the evaluation datasets? | [
"CoNLL 2003, CoNLL 2000"
] | [
[
"We evaluate our approach on two well benchmarked sequence tagging tasks, the CoNLL 2003 NER task BIBREF13 and the CoNLL 2000 Chunking task BIBREF14 . We report the official evaluation metric (micro-averaged INLINEFORM0 ). In both cases, we use the BIOES labeling scheme for the output tags, following previous work which showed it outperforms other options BIBREF15 . Following BIBREF8 , we use the Senna word embeddings BIBREF2 and pre-processed the text by lowercasing all tokens and replacing all digits with 0."
]
] |
a45edc04277a458911086752af4f17405501230f | Are datasets publicly available? | [
"Yes"
] | [
[
"Despite the focus on sharing datasets and source codes on popular software development platforms such as GitHub (github.com) or Zenodo (zenodo.org), it is still a challenge to use data or code from other groups. The use of FAIR (Findable, Accessible, Interoperable and Reusable) (meta)data principles can guide the management of scientific data BIBREF125. Automated workflows that are easy to use and do not require programming knowledge encourage the flow of information from one discipline to the other. Platform-free solutions such as Docker (docker.com) in which an image of the source code is saved and can be opened without requiring further installation could accelerate the reproduction process. A recent initiative to provide a unified-framework for predictive models in genomics can quickly be adopted by the medicinal chemistry community BIBREF126."
]
] |
8c8a32592184c88f61fac1eef12c7d233dbec9dc | Are this models usually semi/supervised or unsupervised? | [
"Both supervised and unsupervised, depending on the task that needs to be solved."
] | [
[
"Machine translation finds use in cheminformatics in “translation\" from one language (e.g. reactants) to another (e.g. products). Machine translation is a challenging task because the syntactic and semantic dependencies of each language differ from one another and this may give rise to ambiguities. Neural Machine Translation (NMT) models benefit from the potential of deep learning architectures to build a statistical model that aims to find the most probable target sequence for an input sequence by learning from a corpus of examples BIBREF110, BIBREF111. The main advantage of NMT models is that they provide an end-to-end system that utilizes a single neural network to convert the source sequence into the target sequence. BIBREF110 refer to their model as a sequence-to-sequence (seq2seq) system that addresses a major limitation of DNNs that can only work with fixed-dimensionality information as input and output. However, in the machine translation task, the length of the input sequences is not fixed, and the length of the output sequences is not known in advance.",
"The variational Auto-encoder (VAE) is another widely adopted text generation architecture BIBREF101. BIBREF34 adopted this architecture for molecule generation. A traditional auto-encoder encodes the input into the latent space, which is then decoded to reconstruct the input. VAE differs from AE by explicitly defining a probability distribution on the latent space to generate new samples. BIBREF34 hypothesized that the variational part of the system integrates noise to the encoder, so that the decoder can be more robust to the large diversity of molecules. However, the authors also reported that the non-context free property of SMILES caused by matching ring numbers and parentheses might often lead the decoder to generate invalid SMILES strings. A grammar variational auto-encoder (GVAE), where the grammar for SMILES is explicitly defined instead of the auto-encoder learning the grammar itself, was proposed to address this issue BIBREF102. This way, the generation is based on the pre-defined grammar rules and the decoding process generates grammar production rules that should also be grammatically valid. Although syntactic validity would be ensured, the molecules may not have semantic validity (chemical validity). BIBREF103 built upon the VAE BIBREF34 and GVAE BIBREF102 architectures and introduced a syntax-directed variational autoencoder (SD-VAE) model for the molecular generation task. The syntax-direct generative mechanism in the decoder contributed to creating both syntactically and semantically valid SMILES sequences. BIBREF103 compared the latent representations of molecules generated by VAE, GVAE, and SD-VAE, and showed that SD-VAE provided better discriminative features for druglikeness. BIBREF104 proposed an adversarial AE for the same task. Conditional VAEs BIBREF105, BIBREF106 were trained to generate molecules conditioned on a desired property. The challenges that SMILES syntax presents inspired the introduction of new syntax such as DeepSMILES BIBREF29 and SELFIES BIBREF32 (details in Section SECREF3).",
"Generative Adversarial Network (GAN) models generate novel molecules by using two components: the generator network generates novel molecules, and the discriminator network aims to distinguish between the generated molecules and real molecules BIBREF107. In text generation models, the novel molecules are drawn from a distribution, which are then fine-tuned to obtain specific features, whereas adversarial learning utilizes generator and discriminator networks to produce novel molecules BIBREF107, BIBREF108. ORGAN BIBREF108, a molecular generation methodology, was built upon a sequence generative adversarial network (SeqGAN) from NLP BIBREF109. ORGAN integrated RL in order to generate molecules with desirable properties such as solubility, druglikeness, and synthetizability through using domain-specific rewards BIBREF108."
]
] |
16646ee77975fed372b76ce639e2664ae2105dcf | Is there any concrete example in the paper that shows that this approach had huge impact on drug discovery? | [
"Yes"
] | [
[
"The Word2Vec architecture has inspired a great deal of research in the bio/cheminformatics domains. The Word2Vec algorithm has been successfully applied for determining protein classes BIBREF44 and protein-protein interactions (PPI) BIBREF56. BIBREF44 treated 3-mers as the words of the protein sequence and observed that 3-mers with similar biophysical and biochemical properties clustered together when their embeddings were mapped onto the 2D space. BIBREF56, on the other hand, utilized BPE-based word segmentation (i.e. bio-words) to determine the words. The authors argued that the improved performance for bio-words in the PPI prediction task might be due to the segmentation-based model providing more distinct words than $k$-mers, which include repetitive segments. Another recent study treated multi-domain proteins as sentences in which each domain was recognized as a word BIBREF60. The Word2Vec algorithm was trained on the domains (i.e. PFAM domain identifiers) of eukaryotic protein sequences to learn semantically interpretable representations of them. The domain representations were then investigated in terms of the Gene Ontology (GO) annotations that they inherit. The results indicated that semantically similar domains share similar GO terms."
]
] |
9c0cf1630804366f7a79a40934e7495ad9f32346 | Do the authors analyze what kinds of cases their new embeddings fail in where the original, less-interpretable embeddings didn't? | [
"No"
] | [
[]
] |
a4d8fdcaa8adf99bdd1d7224f1a85c610659a9d3 | When they say "comparable performance", how much of a performance drop do these new embeddings result in? | [
"Performance was comparable, with the proposed method quite close and sometimes exceeding performance of baseline method."
] | [
[]
] |
9ac923be6ada1ba2aa20ad62b0a3e593bb94e085 | How do they evaluate interpretability? | [
"For evaluating the interpretability, we use $Coherence@k$ (Equation 6 ) , automated and manual word intrusion tests."
] | [
[
"$Coherence@k$ has been shown to have high correlation with human interpretability of topics learned via various topic modeling methods BIBREF7 . Hence, we can expect interpretable embeddings by maximizing it.",
"For evaluating the interpretability, we use $Coherence@k$ (Equation 6 ) , automated and manual word intrusion tests. In word intrusion test BIBREF14 , top $k(=5)$ entities along a dimension are mixed with the bottom most entity (the intruder) in that dimension and shuffled. Then multiple (3 in our case) human annotators are asked to find out the intruder. We use majority voting to finalize one intruder. Amazon Mechanical Turk was used for crowdsourcing the task and we used 25 randomly selected dimensions for evaluation. For automated word intrusion BIBREF7 , we calculate following score for all $k+1$ entities"
]
] |
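The manual word-intrusion setup described above (the top-$k$ words along a dimension mixed with that dimension's bottom-most word, then shuffled) can be sketched as follows. The automated score in the quote is cut off before its formula, so only the instance construction is shown; the toy vocabulary and random embeddings are assumptions.

```python
import random
import numpy as np

def intrusion_instance(embeddings: np.ndarray, vocab: list, dim: int, k: int = 5, seed: int = 0):
    """Build one word-intrusion instance for a given embedding dimension."""
    order = np.argsort(embeddings[:, dim])       # ascending by value on this dimension
    top_k = [vocab[i] for i in order[-k:]]       # k highest-ranked words
    intruder = vocab[order[0]]                   # bottom-most word on this dimension
    candidates = top_k + [intruder]
    random.Random(seed).shuffle(candidates)      # shuffle before showing to annotators
    return candidates, intruder

vocab = ["apple", "banana", "pear", "grape", "mango", "carburetor", "plum", "cherry"]
embeddings = np.random.default_rng(1).normal(size=(len(vocab), 10))
shown, intruder = intrusion_instance(embeddings, vocab, dim=3)
print(shown, "| intruder:", intruder)
```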
3b995a7358cefb271b986e8fc6efe807f25d60dc | What types of word representations are they evaluating? | [
"GloVE; SGNS"
] | [
[
"We trained word embeddings using either GloVe BIBREF11 or SGNS BIBREF12 on a small or a large corpus. The small corpus consists of the traditional Chinese part of Chinese Gigaword BIBREF13 and ASBC 4.0 BIBREF9 . The large corpus additionally includes the Chinese part of Wikipedia."
]
] |
2210178facc0e7b3b6341eec665f3c098abef5ac | What type of recurrent layers does the model use? | [
"GRU"
] | [
[]
] |
7cf726db952c12b1534cd6c29d8e7dfa78215f9e | What is a word confusion network? | [
"It is a network used to encode speech lattices to maintain a rich hypothesis space."
] | [
[
"We take a first step towards integrating ASR in an end-to-end SDS by passing on a richer hypothesis space to subsequent components. Specifically, we investigate how the richer ASR hypothesis space can improve DST. We focus on these two components because they are at the beginning of the processing pipeline and provide vital information for the subsequent SDS components. Typically, ASR systems output the best hypothesis or an n-best list, which the majority of DST approaches so far uses BIBREF11 , BIBREF8 , BIBREF7 , BIBREF12 . However, n-best lists can only represent a very limited amount of hypotheses. Internally, the ASR system maintains a rich hypothesis space in the form of speech lattices or confusion networks (cnets)."
]
] |
f9751e0ca03f49663a5fc82b33527bc8be1ed0aa | What type of simulations of real-time data feeds are used for validaton? | [
"simplified set of input data, in a variety of different formats that occur frequently in a healthcare setting"
] | [
[
"To support future large-scale operations, a multi-protocol message passing system was used for inter-module communication. This modular design also allows different components to be swapped out seamlessly, provided they continue to communicate via the expected interface. Routines were developed to simulate input data based on the authors experience with real healthcare data. The reasons for this choice were twofold: One, healthcare data can be high in incidental complexity, requiring one-off code to handle unusual inputs, but not necessarily in such a way as to significantly alter the fundamental engineering choices in a semantic enrichment engine such as this one. Two, healthcare data is strictly regulated, and the process for obtaining access to healthcare data for research can be cumbersome and time-consuming.",
"A simplified set of input data, in a variety of different formats that occur frequently in a healthcare setting, was used for simulation. In a production setting, the Java module that generates simulation data would be replaced by either a data source that directly writes to the input message queue or a Java module that intercepts or extracts production data, transforms it as needed, and writes it to the input message queue. A component-level view of the systems architecture is shown in Figure FIGREF7"
]
] |
ce18c50dadab7b9f28141fe615fd7de69355d9dd | How are FHIR and RDF combined? | [
"RDF was designed as an abstract information model and FHIR was designed for operational use in a healthcare setting, RDF makes statements of fact, whereas FHIR makes records of events, RDF is intended to have the property of monotonicity, meaning that previous facts cannot be invalidated by new facts"
] | [
[
"One of the several formats into which FHIR can be serialized is RDF. However, because RDF was designed as an abstract information model and FHIR was designed for operational use in a healthcare setting, there is the potential for a slight mismatch between the models. This comes up in two ways: One, RDF makes statements of fact, whereas FHIR makes records of events. The example given in the FHIR documentation is the difference between \"patient x has viral pneumonia\" (statement of fact) and \"Dr. Jones diagnosed patient x with viral pneumonia\" (record of event). Two, RDF is intended to have the property of monotonicity, meaning that previous facts cannot be invalidated by new facts. The example given for this mismatch is \"a modifier extension indicates that the surrounding element's meaning will likely be misunderstood if the modifier extension is not understood.\" The potential for serious error resulting from this mismatch is small, but it is worth bearing in mind when designing information systems."
]
] |
5a230fe4f0204bf2eebc0e944cf8defaf33d165c | What are the differences between FHIR and RDF? | [
"One of the several formats into which FHIR can be serialized is RDF, there is the potential for a slight mismatch between the models"
] | [
[
"One of the several formats into which FHIR can be serialized is RDF. However, because RDF was designed as an abstract information model and FHIR was designed for operational use in a healthcare setting, there is the potential for a slight mismatch between the models. This comes up in two ways: One, RDF makes statements of fact, whereas FHIR makes records of events. The example given in the FHIR documentation is the difference between \"patient x has viral pneumonia\" (statement of fact) and \"Dr. Jones diagnosed patient x with viral pneumonia\" (record of event). Two, RDF is intended to have the property of monotonicity, meaning that previous facts cannot be invalidated by new facts. The example given for this mismatch is \"a modifier extension indicates that the surrounding element's meaning will likely be misunderstood if the modifier extension is not understood.\" The potential for serious error resulting from this mismatch is small, but it is worth bearing in mind when designing information systems."
]
] |
d3bb06d730efbedd30ec226fe8cf828a4773bf5c | What do FHIR and RDF stand for? | [
"Health Level Seven Fast Healthcare Interoperability Resources (HL7 FHIR), Resource Description Framework (RDF)"
] | [
[
"Background ::: Health Level Seven Fast Healthcare Interoperability Resources (HL7 FHIR)",
"FHIR BIBREF5 is a new open standard for healthcare data developed by the same company that developed HL7v2. However, whereas HL7v2 uses an idiosyncratic data exchange format, FHIR uses data exchange formats based on those already in wide use on the World-Wide Web such as Extensible Markup Language (XML) and JavaScript Object Notation (JSON) BIBREF6, as well as the web's familiar transfer control protocols such as HyperText Transfer Protocol Secure (HTTPS) and Representational State Transfer (REST) BIBREF6 and system of contextual hyperlinks implemented with Uniform Resource Locators / Identifiers (URL/URI) BIBREF7. This design choice simplifies interoperability and discoverability and enables applications to be built rapidly on top of FHIR by the large number of engineers already familiar with web application design without a steep learning curve.",
"Background ::: Resource Description Framework (RDF)",
"RDF is the backbone of the semantic webBIBREF8. It is described as a framework, rather than a protocol or a standard, because it is an abstact model of information whose stated goal is \"to define a mechanism for describing resources that makes no assumptions about a particular application domain, nor defines (a priori) the semantics of any application domain.\" BIBREF12 Its concrete realization is typically a serialization into one of several formats including XML, JSON, TTL, etc."
]
] |
2255c36c8c7ed6084da577b480eb01d349f52943 | What is the motivation behind the work? Why question generation is an important task? | [
"Such a system would benefit educators by saving time to generate quizzes and tests."
] | [
[
"Existing question generating systems reported in the literature involve human-generated templates, including cloze type BIBREF0, rule-based BIBREF1, BIBREF2, or semi-automatic questions BIBREF3, BIBREF4, BIBREF5. On the other hand, machine learned models developed recently have used recurrent neural networks (RNNs) to perform sequence transduction, i.e. sequence-to-sequence BIBREF6, BIBREF7. In this work, we investigated an automatic question generation system based on a machine learning model that uses transformers instead of RNNs BIBREF8, BIBREF9. Our goal was to generate questions without templates and with minimal human involvement using machine learning transformers that have been demonstrated to train faster and better than RNNs. Such a system would benefit educators by saving time to generate quizzes and tests."
]
] |
9e391c8325b48f6119ca4f3d428b1b2b037f5c13 | Why did they choose WER as evaluation metric? | [
"WER can reflect our model's effectiveness in generating questions that are similar to those of SQuAD, WER can be used for initial analyses"
] | [
[
"A low WER would indicate that a model-generated question is similar to the reference question, while a high WER means that the generated question differs significantly from the reference question. It should be noted that a high WER does not necessarily invalidate the generated question, as different questions can have the same answers, and there could be various ways of phrasing the same question. On the other hand, a situation with low WER of 1 could be due to the question missing a pertinent word to convey the correct idea. Despite these shortcomings in using WER as a metric of success, the WER can reflect our model's effectiveness in generating questions that are similar to those of SQuAD based on the given reading passage and answer. WER can be used for initial analyses that can lead to deeper insights as discussed further below."
]
] |
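WER as used above is the word-level Levenshtein (edit) distance between the generated question and the reference, normalized by the reference length; that normalization is the standard definition and is assumed here. A minimal sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,                                # deletion
                           dp[i][j - 1] + 1,                                # insertion
                           dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]))   # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

reference = "what year did the war end"
generated = "what year did the conflict end"
print(round(word_error_rate(reference, generated), 3))  # 0.167: one substitution in six words
```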
5bcc12680cf2eda2dd13ab763c42314a26f2d993 | What evaluation metrics were used in the experiment? | [
"For sentence-level prediction they used tolerance accuracy, for segment retrieval accuracy and MRR and for the pipeline approach they used overall accuracy"
] | [
[
"Metrics. We use tolerance accuracy BIBREF16, which measures how far away the predicted span is from the gold standard span, as a metric. The rationale behind the metric is that, in practice, it suffices to recommend a rough span which contains the answer – a difference of a few seconds would not matter much to the user.",
"Metrics. We used accuracy and MRR (Mean Reciprocal Ranking) as metrics. The accuracy is",
"Metrics. To evaluate our pipeline approach we use overall accuracy after filtering and accuracy given that the segment is in the top 10 videos. While the first metric is similar to SECREF17, the second can indicate if initially searching on the video space can be used to improve our selection:"
]
] |
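The retrieval metrics mentioned above can be sketched briefly. MRR is standard (the mean of the reciprocal rank of the first correct segment per query); the exact tolerance used for tolerance accuracy is not given in the quote, so the tolerance parameter below is an assumption.

```python
def mean_reciprocal_rank(rankings, gold_items):
    """Average of 1 / rank of the first correct item, over all queries."""
    total = 0.0
    for ranking, gold in zip(rankings, gold_items):
        for rank, item in enumerate(ranking, start=1):
            if item == gold:
                total += 1.0 / rank
                break
    return total / len(gold_items)

def tolerance_accuracy(pred_spans, gold_spans, tol=1):
    """Fraction of predicted spans whose start and end sentence indices are
    within `tol` sentences of the gold span (tolerance value is assumed)."""
    hits = sum(abs(p[0] - g[0]) <= tol and abs(p[1] - g[1]) <= tol
               for p, g in zip(pred_spans, gold_spans))
    return hits / len(gold_spans)

print(mean_reciprocal_rank([["s1", "s4"], ["s2", "s5", "s3"]], ["s1", "s3"]))  # (1 + 1/3) / 2 ≈ 0.667
print(tolerance_accuracy([(3, 8), (10, 15)], [(4, 8), (20, 25)]))              # 0.5
```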
7a53668cf2da4557735aec0ecf5f29868584ebcf | What kind of instructional videos are in the dataset? | [
"tutorial videos for a photo-editing software"
] | [
[
"The remainder of this paper is structured as follows. Section SECREF3 introduces TutorialVQA dataset as a case study of our proposed problem. The dataset includes about 6,000 triples, comprised of videos, questions, and answer spans manually collected from screencast tutorial videos with spoken narratives for a photo-editing software. Section SECREF4 presents the baseline models and their experiment details on the sentence-level prediction and video segment retrieval tasks on our dataset. Then, we discuss the experimental results in Section SECREF5 and conclude the paper in Section SECREF6."
]
] |
8051927f914d730dfc61b2dc7a8580707b462e56 | What baseline algorithms were presented? | [
"a sentence-level prediction algorithm, a segment retrieval algorithm and a pipeline segment retrieval algorithm"
] | [
[
"Our video question answering task is novel and to our knowledge, no model has been designed specifically for this task. As a first step towards solving this problem, we evaluated the performance of state-of-the-art models developed for other QA tasks, including a sentence-level prediction task and two segment retrieval tasks. In this section, we report their results on the TutorialVQA dataset.",
"Baselines ::: Baseline1: Sentence-level prediction",
"Given a transcript (a sequence of sentences) and a question, Baseline1 predicts (starting sentence index, ending sentence index). The model is based on RaSor BIBREF13, which has been developed for the SQuAD QA task BIBREF6. RaSor concatenates the embedding vectors of the starting and the ending words to represent a span. Following this idea, Baseline1 represents a span of sentences by concatenating the vectors of the starting and ending sentences. The left diagram in Fig. FIGREF15 illustrates the Baseline1 model.",
"Model. The model takes two inputs, a transcript, $\\lbrace s_1, s_2, ... s_n\\rbrace $ where $s_i$ are individual sentences and a question, $q$. The output is the span scores, $y$, the scores over all possible spans. GLoVe BIBREF14 is used for the word representations in the transcript and the questions. We use two bi-LSTMs BIBREF15 to encode the transcript.",
"where n is the number of sentences . The output of Passage-level Encoding, $p$, is a sequence of vector, $p_i$, which represents the latent meaning of each sentence. Then, the model combines each pair of sentence embeddings ($p_i$, $p_j$) to generate a span embedding.",
"where [$\\cdot $,$\\cdot $] indicates the concatenation. Finally, we use a one-layer feed forward network to compute a score between each span and a question.",
"In training, we use cross-entropy as an objective function. In testing, the span with the highest score is picked as an answer.",
"Baselines ::: Baseline2: Segment retrieval",
"We also considered a simpler task by casting our problem as a retrieval task. Specifically, in addition to a plain transcript, we also provided the model with the segmentation information which was created during the data collection phrase (See Section. SECREF3). Note that each segments corresponds to a candidate answer. Then, the task is to pick the best segment for given a query. This task is easier than Baseline1's task in that the segmentation information is provided to the model. Unlike Baseline1, however, it is unable to return an answer span at various granularities. Baseline2 is based on the attentive LSTM BIBREF17, which has been developed for the InsuranceQA task. The right diagram in Fig. FIGREF15 illustrates the Baseline2 model.",
"Model. The two inputs, $s$ and $q$ represent the segment text and a question. The model first encodes the two inputs.",
"$h^s$ is then re-weighted using attention weights.",
"where $\\odot $ denotes the element-wise multiplication operation. The final score is computed using a one-layer feed-forward network.",
"During training, the model requires negative samples. For each positive example, (question, ground-truth segment), all the other segments in the same transcript are used as negative samples. Cross entropy is used as an objective function.",
"Baselines ::: Baseline3: Pipeline Segment retrieval",
"We construct a pipelined approach through another segment retrieval task, calculating the cosine similarities between the segment and question embeddings. In this task however, we want to test the accuracy of retrieving the segments given that we first retrieve the correct video from our 76 videos. First, we generate the TF-IDF embeddings for the whole video transcripts and questions. The next step involves retrieving the videos which have the lowest cosine distance between the video transcripts and question. We then filter and store the top ten videos, reducing the number of computations required in the next step. Finally, we calculate the cosine distances between the question and the segments which belong to the filtered top 10 videos, marking it as correct if found in these videos. While the task is less computationally expensive than the previous baseline, we do not learn the segment representations, as this task is a simple retrieval task based on TF-IDF embeddings.",
"Model. The first two inputs are are the question, q, and video transcript, v, encoded by their TF-IDF vectors: BIBREF18:",
"We then filter the top 10 video transcripts(out of 76) with the minimum cosine distance, and further compute the TF-IDF vectors for their segments, Stop10n, where n = 10. We repeat the process for the corresponding segments:",
"selecting the segment with the minimal cosine distance distance to the query."
]
] |
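Baseline3's two-stage retrieval described above (TF-IDF over whole transcripts to shortlist ten videos, then TF-IDF cosine distance over only those videos' segments) can be sketched with scikit-learn. This is a simplified reconstruction, not the authors' code; fitting each vectorizer jointly on the documents plus the question is an implementation choice made here for brevity.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

def retrieve_segment(question, video_transcripts, video_segments, top_n=10):
    """Return (video index, segment text) with minimal cosine distance to the
    question, searching only segments of the top_n closest transcripts."""
    doc_vecs = TfidfVectorizer().fit_transform(video_transcripts + [question])
    video_dists = cosine_distances(doc_vecs[:-1], doc_vecs[-1]).ravel()
    top_videos = video_dists.argsort()[:top_n]

    # Rank only the segments belonging to the shortlisted videos.
    candidates = [(vid, seg) for vid in top_videos for seg in video_segments[vid]]
    seg_vecs = TfidfVectorizer().fit_transform([seg for _, seg in candidates] + [question])
    seg_dists = cosine_distances(seg_vecs[:-1], seg_vecs[-1]).ravel()
    return candidates[seg_dists.argmin()]

transcripts = ["crop the image then export it", "adjust brightness and contrast levels"]
segments = [["crop the image", "export it as png"], ["adjust brightness", "tweak contrast"]]
print(retrieve_segment("how do I crop an image", transcripts, segments, top_n=1))
# -> the "crop the image" segment from the first video
```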
09621c9cd762e1409f22d501513858d67dcd3c7c | What is the source of the triples? | [
"a tutorial website about an image editing program "
] | [
[
"We downloaded 76 videos from a tutorial website about an image editing program . Each video is pre-processed to provide the transcripts and the time-stamp information for each sentence in the transcript. We then used Amazon Mechanical Turk to collect the question-answer pairs . One naive way of collecting the data is to prepare a question list and then, for each question, ask the workers to find the relevant parts in the video. However, this approach is not feasible and error-prone because the videos are typically long and finding a relevant part from a long video is difficult. Doing so might also cause us to miss questions which were relevant to the video segment. Instead, we took a reversed approach. First, for each video, we manually identified the sentence spans that can serve as answers. These candidates are of various granularity and may overlap. The segments are also complete in that they encompass the beginning and end of a task. In total, we identified 408 segments from the 76 videos. Second we asked AMT workers to provide question annotations for the videos."
]
] |
10ddac87daf153cf674589cc1c64a795907d5d9a | How much better is performance of the proposed model compared to the state of the art in these various experiments? | [
"significantly improves the accuracy and F1 score of aspect polarity classification"
] | [
[
"We build a joint model for the multi-task of ATE and APC based on the BERT-BASE model. After optimizing the model parameters according to the empirical result, the joint model based on BERT-BASE achieved hopeful performance on all three datasets and even surpassed other proposed BERT based improved models on some datasets, such as BERT-PT, AEN-BERT, SDGCN-BERT, and so on. Meanwhile, we implement the joint-task model based on BERT-SPC. Compared with the BERT-BASE model, BERT-SPC significantly improves the accuracy and F1 score of aspect polarity classification. In addition, for the first time, BERT-SPC has increased the F1 score of ATE subtask on three datasets up to 99%."
]
] |
6cd874c4ae8e70f3c98c7176191c13a7decfbc45 | What was state of the art on SemEval-2014 task4 Restaurant and Laptop dataset? | [
"BERT-ADA, BERT-PT, AEN-BERT, SDGCN-BERT"
] | [
[
"BERT-ADA BIBREF33 is a domain-adapted BERT-based model proposed for the APC task, which fine-tuned the BERT-BASE model on task-related corpus. This model obtained state-of-the-art accuracy on the Laptops dataset.",
"We build a joint model for the multi-task of ATE and APC based on the BERT-BASE model. After optimizing the model parameters according to the empirical result, the joint model based on BERT-BASE achieved hopeful performance on all three datasets and even surpassed other proposed BERT based improved models on some datasets, such as BERT-PT, AEN-BERT, SDGCN-BERT, and so on. Meanwhile, we implement the joint-task model based on BERT-SPC. Compared with the BERT-BASE model, BERT-SPC significantly improves the accuracy and F1 score of aspect polarity classification. In addition, for the first time, BERT-SPC has increased the F1 score of ATE subtask on three datasets up to 99%."
]
] |
b807dd3d42251615b881632caa5e331e2203d269 | What was previous state-of-the-art on four Chinese reviews datasets? | [
"GANN obtained the state-of-the-art APC performance on the Chinese review datasets"
] | [
[
"GANN is novel neural network model for APC task aimed to solve the shortcomings of traditional RNNs and CNNs. The GANN applied the Gate Truncation RNN (GTR) to learn informative aspect-dependent sentiment clue representations. GANN obtained the state-of-the-art APC performance on the Chinese review datasets."
]
] |
d39c911bf2479fdb7af339b59acb32073242fab3 | In what four Chinese review datasets does LCF-ATEPC achieves state of the art? | [
"Car, Phone, Notebook, Camera"
] | [
[
"To comprehensive evaluate the performance of the proposed model, the experiments were conducted in three most commonly used ABSA datasets, the Laptops and Restaurant datasets of SemEval-2014 Task4 subtask2 BIBREF0 and an ACL Twitter social dataset BIBREF34. To evaluate our model capability with processing the Chinese language, we also tested the performance of LCF-ATEPC on four Chinese comment datasets BIBREF35, BIBREF36, BIBREF29 (Car, Phone, Notebook, Camera). We preprocessed the seven datasets. We reformatted the origin dataset and annotated each sample with the IOB labels for ATE task and polarity labels for APC tasks, respectively. The polarity of each aspect on the Laptops, Restaurants and datasets may be positive, neutral, and negative, and the conflicting labels of polarity are not considered. The reviews in the four Chinese datasets have been purged, with each aspect may be positive or negative binary polarity. To verify the effectiveness and performance of LCF-ATEPC models on multilingual datasets, we built a multilingual dataset by mixing the 7 datasets. We adopt this dataset to conduct multilingual-oriented ATE and APC experiments."
]
] |
f53be1266be1fea5598a671080226c9c983b69e3 | Why authors think that researches do not pay attention to the research of the Chinese-oriented ABSA task? | [
"Unanswerable"
] | [
[]
] |
3be8859103016ce2afe4c0a8552b9d980f7958bf | What is specific to Chinese-oriented ABSA task, how is it different from other languages? | [
"Unanswerable"
] | [
[]
] |
e9f868f22ae70c7681c28228b6019e155013f8d6 | what is the size of this dataset? | [
"Unanswerable"
] | [
[]
] |
7aaaf7bff9947c6d3b954ae25be87e6e1c49db6d | did they use a crowdsourcing platform for annotations? | [
"Yes"
] | [
[
"In this study, we constructed the first Japanese word similarity dataset. It contains various parts of speech and includes rare words in addition to common words. Crowdsourced annotators assigned similarity to word pairs during the word similarity task. We gave examples of similarity in the task request sent to annotators, so that we reduced the variance of each word pair. However, we did not restrict the attributes of words, such as the level of feeling, during annotation. Error analysis revealed that the notion of similarity should be carefully defined when constructing a similarity dataset."
]
] |
4a2248e1c71c0b0183ab0d225440cae2da396b8d | where does the data come from? | [
"Evaluation Dataset of Japanese Lexical Simplification kodaira"
] | [
[
"We extracted Japanese word pairs from the Evaluation Dataset of Japanese Lexical Simplification kodaira. It targeted content words (nouns, verbs, adjectives, adverbs). It included 10 contexts about target words annotated with their lexical substitutions and rankings. Figure FIGREF1 shows an example of the dataset. A word in square brackets in the text is represented as a target word of simplification. A target word is not only recorded in the lemma form but also in the conjugated form. We built a Japanese similarity dataset from this dataset using the following procedure."
]
] |
1244cf6d75e3aa6d605a0f4b141c015923a3f2e7 | What is the criteria for a good metric? | [
"The first one is that it should be highly correlated with human judgement of similarity. The second one is that it should be able to distinguish sentences which are in logical contradiction, logically unrelated or in logical agreement. The third one is that a robust evaluator should also be able to identify unintelligible sentences. The last criteria is that a good evaluation metric should not give high scores to semantically distant sentences and low scores to semantically related sentences."
] | [
[
"In our methodology to design new evaluation metrics for comparing reference summaries/translations to hypothesis ones, we established first-principles criteria on what a good evaluator should do. The first one is that it should be highly correlated with human judgement of similarity. The second one is that it should be able to distinguish sentences which are in logical contradiction, logically unrelated or in logical agreement. The third one is that a robust evaluator should also be able to identify unintelligible sentences. The last criteria is that a good evaluation metric should not give high scores to semantically distant sentences and low scores to semantically related sentences."
]
] |
c8b9b962e4d40c50150b2f8873a4004f25398464 | What are the three limitations? | [
"High scores to semantically opposite translations/summaries, Low scores to semantically related translations/summaries and High scores to unintelligible translations/summaries."
] | [
[
"In this part, we discuss three significant limitations of BLEU and ROUGE. These metrics can assign: High scores to semantically opposite translations/summaries, Low scores to semantically related translations/summaries and High scores to unintelligible translations/summaries.",
"Challenges with BLEU and ROUGE ::: High score, opposite meanings",
"Suppose that we have a reference summary s1. By adding a few negation terms to s1, one can create a summary s2 which is semantically opposite to s1 but yet has a high BLEU/ROUGE score.",
"Challenges with BLEU and ROUGE ::: Low score, similar meanings",
"In addition not to be sensitive to negation, BLEU and ROUGE score can give low scores to sentences with equivalent meaning. If s2 is a paraphrase of s1, the meaning will be the same ;however, the overlap between words in s1 and s2 will not necessarily be significant.",
"Challenges with BLEU and ROUGE ::: High score, unintelligible sentences",
"A third weakness of BLEU and ROUGE is that in their simplest implementations, they are insensitive to word permutation and can give very high scores to unintelligible sentences. Let s1 be \"On a morning, I saw a man running in the street.\" and s2 be “On morning a, I saw the running a man street”. s2 is not an intelligible sentence. The unigram version of ROUGE and BLEU will give these 2 sentences a score of 1."
]
] |
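The word-permutation weakness quoted above is easy to reproduce: unigram precision and recall ignore order entirely. A minimal sketch (using an exact permutation of the reference; the quoted s2 additionally drops the word "in", so its overlap is close to, rather than exactly, 1):

```python
from collections import Counter

def unigram_overlap(reference: str, candidate: str):
    """Clipped unigram precision and recall, the core of BLEU-1 / ROUGE-1."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    overlap = sum((ref_counts & cand_counts).values())
    return overlap / sum(cand_counts.values()), overlap / sum(ref_counts.values())

s1 = "on a morning i saw a man running in the street"
shuffled = "street the in running man a saw i morning a on"   # unintelligible permutation of s1
print(unigram_overlap(s1, shuffled))  # (1.0, 1.0) despite the candidate being word salad
```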
616c205142c7f37b3f4e81c0d1c52c79f926bcdc | What is current state-of-the-art model? | [
"SUMBT BIBREF17 is the current state-of-the-art model on WOZ 2.0, TRADE BIBREF3 is the current published state-of-the-art model"
] | [
[
"We first evaluate our model on MultiWOZ 2.0 dataset as shown in Table TABREF16. We compare with five published baselines. TRADE BIBREF3 is the current published state-of-the-art model. It utilizes an encoder-decoder architecture that takes dialogue contexts as source sentences, and takes state annotations as target sentences. SUMBT BIBREF17 fine-tunes a pre-trained BERT model BIBREF11 to learn slot and utterance representations. Neural Reading BIBREF18 learns a question embedding for each slot, and predicts the span of each slot value. GCE BIBREF7 is a model improved over GLAD BIBREF6 by using a slot-conditioned global module. Details about baselines are in Section SECREF6.",
"Table TABREF24 shows the results on WOZ $2.0$ dataset. We compare with four published baselines. SUMBT BIBREF17 is the current state-of-the-art model on WOZ 2.0 dataset. It fine-tunes a pre-trained BERT model BIBREF11 to learn slot and utterance representations. StateNet PSI BIBREF5 maps contextualized slot embeddings and value embeddings into the same vector space, and calculate the Euclidean distance between these two. It also learns a joint model of all slots, enabling parameter sharing between slots. GLAD BIBREF6 proposes to use a global module to share parameters between slots and a local module to learn slot-specific features. Neural Beflief Tracker BIBREF4 applies CNN to learn n-gram utterance representations. Unlike prior works that transfer knowledge between slots by sharing parameters, our model implicitly transfers knowledge by formulating each slot as a question and learning to answer all the questions. Our model has a $1.24\\%$ relative joint accuracy improvement over StateNet PSI. Although SUMBT achieves higher joint accuracy than DSTQA on WOZ $2.0$ dataset, DSTQA achieves better performance than SUMBT on MultiWOZ $2.0$ dataset, which is a more challenging dataset."
]
] |
496e81769a8d9992dae187ed60639ff2eec531f3 | Which language(s) are found in the WSD datasets? | [
" WSD is predominantly evaluated on English, we are also interested in evaluating our approach on Chinese"
] | [
[
"While WSD is predominantly evaluated on English, we are also interested in evaluating our approach on Chinese, to evaluate the effectiveness of our approach in a different language. We use OntoNotes Release 5.0, which contains a number of annotations including word senses for Chinese. We follow the data setup of BIBREF26 and conduct an evaluation on four genres, i.e., broadcast conversation (BC), broadcast news (BN), magazine (MZ), and newswire (NW), as well as the concatenation of all genres. While the training and development datasets are divided into genres, we train on the concatenation of all genres and test on each individual genre."
]
] |
f103789b85b00ec973076652c639bd31c605381e | What datasets are used for testing? | [
"Senseval-2 (SE2), Senseval-3 (SE3), SemEval 2013 task 12 (SE13), and SemEval 2015 task 13 (SE15), OntoNotes Release 5.0"
] | [
[
"For English all-words WSD, we train our WSD model on SemCor BIBREF24, and test it on Senseval-2 (SE2), Senseval-3 (SE3), SemEval 2013 task 12 (SE13), and SemEval 2015 task 13 (SE15). This common benchmark, which has been annotated with WordNet-3.0 senses BIBREF25, has recently been adopted in English all-words WSD. Following BIBREF9, we choose SemEval 2007 Task 17 (SE07) as our development data to pick the best model parameters after a number of neural network updates, for models that require back-propagation training.",
"While WSD is predominantly evaluated on English, we are also interested in evaluating our approach on Chinese, to evaluate the effectiveness of our approach in a different language. We use OntoNotes Release 5.0, which contains a number of annotations including word senses for Chinese. We follow the data setup of BIBREF26 and conduct an evaluation on four genres, i.e., broadcast conversation (BC), broadcast news (BN), magazine (MZ), and newswire (NW), as well as the concatenation of all genres. While the training and development datasets are divided into genres, we train on the concatenation of all genres and test on each individual genre."
]
] |
9c4a4dfa7b0b977173e76e2d2f08fa984af86f0e | How does TP-N2F compare to LSTM-based Seq2Seq in terms of training and inference speed? | [
"Full Testing Set accuracy: 84.02\nCleaned Testing Set accuracy: 93.48"
] | [
[
"Generating Lisp programs requires sensitivity to structural information because Lisp code can be regarded as tree-structured. Given a natural-language query, we need to generate code containing function calls with parameters. Each function call is a relational tuple, which has a function as the relation and parameters as arguments. We evaluate our model on the AlgoLisp dataset for this task and achieve state-of-the-art performance. The AlgoLisp dataset BIBREF17 is a program synthesis dataset. Each sample contains a problem description, a corresponding Lisp program tree, and 10 input-output testing pairs. We parse the program tree into a straight-line sequence of tuples (same style as in MathQA). AlgoLisp provides an execution script to run the generated program and has three evaluation metrics: the accuracy of passing all test cases (Acc), the accuracy of passing 50% of test cases (50p-Acc), and the accuracy of generating an exactly matching program (M-Acc). AlgoLisp has about 10% noisy data (details in the Appendix), so we report results both on the full test set and the cleaned test set (in which all noisy testing samples are removed). TP-N2F is compared with an LSTM seq2seq with attention model, the Seq2Tree model in BIBREF17, and a seq2seq model with a pre-trained tree decoder from the Tree2Tree autoencoder (SAPS) reported in BIBREF18. As shown in Table TABREF18, TP-N2F outperforms all existing models on both the full test set and the cleaned test set. Ablation experiments with TP2LSTM and LSTM2TP show that, for this task, the TP-N2F Decoder is more helpful than TP-N2F Encoder. This may be because lisp codes rely more heavily on structure representations."
]
] |
4c7ac51a66c15593082e248451e8f6896e476ffb | What is the performance proposed model achieved on AlgoList benchmark? | [
"Full Testing Set Accuracy: 84.02\nCleaned Testing Set Accuracy: 93.48"
] | [
[
"Generating Lisp programs requires sensitivity to structural information because Lisp code can be regarded as tree-structured. Given a natural-language query, we need to generate code containing function calls with parameters. Each function call is a relational tuple, which has a function as the relation and parameters as arguments. We evaluate our model on the AlgoLisp dataset for this task and achieve state-of-the-art performance. The AlgoLisp dataset BIBREF17 is a program synthesis dataset. Each sample contains a problem description, a corresponding Lisp program tree, and 10 input-output testing pairs. We parse the program tree into a straight-line sequence of tuples (same style as in MathQA). AlgoLisp provides an execution script to run the generated program and has three evaluation metrics: the accuracy of passing all test cases (Acc), the accuracy of passing 50% of test cases (50p-Acc), and the accuracy of generating an exactly matching program (M-Acc). AlgoLisp has about 10% noisy data (details in the Appendix), so we report results both on the full test set and the cleaned test set (in which all noisy testing samples are removed). TP-N2F is compared with an LSTM seq2seq with attention model, the Seq2Tree model in BIBREF17, and a seq2seq model with a pre-trained tree decoder from the Tree2Tree autoencoder (SAPS) reported in BIBREF18. As shown in Table TABREF18, TP-N2F outperforms all existing models on both the full test set and the cleaned test set. Ablation experiments with TP2LSTM and LSTM2TP show that, for this task, the TP-N2F Decoder is more helpful than TP-N2F Encoder. This may be because lisp codes rely more heavily on structure representations."
]
] |