question_id (string, length 40) | question (string, length 4-171) | answer (sequence) | evidence (sequence) |
---|---|---|---|
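Each row below follows the four-column layout above: a 40-character question_id, a free-text question, a list of reference answers, and an aligned list of evidence-passage lists. As a reading aid only, here is a minimal Python sketch of one such record; the `QARecord` class and its field names are illustrative assumptions that mirror the header, not part of the dataset itself, and the example values are copied from one of the rows below.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class QARecord:
    question_id: str           # 40-character identifier
    question: str              # free-text question (4-171 characters)
    answer: List[str]          # one or more reference answers
    evidence: List[List[str]]  # one list of supporting passages per answer

row = QARecord(
    question_id="5fa431b14732b3c47ab6eec373f51f2bca04f614",
    question="Which lexicon-based models did they compare with?",
    answer=["TF-IDF, NVDM"],
    evidence=[[
        "TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline. ...",
        "NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 .",
    ]],
)
assert len(row.answer) == len(row.evidence)  # answers and evidence lists stay aligned
```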
ec91b87c3f45df050e4e16018d2bf5b62e4ca298 | What is the baseline used? | [
"Unanswerable"
] | [
[]
] |
f129c97a81d81d32633c94111018880a7ffe16d1 | Which attention mechanisms do they compare? | [
"Soft attention, Hard Stochastic attention, Local Attention"
] | [
[
"We evaluate three models of the image attention mechanism INLINEFORM0 of equation EQREF11 . They have in common the fact that at each time step INLINEFORM1 of the decoding phase, all approaches first take as input the annotation sequence INLINEFORM2 to derive a time-dependent context vector that contain relevant information in the image to help predict the current target word INLINEFORM3 . Even though these models differ in how the time-dependent context vector is derived, they share the same subsequent steps. For each mechanism, we propose two hand-picked illustrations showing where the attention is placed in an image.",
"Soft attention",
"Hard Stochastic attention",
"Local Attention"
]
] |
100cf8b72d46da39fedfe77ec939fb44f25de77f | Which paired corpora did they use in the other experiment? | [
"dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1",
"Chinese dataset BIBREF0"
] | [
[
"In addition to the unsupervised training, we explore a semi-supervised training framework to combine the proposed unsupervised model and the supervised model. In this scenario we have a paired dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1 . The supervised model is trained on INLINEFORM2 so that we can learn the matching or mapping between articles and comments. By sharing the encoder of the supervised model and the unsupervised model, we can jointly train both the models with a joint objective function: DISPLAYFORM0"
],
[
"We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words.",
"We analyze the performance of the proposed method under the semi-supervised setting. We train the supervised IR model with different numbers of paired data. Figure FIGREF39 shows the curve (blue) of the recall1 score. As expected, the performance grows as the paired dataset becomes larger. We further combine the supervised IR with our unsupervised model, which is trained with full unpaired data (4.8M) and different number of paired data (from 50K to 4.8M). It shows that IR+Proposed can outperform the supervised IR model given the same paired dataset. It concludes that the proposed model can exploit the unpaired data to further improve the performance of the supervised model."
]
] |
8cc56fc44136498471754186cfa04056017b4e54 | By how much does their system outperform the lexicon-based models? | [
"Under the retrieval evaluation setting, their proposed model + IR2 had better MRR than NVDM by 0.3769, better MR by 4.6, and better Recall@10 by 20 . \nUnder the generative evaluation setting the proposed model + IR2 had better BLEU by 0.044 , better CIDEr by 0.033, better ROUGE by 0.032, and better METEOR by 0.029",
"Proposed model is better than both lexical based models by significan margin in all metrics: BLEU 0.261 vs 0.250, ROUGLE 0.162 vs 0.155 etc."
] | [
[
"NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.",
"Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. The neural variational document model is based on the neural VAE framework. It can capture the semantic information, so it has better performance than the TF-IDF model. LDA models the topic information, and captures the deeper relationship between the article and comments, so it achieves improvement in all relevance metrics. Finally, our proposed model outperforms all these unsupervised methods, mainly because the proposed model learns both the semantics and the topic information.",
"Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation. Still, the supervised IR achieves better scores than the seq2seq model. With the help of our proposed model, both IR and S2S achieve an improvement under the semi-supervised scenarios."
],
[
"TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline. We use the concatenation of the title and the body as the query to retrieve the candidate comment set by means of the similarity of the tf-idf value. The model is trained on unpaired articles and comments, which is the same as our proposed model.",
"NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.",
"Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. The neural variational document model is based on the neural VAE framework. It can capture the semantic information, so it has better performance than the TF-IDF model. LDA models the topic information, and captures the deeper relationship between the article and comments, so it achieves improvement in all relevance metrics. Finally, our proposed model outperforms all these unsupervised methods, mainly because the proposed model learns both the semantics and the topic information.",
"Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation. Still, the supervised IR achieves better scores than the seq2seq model. With the help of our proposed model, both IR and S2S achieve an improvement under the semi-supervised scenarios."
]
] |
5fa431b14732b3c47ab6eec373f51f2bca04f614 | Which lexicon-based models did they compare with? | [
"TF-IDF, NVDM"
] | [
[
"TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline. We use the concatenation of the title and the body as the query to retrieve the candidate comment set by means of the similarity of the tf-idf value. The model is trained on unpaired articles and comments, which is the same as our proposed model.",
"NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic."
]
] |
33ccbc401b224a48fba4b167e86019ffad1787fb | How many comments were used? | [
"from 50K to 4.8M"
] | [
[
"We analyze the performance of the proposed method under the semi-supervised setting. We train the supervised IR model with different numbers of paired data. Figure FIGREF39 shows the curve (blue) of the recall1 score. As expected, the performance grows as the paired dataset becomes larger. We further combine the supervised IR with our unsupervised model, which is trained with full unpaired data (4.8M) and different number of paired data (from 50K to 4.8M). It shows that IR+Proposed can outperform the supervised IR model given the same paired dataset. It concludes that the proposed model can exploit the unpaired data to further improve the performance of the supervised model."
]
] |
cca74448ab0c518edd5fc53454affd67ac1a201c | How many articles did they have? | [
"198,112"
] | [
[
"We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words."
]
] |
b69ffec1c607bfe5aa4d39254e0770a3433a191b | What news comment dataset was used? | [
"Chinese dataset BIBREF0"
] | [
[
"We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words."
]
] |
f5cf8738e8d211095bb89350ed05ee7f9997eb19 | By how much do they outperform standard BERT? | [
"up to four percentage points in accuracy"
] | [
[
"In this paper we presented a way of enriching BERT with knowledge graph embeddings and additional metadata. Exploiting the linked knowledge that underlies Wikidata improves performance for our task of document classification. With this approach we improve the standard BERT models by up to four percentage points in accuracy. Furthermore, our results reveal that with task-specific information such as author names and publication metadata improves the classification task essentially compared a text-only approach. Especially, when metadata feature engineering is less trivial, adding additional task-specific information from an external knowledge source such as Wikidata can help significantly. The source code of our experiments and the trained models are publicly available."
]
] |
bed527bcb0dd5424e69563fba4ae7e6ea1fca26a | What dataset do they use? | [
"2019 GermEval shared task on hierarchical text classification",
"GermEval 2019 shared task"
] | [
[
"In this paper, we work with the dataset of the 2019 GermEval shared task on hierarchical text classification BIBREF0 and use the predefined set of labels to evaluate our approach to this classification task."
],
[
"Our experiments are modelled on the GermEval 2019 shared task and deal with the classification of books. The dataset contains 20,784 German books. Each record has:"
]
] |
aeab5797b541850e692f11e79167928db80de1ea | How do they combine text representations with the knowledge graph embeddings? | [
"all three representations are concatenated and passed into a MLP"
] | [
[
"The BERT architecture uses 12 hidden layers, each layer consists of 768 units. To derive contextualized representations from textual features, the book title and blurb are concatenated and then fed through BERT. To minimize the GPU memory consumption, we limit the input length to 300 tokens (which is shorter than BERT's hard-coded limit of 512 tokens). Only 0.25% of blurbs in the training set consist of more than 300 words, so this cut-off can be expected to have minor impact.",
"The non-text features are generated in a separate preprocessing step. The metadata features are represented as a ten-dimensional vector (two dimensions for gender, see Section SECREF10). Author embedding vectors have a length of 200 (see Section SECREF22). In the next step, all three representations are concatenated and passed into a MLP with two layers, 1024 units each and ReLu activation function. During training, the MLP is supposed to learn a non-linear combination of its input representations. Finally, the output layer does the actual classification. In the SoftMax output layer each unit corresponds to a class label. For sub-task A the output dimension is eight. We treat sub-task B as a standard multi-label classification problem, i. e., we neglect any hierarchical information. Accordingly, the output layer for sub-task B has 343 units. When the value of an output unit is above a given threshold the corresponding label is predicted, whereby thresholds are defined separately for each class. The optimum was found by varying the threshold in steps of $0.1$ in the interval from 0 to 1."
]
] |
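The evidence above describes concatenating a 768-dimensional BERT text representation, a ten-dimensional metadata vector, and a 200-dimensional author embedding, then passing the result through a two-layer MLP (1024 ReLU units per layer) and a softmax output layer with eight units for sub-task A. A minimal PyTorch sketch of that fusion head follows; the class name and anything beyond the stated dimensions are assumptions, and the softmax is left to the loss function.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Concatenate text, metadata and author representations, then classify."""

    def __init__(self, n_classes: int = 8):  # 8 output units for sub-task A
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(768 + 10 + 200, 1024), nn.ReLU(),  # layer 1, 1024 units
            nn.Linear(1024, 1024), nn.ReLU(),            # layer 2, 1024 units
        )
        self.out = nn.Linear(1024, n_classes)            # logits; softmax via the loss

    def forward(self, text_vec, meta_vec, author_vec):
        x = torch.cat([text_vec, meta_vec, author_vec], dim=-1)  # concatenation step
        return self.out(self.mlp(x))

# e.g. one example with the assumed dimensions:
logits = FusionHead()(torch.randn(1, 768), torch.randn(1, 10), torch.randn(1, 200))
```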
bfa3776c30cb30e0088e185a5908e5172df79236 | What is the algorithm used for the classification tasks? | [
"Random Forest Ensemble classifiers"
] | [
[
"To test whether topic models can be used for dating poetry or attributing authorship, we perform supervised classification experiments with Random Forest Ensemble classifiers. We find that we obtain better results by training and testing on stanzas instead of full poems, as we have more data available. Also, we use 50 year slots (instead of 25) to ease the task."
]
] |
a2a66726a5dca53af58aafd8494c4de833a06f14 | Is the outcome of the LDA analysis evaluated in any way? | [
"Yes"
] | [
[
"The Style baseline achieves an Accuracy of 83%, LDA features 89% and a combination of the two gets 90%. However, training on full poems reduces this to 42—52%."
]
] |
ee87608419e4807b9b566681631a8cd72197a71a | What is the corpus used in the study? | [
"TextGrid Repository",
"The Digital Library in the TextGrid Repository"
] | [
[
"The Digital Library in the TextGrid Repository represents an extensive collection of German texts in digital form BIBREF3. It was mined from http://zeno.org and covers a time period from the mid 16th century up to the first decades of the 20th century. It contains many important texts that can be considered as part of the literary canon, even though it is far from complete (e.g. it contains only half of Rilke’s work). We find that around 51k texts are annotated with the label ’verse’ (TGRID-V), not distinguishing between ’lyric verse’ and ’epic verse’. However, the average length of these texts is around 150 token, dismissing most epic verse tales. Also, the poems are distributed over 229 authors, where the average author contributed 240 poems (median 131 poems). A drawback of TGRID-V is the circumstance that it contains a noticeable amount of French, Dutch and Latin (over 400 texts). To constrain our dataset to German, we filter foreign language material with a stopword list, as training a dedicated language identification classifier is far beyond the scope of this work."
],
[
"The Digital Library in the TextGrid Repository represents an extensive collection of German texts in digital form BIBREF3. It was mined from http://zeno.org and covers a time period from the mid 16th century up to the first decades of the 20th century. It contains many important texts that can be considered as part of the literary canon, even though it is far from complete (e.g. it contains only half of Rilke’s work). We find that around 51k texts are annotated with the label ’verse’ (TGRID-V), not distinguishing between ’lyric verse’ and ’epic verse’. However, the average length of these texts is around 150 token, dismissing most epic verse tales. Also, the poems are distributed over 229 authors, where the average author contributed 240 poems (median 131 poems). A drawback of TGRID-V is the circumstance that it contains a noticeable amount of French, Dutch and Latin (over 400 texts). To constrain our dataset to German, we filter foreign language material with a stopword list, as training a dedicated language identification classifier is far beyond the scope of this work."
]
] |
cda4612b4bda3538d19f4b43dde7bc30c1eda4e5 | What are the traditional methods for identifying important attributes? | [
"automated attribute-value extraction, score the attributes using the Bayes model, evaluate their importance with several different frequency metrics, aggregate the weights from different sources into one consistent typicality score using a Ranking SVM model, OntoRank algorithm",
"TextRank, Word2vec BIBREF19, GloVe BIBREF20"
] | [
[
"Many proposed approaches formulate the entity attribute ranking problem as a post processing step of automated attribute-value extraction. In BIBREF0 , BIBREF1 , BIBREF2 , Pasca et al. firstly extract potential class-attribute pairs using linguistically motivated patterns from unstructured text including query logs and query sessions, and then score the attributes using the Bayes model. In BIBREF3 , Rahul Rai proposed to identify product attributes from customer online reviews using part-of-speech(POS) tagging patterns, and to evaluate their importance with several different frequency metrics. In BIBREF4 , Lee et al. developed a system to extract concept-attribute pairs from multiple data sources, such as Probase, general web documents, query logs and external knowledge base, and aggregate the weights from different sources into one consistent typicality score using a Ranking SVM model. Those approaches typically suffer from the poor quality of the pattern rules, and the ranking process is used to identify relatively more precise attributes from all attribute candidates.",
"As for an already existing knowledge graph, there is plenty of work in literature dealing with ranking entities by relevance without or with a query. In BIBREF5 , Li et al. introduced the OntoRank algorithm for ranking the importance of semantic web objects at three levels of granularity: document, terms and RDF graphs. The algorithm is based on the rational surfer model, successfully used in the Swoogle semantic web search engine. In BIBREF6 , Hogan et al. presented an approach that adapted the well-known PageRank/HITS algorithms to semantic web data, which took advantage of property values to rank entities. In BIBREF7 , BIBREF8 , authors also focused on ranking entities, sorting the semantic web resources based on importance, relevance and query length, and aggregating the features together with an overall ranking model."
],
[
"The existing co-occurrence methods do not suit our application scenario at all, since exact string matching is too strong a requirement and initial trial has shown its incompetency. In stead we implemented an improved version of their method based on TextRank as our baseline. In addition, we also tested multiple semantic matching algorithms for comparison with our chosen method.",
"TextRank: TextRank is a graph-based ranking model for text processing. BIBREF18 It is an unsupervised algorithm for keyword extraction. Since product attributes are usually the keywords in enquiries, we can compare these keywords with the category attributes and find the most important attributes. This method consists of three steps. The first step is to merge all enquiries under one category as an article. The second step is to extract the top 50 keywords for each category. The third step is to find the most important attributes by comparing top keywords with category attributes.",
"Word2vec BIBREF19 : We use the word vector trained by BIBREF19 as the distributed representation of words. Then we get the enquiry sentence representation and category attribute representation. Finally we collect the statistics about the matched attributes of each category, and select the most frequent attributes under the same category.",
"GloVe BIBREF20 : GloVe is a global log-bilinear regression model for the unsupervised learning of word representations, which utilizes the ratios of word-word co-occurrence probabilities. We use the GloVe method to train the distributed representation of words. And attribute selection procedure is the same as word2vec."
]
] |
e12674f0466f8c0da109b6076d9939b30952c7da | What do you use to calculate word/sub-word embeddings? | [
"FastText"
] | [
[
"Evaluating FastText, GloVe and word2vec, we show that compared to other word representation learning algorithms, the FastText performs best. We sample and analyze the category attributes and find that many self-filled attributes contain misspellings. The FastText algorithm represents words by a sum of its character n-grams and it is much robust against problems like misspellings. In summary, FastText has greater advantages in dealing with natural language corpus usually with spelling mistakes."
]
] |
9fe6339c7027a1a0caffa613adabe8b5bb6a7d4a | What user generated text data do you use? | [
"Unanswerable"
] | [
[]
] |
b5c3787ab3784214fc35f230ac4926fe184d86ba | Did they propose other metrics? | [
"Yes"
] | [
[
"We introduce our proposed diversity, density, and homogeneity metrics with their detailed formulations and key intuitions."
]
] |
9174aded45bc36915f2e2adb6f352f3c7d9ada8b | Which real-world datasets did they use? | [
"SST-2 (Stanford Sentiment Treebank, version 2), Snips",
"SST-2, Snips"
] | [
[
"In the first task, we use the SST-2 (Stanford Sentiment Treebank, version 2) dataset BIBREF25 to conduct sentiment analysis experiments. SST-2 is a sentence binary classification dataset with train/dev/test splits provided and two types of sentence labels, i.e., positive and negative.",
"The second task involves two essential problems in SLU, which are intent classification (IC) and slot labeling (SL). In IC, the model needs to detect the intention of a text input (i.e., utterance, conveys). For example, for an input of I want to book a flight to Seattle, the intention is to book a flight ticket, hence the intent class is bookFlight. In SL, the model needs to extract the semantic entities that are related to the intent. From the same example, Seattle is a slot value related to booking the flight, i.e., the destination. Here we experiment with the Snips dataset BIBREF26, which is widely used in SLU research. This dataset contains test spoken utterances (text) classified into one of 7 intents."
],
[
"In the first task, we use the SST-2 (Stanford Sentiment Treebank, version 2) dataset BIBREF25 to conduct sentiment analysis experiments. SST-2 is a sentence binary classification dataset with train/dev/test splits provided and two types of sentence labels, i.e., positive and negative.",
"The second task involves two essential problems in SLU, which are intent classification (IC) and slot labeling (SL). In IC, the model needs to detect the intention of a text input (i.e., utterance, conveys). For example, for an input of I want to book a flight to Seattle, the intention is to book a flight ticket, hence the intent class is bookFlight. In SL, the model needs to extract the semantic entities that are related to the intent. From the same example, Seattle is a slot value related to booking the flight, i.e., the destination. Here we experiment with the Snips dataset BIBREF26, which is widely used in SLU research. This dataset contains test spoken utterances (text) classified into one of 7 intents."
]
] |
a8f1029f6766bffee38a627477f61457b2d6ed5c | How did they obtain human intuitions? | [
"Unanswerable"
] | [
[]
] |
a2103e7fe613549a9db5e65008f33cf2ee0403bd | What are the country-specific drivers of international development rhetoric? | [
"wealth , democracy , population, levels of ODA, conflict "
] | [
[
"Yet surprisingly little is known about the agenda-setting process for international development in global governance institutions. This is perhaps best demonstrated by the lack of information on how the different goals and targets of the MDGs were decided, which led to much criticism and concern about the global governance of development BIBREF1 . More generally, we know little about the types of development issues that different countries prioritise, or whether country-specific factors such as wealth or democracy make countries more likely to push for specific development issues to be put on the global political agenda.",
"The analysis of discussion of international development in annual UN General Debate statements therefore uncovers two principle development topics: economic development and sustainable development. We find that discussion of Topic 2 is not significantly impacted by country-specific factors, such as wealth, population, democracy, levels of ODA, and conflict (although there are regional effects). However, we find that the extent to which countries discuss sustainable development (Topic 7) in their annual GD statements varies considerably according to these different structural factors. The results suggest that broadly-speaking we do not observe linear trends in the relationship between these country-specific factors and discussion of Topic 7. Instead, we find that there are significant fluctuations in the relationship between factors such as wealth, democracy, etc., and the extent to which these states discuss sustainable development in their GD statements. These relationships require further analysis and exploration."
]
] |
13b36644357870008d70e5601f394ec3c6c07048 | Is the dataset multilingual? | [
"No",
"No"
] | [
[
"We use a new dataset of GD statements from 1970 to 2016, the UN General Debate Corpus (UNGDC), to examine the international development agenda in the UN BIBREF3 . Our application of NLP to these statements focuses in particular on structural topic models (STMs) BIBREF4 . The paper makes two contributions using this approach: (1) It sheds light on the main international development issues that governments prioritise in the UN; and (2) It identifies the key country-specific factors associated with governments discussing development issues in their GD statements."
],
[
"We use a new dataset of GD statements from 1970 to 2016, the UN General Debate Corpus (UNGDC), to examine the international development agenda in the UN BIBREF3 . Our application of NLP to these statements focuses in particular on structural topic models (STMs) BIBREF4 . The paper makes two contributions using this approach: (1) It sheds light on the main international development issues that governments prioritise in the UN; and (2) It identifies the key country-specific factors associated with governments discussing development issues in their GD statements."
]
] |
e4a19b91b57c006a9086ae07f2d6d6471a8cf0ce | How are the main international development topics that states raise identified? | [
" They focus on exclusivity and semantic coherence measures: Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. They select select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclusivity at the same level of semantic coherence."
] | [
[
"We assess the optimal number of topics that need to be specified for the STM analysis. We follow the recommendations of the original STM paper and focus on exclusivity and semantic coherence measures. BIBREF5 propose semantic coherence measure, which is closely related to point-wise mutual information measure posited by BIBREF6 to evaluate topic quality. BIBREF5 show that semantic coherence corresponds to expert judgments and more general human judgments in Amazon's Mechanical Turk experiments.",
"Exclusivity scores for each topic follows BIBREF7 . Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. Cohesive and exclusive topics are more semantically useful. Following BIBREF8 we generate a set of candidate models ranging between 3 and 50 topics. We then plot the exclusivity and semantic coherence (numbers closer to 0 indicate higher coherence), with a linear regression overlaid (Figure FIGREF3 ). Models above the regression line have a “better” exclusivity-semantic coherence trade off. We select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclusivity at the same level of semantic coherence. The topic quality is usually evaluated by highest probability words, which is presented in Figure FIGREF4 ."
]
] |
fd0ef5a7b6f62d07776bf672579a99c67e61a568 | What experiments do the authors present to validate their system? | [
" we measure our system's performance for datasets across various domains, evaluations are done by managed judges who understands the knowledge base and then judge user queries relevance to the QA pairs"
] | [
[
"QnAMaker is not domain-specific and can be used for any type of data. To support this claim, we measure our system's performance for datasets across various domains. The evaluations are done by managed judges who understands the knowledge base and then judge user queries relevance to the QA pairs (binary labels). Each query-QA pair is judged by two judges. We filter out data for which judges do not agree on the label. Chit-chat in itself can be considered as a domain. Thus, we evaluate performance on given KB both with and without chit-chat data (last two rows in Table TABREF19), as well as performance on just chit-chat data (2nd row in Table TABREF19). Hybrid of deep learning(CDSSM) and machine learning features give our ranking model low computation cost, high explainability and significant F1/AUC score. Based on QnAMaker usage, we observed these trends:"
]
] |
071bcb4b054215054f17db64bfd21f17fd9e1a80 | How does the conversation layer work? | [
"Unanswerable"
] | [
[]
] |
f399d5a8dbeec777a858f81dc4dd33a83ba341a2 | What components is the QnAMaker composed of? | [
"QnAMaker Portal, QnaMaker Management APIs, Azure Search Index, QnaMaker WebApp, Bot",
"QnAMaker Portal, QnaMaker Management APIs, Azure Search Index, QnaMaker WebApp, Bot"
] | [
[
"System description ::: Architecture",
"As shown in Figure FIGREF4, humans can have two different kinds of roles in the system: Bot-Developers who want to create a bot using the data they have, and End-Users who will chat with the bot(s) created by bot-developers. The components involved in the process are:",
"QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. This website is designed to ease the use of management APIs. It also provides a test pane.",
"QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. It then passes these QA pairs to the web app to create the Knowledge Base Index.",
"Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.",
"QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. WebApp does ranking on top of retrieved results. WebApp also handles feedback management for active learning.",
"Bot: Calls the WebApp with the User's query to get results."
],
[
"As shown in Figure FIGREF4, humans can have two different kinds of roles in the system: Bot-Developers who want to create a bot using the data they have, and End-Users who will chat with the bot(s) created by bot-developers. The components involved in the process are:",
"QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. This website is designed to ease the use of management APIs. It also provides a test pane.",
"QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. It then passes these QA pairs to the web app to create the Knowledge Base Index.",
"Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.",
"QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. WebApp does ranking on top of retrieved results. WebApp also handles feedback management for active learning.",
"Bot: Calls the WebApp with the User's query to get results."
]
] |
d28260b5565d9246831e8dbe594d4f6211b60237 | How do they measure robustness in experiments? | [
"We empirically provide a formula to measure the richness in the scenario of machine translation.",
"boost the training BLEU very greatly, the over-fitting problem of the Plackett-Luce models PL($k$) is alleviated with moderately large $k$"
] | [
[
"The log-likelihood of a Plackett-Luce model is not a strict upper bound of the BLEU score, however, it correlates with BLEU well in the case of rich features. The concept of “rich” is actually qualitative, and obscure to define in different applications. We empirically provide a formula to measure the richness in the scenario of machine translation.",
"The greater, the richer. In practice, we find a rough threshold of r is 5."
],
[
"First, the Plackett-Luce models boost the training BLEU very greatly, even up to 2.5 points higher than MIRA. This verifies our assumption, richer features benefit BLEU, though they are optimized towards a different objective.",
"Second, the over-fitting problem of the Plackett-Luce models PL($k$) is alleviated with moderately large $k$. In PL(1), the over-fitting is quite obvious, the portion in which the curve overpasses MIRA is the smallest compared to other $k$, and its convergent performance is below the baseline. When $k$ is not smaller than 5, the curves are almost above the MIRA line. After 500 L-BFGS iterations, their performances are no less than the baseline, though only by a small margin."
]
] |
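The evidence above refers to the log-likelihood of a Plackett-Luce model over N-best translation lists. For reference, a small sketch of the standard Plackett-Luce log-likelihood follows (candidates ordered best-first by the observed ranking); this is the textbook form, not necessarily the exact PL($k$) variant used in the paper.

```python
import math

def plackett_luce_log_likelihood(scores):
    """log P(ranking) = sum_i [ s_i - log(sum_{j >= i} exp(s_j)) ],
    where `scores` are model scores of the candidates, already ordered by
    the observed ranking (best first)."""
    ll = 0.0
    for i in range(len(scores)):
        rest = scores[i:]
        m = max(rest)                                    # stabilised log-sum-exp
        lse = m + math.log(sum(math.exp(s - m) for s in rest))
        ll += scores[i] - lse
    return ll

# e.g. three reranking candidates, best first:
print(plackett_luce_log_likelihood([2.1, 0.4, -1.3]))
```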
8670989ca39214eda6c1d1d272457a3f3a92818b | Is the new method inferior in terms of robustness to MIRA in the experiments? | [
"Unanswerable"
] | [
[]
] |
923b12c0a50b0ee22237929559fad0903a098b7b | What experiments with large-scale features are performed? | [
"Plackett-Luce Model for SMT Reranking"
] | [
[
"Evaluation ::: Plackett-Luce Model for SMT Reranking",
"After being de-duplicated, the N-best list has an average size of around 300, and with 7491 features. Refer to Formula DISPLAY_FORM9, this is ideal to use the Plackett-Luce model. Results are shown in Figure FIGREF12. We observe some interesting phenomena.",
"This experiment displays, in large-scale features, the Plackett-Luce model correlates with BLEU score very well, and alleviates overfitting in some degree."
]
] |
67131c15aceeb51ae1d3b2b8241c8750a19cca8e | Which ASR system(s) is used in this work? | [
"Oracle "
] | [
[
"The preliminary architecture is shown in Fig. FIGREF4. For a given transcribed utterance, it is firstly encoded with Byte Pair Encoding (BPE) BIBREF14, a compression algorithm splitting words to fundamental subword units (pairs of bytes or BPs) and reducing the embedded vocabulary size. Then we use a BiLSTM BIBREF15 encoder and the output state of the BiLSTM is regarded as a vector representation for this utterance. Finally, a fully connected Feed-forward Neural Network (FNN) followed by a softmax layer, labeled as a multilayer perceptron (MLP) module, is used to perform the domain/intent classification task based on the vector.",
"For convenience, we simplify the whole process in Fig.FIGREF4 as a mapping $BM$ (Baseline Mapping) from the input utterance $S$ to an estimated tag's probability $p(\\tilde{t})$, where $p(\\tilde{t}) \\leftarrow BM(S)$. The $Baseline$ is trained on transcription and evaluated on ASR 1st best hypothesis ($S=\\text{ASR}\\ 1^{st}\\ \\text{best})$. The $Oracle$ is trained on transcription and evaluated on transcription ($S = \\text{Transcription}$). We name it Oracle simply because we assume that hypotheses are noisy versions of transcription."
]
] |
579a0603ec56fc2b4aa8566810041dbb0cd7b5e7 | What are the series of simple models? | [
"perform experiments to utilize ASR $n$-best hypotheses during evaluation"
] | [
[
"Besides the Baseline and Oracle, where only ASR 1-best hypothesis is considered, we also perform experiments to utilize ASR $n$-best hypotheses during evaluation. The models evaluating with $n$-bests and a BM (pre-trained on transcription) are called Direct Models (in Fig. FIGREF7):"
]
] |
c9c85eee41556c6993f40e428fa607af4abe80a9 | Over which datasets/corpora is this work evaluated? | [
"$\\sim $ 8.7M annotated anonymised user utterances",
"on $\\sim $ 8.7M annotated anonymised user utterances"
] | [
[
"We conduct our experiments on $\\sim $ 8.7M annotated anonymised user utterances. They are annotated and derived from requests across 23 domains."
],
[
"We conduct our experiments on $\\sim $ 8.7M annotated anonymised user utterances. They are annotated and derived from requests across 23 domains."
]
] |
f8281eb49be3e8ea0af735ad3bec955a5dedf5b3 | Is the semantic hierarchy representation used for any task? | [
"Yes, Open IE",
"Yes"
] | [
[
"An extrinsic evaluation was carried out on the task of Open IE BIBREF7. It revealed that when applying DisSim as a preprocessing step, the performance of state-of-the-art Open IE systems can be improved by up to 346% in precision and 52% in recall, i.e. leading to a lower information loss and a higher accuracy of the extracted relations. For details, the interested reader may refer to niklaus-etal-2019-transforming."
],
[
"Moreover, most current Open IE approaches output only a loose arrangement of extracted tuples that are hard to interpret as they ignore the context under which a proposition is complete and correct and thus lack the expressiveness needed for a proper interpretation of complex assertions BIBREF8. As illustrated in Figure FIGREF9, with the help of the semantic hierarchy generated by our discourse-aware sentence splitting approach the output of Open IE systems can be easily enriched with contextual information that allows to restore the semantic relationship between a set of propositions and, hence, preserve their interpretability in downstream tasks."
]
] |
a5ee9b40a90a6deb154803bef0c71c2628acb571 | What are the corpora used for the task? | [
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains, The evaluation of the German version is in progress."
] | [
[
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains in order to assess the performance of our framework with regard to the sentence splitting subtask. The results show that our proposed sentence splitting approach outperforms the state of the art in structural TS, returning fine-grained simplified sentences that achieve a high level of grammaticality and preserve the meaning of the input. The full evaluation methodology and detailed results are reported in niklaus-etal-2019-transforming. In addition, a comparative analysis with the annotations contained in the RST Discourse Treebank BIBREF6 demonstrates that we are able to capture the contextual hierarchy between the split sentences with a precision of almost 90% and reach an average precision of approximately 70% for the classification of the rhetorical relations that hold between them. The evaluation of the German version is in progress."
]
] |
e286860c41a4f704a3a08e45183cb8b14fa2ad2f | Is the model evaluated? | [
"the English version is evaluated. The German version evaluation is in progress "
] | [
[
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains in order to assess the performance of our framework with regard to the sentence splitting subtask. The results show that our proposed sentence splitting approach outperforms the state of the art in structural TS, returning fine-grained simplified sentences that achieve a high level of grammaticality and preserve the meaning of the input. The full evaluation methodology and detailed results are reported in niklaus-etal-2019-transforming. In addition, a comparative analysis with the annotations contained in the RST Discourse Treebank BIBREF6 demonstrates that we are able to capture the contextual hierarchy between the split sentences with a precision of almost 90% and reach an average precision of approximately 70% for the classification of the rhetorical relations that hold between them. The evaluation of the German version is in progress."
]
] |
982979cb3c71770d8d7d2d1be8f92b66223dec85 | What new metrics are suggested to track progress? | [
" For example, one metric could consist in checking whether for any given word, all words that are known to belong to the same class are closer than any words belonging to different classes, independently of the actual cosine"
] | [
[
"It is now clear that a key aspect for future work will be developing additional performance metrics based on topological properties. We are in line with recent work BIBREF16 , proposing to shift evaluation from absolute values to more exploratory evaluations focusing on weaknesses and strengths of the embeddings and not so much in generic scores. For example, one metric could consist in checking whether for any given word, all words that are known to belong to the same class are closer than any words belonging to different classes, independently of the actual cosine. Future work will necessarily include developing this type of metrics."
]
] |
5ba6f7f235d0f5d1d01fd97dd5e4d5b0544fd212 | What intrinsic evaluation metrics are used? | [
"Class Membership Tests, Class Distinction Test, Word Equivalence Test",
"coverage metric, being distinct (cosine INLINEFORM0 0.7 or 0.8), belonging to the same class (cosine INLINEFORM1 0.7 or 0.8), being equivalent (cosine INLINEFORM2 0.85 or 0.95)"
] | [
[
"Tests and Gold-Standard Data for Intrinsic Evaluation",
"Using the gold standard data (described below), we performed three types of tests:",
"Class Membership Tests: embeddings corresponding two member of the same semantic class (e.g. “Months of the Year\", “Portuguese Cities\", “Smileys\") should be close, since they are supposed to be found in mostly the same contexts.",
"Class Distinction Test: this is the reciprocal of the previous Class Membership test. Embeddings of elements of different classes should be different, since words of different classes ere expected to be found in significantly different contexts.",
"Word Equivalence Test: embeddings corresponding to synonyms, antonyms, abbreviations (e.g. “porque\" abbreviated by “pq\") and partial references (e.g. “slb and benfica\") should be almost equal, since both alternatives are supposed to be used be interchangeable in all contexts (either maintaining or inverting the meaning).",
"Therefore, in our tests, two words are considered:",
"distinct if the cosine of the corresponding embeddings is lower than 0.70 (or 0.80).",
"to belong to the same class if the cosine of their embeddings is higher than 0.70 (or 0.80).",
"equivalent if the cosine of the embeddings is higher that 0.85 (or 0.95)."
],
[
"For all these tests we computed a coverage metric. Our embeddings do not necessarily contain information for all the words contained in each of these tests. So, for all tests, we compute a coverage metric that measures the fraction of the gold-standard pairs that could actually be tested using the different embeddings produced. Then, for all the test pairs actually covered, we obtain the success metrics for each of the 3 tests by computing the ratio of pairs we were able to correctly classified as i) being distinct (cosine INLINEFORM0 0.7 or 0.8), ii) belonging to the same class (cosine INLINEFORM1 0.7 or 0.8), and iii) being equivalent (cosine INLINEFORM2 0.85 or 0.95)."
]
] |
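The thresholds quoted above (cosine 0.70/0.80 for class membership vs. distinctness, 0.85/0.95 for equivalence) map directly onto a simple decision rule. A short NumPy sketch is given below; the function names are illustrative, and the defaults are the looser threshold pair from the evidence.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def judge_pair(u, v, class_thr=0.70, equiv_thr=0.85):
    """Thresholded decisions from the evidence: distinct if cosine < class_thr,
    same class if cosine >= class_thr, equivalent if cosine >= equiv_thr
    (use 0.80 / 0.95 for the stricter setting)."""
    c = cosine(u, v)
    return {"cosine": c,
            "distinct": c < class_thr,
            "same_class": c >= class_thr,
            "equivalent": c >= equiv_thr}

# e.g. two random 300-dimensional embeddings:
print(judge_pair(np.random.rand(300), np.random.rand(300)))
```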
7ce7edd06925a943e32b59f3e7b5159ccb7acaf6 | What experimental results suggest that using less than 50% of the available training examples might result in overfitting? | [
"consistent increase in the validation loss after about 15 epochs"
] | [
[
"On the right side of Figure FIGREF28 we show how the number of training (and validation) examples affects the loss. For a fixed INLINEFORM0 = 32768 we varied the amount of data used for training from 25% to 100%. Three trends are apparent. As we train with more data, we obtain better validation losses. This was expected. The second trend is that by using less than 50% of the data available the model tends to overfit the data, as indicated by the consistent increase in the validation loss after about 15 epochs (check dashed lines in right side of Figure FIGREF28 ). This suggests that for the future we should not try any drastic reduction of the training data to save training time. Finally, when not overfitting, the validation loss seems to stabilize after around 20 epochs. We observed no phase-transition effects (the model seems simple enough for not showing that type of behavior). This indicates we have a practical way of safely deciding when to stop training the model."
]
] |
a883bb41449794e0a63b716d9766faea034eb359 | What multimodality is available in the dataset? | [
"context is a procedural text, the question and the multiple choice answers are composed of images",
"images and text"
] | [
[
"In the following, we explain our Procedural Reasoning Networks model. Its architecture is based on a bi-directional attention flow (BiDAF) model BIBREF6, but also equipped with an explicit reasoning module that acts on entity-specific relational memory units. Fig. FIGREF4 shows an overview of the network architecture. It consists of five main modules: An input module, an attention module, a reasoning module, a modeling module, and an output module. Note that the question answering tasks we consider here are multimodal in that while the context is a procedural text, the question and the multiple choice answers are composed of images."
],
[
"In particular, we take advantage of recently proposed RecipeQA dataset BIBREF2, a dataset for multimodal comprehension of cooking recipes, and ask whether it is possible to have a model which employs dynamic representations of entities in answering questions that require multimodal understanding of procedures. To this end, inspired from BIBREF5, we propose Procedural Reasoning Networks (PRN) that incorporates entities into the comprehension process and allows to keep track of entities, understand their interactions and accordingly update their states across time. We report that our proposed approach significantly improves upon previously published results on visual reasoning tasks in RecipeQA, which test understanding causal and temporal relations from images and text. We further show that the dynamic entity representations can capture semantics of the state information in the corresponding steps."
]
] |
5d83b073635f5fd8cd1bdb1895d3f13406583fbd | What are previously reported models? | [
"Hasty Student, Impatient Reader, BiDAF, BiDAF w/ static memory"
] | [
[
"We compare our model with several baseline models as described below. We note that the results of the first two are previously reported in BIBREF2.",
"Hasty Student BIBREF2 is a heuristics-based simple model which ignores the recipe and gives an answer by examining only the question and the answer set using distances in the visual feature space.",
"Impatient Reader BIBREF19 is a simple neural model that takes its name from the fact that it repeatedly computes attention over the recipe after observing each image in the query.",
"BiDAF BIBREF14 is a strong reading comprehension model that employs a bi-directional attention flow mechanism to obtain a question-aware representation and bases its predictions on this representation. Originally, it is a span-selection model from the input context. Here, we adapt it to work in a multimodal setting and answer multiple choice questions instead.",
"BiDAF w/ static memory is an extended version of the BiDAF model which resembles our proposed PRN model in that it includes a memory unit for the entities. However, it does not make any updates on the memory cells. That is, it uses the static entity embeeddings initialized with GloVe word vectors. We propose this baseline to test the significance of the use of relational memory updates."
]
] |
171ebfdc9b3a98e4cdee8f8715003285caeb2f39 | How much better is the accuracy of the new model compared to previously reported models? | [
"Average accuracy of proposed model vs best prevous result:\nSingle-task Training: 57.57 vs 55.06\nMulti-task Training: 50.17 vs 50.59"
] | [
[
"Table TABREF29 presents the quantitative results for the visual reasoning tasks in RecipeQA. In single-task training setting, PRN gives state-of-the-art results compared to other neural models. Moreover, it achieves the best performance on average. These results demonstrate the importance of having a dynamic memory and keeping track of entities extracted from the recipe. In multi-task training setting where a single model is trained to solve all the tasks at once, PRN and BIDAF w/ static memory perform comparably and give much better results than BIDAF. Note that the model performances in the multi-task training setting are worse than single-task performances. We believe that this is due to the nature of the tasks that some are more difficult than the others. We think that the performance could be improved by employing a carefully selected curriculum strategy BIBREF20."
]
] |
3c3cb51093b5fd163e87a773a857496a4ae71f03 | How does the scoring model work? | [
"First, mapping the segmented sentence to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word",
" the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link that the candidate word directly follows previous segmentation history"
] | [
[
"To select the most appropriate sentences in a large number of unlabeled corpora, we propose a scoring model based on information entropy and neural network as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32 . The score of a segmented sentence is computed as follows. First, mapping the segmented sentence to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link that the candidate word directly follows previous segmentation history. Fig. FIGREF10 illustrates the entire scoring model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to a LSTM model."
],
[
"To select the most appropriate sentences in a large number of unlabeled corpora, we propose a scoring model based on information entropy and neural network as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32 . The score of a segmented sentence is computed as follows. First, mapping the segmented sentence to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link that the candidate word directly follows previous segmentation history. Fig. FIGREF10 illustrates the entire scoring model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to a LSTM model."
]
] |
53a0763eff99a8148585ac642705637874be69d4 | How does the active learning model work? | [
"Active learning methods has a learning engine (mainly used for training of classification problems) and the selection engine (which chooses samples that need to be relabeled by annotators from unlabeled data). Then, relabeled samples are added to training set for classifier to re-train, thus continuously improving the accuracy of the classifier. In this paper, CRF-based segmenter and a scoring model are employed as learning engine and selection engine, respectively."
] | [
[
"Active learning methods can generally be described into two parts: a learning engine and a selection engine BIBREF28 . The learning engine is essentially a classifier, which is mainly used for training of classification problems. The selection engine is based on the sampling strategy, which chooses samples that need to be relabeled by annotators from unlabeled data. Then, relabeled samples are added to training set for classifier to re-train, thus continuously improving the accuracy of the classifier. In this paper, a CRF-based segmenter and a scoring model are employed as learning engine and selection engine, respectively."
]
] |
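The evidence above describes the usual active-learning cycle: a learning engine (here a CRF-based segmenter) is trained, a selection engine (the scoring model) picks unlabeled sentences to send to annotators, and the relabeled samples are added to the training set before retraining. A generic sketch of that loop follows; every callable (`train`, `score`, `annotate`) is a placeholder for illustration, not an API from the paper.

```python
def active_learning_loop(labeled, unlabeled, train, score, annotate,
                         rounds=10, batch_size=100):
    """Alternate between the learning engine (`train`) and the selection
    engine (`score`); the lowest-scoring, i.e. most informative, sentences
    are relabeled by annotators and added to the training set."""
    model = train(labeled)
    for _ in range(rounds):
        if not unlabeled:
            break
        ranked = sorted(unlabeled, key=lambda s: score(model, s))
        batch, unlabeled = ranked[:batch_size], ranked[batch_size:]
        labeled = labeled + [annotate(s) for s in batch]  # human relabeling step
        model = train(labeled)                            # retrain the classifier
    return model
```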
0bfed6f9cfe93617c5195c848583e3945f2002ff | Which neural network architectures are employed? | [
"gated neural network "
] | [
[
"To select the most appropriate sentences in a large number of unlabeled corpora, we propose a scoring model based on information entropy and neural network as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32 . The score of a segmented sentence is computed as follows. First, mapping the segmented sentence to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link that the candidate word directly follows previous segmentation history. Fig. FIGREF10 illustrates the entire scoring model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to a LSTM model."
]
] |
352c081c93800df9654315e13a880d6387b91919 | What are the key points in the role of script knowledge that can be studied? | [
"Unanswerable"
] | [
[]
] |
18fbf9c08075e3b696237d22473c463237d153f5 | Did the annotators agree, and how much? | [
"For event types and participant types, there was a moderate to substantial level of agreement using the Fleiss' Kappa. For coreference chain annotation, there was average agreement of 90.5%.",
"Moderate agreement of 0.64-0.68 Fleiss’ Kappa over event type labels, 0.77 Fleiss’ Kappa over participant labels, and good agreement of 90.5% over coreference information."
] | [
[
"In order to calculate inter-annotator agreement, a total of 30 stories from 6 scenarios were randomly chosen for parallel annotation by all 4 annotators after the first annotation phase. We checked the agreement on these data using Fleiss' Kappa BIBREF4 . The results are shown in Figure 4 and indicate moderate to substantial agreement BIBREF5 . Interestingly, if we calculated the Kappa only on the subset of cases that were annotated with script-specific event and participant labels by all annotators, results were better than those of the evaluation on all labeled instances (including also unrelated and related non-script events). This indicates one of the challenges of the annotation task: In many cases it is difficult to decide whether a particular event should be considered a central script event, or an event loosely related or unrelated to the script.",
"For coreference chain annotation, we calculated the percentage of pairs which were annotated by at least 3 annotators (qualified majority vote) compared to the set of those pairs annotated by at least one person (see Figure 4 ). We take the result of 90.5% between annotators to be a good agreement."
],
[
"In order to calculate inter-annotator agreement, a total of 30 stories from 6 scenarios were randomly chosen for parallel annotation by all 4 annotators after the first annotation phase. We checked the agreement on these data using Fleiss' Kappa BIBREF4 . The results are shown in Figure 4 and indicate moderate to substantial agreement BIBREF5 . Interestingly, if we calculated the Kappa only on the subset of cases that were annotated with script-specific event and participant labels by all annotators, results were better than those of the evaluation on all labeled instances (including also unrelated and related non-script events). This indicates one of the challenges of the annotation task: In many cases it is difficult to decide whether a particular event should be considered a central script event, or an event loosely related or unrelated to the script.",
"For coreference chain annotation, we calculated the percentage of pairs which were annotated by at least 3 annotators (qualified majority vote) compared to the set of those pairs annotated by at least one person (see Figure 4 ). We take the result of 90.5% between annotators to be a good agreement."
]
] |
a37ef83ab6bcc6faff3c70a481f26174ccd40489 | How many subjects have been used to create the annotations? | [
" four different annotators"
] | [
[
"We used the WebAnno annotation tool BIBREF2 for our project. The stories from each scenario were distributed among four different annotators. In a calibration phase, annotators were presented with some sample texts for test annotations; the results were discussed with the authors. Throughout the whole annotation phase, annotators could discuss any emerging issues with the authors. All annotations were done by undergraduate students of computational linguistics. The annotation was rather time-consuming due to the complexity of the task, and thus we decided for single annotation mode. To assess annotation quality, a small sample of texts was annotated by all four annotators and their inter-annotator agreement was measured (see Section \"Inter-Annotator Agreement\" ). It was found to be sufficiently high."
]
] |
bc9c31b3ce8126d1d148b1025c66f270581fde10 | What datasets are used to evaluate this approach? | [
" Kinship and Nations knowledge graphs, YAGO3-10 and WN18KGs knowledge graphs ",
"WN18 and YAGO3-10"
] | [
[],
[
"Since the setting is quite different from traditional adversarial attacks, search for link prediction adversaries brings up unique challenges. To find these minimal changes for a target link, we need to identify the fact that, when added into or removed from the graph, will have the biggest impact on the predicted score of the target fact. Unfortunately, computing this change in the score is expensive since it involves retraining the model to recompute the embeddings. We propose an efficient estimate of this score change by approximating the change in the embeddings using Taylor expansion. The other challenge in identifying adversarial modifications for link prediction, especially when considering addition of fake facts, is the combinatorial search space over possible facts, which is intractable to enumerate. We introduce an inverter of the original embedding model, to decode the embeddings to their corresponding graph components, making the search of facts tractable by performing efficient gradient-based continuous optimization. We evaluate our proposed methods through following experiments. First, on relatively small KGs, we show that our approximations are accurate compared to the true change in the score. Second, we show that our additive attacks can effectively reduce the performance of state of the art models BIBREF2 , BIBREF10 up to $27.3\\%$ and $50.7\\%$ in Hits@1 for two large KGs: WN18 and YAGO3-10. We also explore the utility of adversarial modifications in explaining the model predictions by presenting rule-like descriptions of the most influential neighbors. Finally, we use adversaries to detect errors in the KG, obtaining up to $55\\%$ accuracy in detecting errors."
]
] |
185841e979373808d99dccdade5272af02b98774 | How is this approach used to detect incorrect facts? | [
"if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data. "
] | [
[
"Here, we demonstrate another potential use of adversarial modifications: finding erroneous triples in the knowledge graph. Intuitively, if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data. Formally, to find the incorrect triple $\\langle s^{\\prime }, r^{\\prime }, o\\rangle $ in the neighborhood of the train triple $\\langle s, r, o\\rangle $ , we need to find the triple $\\langle s^{\\prime },r^{\\prime },o\\rangle $ that results in the least change $\\Delta _{(s^{\\prime },r^{\\prime })}(s,r,o)$ when removed from the graph."
]
] |
d427e3d41c4c9391192e249493be23926fc5d2e9 | Can this adversarial approach be used to directly improve model accuracy? | [
"Yes"
] | [
[
"To evaluate this application, we inject random triples into the graph, and measure the ability of to detect the errors using our optimization. We consider two types of incorrect triples: 1) incorrect triples in the form of $\\langle s^{\\prime }, r, o\\rangle $ where $s^{\\prime }$ is chosen randomly from all of the entities, and 2) incorrect triples in the form of $\\langle s^{\\prime }, r^{\\prime }, o\\rangle $ where $s^{\\prime }$ and $r^{\\prime }$ are chosen randomly. We choose 100 random triples from the observed graph, and for each of them, add an incorrect triple (in each of the two scenarios) to its neighborhood. Then, after retraining DistMult on this noisy training data, we identify error triples through a search over the neighbors of the 100 facts. The result of choosing the neighbor with the least influence on the target is provided in the Table 7 . When compared with baselines that randomly choose one of the neighbors, or assume that the fact with the lowest score is incorrect, we see that outperforms both of these with a considerable gap, obtaining an accuracy of $42\\%$ and $55\\%$ in detecting errors."
]
] |
330f2cdeab689670b68583fc4125f5c0b26615a8 | what are the advantages of the proposed model? | [
"he proposed model outperforms all the baselines, being the svi version the one that performs best., the svi version converges much faster to higher values of the log marginal likelihood when compared to the batch version, which reflects the efficiency of the svi algorithm."
] | [
[
"For all the experiments the hyper-parameters INLINEFORM0 , INLINEFORM1 and INLINEFORM2 were set using a simple grid search in the collection INLINEFORM3 . The same approach was used to optimize the hyper-parameters of the all the baselines. For the svi algorithm, different mini-batch sizes and forgetting rates INLINEFORM4 were tested. For the 20-Newsgroup dataset, the best results were obtained with a mini-batch size of 500 and INLINEFORM5 . The INLINEFORM6 was kept at 1. The results are shown in Fig. FIGREF87 for different numbers of topics, where we can see that the proposed model outperforms all the baselines, being the svi version the one that performs best.",
"In order to assess the computational advantages of the stochastic variational inference (svi) over the batch algorithm, the log marginal likelihood (or log evidence) was plotted against the number of iterations. Fig. FIGREF88 shows this comparison. Not surprisingly, the svi version converges much faster to higher values of the log marginal likelihood when compared to the batch version, which reflects the efficiency of the svi algorithm."
]
] |
c87b2dd5c439d5e68841a705dd81323ec0d64c97 | what are the state of the art approaches? | [
"Bosch 2006 (mv), LDA + LogReg (mv), LDA + Raykar, LDA + Rodrigues, Blei 2003 (mv), sLDA (mv)"
] | [
[
"With the purpose of comparing the proposed model with a popular state-of-the-art approach for image classification, for the LabelMe dataset, the following baseline was introduced:",
"Bosch 2006 (mv): This baseline is similar to one in BIBREF33 . The authors propose the use of pLSA to extract the latent topics, and the use of k-nearest neighbor (kNN) classifier using the documents' topics distributions. For this baseline, unsupervised LDA is used instead of pLSA, and the labels from the different annotators for kNN (with INLINEFORM0 ) are aggregated using majority voting (mv).",
"The results obtained by the different approaches for the LabelMe data are shown in Fig. FIGREF94 , where the svi version is using mini-batches of 200 documents.",
"Analyzing the results for the Reuters-21578 and LabelMe data, we can observe that MA-sLDAc outperforms all the baselines, with slightly better accuracies for the batch version, especially in the Reuters data. Interestingly, the second best results are consistently obtained by the multi-annotator approaches, which highlights the need for accounting for the noise and biases of the answers of the different annotators.",
"Both the batch and the stochastic variational inference (svi) versions of the proposed model (MA-sLDAc) are compared with the following baselines:",
"[itemsep=0.02cm]",
"LDA + LogReg (mv): This baseline corresponds to applying unsupervised LDA to the data, and learning a logistic regression classifier on the inferred topics distributions of the documents. The labels from the different annotators were aggregated using majority voting (mv). Notice that, when there is a single annotator label per instance, majority voting is equivalent to using that label for training. This is the case of the 20-Newsgroups' simulated annotators, but the same does not apply for the experiments in Section UID89 .",
"LDA + Raykar: For this baseline, the model of BIBREF21 was applied using the documents' topic distributions inferred by LDA as features.",
"LDA + Rodrigues: This baseline is similar to the previous one, but uses the model of BIBREF9 instead.",
"Blei 2003 (mv): The idea of this baseline is to replicate a popular state-of-the-art approach for document classification. Hence, the approach of BIBREF0 was used. It consists of applying LDA to extract the documents' topics distributions, which are then used to train a SVM. Similarly to the previous approach, the labels from the different annotators were aggregated using majority voting (mv).",
"sLDA (mv): This corresponds to using the classification version of sLDA BIBREF2 with the labels obtained by performing majority voting (mv) on the annotators' answers."
]
] |
f7789313a804e41fcbca906a4e5cf69039eeef9f | what datasets were used? | [
"Reuters-21578 BIBREF30, LabelMe BIBREF31, 20-Newsgroups benchmark corpus BIBREF29 ",
" 20-Newsgroups benchmark corpus , Reuters-21578, LabelMe"
] | [
[
"In order to validate the proposed classification model in real crowdsourcing settings, Amazon Mechanical Turk (AMT) was used to obtain labels from multiple annotators for two popular datasets: Reuters-21578 BIBREF30 and LabelMe BIBREF31 .",
"In order to first validate the proposed model for classification problems in a slightly more controlled environment, the well-known 20-Newsgroups benchmark corpus BIBREF29 was used by simulating multiple annotators with different levels of expertise. The 20-Newsgroups consists of twenty thousand messages taken from twenty newsgroups, and is divided in six super-classes, which are, in turn, partitioned in several sub-classes. For this first set of experiments, only the four most populated super-classes were used: “computers\", “science\", “politics\" and “recreative\". The preprocessing of the documents consisted of stemming and stop-words removal. After that, 75% of the documents were randomly selected for training and the remaining 25% for testing."
],
[
"In order to first validate the proposed model for classification problems in a slightly more controlled environment, the well-known 20-Newsgroups benchmark corpus BIBREF29 was used by simulating multiple annotators with different levels of expertise. The 20-Newsgroups consists of twenty thousand messages taken from twenty newsgroups, and is divided in six super-classes, which are, in turn, partitioned in several sub-classes. For this first set of experiments, only the four most populated super-classes were used: “computers\", “science\", “politics\" and “recreative\". The preprocessing of the documents consisted of stemming and stop-words removal. After that, 75% of the documents were randomly selected for training and the remaining 25% for testing.",
"In order to validate the proposed classification model in real crowdsourcing settings, Amazon Mechanical Turk (AMT) was used to obtain labels from multiple annotators for two popular datasets: Reuters-21578 BIBREF30 and LabelMe BIBREF31 ."
]
] |
2376c170c343e2305dac08ba5f5bda47c370357f | How was the dataset collected? | [
"Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. , Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets that locate near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context., Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states., Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. ",
"They crawled travel information from the Web to build a database, created a multi-domain goal generator from the database, collected dialogue between workers an automatically annotated dialogue acts. "
] | [
[
"Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. For the taxi domain, there is no need to store the information. Instead, we can call the API directly if necessary.",
"Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets that locate near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. To make workers understand the task more easily, we crafted templates to generate natural language descriptions for each structured goal.",
"Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.",
"Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances."
],
[
"Our corpus is to simulate scenarios where a traveler seeks tourism information and plans her or his travel in Beijing. Domains include hotel, attraction, restaurant, metro, and taxi. The data collection process is summarized as below:",
"Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. For the taxi domain, there is no need to store the information. Instead, we can call the API directly if necessary.",
"Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets that locate near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. To make workers understand the task more easily, we crafted templates to generate natural language descriptions for each structured goal.",
"Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.",
"Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances."
]
] |
0137ecebd84a03b224eb5ca51d189283abb5f6d9 | What are the benchmark models? | [
"BERTNLU from ConvLab-2, a rule-based model (RuleDST) , TRADE (Transferable Dialogue State Generator) , a vanilla policy trained in a supervised fashion from ConvLab-2 (SL policy)"
] | [
[
"Model: We adapted BERTNLU from ConvLab-2. BERT BIBREF22 has shown strong performance in many NLP tasks. We use Chinese pre-trained BERT BIBREF23 for initialization and then fine-tune the parameters on CrossWOZ. We obtain word embeddings and the sentence representation (embedding of [CLS]) from BERT. Since there may exist more than one intent in an utterance, we modify the traditional method accordingly. For dialogue acts of inform and recommend intents such as (intent=Inform, domain=Attraction, slot=fee, value=free) whose values appear in the sentence, we perform sequential labeling using an MLP which takes word embeddings (\"free\") as input and outputs tags in BIO schema (\"B-Inform-Attraction-fee\"). For each of the other dialogue acts (e.g., (intent=Request, domain=Attraction, slot=fee)) that do not have actual values, we use another MLP to perform binary classification on the sentence representation to predict whether the sentence should be labeled with this dialogue act. To incorporate context information, we use the same BERT to get the embedding of last three utterances. We separate the utterances with [SEP] tokens and insert a [CLS] token at the beginning. Then each original input of the two MLP is concatenated with the context embedding (embedding of [CLS]), serving as the new input. We also conducted an ablation test by removing context information. We trained models with both system-side and user-side utterances.",
"Model: We implemented a rule-based model (RuleDST) and adapted TRADE (Transferable Dialogue State Generator) BIBREF19 in this experiment. RuleDST takes as input the previous system state and the last user dialogue acts. Then, the system state is updated according to hand-crafted rules. For example, If one of user dialogue acts is (intent=Inform, domain=Attraction, slot=fee, value=free), then the value of the \"fee\" slot in the attraction domain will be filled with \"free\". TRADE generates the system state directly from all the previous utterances using a copy mechanism. As mentioned in Section SECREF18, the first query of the system often records full user constraints, while the last one records relaxed constraints for recommendation. Thus the last one involves system policy, which is out of the scope of state tracking. We used the first query for these models and left state tracking with recommendation for future work.",
"Model: We adapted a vanilla policy trained in a supervised fashion from ConvLab-2 (SL policy). The state $s$ consists of the last system dialogue acts, last user dialogue acts, system state of the current turn, the number of entities that satisfy the constraints in the current domain, and a terminal signal indicating whether the user goal is completed. The action $a$ is delexicalized dialogue acts of current turn which ignores the exact values of the slots, where the values will be filled back after prediction."
]
] |
5f6fbd57cce47f20a0fda27d954543c00c4344c2 | How was the corpus annotated? | [
"The workers were also asked to annotate both user states and system states, we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories"
] | [
[
"Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.",
"Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances."
]
] |
d6e2b276390bdc957dfa7e878de80cee1f41fbca | What models other than standalone BERT is new model compared to? | [
"Only Bert base and Bert large are compared to proposed approach."
] | [
[
"Results on WNLaMPro rare and medium are shown in Table TABREF34, where the mean reciprocal rank (MRR) is reported for BERT, Attentive Mimicking and Bertram. As can be seen, supplementing BERT with any of the proposed relearning methods results in noticeable improvements for the rare subset, with add clearly outperforming replace. Moreover, the add and add-gated variants of Bertram perform surprisingly well for more frequent words, improving the score for WNLaMPro-medium by 50% compared to BERT$_\\text{base}$ and 31% compared to Attentive Mimicking. This makes sense considering that compared to Attentive Mimicking, the key enhancement of Bertram lies in improving context representations and interconnection of form and context; naturally, the more contexts are given, the more this comes into play. Noticeably, despite being both based on and integrated into a BERT$_\\text{base}$ model, our architecture even outperforms a standalone BERT$_\\text{large}$ model by a large margin."
]
] |
32537fdf0d4f76f641086944b413b2f756097e5e | How much is representaton improved for rare/medum frequency words compared to standalone BERT and previous work? | [
"improving the score for WNLaMPro-medium by 50% compared to BERT$_\\text{base}$ and 31% compared to Attentive Mimicking"
] | [
[
"Results on WNLaMPro rare and medium are shown in Table TABREF34, where the mean reciprocal rank (MRR) is reported for BERT, Attentive Mimicking and Bertram. As can be seen, supplementing BERT with any of the proposed relearning methods results in noticeable improvements for the rare subset, with add clearly outperforming replace. Moreover, the add and add-gated variants of Bertram perform surprisingly well for more frequent words, improving the score for WNLaMPro-medium by 50% compared to BERT$_\\text{base}$ and 31% compared to Attentive Mimicking. This makes sense considering that compared to Attentive Mimicking, the key enhancement of Bertram lies in improving context representations and interconnection of form and context; naturally, the more contexts are given, the more this comes into play. Noticeably, despite being both based on and integrated into a BERT$_\\text{base}$ model, our architecture even outperforms a standalone BERT$_\\text{large}$ model by a large margin."
]
] |
ef081d78be17ef2af792e7e919d15a235b8d7275 | What are three downstream task datasets? | [
"MNLI BIBREF21, AG's News BIBREF22, DBPedia BIBREF23",
"MNLI, AG's News, DBPedia"
] | [
[
"To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. For all three datasets, we use BERT$_\\text{base}$ as a baseline model and create the substitution dictionary $S$ using the synonym relation of WordNet BIBREF20 and the pattern library BIBREF34 to make sure that all synonyms have consistent parts of speech. As an additional source of word substitutions, we make use of the misspellings dataset of BIBREF25, which is based on query logs of a search engine. To prevent misspellings from dominating the resulting dataset, we only assign misspelling-based substitutes to randomly selected 10% of the words contained in each sentence. Motivated by the results on WNLaMPro-medium, we consider every word that occurs less than 100 times in the WWC and our BooksCorpus replica combined as being rare. Some examples of entries in the resulting datasets can be seen in Table TABREF35."
],
[
"To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. For all three datasets, we use BERT$_\\text{base}$ as a baseline model and create the substitution dictionary $S$ using the synonym relation of WordNet BIBREF20 and the pattern library BIBREF34 to make sure that all synonyms have consistent parts of speech. As an additional source of word substitutions, we make use of the misspellings dataset of BIBREF25, which is based on query logs of a search engine. To prevent misspellings from dominating the resulting dataset, we only assign misspelling-based substitutes to randomly selected 10% of the words contained in each sentence. Motivated by the results on WNLaMPro-medium, we consider every word that occurs less than 100 times in the WWC and our BooksCorpus replica combined as being rare. Some examples of entries in the resulting datasets can be seen in Table TABREF35."
]
] |
537b2d7799124d633892a1ef1a485b3b071b303d | What is dataset for word probing task? | [
"WNLaMPro dataset"
] | [
[
"We evalute Bertram on the WNLaMPro dataset of BIBREF0. This dataset consists of cloze-style phrases like"
]
] |
9aca4b89e18ce659c905eccc78eda76af9f0072a | How fast is the model compared to baselines? | [
"Unanswerable"
] | [
[]
] |
b0376a7f67f1568a7926eff8ff557a93f434a253 | How big is the performance difference between this method and the baseline? | [
"Comparing with the highest performing baseline: 1.3 points on ACE2004 dataset, 0.6 points on CWEB dataset, and 0.86 points in the average of all scores."
] | [
[]
] |
dad8cc543a87534751f9f9e308787e1af06f0627 | What datasets used for evaluation? | [
"AIDA-B, ACE2004, MSNBC, AQUAINT, WNED-CWEB, WNED-WIKI",
"AIDA-CoNLL, ACE2004, MSNBC, AQUAINT, WNED-CWEB, WNED-WIKI, OURSELF-WIKI"
] | [
[
"We conduct experiments on several different types of public datasets including news and encyclopedia corpus. The training set is AIDA-Train and Wikipedia datasets, where AIDA-Train contains 18448 mentions and Wikipedia contains 25995 mentions. In order to compare with the previous methods, we evaluate our model on AIDA-B and other datasets. These datasets are well-known and have been used for the evaluation of most entity linking systems. The statistics of the datasets are shown in Table 1.",
"AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets.",
"ACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents.",
"MSNBC BIBREF16 contains top two stories in the ten news categories(Politics, Business, Sports etc.)",
"AQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press.",
"WNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset.",
"WNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation."
],
[
"We conduct experiments on several different types of public datasets including news and encyclopedia corpus. The training set is AIDA-Train and Wikipedia datasets, where AIDA-Train contains 18448 mentions and Wikipedia contains 25995 mentions. In order to compare with the previous methods, we evaluate our model on AIDA-B and other datasets. These datasets are well-known and have been used for the evaluation of most entity linking systems. The statistics of the datasets are shown in Table 1.",
"AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets.",
"ACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents.",
"MSNBC BIBREF16 contains top two stories in the ten news categories(Politics, Business, Sports etc.)",
"AQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press.",
"WNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset.",
"WNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation.",
"OURSELF-WIKI is crawled by ourselves from Wikipedia pages."
]
] |
0481a8edf795768d062c156875d20b8fb656432c | what are the mentioned cues? | [
"output of global LSTM network at time $V_{m_i}^t$5 , which encodes the mention context and target entity information from $V_{m_i}^t$6 to $V_{m_i}^t$7"
] | [
[
"Where $\\oplus $ indicates vector concatenation. The $V_{m_i}^t$ and $V_{e_i}^t$ respectively denote the vector of $m_i$ and $e_i$ at time $t$ . For each mention, there are multiple candidate entities correspond to it. With the purpose of comparing the semantic relevance between the mention and each candidate entity at the same time, we copy multiple copies of the mention vector. Formally, we extend $V_{m_i}^t \\in \\mathbb {R}^{1\\times {n}}$ to $V_{m_i}^t{^{\\prime }} \\in \\mathbb {R}^{k\\times {n}}$ and then combine it with $V_{e_i}^t \\in \\mathbb {R}^{k\\times {n}}$ . Since $V_{m_i}^t$ and $V_{m_i}^t$0 are mainly to represent semantic information, we add feature vector $V_{m_i}^t$1 to enrich lexical and statistical features. These features mainly include the popularity of the entity, the edit distance between the entity description and the mention context, the number of identical words in the entity description and the mention context etc. After getting these feature values, we combine them into a vector and add it to the current state. In addition, the global vector $V_{m_i}^t$2 is also added to $V_{m_i}^t$3 . As mentioned in global encoder module, $V_{m_i}^t$4 is the output of global LSTM network at time $V_{m_i}^t$5 , which encodes the mention context and target entity information from $V_{m_i}^t$6 to $V_{m_i}^t$7 . Thus, the state $V_{m_i}^t$8 contains current information and previous decisions, while also covering the semantic representations and a variety of statistical features. Next, the concatenated vector will be fed into the policy network to generate action."
]
] |
b6a4ab009e6f213f011320155a7ce96e713c11cf | How did the author's work rank among other submissions on the challenge? | [
"Unanswerable"
] | [
[]
] |
cfffc94518d64cb3c8789395707e4336676e0345 | What approaches without reinforcement learning have been tried? | [
"classification, regression, neural methods",
" Support Vector Regression (SVR) and Support Vector Classification (SVC), deep learning regression models of BIBREF2 to convert them to classification models"
] | [
[
"We conducted cross-validation experiments using various values of $t$ and $m$. Table TABREF26 shows the results for the best values of $t$ and $m$ obtained. The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively. To enable a fair comparison we used the same input features in all systems. These input features combine information from the question and the input sentence and are shown in Fig. FIGREF16. The features are based on BIBREF12, and are the same as in BIBREF1, plus the addition of the position of the input snippet. The best SVC and SVR parameters were determined by grid search.",
"The bottom section of Table TABREF26 shows the results of several variants of the neural architecture. The table includes a neural regressor (NNR) and a neural classifier (NNC). The neural classifier is trained in two set ups: “NNC top 5” uses classification labels as described in Section SECREF3, and “NNC SU4 F1” uses the regression labels, that is, the ROUGE-SU4 F1 scores of each sentence. Of interest is the fact that “NNC SU4 F1” outperforms the neural regressor. We have not explored this further and we presume that the relatively good results are due to the fact that ROUGE values range between 0 and 1, which matches the full range of probability values that can be returned by the sigmoid activation of the classifier final layer."
],
[
"We conducted cross-validation experiments using various values of $t$ and $m$. Table TABREF26 shows the results for the best values of $t$ and $m$ obtained. The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively. To enable a fair comparison we used the same input features in all systems. These input features combine information from the question and the input sentence and are shown in Fig. FIGREF16. The features are based on BIBREF12, and are the same as in BIBREF1, plus the addition of the position of the input snippet. The best SVC and SVR parameters were determined by grid search.",
"Based on the findings of Section SECREF3, we apply minimal changes to the deep learning regression models of BIBREF2 to convert them to classification models. In particular, we add a sigmoid activation to the final layer, and use cross-entropy as the loss function. The complete architecture is shown in Fig. FIGREF28."
]
] |
f60629c01f99de3f68365833ee115b95a3388699 | What classification approaches were experimented for this task? | [
"NNC SU4 F1, NNC top 5, Support Vector Classification (SVC)"
] | [
[
"We conducted cross-validation experiments using various values of $t$ and $m$. Table TABREF26 shows the results for the best values of $t$ and $m$ obtained. The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively. To enable a fair comparison we used the same input features in all systems. These input features combine information from the question and the input sentence and are shown in Fig. FIGREF16. The features are based on BIBREF12, and are the same as in BIBREF1, plus the addition of the position of the input snippet. The best SVC and SVR parameters were determined by grid search.",
"The bottom section of Table TABREF26 shows the results of several variants of the neural architecture. The table includes a neural regressor (NNR) and a neural classifier (NNC). The neural classifier is trained in two set ups: “NNC top 5” uses classification labels as described in Section SECREF3, and “NNC SU4 F1” uses the regression labels, that is, the ROUGE-SU4 F1 scores of each sentence. Of interest is the fact that “NNC SU4 F1” outperforms the neural regressor. We have not explored this further and we presume that the relatively good results are due to the fact that ROUGE values range between 0 and 1, which matches the full range of probability values that can be returned by the sigmoid activation of the classifier final layer."
]
] |
a7cb4f8e29fd2f3d1787df64cd981a6318b65896 | Did classification models perform better than previous regression one? | [
"Yes"
] | [
[
"We compare classification and regression approaches and show that classification produces better results than regression but the quality of the results depends on the approach followed to annotate the data labels."
]
] |
642c4704a71fd01b922a0ef003f234dcc7b223cd | What are the main sources of recall errors in the mapping? | [
"irremediable annotation discrepancies, differences in choice of attributes to annotate, The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them, the two annotations encode distinct information, incorrectly applied UniMorph annotation, cross-lingual inconsistency in both resources"
] | [
[
"We present the intrinsic task's recall scores in tab:recall. Bear in mind that due to annotation errors in the original corpora (like the vas example from sec:resources), the optimal score is not always $100\\%$ . Some shortcomings of recall come from irremediable annotation discrepancies. Largely, we are hamstrung by differences in choice of attributes to annotate. When one resource marks gender and the other marks case, we can't infer the gender of the word purely from its surface form. The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.",
"Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources. These findings will harden both resources and better align them with their goal of universal, cross-lingual annotation."
]
] |
e477e494fe15a978ff9c0a5f1c88712cdaec0c5c | Do they look for inconsistencies between different languages' annotations in UniMorph? | [
"Yes"
] | [
[
"Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources. These findings will harden both resources and better align them with their goal of universal, cross-lingual annotation."
]
] |
04495845251b387335bf2e77e2c423130f43c7d9 | Do they look for inconsistencies between different UD treebanks? | [
"Yes"
] | [
[
"The contributions of this work are:"
]
] |
564dcaf8d0bcc274ab64c784e4c0f50d7a2c17ee | Which languages do they validate on? | [
"Ar, Bg, Ca, Cs, Da, De, En, Es, Eu, Fa, Fi, Fr, Ga, He, Hi, Hu, It, La, Lt, Lv, Nb, Nl, Nn, PL, Pt, Ro, Ru, Sl, Sv, Tr, Uk, Ur",
"We apply this conversion to the 31 languages, Arabic, Hindi, Lithuanian, Persian, and Russian. , Dutch, Spanish"
] | [
[],
[
"A dataset-by-dataset problem demands a dataset-by-dataset solution; our task is not to translate a schema, but to translate a resource. Starting from the idealized schema, we create a rule-based tool for converting UD-schema annotations to UniMorph annotations, incorporating language-specific post-edits that both correct infelicities and also increase harmony between the datasets themselves (rather than the schemata). We apply this conversion to the 31 languages with both UD and UniMorph data, and we report our method's recall, showing an improvement over the strategy which just maps corresponding schematic features to each other. Further, we show similar downstream performance for each annotation scheme in the task of morphological tagging.",
"There are three other transformations for which we note no improvement here. Because of the problem in Basque argument encoding in the UniMorph dataset—which only contains verbs—we note no improvement in recall on Basque. Irish also does not improve: UD marks gender on nouns, while UniMorph marks case. Adjectives in UD are also underspecified. The verbs, though, are already correct with the simple mapping. Finally, with Dutch, the UD annotations are impoverished compared to the UniMorph annotations, and missing attributes cannot be inferred without external knowledge.",
"We present the intrinsic task's recall scores in tab:recall. Bear in mind that due to annotation errors in the original corpora (like the vas example from sec:resources), the optimal score is not always $100\\%$ . Some shortcomings of recall come from irremediable annotation discrepancies. Largely, we are hamstrung by differences in choice of attributes to annotate. When one resource marks gender and the other marks case, we can't infer the gender of the word purely from its surface form. The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.",
"For the extrinsic task, the performance is reasonably similar whether UniMorph or UD; see tab:tagging. A large fluctuation would suggest that the two annotations encode distinct information. On the contrary, the similarities suggest that the UniMorph-mapped MSDs have similar content. We recognize that in every case, tagging F1 increased—albeit by amounts as small as $0.16$ points. This is in part due to the information that is lost in the conversion. UniMorph's schema does not indicate the type of pronoun (demonstrative, interrogative, etc.), and when lexical information is not recorded in UniMorph, we delete it from the MSD during transformation. On the other hand, UniMorph's atomic tags have more parts to guess, but they are often related. (E.g. Ipfv always entails Pst in Spanish.) Altogether, these forces seem to have little impact on tagging performance."
]
] |
f3d0e6452b8d24b7f9db1fd898d1fbe6cd23f166 | Does the paper evaluate any adjustment to improve the predicion accuracy of face and audio features? | [
"No"
] | [
[]
] |
9b1d789398f1f1a603e4741a5eee63ccaf0d4a4f | How is face and audio data analysis evaluated? | [
"confusion matrices, $\\text{F}_1$ score"
] | [
[
"Table TABREF16 shows the confusion matrices for facial and audio emotion recognition on our complete AMMER data set and Table TABREF17 shows the results per class for each method, including facial and audio data and micro and macro averages. The classification from facial expressions yields a macro-averaged $\\text{F}_1$ score of 33 % across the three emotions joy, insecurity, and annoyance (P=0.31, R=0.35). While the classification results for joy are promising (R=43 %, P=57 %), the distinction of insecurity and annoyance from the other classes appears to be more challenging."
]
] |
00bcdffff7e055f99aaf1b05cf41c98e2748e948 | What is the baseline method for the task? | [
"For the emotion recognition from text they use described neural network as baseline.\nFor audio and face there is no baseline."
] | [
[
"For the emotion recognition from text, we manually transcribe all utterances of our AMMER study. To exploit existing and available data sets which are larger than the AMMER data set, we develop a transfer learning approach. We use a neural network with an embedding layer (frozen weights, pre-trained on Common Crawl and Wikipedia BIBREF36), a bidirectional LSTM BIBREF37, and two dense layers followed by a soft max output layer. This setup is inspired by BIBREF38. We use a dropout rate of 0.3 in all layers and optimize with Adam BIBREF39 with a learning rate of $10^{-5}$ (These parameters are the same for all further experiments). We build on top of the Keras library with the TensorFlow backend. We consider this setup our baseline model."
]
] |
f92ee3c5fce819db540bded3cfcc191e21799cb1 | What are the emotion detection tools used for audio and face input? | [
"We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions)",
"cannot be disclosed due to licensing restrictions"
] | [
[
"We preprocess the visual data by extracting the sequence of images for each interaction from the point where the agent's or the co-driver's question was completely uttered until the driver's response stops. The average length is 16.3 seconds, with the minimum at 2.2s and the maximum at 54.7s. We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions). It delivers frame-by-frame scores ($\\in [0;100]$) for discrete emotional states of joy, anger and fear. While joy corresponds directly to our annotation, we map anger to our label annoyance and fear to our label insecurity. The maximal average score across all frames constitutes the overall classification for the video sequence. Frames where the software is not able to detect the face are ignored.",
"We extract the audio signal for the same sequence as described for facial expressions and apply an off-the-shelf tool for emotion recognition. The software delivers single classification scores for a set of 24 discrete emotions for the entire utterance. We consider the outputs for the states of joy, anger, and fear, mapping analogously to our classes as for facial expressions. Low-confidence predictions are interpreted as “no emotion”. We accept the emotion with the highest score as the discrete prediction otherwise."
],
[
"We preprocess the visual data by extracting the sequence of images for each interaction from the point where the agent's or the co-driver's question was completely uttered until the driver's response stops. The average length is 16.3 seconds, with the minimum at 2.2s and the maximum at 54.7s. We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions). It delivers frame-by-frame scores ($\\in [0;100]$) for discrete emotional states of joy, anger and fear. While joy corresponds directly to our annotation, we map anger to our label annoyance and fear to our label insecurity. The maximal average score across all frames constitutes the overall classification for the video sequence. Frames where the software is not able to detect the face are ignored."
]
] |
4547818a3bbb727c4bb4a76554b5a5a7b5c5fedb | what amounts of size were used on german-english? | [
"Training data with 159000, 80000, 40000, 20000, 10000 and 5000 sentences, and 7584 sentences for development",
"ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words)"
] | [
[
"We use the TED data from the IWSLT 2014 German INLINEFORM0 English shared translation task BIBREF38 . We use the same data cleanup and train/dev split as BIBREF39 , resulting in 159000 parallel sentences of training data, and 7584 for development.",
"To simulate different amounts of training resources, we randomly subsample the IWSLT training corpus 5 times, discarding half of the data at each step. Truecaser and BPE segmentation are learned on the full training corpus; as one of our experiments, we set the frequency threshold for subword units to 10 in each subcorpus (see SECREF7 ). Table TABREF14 shows statistics for each subcorpus, including the subword vocabulary."
],
[
"Table TABREF18 shows the effect of adding different methods to the baseline NMT system, on the ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words). Our \"mainstream improvements\" add around 6–7 BLEU in both data conditions.",
"In the ultra-low data condition, reducing the BPE vocabulary size is very effective (+4.9 BLEU). Reducing the batch size to 1000 token results in a BLEU gain of 0.3, and the lexical model yields an additional +0.6 BLEU. However, aggressive (word) dropout (+3.4 BLEU) and tuning other hyperparameters (+0.7 BLEU) has a stronger effect than the lexical model, and adding the lexical model (9) on top of the optimized configuration (8) does not improve performance. Together, the adaptations to the ultra-low data setting yield 9.4 BLEU (7.2 INLINEFORM2 16.6). The model trained on full IWSLT data is less sensitive to our changes (31.9 INLINEFORM3 32.8 BLEU), and optimal hyperparameters differ depending on the data condition. Subsequently, we still apply the hyperparameters that were optimized to the ultra-low data condition (8) to other data conditions, and Korean INLINEFORM4 English, for simplicity."
]
] |
07d7652ad4a0ec92e6b44847a17c378b0d9f57f5 | what were their experimental results in the low-resource dataset? | [
"10.37 BLEU"
] | [
[
"Table TABREF21 shows results for Korean INLINEFORM0 English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by gu-EtAl:2018:EMNLP1."
]
] |
9f3444c9fb2e144465d63abf58520cddd4165a01 | what are the methods they compare with in the korean-english dataset? | [
"gu-EtAl:2018:EMNLP1"
] | [
[
"Table TABREF21 shows results for Korean INLINEFORM0 English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by gu-EtAl:2018:EMNLP1."
]
] |
2348d68e065443f701d8052018c18daa4ecc120e | what pitfalls are mentioned in the paper? | [
"highly data-inefficient, underperform phrase-based statistical machine translation"
] | [
[
"While neural machine translation (NMT) has achieved impressive performance in high-resource data conditions, becoming dominant in the field BIBREF0 , BIBREF1 , BIBREF2 , recent research has argued that these models are highly data-inefficient, and underperform phrase-based statistical machine translation (PBSMT) or unsupervised methods in low-data conditions BIBREF3 , BIBREF4 . In this paper, we re-assess the validity of these results, arguing that they are the result of lack of system adaptation to low-resource settings. Our main contributions are as follows:"
]
] |
5679fabeadf680e35a4f7b092d39e8638dca6b4d | Does the paper report the results of previous models applied to the same tasks? | [
"Yes",
"No"
] | [
[
"Technical and theoretical questions related to the proposed method and infrastructure for the exploration and facilitation of debates will be discussed in three sections. The first section concerns notions of how to define what constitutes a belief or opinion and how these can be mined from texts. To this end, an approach based on the automated extraction of semantic frames expressing causation is proposed. The observatory thus builds on the theoretical premise that expressions of causation such as `global warming causes rises in sea levels' can be revelatory for a person or group's underlying belief systems. Through a further technical description of the observatory's data-analytical components, section two of the paper deals with matters of spatially modelling the output of the semantic frame extractor and how this might be achieved without sacrificing nuances of meaning. The final section of the paper, then, discusses how insights gained from technologically observing opinion dynamics can inform conceptual modelling efforts and approaches to on-line opinion facilitation. As such, the paper brings into view and critically evaluates the fundamental conceptual leap from machine-guided observation to debate facilitation and intervention."
],
[]
] |
a939a53cabb4893b2fd82996f3dbe8688fdb7bbb | How is the quality of the discussion evaluated? | [
"Unanswerable"
] | [
[]
] |
8b99767620fd4efe51428b68841cc3ec06699280 | What is the technique used for text analysis and mining? | [
"Unanswerable"
] | [
[]
] |
312417675b3dc431eb7e7b16a917b7fed98d4376 | What are the causal mapping methods employed? | [
"Axelrod's causal mapping method"
] | [
[
"Axelrod's causal mapping method comprises a set of conventions to graphically represent networks of causes and effects (the nodes in a network) as well as the qualitative aspects of this relation (the network’s directed edges, notably assertions of whether the causal linkage is positive or negative). These causes and effects are to be extracted from relevant sources by means of a series of heuristics and an encoding scheme (it should be noted that for this task Axelrod had human readers in mind). The graphs resulting from these efforts provide a structural overview of the relations among causal assertions (and thus beliefs):"
]
] |
792d7b579cbf7bfad8fe125b0d66c2059a174cf9 | What is the previous work's model? | [
"Ternary Trans-CNN"
] | [
[
"Mathur et al. in their paper for detecting offensive tweets proposed a Ternary Trans-CNN model where they train a model architecture comprising of 3 layers of Convolution 1D having filter sizes of 15, 12 and 10 and kernel size of 3 followed by 2 dense fully connected layer of size 64 and 3. The first dense FC layer has ReLU activation while the last Dense layer had Softmax activation. They were able to train this network on a parallel English dataset provided by Davidson et al. The authors were able to achieve Accuracy of 83.9%, Precision of 80.2%, Recall of 69.8%.",
"The approach looked promising given that the dataset was merely 3189 sentences divided into three categories and thus we replicated the experiment but failed to replicate the results. The results were poor than what the original authors achieved. But, most of the model hyper-parameter choices where inspired from this work."
]
] |
44a2a8e187f8adbd7d63a51cd2f9d2d324d0c98d | What dataset is used? | [
"HEOT , A labelled dataset for a corresponding english tweets",
"HEOT"
] | [
[
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al. This dataset was important to employ Transfer Learning to our task since the number of labeled dataset was very small. Basic summary and examples of the data from the dataset are below:"
],
[
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al. This dataset was important to employ Transfer Learning to our task since the number of labeled dataset was very small. Basic summary and examples of the data from the dataset are below:"
]
] |
5908d7fb6c48f975c5dfc5b19bb0765581df2b25 | How big is the dataset? | [
"3189 rows of text messages",
"Resulting dataset was 7934 messages for train and 700 messages for test."
] | [
[
"Dataset: Based on some earlier work, only available labelled dataset had 3189 rows of text messages of average length of 116 words and with a range of 1, 1295. Prior work addresses this concern by using Transfer Learning on an architecture learnt on about 14,500 messages with an accuracy of 83.90. We addressed this concern using data augmentation techniques applied on text data."
],
[
"Train-test split: The labelled dataset that was available for this task was very limited in number of examples and thus as noted above few data augmentation techniques were applied to boost the learning of the network. Before applying augmentation, a train-test split of 78%-22% was done from the original, cleansed data set. Thus, 700 tweets/messages were held out for testing. All model evaluation were done in on the test set that got generated by this process. The results presented in this report are based on the performance of the model on the test set. The training set of 2489 messages were however sent to an offline pipeline for augmenting the data. The resulting training dataset was thus 7934 messages. the final distribution of messages for training and test was thus below:"
]
] |
cca3301f20db16f82b5d65a102436bebc88a2026 | How is the dataset collected? | [
"A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al, HEOT obtained from one of the past studies done by Mathur et al"
] | [
[
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al. This dataset was important to employ Transfer Learning to our task since the number of labeled dataset was very small. Basic summary and examples of the data from the dataset are below:"
]
] |
cfd67b9eeb10e5ad028097d192475d21d0b6845b | Was each text augmentation technique experimented individually? | [
"No"
] | [
[]
] |
e1c681280b5667671c7f78b1579d0069cba72b0e | What models do previous work use? | [
"Ternary Trans-CNN , Hybrid multi-channel CNN and LSTM"
] | [
[
"Related Work ::: Transfer learning based approaches",
"Mathur et al. in their paper for detecting offensive tweets proposed a Ternary Trans-CNN model where they train a model architecture comprising of 3 layers of Convolution 1D having filter sizes of 15, 12 and 10 and kernel size of 3 followed by 2 dense fully connected layer of size 64 and 3. The first dense FC layer has ReLU activation while the last Dense layer had Softmax activation. They were able to train this network on a parallel English dataset provided by Davidson et al. The authors were able to achieve Accuracy of 83.9%, Precision of 80.2%, Recall of 69.8%.",
"The approach looked promising given that the dataset was merely 3189 sentences divided into three categories and thus we replicated the experiment but failed to replicate the results. The results were poor than what the original authors achieved. But, most of the model hyper-parameter choices where inspired from this work.",
"Related Work ::: Hybrid models",
"In another localized setting of Vietnamese language, Nguyen et al. in 2017 proposed a Hybrid multi-channel CNN and LSTM model where they build feature maps for Vietnamese language using CNN to capture shorterm dependencies and LSTM to capture long term dependencies and concatenate both these feature sets to learn a unified set of features on the messages. These concatenated feature vectors are then sent to a few fully connected layers. They achieved an accuracy rate of 87.3% with this architecture."
]
] |
58d50567df71fa6c3792a0964160af390556757d | Does the dataset contain content from various social media platforms? | [
"No"
] | [
[
"Hinglish is a linguistic blend of Hindi (very widely spoken language in India) and English (an associate language of urban areas) and is spoken by upwards of 350 million people in India. While the name is based on the Hindi language, it does not refer exclusively to Hindi, but is used in India, with English words blending with Punjabi, Gujarati, Marathi and Hindi. Sometimes, though rarely, Hinglish is used to refer to Hindi written in English script and mixing with English words or phrases. This makes analyzing the language very interesting. Its rampant usage in social media like Twitter, Facebook, Online blogs and reviews has also led to its usage in delivering hate and abuses in similar platforms. We aim to find such content in the social media focusing on the tweets. Hypothetically, if we can classify such tweets, we might be able to detect them and isolate them for further analysis before it reaches public. This will a great application of AI to the social cause and thus is motivating. An example of a simple, non offensive message written in Hinglish could be:"
]
] |
07c79edd4c29635dbc1c2c32b8df68193b7701c6 | What dataset is used? | [
"HEOT , A labelled dataset for a corresponding english tweets "
] | [
[
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al. This dataset was important to employ Transfer Learning to our task since the number of labeled dataset was very small. Basic summary and examples of the data from the dataset are below:"
]
] |
66125cfdf11d3bf8e59728428e02021177142c3a | How they demonstrate that language-neutral component is sufficiently general in terms of modeling semantics to allow high-accuracy word-alignment? | [
"Table TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance.",
"explicit projection had a negligible effect on the performance"
] | [
[
"Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specific component is similar across all sentences in the language.",
"We use a pre-trained mBERT model that was made public with the BERT release. The model dimension is 768, hidden layer dimension 3072, self-attention uses 12 heads, the model has 12 layers. It uses a vocabulary of 120k wordpieces that is shared for all languages.",
"To train the language identification classifier, for each of the BERT languages we randomly selected 110k sentences of at least 20 characters from Wikipedia, and keep 5k for validation and 5k for testing for each language. The training data are also used for estimating the language centroids.",
"Results ::: Word Alignment.",
"Table TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance."
],
[
"Table TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance."
]
] |
222b2469eede9a0448e0226c6c742e8c91522af3 | Are language-specific and language-neutral components disjunctive? | [
"No"
] | [
[
"We thus try to remove the language-specific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space. We do this by estimating the language centroid as the mean of the mBERT representations for a set of sentences in that language and subtracting the language centroid from the contextual embeddings."
]
] |
6f8386ad64dce3a20bc75165c5c7591df8f419cf | How they show that mBERT representations can be split into a language-specific component and a language-neutral component? | [
"We thus try to remove the language-specific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space."
] | [
[
"Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specific component is similar across all sentences in the language.",
"We thus try to remove the language-specific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space. We do this by estimating the language centroid as the mean of the mBERT representations for a set of sentences in that language and subtracting the language centroid from the contextual embeddings.",
"We then analyze the semantic properties of both the original and the centered representations using a range of probing tasks. For all tasks, we test all layers of the model. For tasks utilizing a single-vector sentence representation, we test both the vector corresponding to the [cls] token and mean-pooled states."
]
] |
81dc39ee6cdacf90d5f0f62134bf390a29146c65 | What challenges this work presents that must be solved to build better language-neutral representations? | [
"contextual embeddings do not represent similar semantic phenomena similarly and therefore they are not directly usable for zero-shot cross-lingual tasks"
] | [
[
"Using a set of semantically oriented tasks that require explicit semantic cross-lingual representations, we showed that mBERT contextual embeddings do not represent similar semantic phenomena similarly and therefore they are not directly usable for zero-shot cross-lingual tasks."
]
] |
b1ced2d6dcd1d7549be2594396cbda34da6c3bca | What is the performance of their system? | [
"Unanswerable"
] | [
[]
] |
f3be1a27df2e6ad12eed886a8cd2dfe09b9e2b30 | What evaluation metrics are used? | [
"Unanswerable"
] | [
[]
] |
a45a86b6a02a98d3ab11f1d04acd3446e95f5a16 | What is the source of the dialogues? | [
"Unanswerable"
] | [
[]
] |
1f1a9f2dd8c4c10b671cb8affe56e181948e229e | What pretrained LM is used? | [
"Generative Pre-trained Transformer (GPT)",
"Generative Pre-trained Transformer (GPT)"
] | [
[
"We apply the Generative Pre-trained Transformer (GPT) BIBREF2 as our pre-trained language model. GPT is a multi-layer Transformer decoder with a causal self-attention which is pre-trained, unsupervised, on the BooksCorpus dataset. BooksCorpus dataset contains over 7,000 unique unpublished books from a variety of genres. Pre-training on such large contiguous text corpus enables the model to capture long-range dialogue context information. Furthermore, as existing EmpatheticDialogue dataset BIBREF4 is relatively small, fine-tuning only on such dataset will limit the chitchat topic of the model. Hence, we first integrate persona into CAiRE, and pre-train the model on PersonaChat BIBREF3 , following a previous transfer-learning strategy BIBREF1 . This pre-training procedure allows CAiRE to have a more consistent persona, thus improving the engagement and consistency of the model. We refer interested readers to the code repository recently released by HuggingFace. Finally, in order to optimize empathy in CAiRE, we fine-tune this pre-trained model using EmpatheticDialogue dataset to help CAiRE understand users' feeling."
],
[
"In contrast to such modularized dialogue system, end-to-end systems learn all components as a single model in a fully data-driven manner, and mitigate the lack of labeled data by sharing representations among different modules. In this paper, we build an end-to-end empathetic chatbot by fine-tuning BIBREF1 the Generative Pre-trained Transformer (GPT) BIBREF2 on the PersonaChat dataset BIBREF3 and the Empathetic-Dialogue dataset BIBREF4 . We establish a web-based user interface which allows multiple users to asynchronously chat with CAiRE online. CAiRE can also collect user feedback and continuously improve its response quality and discard undesirable generation behaviors (e.g. unethical responses) via active learning and negative training."
]
] |