question_id: string, length 40
question: string, length 4 to 171
answer: sequence
evidence: sequence
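Below is a minimal sketch of how records with this schema might be loaded and inspected. It assumes the rows are stored as JSON Lines using the field names listed above; the file name `qa_records.jsonl` and the helper `load_records` are hypothetical illustrations, not part of the dataset itself.

```python
import json

def load_records(path):
    """Yield one record per line from a JSON Lines file (assumed layout)."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

if __name__ == "__main__":
    # Each record: a 40-character question_id, a free-text question,
    # a list of reference answers, and a parallel list of evidence
    # paragraph lists (one list of supporting paragraphs per answer).
    for record in load_records("qa_records.jsonl"):  # hypothetical file name
        print(record["question_id"], "-", record["question"])
        for answer, paragraphs in zip(record["answer"], record["evidence"]):
            print("  answer:", answer)
            print("  supporting paragraphs:", len(paragraphs))
```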
eeaceee98ef1f6c971dac7b0b8930ee8060d71c2
What approaches they propose?
[ "Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks., Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves." ]
[ [ "We argue that a way out of this standstill is in a more practical and nuanced methodology for defining and evaluating faithfulness. We propose the following challenge to the community: We must develop formal definition and evaluation for faithfulness that allows us the freedom to say when a method is sufficiently faithful to be useful in practice.", "We note two possible approaches to this end:", "Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks. Perhaps some models or tasks allow sufficiently faithful interpretation, even if the same is not true for others.", "For example, the method may not be faithful for some question-answering task, but faithful for movie review sentiment, perhaps based on various syntactic and semantic attributes of those tasks.", "Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves. If we are able to say with some degree of confidence whether a specific decision's explanation is faithful to the model, even if the interpretation method is not considered universally faithful, it can be used with respect to those specific areas or instances only." ] ]
3371d586a3a81de1552d90459709c57c0b1a2594
What faithfulness criteria does they propose?
[ "Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks., Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves." ]
[ [ "We argue that a way out of this standstill is in a more practical and nuanced methodology for defining and evaluating faithfulness. We propose the following challenge to the community: We must develop formal definition and evaluation for faithfulness that allows us the freedom to say when a method is sufficiently faithful to be useful in practice.", "We note two possible approaches to this end:", "Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks. Perhaps some models or tasks allow sufficiently faithful interpretation, even if the same is not true for others.", "For example, the method may not be faithful for some question-answering task, but faithful for movie review sentiment, perhaps based on various syntactic and semantic attributes of those tasks.", "Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves. If we are able to say with some degree of confidence whether a specific decision's explanation is faithful to the model, even if the interpretation method is not considered universally faithful, it can be used with respect to those specific areas or instances only." ] ]
d4b9cdb4b2dfda1e0d96ab6c3b5e2157fd52685e
Which are three assumptions in current approaches for defining faithfulness?
[ "Two models will make the same predictions if and only if they use the same reasoning process., On similar inputs, the model makes similar decisions if and only if its reasoning is similar., Certain parts of the input are more important to the model reasoning than others. Moreover, the contributions of different parts of the input are independent from each other.", "Two models will make the same predictions if and only if they use the same reasoning process., On similar inputs, the model makes similar decisions if and only if its reasoning is similar., Certain parts of the input are more important to the model reasoning than others. Moreover, the contributions of different parts of the input are independent from each other." ]
[ [ "Defining Faithfulness ::: Assumption 1 (The Model Assumption).", "Two models will make the same predictions if and only if they use the same reasoning process.", "Defining Faithfulness ::: Assumption 2 (The Prediction Assumption).", "On similar inputs, the model makes similar decisions if and only if its reasoning is similar.", "Defining Faithfulness ::: Assumption 3 (The Linearity Assumption).", "Certain parts of the input are more important to the model reasoning than others. Moreover, the contributions of different parts of the input are independent from each other." ], [ "Defining Faithfulness ::: Assumption 1 (The Model Assumption).", "Two models will make the same predictions if and only if they use the same reasoning process.", "Defining Faithfulness ::: Assumption 2 (The Prediction Assumption).", "On similar inputs, the model makes similar decisions if and only if its reasoning is similar.", "Defining Faithfulness ::: Assumption 3 (The Linearity Assumption).", "Certain parts of the input are more important to the model reasoning than others. Moreover, the contributions of different parts of the input are independent from each other." ] ]
2a859e80d8647923181cb2d8f9a2c67b1c3f4608
Which are key points in guidelines for faithfulness evaluation?
[ "Be explicit in what you evaluate., Faithfulness evaluation should not involve human-judgement on the quality of interpretation., Faithfulness evaluation should not involve human-provided gold labels., Do not trust “inherent interpretability” claims., Faithfulness evaluation of IUI systems should not rely on user performance." ]
[ [ "Guidelines for Evaluating Faithfulness", "We propose the following guidelines for evaluating the faithfulness of explanations. These guidelines address common pitfalls and sub-optimal practices we observed in the literature.", "Guidelines for Evaluating Faithfulness ::: Be explicit in what you evaluate.", "Conflating plausability and faithfulness is harmful. You should be explicit on which one of them you evaluate, and use suitable methodologies for each one. Of course, the same applies when designing interpretation techniques—be clear about which properties are being prioritized.", "Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-judgement on the quality of interpretation.", "We note that: (1) humans cannot judge if an interpretation is faithful or not: if they understood the model, interpretation would be unnecessary; (2) for similar reasons, we cannot obtain supervision for this problem, either. Therefore, human judgement should not be involved in evaluation for faithfulness, as human judgement measures plausability.", "Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-provided gold labels.", "We should be able to interpret incorrect model predictions, just the same as correct ones. Evaluation methods that rely on gold labels are influenced by human priors on what should the model do, and again push the evaluation in the direction of plausability.", "Guidelines for Evaluating Faithfulness ::: Do not trust “inherent interpretability” claims.", "Inherent interpretability is a claim until proven otherwise. Explanations provided by “inherently interpretable” models must be held to the same standards as post-hoc interpretation methods, and be evaluated for faithfulness using the same set of evaluation techniques.", "Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation of IUI systems should not rely on user performance.", "End-task user performance in HCI settings is merely indicative of correlation between plausibility and model performance, however small this correlation is. While important to evaluate the utility of the interpretations for some use-cases, it is unrelated to faithfulness." ] ]
aceac4ad16ffe1af0f01b465919b1d4422941a6b
Did they use the state-of-the-art model to analyze the attention?
[ "we provide an extensive analysis of the state-of-the-art model" ]
[ [ "We make two main contributions. First, we introduce new strategies for interpreting the behavior of deep models in their intermediate layers, specifically, by examining the saliency of the attention and the gating signals. Second, we provide an extensive analysis of the state-of-the-art model for the NLI task and show that our methods reveal interesting insights not available from traditional methods of inspecting attention and word saliency." ] ]
f7070b2e258beac9b09514be2bfcc5a528cc3a0e
What is the performance of their model?
[ "Unanswerable", "Unanswerable" ]
[ [], [] ]
2efdcebebeb970021233553104553205ce5d6567
How many layers are there in their model?
[ "two LSTM layers" ]
[ [ "Instead of considering individual dimensions of the gating signals, we aggregate them to consider their norm, both for the signal and for its saliency. Note that ESIM models have two LSTM layers, the first (input) LSTM performs the input encoding and the second (inference) LSTM generates the representation for inference." ] ]
4fa851d91388f0803e33f6cfae519548598cd37c
Did they compare with gradient-based methods?
[ "Unanswerable" ]
[ [] ]
a891039441e008f1fd0a227dbed003f76c140737
What MC abbreviate for?
[ "machine comprehension" ]
[ [ "Enabling computers to understand given documents and answer questions about their content has recently attracted intensive interest, including but not limited to the efforts as in BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Many specific problems such as machine comprehension and question answering often involve modeling such question-document pairs." ] ]
73738e42d488b32c9db89ac8adefc75403fa2653
how much of improvement the adaptation model can get?
[ " 69.10%/78.38%" ]
[ [ "Table 2 shows the ablation performances of various Q-code on the development set. Note that since the testset is hidden from us, we can only perform such an analysis on the development set. Our baseline model using no Q-code achieved a 68.00% and 77.36% EM and F1 scores, respectively. When we added the explicit question type T-code into the baseline model, the performance was improved slightly to 68.16%(EM) and 77.58%(F1). We then used TreeLSTM introduce syntactic parses for question representation and understanding (replacing simple question type as question understanding Q-code), which consistently shows further improvement. We further incorporated the soft adaptation. When letting the number of hidden question types ( $K$ ) to be 20, the performance improves to 68.73%/77.74% on EM and F1, respectively, which corresponds to the results of our model reported in Table 1 . Furthermore, after submitted our result, we have experimented with a large value of $K$ and found that when $K=100$ , we can achieve a better performance of 69.10%/78.38% on the development set." ] ]
6c8bd7fa1cfb1b2bbeb011cc9c712dceac0c8f06
what is the architecture of the baseline model?
[ "word embedding, input encoder, alignment, aggregation, and prediction.", "Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction." ]
[ [ "Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction. Below we discuss these components in more details." ], [ "Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction. Below we discuss these components in more details." ] ]
fa218b297d9cdcae238cef71096752ce27ca8f4a
What is the exact performance on SQUAD?
[ "Our model achieves a 68.73% EM score and 77.39% F1 score" ]
[ [ "Table 1 shows the official leaderboard on SQuAD test set when we submitted our system. Our model achieves a 68.73% EM score and 77.39% F1 score, which is ranked among the state of the art single models (without model ensembling)." ] ]
ff28d34d1aaa57e7ad553dba09fc924dc21dd728
What are their correlation results?
[ "High correlation results range from 0.472 to 0.936" ]
[ [] ]
ae8354e67978b7c333094c36bf9d561ca0c2d286
What dataset do they use?
[ "datasets from the NIST DUC-05, DUC-06 and DUC-07 shared tasks" ]
[ [ "We use datasets from the NIST DUC-05, DUC-06 and DUC-07 shared tasks BIBREF7, BIBREF19, BIBREF20. Given a question and a cluster of newswire documents, the contestants were asked to generate a 250-word summary answering the question. DUC-05 contains 1,600 summaries (50 questions x 32 systems); in DUC-06, 1,750 summaries are included (50 questions x 35 systems); and DUC-07 has 1,440 summaries (45 questions x 32 systems)." ] ]
02348ab62957cb82067c589769c14d798b1ceec7
What simpler models do they look at?
[ "BiGRU s with attention, ROUGE, Language model (LM), Next sentence prediction", "BiGRUs with attention, ROUGE, Language model, and next sentence prediction " ]
[ [ "Methods ::: Baselines ::: BiGRU s with attention:", "This is very similar to Sum-QE but now $\\mathcal {E}$ is a stack of BiGRU s with self-attention BIBREF21, instead of a BERT instance. The final summary representation ($h$) is the sum of the resulting context-aware token embeddings ($h = \\sum _i a_i h_i$) weighted by their self-attention scores ($a_i$). We again have three flavors: one single-task (BiGRU-ATT-S-1) and two multi-task (BiGRU-ATT-M-1 and BiGRU-ATT-M-5).", "Methods ::: Baselines ::: ROUGE:", "This baseline is the ROUGE version that performs best on each dataset, among the versions considered by BIBREF13. Although ROUGE focuses on surface similarities between peer and reference summaries, we would expect properties like grammaticality, referential clarity and coherence to be captured to some extent by ROUGE versions based on long $n$-grams or longest common subsequences.", "Methods ::: Baselines ::: Language model (LM):", "For a peer summary, a reasonable estimate of $\\mathcal {Q}1$ (Grammaticality) is the perplexity returned by a pre-trained language model. We experiment with the pre-trained GPT-2 model BIBREF22, and with the probability estimates that BERT can produce for each token when the token is treated as masked (BERT-FR-LM). Given that the grammaticality of a summary can be corrupted by just a few bad tokens, we compute the perplexity by considering only the $k$ worst (lowest LM probability) tokens of the peer summary, where $k$ is a tuned hyper-parameter.", "Methods ::: Baselines ::: Next sentence prediction:", "BERT training relies on two tasks: predicting masked tokens and next sentence prediction. The latter seems to be aligned with the definitions of $\\mathcal {Q}3$ (Referential Clarity), $\\mathcal {Q}4$ (Focus) and $\\mathcal {Q}5$ (Structure & Coherence). Intuitively, when a sentence follows another with high probability, it should involve clear referential expressions and preserve the focus and local coherence of the text. We, therefore, use a pre-trained BERT model (BERT-FR-NS) to calculate the sentence-level perplexity of each summary:", "where $p(s_i|s_{i-1})$ is the probability that BERT assigns to the sequence of sentences $\\left< s_{i-1}, s \\right>$, and $n$ is the number of sentences in the peer summary." ], [ "Methods ::: Baselines ::: BiGRU s with attention:", "This is very similar to Sum-QE but now $\\mathcal {E}$ is a stack of BiGRU s with self-attention BIBREF21, instead of a BERT instance. The final summary representation ($h$) is the sum of the resulting context-aware token embeddings ($h = \\sum _i a_i h_i$) weighted by their self-attention scores ($a_i$). We again have three flavors: one single-task (BiGRU-ATT-S-1) and two multi-task (BiGRU-ATT-M-1 and BiGRU-ATT-M-5).", "Methods ::: Baselines ::: ROUGE:", "This baseline is the ROUGE version that performs best on each dataset, among the versions considered by BIBREF13. Although ROUGE focuses on surface similarities between peer and reference summaries, we would expect properties like grammaticality, referential clarity and coherence to be captured to some extent by ROUGE versions based on long $n$-grams or longest common subsequences.", "Methods ::: Baselines ::: Language model (LM):", "For a peer summary, a reasonable estimate of $\\mathcal {Q}1$ (Grammaticality) is the perplexity returned by a pre-trained language model. 
We experiment with the pre-trained GPT-2 model BIBREF22, and with the probability estimates that BERT can produce for each token when the token is treated as masked (BERT-FR-LM). Given that the grammaticality of a summary can be corrupted by just a few bad tokens, we compute the perplexity by considering only the $k$ worst (lowest LM probability) tokens of the peer summary, where $k$ is a tuned hyper-parameter.", "Methods ::: Baselines ::: Next sentence prediction:", "BERT training relies on two tasks: predicting masked tokens and next sentence prediction. The latter seems to be aligned with the definitions of $\\mathcal {Q}3$ (Referential Clarity), $\\mathcal {Q}4$ (Focus) and $\\mathcal {Q}5$ (Structure & Coherence). Intuitively, when a sentence follows another with high probability, it should involve clear referential expressions and preserve the focus and local coherence of the text. We, therefore, use a pre-trained BERT model (BERT-FR-NS) to calculate the sentence-level perplexity of each summary:", "where $p(s_i|s_{i-1})$ is the probability that BERT assigns to the sequence of sentences $\\left< s_{i-1}, s \\right>$, and $n$ is the number of sentences in the peer summary." ] ]
3748787379b3a7d222c3a6254def3f5bfb93a60e
What linguistic quality aspects are addressed?
[ "Grammaticality, non-redundancy, referential clarity, focus, structure & coherence" ]
[ [] ]
6852217163ea678f2009d4726cb6bd03cf6a8f78
What benchmark datasets are used for the link prediction task?
[ "WN18RR, FB15k-237, YAGO3-10", "WN18RR BIBREF26, FB15k-237 BIBREF18, YAGO3-10 BIBREF27" ]
[ [ "We evaluate our proposed models on three commonly used knowledge graph datasets—WN18RR BIBREF26, FB15k-237 BIBREF18, and YAGO3-10 BIBREF27. Details of these datasets are summarized in Table TABREF18." ], [ "We evaluate our proposed models on three commonly used knowledge graph datasets—WN18RR BIBREF26, FB15k-237 BIBREF18, and YAGO3-10 BIBREF27. Details of these datasets are summarized in Table TABREF18.", "WN18RR, FB15k-237, and YAGO3-10 are subsets of WN18 BIBREF8, FB15k BIBREF8, and YAGO3 BIBREF27, respectively. As pointed out by BIBREF26 and BIBREF18, WN18 and FB15k suffer from the test set leakage problem. One can attain the state-of-the-art results even using a simple rule based model. Therefore, we use WN18RR and FB15k-237 as the benchmark datasets." ] ]
cd1ad7e18d8eef8f67224ce47f3feec02718ea1a
What are state-of-the art models for this task?
[ "TransE, DistMult, ComplEx, ConvE, RotatE" ]
[ [ "In this part, we show the performance of our proposed models—HAKE and ModE—against existing state-of-the-art methods, including TransE BIBREF8, DistMult BIBREF9, ComplEx BIBREF17, ConvE BIBREF18, and RotatE BIBREF7." ] ]
9c9e90ceaba33242342a5ae7568e89fe660270d5
How better does HAKE model peform than state-of-the-art methods?
[ "0.021 higher MRR, a 2.4% higher H@1, and a 2.4% higher H@3 against RotatE, respectively, doesn't outperform the previous state-of-the-art as much as that of WN18RR and YAGO3-10, HAKE gains a 0.050 higher MRR, 6.0% higher H@1 and 4.6% higher H@3 than RotatE, respectively" ]
[ [ "WN18RR dataset consists of two kinds of relations: the symmetric relations such as $\\_similar\\_to$, which link entities in the category (b); other relations such as $\\_hypernym$ and $\\_member\\_meronym$, which link entities in the category (a). Actually, RotatE can model entities in the category (b) very well BIBREF7. However, HAKE gains a 0.021 higher MRR, a 2.4% higher H@1, and a 2.4% higher H@3 against RotatE, respectively. The superior performance of HAKE compared with RotatE implies that our proposed model can better model different levels in the hierarchy.", "FB15k-237 dataset has more complex relation types and fewer entities, compared with WN18RR and YAGO3-10. Although there are relations that reflect hierarchy in FB15k-237, there are also lots of relations, such as “/location/location/time_zones” and “/film/film/prequel”, that do not lead to hierarchy. The characteristic of this dataset accounts for why our proposed models doesn't outperform the previous state-of-the-art as much as that of WN18RR and YAGO3-10 datasets. However, the results also show that our models can gain better performance so long as there exists semantic hierarchies in knowledge graphs. As almost all knowledge graphs have such hierarchy structures, our model is widely applicable.", "YAGO3-10 datasets contains entities with high relation-specific indegree BIBREF18. For example, the link prediction task $(?, hasGender, male)$ has over 1000 true answers, which makes the task challenging. Fortunately, we can regard “male” as an entity at higher level of the hierarchy and the predicted head entities as entities at lower level. In this way, YAGO3-10 is a dataset that clearly has semantic hierarchy property, and we can expect that our proposed models is capable of working well on this dataset. Table TABREF19 validates our expectation. Both ModE and HAKE significantly outperform the previous state-of-the-art. Notably, HAKE gains a 0.050 higher MRR, 6.0% higher H@1 and 4.6% higher H@3 than RotatE, respectively." ] ]
2a058f8f6bd6f8e80e8452e1dba9f8db5e3c7de8
How are entities mapped onto polar coordinate system?
[ "radial coordinate and the angular coordinates correspond to the modulus part and the phase part, respectively" ]
[ [ "Combining the modulus part and the phase part, HAKE maps entities into the polar coordinate system, where the radial coordinate and the angular coordinates correspond to the modulus part and the phase part, respectively. That is, HAKE maps an entity $h$ to $[\\textbf {h}_m;\\textbf {h}_p]$, where $\\textbf {h}_m$ and $\\textbf {h}_p$ are generated by the modulus part and the phase part, respectively, and $[\\,\\cdot \\,; \\,\\cdot \\,]$ denotes the concatenation of two vectors. Obviously, $([\\textbf {h}_m]_i,[\\textbf {h}_p]_i)$ is a 2D point in the polar coordinate system. Specifically, we formulate HAKE as follows:" ] ]
db9021ddd4593f6fadf172710468e2fdcea99674
What additional techniques are incorporated?
[ "Unanswerable", "incorporating coding syntax tree model" ]
[ [], [ "Although the generated code is incoherent and often predict wrong code token, this is expected because of the limited amount of training data. LSTM generally requires a more extensive set of data (100k+ in such scenario) to build a more accurate model. The incoherence can be resolved by incorporating coding syntax tree model in future. For instance–", "\"define the method tzname with 2 arguments: self and dt.\"", "is translated into–", "def __init__ ( self , regex ) :.", "The translator is successfully generating the whole codeline automatically but missing the noun part (parameter and function name) part of the syntax." ] ]
8ea4bd4c1d8a466da386d16e4844ea932c44a412
What dataset do they use?
[ "A parallel corpus where the source is an English expression of code and the target is Python code.", " text-code parallel corpus" ]
[ [ "SMT techniques require a parallel corpus in thr source and thr target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18805 aligned data in it . In source data, the expression of each line code is written in the English language. In target data, the code is written in Python programming language." ], [ "SMT techniques require a parallel corpus in thr source and thr target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18805 aligned data in it . In source data, the expression of each line code is written in the English language. In target data, the code is written in Python programming language." ] ]
92240eeab107a4f636705b88f00cefc4f0782846
Do they compare to other models?
[ "No" ]
[ [] ]
4196d329061f5a9d147e1e77aeed6a6bd9b35d18
What is the architecture of the system?
[ "seq2seq translation" ]
[ [ "In order to train the translation model between text-to-code an open source Neural Machine Translation (NMT) - OpenNMT implementation is utilized BIBREF11. PyTorch is used as Neural Network coding framework. For training, three types of Recurrent Neural Network (RNN) layers are used – an encoder layer, a decoder layer and an output layer. These layers together form a LSTM model. LSTM is typically used in seq2seq translation." ] ]
a37e4a21ba98b0259c36deca0d298194fa611d2f
How long are expressions in layman's language?
[ "Unanswerable" ]
[ [] ]
321429282557e79061fe2fe02a9467f3d0118cdd
What additional techniques could be incorporated to further improve accuracy?
[ "phrase-based word embedding, Abstract Syntax Tree(AST)" ]
[ [ "The main advantage of translating to a programming language is - it has a concrete and strict lexical and grammatical structure which human languages lack. The aim of this paper was to make the text-to-code framework work for general purpose programming language, primarily Python. In later phase, phrase-based word embedding can be incorporated for improved vocabulary mapping. To get more accurate target code for each line, Abstract Syntax Tree(AST) can be beneficial." ] ]
891cab2e41d6ba962778bda297592c916b432226
What programming language is target language?
[ "Python" ]
[ [ "SMT techniques require a parallel corpus in thr source and thr target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18805 aligned data in it . In source data, the expression of each line code is written in the English language. In target data, the code is written in Python programming language." ] ]
1eeabfde99594b8d9c6a007f50b97f7f527b0a17
What dataset is used to measure accuracy?
[ "validation data" ]
[ [ "Training parallel corpus had 18805 lines of annotated code in it. The training model is executed several times with different training parameters. During the final training process, 500 validation data is used to generate the recurrent neural model, which is 3% of the training data. We run the training with epoch value of 10 with a batch size of 64. After finishing the training, the accuracy of the generated model using validation data from the source corpus was 74.40% (Fig. FIGREF17)." ] ]
e96adf8466e67bd19f345578d5a6dc68fd0279a1
Is text-to-image synthesis trained is suppervized or unsuppervized manner?
[ "unsupervised ", "Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning saw recently a tremendous rise in input from the research community specially on two subproblems: text-based natural language and image synthesis" ]
[ [ "Following the above definition, the $\\min \\max $ objective function in Eq. (DISPLAY_FORM10) aims to learn parameters for the discriminator ($\\theta _d$) and generator ($\\theta _g$) to reach an optimization goal: The discriminator intends to differentiate true vs. fake images with maximum capability $\\max _{\\theta _d}$ whereas the generator intends to minimize the difference between a fake image vs. a true image $\\min _{\\theta _g}$. In other words, the discriminator sets the characteristics and the generator produces elements, often images, iteratively until it meets the attributes set forth by the discriminator. GANs are often used with images and other visual elements and are notoriously efficient in generating compelling and convincing photorealistic images. Most recently, GANs were used to generate an original painting in an unsupervised fashion BIBREF24. The following sections go into further detail regarding how the generator and discriminator are trained in GANs." ], [ "black Deep learning shed some light to some of the most sophisticated advances in natural language representation, image synthesis BIBREF7, BIBREF8, BIBREF43, BIBREF35, and classification of generic data BIBREF44. However, a bulk of the latest breakthroughs in deep learning and computer vision were related to supervised learning BIBREF8. Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning saw recently a tremendous rise in input from the research community specially on two subproblems: text-based natural language and image synthesis BIBREF45, BIBREF14, BIBREF8, BIBREF46, BIBREF47. These subproblems are typically subdivided as focused research areas. DC-GAN's contributions are mainly driven by these two research areas. In order to generate plausible images from natural language, DC-GAN contributions revolve around developing a straightforward yet effective GAN architecture and training strategy that allows natural text to image synthesis. These contributions are primarily tested on the Caltech-UCSD Birds and Oxford-102 Flowers datasets. Each image in these datasets carry five text descriptions. These text descriptions were created by the research team when setting up the evaluation environment. The DC-GANs model is subsequently trained on several subcategories. Subcategories in this research represent the training and testing sub datasets. The performance shown by these experiments display a promising yet effective way to generate images from textual natural language descriptions BIBREF8." ] ]
c1477a6c86bd1670dd17407590948000c9a6b7c6
What challenges remain unresolved?
[ "give more independence to the several learning methods (e.g. less human intervention) involved in the studies, increasing the size of the output images" ]
[ [ "blackIn the paper, we first proposed a taxonomy to organize GAN based text-to-image synthesis frameworks into four major groups: semantic enhancement GANs, resolution enhancement GANs, diversity enhancement GANs, and motion enhancement GANs. The taxonomy provides a clear roadmap to show the motivations, architectures, and difference of different methods, and also outlines their evolution timeline and relationships. Following the proposed taxonomy, we reviewed important features of each method and their architectures. We indicated the model definition and key contributions from some advanced GAN framworks, including StackGAN, StackGAN++, AttnGAN, DC-GAN, AC-GAN, TAC-GAN, HDGAN, Text-SeGAn, StoryGAN etc. Many of the solutions surveyed in this paper tackled the highly complex challenge of generating photo-realistic images beyond swatch size samples. In other words, beyond the work of BIBREF8 in which images were generated from text in 64$\\times $64 tiny swatches. Lastly, all methods were evaluated on datasets that included birds, flowers, humans, and other miscellaneous elements. We were also able to allocate some important papers that were as impressive as the papers we finally surveyed. Though, these notable papers have yet to contribute directly or indirectly to the expansion of the vast computer vision AI field. Looking into the future, an excellent extension from the works surveyed in this paper would be to give more independence to the several learning methods (e.g. less human intervention) involved in the studies as well as increasing the size of the output images." ] ]
e020677261d739c35c6f075cde6937d0098ace7f
What is the conclusion of comparison of proposed solution?
[ "HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset, In terms of inception score (IS), which is the metric that was applied to majority models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed slight improvement over its predecessor, text to image synthesis is continuously improving the results for better visual perception and interception" ]
[ [ "While we gathered all the data we could find on scores for each model on the CUB, Oxford, and COCO datasets using IS, FID, FCN, and human classifiers, we unfortunately were unable to find certain data for AttnGAN and HDGAN (missing in Table TABREF48). The best evaluation we can give for those with missing data is our own opinions by looking at examples of generated images provided in their papers. In this regard, we observed that HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset. This is evidence that the attentional model and DAMSM introduced by AttnGAN are very effective in producing high-quality images. Examples of the best results of birds and plates of vegetables generated by each model are presented in Figures FIGREF50 and FIGREF51, respectively.", "blackIn terms of inception score (IS), which is the metric that was applied to majority models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed slight improvement over its predecessor, StackGAN, for text-to-image synthesis. However, StackGAN++ did introduce a very worthy enhancement for unconditional image generation by organizing the generators and discriminators in a “tree-like” structure. This indicates that revising the structures of the discriminators and/or generators can bring a moderate level of improvement in text-to-image synthesis.", "blackIn addition, the results in Table TABREF48 also show that DM-GAN BIBREF53 has the best performance, followed by Obj-GAN BIBREF81. Notice that both DM-GAN and Obj-GAN are most recently developed methods in the field (both published in 2019), indicating that research in text to image synthesis is continuously improving the results for better visual perception and interception. Technical wise, DM-GAN BIBREF53 is a model using dynamic memory to refine fuzzy image contents initially generated from the GAN networks. A memory writing gate is used for DM-GAN to select important text information and generate images based on he selected text accordingly. On the other hand, Obj-GAN BIBREF81 focuses on object centered text-to-image synthesis. The proposed framework of Obj-GAN consists of a layout generation, including a bounding box generator and a shape generator, and an object-driven attentive image generator. The designs and advancement of DM-GAN and Obj-GAN indicate that research in text-to-image synthesis is advancing to put more emphasis on the image details and text semantics for better understanding and perception." ] ]
6389d5a152151fb05aae00b53b521c117d7b5e54
What is typical GAN architecture for each text-to-image synhesis group?
[ "Semantic Enhancement GANs: DC-GANs, MC-GAN\nResolution Enhancement GANs: StackGANs, AttnGAN, HDGAN\nDiversity Enhancement GANs: AC-GAN, TAC-GAN etc.\nMotion Enhancement GAGs: T2S, T2V, StoryGAN" ]
[ [ "In this section, we propose a taxonomy to summarize advanced GAN based text-to-image synthesis frameworks, as shown in Figure FIGREF24. The taxonomy organizes GAN frameworks into four categories, including Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANs, and Motion Enhancement GAGs. Following the proposed taxonomy, each subsection will introduce several typical frameworks and address their techniques of using GANS to solve certain aspects of the text-to-mage synthesis challenges." ] ]
7fe48939ce341212c1d801095517dc552b98e7b3
Where do they employ feature-wise sigmoid gating?
[ "gating mechanism acts upon each dimension of the word and character-level vectors" ]
[ [ "The vector gate is inspired by BIBREF11 and BIBREF12 , but is different to the former in that the gating mechanism acts upon each dimension of the word and character-level vectors, and different to the latter in that it does not rely on external sources of information for calculating the gating mechanism." ] ]
65ad17f614b7345f0077424c04c94971c831585b
Which model architecture do they use to obtain representations?
[ "BiLSTM with max pooling" ]
[ [ "To enable sentence-level classification we need to obtain a sentence representation from the word vectors INLINEFORM0 . We achieved this by using a BiLSTM with max pooling, which was shown to be a good universal sentence encoding mechanism BIBREF13 ." ] ]
323e100a6c92d3fe503f7a93b96d821408f92109
Which downstream sentence-level tasks do they evaluate on?
[ "BIBREF13 , BIBREF18" ]
[ [ "Finally, we fed these obtained word vectors to a BiLSTM with max-pooling and evaluated the final sentence representations in 11 downstream transfer tasks BIBREF13 , BIBREF18 .", "table:sentence-eval-datasets lists the sentence-level evaluation datasets used in this paper. The provided URLs correspond to the original sources, and not necessarily to the URLs where SentEval got the data from." ] ]
9f89bff89cea722debc991363f0826de945bc582
Which similarity datasets do they use?
[ "MEN, MTurk287, MTurk771, RG, RW, SimLex999, SimVerb3500, WS353, WS353R, WS353S", "WS353S, SimLex999, SimVerb3500" ]
[ [ "table:word-similarity-dataset lists the word-similarity datasets and their corresponding reference. As mentioned in subsec:datasets, all the word-similarity datasets contain pairs of words annotated with similarity or relatedness scores, although this difference is not always explicit. Below we provide some details for each.", "MEN contains 3000 annotated word pairs with integer scores ranging from 0 to 50. Words correspond to image labels appearing in the ESP-Game and MIRFLICKR-1M image datasets.", "MTurk287 contains 287 annotated pairs with scores ranging from 1.0 to 5.0. It was created from words appearing in both DBpedia and in news articles from The New York Times.", "MTurk771 contains 771 annotated pairs with scores ranging from 1.0 to 5.0, with words having synonymy, holonymy or meronymy relationships sampled from WordNet BIBREF56 .", "RG contains 65 annotated pairs with scores ranging from 0.0 to 4.0 representing “similarity of meaning”.", "RW contains 2034 pairs of words annotated with similarity scores in a scale from 0 to 10. The words included in this dataset were obtained from Wikipedia based on their frequency, and later filtered depending on their WordNet synsets, including synonymy, hyperonymy, hyponymy, holonymy and meronymy. This dataset was created with the purpose of testing how well models can represent rare and complex words.", "SimLex999 contains 999 word pairs annotated with similarity scores ranging from 0 to 10. In this case the authors explicitly considered similarity and not relatedness, addressing the shortcomings of datasets that do not, such as MEN and WS353. Words include nouns, adjectives and verbs.", "SimVerb3500 contains 3500 verb pairs annotated with similarity scores ranging from 0 to 10. Verbs were obtained from the USF free association database BIBREF66 , and VerbNet BIBREF63 . This dataset was created to address the lack of representativity of verbs in SimLex999, and the fact that, at the time of creation, the best performing models had already surpassed inter-annotator agreement in verb similarity evaluation resources. Like SimLex999, this dataset also explicitly considers similarity as opposed to relatedness.", "WS353 contains 353 word pairs annotated with similarity scores from 0 to 10.", "WS353R is a subset of WS353 containing 252 word pairs annotated with relatedness scores. This dataset was created by asking humans to classify each WS353 word pair into one of the following classes: synonyms, antonyms, identical, hyperonym-hyponym, hyponym-hyperonym, holonym-meronym, meronym-holonym, and none-of-the-above. These annotations were later used to group the pairs into: similar pairs (synonyms, antonyms, identical, hyperonym-hyponym, and hyponym-hyperonym), related pairs (holonym-meronym, meronym-holonym, and none-of-the-above with a human similarity score greater than 5), and unrelated pairs (classified as none-of-the-above with a similarity score less than or equal to 5). This dataset is composed by the union of related and unrelated pairs.", "WS353S is another subset of WS353 containing 203 word pairs annotated with similarity scores. This dataset is composed by the union of similar and unrelated pairs, as described previously." ], [ "To face the previous problem, we tested our methods in a wide variety of datasets, including some that explicitly model relatedness (WS353R), some that explicitly consider similarity (WS353S, SimLex999, SimVerb3500), and some where the distinction is not clear (MEN, MTurk287, MTurk771, RG, WS353). 
We also included the RareWords (RW) dataset for evaluating the quality of rare word representations. See appendix:datasets for a more complete description of the datasets we used." ] ]
735f58e28d84ee92024a36bc348cfac2ee114409
Are there datasets with relation tuples annotated, how big are datasets available?
[ "Yes" ]
[ [ "We focus on the task of extracting multiple tuples with overlapping entities from sentences. We choose the New York Times (NYT) corpus for our experiments. This corpus has multiple versions, and we choose the following two versions as their test dataset has significantly larger number of instances of multiple relation tuples with overlapping entities. (i) The first version is used by BIBREF6 (BIBREF6) (mentioned as NYT in their paper) and has 24 relations. We name this version as NYT24. (ii) The second version is used by BIBREF11 (BIBREF11) (mentioned as NYT10 in their paper) and has 29 relations. We name this version as NYT29. We select 10% of the original training data and use it as the validation dataset. The remaining 90% is used for training. We include statistics of the training and test datasets in Table TABREF11." ] ]
710fa8b3e74ee63d2acc20af19f95f7702b7ce5e
Which one of two proposed approaches performed better in experiments?
[ "WordDecoding (WDec) model" ]
[ [ "Among the baselines, HRL achieves significantly higher F1 scores on the two datasets. We run their model and our models five times and report the median results in Table TABREF15. Scores of other baselines in Table TABREF15 are taken from previous published papers BIBREF6, BIBREF11, BIBREF14. Our WordDecoding (WDec) model achieves F1 scores that are $3.9\\%$ and $4.1\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. Similarly, our PtrNetDecoding (PNDec) model achieves F1 scores that are $3.0\\%$ and $1.3\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. We perform a statistical significance test (t-test) under a bootstrap pairing between HRL and our models and see that the higher F1 scores achieved by our models are statistically significant ($p < 0.001$). Next, we combine the outputs of five runs of our models and five runs of HRL to build ensemble models. For a test instance, we include those tuples which are extracted in the majority ($\\ge 3$) of the five runs. This ensemble mechanism increases the precision significantly on both datasets with a small improvement in recall as well. In the ensemble scenario, compared to HRL, WDec achieves $4.2\\%$ and $3.5\\%$ higher F1 scores and PNDec achieves $4.2\\%$ and $2.9\\%$ higher F1 scores on the NYT29 and NYT24 datasets respectively." ] ]
56123dd42cf5c77fc9a88fc311ed2e1eb672126e
What is previous work authors reffer to?
[ "SPTree, Tagging, CopyR, HRL, GraphR, N-gram Attention" ]
[ [ "We compare our model with the following state-of-the-art joint entity and relation extraction models:", "(1) SPTree BIBREF4: This is an end-to-end neural entity and relation extraction model using sequence LSTM and Tree LSTM. Sequence LSTM is used to identify all the entities first and then Tree LSTM is used to find the relation between all pairs of entities.", "(2) Tagging BIBREF5: This is a neural sequence tagging model which jointly extracts the entities and relations using an LSTM encoder and an LSTM decoder. They used a Cartesian product of entity tags and relation tags to encode the entity and relation information together. This model does not work when tuples have overlapping entities.", "(3) CopyR BIBREF6: This model uses an encoder-decoder approach for joint extraction of entities and relations. It copies only the last token of an entity from the source sentence. Their best performing multi-decoder model is trained with a fixed number of decoders where each decoder extracts one tuple.", "(4) HRL BIBREF11: This model uses a reinforcement learning (RL) algorithm with two levels of hierarchy for tuple extraction. A high-level RL finds the relation and a low-level RL identifies the two entities using a sequence tagging approach. This sequence tagging approach cannot always ensure extraction of exactly two entities.", "(5) GraphR BIBREF14: This model considers each token in a sentence as a node in a graph, and edges connecting the nodes as relations between them. They use graph convolution network (GCN) to predict the relations of every edge and then filter out some of the relations.", "(6) N-gram Attention BIBREF9: This model uses an encoder-decoder approach with N-gram attention mechanism for knowledge-base completion using distantly supervised data. The encoder uses the source tokens as its vocabulary and the decoder uses the entire Wikidata BIBREF15 entity IDs and relation IDs as its vocabulary. The encoder takes the source sentence as input and the decoder outputs the two entity IDs and relation ID for every tuple. During training, it uses the mapping of entity names and their Wikidata IDs of the entire Wikidata for proper alignment. Our task of extracting relation tuples with the raw entity names from a sentence is more challenging since entity names are not of fixed length. Our more generic approach is also helpful for extracting new entities which are not present in the existing knowledge bases such as Wikidata. We use their N-gram attention mechanism in our model to compare its performance with other attention models (Table TABREF17)." ] ]
1898f999626f9a6da637bd8b4857e5eddf2fc729
How higher are F1 scores compared to previous work?
[ "WordDecoding (WDec) model achieves F1 scores that are $3.9\\%$ and $4.1\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively, PtrNetDecoding (PNDec) model achieves F1 scores that are $3.0\\%$ and $1.3\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively", "Our WordDecoding (WDec) model achieves F1 scores that are $3.9\\%$ and $4.1\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively, In the ensemble scenario, compared to HRL, WDec achieves $4.2\\%$ and $3.5\\%$ higher F1 scores" ]
[ [ "Among the baselines, HRL achieves significantly higher F1 scores on the two datasets. We run their model and our models five times and report the median results in Table TABREF15. Scores of other baselines in Table TABREF15 are taken from previous published papers BIBREF6, BIBREF11, BIBREF14. Our WordDecoding (WDec) model achieves F1 scores that are $3.9\\%$ and $4.1\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. Similarly, our PtrNetDecoding (PNDec) model achieves F1 scores that are $3.0\\%$ and $1.3\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. We perform a statistical significance test (t-test) under a bootstrap pairing between HRL and our models and see that the higher F1 scores achieved by our models are statistically significant ($p < 0.001$). Next, we combine the outputs of five runs of our models and five runs of HRL to build ensemble models. For a test instance, we include those tuples which are extracted in the majority ($\\ge 3$) of the five runs. This ensemble mechanism increases the precision significantly on both datasets with a small improvement in recall as well. In the ensemble scenario, compared to HRL, WDec achieves $4.2\\%$ and $3.5\\%$ higher F1 scores and PNDec achieves $4.2\\%$ and $2.9\\%$ higher F1 scores on the NYT29 and NYT24 datasets respectively." ], [ "Among the baselines, HRL achieves significantly higher F1 scores on the two datasets. We run their model and our models five times and report the median results in Table TABREF15. Scores of other baselines in Table TABREF15 are taken from previous published papers BIBREF6, BIBREF11, BIBREF14. Our WordDecoding (WDec) model achieves F1 scores that are $3.9\\%$ and $4.1\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. Similarly, our PtrNetDecoding (PNDec) model achieves F1 scores that are $3.0\\%$ and $1.3\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. We perform a statistical significance test (t-test) under a bootstrap pairing between HRL and our models and see that the higher F1 scores achieved by our models are statistically significant ($p < 0.001$). Next, we combine the outputs of five runs of our models and five runs of HRL to build ensemble models. For a test instance, we include those tuples which are extracted in the majority ($\\ge 3$) of the five runs. This ensemble mechanism increases the precision significantly on both datasets with a small improvement in recall as well. In the ensemble scenario, compared to HRL, WDec achieves $4.2\\%$ and $3.5\\%$ higher F1 scores and PNDec achieves $4.2\\%$ and $2.9\\%$ higher F1 scores on the NYT29 and NYT24 datasets respectively." ] ]
d32b6ac003cfe6277f8c2eebc7540605a60a3904
what were the baselines?
[ "Rank by the number of times a citation is mentioned in the document, Rank by the number of times the citation is cited in the literature (citation impact). , Rank using Google Scholar Related Articles., Rank by the TF*IDF weighted cosine similarity. , ank using a learning-to-rank model trained on text similarity rankings", "(1) Rank by the number of times a citation is mentioned in the document., (2) Rank by the number of times the citation is cited in the literature (citation impact)., (3) Rank using Google Scholar Related Articles., (4) Rank by the TF*IDF weighted cosine similarity., (5) Rank using a learning-to-rank model trained on text similarity rankings." ]
[ [ "We compare our system to a variety of baselines. (1) Rank by the number of times a citation is mentioned in the document. (2) Rank by the number of times the citation is cited in the literature (citation impact). (3) Rank using Google Scholar Related Articles. (4) Rank by the TF*IDF weighted cosine similarity. (5) Rank using a learning-to-rank model trained on text similarity rankings. The first two baseline systems are models where the values are ordered from highest to lowest to generate the ranking. The idea behind them is that the number of times a citation is mentioned in an article, or the citation impact may already be good indicators of their closeness. The text similarity model is trained using the same features and methods used by the annotation model, but trained using text similarity rankings instead of the author's judgments." ], [ "We compare our system to a variety of baselines. (1) Rank by the number of times a citation is mentioned in the document. (2) Rank by the number of times the citation is cited in the literature (citation impact). (3) Rank using Google Scholar Related Articles. (4) Rank by the TF*IDF weighted cosine similarity. (5) Rank using a learning-to-rank model trained on text similarity rankings. The first two baseline systems are models where the values are ordered from highest to lowest to generate the ranking. The idea behind them is that the number of times a citation is mentioned in an article, or the citation impact may already be good indicators of their closeness. The text similarity model is trained using the same features and methods used by the annotation model, but trained using text similarity rankings instead of the author's judgments." ] ]
c10f38ee97ed80484c1a70b8ebba9b1fb149bc91
what is the supervised model they developed?
[ "SVMRank" ]
[ [ "Support Vector Machine (SVM) ( BIBREF25 ) is a commonly used supervised classification algorithm that has shown good performance over a range of tasks. SVM can be thought of as a binary linear classifier where the goal is to maximize the size of the gap between the class-separating line and the points on either side of the line. This helps avoid over-fitting on the training data. SVMRank is a modification to SVM that assigns scores to each data point and allows the results to be ranked ( BIBREF26 ). We use SVMRank in the experiments below. SVMRank has previously been used in the task of document retrieval in ( BIBREF27 ) for a more traditional short query task and has been shown to be a top-performing system for ranking." ] ]
340501f23ddc0abe344a239193abbaaab938cc3a
what is the size of this built corpus?
[ "90 annotated documents with 5 citations each ranked 1 to 5, where 1 is least relevant and 5 is most relevant for a total of 450 annotated citations" ]
[ [ "We asked authors to rank documents by how “close to your work” they were. The definition of closeness was left to the discretion of the author. The dataset is composed of 90 annotated documents with 5 citations each ranked 1 to 5, where 1 is least relevant and 5 is most relevant for a total of 450 annotated citations." ] ]
fbb85cbd41de6d2818e77e8f8d4b91e431931faa
what crowdsourcing platform is used?
[ "asked the authors to rank by closeness five citations we selected from their paper" ]
[ [ "Given the full text of a scientific publication, we want to rank its citations according to the author's judgments. We collected recent publications from the open-access PLoS journals and asked the authors to rank by closeness five citations we selected from their paper. PLoS articles were selected because its journals cover a wide array of topics and the full text articles are available in XML format. We selected the most recent publications as previous work in crowd-sourcing annotation shows that authors' willingness to participate in an unpaid annotation task declines with the age of publication ( BIBREF23 ). We then extracted the abstract, citations, full text, authors, and corresponding author email address from each document. The titles and abstracts of the citations were retrieved from PubMed, and the cosine similarity between the PLoS abstract and the citation's abstract was calculated. We selected the top five most similar abstracts using TF*IDF weighted cosine similarity, shuffled their order, and emailed them to the corresponding author for annotation. We believe that ranking five articles (rather than the entire collection of the references) is a more manageable task for an author compared to asking them to rank all references. Because the documents to be annotated were selected based on text similarity, they also represent a challenging baseline for models based on text-similarity features. In total 416 authors were contacted, and 92 responded (22% response rate). Two responses were removed from the dataset for incomplete annotation." ] ]
1951cde612751410355610074c3c69cec94824c2
Which deep learning model performed better?
[ "autoencoders", "CNN" ]
[ [ "To evaluate the performance of the proposed approach, precision (1), recall (2), f-Measure (3), and prediction accuracy (4) have been used as a performance matrices. The experimental results are shown in Table 1, where it can be seen that autoencoders outperformed MLP and CNN outperformed autoencoders with the highest achieved accuracy of 82.6%. DISPLAYFORM0 DISPLAYFORM1" ], [ "In the literature, deep learning based automated feature extraction has been shown to outperform state-of-the-art manual feature engineering based classifiers such as Support Vector Machine (SVM), Naive Bayes (NB) or Multilayer Perceptron (MLP) etc. One of the important techniques in deep learning is the autoencoder that generally involves reducing the number of feature dimensions under consideration. The aim of dimensionality reduction is to obtain a set of principal variables to improve the performance of the approach. Similarly, CNNs have been proven to be very effective in sentiment analysis. However, little work has been carried out to exploit deep learning based feature representation for Persian sentiment analysis BIBREF8 BIBREF9 . In this paper, we present two deep learning models (deep autoencoders and CNNs) for Persian sentiment analysis. The obtained deep learning results are compared with MLP.", "To evaluate the performance of the proposed approach, precision (1), recall (2), f-Measure (3), and prediction accuracy (4) have been used as a performance matrices. The experimental results are shown in Table 1, where it can be seen that autoencoders outperformed MLP and CNN outperformed autoencoders with the highest achieved accuracy of 82.6%. DISPLAYFORM0 DISPLAYFORM1" ] ]
4140d8b5a78aea985546aa1e323de12f63d24add
By how much did the results improve?
[ "Unanswerable" ]
[ [] ]
61272b1d0338ed7708cf9ed9c63060a6a53e97a2
What was their performance on the dataset?
[ "accuracy of 82.6%" ]
[ [ "To evaluate the performance of the proposed approach, precision (1), recall (2), f-Measure (3), and prediction accuracy (4) have been used as a performance matrices. The experimental results are shown in Table 1, where it can be seen that autoencoders outperformed MLP and CNN outperformed autoencoders with the highest achieved accuracy of 82.6%. DISPLAYFORM0 DISPLAYFORM1" ] ]
53b02095ba7625d85721692fce578654f66bbdf0
How large is the dataset?
[ "Unanswerable" ]
[ [] ]
0cd0755ac458c3bafbc70e4268c1e37b87b9721b
Did the authors use crowdsourcing platforms?
[ "Yes", "Yes" ]
[ [ "We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs." ], [ "We introduce the Talk the Walk dataset, where the aim is for two agents, a “guide” and a “tourist”, to interact with each other via natural language in order to achieve a common goal: having the tourist navigate towards the correct location. The guide has access to a map and knows the target location, but does not know where the tourist is; the tourist has a 360-degree view of the world, but knows neither the target location on the map nor the way to it. The agents need to work together through communication in order to successfully solve the task. An example of the task is given in Figure FIGREF3 .", "We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs." ] ]
c1ce652085ef9a7f02cb5c363ce2b8757adbe213
How was the dataset collected?
[ "crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk)" ]
[ [ "We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs." ] ]
96be67b1729c3a91ddf0ec7d6a80f2aa75e30a30
What language do the agents talk in?
[ "English" ]
[ [ "Tourist: I can't go straight any further.", "Guide: ok. turn so that the theater is on your right.", "Guide: then go straight", "Tourist: That would be going back the way I came", "Guide: yeah. I was looking at the wrong bank", "Tourist: I'll notify when I am back at the brooks brothers, and the bank.", "Tourist: ACTION:TURNRIGHT ACTION:TURNRIGHT", "Guide: make a right when the bank is on your left", "Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNRIGHT", "Tourist: Making the right at the bank.", "Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT", "Tourist: I can't go that way.", "Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT", "Tourist: Bank is ahead of me on the right", "Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT", "Guide: turn around on that intersection", "Tourist: I can only go to the left or back the way I just came.", "Guide: you're in the right place. do you see shops on the corners?", "Guide: If you're on the corner with the bank, cross the street", "Tourist: I'm back where I started by the shop and the bank." ] ]
b85ab5f862221fac819cf2fef239bcb08b9cafc6
What evaluation metrics did the authors look at?
[ "localization accuracy" ]
[ [ "In this section, we describe the findings of various experiments. First, we analyze how much information needs to be communicated for accurate localization in the Talk The Walk environment, and find that a short random path (including actions) is necessary. Next, for emergent language, we show that the MASC architecture can achieve very high localization accuracy, significantly outperforming the baseline that does not include this mechanism. We then turn our attention to the natural language experiments, and find that localization from human utterances is much harder, reaching an accuracy level that is below communicating a single landmark observation. We show that generated utterances from a conditional language model leads to significantly better localization performance, by successfully grounding the utterance on a single landmark observation (but not yet on multiple observations and actions). Finally, we show performance of the localization baseline on the full task, which can be used for future comparisons to this work." ] ]
7e34501255b89d64b9598b409d73f96489aafe45
What data did they use?
[ " dataset on Mechanical Turk involving human perception, action and communication" ]
[ [ "Talk The Walk is the first task to bring all three aspects together: perception for the tourist observing the world, action for the tourist to navigate through the environment, and interactive dialogue for the tourist and guide to work towards their common goal. To collect grounded dialogues, we constructed a virtual 2D grid environment by manually capturing 360-views of several neighborhoods in New York City (NYC). As the main focus of our task is on interactive dialogue, we limit the difficulty of the control problem by having the tourist navigating a 2D grid via discrete actions (turning left, turning right and moving forward). Our street view environment was integrated into ParlAI BIBREF6 and used to collect a large-scale dataset on Mechanical Turk involving human perception, action and communication." ] ]
e854edcc5e9111922e6e120ae17d062427c27ec1
Do the authors report results only on English data?
[ "Unanswerable", "Unanswerable" ]
[ [], [] ]
bd6cec2ab620e67b3e0e7946fc045230e6906020
How is the accuracy of the system measured?
[ "F1 score of 0.71 for this task without any specific training, simply by choosing a threshold below which all sentence pairs are considered duplicates, distances between duplicate and non-duplicate questions using different embedding systems" ]
[ [ "The graphs in figure 1 show the distances between duplicate and non-duplicate questions using different embedding systems. The X axis shows the euclidean distance between vectors and the Y axis frequency. A perfect result would be a blue peak to the left and an entirely disconnected orange spike to the right, showing that all non-duplicate questions have a greater euclidean distance than the least similar duplicate pair of questions. As can be clearly seen in the figure above, Elmo BIBREF23 and Infersent BIBREF13 show almost no separation and therefore cannot be considered good models for this problem. A much greater disparity is shown by the Google USE models BIBREF14 , and even more for the Google USE Large model. In fact the Google USE Large achieved a F1 score of 0.71 for this task without any specific training, simply by choosing a threshold below which all sentence pairs are considered duplicates.", "In order to test whether these results generalised to our domain, we devised a test that would make use of what little data we had to evaluate. We had no original data on whether sentences were semantically similar, but we did have a corpus of articles clustered into stories. Working on the assumption that similar claims would be more likely to be in the same story, we developed an equation to judge how well our corpus of sentences was clustered, rewarding clustering which matches the article clustering and the total number of claims clustered. The precise formula is given below, where INLINEFORM0 is the proportion of claims in clusters from one story cluster, INLINEFORM1 is the proportion of claims in the correct claim cluster, where they are from the most common story cluster, and INLINEFORM2 is the number of claims placed in clusters. A,B and C are parameters to tune. INLINEFORM3" ] ]
4b0ba460ae3ba7a813f204abd16cf631b871baca
How is an incoming claim used to retrieve similar factchecked claims?
[ "text clustering on the embeddings of texts" ]
[ [ "Traditional text clustering methods, using TFIDF and some clustering algorithm, are poorly suited to the problem of clustering and comparing short texts, as they can be semantically very similar but use different words. This is a manifestation of the the data sparsity problem with Bag-of-Words (BoW) models. BIBREF16 . Dimensionality reduction methods such as Latent Dirichlet Allocation (LDA) can help solve this problem by giving a dense approximation of this sparse representation BIBREF17 . More recently, efforts in this area have used text embedding-based systems in order to capture dense representation of the texts BIBREF18 . Much of this recent work has relied on the increase of focus in word and text embeddings. Text embeddings have been an increasingly popular tool in NLP since the introduction of Word2Vec BIBREF19 , and since then the number of different embeddings has exploded. While many focus on giving a vector representation of a word, an increasing number now exist that will give a vector representation of a entire sentence or text. Following on from this work, we seek to devise a system that can run online, performing text clustering on the embeddings of texts one at a time" ] ]
63b0c93f0452d0e1e6355de1d0f3ff0fd67939fb
What existing corpus is used for comparison in these experiments?
[ "Quora duplicate question dataset BIBREF22" ]
[ [ "In order to choose an embedding, we sought a dataset to represent our problem. Although no perfect matches exist, we decided upon the Quora duplicate question dataset BIBREF22 as the best match. To study the embeddings, we computed the euclidean distance between the two questions using various embeddings, to study the distance between semantically similar and dissimilar questions." ] ]
d27f23bcd80b12f6df8e03e65f9b150444925ecf
What are the components in the factchecking algorithm?
[ "Unanswerable" ]
[ [] ]
b11ee27f3de7dd4a76a1f158dc13c2331af37d9f
What is the baseline?
[ " path ranking-based KGC (PRKGC)" ]
[ [ "RC-QED$^{\\rm E}$ can be naturally solved by path ranking-based KGC (PRKGC), where the query triplet and the sampled paths correspond to a question and derivation steps, respectively. PRKGC meets our purposes because of its glassboxness: we can trace the derivation steps of the model easily." ] ]
7aba5e4483293f5847caad144ee0791c77164917
What dataset was used in the experiment?
[ "WikiHop" ]
[ [ "Our study uses WikiHop BIBREF0, as it is an entity-based multi-hop QA dataset and has been actively used. We randomly sampled 10,000 instances from 43,738 training instances and 2,000 instances from 5,129 validation instances (i.e. 36,000 annotation tasks were published on AMT). We manually converted structured WikiHop question-answer pairs (e.g. locatedIn(Macchu Picchu, Peru)) into natural language statements (Macchu Picchu is located in Peru) using a simple conversion dictionary." ] ]
565d668947ffa6d52dad019af79289420505889b
Did they use any crowdsourcing platform?
[ "Yes", "Yes" ]
[ [ "We deployed the task on Amazon Mechanical Turk (AMT). To see how reasoning varies across workers, we hire 3 crowdworkers per one instance. We hire reliable crowdworkers with $\\ge 5,000$ HITs experiences and an approval rate of $\\ge $ 99.0%, and pay ¢20 as a reward per instance." ], [ "We deployed the task on Amazon Mechanical Turk (AMT). To see how reasoning varies across workers, we hire 3 crowdworkers per one instance. We hire reliable crowdworkers with $\\ge 5,000$ HITs experiences and an approval rate of $\\ge $ 99.0%, and pay ¢20 as a reward per instance." ] ]
d83304c70fe66ae72e78aa1d183e9f18b7484cd6
How was the dataset annotated?
[ "True, Likely (i.e. Answerable), or Unsure (i.e. Unanswerable), why they are unsure from two choices (“Not stated in the article” or “Other”), The “summary” text boxes" ]
[ [ "Given a statement and articles, workers are asked to judge whether the statement can be derived from the articles at three grades: True, Likely (i.e. Answerable), or Unsure (i.e. Unanswerable). If a worker selects Unsure, we ask workers to tell us why they are unsure from two choices (“Not stated in the article” or “Other”).", "If a worker selects True or Likely in the judgement task, we first ask which sentences in the given articles are justification explanations for a given statement, similarly to HotpotQA BIBREF2. The “summary” text boxes (i.e. NLDs) are then initialized with these selected sentences. We give a ¢6 bonus to those workers who select True or Likely. To encourage an abstraction of selected sentences, we also introduce a gamification scheme to give a bonus to those who provide shorter NLDs. Specifically, we probabilistically give another ¢14 bonus to workers according to a score they gain. The score is always shown on top of the screen, and changes according to the length of NLDs they write in real time. To discourage noisy annotations, we also warn crowdworkers that their work would be rejected for noisy submissions. We periodically run simple filtering to exclude noisy crowdworkers (e.g. workers who give more than 50 submissions with the same answers)." ] ]
e90ac9ee085dc2a9b6fe132245302bbce5f3f5ab
What is the source of the proposed dataset?
[ "Unanswerable" ]
[ [] ]
5b029ad0d20b516ec11967baaf7d2006e8d7199f
How many label options are there in the multi-label task?
[ " two labels " ]
[ [ "For the sentiment analysis, we look at our problem in a multi-label setting, our two labels being sentiment polarity and the candidate/category in consideration. We test both single-label classifiers and multi-label ones on the problem and as intuition suggests, the multi-label classifier RaKel performs better. A combination of document-embedding features BIBREF3 and topic features (essentially the document-topic probabilities) BIBREF4 is shown to give the best results. These features make sense intuitively because the document-embedding features take context of the text into account, which is important for sentiment polarity classification, and topic features take into account the topic of the tweet (who/what is it about)." ] ]
79bd2ad4cb5c630ce69d5a859ed118132cae62d7
What is the interannotator agreement of the crowd sourced users?
[ "Unanswerable" ]
[ [] ]
d3a1a53521f252f869fdae944db986931d9ffe48
Who are the experts?
[ "political pundits of the Washington Post", "the experts in the field" ]
[ [ "The prediction of outcomes of debates is very interesting in our case. Most of the results seem to match with the views of some experts such as the political pundits of the Washington Post. This implies that certain rules that were used to score the candidates in the debates by said-experts were in fact reflected by reading peoples' sentiments expressed over social media. This opens up a wide variety of learning possibilities from users' sentiments on social media, which is sometimes referred to as the wisdom of crowd." ], [ "This paper presents a study that compares the opinions of users on microblogs, which is essentially the crowd wisdom, to that of the experts in the field. Specifically, we explore three datasets: US Presidential Debates 2015-16, Grammy Awards 2013, Super Bowl 2013. We determined if the opinions of the crowd and the experts match by using the sentiments of the tweets to predict the outcomes of the debates/Grammys/Super Bowl. We observed that in most of the cases, the predictions were right indicating that crowd wisdom is indeed worth looking at and mining sentiments in microblogs is useful. In some cases where there were disagreements, however, we observed that the opinions of the experts did have some influence on the opinions of the users. We also find that the features that were most useful in our case of multi-label classification was a combination of the document-embedding and topic features." ] ]
38e11663b03ac585863742044fd15a0e875ae9ab
Who is the crowd in these experiments?
[ " peoples' sentiments expressed over social media" ]
[ [ "The prediction of outcomes of debates is very interesting in our case. Most of the results seem to match with the views of some experts such as the political pundits of the Washington Post. This implies that certain rules that were used to score the candidates in the debates by said-experts were in fact reflected by reading peoples' sentiments expressed over social media. This opens up a wide variety of learning possibilities from users' sentiments on social media, which is sometimes referred to as the wisdom of crowd." ] ]
14421b7ae4459b647033b3ccba635d4ba7bb114b
How do you establish the ground truth of who won a debate?
[ "experts in Washington Post" ]
[ [ "Trend Analysis: We also analyze some certain trends of the debates. Firstly, we look at the change in sentiments of the users towards the candidates over time (hours, days, months). This is done by computing the sentiment scores for each candidate in each of the debates and seeing how it varies over time, across debates. Secondly, we examine the effect of Washington Post on the views of the users. This is done by looking at the sentiments of the candidates (to predict winners) of a debate before and after the winners are announced by the experts in Washington Post. This way, we can see if Washington Post has had any effect on the sentiments of the users. Besides that, to study the behavior of the users, we also look at the correlation of the tweet volume with the number of viewers as well as the variation of tweet volume over time (hours, days, months) for debates.", "Next, we investigate how the sentiments of the users towards the candidates change before and after the debate. In essence, we examine how the debate and the results of the debates given by the experts affects the sentiment of the candidates. Figure FIGREF25 shows the sentiments of the users towards the candidate during the 5th Republican Debate, 15th December 2015. It can be seen that the sentiments of the users towards the candidates does indeed change over the course of two days. One particular example is that of Jeb Bush. It seems that the populace are generally prejudiced towards the candidates, which is reflected in their sentiments of the candidates on the day of the debate. The results of the Washington Post are released in the morning after the debate. One can see the winners suggested by the Washington Post in Table TABREF35. One of the winners in that debate according to them is Jeb Bush. Coincidentally, Figure FIGREF25 suggests that the sentiment of Bush has gone up one day after the debate (essentially, one day after the results given by the experts are out)." ] ]
52f7e42fe8f27d800d1189251dfec7446f0e1d3b
How much better is performance of proposed method than state-of-the-art methods in experiments?
[ "Accuracy of best proposed method KANE (LSTM+Concatenation) are 0.8011, 0.8592, 0.8605 compared to best state-of-the art method R-GCN + LR 0.7721, 0.8193, 0.8229 on three datasets respectively." ]
[ [ "Experimental results of entity classification on the test sets of all the datasets is shown in Table TABREF25. The results is clearly demonstrate that our proposed method significantly outperforms state-of-art results on accuracy for three datasets. For more in-depth performance analysis, we note: (1) Among all baselines, Path-based methods and Attribute-incorporated methods outperform three typical methods. This indicates that incorporating extra information can improve the knowledge graph embedding performance; (2) Four variants of KANE always outperform baseline methods. The main reasons why KANE works well are two fold: 1) KANE can capture high-order structural information of KGs in an efficient, explicit manner and passe these information to their neighboring; 2) KANE leverages rich information encoded in attribute triples. These rich semantic information can further improve the performance of knowledge graph; (3) The variant of KANE that use LSTM Encoder and Concatenation aggregator outperform other variants. The main reasons is that LSTM encoder can distinguish the word order and concatenation aggregator combine all embedding of multi-head attention in a higher leaver feature space, which can obtain sufficient expressive power." ] ]
00e6324ecd454f5d4b2a4b27fcf4104855ff8ee2
What further analysis is done?
[ "we use t-SNE tool BIBREF27 to visualize the learned embedding" ]
[ [ "Figure FIGREF30 shows the test accuracy with increasing epoch on DBP24K and Game30K. We can see that test accuracy first rapidly increased in the first ten iterations, but reaches a stable stages when epoch is larger than 40. Figure FIGREF31 shows test accuracy with different embedding size and training data proportions. We can note that too small embedding size or training data proportions can not generate sufficient global information. In order to further analysis the embeddings learned by our method, we use t-SNE tool BIBREF27 to visualize the learned embedding. Figure FIGREF32 shows the visualization of 256 dimensional entity's embedding on Game30K learned by KANE, R-GCN, PransE and TransE. We observe that our method can learn more discriminative entity's embedding than other other methods." ] ]
aa0d67c2a1bc222d1f2d9e5d51824352da5bb6dc
What seven state-of-the-art methods are used for comparison?
[ "TransE, TransR and TransH, PTransE, and ALL-PATHS, R-GCN BIBREF24 and KR-EAR BIBREF26" ]
[ [ "1) Typical Methods. Three typical knowledge graph embedding methods includes TransE, TransR and TransH are selected as baselines. For TransE, the dissimilarity measure is implemented with L1-norm, and relation as well as entity are replaced during negative sampling. For TransR, we directly use the source codes released in BIBREF9. In order for better performance, the replacement of relation in negative sampling is utilized according to the suggestion of author.", "2) Path-based Methods. We compare our method with two typical path-based model include PTransE, and ALL-PATHS BIBREF18. PTransE is the first method to model relation path in KG embedding task, and ALL-PATHS improve the PTransE through a dynamic programming algorithm which can incorporate all relation paths of bounded length.", "3) Attribute-incorporated Methods. Several state-of-art attribute-incorporated methods including R-GCN BIBREF24 and KR-EAR BIBREF26 are used to compare with our methods on three real datasets." ] ]
cf0085c1d7bd9bc9932424e4aba4e6812d27f727
What three datasets are used to measure performance?
[ "FB24K, DBP24K, Game30K", "Freebase BIBREF0, DBpedia BIBREF1 and a self-construction game knowledge graph" ]
[ [ "In this study, we evaluate our model on three real KG including two typical large-scale knowledge graph: Freebase BIBREF0, DBpedia BIBREF1 and a self-construction game knowledge graph. First, we adapt a dataset extracted from Freebase, i.e., FB24K, which used by BIBREF26. Then, we collect extra entities and relations that from DBpedia which that they should have at least 100 mentions BIBREF7 and they could link to the entities in the FB24K by the sameAs triples. Finally, we build a datasets named as DBP24K. In addition, we build a game datasets from our game knowledge graph, named as Game30K. The statistics of datasets are listed in Table TABREF24." ], [ "In this study, we evaluate our model on three real KG including two typical large-scale knowledge graph: Freebase BIBREF0, DBpedia BIBREF1 and a self-construction game knowledge graph. First, we adapt a dataset extracted from Freebase, i.e., FB24K, which used by BIBREF26. Then, we collect extra entities and relations that from DBpedia which that they should have at least 100 mentions BIBREF7 and they could link to the entities in the FB24K by the sameAs triples. Finally, we build a datasets named as DBP24K. In addition, we build a game datasets from our game knowledge graph, named as Game30K. The statistics of datasets are listed in Table TABREF24." ] ]
586b7470be91efe246c3507b05e30651ea6b9832
How does KANE capture both high-order structural and attribute information of KGs in an efficient, explicit and unified manner?
[ "To capture both high-order structural information of KGs, we used an attention-based embedding propagation method." ]
[ [ "The process of KANE is illustrated in Figure FIGREF2. We introduce the architecture of KANE from left to right. As shown in Figure FIGREF2, the whole triples of knowledge graph as input. The task of attribute embedding lays is embedding every value in attribute triples into a continuous vector space while preserving the semantic information. To capture both high-order structural information of KGs, we used an attention-based embedding propagation method. This method can recursively propagate the embeddings of entities from an entity's neighbors, and aggregate the neighbors with different weights. The final embedding of entities, relations and values are feed into two different deep neural network for two different tasks including link predication and entity classification." ] ]
31b20a4bab09450267dfa42884227103743e3426
What are recent works on knowedge graph embeddings authors mention?
[ "entity types or concepts BIBREF13, relations paths BIBREF17, textual descriptions BIBREF11, BIBREF12, logical rules BIBREF23, deep neural network models BIBREF24" ]
[ [ "In order to address this issue, TransH BIBREF8 models a relation as a relation-specific hyperplane together with a translation on it, allowing entities to have distinct representation in different relations. TransR BIBREF9 models entities and relations in separate spaces, i.e., entity space and relation spaces, and performs translation from entity spaces to relation spaces. TransD BIBREF22 captures the diversity of relations and entities simultaneously by defining dynamic mapping matrix. Recent attempts can be divided into two categories: (i) those which tries to incorporate additional information to further improve the performance of knowledge graph embedding, e.g., entity types or concepts BIBREF13, relations paths BIBREF17, textual descriptions BIBREF11, BIBREF12 and logical rules BIBREF23; (ii) those which tries to design more complicated strategies, e.g., deep neural network models BIBREF24." ] ]
45306b26447ea4b120655d6bb2e3636079d3d6e0
Do they report results only on English data?
[ "Yes" ]
[ [] ]
0c08af6e4feaf801185f2ec97c4da04c8b767ad6
Do the authors mention any confounds to their study?
[ "No" ]
[ [ "A key challenge is to obtain an annotated dataset. We use hashtag-based supervision so that the authors of the tweets mention if they were drunk at the time of posting a tweet. We create three datasets by using different strategies that are related to the use of hashtags. We then present SVM-based classifiers that use N-gram and stylistic features such as capitalisation, spelling errors, etc. Through our experiments, we make subtle points related to: (a) the performance of our features, (b) how our approach compares against human ability to detect drunk-texting, (c) most discriminative stylistic features, and (d) an error analysis that points to future work. To the best of our knowledge, this is a first study that shows the feasibility of text-based analysis for drunk-texting prediction." ] ]
6412e97373e8e9ae3aa20aa17abef8326dc05450
What baseline model is used?
[ "Human evaluators" ]
[ [] ]
957bda6b421ef7d2839c3cec083404ac77721f14
What stylistic features are used to detect drunk texts?
[ "LDA unigrams (Presence/Count), POS Ratio, #Named Entity Mentions, #Discourse Connectors, Spelling errors, Repeated characters, Capitalisation, Length, Emoticon (Presence/Count ) \n and Sentiment Ratio", "LDA unigrams (Presence/Count), POS Ratio, #Named Entity Mentions, #Discourse Connectors, Spelling errors, Repeated characters, Capitalization, Length, Emoticon (Presence/Count), Sentiment Ratio." ]
[ [], [] ]
368317b4fd049511e00b441c2e9550ded6607c37
Is the data acquired under distant supervision verified by humans at any stage?
[ "Yes" ]
[ [ "Using held-out dataset H, we evaluate how our system performs in comparison to humans. Three annotators, A1-A3, mark each tweet in the Dataset H as drunk or sober. Table TABREF19 shows a moderate agreement between our annotators (for example, it is 0.42 for A1 and A2). Table TABREF20 compares our classifier with humans. Our human annotators perform the task with an average accuracy of 68.8%, while our classifier (with all features) trained on Dataset 2 reaches 64%. The classifier trained on Dataset 2 is better than which is trained on Dataset 1." ] ]
b3ec918827cd22b16212265fcdd5b3eadee654ae
What hashtags are used for distant supervision?
[ "Unanswerable" ]
[ [] ]
387970ebc7ef99f302f318d047f708274c0e8f21
Do the authors equate drunk tweeting with drunk texting?
[ "Yes" ]
[ [ "The ubiquity of communication devices has made social media highly accessible. The content on these media reflects a user's day-to-day activities. This includes content created under the influence of alcohol. In popular culture, this has been referred to as `drunk-texting'. In this paper, we introduce automatic `drunk-texting prediction' as a computational task. Given a tweet, the goal is to automatically identify if it was written by a drunk user. We refer to tweets written under the influence of alcohol as `drunk tweets', and the opposite as `sober tweets'." ] ]
2fffff59e57b8dbcaefb437a6b3434fc137f813b
What corpus was the source of the OpenIE extractions?
[ "domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining", "Unanswerable" ]
[ [ "We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T).", "We consider two knowledge sources. The Sentence corpus (S) consists of domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. This corpus is used by the IR solver and also used to create the tuple KB T and on-the-fly tuples $T^{\\prime }_{qa}$ . Additionally, TableILP uses $\\sim $ 70 Curated tables (C) designed for 4th grade NY Regents exams." ], [] ]
eb95af36347ed0e0808e19963fe4d058e2ce3c9f
What is the accuracy of the proposed technique?
[ "51.7 and 51.6 on 4th and 8th grade question sets with no curated knowledge. 47.5 and 48.0 on 4th and 8th grade question sets when both solvers are given the same knowledge" ]
[ [] ]
cd1792929b9fa5dd5b1df0ae06fc6aece4c97424
Is an entity linking process used?
[ "No" ]
[ [ "Given a multiple-choice question $qa$ with question text $q$ and answer choices A= $\\lbrace a_i\\rbrace $ , we select the most relevant tuples from $T$ and $S$ as follows.", "Selecting from Tuple KB: We use an inverted index to find the 1,000 tuples that have the most overlapping tokens with question tokens $tok(qa).$ . We also filter out any tuples that overlap only with $tok(q)$ as they do not support any answer. We compute the normalized TF-IDF score treating the question, $q$ as a query and each tuple, $t$ as a document: $ &\\textit {tf}(x, q)=1\\; \\textmd {if x} \\in q ; \\textit {idf}(x) = log(1 + N/n_x) \\\\ &\\textit {tf-idf}(t, q)=\\sum _{x \\in t\\cap q} idf(x) $" ] ]
65d34041ffa4564385361979a08706b10b92ebc7
Are the OpenIE extractions all triples?
[ "No" ]
[ [ "We create an additional table in TableILP with all the tuples in $T$ . Since TableILP uses fixed-length $(subject; predicate; object)$ triples, we need to map tuples with multiple objects to this format. For each object, $O_i$ in the input Open IE tuple $(S; P; O_1; O_2 \\ldots )$ , we add a triple $(S; P; O_i)$ to this table." ] ]
e215fa142102f7f9eeda9c9eb8d2aeff7f2a33ed
What method was used to generate the OpenIE extractions?
[ "for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S, take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$" ]
[ [ "We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T)." ] ]
a8545f145d5ea2202cb321c8f93e75ad26fcf4aa
Can the method answer multi-hop questions?
[ "Yes" ]
[ [ "Its worth mentioning that TupleInf only combines parallel evidence i.e. each tuple must connect words in the question to the answer choice. For reliable multi-hop reasoning using OpenIE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the OpenIE predicates. Learning such rules for the Science domain is an open problem and potential avenue of future work." ] ]
417dabd43d6266044d38ed88dbcb5fdd7a426b22
What was the textual source to which OpenIE was applied?
[ "domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining" ]
[ [ "We consider two knowledge sources. The Sentence corpus (S) consists of domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. This corpus is used by the IR solver and also used to create the tuple KB T and on-the-fly tuples $T^{\\prime }_{qa}$ . Additionally, TableILP uses $\\sim $ 70 Curated tables (C) designed for 4th grade NY Regents exams.", "We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T)." ] ]
fed230cef7c130f6040fb04304a33bbc17ca3a36
What OpenIE method was used to generate the extractions?
[ "for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S, take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$" ]
[ [ "We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T)." ] ]
7917d44e952b58ea066dc0b485d605c9a1fe3dda
Is their method capable of multi-hop reasoning?
[ "Yes" ]
[ [ "Its worth mentioning that TupleInf only combines parallel evidence i.e. each tuple must connect words in the question to the answer choice. For reliable multi-hop reasoning using OpenIE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the OpenIE predicates. Learning such rules for the Science domain is an open problem and potential avenue of future work." ] ]
7d5ba230522df1890619dedcfb310160958223c1
Do the authors offer any hypothesis about why the dense mode outperformed the sparse one?
[ "Yes" ]
[ [ "We use two different unsupervised approaches for word sense disambiguation. The first, called `sparse model', uses a straightforward sparse vector space model, as widely used in Information Retrieval, to represent contexts and synsets. The second, called `dense model', represents synsets and contexts in a dense, low-dimensional space by averaging word embeddings.", "In the synset embeddings model approach, we follow SenseGram BIBREF14 and apply it to the synsets induced from a graph of synonyms. We transform every synset into its dense vector representation by averaging the word embeddings corresponding to each constituent word: DISPLAYFORM0", "We observe that the SenseGram-based approach for word sense disambiguation yields substantially better results in every case (Table TABREF25 ). The primary reason for that is the implicit handling of similar words due to the averaging of dense word vectors for semantically related words. Thus, we recommend using the dense approach in further studies. Although the AdaGram approach trained on a large text corpus showed better results according to the weighted average, this result does not transfer to languages with less available corpus size." ] ]
a48cc6d3d322a7b159ff40ec162a541bf74321eb
What evaluation is conducted?
[ "Word Sense Induction & Disambiguation" ]
[ [ "We conduct our experiments using the evaluation methodology of SemEval 2010 Task 14: Word Sense Induction & Disambiguation BIBREF5 . In the gold standard, each word is provided with a set of instances, i.e., the sentences containing the word. Each instance is manually annotated with the single sense identifier according to a pre-defined sense inventory. Each participating system estimates the sense labels for these ambiguous words, which can be viewed as a clustering of instances, according to sense labels. The system's clustering is compared to the gold-standard clustering for evaluation." ] ]
2bc0bb7d3688fdd2267c582ca593e2ce72718a91
Which corpus of synsets are used?
[ "Wiktionary" ]
[ [ "The following different sense inventories have been used during the evaluation:", "Watlink, a word sense network constructed automatically. It uses the synsets induced in an unsupervised way by the Watset[CWnolog, MCL] method BIBREF2 and the semantic relations from such dictionaries as Wiktionary referred as Joint INLINEFORM0 Exp INLINEFORM1 SWN in Ustalov:17:dialogue. This is the only automatically built inventory we use in the evaluation." ] ]
8c073b7ea8cb5cc54d7fecb8f4bf88c1fb621b19
What measure of semantic similarity is used?
[ "cosine similarity" ]
[ [ "In the vector space model approach, we follow the sparse context-based disambiguated method BIBREF12 , BIBREF13 . For estimating the sense of the word INLINEFORM0 in a sentence, we search for such a synset INLINEFORM1 that maximizes the cosine similarity to the sentence vector: DISPLAYFORM0" ] ]
dcb18516369c3cf9838e83168357aed6643ae1b8
Which retrieval system was used for baselines?
[ "The dataset comes with a ranked set of relevant documents. Hence the baselines do not use a retrieval system." ]
[ [ "Each dataset consists of a collection of records with one QA problem per record. For each record, we include some question text, a context document relevant to the question, a set of candidate solutions, and the correct solution. In this section, we describe how each of these fields was generated for each Quasar variant.", "The context document for each record consists of a list of ranked and scored pseudodocuments relevant to the question.", "Several baselines rely on the retrieved context to extract the answer to a question. For these, we refer to the fraction of instances for which the correct answer is present in the context as Search Accuracy. The performance of the baseline among these instances is referred to as the Reading Accuracy, and the overall performance (which is a product of the two) is referred to as the Overall Accuracy. In Figure 4 we compare how these three vary as the number of context documents is varied. Naturally, the search accuracy increases as the context size increases, however at the same time reading performance decreases since the task of extracting the answer becomes harder for longer documents. Hence, simply retrieving more documents is not sufficient – finding the few most relevant ones will allow the reader to work best." ] ]
f46a907360d75ad566620e7f6bf7746497b6e4a9
What word embeddings were used?
[ "Kyubyong Park, Edouard Grave et al BIBREF11" ]
[ [ "We use the word embeddings for Vietnamese that created by Kyubyong Park and Edouard Grave at al:", "Kyubyong Park: In his project, he uses two methods including fastText and word2vec to generate word embeddings from wikipedia database backup dumps. His word embedding is the vector of 100 dimension and it has about 10k words.", "Edouard Grave et al BIBREF11: They use fastText tool to generate word embeddings from Wikipedia. The format is the same at Kyubyong's, but their embedding is the vector of 300 dimension, and they have about 200k words" ] ]
79d999bdf8a343ce5b2739db3833661a1deab742
What type of errors were produced by the BLSTM-CNN-CRF system?
[ "No extraction, No annotation, Wrong range, Wrong tag, Wrong range and tag" ]
[ [ "Step 2: Based on the best results (BLSTM-CNN-CRF), error analysis is performed based on five types of errors (No extraction, No annotation, Wrong range, Wrong tag, Wrong range and tag), in a way similar to BIBREF10, but we analyze on both gold labels and predicted labels (more detail in figure 1 and 2)." ] ]
71d59c36225b5ee80af11d3568bdad7425f17b0c
How much better was the BLSTM-CNN-CRF than the BLSTM-CRF?
[ "Best BLSTM-CNN-CRF had F1 score 86.87 vs 86.69 of best BLSTM-CRF " ]
[ [ "Table 2 shows our experiments on two models with and without different pre-trained word embedding – KP means the Kyubyong Park’s pre-trained word embeddings and EG means Edouard Grave’s pre-trained word embeddings." ] ]
efc65e5032588da4a134d121fe50d49fe8fe5e8c
What supplemental tasks are used for multitask learning?
[ "Multitask learning is used for the task of predicting relevance of a comment on a different question to a given question, where the supplemental tasks are predicting relevance between the questions, and between the comment and the corresponding question" ]
[ [ "Automation of cQA forums can be divided into three tasks: question-comment relevance (Task A), question-question relevance (Task B), and question-external comment relevance (Task C). One might think that classic retrieval models like language models for information retrieval BIBREF0 could solve these tasks. However, a big challenge for cQA tasks is that users are used to expressing similar meanings with different words, which creates gaps when matching questions based on common words. Other challenges include informal usage of language, highly diverse content of comments, and variation in the length of both questions and comments.", "In our cQA tasks, the pair of objects are (question, question) or (question, comment), and the relationship is relevant/irrelevant. The left side of Figure 1 shows one intuitive way to predict relationships using RNNs. Parallel LSTMs encode two objects independently, and then concatenate their outputs as an input to a feed-forward neural network (FNN) with a softmax output layer for classification.", "For task C, in addition to an original question (oriQ) and an external comment (relC), the question which relC commented on is also given (relQ). To incorporate this extra information, we consider a multitask learning framework which jointly learns to predict the relationships of the three pairs (oriQ/relQ, oriQ/relC, relQ/relC)." ] ]
a30958c7123d1ad4723dcfd19d8346ccedb136d5
Is the improvement actually coming from using an RNN?
[ "No" ]
[ [] ]