Dataset schema (one record per question, given as question_id, question, answer, evidence):
  question_id : string, fixed length 40
  question    : string, length 4 to 171 characters
  answer      : sequence of answer strings
  evidence    : sequence of lists of supporting passages
acda028a21a465c984036dcbb124b7f03c490b41
How does multi-agent dual learning work?
[ "MADL BIBREF0 extends the dual learning BIBREF1, BIBREF2 framework by introducing multiple primal and dual models." ]
[ [ "The core idea of dual learning is to leverage the duality between the primal task (mapping from domain $\\mathcal {X}$ to domain $\\mathcal {Y}$) and dual task (mapping from domain $\\mathcal {Y}$ to $\\mathcal {X}$ ) to boost the performances of both tasks. MADL BIBREF0 extends the dual learning BIBREF1, BIBREF2 framework by introducing multiple primal and dual models. It was integrated into our submitted systems for German$\\leftrightarrow $English and German$\\leftrightarrow $French translations." ] ]
42af0472e6895eaf7b9392674b0d956e64e86b03
Which language directions are machine translation systems of WMT evaluated on?
[ "German$\\leftrightarrow $English, German$\\leftrightarrow $French, Chinese$\\leftrightarrow $English, English$\\rightarrow $Lithuanian, English$\\rightarrow $Finnish, and Russian$\\rightarrow $English, Lithuanian$\\rightarrow $English, Finnish$\\rightarrow $English, and English$\\rightarrow $Kazakh" ]
[ [ "We participated in the WMT19 shared news translation task in 11 translation directions. We achieved first place for 8 directions: German$\\leftrightarrow $English, German$\\leftrightarrow $French, Chinese$\\leftrightarrow $English, English$\\rightarrow $Lithuanian, English$\\rightarrow $Finnish, and Russian$\\rightarrow $English, and three other directions were placed second (ranked by teams), which included Lithuanian$\\rightarrow $English, Finnish$\\rightarrow $English, and English$\\rightarrow $Kazakh." ] ]
a85698f19a91ecd3cd3a90a93a453d2acebae1b7
Approximately how much computational cost is saved by using this model?
[ "Unanswerable" ]
[ [] ]
af073d84b8a7c968e5822c79bef34a28655886de
What improvement does the MOE model make over the SOTA on machine translation?
[ "1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3, perplexity scores are also better, On the Google Production dataset, our model achieved 1.01 higher test BLEU score" ]
[ [ "Tables TABREF42 , TABREF43 , and TABREF44 show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En INLINEFORM0 Fr and En INLINEFORM1 De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3 . The perplexity scores are also better. On the Google Production dataset, our model achieved 1.01 higher test BLEU score even after training for only one sixth of the time." ] ]
e8fcfb1412c3b30da6cbc0766152b6e11e17196c
What improvement does the MOE model make over the SOTA on language modelling?
[ "Perpexity is improved from 34.7 to 28.0." ]
[ [ "The two models achieved test perplexity of INLINEFORM0 and INLINEFORM1 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table TABREF76 . The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by INLINEFORM2 .", "In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger and experts. Details can be found in Appendix UID77 . Results of these three models form the bottom line of Figure FIGREF32 -right. Table TABREF33 compares the results of these models to the best previously-published result on this dataset . Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation." ] ]
0cd90e5b79ea426ada0203177c28812a7fc86be5
How is the correct number of experts to use decided?
[ "varied the number of experts between models" ]
[ [ "Each expert in the MoE layer is a feed forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains INLINEFORM0 parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section UID14 ) with INLINEFORM1 for the ordinary MoE layers and INLINEFORM2 at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M." ] ]
f01a88e15ef518a68d8ca2bec992f27e7a3a6add
What equations are used for the trainable gating network?
[ "DISPLAYFORM0, DISPLAYFORM0 DISPLAYFORM1" ]
[ [ "A simple choice of non-sparse gating function BIBREF17 is to multiply the input by a trainable weight matrix INLINEFORM0 and then apply the INLINEFORM1 function. DISPLAYFORM0", "We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to INLINEFORM0 (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix SECREF51 . The amount of noise per component is controlled by a second trainable weight matrix INLINEFORM1 . DISPLAYFORM0 DISPLAYFORM1" ] ]
44104668796a6ca10e2ea3ecf706541da1cec2cf
What is the difference in performance between the interpretable system (e.g. vectors and cosine distance) and LSTM with ELMo system?
[ "Accuracy of best interpretible system was 0.3945 while accuracy of LSTM-ELMo net was 0.6818." ]
[ [ "The experimental results are presented in Table TABREF4 . Diacritic swapping showed a remarkably poor performance, despite promising mentions in existing literature. This might be explained by the already mentioned feature of Wikipedia edits, which can be expected to be to some degree self-reviewed before submission. This can very well limit the number of most trivial mistakes." ] ]
bbcd77aac74989f820e84488c52f3767d0405d51
What solutions are proposed for error detection and context awareness?
[ "Unanswerable" ]
[ [] ]
6a31bd676054222faf46229fc1d283322478a020
How is PlEWi annotated?
[ "[error, correction] pairs" ]
[ [ "PlEWi BIBREF20 is an early version of WikEd BIBREF21 error corpus, containing error type annotations allowing us to select only non-word errors for evaluation. Specifically, PlEWi supplied 550,755 [error, correction] pairs, from which 298,715 were unique. The corpus contains data extracted from histories of page versions of Polish Wikipedia. An algorithm designed by the corpus author determined where the changes were correcting spelling errors, as opposed to expanding content and disagreements among Wikipedia editors." ] ]
e4d16050f0b457c93e590261732a20401def9cde
What methods are tested on PlEWi?
[ "Levenshtein distance metric BIBREF8, diacritical swapping, Levenshtein distance is used in a weighted sum to cosine distance between word vectors, ELMo-augmented LSTM" ]
[ [ "The methods that we evaluated are baselines are the ones we consider to be basic and with moderate potential of yielding particularly good results. Probably the most straightforward approach to error correction is selecting known words from a dictionary that are within the smallest edit distance from the error. We used the Levenshtein distance metric BIBREF8 implemented in Apache Lucene library BIBREF9 . It is a version of edit distance that treats deletions, insertions and replacements as adding one unit distance, without giving a special treatment to character swaps. The SGJP – Grammatical Dictionary of Polish BIBREF10 was used as the reference vocabulary.", "Another simple approach is the aforementioned diacritical swapping, which is a term that we introduce here for referring to a solution inspired by the work of BIBREF4 . Namely, from the incorrect form we try to produce all strings obtainable by either adding or removing diacritical marks from characters. We then exclude options that are not present in SGJP, and select as the correction the one within the smallest edit distance from the error. It is possible for the number of such diacritically-swapped options to become very big. For example, the token Modlin-Zegrze-Pultusk-Różan-Ostrołęka-Łomża-Osowiec (taken from PlEWi corpus of spelling errors, see below) can yield over INLINEFORM0 states with this method, such as Módłiń-Żęgrzę-Pułtuśk-Roźąń-Óśtróleką-Lómzą-Óśówięć. The actual correction here is just fixing the ł in Pułtusk. Hence we only try to correct in this way tokens that are shorter than 17 characters.", "A promising method, adapted from work on correcting texts by English language learners BIBREF11 , expands on the concept of selecting a correction nearest to the spelling error according to some notion of distance. Here, the Levenshtein distance is used in a weighted sum to cosine distance between word vectors. This is based on the observation that trained vectors models of distributional semantics contain also representations of spelling errors, if they were not pruned. Their representations tend to be similar to those of their correct counterparts. For example, the token enginir will appear in similar contexts as engineer, and therefore will be assigned a similar vector embedding.", "(applied cellwise) in order to obtain the initial setting of parameters for the main LSTM. Our ELMo-augmented LSTM is bidirectional." ] ]
b25e7137f49f77e7e67ee2f40ca585d3a377f8b5
Which specific error correction solutions have been proposed for specialized corpora in the past?
[ "spellchecking mammography reports and tweets BIBREF7 , BIBREF4" ]
[ [ "Published work on language correction for Polish dates back at least to 1970s, when simplest Levenshtein distance solutions were used for cleaning mainframe inputs BIBREF5 , BIBREF6 . Spelling correction tests described in literature have tended to focus on one approach applied to a specific corpus. Limited examples include works on spellchecking mammography reports and tweets BIBREF7 , BIBREF4 . These works emphasized the importance of tailoring correction systems to specific problems of corpora they are applied to. For example, mammography reports suffer from poor typing, which in this case is a repetitive work done in relative hurry. Tweets, on the other hand, tend to contain emoticons and neologisms that can trick solutions based on rules and dictionaries, such as LanguageTool. The latter is, by itself, fairly well suited for Polish texts, since a number of extensions to the structure of this application was inspired by problems with morphology of Polish language BIBREF3 ." ] ]
d803b782023553bbf9b36551fbc888ad189b1f29
What was the criteria for human evaluation?
[ "to judge each utterance from 1 (bad) to 3 (good) in terms of informativeness and naturalness" ]
[ [ "We conducted the human evaluation using Amazon Mechanical Turk to assess subjective quality. We recruit master level workers (who have good prior approval rates) to perform a human comparison between generated responses from two systems (which are randomly sampled from comparison systems). The workers are required to judge each utterance from 1 (bad) to 3 (good) in terms of informativeness and naturalness. Informativeness indicates the extent to which generated utterance contains all the information specified in the dialog act. Naturalness denotes whether the utterance is as natural as a human does. To reduce judgement bias, we distribute each question to three different workers. Finally, we collected in total of 5800 judges." ] ]
fc5f9604c74c9bb804064f315676520937131e17
What automatic metrics are used to measure performance of the system?
[ "BLEU scores and the slot error rate (ERR)" ]
[ [ "Following BIBREF3, BLEU scores and the slot error rate (ERR) are reported. BLEU score evaluates how natural the generated utterance is compared with human readers. ERR measures the exact matching of the slot tokens in the candidate utterances. $\\text{ERR}=(p+q)/M$, where $M$ is the total number of slots in the dialog act, and $p$, $q$ is the number of missing and redundant slots in the given realisation. For each dialog act, we generate five utterances and select the top one with the lowest ERR as the final output." ] ]
b37fd665dfa5fad43977069d5623f4490a979305
What existing methods is SC-GPT compared to?
[ "$({1})$ SC-LSTM BIBREF3, $({2})$ GPT-2 BIBREF6 , $({3})$ HDSA BIBREF7" ]
[ [ "We compare with three baseline methods. $({1})$ SC-LSTM BIBREF3 is a canonical model and a strong baseline that uses an additional dialog act vector and a reading gate to guide the utterance generation. $({2})$ GPT-2 BIBREF6 is used to directly fine-tune on the domain-specific labels, without pre-training on the large-scale corpus of (dialog act, response) pairs. $({3})$ HDSA BIBREF7 is a state-of-the-art model on MultiWOZ. It leverages dialog act structures to enable transfer in the multi-domain setting, showing superior performance than SC-LSTM." ] ]
c1f4d632da78714308dc502fe4e7b16ea6f76f81
Which language-pair had the better performance?
[ "French-English" ]
[ [] ]
749a307c3736c5b06d7b605dc228d80de36cbabe
Which datasets were used in the experiment?
[ "WMT 2019 parallel dataset, a restricted dataset containing the full TED corpus from MUST-C BIBREF10, sampled sentences from WMT 2019 dataset" ]
[ [ "Contribution: We propose a preliminary study of a generic approach allowing any model to benefit from document-level information while translating sentence pairs. The core idea is to augment source data by adding document information to each sentence of a source corpus. This document information corresponds to the belonging document of a sentence and is computed prior to training, it takes every document word into account. Our approach focuses on pre-processing and consider whole documents as long as they have defined boundaries. We conduct experiments using the Transformer base model BIBREF1. For the English-German language pair we use the full WMT 2019 parallel dataset. For the English-French language pair we use a restricted dataset containing the full TED corpus from MUST-C BIBREF10 and sampled sentences from WMT 2019 dataset. We obtain important improvements over the baseline and present evidences that this approach helps to resolve cross-sentence ambiguities." ] ]
102de97c123bb1e247efec0f1d958f8a3a86e2f6
What evaluation metrics did they use?
[ "BLEU and TER scores" ]
[ [ "We consider two different models for each language pair: the Baseline and the Document model. We evaluate them on 3 test sets and report BLEU and TER scores. All experiments are run 8 times with different seeds, we report averaged results and p-values for each experiment." ] ]
3460393d6888dd34113fa0813a1b3a1514c66aa6
Do they evaluate only on English datasets?
[ "Unanswerable" ]
[ [] ]
d491ee69db39ec65f0f6da9ec03450520389699a
What are the differences in the use of emojis between gang member and the rest of the Twitter population?
[ "32.25% of gang members in our dataset have chained together the police and the pistol emoji, compared to just 1.14% of non-gang members, only 1.71% of non-gang members have used the hundred points emoji and pistol emoji together in tweets while 53% of gang members have used them, gang members have a penchant for using just a small set of emoji symbols that convey their anger and violent behavior" ]
[ [ "Motivated by recent work involving the use of emojis by gang members BIBREF22 , we also studied if and how gang and non-gang members use emoji symbols in their tweets. Our analysis found that gang members have a penchant for using just a small set of emoji symbols that convey their anger and violent behavior through their tweets. Figure FIGREF24 illustrates the emoji distribution for the top 20 most frequent emojis used by gang member profiles in our dataset. The fuel pump emoji was the most frequently used emoji by the gang members, which is often used in the context of selling or consuming marijuana. The pistol emoji is the second most frequent in our dataset, which is often used with the guardsman emoji or the police cop emoji in an `emoji chain'. Figure FIGREF28 presents some prototypical `chaining' of emojis used by gang members. The chains may reflect their anger at law enforcement officers, as a cop emoji is often followed up with the emoji of a weapon, bomb, or explosion. We found that 32.25% of gang members in our dataset have chained together the police and the pistol emoji, compared to just 1.14% of non-gang members. Moreover, only 1.71% of non-gang members have used the hundred points emoji and pistol emoji together in tweets while 53% of gang members have used them. A variety of the angry face emoji such as devil face emoji and imp emoji were also common in gang member tweets. The frequency of each emoji symbol used across the set of user's tweets are thus considered as features for our classifier." ] ]
d3839c7acee4f9c8db0a4a475214a8dcbd0bc26f
What are the differences in the use of YouTube links between gang member and the rest of the Twitter population?
[ "76.58% of the shared links are related to hip-hop music, gangster rap, and the culture that surrounds this music genre" ]
[ [ "It has been recognized that music is a key cultural component in an urban lifestyle and that gang members often want to emulate the scenarios and activities the music conveys BIBREF7 . Our analysis confirms that the influence of gangster rap is expressed in gang members' Twitter posts. We found that 51.25% of the gang members collected have a tweet that links to a YouTube video. Following these links, a simple keyword search for the terms gangsta and hip-hop in the YouTube video description found that 76.58% of the shared links are related to hip-hop music, gangster rap, and the culture that surrounds this music genre. Moreover, this high proportion is not driven by a small number of profiles that prolifically share YouTube links; eight YouTube links are shared on average by a gang member." ] ]
a6d00f44ff8f83b6c1787e39333e759b0c3daf15
What are the differences in the use of images between gang member and the rest of the Twitter population?
[ "user holds or points weapons, is seen in a group fashion which displays a gangster culture, or is showing off graffiti, hand signs, tattoos and bulk cash" ]
[ [ "In our profile verification process, we observed that most gang member profiles portray a context representative of gang culture. Some examples of these profile pictures are shown in Figure FIGREF32 , where the user holds or points weapons, is seen in a group fashion which displays a gangster culture, or is showing off graffiti, hand signs, tattoos and bulk cash. Descriptions of these images may thus empower our classifier. Thus, we translated profile images into features with the Clarifai web service. Clarifai offers a free API to query a deep learning system that tags images with a set of scored keywords that reflect what is seen in the image. We tagged the profile image and cover image for each profile using 20 tags identified by Clarifai. Figure FIGREF36 offers the 20 most often used tags applied to gang and non-gang member profiles. Since we take all the tags returned for an image, we see common words such as people and adult coming up in the top 20 tag set. However, gang member profile images were assigned unique tags such as trigger, bullet, worship while non-gang images were uniquely tagged with beach, seashore, dawn, wildlife, sand, pet. The set of tags returned by Clarifai were thus considered as features for the classifier." ] ]
0d4aa05eb00d9dee74000ea5b21b08f693ba1e62
What are the differences in language use between gang member and the rest of the Twitter population?
[ "Although cursing is frequent in tweets, they represent just 1.15% of all words used BIBREF21 . In contrast, we found 5.72% of all words posted by gang member accounts to be classified as a curse word, gang members talk about material things with terms such as got, money, make, real, need whereas ordinary users tend to vocalize their feelings with terms such as new, like, love, know, want, look, make, us" ]
[ [ "Figure FIGREF14 summarizes the words seen most often in the gang and non-gang members' tweets as clouds. They show a clear difference in language. For example, we note that gang members more frequently use curse words in comparison to ordinary users. Although cursing is frequent in tweets, they represent just 1.15% of all words used BIBREF21 . In contrast, we found 5.72% of all words posted by gang member accounts to be classified as a curse word, which is nearly five times more than the average curse word usage on Twitter. The clouds also reflect the fact that gang members often talk about drugs and money with terms such as smoke, high, hit, and money, while ordinary users hardly speak about finances and drugs. We also noticed that gang members talk about material things with terms such as got, money, make, real, need whereas ordinary users tend to vocalize their feelings with terms such as new, like, love, know, want, look, make, us. These differences make it clear that the individual words used by gang and non-gang members will be relevant features for gang profile classification." ] ]
382bef47d316d7c12ea190ae160bf0912a0f4ffe
How is gang membership verified?
[ "Manual verification" ]
[ [ "3. Manual verification of Twitter profiles: We verified each profile discovered manually by examining the profile picture, profile background image, recent tweets, and recent pictures posted by a user. During these checks, we searched for terms, activities, and symbols that we believed could be associated with a gang. For example, profiles whose image or background included guns in a threatening way, stacks of money, showing gang hand signs and gestures, and humans holding or posing with a gun, appeared likely to be from a gang member. Such images were often identified in profiles of users who submitted tweets that contain messages of support or sadness for prisoners or recently fallen gang members, or used a high volume of threatening and intimidating slang language. Only profiles where the images, words, and tweets all suggested gang affiliation were labeled as gang affiliates and added to our dataset. Although this manual verification does have a degree of subjectivity, in practice, the images and words used by gang members on social media are so pronounced that we believe any reasonable analyst would agree that they are gang members. We found that not all the profiles collected belonged to gang members; we observed relatives and followers of gang members posting the same hashtags as in Step 1 to convey similar feelings in their profile descriptions." ] ]
32a232310babb92991c4b1b75f7aa6b4670ec447
Do the authors provide evidence that 'most' street gang members use Twitter to intimidate others?
[ "No" ]
[ [ "Street gang members have established online presences coinciding with their physical occupation of neighborhoods. The National Gang Threat Assessment Report confirms that at least tens of thousands of gang members are using social networking websites such as Twitter and video sharing websites such as YouTube in their daily life BIBREF0 . They are very active online; the 2007 National Assessment Center's survey of gang members found that 25% of individuals in gangs use the Internet for at least 4 hours a week BIBREF4 . Gang members typically use social networking sites and social media to develop online respect for their street gang BIBREF5 and to post intimidating, threatening images or videos BIBREF6 . This “Cyber-” or “Internet banging” BIBREF7 behavior is precipitated by the fact that an increasing number of young members of the society are joining gangs BIBREF8 , and these young members have become enamored with technology and with the notion of sharing information quickly and publicly through social media. Stronger police surveillance in the physical spaces where gangs congregate further encourages gang members to seek out virtual spaces such as social media to express their affiliation, to sell drugs, and to celebrate their illegal activities BIBREF9 ." ] ]
5845d1db7f819dbadb72e7df69d49c3f424b5730
What is English mixed with in the TRAC dataset?
[ "Hindi" ]
[ [ "In future, we will explore other methods to increase the understanding of deep learning models on group targeted text, although the categories are well defined we will look after if we further fine-tune the categories with more data. In the future, we are planning to pay attention on a generalized language model for code-mixed texts which can also handle Hindi-code-mixed and other multi-lingual code-mixed datasets (i.e., trying to reduce the dependencies on language-specific code-mixed resources).", "The block diagram of the proposed system is shown in Figure FIGREF22. The proposed system does not use any data augmentation techniques like BIBREF14, which is the top performer in TRAC (in English code-mixed Facebook data). This means the performance achieved by our system totally depends on the training dataset provided by TRAC. This also proves the effectiveness of our approach. Our system outperforms all the previous state of the art approaches used for aggression identification on English code-mixed TRAC data, while being trained only from Facebook comments the system outperforms other approaches on the additional Twitter test set. The remaining part of this paper is organized as follows: Section SECREF2 is an overview of related work. Section SECREF3 presents the methodology and algorithmic details. Section SECREF4 discusses the experimental evaluation of the system, and Section SECREF5 concludes this paper.", "The fine-grained definition of the aggressiveness/aggression identification is provided by the organizers of TRAC-2018 BIBREF0, BIBREF2. They have classified the aggressiveness into three labels (Overtly aggressive(OAG), Covertly aggressive(CAG), Non-aggressive(NAG)). The detailed description for each of the three labels is described as follows:", "Overtly Aggressive(OAG) - This type of aggression shows direct verbal attack pointing to the particular individual or group. For example, \"Well said sonu..you have courage to stand against dadagiri of Muslims\".", "Covertly Aggressive(CAG) - This type of aggression the attack is not direct but hidden, subtle and more indirect while being stated politely most of the times. For example, \"Dear India, stop playing with the emotions of your people for votes.\"", "Non-Aggressive(NAG) - Generally these type of text lack any kind of aggression it is basically used to state facts, wishing on occasions and polite and supportive." ] ]
e829f008d62312357e0354a9ed3b0827c91c9401
Which psycholinguistic and basic linguistic features are used?
[ "Emotion Sensor Feature, Part of Speech, Punctuation, Sentiment Analysis, Empath, TF-IDF Emoticon features" ]
[ [ "Exploiting psycho-linguistic features with basic linguistic features as meta-data. The main aim is to minimize the direct dependencies on in-depth grammatical structure of the language (i.e., to support code-mixed data). We have also included emoticons, and punctuation features with it. We use the term \"NLP Features\" to represent it in the entire paper.", "We have identified a novel combination of features which are highly effective in aggression classification when applied in addition to the features obtained from the deep learning classifier at the classification layer. We have introduced two new features in addition to the previously available features. The first one is the Emotion Sensor Feature which use a statistical model to classify the words into 7 different classes based on the sentences obtained from twitter and blogs which contain total 1,185,540 words. The second one is the collection of selected topical signal from text collected using Empath (see Table 1.)." ] ]
54fe8f05595f2d1d4a4fd77f4562eac519711fa6
How have the differences in communication styles between Twitter and Facebook increased the complexity of the problem?
[ "Systems do not perform well both in Facebook and Twitter texts" ]
[ [ "Most of the above-discussed systems either shows high performance on (a) Twitter dataset or (b) Facebook dataset (given in the TRAC-2018), but not on both English code-mixed datasets. This may be due to the text style or level of complexities of both datasets. So, we concentrated to develop a robust system for English code-mixed texts, and uni-lingual texts, which can also handle different writing styles. Our approach is based on three main ideas:" ] ]
61404466cf86a21f0c1783ce535eb39a01528ce8
What are the key differences in communication styles between Twitter and Facebook?
[ "Unanswerable" ]
[ [] ]
fbe5e513745d723aad711ceb91ce0c3c2ceb669e
What data/studies do the authors provide to support the assertion that the majority of aggressive conversations contain code-mixed languages?
[ "None" ]
[ [ "The informal setting/environment of social media often encourage multilingual speakers to switch back and forth between languages when speaking or writing. These all resulted in code-mixing and code-switching. Code-mixing refers to the use of linguistic units from different languages in a single utterance or sentence, whereas code-switching refers to the co-occurrence of speech extracts belonging to two different grammatical systemsBIBREF3. This language interchange makes the grammar more complex and thus it becomes tough to handle it by traditional algorithms. Thus the presence of high percentage of code-mixed content in social media text has increased the complexity of the aggression detection task. For example, the dataset provided by the organizers of TRAC-2018 BIBREF0, BIBREF2 is actually a code-mixed dataset." ] ]
1571e16063b53409f2d1bd6ec143fccc5b29ebb9
What is the baseline?
[ "Majority Class baseline (MC) , Random selection baseline (RAN)" ]
[ [ "Emotions have been used in many natural language processing tasks and they showed their efficiency BIBREF35. We aim at investigating their efficiency to detect false information. In addition to EIN, we created a model (Emotion-based Model) that uses emotional features only and compare it to two baselines. Our aim is to investigate if the emotional features independently can detect false news. The two baselines of this model are Majority Class baseline (MC) and the Random selection baseline (RAN)." ] ]
d71937fa5da853f7529f767730547ccfb70e5908
What datasets did they use?
[ "News Articles, Twitter" ]
[ [ "Evaluation Framework ::: Datasets ::: News Articles", "Our dataset source of news articles is described in BIBREF2. This dataset was built from two different sources, for the trusted news (real news) they sampled news articles from the English Gigaword corpus. For the false news, they collected articles from seven different unreliable news sites. These news articles include satires, hoaxes, and propagandas but not clickbaits. Since we are interested also in analyzing clickbaits, we slice a sample from an available clickbait dataset BIBREF33 that was originally collected from two sources: Wikinews articles' headlines and other online sites that are known to publish clickbaits. The satire, hoax, and propaganda news articles are considerably long (some of them reach the length of 5,000 words). This length could affect the quality of the analysis as we mentioned before. We focus on analyzing the initial part of the article. Our intuition is that it is where emotion-bearing words will be more frequent. Therefore, we shorten long news articles into a maximum length of N words (N=300). We choose the value of N based on the length of the shortest articles. Moreover, we process the dataset by removing very short articles, redundant articles or articles that do not have a textual content.", "With the complicated political and economic situations in many countries, some agendas are publishing suspicious news to affect public opinions regarding specific issues BIBREF0. The spreading of this phenomenon is increasing recently with the large usage of social media and online news sources. Many anonymous accounts in social media platforms start to appear, as well as new online news agencies without presenting a clear identity of the owner. Twitter has recently detected a campaign organized by agencies from two different countries to affect the results of the last U.S. presidential elections of 2016. The initial disclosures by Twitter have included 3,841 accounts. A similar attempt was done by Facebook, as they detected coordinated efforts to influence U.S. politics ahead of the 2018 midterm elections.", "For this dataset, we rely on a list of several Twitter accounts for each type of false information from BIBREF6. This list was created based on public resources that annotated suspicious Twitter accounts. The authors in BIBREF6 have built a dataset by collecting tweets from these accounts and they made it available. For the real news, we merge this list with another 32 Twitter accounts from BIBREF34. In this work we could not use the previous dataset and we decide to collect tweets again. For each of these accounts, we collected the last M tweets posted (M=1000). By investigating these accounts manually, we found that many tweets just contain links without textual news. Therefore, to ensure of the quality of the crawled data, we chose a high value for M (also to have enough data). After the collecting process, we processed these tweets by removing duplicated, very short tweets, and tweets without textual content. Table TABREF35 shows a summary for both datasets." ] ]
8d258899e36326183899ebc67aeb4188a86f682c
What scoring function does the model use to score triples?
[ "$ f_r(h, t) & = & \\Vert \\textbf {W}_{r,1}\\textbf {h} + \\textbf {r} - \\textbf {W}_{r,2}\\textbf {t}\\Vert _{\\ell _{1/2}} $" ]
[ [ "Let $\\mathcal {E}$ denote the set of entities and $\\mathcal {R}$ the set of relation types. For each triple $(h, r, t)$ , where $h, t \\in \\mathcal {E}$ and $r \\in \\mathcal {R}$ , the STransE model defines a score function $f_r(h, t)$ of its implausibility. Our goal is to choose $f$ such that the score $f_r(h,t)$ of a plausible triple $(h,r,t)$ is smaller than the score $f_{r^{\\prime }}(h^{\\prime },t^{\\prime })$ of an implausible triple $\\mathcal {R}$0 . We define the STransE score function $\\mathcal {R}$1 as follows:", "$ f_r(h, t) & = & \\Vert \\textbf {W}_{r,1}\\textbf {h} + \\textbf {r} - \\textbf {W}_{r,2}\\textbf {t}\\Vert _{\\ell _{1/2}} $", "using either the $\\ell _1$ or the $\\ell _2$ -norm (the choice is made using validation data; in our experiments we found that the $\\ell _1$ norm gave slightly better results). To learn the vectors and matrices we minimize the following margin-based objective function: $ \\mathcal {L} & = & \\sum _{\\begin{array}{c}(h,r,t) \\in \\mathcal {G} \\\\ (h^{\\prime },r,t^{\\prime }) \\in \\mathcal {G}^{\\prime }_{(h, r, t)}\\end{array}} [\\gamma + f_r(h, t) - f_r(h^{\\prime }, t^{\\prime })]_+ $" ] ]
955ca31999309685c1daa5cb03867971ca99ec52
What datasets are used to evaluate the model?
[ "WN18, FB15k" ]
[ [ "As we show below, STransE performs better than the SE and TransE models and other state-of-the-art link prediction models on two standard link prediction datasets WN18 and FB15k, so it can serve as a new baseline for KB completion. We expect that the STransE will also be able to serve as the basis for extended models that exploit a wider variety of information sources, just as TransE does." ] ]
9b2b063e8a9938da195c9c0d6caa3e37a4a615a8
How long did it take to train each Doc2Vec model?
[ "Unanswerable" ]
[ [] ]
ac3c88ace59bf75788370062db139f60499c2056
How much better are the results of the pmra algorithm than Doc2Vec in the human evaluation?
[ "The D2V model has been rated 80 times as \"bad relevance\" while the pmra returned only 24 times badly relevant documents." ]
[ [ "Regarding the evaluation itself, based on the three-modality scale (bad, partial or full relevance), models are clearly not equivalents (Figure FIGREF26). The D2V model has been rated 80 times as \"bad relevance\" while the pmra returned only 24 times badly relevant documents. By looking at the results ranking, the mean position for D2V was 14.09 (ranging from 13.98 for JPL to 14.20 for EL). Regarding the pmra, this average position was equal to 6.89 (ranging from 6.47 for EL to 7.23 for SJD)." ] ]
26012f57cba21ba44b9a9f7ed8b1ed9e8ee7625d
What Doc2Vec architectures other than PV-DBOW have been tried?
[ "PV-DM" ]
[ [ "Among all available parameters to tune the D2V algorithm released by Gensim, six of them were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the training used architecture (PV-DM or PV-DBOW). The hs option defines whether hierarchical softmax or negative sampling is used during the training. Finally, the vector_size parameter affects the number of dimensions composing the resulting vector." ] ]
bd26a6d5d8b68d62e1b6eaf974796f3c34a839c4
What four evaluation tasks are defined to determine what influences proximity?
[ "String length, Words co-occurrences, Stems co-occurrences, MeSH similarity" ]
[ [ "The goal here being to assess if D2V could effectively replace the related-document function on PubMed, five different document similarity evaluations were designed as seen on figure FIGREF9. These tasks were designed to cover every similarities, from the most general (the context) to the character-level similarity.", "Methods ::: Evaluation ::: String length", "To assess whether a similar length could lead to convergence of two documents, the size of the query document $D_{x}$ has been compared with the top-close document $C_{x}$ for 10,000 document randomly selected from the TeS after some pre-processing steps (stopwords and spaces were removed from both documents).", "Methods ::: Evaluation ::: Words co-occurrences", "A matrix of words co-occurrence was constructed on the total corpus from PubMed. Briefly, each document was lowered and tokenized. A matrix was filled with the number of times that two words co-occur in a single document. Then, for 5,000 documents $D_{x}$ from the TeS, all models were queried for the top-close document $C_{x}$. All possible combinations between all words $WD_{x} \\in D_{x}$ and all words $WC_{x} \\in C_{x}$ (excluding stopwords) were extracted, 500 couples were randomly selected and the number of times each of them was co-occurring was extracted from the matrix. The average value of this list was calculated, reflecting the proximity between D and C regarding their words content. This score was also calculated between each $D_{x}$ and the top-close document $C_{x}$ returned by the pmra algorithm.", "Methods ::: Evaluation ::: Stems co-occurrences", "The evaluation task explained above was also applied on 10,000 stemmed texts (using the Gensim’s PorterStemmer to only keep word’s roots). The influence of the conjugation form or other suffixes can be assessed.", "Methods ::: Evaluation ::: MeSH similarity", "It is possible to compare the ability of both pmra and D2V to bring closer articles which were indexed with common labels. To do so, 5,000 documents $D_{x}$ randomly selected from the TeS were sent to both pmra and D2V architectures, and the top-five closer articles $C_{x}$ were extracted. The following rules were then applied to each MeSH found associated with $D_{x}$ for each document $C_{x_i}$ : add 1 to the score if this MeSH term is found in both $D_{x}$ and $C_{x_i}$, add 3 if this MeSH is defined as major topic and add 1 for each qualifier in common between $D_{x}$ and Cxi regarding this particular MeSH term. Then, the mean of these five scores was calculated for both pmra and D2V." ] ]
7d4fad6367f28c67ad22487094489680c45f5062
What six parameters were optimized with grid search?
[ "window_size, alpha, sample, dm, hs, vector_size" ]
[ [ "Among all available parameters to tune the D2V algorithm released by Gensim, six of them were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the training used architecture (PV-DM or PV-DBOW). The hs option defines whether hierarchical softmax or negative sampling is used during the training. Finally, the vector_size parameter affects the number of dimensions composing the resulting vector." ] ]
3aa7173612995223a904cc0f8eef4ff203cbb860
What baseline models do they compare against?
[ "SLQA, Rusalka, HMA Model (single), TriAN (single), jiangnan (ensemble), MITRE (ensemble), TriAN (ensemble), HMA Model (ensemble)" ]
[ [] ]
acc8d9918d19c212ec256181e51292f2957b37d7
What are the differences with previous applications of neural networks for this task?
[ "This approach considers related images" ]
[ [ "One common point in all the approaches yet has been the use of only textual features available in the dataset. Our model not only incorporates textual features, modeled using BiLSTM and augmented with an attention mechanism, but also considers related images for the task." ] ]
6f2f304ef292d8bcd521936f93afeec917cbe28a
How much improvement is gained from the proposed approaches?
[ "It eliminates non-termination in some models fixing for some models up to 6% of non-termination ratio." ]
[ [ "Table TABREF44 shows that consistent nucleus and top-$k$ sampling (§SECREF28) resulted in only terminating sequences, except for a few cases that we attribute to the finite limit $L$ used to measure the non-termination ratio. The example continuations in Table TABREF46 show that the sampling tends to preserve language modeling quality on prefixes that led to termination with the baseline (first row). On prefixes that led to non-termination with the baseline (second & third rows), the quality tends to improve since the continuation now terminates. Since the model's non-$\\left<\\text{eos}\\right>$ token probabilities at each step are only modified by a multiplicative constant, the sampling process can still enter a repetitive cycle (e.g. when the constant is close to 1), though the cycle is guaranteed to eventually terminate.", "For the example decoded sequences in Table TABREF46, generation quality is similar when both the self-terminating and baseline models terminate (first row). For prefixes that led to non-termination with the baseline, the self-terminating variant can yield a finite sequence with reasonable quality (second row). This suggests that some cases of degenerate repetition BIBREF5, BIBREF10 may be attributed to inconsistency. However, in other cases the self-terminating model enters a repetitive (but finite) cycle that resembles the baseline (third row), showing that consistency does not necessarily eliminate degenerate repetition." ] ]
82fa2b99daa981fc42a882bb6db8481bdbbb9675
Is the problem of determining whether a given model would generate an infinite sequence decidable?
[ "Unanswerable" ]
[ [] ]
61fb982b2c67541725d6db76b9c710dd169b533d
Is infinite-length sequence generation a result of training with maximum likelihood?
[ "There are is a strong conjecture that it might be the reason but it is not proven." ]
[ [ "We extended the notion of consistency of a recurrent language model put forward by BIBREF16 to incorporate a decoding algorithm, and used it to analyze the discrepancy between a model and the distribution induced by a decoding algorithm. We proved that incomplete decoding is inconsistent, and proposed two methods to prevent this: consistent decoding and the self-terminating recurrent language model. Using a sequence completion task, we confirmed that empirical inconsistency occurs in practice, and that each method prevents inconsistency while maintaining the quality of generated sequences. We suspect the absence of decoding in maximum likelihood estimation as a cause behind this inconsistency, and suggest investigating sequence-level learning as an alternative in the future.", "Inconsistency may arise from the lack of decoding in solving this optimization problem. Maximum likelihood learning fits the model $p_{\\theta }$ using the data distribution, whereas a decoded sequence from the trained model follows the distribution $q_{\\mathcal {F}}$ induced by a decoding algorithm. Based on this discrepancy, we make a strong conjecture: we cannot be guaranteed to obtain a good consistent sequence generator using maximum likelihood learning and greedy decoding. Sequence-level learning, however, uses a decoding algorithm during training BIBREF25, BIBREF26. We hypothesize that sequence-level learning can result in a good sequence generator that is consistent with respect to incomplete decoding." ] ]
68edb6a483cdec669c9130c928994654f1c19839
What metrics are used in challenge?
[ "NDCG, MRR, recall@k, mean rank" ]
[ [ "For evaluation, the Visual Dialog task employs four metrics. NDCG is the primary metric of the Visual Dialog Challenge which considers multiple similar answers as correct ones. The other three are MRR, recall@k, and mean rank where they only consider the rank of a single answer. Our experiments show the scores of NDCG and non-NDCG metrics from our image-only and joint models have a trade-off relationship due to their different ability (as shown in Sec.SECREF41) in completing Visual Dialog tasks: the image-only model has a high NDCG and low non-NDCG values while the joint model has a low NDCG and high non-NDCG values." ] ]
f64531e460e0ac09b58584047b7616fdb7dd5b3f
What model was winner of the Visual Dialog challenge 2019?
[ "Unanswerable" ]
[ [] ]
cee29acec4da1b247795daa4e2e82ef8a7b25a64
What model was winner of the Visual Dialog challenge 2018?
[ "DL-61" ]
[ [ "For the evaluation on the test-standard dataset of VisDial v1.0, we try 6 image-only model ensemble and 6 consensus dropout fusion model ensemble. As shown in Table TABREF48, our two models show competitive results compared to the state-of-the-art on the Visual Dialog challenge 2018 (DL-61 was the winner of the Visual Dialog challenge 2018). Specifically, our image-only model shows much higher NDCG score (60.16). On the other hand, our consensus dropout fusion model shows more balanced results over all metrics while still outperforming on most evaluation metrics (NDCG, MRR, R@1, and R@5). Compared to results of the Visual Dialog challenge 2019, our models also show strong results. Although ReDAN+ BIBREF26 and MReaL–BDAI show higher NDCG scores, our consensus dropout fusion model shows more balanced results over metrics while still having a competitive NDCG score compared to DAN BIBREF25, with rank 3 based on NDCG metric and high balance rank based on metric average." ] ]
7e54c7751dbd50d9d14b9f8b13dc94947a46e42f
Which method for integration performs better: ensemble or consensus dropout fusion with shared parameters?
[ "ensemble model" ]
[ [ "As shown in Table TABREF46, consensus dropout fusion improves the score of NDCG by around 1.0 from the score of the joint model while still yielding comparable scores for other metrics. Unlike ensemble way, consensus dropout fusion does not require much increase in the number of model parameters.", "As also shown in Table TABREF46, the ensemble model seems to take the best results from each model. Specifically, the NDCG score of the ensemble model is comparable to that of the image-only model and the scores of other metrics are comparable to those of the image-history joint model. From this experiment, we can confirm that the two models are in complementary relation." ] ]
d3bcfcea00dec99fa26283cdd74ba565bc907632
How big is the dataset for this challenge?
[ "133,287 images" ]
[ [ "We use the VisDial v1.0 BIBREF0 dataset to train our models, where one example has an image with its caption, 9 question-answer pairs, and follow-up questions and candidate answers for each round. At round $r$, the caption and the previous question-answer pairs become conversational context. The whole dataset is split into 123,287/2,000/8,000 images for train/validation/test, respectively. Unlike the images in the train and validation sets, the images in the test set have only one follow-up question and candidate answers and their corresponding conversational context." ] ]
cdf65116a7c50edddcb115e9afd86b2b6accb8ad
What open relation extraction tasks did they experiment on?
[ "verb/preposition-based relation, nominal attribute, descriptive phrase and hyponymy relation." ]
[ [ "We first measure the utility of various components in Logician to select the optimal model, and then compare this model to the state-of-the-art methods in four types of information extraction tasks: verb/preposition-based relation, nominal attribute, descriptive phrase and hyponymy relation. The SAOKE data set is split into training set, validating set and testing set with ratios of 80%, 10%, 10%, respectively. For all algorithms involved in the experiments, the training set can be used to train the model, the validating set can be used to select an optimal model, and the testing set is used to evaluate the performance." ] ]
c8031c1629d270dedc3b0c16dcb7410524ff1bab
How is Logician different from traditional seq2seq models?
[ "restricted copy mechanism to ensure literally honestness, coverage mechanism to alleviate the under extraction and over extraction problem, and gated dependency attention mechanism to incorporate dependency information" ]
[ [ "Logician is a supervised end-to-end neural learning algorithm which transforms natural language sentences into facts in the SAOKE form. Logician is trained under the attention-based sequence-to-sequence paradigm, with three mechanisms: restricted copy mechanism to ensure literally honestness, coverage mechanism to alleviate the under extraction and over extraction problem, and gated dependency attention mechanism to incorporate dependency information. Experimental results on four types of open information extraction tasks reveal the superiority of the Logician algorithm." ] ]
8c0e8a312b85c4ffdffabeef0d29df1ef8ff7fb2
What's the size of the previous largest OpenIE dataset?
[ "3,200 sentences" ]
[ [ "Prior to the SAOKE data set, an annotated data set for OIE tasks with 3,200 sentences in 2 domains was released in BIBREF20 to evaluate OIE algorithms, in which the data set was said BIBREF20 “13 times larger than the previous largest annotated Open IE corpus”. The SAOKE data set is 16 times larger than the data set in BIBREF20 . To the best of our knowledge, SAOKE data set is the largest publicly available human labeled data set for OIE tasks. Furthermore, the data set released in BIBREF20 was generated from a QA-SRL data set BIBREF21 , which indicates that the data set only contains facts that can be discovered by SRL (Semantic Role Labeling) algorithms, and thus is biased, whereas the SAOKE data set is not biased to an algorithm. Finally, the SAOKE data set contains sentences and facts from a large number of domains." ] ]
8816333fbed2bfb1838407df9d6c084ead89751c
How is data for RTFM collected?
[ "Unanswerable" ]
[ [] ]
37e8f5851133a748c4e3e0beeef0d83883117a98
How much better is the performance of the proposed model compared to the baselines?
[ "Proposed model achive 66+-22 win rate, baseline CNN 13+-1 and baseline FiLM 32+-3 ." ]
[ [] ]
c9e9c5f443649593632656a5934026ad8ccc1712
How does the proposed model capture three-way interactions?
[ " We first encode text inputs using bidirectional LSTMs, then compute summaries using self-attention and conditional summaries using attention. We concatenate text summaries into text features, which, along with visual features, are processed through consecutive layers. In this case of a textual environment, we consider the grid of word embeddings as the visual features for . The final output is further processed by MLPs to compute a policy distribution over actions and a baseline for advantage estimation." ]
[ [ "We model interactions between observations from the environment, goal, and document using layers. We first encode text inputs using bidirectional LSTMs, then compute summaries using self-attention and conditional summaries using attention. We concatenate text summaries into text features, which, along with visual features, are processed through consecutive layers. In this case of a textual environment, we consider the grid of word embeddings as the visual features for . The final output is further processed by MLPs to compute a policy distribution over actions and a baseline for advantage estimation. Figure FIGREF18 shows the model." ] ]
4d844c9453203069363173243e409698782bac3f
Does transferring hurt the performance if the corpora are not related?
[ "Yes" ]
[ [ "We also performe another experiment to examine INIT and MULT method for original WikiQA. The F1-score for this dataset is equal to 33.73; however, the average INIT result for both SQuAD and SelQA as initializers is 30.50. In addition, the average results for MULT and ISS-MULT are 31.76 and 32.65, respectively. The result on original WikiQA indicates that all three transfer learning methods not only do not improve the results but also hurt the F1-score. Therefore, SelQA and SQuAD could not estimate a proper initial point for gradient based optimization method. Moreover, these corpora could not refine the error surface of the original WikiQA dataset during optimization for MULT and ISS-MULT method.", "These are because other datasets could not add new information to the original dataset or they apparently add some redundant information which are dissimilar to the target dataset. Although ISS-MULT tries to remove this effect and consequently the result is improved, this method is on top of MULT method, and the result is significantly based on the effectiveness of this method." ] ]
5633d93ef356aca02592bae3dfc1b3ec8fce27dc
Is accuracy the only metric they used to compare systems?
[ "No" ]
[ [ "In this paper, two main question answering tasks such as answer selection and answer triggering have been examined. In the answer triggering task, there is not a guarantee to have the correct answer among the list of answers. However, in answer selection, there is at least one correct answer among the candidates. As a result, answer triggering is a more challenging task. To report the result for answer selection, MAP and MRR are used; however, the answer triggering task is evaluated by F1-score. The result for MULT Method is reported in Table. 1." ] ]
134598831939a3ae20d177cec7033d133625a88e
How do they transfer the model?
[ "In the MULT method, two datasets are simultaneously trained, and the weights are tuned based on the inputs which come from both datasets. The hyper-parameter $\\lambda \\in (0,1)$ is calculated based on a brute-force search or using general global search. This hyper parameter is used to calculate the final cost function which is computed from the combination of the cost function of the source dataset and the target datasets. , this paper proposes to use the most relevant samples from the source dataset to train on the target dataset. One way to find the most similar samples is finding the pair-wise distance between all samples of the development set of the target dataset and source dataset., we propose using a clustering algorithm on the development set. The clustering algorithm used ihere is a hierarchical clustering algorithm. The cosine similarity is used as a criteria to cluster each question and answer. Therefore, these clusters are representative of the development set of the target dataset and the corresponding center for each cluster is representative of all the samples on that cluster. In the next step, the distance of each center is used to calculate the cosine similarity. Finally, the samples in the source dataset which are far from these centers are ignored. In other words, the outliers do not take part in transfer learning." ]
[ [ "Another way to improve this method could be to select the samples which are more relevant to the target dataset. Based on the importance of the similarity between the datasets for transfer learning in the NLP tasks, this paper proposes to use the most relevant samples from the source dataset to train on the target dataset. One way to find the most similar samples is finding the pair-wise distance between all samples of the development set of the target dataset and source dataset.", "To solve this problem, we propose using a clustering algorithm on the development set. The clustering algorithm used ihere is a hierarchical clustering algorithm. The cosine similarity is used as a criteria to cluster each question and answer. Therefore, these clusters are representative of the development set of the target dataset and the corresponding center for each cluster is representative of all the samples on that cluster. In the next step, the distance of each center is used to calculate the cosine similarity. Finally, the samples in the source dataset which are far from these centers are ignored. In other words, the outliers do not take part in transfer learning." ] ]
4bae74eb707ed71d5f438ddb3d9c2192ac490f66
Will these findings be robust through different datasets and different question answering algorithms?
[ "Yes" ]
[ [ "We evaluate a multi-task approach and three algorithms that explicitly model the task dependencies. We perform experiments on document-level variants of the SQuAD dataset BIBREF1 . The contributions for our papers are:" ] ]
c30c3e0f8450b1c914d29f41c17a22764fa078e0
What is the underlying question answering algorithm?
[ "The system extends BiDAF BIBREF4 with self-attention" ]
[ [ "We build on the state-of-the-art publicly available question answering system by docqa. The system extends BiDAF BIBREF4 with self-attention and performs well on document-level QA. We reuse all hyperparameters from docqa with the exception of number of paragraphs sampled in training: 8 instead of 4. Using more negative examples was important when learning from both fine and coarse annotations. The model uses character embeddings with dimension 50, pre-trained Glove embeddings, and hidden units for bi-directional GRU encoders with size 100. Adadelta is used for optimization for all methods. We tune two hyperparameters separately for each condition based on the held-out set: (1) $\\alpha \\in \\lbrace .01, .1, .5, 1, 5, 10, 100 \\rbrace $ , the weight of the coarse loss, and (2) the number of steps for early stopping. The training time for all methods using both coarse and fine supervision is comparable. We use Adadelta for optimization for all methods." ] ]
21656039994cab07f79e89553cbecc31ba9853d4
What datasets have this method been evaluated on?
[ "document-level variants of the SQuAD dataset " ]
[ [ "We evaluate a multi-task approach and three algorithms that explicitly model the task dependencies. We perform experiments on document-level variants of the SQuAD dataset BIBREF1 . The contributions for our papers are:" ] ]
bee74e96f2445900e7220bc27795bfe23accd0a7
Is there a machine learning approach that tries to solve same problem?
[ "Unanswerable" ]
[ [] ]
a56fbe90d5d349336f94ef034ba0d46450525d19
What DCGs are used?
[ "Author's own DCG rules are defined from scratch." ]
[ [ "convertible.pl: implementing DCG rules for 1st and 3rd steps in the three-steps conversion, as well as other rules including lexicon." ] ]
b1f2db88a6f89d0f048803e38a0a568f5ba38fc5
What else does the approach try to solve other than the 12 tenses, modal verbs and the negative form?
[ "cases of singular/plural, subject pronoun/object pronoun, etc." ]
[ [ "Moreover, this work also handles the cases of singular/plural, subject pronoun/object pronoun, etc. For instance, the pronoun “he\" is used for the subject as “he\" but is used for the object as “him\"." ] ]
cf3af2b68648fa8695e7234b6928d014e3b141f1
What is used for evaluation of this approach?
[ "Unanswerable" ]
[ [] ]
7883a52f008f3c4aabfc9f71ce05d7c4107e79bb
Is there information about performance of these conversion methods?
[ "No" ]
[ [] ]
cd9776d03fe48903e43e916385df12e1e798070a
Are there some experiments performed in the paper?
[ "No" ]
[ [] ]
1a252ffeaebdb189317aefd6c606652ba9677112
How much is performance improved by disabling attention in certain heads?
[ "disabling the first layer in the RTE task gives a significant boost, resulting in an absolute performance gain of 3.2%, this operation vary across tasks" ]
[ [ "Our experiments suggest that certain heads have a detrimental effect on the overall performance of BERT, and this trend holds for all the chosen tasks. Unexpectedly, disabling some heads leads not to a drop in accuracy, as one would expect, but to an increase in performance. This is effect is different across tasks and datasets. While disabling some heads improves the results, disabling the others hurts the results. However, it is important to note that across all tasks and datasets, disabling some heads leads to an increase in performance. The gain from disabling a single head is different for different tasks, ranging from the minimum absolute gain of 0.1% for STS-B, to the maximum of 1.2% for MRPC (see fig:disableheadsall). In fact, for some tasks, such as MRPC and RTE, disabling a random head gives, on average, an increase in performance. Furthermore, disabling a whole layer, that is, all 12 heads in a given layer, also improves the results. fig:disablelayers shows the resulting model performance on the target GLUE tasks when different layers are disabled. Notably, disabling the first layer in the RTE task gives a significant boost, resulting in an absolute performance gain of 3.2%. However, effects of this operation vary across tasks, and for QNLI and MNLI, it produces a performance drop of up to -0.2%." ] ]
da4d25dd9de09d16168788bb02ad600f5b0b3ba4
In which heads was attention disabled in the experiments?
[ "single head, disabling a whole layer, that is, all 12 heads in a given layer" ]
[ [ "Our experiments suggest that certain heads have a detrimental effect on the overall performance of BERT, and this trend holds for all the chosen tasks. Unexpectedly, disabling some heads leads not to a drop in accuracy, as one would expect, but to an increase in performance. This is effect is different across tasks and datasets. While disabling some heads improves the results, disabling the others hurts the results. However, it is important to note that across all tasks and datasets, disabling some heads leads to an increase in performance. The gain from disabling a single head is different for different tasks, ranging from the minimum absolute gain of 0.1% for STS-B, to the maximum of 1.2% for MRPC (see fig:disableheadsall). In fact, for some tasks, such as MRPC and RTE, disabling a random head gives, on average, an increase in performance. Furthermore, disabling a whole layer, that is, all 12 heads in a given layer, also improves the results. fig:disablelayers shows the resulting model performance on the target GLUE tasks when different layers are disabled. Notably, disabling the first layer in the RTE task gives a significant boost, resulting in an absolute performance gain of 3.2%. However, effects of this operation vary across tasks, and for QNLI and MNLI, it produces a performance drop of up to -0.2%." ] ]
2870fbce43a3cf6daf982f720137c008b30c60dc
What handcrafter features-of-interest are used?
[ "nouns, verbs, pronouns, subjects, objects, negation words, special BERT tokens" ]
[ [ "We tested this hypothesis by checking whether there are vertical stripe patterns corresponding to certain linguistically interpretable features, and to what extent such features are relevant for solving a given task. In particular, we investigated attention to nouns, verbs, pronouns, subjects, objects, and negation words, and special BERT tokens across the tasks." ] ]
65b579b2c62982e2ff154c8160288c2950d509f2
What subset of GLUE tasks is used?
[ "MRPC, STS-B, SST-2, QQP, RTE, QNLI, MNLI" ]
[ [ "We use the following subset of GLUE tasks BIBREF4 for fine-tuning:", "MRPC: the Microsoft Research Paraphrase Corpus BIBREF13", "STS-B: the Semantic Textual Similarity Benchmark BIBREF14", "SST-2: the Stanford Sentiment Treebank, two-way classification BIBREF15", "QQP: the Quora Question Pairs dataset", "RTE: the Recognizing Textual Entailment datasets", "QNLI: Question-answering NLI based on the Stanford Question Answering Dataset BIBREF3", "MNLI: the Multi-Genre Natural Language Inference Corpus, matched section BIBREF16" ] ]
b2c8c90041064183159cc825847c142b1309a849
Do they predict the sentiment of the review summary?
[ "No" ]
[ [ "To address this issue, we further investigate a joint encoder for review and summary, which is demonstrated in Figure FIGREF4. The model works by jointly encoding the review and the summary in a multi-layer structure, incrementally updating the representation of the review by consulting the summary representation at each layer. As shown in Figure FIGREF5, our model consists of a summary encoder, a hierarchically-refined review encoder and an output layer. The review encoder is composed of multiple attention layers, each consisting of a sequence encoding layer and an attention inference layer. Summary information is integrated into the representation of the review content at each attention layer, thus, a more abstract review representation is learned in subsequent layers based on a lower-layer representation. This mechanism allows the summary to better guide the representation of the review in a bottom-up manner for improved sentiment classification." ] ]
68e3f3908687505cb63b538e521756390c321a1c
What is the performance difference of using a generated summary vs. a user-written one?
[ "2.7 accuracy points" ]
[ [ "Table TABREF34 and Table TABREF35 show the final results. Our model outperforms all the baseline models and the top-performing models with both generated summary and golden summary, for all the three datasets. In the scenario where golden summaries are used, BiLSTM+self-attention performs the best among all the baselines, which shows that attention is a useful way to integrate summary and review information. Hard-attention receives more supervision information compared with soft-attention, by supervision signals from extractive summaries. However, it underperforms the soft attention model, which indicates that the most salient words for making sentiment classification may not strictly overlap with extractive summaries. This justifies the importance of user written or automatic-generated summary.", "Experiments ::: Datasets", "We empirically compare different methods using Amazon SNAP Review Dataset BIBREF20, which is a part of Stanford Network Analysis Project. The raw dataset consists of around 34 millions Amazon reviews in different domains, such as books, games, sports and movies. Each review mainly contains a product ID, a piece of user information, a plain text review, a review summary and an overall sentiment rating which ranges from 1 to 5. The statistics of our adopted dataset is shown in Table TABREF20. For fair comparison with previous work, we adopt the same partitions used by previous work BIBREF6, BIBREF7, which is, for each domain, the first 1000 samples are taken as the development set, the following 1000 samples as the test set, and the rest as the training set." ] ]
2f9d30e10323cf3a6c9804ecdc7d5872d8ae35e4
Which review dataset do they use?
[ "SNAP (Stanford Network Analysis Project)" ]
[ [ "We evaluate our proposed model on the SNAP (Stanford Network Analysis Project) Amazon review datasets BIBREF8, which contain not only reviews and ratings, but also golden summaries. In scenarios where there is no user-written summary for a review, we use pointer-generator network BIBREF9 to generate abstractive summaries. Empirical results show that our model significantly outperforms all strong baselines, including joint modeling, separate encoder and joint encoder methods. In addition, our model achieves new state-of-the-art performance, attaining 2.1% (with generated summary) and 4.8% (with golden summary) absolutely improvements compared to the previous best method on SNAP Amazon review benchmark.", "We empirically compare different methods using Amazon SNAP Review Dataset BIBREF20, which is a part of Stanford Network Analysis Project. The raw dataset consists of around 34 millions Amazon reviews in different domains, such as books, games, sports and movies. Each review mainly contains a product ID, a piece of user information, a plain text review, a review summary and an overall sentiment rating which ranges from 1 to 5. The statistics of our adopted dataset is shown in Table TABREF20. For fair comparison with previous work, we adopt the same partitions used by previous work BIBREF6, BIBREF7, which is, for each domain, the first 1000 samples are taken as the development set, the following 1000 samples as the test set, and the rest as the training set." ] ]
327e06e2ce09cf4c6cc521101d0aecfc745b1738
What evaluation metrics did they look at?
[ "accuracy with standard deviation" ]
[ [ "Los resultados de la evaluación se presentan en la Tabla TABREF42, en la forma de promedios normalizados entre [0,1] y de su desviación estándar $\\sigma $." ] ]
40b9f502f15e955ba8615822e6fa08cb5fd29c81
What datasets are used?
[ "Corpus 5KL, Corpus 8KF" ]
[ [ "Corpus utilizados ::: Corpus 5KL", "Este corpus fue constituido con aproximadamente 5 000 documentos (en su mayor parte libros) en español. Los documentos originales, en formatos heterogéneos, fueron procesados para crear un único documento codificado en utf8. Las frases fueron segmentadas automáticamente, usando un programa en PERL 5.0 y expresiones regulares, para obtener una frase por línea.", "Las características del corpus 5KL se encuentran en la Tabla TABREF4. Este corpus es empleado para el entrenamiento de los modelos de aprendizaje profundo (Deep Learning, Sección SECREF4).", "El corpus literario 5KL posee la ventaja de ser muy extenso y adecuado para el aprendizaje automático. Tiene sin embargo, la desventaja de que no todas las frases son necesariamente “frases literarias”. Muchas de ellas son frases de lengua general: estas frases a menudo otorgan una fluidez a la lectura y proporcionan los enlaces necesarios a las ideas expresadas en las frases literarias.", "Otra desventaja de este corpus es el ruido que contiene. El proceso de segmentación puede producir errores en la detección de fronteras de frases. También los números de página, capítulos, secciones o índices producen errores. No se realizó ningún proceso manual de verificación, por lo que a veces se introducen informaciones indeseables: copyrights, datos de la edición u otros. Estas son, sin embargo, las condiciones que presenta un corpus literario real.", "Corpus utilizados ::: Corpus 8KF", "Un corpus heterogéneo de casi 8 000 frases literarias fue constituido manualmente a partir de poemas, discursos, citas, cuentos y otras obras. Se evitaron cuidadosamente las frases de lengua general, y también aquellas demasiado cortas ($N \\le 3$ palabras) o demasiado largas ($N \\ge 30$ palabras). El vocabulario empleado es complejo y estético, además que el uso de ciertas figuras literarias como la rima, la anáfora, la metáfora y otras pueden ser observadas en estas frases.", "Las características del corpus 8KF se muestran en la Tabla TABREF6. Este corpus fue utilizado principalmente en los dos modelos generativos: modelo basado en cadenas de Markov (Sección SECREF13) y modelo basado en la generación de Texto enlatado (Canned Text, Sección SECREF15)." ] ]
ba56afe426906c4cfc414bca4c66ceb4a0a68121
What are the datasets used for the task?
[ "Datasets used are Celex (English, Dutch), Festival (Italian), OpenLexuque (French), IIT-Guwahati (Manipuri), E-Hitz (Basque)" ]
[ [ "To produce a language-agnostic syllabifier, it is crucial to test syllabification accuracy across different language families and language groupings within families. We selected six evaluation languages: English, Dutch, Italian, French, Basque, and Manipuri. These represent two language families (Indo-European, Sino-Tibetan), a language isolate thought to be unrelated to any existing language (Basque), and two different subfamilies within the Indo-European family (West Germanic, Romance). The primary constraint was the availability of syllabified datasets for training and testing. Table TABREF17 presents details of each dataset.", "Among the six languages we evaluate with, both English and Dutch are notable for the availability of rich datasets of phonetic and syllabic transcriptions. These are found in the CELEX (Dutch Centre for Lexical Information) database BIBREF16. CELEX was built jointly by the University of Nijmegen, the Institute for Dutch Lexicology in Leiden, the Max Planck Institute for Psycholinguistics in Nijmegen, and the Institute for Perception Research in Eindhoven. CELEX is maintained by the Max Planck Institute for Psycholinguistics. The CELEX database contains information on orthography, phonology, morphology, syntax and word frequency. It also contains syllabified words in Dutch and English transcribed using SAM-PA, CELEX, CPA, and DISC notations. The first three are variations of the International Phonetic Alphabet (IPA), in that each uses a standard ASCII character to represent each IPA character. DISC is different than the other three in that it maps a distinct ASCII character to each phone in the sound systems of Dutch, English, and German BIBREF38. Different phonetic transcriptions are used in different datasets. Part of the strength of our proposed syllabifier is that every transcription can be used as-is without any additional modification to the syllabifier or the input sequences. The other datasets were hand-syllabified by linguists with the exception of the IIT-Guwahat dataset and the Festival dataset. Both IIT-Guwahat and Festival were initially syllabified with a naive algorithm and then each entry was confirmed or corrected by hand." ] ]
14634943d96ea036725898ab2e652c2948bd33eb
What is the accuracy of the model for the six languages tested?
[ "Authors report their best models have following accuracy: English CELEX (98.5%), Dutch CELEX (99.47%), Festival (99.990%), OpenLexique (100%), IIT-Guwahat (95.4%), E-Hitz (99.83%)" ]
[ [ "We tested three model versions against all datasets. The model we call Base is the BiLSTM-CNN-CRF model described in Section SECREF2 with the associated hyperparameters. Another model, Small, uses the same architecture as Base but reduces the number of convolutional layers to 1, the convolutional filters to 40, the LSTM dimension $l$ to 50, and the phone embedding size $d$ to 100. We also tested a Base-Softmax model, which replaces the CRF output of the Base model with a softmax. A comparison of the results of these three models can be seen in Table TABREF25. This comparison empirically motivates the CRF output because Base almost always outperforms Base-Softmax. Of these three models, the Base model performed the best with the exception of the French and Manipuri datasets. The differences in the French results can be considered negligible because the accuracies are all near $100\\%$. The Small model performed best on Manipuri, which may suggest that reducing the number of parameters of the Base model leads to better accuracy on smaller datasets." ] ]
d71cb7f3aa585e256ca14eebdc358edfc3a9539c
Which models achieve state-of-the-art performances?
[ "CELEX (Dutch and English) - SVM-HMM\nFestival, E-Hitz and OpenLexique - Liang hyphenation\nIIT-Guwahat - Entropy CRF" ]
[ [ "For each dataset used to evaluate the proposed model, we compare our results with published accuracies of existing syllabification systems. Table TABREF21 shows the performance of well known and state of the art syllabifiers for each dataset. Liang's hyphenation algorithm is commonly known for its usage in . The patgen program was used to learn the rules of syllable boundaries BIBREF39. What we call Entropy CRF is a method particular to Manipuri; a rule-based component estimates the entropy of phones and phone clusters while a data-driven CRF component treats syllabification as a sequence modeling task BIBREF35." ] ]
f6556d2a8b42b133eaa361f562745edbe56c0b51
Is the LSTM bidirectional?
[ "Yes" ]
[ [ "We use the LSTM network as follows. The $x$ vector is fed through the LSTM network which outputs a vector $\\overrightarrow{h_i}$ for each time step $i$ from 0 to $n-1$. This is the forward LSTM. As we have access to the complete vector $x$, we can process a backward LSTM as well. This is done by computing a vector $\\overleftarrow{h_i}$ for each time step $i$ from $n-1$ to 0. Finally, we concatenate the backward LSTM with the forward LSTM:", "Both $\\overrightarrow{h_i}$ and $\\overleftarrow{h_i}$ have a dimension of $l$, which is an optimized hyperparameter. The BiLSTM output $h$ thus has dimension $2l\\times n$." ] ]
def3d623578bf84139d920886aa3bd6cdaaa7c41
What are the three languages studied in the paper?
[ "Arabic, Czech and Turkish" ]
[ [ "In this paper, we explore the benefit of explicitly modeling variations in the surface forms of words using methods from deep latent variable modeling in order to improve the translation accuracy in low-resource and morphologically-rich languages. Latent variable models allow us to inject inductive biases relevant to the task, which, in our case, is word formation, and we believe that follows a certain hierarchical procedure. Our model translates words one character at a time based on word representations learned compositionally from sub-lexical components, which are parameterized by a hierarchical latent variable model mimicking the process of morphological inflection, consisting of a continuous-space dense vector capturing the lexical semantics, and a set of (approximately) discrete features, representing the morphosyntactic role of the word in a given sentence. Each word representation during decoding is reformulated based on the shared latent morphological features, aiding in learning more reliable representations of words under sparse settings by generalizing across their different surface forms. We evaluate our method in translating English into three morphologically-rich languages each with a distinct morphological typology: Arabic, Czech and Turkish, and show that our model is able to obtain better translation accuracy and generalization capacity than conventional approaches to open-vocabulary NMT." ] ]
d51069595f67a3a53c044c8a37bae23facbfa45d
Do they use pretrained models as part of their parser?
[ "Yes" ]
[ [ "As a classifier, we use a feed-forward neural network with two hidden layers of 200 tanh units and learning rate set to 0.1, with linear decaying. The input to the network consists of the concatenation of embeddings for words, POS tags and Stanford parser dependencies, one-hot vectors for named entities and additional sparse features, extracted from the current configuration of the transition system; this is reported in more details in Table TABREF27 . The embeddings for words and POS tags were pre-trained on a large unannotated corpus consisting of the first 1 billion characters from Wikipedia. For lexical information, we also extract the leftmost (in the order of the aligned words) child (c), leftmost parent (p) and leftmost grandchild (cc). Leftmost and rightmost items are common features for transition-based parsers BIBREF17 , BIBREF18 but we found only leftmost to be helpful in our case. All POS tags, dependencies and named entities are generated using Stanford CoreNLP BIBREF19 . The accuracy of this classifier on the development set is 84%." ] ]
1a6e2bd41ee43df83fef2a1c1941e6f95a619ae8
Which subtasks do they evaluate on?
[ " entity recognition, semantic role labeling and co-reference resolution" ]
[ [ "Semantic parsing aims to solve the problem of canonicalizing language and representing its meaning: given an input sentence, it aims to extract a semantic representation of that sentence. Abstract meaning representation BIBREF0 , or AMR for short, allows us to do that with the inclusion of most of the shallow-semantic natural language processing (NLP) tasks that are usually addressed separately, such as named entity recognition, semantic role labeling and co-reference resolution. AMR is partially motivated by the need to provide the NLP community with a single dataset that includes basic disambiguation information, instead of having to rely on different datasets for each disambiguation problem. The annotation process is straightforward, enabling the development of large datasets. Alternative semantic representations have been developed and studied, such as CCG BIBREF1 , BIBREF2 and UCCA BIBREF3 ." ] ]
e6c163f80a11bd057bbd0b6e1451ac82edddc78d
Do they test their approach on large-resource tasks?
[ "Yes" ]
[ [ "We first describe our corpus collection. Table. TABREF3 lists all corpora we used in the experiments. There are 16 corpora from 10 languages. To increase the variety of corpus, we selected 4 English corpora and 4 Mandarin corpora in addition to the low resource language corpora. As the target of this experiment is low resource speech recognition, we only randomly select 100,000 utterances even if there are more in each corpus. All corpora are available in LDC, voxforge, openSLR or other public websites. Each corpus is manually assigned one domain based on its speech style. Specifically, the domain candidates are telephone, read and broadcast." ] ]
6adfa9eee76b96953a76c03356bf41d8a9378851
By how much do they, on average, outperform the baseline multilingual model on 16 low-resource tasks?
[ "1.6% lower phone error rate on average" ]
[ [ "To evaluate our sampling strategy, we compare it with the pretrained model and fine-tuned model on 16 different corpora. The results show that our approach outperforms those baselines on all corpora: it achieves 1.6% lower phone error rate on average. Additionally, we demonstrate that our corpus-level embeddings are able to capture the characteristics of each corpus, especially the language and domain information. The main contributions of this paper are as follows:" ] ]
450a359d117bcfa2de4ffd987f787945f25b3b25
How do they compute corpus-level embeddings?
[ "First, the embedding matrix INLINEFORM4 for all corpora is initialized, during the training phase, INLINEFORM9 can be used to bias the input feature, Next, we apply the language specific softmax to compute logits INLINEFORM4 and optimize them with the CTC objective" ]
[ [ "Corpus Embedding", "Suppose that INLINEFORM0 is the target low-resource corpus, we are interested in optimizing the acoustic model with a much larger training corpora set INLINEFORM1 where INLINEFORM2 is the number of corpora and INLINEFORM3 . Each corpus INLINEFORM4 is a collection of INLINEFORM5 pairs where INLINEFORM6 is the input features and INLINEFORM7 is its target.", "Our purpose here is to compute the embedding INLINEFORM0 for each corpus INLINEFORM1 where INLINEFORM2 is expected to encode information about its corpus INLINEFORM3 . Those embeddings can be jointly trained with the standard multilingual model BIBREF4 . First, the embedding matrix INLINEFORM4 for all corpora is initialized, the INLINEFORM5 -th row of INLINEFORM6 is corresponding to the embedding INLINEFORM7 of the corpus INLINEFORM8 . Next, during the training phase, INLINEFORM9 can be used to bias the input feature INLINEFORM10 as follows. DISPLAYFORM0", "where INLINEFORM0 is an utterance sampled randomly from INLINEFORM1 , INLINEFORM2 is its hidden features, INLINEFORM3 is the parameter of the acoustic model and Encoder is the stacked bidirectional LSTM as shown in Figure. FIGREF5 . Next, we apply the language specific softmax to compute logits INLINEFORM4 and optimize them with the CTC objective BIBREF29 . The embedding matrix INLINEFORM5 can be optimized together with the model during the training process." ] ]
70f84c73172211186de1a27b98f5f5ae25a94e55
Which dataset do they use?
[ "Stanford Sentiment Treebank (SST) BIBREF15 and AG News BIBREF16" ]
[ [ "In this paper, we study verifiable robustness, i.e., providing a certificate that for a given network and test input, no attack or perturbation under the specification can change predictions, using the example of text classification tasks, Stanford Sentiment Treebank (SST) BIBREF15 and AG News BIBREF16. The specification against which we verify is that a text classification model should preserve its prediction under character (or synonym) substitutions in a character (or word) based model. We propose modeling these input perturbations as a simplex and then using Interval Bound Propagation (IBP) BIBREF17, BIBREF18, BIBREF19 to compute worst case bounds on specification satisfaction, as illustrated in Figure FIGREF1. Since these bounds can be computed efficiently, we can furthermore derive an auxiliary objective for models to become verifiable. The resulting classifiers are efficiently verifiable and improve robustness on adversarial examples, while maintaining comparable performance in terms of nominal test accuracy." ] ]
10ddc5caf36fe9d7438eb5a3936e24580c4ffe6a
Which competitive relational classification models do they test?
[ "For relation prediction they test TransE and for relation extraction they test position aware neural sequence model" ]
[ [ "In this section, we consider two kinds of relational classification tasks: (1) relation prediction and (2) relation extraction. Relation prediction aims at predicting the relationship between entities with a given set of triples as training data; while relation extraction aims at extracting the relationship between two entities in a sentence.", "We hope to design a simple and clear experiment setup to conduct error analysis for relational prediction. Therefore, we consider a typical method TransE BIBREF3 as the subject as well as FB15K BIBREF3 as the dataset. TransE embeds entities and relations as vectors, and train these embeddings by minimizing DISPLAYFORM0", "For relation extraction, we consider the supervised relation extraction setting and TACRED dataset BIBREF10 . As for the subject model, we use the best model on TACRED dataset — position-aware neural sequence model. This method first passes the sentence into an LSTM and then calculate an attention sum of the hidden states in the LSTM by taking positional features into account. This simple and effective method achieves the best in TACRED dataset." ] ]
29571867fe00346418b1ec36c3b7685f035e22ce
Which tasks do they apply their method to?
[ "relation prediction, relation extraction, Open IE" ]
[ [ "In this section, we will show that our defined similarity quantification could help Open IE by identifying redundant relations. To be specific, we set a toy experiment to remove redundant relations in KBs for a preliminary comparison (sec:toy-experiment). Then, we evaluate our model and baselines on the real-world dataset extracted by Open IE methods (sec:real-experiment). Considering the existing evaluation metric for Open IE and Open RE rely on either labor-intensive annotations or distantly supervised annotations, we propose a metric approximating recall and precision evaluation based on operable human annotations for balancing both efficiency and accuracy.", "In this section, we consider two kinds of relational classification tasks: (1) relation prediction and (2) relation extraction. Relation prediction aims at predicting the relationship between entities with a given set of triples as training data; while relation extraction aims at extracting the relationship between two entities in a sentence." ] ]
1a678d081f97531d54b7122254301c20b3531198
Which knowledge bases do they use?
[ "Wikidata, ReVerb, FB15K, TACRED" ]
[ [ "We show the statistics of the dataset we use in tab:statistics, and the construction procedures will be introduced in this section.", "In Wikidata BIBREF8 , facts can be described as (Head item/property, Property, Tail item/property). To construct a dataset suitable for our task, we only consider the facts whose head entity and tail entity are both items. We first choose the most common 202 relations and 120000 entities from Wikidata as our initial data. Considering that the facts containing the two most frequently appearing relations (P2860: cites, and P31: instance of) occupy half of the initial data, we drop the two relations to downsize the dataset and make the dataset more balanced. Finally, we keep the triples whose head and tail both come from the selected 120000 entities as well as its relation comes from the remaining 200 relations.", "ReVerb BIBREF9 is a program that automatically identifies and extracts binary relationships from English sentences. We use the extractions from running ReVerb on Wikipedia. We only keep the relations appear more than 10 times and their corresponding triples to construct our dataset.", "FB15K BIBREF3 is a subset of freebase. TACRED BIBREF10 is a large supervised relation extraction dataset obtained via crowdsourcing. We directly use these two dataset, no extra processing steps were applied." ] ]
b9f2a30f5ef664ff845d860cf4bfc2afb0a46e5a
How do they gather human judgements for similarity between relations?
[ "By assessing similarity of 360 pairs of relations from a subset of Wikidata using an integer similarity score from 0 to 4" ]
[ [ "Human Judgments", "Following BIBREF11 , BIBREF12 and the vast amount of previous work on semantic similarity, we ask nine undergraduate subjects to assess the similarity of 360 pairs of relations from a subset of Wikidata BIBREF8 that are chosen to cover from high to low levels of similarity. In our experiment, subjects were asked to rate an integer similarity score from 0 (no similarity) to 4 (perfectly the same) for each pair. The inter-subject correlation, estimated by leaving-one-out method BIBREF13 , is r = INLINEFORM0 , standard deviation = INLINEFORM1 . This important reference value (marked in fig:correlation) could be seen as the highest expected performance for machines BIBREF12 ." ] ]
3513682d4ee2e64725b956c489cd5b5995a6acf2
Which sampling method do they use to approximate similarity between the conditional probability distributions over entity pairs?
[ "monte-carlo, sequential sampling" ]
[ [ "Just as introduced in sec:introduction, it is intractable to compute similarity exactly, as involving INLINEFORM0 computation. Hence, we consider the monte-carlo approximation: DISPLAYFORM0", "where INLINEFORM0 is a list of entity pairs sampled from INLINEFORM1 . We use sequential sampling to gain INLINEFORM6 , which means we first sample INLINEFORM7 given INLINEFORM8 from INLINEFORM9 , and then sample INLINEFORM10 given INLINEFORM11 and INLINEFORM12 from INLINEFORM13 ." ] ]
30b5e5293001f65d2fb9e4d1fdf4dc230e8cf320
What text classification task is considered?
[ "To classify a text as belonging to one of the ten possible classes." ]
[ [ "We trained word embeddings and a uni-directional GRU connected to a dense layer end-to-end for text classification on a set of scraped tweets using cross-entropy as the loss function. End-to-end training was selected to impose as few heuristic constraints on the system as possible. Each tweet was tokenized using NLTK TweetTokenizer and classified as one of 10 potential accounts from which it may have originated. The accounts were chosen based on the distinct topics each is known to typically tweet about. Tokens that occurred fewer than 5 times were disregarded in the model. The model was trained on 22106 tweets over 10 epochs, while 5526 were reserved for validation and testing sets (2763 each). The network demonstrated an insensitivity to the initialization of the hidden state, so, for algebraic considerations, INLINEFORM0 was chosen for hidden dimension of INLINEFORM1 . A graph of the network is shown in Fig.( FIGREF13 )." ] ]
993b896771c31f3478f28112a7335e7be9d03f21
What novel class of recurrent-like networks is proposed?
[ "A network, whose learned functions satisfy a certain equation. The network contains RNN cells with either nested internal memories or dependencies that extend temporally beyond the immediately previous hidden state." ]
[ [ "First, we propose a class of recurrent-like neural networks for NLP tasks that satisfy the differential equation DISPLAYFORM0", "where DISPLAYFORM0", "and where INLINEFORM0 and INLINEFORM1 are learned functions. INLINEFORM2 corresponds to traditional RNNs, with INLINEFORM3 . For INLINEFORM4 , this takes the form of RNN cells with either nested internal memories or dependencies that extend temporally beyond the immediately previous hidden state. In particular, using INLINEFORM5 for sentence generation is the topic of a manuscript presently in preparation." ] ]
dee116df92f9f92d9a67ac4d30e32822c22158a6
Is there a formal proof that the RNNs form a representation of the group?
[ "No" ]
[ [ "Second, we propose embedding schemes that explicitly embed words as elements of a Lie group. In practice, these embedding schemes would involve representing words as constrained matrices, and optimizing the elements, subject to the constraints, according to a loss function constructed from invariants of the matrices, and then applying the matrix log to obtain Lie vectors. A prototypical implementation, in which the words are assumed to be in the fundamental representation of the special orthogonal group, INLINEFORM0 , and are conditioned on losses sensitive to the relative actions of words, is the subject of another manuscript presently in preparation." ] ]
94bee0c58976b58b4fef9e0adf6856fe917232e5
How much bigger is Switchboard-2000 than Switchboard-300 database?
[ "Switchboard-2000 contains 1700 more hours of speech data." ]
[ [ "This study focuses on Switchboard-300, a standard 300-hour English conversational speech recognition task. Our acoustic and text data preparation follows the Kaldi BIBREF29 s5c recipe. Our attention based seq2seq model is similar to BIBREF30, BIBREF31 and follows the structure of BIBREF32.", "As a contrast to our best results on Switchboard-300, we also train a seq2seq model on the 2000-hour Switchboard+Fisher data. This model consists of 10 encoder layers, and is trained for only 50 epochs. Our overall results on the Hub5'00 and other evaluation sets are summarized in Table TABREF14. The results in Fig. FIGREF12 and Table TABREF14 show that adding more training data greatly improves the system, by around 30% relative in some cases. For comparison with others, the 2000-hour system reaches 8.7% and 7.4% WER on rt02 and rt04. We observe that the regularization techniques, which are extremely important on the 300h setup, are still beneficial but have a significantly smaller effect." ] ]
7efbe48e84894971d7cd307faf5f6dae9d38da31
How big is Switchboard-300 database?
[ "300-hour English conversational speech" ]
[ [ "This study focuses on Switchboard-300, a standard 300-hour English conversational speech recognition task. Our acoustic and text data preparation follows the Kaldi BIBREF29 s5c recipe. Our attention based seq2seq model is similar to BIBREF30, BIBREF31 and follows the structure of BIBREF32." ] ]
7f452eb145d486c15ac4d1107fc914e48ebba60f
What crowdsourcing platform is used for data collection and data validation?
[ "the Common Voice website, iPhone app" ]
[ [ "The data presented in this paper was collected and validated via Mozilla's Common Voice initiative. Using either the Common Voice website or iPhone app, contributors record their voice by reading sentences displayed on the screen (see Figure (FIGREF5)). The recordings are later verified by other contributors using a simple voting system. Shown in Figure (FIGREF6), this validation interface has contributors mark $<$audio,transcript$>$ pairs as being either correct (up-vote) or incorrect (down-vote)." ] ]
bb71a638668a21c2d446b44cbf51676c839658f7
How is validation of the data performed?
[ "A maximum of three contributors will listen to any audio clip. If an $<$audio,transcript$>$ pair first receives two up-votes, then the clip is marked as valid. If instead the clip first receives two down-votes, then it is marked as invalid." ]
[ [ "A maximum of three contributors will listen to any audio clip. If an $<$audio,transcript$>$ pair first receives two up-votes, then the clip is marked as valid. If instead the clip first receives two down-votes, then it is marked as invalid. A contributor may switch between recording and validation as they wish.", "Only clips marked as valid are included in the official training, development, and testing sets for each language. Clips which did not recieve enough votes to be validated or invalidated by the time of release are released as “other”. The train, test, and development sets are bucketed such that any given speaker may appear in only one. This ensures that contributors seen at train time are not seen at test time, which would skew results. Additionally, repetitions of text sentences are removed from the train, test, and development sets of the corpus." ] ]
5fa464a158dc8abf7cef8ca7d42a7080670c1edd
Is audio data per language balanced in dataset?
[ "No" ]
[ [ "The data presented in Table (TABREF12) shows the currently available data. Each of the released languages is available for individual download as a compressed directory from the Mozilla Common Voice website. The directory contains six files with Tab-Separated Values (i.e. TSV files), and a single clips subdirectory which contains all of the audio data. Each of the six TSV files represents a different segment of the voice data, with all six having the following column headers: [client_id, path, sentence, up_votes, down_votes, age, gender, accent]. The first three columns refer to an anonymized ID for the speaker, the location of the audio file, and the text that was read. The next two columns contain information on how listeners judged the $<$audio,transcript$>$ pair. The last three columns represent demographic data which was optionally self-reported by the speaker of the audio.", "We made dataset splits (c.f. Table (TABREF19)) such that one speaker's recordings are only present in one data split. This allows us to make a fair evaluation of speaker generalization, but as a result some training sets have very few speakers, making this an even more challenging scenario. The splits per language were made as close as possible to 80% train, 10% development, and 10% test." ] ]