Dataset schema (each record below follows these four fields):
question_id : string (length 40)
question : string (length 4–171)
answer : sequence
evidence : sequence
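As a minimal illustration of this schema, the sketch below shows one way such records might be loaded and iterated. It assumes the records are stored as JSON Lines with the four fields above; the file name `qa_records.jsonl` and the helper `iter_records` are hypothetical and not part of the original listing.

```python
import json

# Hypothetical file name; the listing does not specify how the records are
# distributed, so JSON Lines (one JSON object per line) is assumed here.
DATA_PATH = "qa_records.jsonl"

def iter_records(path):
    """Yield records with the four fields from the schema above."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            yield {
                "question_id": rec["question_id"],  # 40-character identifier
                "question": rec["question"],        # free-text question (4-171 chars)
                "answer": rec["answer"],            # sequence of answer strings
                "evidence": rec["evidence"],        # sequence of evidence passage lists
            }

if __name__ == "__main__":
    for rec in iter_records(DATA_PATH):
        print(rec["question_id"], "->", rec["answer"])
```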
9785ecf1107090c84c57112d01a8e83418a913c1
Which languages do they work with?
[ "German, Spanish, Chinese" ]
[ [ "We use experiments to evaluate the effectiveness of our proposed method on NER task. On three different low-resource languages, we conducted an experimental evaluation to prove the effectiveness of our back attention mechanism on the NER task. Four datasets are used in our work, including CoNLL 2003 German BIBREF9 , CoNLL 2002 Spanish BIBREF10 , OntoNotes 4 BIBREF11 and Weibo NER BIBREF12 . All the annotations are mapped to the BIOES format. Table TABREF14 shows the detailed statistics of the datasets.", "Table TABREF22 shows the results on Chinese OntoNotes 4.0. Adding BAN to baseline model leads to an increase from 63.25% to 72.15% F1-score. In order to further improve the performance, we use the BERT model BIBREF20 to produce word embeddings. With no segmentation, we surpass the previous state-of-the-art approach by 6.33% F1-score. For Weibo dataset, the experiment results are shown in Table TABREF23 , where NE, NM and Overall denote named entities, nominal entities and both. The baseline model gives a 33.18% F1-score. Using the transfer knowledge by BAN, the baseline model achieves an immense improvement in F1-score, rising by 10.39%. We find that BAN still gets consistent improvement on a strong model. With BAN, the F1-score of BERT+BiLSTM+CRF increases to 70.76%." ] ]
e051d68a7932f700e6c3f48da57d3e2519936c6d
Which pre-trained English NER model do they use?
[ "Bidirectional LSTM based NER model of Flair" ]
[ [ "Pre-trained English NER model We construct the English NER system following BIBREF7 . This system uses a bidirectional LSTM as a character-level language model to take context information for word embedding generation. The hidden states of the character language model (CharLM) are used to create contextualized word embeddings. The final embedding INLINEFORM0 is concatenated by the CharLM embedding INLINEFORM1 and GLOVE embedding INLINEFORM2 BIBREF8 . A standard BiLSTM-CRF named entity recognition model BIBREF0 takes INLINEFORM3 to address the NER task.", "We implement the basic BiLSTM-CRF model using PyTorch framework. FASTTEXT embeddings are used for generating word embeddings. Translation models are trained on United Nation Parallel Corpus. For pre-trained English NER system, we use the default NER model of Flair." ] ]
9e2e5918608a2911b341d4887f58a4595d7d1429
How much training data is required for each low-resource language?
[ "Unanswerable" ]
[ [] ]
0ec4143a4f1a8f597b435f83c0451145be2ab95b
What are the best within-language data augmentation methods?
[ "Frequency masking, Time masking, Additive noise, Speed and volume perturbation" ]
[ [ "In this section, we consider 3 categories of data augmentation techniques that are effectively applicable to video datasets.", "To further increase training data size and diversity, we can create new audios via superimposing each original audio with additional noisy audios in time domain. To obtain diverse noisy audios, we use AudioSet, which consists of 632 audio event classes and a collection of over 2 million manually-annotated 10-second sound clips from YouTube videos BIBREF28.", "We consider applying the frequency and time masking techniques – which are shown to greatly improve the performance of end-to-end ASR models BIBREF23 – to our hybrid systems. Similarly, they can be applied online during each epoch of LF-MMI training, without the need for realignment.", "Consider each utterance (i.e. after the audio segmentation in Section SECREF5), and we compute its log mel spectrogram with $\\nu $ dimension and $\\tau $ time steps:", "Frequency masking is applied $m_F$ times, and each time the frequency bands $[f_0$, $f_0+ f)$ are masked, where $f$ is sampled from $[0, F]$ and $f_0$ is sampled from $[0, \\nu - f)$.", "Time masking is optionally applied $m_T$ times, and each time the time steps $[t_0$, $t_0+ t)$ are masked, where $t$ is sampled from $[0, T]$ and $t_0$ is sampled from $[0, \\tau - t)$.", "Data augmentation ::: Speed and volume perturbation", "Both speed and volume perturbation emulate mean shifts in spectrum BIBREF18, BIBREF19. To perform speed perturbation of the training data, we produce three versions of each audio with speed factors $0.9$, $1.0$, and $1.1$. The training data size is thus tripled. For volume perturbation, each audio is scaled with a random variable drawn from a uniform distribution $[0.125, 2]$." ] ]
90159e143487505ddc026f879ecd864b7f4f479e
How much of the ASR grapheme set is shared between languages?
[ "Little overlap except common basic Latin alphabet and that Hindi and Marathi languages use same script." ]
[ [ "The character sets of these 7 languages have little overlap except that (i) they all include common basic Latin alphabet, and (ii) both Hindi and Marathi use Devanagari script. We took the union of 7 character sets therein as the multilingual grapheme set (Section SECREF2), which contained 432 characters. In addition, we deliberately split 7 languages into two groups, such that the languages within each group were more closely related in terms of language family, orthography or phonology. We thus built 3 multilingual ASR models trained on:" ] ]
d10e256f2f724ad611fd3ff82ce88f7a78bad7f7
What is the performance of the model for the German sub-task A?
[ "macro F1 score of 0.62" ]
[ [ "The performance of our models across different languages for sub-task A are shown in table TABREF19. Our model got the first position in the German sub-task with a macro F1 score of 0.62. The results of sub-task B and sub-task C is shown in table TABREF20 and TABREF21 respectively." ] ]
c691b47c0380c9529e34e8ca6c1805f98288affa
Is the model tested for language identification?
[ "No" ]
[ [] ]
892e42137b14d9fabd34084b3016cf3f12cac68a
Is the model compared to a baseline model?
[ "No" ]
[ [] ]
dc69256bdfe76fa30ce4404b697f1bedfd6125fe
What are the languages used to test the model?
[ "Hindi, English and German (German task won)" ]
[ [ "The dataset at HASOC 2019 were given in three languages: Hindi, English, and German. Dataset in Hindi and English had three subtasks each, while German had only two subtasks. We participated in all the tasks provided by the organisers and decided to develop a single model that would be language agnostic. We used the same model architecture for all the three languages.", "In the results of subtask A, models are mainly affected by imbalance of the dataset. The training dataset of Hindi dataset was more balanced than English or German dataset. Hence, the results were around 0.78. As the dataset in German language was highly imbalanced, the results drops to 0.62. In subtask B, the highest F1 score reached was by the profane class for each language in table TABREF20. The model got confused between OFFN, HATE and PRFN labels which suggests that these models are not able to capture the context in the sentence. The subtask C was again a case of imbalanced dataset as targeted(TIN) label gets the highest F1 score in table TABREF21." ] ]
097ab15f58cb1fce5b5ffb5082b8d7bbee720659
Which language has the lowest error rate reduction?
[ "thai" ]
[ [] ]
b8d5e9fa08247cb4eea835b19377262d86107a9d
What datasets did they use?
[ "IBM-UB-1 dataset BIBREF25, IAM-OnDB dataset BIBREF42, The ICDAR-2013 Competition for Online Handwriting Chinese Character Recognition BIBREF45, ICFHR2018 Competition on Vietnamese Online Handwritten Text Recognition using VNOnDB BIBREF50" ]
[ [ "We present an extensive comparison of the differences in recognition accuracy for eight languages (Sec. SECREF5 ) and compare the accuracy of models trained on publicly available datasets where available (Sec. SECREF4 ). In addition, we propose a new standard experimental protocol for the IBM-UB-1 dataset BIBREF25 (Sec. SECREF50 ) to enable easier comparison between approaches in the future.", "The IAM-OnDB dataset BIBREF42 is probably the most used evaluation dataset for online handwriting recognition. It consists of 298 523 characters in 86 272 word instances from a dictionary of 11 059 words written by 221 writers. We use the standard IAM-OnDB dataset separation: one training set, two validations sets and a test set containing 5 363, 1 438, 1 518 and 3 859 written lines, respectively. We tune the decoder weights using the validation set with 1 438 items and report error rates on the test set.", "We provide an evaluation of our production system trained on our in-house datasets applied to a number of publicly available benchmark datasets from the literature. Note that for all experiments presented in this section we evaluate our current live system without any tuning specifec to the tasks at hand.", "The ICDAR-2013 Competition for Online Handwriting Chinese Character Recognition BIBREF45 introduced a dataset for classifying the most common Chinese characters. We report the error rates in comparison to published results from the competition and more recent work done by others in Table TABREF56 .", "In the ICFHR2018 Competition on Vietnamese Online Handwritten Text Recognition using VNOnDB BIBREF50 , our production system was evaluated against other systems. The system used in the competition is the one reported and described in this paper. Due to licensing restrictions we were unable to do any experiments on the competition training data, or specific tuning for the competition, which was not the case for the other systems mentioned here." ] ]
8de64483ae96c0a03a8e527950582f127b43dceb
Do they report results only on English data?
[ "Yes" ]
[ [ "Dos and Don'ts for the Moral Choice Machine. The verb extraction identifies the most positive and most negative associated verbs in vocabulary, to infer socially desired and neglected behaviour. BIBREF0 (BIBREF0) extracted them with the general positive and negative association sets on the Google Slim embedding. Since those sets are expected to reflect social norms, they are referred as Dos and Don'ts hereafter.", "Summarized, even though the contained positive verbs are quite diverse, all of them carry a positive attitude. Some of the verbs are related to celebration or travelling, others to love matters or physical closeness. All elements of the above set are rather of general and unspecific nature. Analogously, some of the negative words just describe inappropriate behaviour, like slur or misdeal, whereas others are real crimes as murder. As BIBREF0 (BIBREF0) describe, the listed words can be accepted as commonly agreed Dos and Don'ts." ] ]
4d062673b714998800e61f66b6ccbf7eef5be2ac
What is the Moral Choice Machine?
[ "Moral Choice Machine computes the cosine similarity in a sentence embedding space of an arbitrary action embedded in question/answer pairs" ]
[ [ "BIBREF0 (BIBREF0) developed Moral Choice Machine computes the cosine similarity in a sentence embedding space of an arbitrary action embedded in question/answer pairs. This is illustrated in Fig. FIGREF16 for the moral bias of the action murder. Since murdering is a quite destructive and generally refused behaviour, the questions are expected to lie closer to the denying response and thus to yield a negative bias. To create a more meaningful and comprehensive statistic, several question/answer prompts were conflated to a question/answer template (cf. Tab. TABREF15). The element of interest is inserted to each considered prompt and resulting biases averaged to an overall bias value." ] ]
f4238f558d6ddf3849497a130b3a6ad866ff38b3
How is moral bias measured?
[ "Answer with content missing: (formula 1) bias(q, a, b) = cos(a, q) − cos(b, q)\nBias is calculated as substraction of cosine similarities of question and some answer for two opposite answers." ]
[ [ "Analogous to word embeddings, sentence embeddings, e.g. the Universal Sentence Encoder BIBREF8 and Sentence-BERT BIBREF6, allow one to calculate the cosine similarity of various different sentences, as for instance the similarity of a question and the corresponding answer. The more appropriate a specific answer is to a given question, the stronger is its cosine similarity expected to be. When considering two opposite answers, it is therefore possible to determine a bias value:", "where $\\vec{q}$ is the vector representation of the question and $\\vec{a}$ and $\\vec{b}$ the representations of the two answers/choices. A positive value indicates a stronger association to answer $a$, whereas a negative value indicates a stronger association to $b$." ] ]
3582fac4b2705db056f75a14949db7b80cbc3197
What sentence embeddings were used in the previous Jentzsch paper?
[ "Unanswerable" ]
[ [] ]
96dcabaa8b6bd89b032da609e709900a1569a0f9
How do the authors define deontological ethical reasoning?
[ "These ask which choices are morally required, forbidden, or permitted, norms are understood as universal rules of what to do and what not to do" ]
[ [ "With regards to complexity and intangibility of ethics and morals, we restrict ourselves to a rather basic implementation of this construct, following the theories of deontological ethics. These ask which choices are morally required, forbidden, or permitted instead of asking which kind of a person we should be or which consequences of our actions are to be preferred. Thus, norms are understood as universal rules of what to do and what not to do. Therefore, we focus on the valuation of social acceptance in single verbs and single verbs with surrounding context information —e.g. trust my friend or trust a machine— to figure out which of them represent a Do and which tend to be a Don't. Because we specifically chose templates in the first person, i.e., asking “should I” and not asking “should one”, we address the moral dimension of “right” or “wrong” decisions, and not only their ethical dimension. This is the reason why we will often use the term “moral”, although we actually touch upon “ethics” and “moral”. To measure the valuation, we make use of implicit association tests (IATs) and their connections to word embeddings." ] ]
f416c6818a7a8acb7ec4682ed424ecdbd7dd6df1
How does framework automatically chooses different curricula at the evolving learning process according to the learning status of the neural dialogue generation model?
[ "The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs." ]
[ [ "The adaptive multi-curricula learning framework is established upon the reinforcement learning (RL) paradigm. Figure FIGREF18 illustrates the overall learning process. The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs. A positive reward is expected if a multi-curricula scheduling action $a_t$ brings improvements on the model's performance, and the current mini-batch of training samples is drawn consulting with the scheduling action $a_t$. The neural dialogue generation model learns from those mini-batches, resulting with a new learning status $s_{t+1}$. The adaptive multi-curricula learning framework is optimized to maximize the reward. Such learning process loops continuously until the performance of the neural dialogue generation model converges." ] ]
a1d422cb2e428333961370496ca281a1be99fdff
What human judgement metrics are used?
[ "coherence, logical consistency, fluency and diversity" ]
[ [ "We conduct a human evaluation to validate the effectiveness of the proposed multi-curricula learning framework. We employ the DailyDialog as the evaluation corpus since it is closer to our daily conversations and easier for humans to make the judgment. We randomly sampled 100 cases from the test set and compared the generated responses of the models trained with the vanilla learning procedure and the multi-curricula learning framework. Three annotators, who have no knowledge about which system the response is from, are then required to evaluate among win (response$_1$ is better), loss (response$_2$ is better) and tie (they are equally good or bad) independently, considering four aspects: coherence, logical consistency, fluency and diversity. Cases with different rating results are counted as “tie”. Table TABREF25 reveals the results of the subjective evaluation. We observe that our multi-curricula learning framework outperforms the vanilla training method on all the five dialogue models and the kappa scores indicate that the annotators came to a fair agreement in the judgment. We checked the cases on which the vanilla training method loses to our multi-curricula learning method and found that the vanilla training method usually leads to irrelevant, generic and repetitive responses, while our method effectively alleviates such defects." ] ]
3de9bf4b0b667b3f1181da9f006da1354565bcbd
What automatic evaluation metrics are used?
[ "BLEU, embedding-based metrics (Average, Extrema, Greedy and Coherence), , entropy-based metrics (Ent-{1,2}), distinct metrics (Dist-{1,2,3} and Intra-{1,2,3})" ]
[ [ "We perform experiments using the following state-of-the-art models: (1) SEQ2SEQ: a sequence-to-sequence model with attention mechanisms BIBREF21, (2) CVAE: a conditional variational auto-encoder model with KL-annealing and a BOW loss BIBREF2, (3) Transformer: an encoder-decoder architecture relying solely on attention mechanisms BIBREF22, (4) HRED: a generalized sequence-to-sequence model with the hierarchical RNN encoder BIBREF23, (5) DialogWAE: a conditional Wasserstein auto-encoder, which models the distribution of data by training a GAN within the latent variable space BIBREF6. We adopt several standard metrics widely used in existing works to measure the performance of dialogue generation models, including BLEU BIBREF24, embedding-based metrics (Average, Extrema, Greedy and Coherence) BIBREF25, BIBREF26, entropy-based metrics (Ent-{1,2}) BIBREF0 and distinct metrics (Dist-{1,2,3} and Intra-{1,2,3}) BIBREF1, BIBREF6." ] ]
1a1293e24f4924064e6fb9998658f5a329879109
What state of the art models were used in experiments?
[ "SEQ2SEQ, CVAE, Transformer, HRED, DialogWAE" ]
[ [ "We perform experiments using the following state-of-the-art models: (1) SEQ2SEQ: a sequence-to-sequence model with attention mechanisms BIBREF21, (2) CVAE: a conditional variational auto-encoder model with KL-annealing and a BOW loss BIBREF2, (3) Transformer: an encoder-decoder architecture relying solely on attention mechanisms BIBREF22, (4) HRED: a generalized sequence-to-sequence model with the hierarchical RNN encoder BIBREF23, (5) DialogWAE: a conditional Wasserstein auto-encoder, which models the distribution of data by training a GAN within the latent variable space BIBREF6. We adopt several standard metrics widely used in existing works to measure the performance of dialogue generation models, including BLEU BIBREF24, embedding-based metrics (Average, Extrema, Greedy and Coherence) BIBREF25, BIBREF26, entropy-based metrics (Ent-{1,2}) BIBREF0 and distinct metrics (Dist-{1,2,3} and Intra-{1,2,3}) BIBREF1, BIBREF6." ] ]
3ccd337f77c5d2f7294eb459ccc1770796c2eaef
What five dialogue attributes were analyzed?
[ "Model Confidence, Continuity, Query-relatedness, Repetitiveness, Specificity" ]
[ [ "Curriculum Plausibility ::: Conversational Attributes ::: Specificity", "A notorious problem for neural dialogue generation model is that the model is prone to generate generic responses. The most unspecific responses are easy to learn, but are short and meaningless, while the most specific responses, consisting of too many rare words, are too difficult to learn, especially at the initial learning stage. Following BIBREF11, we measure the specificity of the response in terms of each word $w$ using Normalized Inverse Document Frequency (NIDF, ranging from 0 to 1):", "Curriculum Plausibility ::: Conversational Attributes ::: Repetitiveness", "Repetitive responses are easy to generate in current auto-regressive response decoding, where response generation loops frequently, whereas diverse and informative responses are much more complicated for neural dialogue generation. We measure the repetitiveness of a response $r$ as:", "Curriculum Plausibility ::: Conversational Attributes ::: Continuity", "A coherent response not only responds to the given query, but also triggers the next utterance. An interactive conversation is carried out for multiple rounds and a response in the current turn also acts as the query in the next turn. As such, we introduce the continuity metric, which is similar to the query-relatedness metric, to assess the continuity of a response $r$ with respect to the subsequent utterance $u$, by measuring the cosine similarities between them.", "Curriculum Plausibility ::: Conversational Attributes ::: Model Confidence", "Despite the heuristic dialogue attributes, we further introduce the model confidence as an attribute, which distinguishes the easy-learnt samples from the under-learnt samples in terms of the model learning ability. A pretrained neural dialogue generation model assigns a relatively higher confidence probability for the easy-learnt samples than the under-learnt samples. Inspired by BIBREF16, BIBREF17, we employ the negative loss value of a dialogue sample under the pretrained model as the model confidence measure, indicating whether a sampled response is easy to be generated. Here we choose the attention-based sequence-to-sequence architecture with a cross-entropy objective as the underlying dialogue model.", "Curriculum Plausibility ::: Conversational Attributes ::: Query-relatedness", "A conversation is considered to be coherent if the response correlates well with the given query. For example, given a query “I like to paint”, the response “What kind of things do you paint?” is more relevant and easier to learn than another loosely-coupled response “Do you have any pets?”. Following previous work BIBREF14, we measure the query-relatedness using the cosine similarities between the query and its corresponding response in the embedding space: $\\textit {cos\\_sim}(\\textit {sent\\_emb}(c), \\textit {sent\\_emb}(r))$, where $c$ is the query and $r$ is the response. The sentence embedding is computed by taking the average word embedding weighted by the smooth inverse frequency $\\textit {sent\\_emb}(e)=\\frac{1}{|e|}\\sum _{w\\in {}e}\\frac{0.001}{0.001 + p(w)}emb(w)$ of words BIBREF15, where $emb(w)$ and $p(w)$ are the embedding and the probability of word $w$ respectively." ] ]
f6937199e4b06bfbaa22edacc7339410de9703db
What three publicly available coropora are used?
[ "PersonaChat BIBREF12, DailyDialog BIBREF13, OpenSubtitles BIBREF7" ]
[ [ "Intuitively, a well-organized curriculum should provide the model learning with easy dialogues first, and then gradually increase the curriculum difficulty. However, currently, there is no unified approach for dialogue complexity evaluation, where the complexity involves multiple aspects of attributes. In this paper, we prepare the syllabus for dialogue learning with respect to five dialogue attributes. To ensure the universality and general applicability of the curriculum, we perform an in-depth investigation on three publicly available conversation corpora, PersonaChat BIBREF12, DailyDialog BIBREF13 and OpenSubtitles BIBREF7, consisting of 140 248, 66 594 and 358 668 real-life conversation samples, respectively." ] ]
61c9f97ee1ac5a4b8654aa152f05f22e153e7e6e
Which datasets do they use?
[ " Wikipedia toxic comments" ]
[ [ "We validate our approach on the Wikipedia toxic comments dataset BIBREF18 . Our fairness experiments show that the classifiers trained with our method achieve the same performance, if not better, on the original task, while improving AUC and fairness metrics on a synthetic, unbiased dataset. Models trained with our technique also show lower attributions to identity terms on average. Our technique produces much better word vectors as a by-product when compared to the baseline. Lastly, by setting an attribution target of 1 on toxic words, a classifier trained with our objective function achieves better performance when only a subset of the data is present." ] ]
9ae084e76095194135cd602b2cdb5fb53f2935c1
What metrics are used for evaluation?
[ "word error rate" ]
[ [ "The Density Ratio method consistently outperformed Shallow Fusion for the cross-domain scenarios examined, with and without fine-tuning to audio data from the target domain. Furthermore, the gains in WER over the baseline are significantly larger for the Density Ratio method than for Shallow Fusion, with up to 28% relative reduction in WER (17.5% $\\rightarrow $ 12.5%) compared to up to 17% relative reduction (17.5% $\\rightarrow $ 14.5%) for Shallow Fusion, in the no fine-tuning scenario." ] ]
67ee7a53aa57ce0d0bc1a20d41b64cb20303f4b7
How much training data is used?
[ "163,110,000 utterances" ]
[ [ "The following data sources were used to train the RNN-T and associated RNN-LMs in this study.", "Source-domain baseline RNN-T: approximately 120M segmented utterances (190,000 hours of audio) from YouTube videos, with associated transcripts obtained from semi-supervised caption filtering BIBREF28.", "Source-domain normalizing RNN-LM: transcripts from the same 120M utterance YouTube training set. This corresponds to about 3B tokens of the sub-word units used (see below, Section SECREF30).", "Target-domain RNN-LM: 21M text-only utterance-level transcripts from anonymized, manually transcribed audio data, representative of data from a Voice Search service. This corresponds to about 275M sub-word tokens.", "Target-domain RNN-T fine-tuning data: 10K, 100K, 1M and 21M utterance-level {audio, transcript} pairs taken from anonymized, transcribed Voice Search data. These fine-tuning sets roughly correspond to 10 hours, 100 hours, 1000 hours and 21,000 hours of audio, respectively." ] ]
7eb3852677e9d1fb25327ba014d2ed292184210c
How is the training data collected?
[ "from YouTube videos, with associated transcripts obtained from semi-supervised caption filtering, from a Voice Search service" ]
[ [ "The following data sources were used to train the RNN-T and associated RNN-LMs in this study.", "Source-domain baseline RNN-T: approximately 120M segmented utterances (190,000 hours of audio) from YouTube videos, with associated transcripts obtained from semi-supervised caption filtering BIBREF28.", "Source-domain normalizing RNN-LM: transcripts from the same 120M utterance YouTube training set. This corresponds to about 3B tokens of the sub-word units used (see below, Section SECREF30).", "Target-domain RNN-LM: 21M text-only utterance-level transcripts from anonymized, manually transcribed audio data, representative of data from a Voice Search service. This corresponds to about 275M sub-word tokens.", "Target-domain RNN-T fine-tuning data: 10K, 100K, 1M and 21M utterance-level {audio, transcript} pairs taken from anonymized, transcribed Voice Search data. These fine-tuning sets roughly correspond to 10 hours, 100 hours, 1000 hours and 21,000 hours of audio, respectively." ] ]
4f9a8b50903deb1850aee09c95d1b6204a7410b4
What language(s) is the model trained/tested on?
[ "Unanswerable" ]
[ [] ]
c96a6b30d71c6669592504e4ee8001e9d1eb1fba
was bert used?
[ "Yes" ]
[ [ "BERT BIBREF6 is a state-of-the-art model, which has been pre-trained on a large corpus and is suitable to be fine-tuned for various downstream NLP tasks. The main innovation between this model and existing language models is in how the model is trained. For BERT, the text conditioning happens on both the left and right context of every word and is therefore bidirectional. In previous models BIBREF7, a unidirectional language model was usually used in the pre-training. With BERT, two fully unsupervised tasks are performed. The Masked Language Model and the Next Sentence Prediction (NSP).", "For this study, the NSP is used as a proxy for the relevance of response. Furthermore, in order to improve performance, we fine-tune on a customized dataset which achieved an accuracy of 82.4%. For the main analysis, we used the single-turn dataset, which gave us a correlation of 0.28 between the mean of the AMT evaluation and the BERT NSP. Next, we put each score into a category. For example, if the average score is 2.3, this would be placed in category 2. We then displayed the percentage of positive and negative predictions in a histogram for each of the categories. As seen in Figure FIGREF5, a clear pattern is seen between the higher scores and the positive prediction, and the lower scores and the negative predictions. details of how they are combined to create a final classification layer." ] ]
42d66726b5bf8de5b0265e09d76f5ab00c0e851a
what datasets did they use?
[ "Single-Turn, Multi-Turn" ]
[ [ "Single-Turn: This dataset consists of single-turn instances of statements and responses from the MiM chatbot developed at Constellation AI BIBREF5. The responses provided were then evaluated using Amazon Mechanical Turk (AMT) workers. A total of five AMT workers evaluated each of these pairs. The mean of the five evaluations is then used as the target variable. A sample can be seen in Table TABREF3. This dataset was used during experiments with results published in the Results section.", "Multi-Turn: This dataset is taken from the ConvAI2 challenge and consists of various types of dialogue that have been generated by human-computer conversations. At the end of each dialogue, an evaluation score has been given, for each dialogue, between 1–4." ] ]
c418deef9e44bc8448d9296c6517824cb95bd554
which existing metrics do they compare with?
[ "F1-score, BLEU score" ]
[ [ "To create a final metric, we combine the individual components from Section 2 as features into a Support Vector Machine. The final results for our F1-score from this classification technique are 0.52 and 0.31 for the single and multi-turn data respectively.", "We compare our results for both the single-turn and multi-turn experiments to the accuracies on the test data based off the BLEU score. We see an increase of 6% for our method with respect to the BLEU score in the single turn data, and a no change when using the multi-turn test set." ] ]
d6d29040e7fafceb188e62afba566016b119b23c
Which datasets do they evaluate on?
[ "PDP-60, WSC-273" ]
[ [ "Recently, neural models pre-trained on a language modeling task, such as ELMo BIBREF0 , OpenAI GPT BIBREF1 , and BERT BIBREF2 , have achieved impressive results on various natural language processing tasks such as question-answering and natural language inference. The success of BERT can largely be associated to the notion of context-aware word embeddings, which differentiate it from common approaches such as word2vec BIBREF3 that establish a static semantic embedding. Since the introduction of BERT, the NLP community continues to be impressed by the amount of ideas produced on top of this powerful language representation model. However, despite its success, it remains unclear whether the representations produced by BERT can be utilized for tasks such as commonsense reasoning. Particularly, it is not clear whether BERT shed light on solving tasks such as the Pronoun Disambiguation Problem (PDP) and Winograd Schema Challenge (WSC). These tasks have been proposed as potential alternatives to the Turing Test, because they are formulated to be robust to statistics of word co-occurrence BIBREF4 .", "In this paper, we show that the attention maps created by an out-of-the-box BERT can be directly exploited to resolve coreferences in long sentences. As such, they can be simply repurposed for the sake of commonsense reasoning tasks while achieving state-of-the-art results on the multiple task. On both PDP and WSC, our method outperforms previous state-of-the-art methods, without using expensive annotated knowledge bases or hand-engineered features. On a Pronoun Disambiguation dataset, PDP-60, our method achieves 68.3% accuracy, which is better than the state-of-art accuracy of 66.7%. On a WSC dataset, WSC-273, our method achieves 60.3%. As of today, state-of-the-art accuracy on the WSC-273 for single model performance is around 57%, BIBREF14 and BIBREF13 . These results suggest that BERT implicitly learns to establish complex relationships between entities such as coreference resolution. Although this helps in commonsense reasoning, solving this task requires more than employing a language model learned from large text corpora." ] ]
21663d2744a28e0d3087fbff913c036686abbb9a
How does their model differ from BERT?
[ "Their model does not differ from BERT." ]
[ [ "In all our experiments, we used the out-of-the-box BERT models without any task-specific fine-tuning. Specifically, we use the PyTorch implementation of pre-trained $bert-base-uncased$ models supplied by Google. This model has 12 layers (i.e., Transformer blocks), a hidden size of 768, and 12 self-attention heads. In all cases we set the feed-forward/filter size to be 3072 for the hidden size of 768. The total number of parameters of the model is 110M." ] ]
d8cecea477dfc5163dca6e2078a2fe6bc94ce09f
Which metrics are they evaluating with?
[ "accuracy" ]
[ [ "We evaluated baselines and our model using accuracy as the metric on the ROCStories dataset, and summarized these results in Table 2 . The linear classifier with language model, Msap, achieved an accuracy of 75.2%. When adding additional features, such as sentiment trajectories and topic words to traditional machine learning methods, HCM achieved an accuracy of 77.6%. Recently, more neural network-based models are used. DSSM simply used a deep structured semantic model to learn representations for both bodies and endings only achieved an accuracy of 58.5%. Utilizing Cai improved neural model performance to 74.7% by applying attention mechanisms on a BiLSTM RNN structure. SeqMANN further improved the performance to 84.7%, when combining more information from embedding layers, like character features, part-of-speech (POS) tagging features, sentiment polarity, negation information and some external knowledge of semantic sequence. Researchers also improved model performance by pre-training word embeddings on external large corpus. FTLM pre-trained a language model on a large unlabeled corpus and fine-tuned on the ROCStories dataset, and achieved an accuracy of 86.5%." ] ]
dd2f21d60cfca3917a9eb8b192c194f4de85e8b2
What different properties of the posterior distribution are explored in the paper?
[ "interdependence between rate and distortion, impact of KL on the sharpness of the approximated posteriors, demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities, some experiments to find if any form of syntactic information is encoded in the latent space" ]
[ [ "We conduct various experiments to illustrate the properties that are encouraged via different KL magnitudes. In particular, we revisit the interdependence between rate and distortion, and shed light on the impact of KL on the sharpness of the approximated posteriors. Then, through a set of qualitative and quantitative experiments for text generation, we demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities. Finally, we run some experiments to find if any form of syntactic information is encoded in the latent space. For all experiments, we use the objective function of eqn. DISPLAY_FORM6 with $\\beta =1$. We do not use larger $\\beta $s because the constraint $\\text{KL}=C$ is always satisfied." ] ]
ccf7415b515fe5c59fa92d4a8af5d2437c591615
Why does proposed term help to avoid posterior collapse?
[ "by setting a non-zero positive constraint ($C\\ge 0$) on the KL term ($|D_{KL}\\big (q_\\phi ({z}|{x}) || p({z})\\big )-C|$)" ]
[ [ "The immediate impact of the explicit constraint is avoiding the collapse issue ($D_{KL}=0$) by setting a non-zero positive constraint ($C\\ge 0$) on the KL term ($|D_{KL}\\big (q_\\phi ({z}|{x}) || p({z})\\big )-C|$). We experimented with a range of constraints ($C$) on the KL term and various powerful and weak decoder architectures (LSTM, GRU, and CNN), and empirically confirmed that in all cases the constraint was satisfied." ] ]
fee5aef7ae521ccd1562764a91edefecec34624d
How does explicit constraint on the KL divergence term that authors propose looks like?
[ "Answer with content missing: (Formula 2) Formula 2 is an answer: \n\\big \\langle\\! \\log p_\\theta({x}|{z}) \\big \\rangle_{q_\\phi({z}|{x})} - \\beta |D_{KL}\\big(q_\\phi({z}|{x}) || p({z})\\big)-C|" ]
[ [ "Given the above interpretation, we now turn to a slightly different formulation of ELBO based on $\\beta $-VAE BIBREF15. This allows control of the trade-off between the reconstruction and KL terms, as well as to set explicit KL value. While $\\beta $-VAE offers regularizing the ELBO via an additional coefficient $\\beta \\in {\\rm I\\!R}^+$, a simple extension BIBREF16 of its objective function incorporates an additional hyperparameter $C$ to explicitly control the magnitude of the KL term,", "where $C\\!\\! \\in \\!\\! {\\rm I\\!R}^+$ and $| . |$ denotes the absolute value. While we could apply constraint optimization to impose the explicit constraint of $\\text{KL}\\!\\!=\\!\\!C$, we found that the above objective function satisfies the constraint (experiment). Alternatively, it has been shown BIBREF21 the similar effect could be reached by replacing the second term in eqn. DISPLAY_FORM6 with $\\max \\big (C,D_{KL}\\big (q_\\phi ({z}|{x}) || p({z})\\big )\\big )$ at the risk of breaking the ELBO when $\\text{KL}\\!\\!<\\!\\!C$ BIBREF22." ] ]
90f80a94fabaab72833256572db1d449c2779beb
Did they experiment with the tool?
[ "Yes" ]
[ [ "We present two use cases on which Seshat was developped: clinical interviews, and daylong child-centered recordings." ] ]
5872279c5165cc8a0c58cf1f89838b7c43217b0e
Can it be used for any language?
[ "Unanswerable" ]
[ [] ]
da55878d048e4dca3ca3cec192015317b0d630b1
Is this software available to the public?
[ "Yes" ]
[ [ "Setting up a modern fully-fledged web service is a arduous task, usually requiring a seasoned system administrator as well as sometimes having very precise system requirements. Luckily, the Docker virtualisation platform ensures that anyone with a recent-enough install of that software can set up Seshat in about one command (while still allowing some flexibility via a configuration file). For those willing to have a more tightly-controlled installation of Seshat on their system, we also fully specify the manual installation steps in our online documentation)." ] ]
7d300176afa04947ac847135ac6ea2929908c0b0
What shared task does this system achieve SOTA in?
[ "tweetLID workshop shared task" ]
[ [ "The tweetLID workshop shared task requires systems to identify the language of tweets written in Spanish (es), Portuguese (pt), Catalan (ca), English (en), Galician (gl) and Basque (eu). Some language pairs are similar (es and ca; pt and gl) and this poses a challenge to systems that rely on content features alone. We use the supplied evaluation corpus, which has been manually labelled with six languages and evenly split into training and test collections. We use the official evaluation script and report precision, recall and F-score, macro-averaged across languages. This handles ambiguous tweets by permitting systems to return any of the annotated languages. Table TABREF10 shows that using the content model alone is more effective for languages that are distinct in our set of languages (i.e. English and Basque). For similar languages, adding the social model helps discriminate them (i.e. Spanish, Portuguese, Catalan and Galician), particularly those where a less-resourced language is similar to a more popular one. Using the social graph almost doubles the F-score for undecided (und) languages, either not in the set above or hard-to-identify, from 18.85% to 34.95%. Macro-averaged, our system scores 76.63%, higher than the best score in the competition: 75.2%." ] ]
df9d16a2c4983a0ff46081e3ff4d6e7ef3338264
How are labels propagated using this approach?
[ "We use Junto (mad) BIBREF5 to propagate labels from labelled to unlabelled nodes. " ]
[ [ "We create the graph using all data, and training set tweets have an initial language label distribution. A naïve approach to building the tweet-tweet subgraph requires O( INLINEFORM0 ) comparisons, measuring the similarity of each tweet with all others. Instead, we performed INLINEFORM1 -nearest-neighbour classification on all tweets, represented as a bag of unigrams, and compared each tweet and the top- INLINEFORM2 neighbours. We use Junto (mad) BIBREF5 to propagate labels from labelled to unlabelled nodes. Upon convergence, we renormalise label scores for initially unlabelled nodes to find the value of INLINEFORM4 ." ] ]
8c35caf3772637e6297009ceab38f7f5be38ea9d
What information is contained in the social graph of tweet authors?
[ " the graph, composed of three types of nodes: tweets (T), users (U) and the “world” (W). Edges are created between nodes and weighted as follows: T-T the unigram cosine similarity between tweets, T-U weighted 100 between a tweet and its author, U-U weighted 1 between two users in a “follows” relationship and U-W weighted 0.001 to ensure a connected graph for the mad algorithm." ]
[ [ "We use a graph to model the social media context, relating tweets to one another, authors to tweets and other authors. Figure FIGREF7 shows the graph, composed of three types of nodes: tweets (T), users (U) and the “world” (W). Edges are created between nodes and weighted as follows: T-T the unigram cosine similarity between tweets, T-U weighted 100 between a tweet and its author, U-U weighted 1 between two users in a “follows” relationship and U-W weighted 0.001 to ensure a connected graph for the mad algorithm." ] ]
3a1bd3ec1a7ce9514da0cb2dfcaa454ba8c0ed14
What were the five English subtasks?
[ " five subtasks which involve standard classification, ordinal classification and distributional estimation. For a more detailed description see BIBREF0" ]
[ [ "Determining the sentiment polarity of tweets has become a landmark homework exercise in natural language processing (NLP) and data science classes. This is perhaps because the task is easy to understand and it is also easy to get good results with very simple methods (e.g. positive - negative words counting). The practical applications of this task are wide, from monitoring popular events (e.g. Presidential debates, Oscars, etc.) to extracting trading signals by monitoring tweets about public companies. These applications often benefit greatly from the best possible accuracy, which is why the SemEval-2017 Twitter competition promotes research in this area. The competition is divided into five subtasks which involve standard classification, ordinal classification and distributional estimation. For a more detailed description see BIBREF0 ." ] ]
39492338e27cb90bf1763e4337c2f697cf5082ba
How many CNNs and LSTMs were ensembled?
[ "10 CNNs and 10 LSTMs" ]
[ [ "To reduce variance and boost accuracy, we ensemble 10 CNNs and 10 LSTMs together through soft voting. The models ensembled have different random weight initializations, different number of epochs (from 4 to 20 in total), different set of filter sizes (either INLINEFORM0 , INLINEFORM1 or INLINEFORM2 ) and different embedding pre-training algorithms (either Word2vec or FastText)." ] ]
a7adb63db5066d39fdf2882d8a7ffefbb6b622f0
what was the baseline?
[ "There is no baseline." ]
[ [ "This paper experimented with two different models and compared them against each other. The inspiration for the first model comes from BIBREF1 in their paper DBLP:journals/corr/cmp-lg-9707002 where they used an MLP for text genre detection. The model used in this paper comes from scikit-learn's neural_network module and is called MLPClassifier. Table TABREF35 shows all parameters that were changed from the default values." ] ]
980568848cc8e7c43f767da616cf1e176f406b05
how many movie genres do they explore?
[ "27 " ]
[ [ "The second source of data was the genres for all reviews which were scraped from the IMDb site. A total of 27 different genres were scraped. A list of all genres can be find in Appendix SECREF8 . A review can have one genre or multiple genres. For example a review can be for a movie that is both Action, Drama and Thriller at the same time while another move only falls into Drama." ] ]
f1b738a7f118438663f9d77b4ccd3a2c4fd97c01
what evaluation metrics are discussed?
[ "precision , recall , Hamming loss, micro averaged precision and recall " ]
[ [ "When evaluating classifiers it is common to use accuracy, precision and recall as well as Hamming loss. Accuracy, precision and recall are defined by the the four terms true positive ( INLINEFORM0 ), true negative ( INLINEFORM1 ), false positive ( INLINEFORM2 ) and false negative ( INLINEFORM3 ) which can be seen in table TABREF16 .", "It has been shown that when calculating precision and recall on multi-label classifiers, it can be advantageous to use micro averaged precision and recall BIBREF6 . The formulas for micro averaged precision are expressed as DISPLAYFORM0 DISPLAYFORM1" ] ]
5a23f436a7e0c33e4842425cf86d5fd8ba78ac92
How big is dataset used?
[ "553,451 documents" ]
[ [] ]
2f4acd34eb2d09db9b5ad9b1eb82cb4a88c13f5b
What is dataset used for news-driven stock movement prediction?
[ "the public financial news dataset released by BIBREF4" ]
[ [ "We use the public financial news dataset released by BIBREF4, which is crawled from Reuters and Bloomberg over the period from October 2006 to November 2013. We conduct our experiments on predicting the Standard & Poor’s 500 stock (S&P 500) index and its selected individual stocks, obtaining indices and prices from Yahoo Finance. Detailed statistics of the training, development and test sets are shown in Table TABREF8. We report the final results on test set after using development set to tune some hyper-parameters." ] ]
e7329c403af26b7e6eef8b60ba6fefbe40ccf8ce
How much better does this baseline neural model do?
[ "The model outperforms at every point in the\nimplicit-tuples PR curve reaching almost 0.8 in recall" ]
[ [] ]
e79a5e435fcf5587535f06c9215d19a66caadaff
What is the SemEval-2016 task 8?
[ "Unanswerable" ]
[ [] ]
f7d67d6c6fbc62b2953ab74db6871b122b3c92cc
How much faster is training time for MGNC-CNN over the baselines?
[ "It is an order of magnitude more efficient in terms of training time., his model requires pre-training and mutual-learning and requires days of training time, whereas the simple architecture we propose requires on the order of an hour" ]
[ [ "Our approach enjoys the following advantages compared to the only existing comparable model BIBREF11 : (i) It can leverage diverse, readily available word embeddings with different dimensions, thus providing flexibility. (ii) It is comparatively simple, and does not, for example, require mutual learning or pre-training. (iii) It is an order of magnitude more efficient in terms of training time.", "More similar to our work, Yin and Schütze yin-schutze:2015:CoNLL proposed MVCNN for sentence classification. This CNN-based architecture accepts multiple word embeddings as inputs. These are then treated as separate `channels', analogous to RGB channels in images. Filters consider all channels simultaneously. MVCNN achieved state-of-the-art performance on multiple sentence classification tasks. However, this model has practical drawbacks. (i) MVCNN requires that input word embeddings have the same dimensionality. Thus to incorporate a second set of word vectors trained on a corpus (or using a model) of interest, one needs to either find embeddings that happen to have a set number of dimensions or to estimate embeddings from scratch. (ii) The model is complex, both in terms of implementation and run-time. Indeed, this model requires pre-training and mutual-learning and requires days of training time, whereas the simple architecture we propose requires on the order of an hour (and is easy to implement).", "We can see that MGNC-CNN and MG-CNN always outperform baseline methods (including C-CNN), and MGNC-CNN is usually better than MG-CNN. And on the Subj dataset, MG-CNN actually achieves slightly better results than BIBREF11 , with far less complexity and required training time (MGNC-CNN performs comparably, although no better, here). On the TREC dataset, the best-ever accuracy we are aware of is 96.0% BIBREF21 , which falls within the range of the result of our MGNC-CNN model with three word embeddings. On the irony dataset, our model with three embeddings achieves 4% improvement (in terms of AUC) compared to the baseline model. On SST-1 and SST-2, our model performs slightly worse than BIBREF11 . However, we again note that their performance is achieved using a much more complex model which involves pre-training and mutual-learning steps. This model takes days to train, whereas our model requires on the order of an hour." ] ]
085147cd32153d46dd9901ab0f9195bfdbff6a85
What are the baseline models?
[ "MC-CNN\nMVCNN\nCNN" ]
[ [ "We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating embeddings for each word to form long vector inputs. We refer to this as Concatenation-CNN C-CNN. For all multiple embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combined sets of embedding: word2vec+Glove, and word2vec+syntactic, and one three sets of embedding: word2vec+Glove+syntactic. For all models, we tuned the l2 norm constraint INLINEFORM0 over the range INLINEFORM1 on a validation set. For instantiations of MGNC-CNN in which we exploited two embeddings, we tuned both INLINEFORM2 , and INLINEFORM3 ; where we used three embedding sets, we tuned INLINEFORM4 and INLINEFORM5 ." ] ]
c0035fb1c2b3de15146a7ce186ccd2e366fb4da2
By how much of MGNC-CNN out perform the baselines?
[ "In terms of Subj the Average MGNC-CNN is better than the average score of baselines by 0.5. Similarly, Scores of SST-1, SST-2, and TREC where MGNC-CNN has similar improvements. \nIn case of Irony the difference is about 2.0. \n" ]
[ [ "We repeated each experiment 10 times and report the mean and ranges across these. This replication is important because training is stochastic and thus introduces variance in performance BIBREF4 . Results are shown in Table TABREF2 , and the corresponding best norm constraint value is shown in Table TABREF2 . We also show results on Subj, SST-1 and SST-2 achieved by the more complex model of BIBREF11 for comparison; this represents the state-of-the-art on the three datasets other than TREC." ] ]
a8e4a67dd67ae4a9ebf983a90b0d256f4b9ff6c6
What dataset/corpus is this evaluated over?
[ " SST-1, SST-2, Subj , TREC , Irony " ]
[ [ "Stanford Sentiment Treebank Stanford Sentiment Treebank (SST) BIBREF14 . This concerns predicting movie review sentiment. Two datasets are derived from this corpus: (1) SST-1, containing five classes: very negative, negative, neutral, positive, and very positive. (2) SST-2, which has only two classes: negative and positive. For both, we remove phrases of length less than 4 from the training set.", "Subj BIBREF15 . The aim here is to classify sentences as either subjective or objective. This comprises 5000 instances of each.", "TREC BIBREF16 . A question classification dataset containing six classes: abbreviation, entity, description, human, location and numeric. There are 5500 training and 500 test instances.", "Irony BIBREF17 . This dataset contains 16,006 sentences from reddit labeled as ironic (or not). The dataset is imbalanced (relatively few sentences are ironic). Thus before training, we under-sampled negative instances to make classes sizes equal. Note that for this dataset we report the Area Under Curve (AUC), rather than accuracy, because it is imbalanced." ] ]
34dd0ee1374a3afd16cf8b0c803f4ef4c6fec8ac
What are the comparable alternative architectures?
[ "standard CNN, C-CNN, MVCNN " ]
[ [ "We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating embeddings for each word to form long vector inputs. We refer to this as Concatenation-CNN C-CNN. For all multiple embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combined sets of embedding: word2vec+Glove, and word2vec+syntactic, and one three sets of embedding: word2vec+Glove+syntactic. For all models, we tuned the l2 norm constraint INLINEFORM0 over the range INLINEFORM1 on a validation set. For instantiations of MGNC-CNN in which we exploited two embeddings, we tuned both INLINEFORM2 , and INLINEFORM3 ; where we used three embedding sets, we tuned INLINEFORM4 and INLINEFORM5 .", "More similar to our work, Yin and Schütze yin-schutze:2015:CoNLL proposed MVCNN for sentence classification. This CNN-based architecture accepts multiple word embeddings as inputs. These are then treated as separate `channels', analogous to RGB channels in images. Filters consider all channels simultaneously. MVCNN achieved state-of-the-art performance on multiple sentence classification tasks. However, this model has practical drawbacks. (i) MVCNN requires that input word embeddings have the same dimensionality. Thus to incorporate a second set of word vectors trained on a corpus (or using a model) of interest, one needs to either find embeddings that happen to have a set number of dimensions or to estimate embeddings from scratch. (ii) The model is complex, both in terms of implementation and run-time. Indeed, this model requires pre-training and mutual-learning and requires days of training time, whereas the simple architecture we propose requires on the order of an hour (and is easy to implement)." ] ]
53377f1c5eda961e438424d71d16150e669f7072
Which state-of-the-art model is surpassed by 9.68% attraction score?
[ "pure summarization model NHG" ]
[ [ "In terms of attraction scores in Table TABREF51, we have three findings: (1) The human-written headlines are more attractive than those from NHG, which agrees with our observation in Section SECREF1. (2) Our TitleStylist can generate more attractive headlines over the NHG and Multitask baselines for all three styles, demonstrating that adapting the model to these styles could improve the attraction and specialization of some parameters in the model for different styles can further enhance the attraction. (3) Adapting the model to the “Clickbait” style could create the most attractive headlines, even out-weighting the original ones, which agrees with the fact that click-baity headlines are better at drawing readers' attention. To be noted, although we learned the “Clickbait” style into our summarization system, we still made sure that we are generating relevant headlines instead of too exaggerated ones, which can be verified by our relevance scores." ] ]
f37ed011e7eb259360170de027c1e8557371f002
What is increase in percentage of humor contained in headlines generated with TitleStylist method (w.r.t. baselines)?
[ "Humor in headlines (TitleStylist vs Multitask baseline):\nRelevance: +6.53% (5.87 vs 5.51)\nAttraction: +3.72% (8.93 vs 8.61)\nFluency: 1,98% (9.29 vs 9.11)" ]
[ [ "The human evaluation is to have a comprehensive measurement of the performances. We conduct experiments on four criteria, relevance, attraction, fluency, and style strength. We summarize the human evaluation results on the first three criteria in Table TABREF51, and the last criteria in Table TABREF57. Note that through automatic evaluation, the baselines NST, Fine-tuned, and Gigaword-MASS perform poorer than other methods (in Section SECREF58), thereby we removed them in human evaluation to save unnecessary work for human raters." ] ]
41d3750ae666ea5a9cea498ddfb973a8366cccd6
How is attraction score measured?
[ "annotators are asked how attractive the headlines are, Likert scale from 1 to 10 (integer values)" ]
[ [ "We randomly sampled 50 news abstracts from the test set and asked three native-speaker annotators for evaluation to score the generated headlines. Specifically, we conduct two tasks to evaluate on four criteria: (1) relevance, (2) attractiveness, (3) language fluency, and (4) style strength. For the first task, the human raters are asked to evaluate these outputs on the first three aspects, relevance, attractiveness, and language fluency on a Likert scale from 1 to 10 (integer values). For relevance, human annotators are asked to evaluate how semantically relevant the headline is to the news body. For attractiveness, annotators are asked how attractive the headlines are. For fluency, we ask the annotators to evaluate how fluent and readable the text is. After the collection of human evaluation results, we averaged the scores as the final score. In addition, we have another independent human evaluation task about the style strength – we present the generated headlines from TitleStylist and baselines to the human judges and let them choose the one that most conforms to the target style such as humor. Then we define the style strength score as the proportion of choices." ] ]
90b2154ec3723f770c74d255ddfcf7972fe136a2
How is presence of three target styles detected?
[ "human evaluation task about the style strength" ]
[ [ "We randomly sampled 50 news abstracts from the test set and asked three native-speaker annotators for evaluation to score the generated headlines. Specifically, we conduct two tasks to evaluate on four criteria: (1) relevance, (2) attractiveness, (3) language fluency, and (4) style strength. For the first task, the human raters are asked to evaluate these outputs on the first three aspects, relevance, attractiveness, and language fluency on a Likert scale from 1 to 10 (integer values). For relevance, human annotators are asked to evaluate how semantically relevant the headline is to the news body. For attractiveness, annotators are asked how attractive the headlines are. For fluency, we ask the annotators to evaluate how fluent and readable the text is. After the collection of human evaluation results, we averaged the scores as the final score. In addition, we have another independent human evaluation task about the style strength – we present the generated headlines from TitleStylist and baselines to the human judges and let them choose the one that most conforms to the target style such as humor. Then we define the style strength score as the proportion of choices." ] ]
f3766c6937a4c8c8d5e954b4753701a023e3da74
How is fluency automatically evaluated?
[ "fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs" ]
[ [ "Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation ::: Language Fluency", "We fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs." ] ]
2898e4aa7a3496c628e7ddf2985b48fb11aa3bba
What are the measures of "performance" used in this paper?
[ "test-set perplexity, likelihood convergence and clustering measures, visualizing the topic summaries, authors' topic distributions and by performing an automatic labeling task" ]
[ [ "We evaluate the TN topic model quantitatively with standard topic model measures such as test-set perplexity, likelihood convergence and clustering measures. Qualitatively, we evaluate the model by visualizing the topic summaries, authors' topic distributions and by performing an automatic labeling task. We compare our model with HDP-LDA, a nonparametric variant of the author-topic model (ATM), and the original random function network model. We also perform ablation studies to show the importance of each component in the model. The results of the comparison and ablation studies are shown in Table 1 . We use two tweets corpus for experiments, first is a subset of Twitter7 dataset BIBREF18 , obtained by querying with certain keywords (e.g. finance, sports, politics). we remove tweets that are not English with langid.py BIBREF19 and filter authors who do not have network information and who authored less than 100 tweets. The corpus consists of 60370 tweets by 94 authors. We then randomly select 90% of the dataset as training documents and use the rest for testing. Second tweets corpus is obtained from BIBREF20 , which contains a total of 781186 tweets. We note that we perform no word normalization to prevent any loss of meaning of the noisy text." ] ]
fa9df782d743ce0ce1a7a5de6a3de226a7e423df
What are the languages they consider in this paper?
[ "The languages considered were English, Chinese, German, Russian, Arabic, Spanish, French" ]
[ [ "To test this hypothesis we first trained UG-WGAN in English, Chinese and German following the procedure described in Section \"UG-WGAN\" . The embedding size of the table was 300 and the internal LSTM hidden size was 512. A dropout rate of $0.1$ was used and trained with the ADAM optimization method BIBREF23 . Since we are interested in the zero-shot capabilities of our representation, we trained our sentiment analysis model only on the english IMDB Large Movie Review dataset and tested it on the chinese ChnSentiCorp dataset and german SB-10K BIBREF24 , BIBREF25 . We binarize the label's for all the datasets.", "For this experiment we trained UG-WGAN on the English and Russian language following the procedure described in Section \"UG-WGAN\" . We kept the hyper-parameters equivalent to the Sentiment Analysis experiment. All of the NLI model tested were run over the fixed UG embeddings. We trained two different models from literature, Densely-Connected Recurrent and Co-Attentive Network by BIBREF30 and Multiway Attention Network by BIBREF31 . Please refer to this papers for further implementation details.", "One way to measure universality is by studying perplexity of our multi-lingual language model as we increase the number of languages. To do so we trained 6 UG-WGAN models on the following languages: English, Russian, Arabic, Chinese, German, Spanish, French. We maintain the same procedure as described above. The hidden size of the language model was increased to 1024 with 16K BPE tokens being used. The first model was trained on English Russian, second was trained on English Russian Arabic and so on. For arabic we still trained from left to right even though naturally the language is read from right to left. We report the results in Figure 5 . As the number of languages increases the gap between a UG-WGAN without any distribution matching and one with diminishes. This implies that the efficiency and representative power of UG-WGAN grows as we increase the number of languages it has to model." ] ]
6270d5247f788c4627be57de6cf30112560c863f
Did they experiment with tasks other than word problems in math?
[ "They experimented with sentiment analysis and natural language inference task" ]
[ [ "To test this hypothesis we first trained UG-WGAN in English, Chinese and German following the procedure described in Section \"UG-WGAN\" . The embedding size of the table was 300 and the internal LSTM hidden size was 512. A dropout rate of $0.1$ was used and trained with the ADAM optimization method BIBREF23 . Since we are interested in the zero-shot capabilities of our representation, we trained our sentiment analysis model only on the english IMDB Large Movie Review dataset and tested it on the chinese ChnSentiCorp dataset and german SB-10K BIBREF24 , BIBREF25 . We binarize the label's for all the datasets.", "A natural language inference task consists of two sentences; a premise and a hypothesis which are either contradictions, entailments or neutral. Learning a NLI task takes a certain nuanced understanding of language. Therefore it is of interest whether or not UG-WGAN captures the necessary linguistic features. For this task we use the Stanford NLI (sNLI) dataset as our training data in english BIBREF29 . To test the zero-shot learning capabilities we created a russian sNLI test set by random sampling 400 sNLI test samples and having a native russian speaker translate both premise and hypothesis to russian. The label was kept the same." ] ]
cb370692fe0beef90cdaa9c8e43a0aab6f0e117a
Do they report results only on English data?
[ "Unanswerable" ]
[ [] ]
d0c636fa9ef99c4f44ab39e837a680217b140269
Do the authors offer any hypothesis as to why the transformations sometimes disimproved performance?
[ "No" ]
[ [] ]
c47f593a5b92abc2e3c536fe2baaca226913688b
What preprocessing techniques are used in the experiments?
[ "See Figure FIGREF3" ]
[ [ "To explore how helpful these transformations are, we incorporated 20 simple transformations and 15 additional sequences of transformations in our experiment to see their effect on different type of metrics on four different ML models (See Figure FIGREF3 )." ] ]
c3a9732599849ba4a9f07170ce1e50867cf7d7bf
What state of the art models are used in the experiments?
[ "2) Naïve Bayes with SVM (NBSVM), 3) Extreme Gradient Boosting (XGBoost), 4) FastText algorithm with Bidirectional LSTM (FastText-BiLSTM)" ]
[ [ "We used four classification algorithms: 1) Logistic regression, which is conventionally used in sentiment classification. Other three algorithms which are relatively new and has shown great results on sentiment classification types of problems are: 2) Naïve Bayes with SVM (NBSVM), 3) Extreme Gradient Boosting (XGBoost) and 4) FastText algorithm with Bidirectional LSTM (FastText-BiLSTM)." ] ]
0fd678d24c86122b9ab27b73ef20216bbd9847d1
What evaluation metrics are used?
[ "Accuracy on each dataset and the average accuracy on all datasets." ]
[ [ "Table TABREF34 shows the performances of the different methods. From the table, we can see that the performances of most tasks can be improved with the help of multi-task learning. FS-MTL shows the minimum performance gain from multi-task learning since it puts all private and shared information into a unified space. SSP-MTL and PSP-MTL achieve similar performance and are outperformed by ASP-MTL which can better separate the task-specific and task-invariant features by using adversarial training. Our proposed models (SA-MTL and DA-MTL) outperform ASP-MTL because we model a richer representation from these 16 tasks. Compared to SA-MTL, DA-MTL achieves a further improvement of INLINEFORM0 accuracy with the help of the dynamic and flexible query vector. It is noteworthy that our models are also space efficient since the task-specific information is extracted by using only a query vector, instead of a BiLSTM layer in the shared-private models." ] ]
b556fd3a9e0cff0b33c63fa1aef3aed825f13e28
What dataset did they use?
[ "16 different datasets from several popular review corpora used in BIBREF20, CoNLL 2000 BIBREF22" ]
[ [ "We use 16 different datasets from several popular review corpora used in BIBREF20 . These datasets consist of 14 product review datasets and two movie review datasets.", "We use CoNLL 2000 BIBREF22 sequence labeling dataset for both POS Tagging and Chunking tasks. There are 8774 sentences in training data, 500 sentences in development data and 1512 sentences in test data. The average sentence length is 24 and has a total vocabulary size as 17k." ] ]
0db1ba66a7e75e91e93d78c31f877364c3724a65
What tasks did they experiment with?
[ "Sentiment Classification, Transferability of Shared Sentence Representation, Introducing Sequence Labeling as Auxiliary Task" ]
[ [ "Exp I: Sentiment Classification", "We first conduct a multi-task experiment on sentiment classification.", "We use 16 different datasets from several popular review corpora used in BIBREF20 . These datasets consist of 14 product review datasets and two movie review datasets.", "All the datasets in each task are partitioned randomly into training set, development set and testing set with the proportion of 70%, 10% and 20% respectively. The detailed statistics about all the datasets are listed in Table TABREF27 .", "Exp II: Transferability of Shared Sentence Representation ", "With attention mechanism, the shared sentence encoder in our proposed models can generate more generic task-invariant representations, which can be considered as off-the-shelf knowledge and then be used for unseen new tasks.", "Exp III: Introducing Sequence Labeling as Auxiliary Task", "A good sentence representation should include its linguistic information. Therefore, we incorporate sequence labeling task (such as POS Tagging and Chunking) as an auxiliary task into the multi-task learning framework, which is trained jointly with the primary tasks (the above 16 tasks of sentiment classification). The auxiliary task shares the sentence encoding layer with the primary tasks and connected to a private fully connected layer followed by a softmax non-linear layer to process every hidden state INLINEFORM0 and predicts the labels." ] ]
b44ce9aae8b1479820555b99ce234443168dc1fe
What multilingual parallel data is used for training proposed model?
[ "MultiUN BIBREF20, OpenSubtitles BIBREF21" ]
[ [ "We adopt the mixture of two multilingual translation corpus as our training data: MultiUN BIBREF20 and OpenSubtitles BIBREF21. MultiUN consists of 463,406 official documents in six languages, containing around 300M words for each language. OpenSubtitles is a corpus consisting of movie and TV subtitles, which contains 2.6B sentences over 60 languages. We select four shared languages of the two corpora: English, Spanish, Russian and Chinese. Statistics of the training corpus are shown in Table TABREF14. Sentences are tokenized by Wordpiece as in BERT. A multilingual vocabulary of 50K tokens is used. For validation and testing, we randomly sample 10000 sentences respectively from each language pair. The rest data are used for training. For monolingual pre-training, we use English Wikipedia corpus, which contains 2,500M words." ] ]
b9c0049a7a5639c33efdb6178c2594b8efdefabb
How much better are results of proposed model compared to pivoting method?
[ "our method outperforms the baseline in both relevance and fluency significantly." ]
[ [ "First we compare our models with the conventional pivoting method, i.e., round-trip translation. As shown in Figure FIGREF15 (a)(b), either the bilingual or the multilingual model is better than the baseline in terms of relevance and diversity in most cases. In other words, with the same generation diversity (measured by both Distinct-2 and Self-BLEU), our models can generate paraphrase with more semantically similarity to the input sentence.", "As shown in Table TABREF28, our method outperforms the baseline in both relevance and fluency significantly. We further calculate agreement (Cohen's kappa) between two annotators.", "Both round-trip translation and our method performs well as to fluency. But the huge gap of relevance between the two systems draw much attention of us. We investigate the test set in details and find that round-trip approach indeed generate more noise as shown in case studies." ] ]
99c50d51a428db09edaca0d07f4dab0503af1b94
What kind of YouTube video transcripts did they use?
[ "YouTube video transcripts on news covering different topics like technology, human rights, terrorism and politics" ]
[ [ "We focused evaluation over a small but diversified dataset composed by 10 YouTube videos in the English language in the news context. The selected videos cover different topics like technology, human rights, terrorism and politics with a length variation between 2 and 10 minutes. To encourage the diversity of content format we included newscasts, interviews, reports and round tables." ] ]
d1747b1b56fddb05bb1225e98fd3c4c043d74592
Which SBD systems did they compare?
[ "Convolutional Neural Network , bidirectional Recurrent Neural Network model with attention mechanism" ]
[ [ "To exemplify the INLINEFORM0 score we evaluated and compared the performance of two different SBD systems over a set of YouTube videos in a multi-reference enviroment. The first system (S1) employs a Convolutional Neural Network to determine if the middle word of a sliding window corresponds to a SU boundary or not BIBREF30 . The second approach (S2) by contrast, introduces a bidirectional Recurrent Neural Network model with attention mechanism for boundary detection BIBREF31 ." ] ]
5a29b1f9181f5809e2b0f97b4d0e00aea8996892
What makes it a more reliable metric?
[ "It takes into account the agreement between different systems" ]
[ [ "Results form Table TABREF31 may give an idea that INLINEFORM0 is just an scaled INLINEFORM1 . While it is true that they show a linear correlation, INLINEFORM2 may produce a different system ranking than INLINEFORM3 given the integral multi-reference principle it follows. However, what we consider the most profitable about INLINEFORM4 is the twofold inclusion of all available references it performs. First, the construction of INLINEFORM5 to provide a more inclusive reference against to whom be evaluated and then, the computation of INLINEFORM6 , which scales the result depending of the agreement between references.", "In this paper we presented WiSeBE, a semi-automatic multi-reference sentence boundary evaluation protocol based on the necessity of having a more reliable way for evaluating the SBD task. We showed how INLINEFORM0 is an inclusive metric which not only evaluates the performance of a system against all references, but also takes into account the agreement between them. According to your point of view, this inclusivity is very important given the difficulties that are present when working with spoken language and the possible disagreements that a task like SBD could provoke." ] ]
f5db12cd0a8cd706a232c69d94b2258596aa068c
How much is performance improved in experiments for models trained with generated adversarial examples?
[ "Answer with content missing: (Table 1) The performance of all the target models raises significantly, while that on the original\nexamples remain comparable (e.g. the overall accuracy of BERT on modified examples raises from 24.1% to 66.0% on Quora)" ]
[ [ "Adversarial training can often improve model robustness BIBREF25, BIBREF27. We also fine-tune the target models using adversarial training. At each training step, we train the model with a batch of original examples along with adversarial examples with balanced labels. The adversarial examples account for around 10% in a batch. During training, we generate adversarial examples with the current model as the target and update the model parameters with the hybrid batch iteratively. The beam size for generation is set to 1 to reduce the computation cost, since the generation success rate is minor in adversarial training. We evaluate the adversarially trained models, as shown in Table TABREF18.", "After adversarial training, the performance of all the target models raises significantly, while that on the original examples remain comparable. Note that since the focus of this paper is on model robustness which can hardly be reflected in original data, we do not expect performance improvement on original data. The results demonstrate that adversarial training with our adversarial examples can significantly improve the robustness we focus on without remarkably hurting the performance on original data. Moreover, although the adversarial example generation is constrained by a BERT language model, BiMPM and DIIN which do not use the BERT language model can also significantly benefit from the adversarial examples, further demonstrating the effectiveness of our method." ] ]
2c8d5e3941a6cc5697b242e64222f5d97dba453c
How dramatically do results drop for models on generated adversarial examples?
[ "BERT on Quora drops from 94.6% to 24.1%" ]
[ [ "After adversarial modifications, the performance of the original target models (those without the “-adv” suffix) drops dramatically (e.g. the overall accuracy of BERT on Quora drops from 94.6% to 24.1%), revealing that the target models are vulnerable to our adversarial examples. Particularly, even though our generation is constrained by a BERT language model, BERT is still vulnerable to our adversarial examples. These results demonstrate the effectiveness of our algorithm for generating adversarial examples and also revealing the corresponding robustness issues. Moreover, we present some generated adversarial examples in the appendix." ] ]
78102422a5dc99812739b8dd2541e4fdb5fe3c7a
What is discriminator in this generative adversarial setup?
[ " current model" ]
[ [ "Adversarial training can often improve model robustness BIBREF25, BIBREF27. We also fine-tune the target models using adversarial training. At each training step, we train the model with a batch of original examples along with adversarial examples with balanced labels. The adversarial examples account for around 10% in a batch. During training, we generate adversarial examples with the current model as the target and update the model parameters with the hybrid batch iteratively. The beam size for generation is set to 1 to reduce the computation cost, since the generation success rate is minor in adversarial training. We evaluate the adversarially trained models, as shown in Table TABREF18." ] ]
930c51b9f3936d936ee745716536a4b40f531c7f
What are the benchmark datasets for paraphrase identification?
[ "Quora, MRPC" ]
[ [ "We adopt the following two datasets:", "Quora BIBREF1: The Quora Question Pairs dataset contains question pairs annotated with labels indicating whether the two questions are paraphrases. We use the same dataset partition as BIBREF5, with 384,348/10,000/10,000 pairs in the training/development/test set respectively.", "MRPC BIBREF34: The Microsoft Research Paraphrase Corpus consists of sentence pairs collected from online news. Each pair is annotated with a label indicating whether the two sentences are semantically equivalent. There are 4,076/1,725 pairs in the training/test set respectively." ] ]
20eb673b01d202b731e7ba4f84efc10a18616dd3
What representations are presented by this paper?
[ "the number of speakers of each gender category, their speech duration" ]
[ [ "Following work by doukhan2018open, we wanted to explore the corpora looking at the number of speakers of each gender category as well as their speech duration, considering both variables as good features to account for gender representation. After the download, we manually extracted information about gender representation in each corpus." ] ]
e8e6719d531e7bef5d827ac92c7b1ab0b8ec3c8e
What corpus characteristics correlate with more equitable gender balance?
[ "Unanswerable" ]
[ [] ]
f6e5febf2ea53ec80135bbd532d6bb769d843dd8
What natural languages are represented in the speech resources studied?
[ "Unanswerable" ]
[ [] ]
4059c6f395640a6acf20a0ed451d0ad8681bc59b
How is the delta-softmax calculated?
[ "Answer with content missing: (Formula) Formula is the answer." ]
[ [ "We reason that, if a token is important for predicting the correct label, masking it will degrade the model's classification accuracy, or at least reduce its reported classification certainty. In SECREF36, it seems reasonable to assume that masking the word `To' has a greater impact on predicting the label purpose than masking the word `provide', and even less so, the following noun `information'. We therefore use reduction in softmax probability of the correct relation as our signaling strength metric for the model. We call this metric ${\\Delta }_s$ (for delta-softmax), which can be written as:", "where $rel$ is the true relation of the EDU pair, $t_i$ represents the token at index $i$ of $N$ tokens, and $X_{mask=i}$ represents the input sequence with the masked position $i$ (for $i \\in 1 \\ldots N$ ignoring separators, or $\\phi $, the empty set)." ] ]
99d7bef0ef395360b939a3f446eff67239551a9d
Are some models evaluated using this metric, and what are the findings?
[ "Yes" ]
[ [ "As a framework, we use the sentence classifier configuration of FLAIR BIBREF46 with a biLSTM encoder/classifier architecture fed by character and word level representations composed of a concatenation of fixed 300 dimensional GloVe embeddings BIBREF47, pre-trained contextualized FLAIR word embeddings, and pre-trained contextualized character embeddings from AllenNLP BIBREF48 with FLAIR's default hyperparameters. The model's architecture is shown in Figure FIGREF30." ] ]
a1097ce59270d6f521d92df8d2e3a279abee3e67
Where does the proposed metric differ from human judgement?
[ "model points out plausible signals which were passed over by an annotator, it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action" ]
[ [ "In other cases, the model points out plausible signals which were passed over by an annotator, and may be considered errors in the gold standard. For example, the model easily notices that question marks indicate the solutionhood relation, even where these were skipped by annotators in favor of marking WH words instead:", ". [RGB]230, 230, 230Which [RGB]230, 230, 230previous [RGB]230, 230, 230Virginia [RGB]230, 230, 230Governor(s) [RGB]230, 230, 230do [RGB]230, 230, 230you [RGB]230, 230, 230most [RGB]230, 230, 230admire [RGB]230, 230, 230and [RGB]230, 230, 230why [RGB]12, 12, 12? $\\xrightarrow[\\text{pred:solutionhood}]{\\text{gold:solutionhood}}$ [RGB]230, 230, 230Thomas [RGB]230, 230, 230Jefferson [RGB]183, 183, 183.", "Unsurprisingly, the model sometimes make sporadic errors in signal detection for which good explanations are hard to find, especially when its predicted relation is incorrect, as in SECREF43. Here the evaluative adjective remarkable is missed in favor of neighboring words such as agreed and a subject pronoun, which are not indicative of the evaluation relation in this context but are part of several cohorts of high scoring words. However, the most interesting and interpretable errors arise when ${\\Delta }_s$ scores are high compared to an entire document, and not just among words in one EDU pair, in which most or even all words may be relatively weak signals. As an example of such a false positive with high confidence, we can consider SECREF43. In this example, the model correctly assigns the highest score to the DM so marking a purpose relation. However, it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action, which contributes to the purpose reading and helps to disambiguate so, despite not being considered a signal by annotators." ] ]
56e58bdf0df76ad1599021801f6d4c7b77953e29
Where does the proposed metric overlap with human judgement?
[ "influence of each word on the score of the correct relation, that impact should and does still correlate with human judgments" ]
[ [ "Looking at the model's performance qualitatively, it is clear that it can detect not only DMs, but also morphological cues (e.g. gerunds as markers of elaboration, as in SECREF43), semantic classes and sentiment, such as positive and negative evaluatory terms in SECREF43, as well as multiple signals within the same EDU, as in SECREF43. In fact, only about 8.3% of the tokens correctly identified by the model in Table TABREF45 below are of the DM type, whereas about 7.2% of all tokens flagged by human annotators were DMs, meaning that the model frequently matches non-DM items to discourse relation signals (see Performance on Signal Types below). It should also be noted that signals can be recognized even when the model misclassifies relations, since ${\\Delta }_s$ does not rely on correct classification: it merely quantifies the contribution of a word in context toward the correct label's score. If we examine the influence of each word on the score of the correct relation, that impact should and does still correlate with human judgments based on what the system may tag as the second or third best class to choose." ] ]
e74ba39c35af53d3960be5a6c86eddd62cef859f
Which baseline performs best?
[ "IR methods perform better than the best neural models" ]
[ [ "Statistical significance is assessed for select results using bootstrapping with 1000 samples in each of 100 iterations. This test shows that conditioning on the introduction of the source document improves performance compared to conditioning on the abstract when using the SciGPT2 model. However, we see that IR methods perform better than the best neural models. We do not find enough evidence to reject the null hypothesis regarding what context from the cited document should be used." ] ]
458f3963387de57fdc182875c9ca3798b612b633
Which baselines are explored?
[ "GPT2, SciBERT model of BIBREF11" ]
[ [ "To fine-tune GPT2 for text generation, it is typical to concatenate the conditioning context $X = x_1 \\ldots x_n$ and citing sentence $Y = y_1 \\ldots y_m$ with a special separator token $\\mho $. The model learns to approximate next token probabilities for each index after $\\mho $:", "We measure the closeness of two pairs of documents by measuring cosine distances between vector representations of their content. The abstract of each document is embedded into a single dense vector by averaging the contextualized embeddings provided by the SciBERT model of BIBREF11 and normalizing. The distance between $(S,C)$ and candidate $(N_S,N_C)$ is computed as:" ] ]
69a88b6be3b34acc95c5e36acbe069c0a0bc67d6
What is the size of the corpus?
[ "8.1 million scientific documents, 154K computer science articles, 622K citing sentences" ]
[ [ "For source and cited documents, we use English-language computer science articles and annotation from the S2-GORC dataset BIBREF7. S2-GORC is a large citation graph dataset which includes full texts of 8.1 million scientific documents. We select a subset of 154K computer science articles as our corpus. From these, we extract 622K citing sentences that link back to other documents in our corpus. We hold 2500 examples for each of the validation and test sets. Detailed statistics can be found in Table TABREF4." ] ]
7befb7a8354fca9d2a94e3fd4364625c98067ebb
How was the evaluation corpus collected?
[ "Unanswerable" ]
[ [] ]
da1994421934082439e8fe5071a01d3d17b56601
Are any machine translation systems tried with these embeddings, and what is the performance?
[ "No" ]
[ [] ]
30c6d34b878630736f819fd898319ac4e71ee50b
Are any experiments performed to try this approach to word embeddings?
[ "Yes" ]
[ [ "In the literature, two main types of datasets are used for machine translation: Word-aligned data and Sentence-aligned data. The first one is basically a dictionary between the two languages, where there is a direct relation between same words in different languages. The second one has the relation between corresponding sentences in the two languages. We decided to start with the sentence aligned corpus, since it was more interesting to infer dependency from contexts among words. For our experiment we decided to use the Europarl dataset, using the data from the WMT11 .The Europarl parallel corpus is extracted from the proceedings of the European Parliament. It includes versions in 21 European languages: Romanic (French, Italian, Spanish, Portuguese, Romanian), Germanic (English, Dutch, German, Danish, Swedish), Slavik (Bulgarian, Czech, Polish, Slovak, Slovene), Finni-Ugric (Finnish, Hungarian, Estonian), Baltic (Latvian, Lithuanian), and Greek. For this experience, we used the English-French parallel corpus, which contains 2,007,723 sentences and the English-Italian corpus, that contains 1,909,115 sentences." ] ]
a4ff1b91643e0c8a0d4cc1502d25ca85995cf428
Which two datasets does the resource come from?
[ "two surveys by two groups - school students and meteorologists to draw on a map a polygon representing a given geographical descriptor" ]
[ [ "The resource is composed of data from two different surveys. In both surveys subjects were asked to draw on a map (displayed under a Mercator projection) a polygon representing a given geographical descriptor, in the context of the geography of Galicia in Northwestern Spain (see Fig. FIGREF1 ). However, the surveys were run with different purposes, and the subject groups that participated in each survey and the list of descriptors provided were accordingly different.", "The first survey was run in order to obtain a high number of responses to be used as an evaluation testbed for modeling algorithms. It was answered by 15/16 year old students in a high school in Pontevedra (located in Western Galicia). 99 students provided answers for a list of 7 descriptors (including cardinal points, coast, inland, and a proper name). Figure FIGREF2 shows a representation of the answers given by the students for “Northern Galicia” and a contour map that illustrates the percentages of overlapping answers.", "The second survey was addressed to meteorologists in the Galician Weather Agency BIBREF12 . Its purpose was to gather data to create fuzzy models that will be used in a future NLG system in the weather domain. Eight meteorologists completed the survey, which included a list of 24 descriptors. For instance, Figure FIGREF3 shows a representation of the answers given by the meteorologists for “Eastern Galicia” and a contour map that illustrates the percentage of overlapping answers." ] ]
544e29937e0c972abcdd27c953dc494b2376dd76
What model was used by the top team?
[ "Two different BERT models were developed" ]
[ [ "BIBREF9 Two different BERT models were developed. For Friends, pre-training was done using a sliding window of two utterances to provide dialogue context. Both Next Sentence Prediction (NSP) phase on the complete unlabeled scripts from all 10 seasons of Friends, which are available for download. In addition, the model learned the emotional disposition of each of six main six main characters in Friends (Rachel, Monica, Phoebe, Joey, Chandler and Ross) by adding a special token to represent the speaker. For EmotionPush, pre-training was performed on Twitter data, as it is similar in nature to chat based dialogues. In both cases, special attention was given to the class imbalance issue by applying “weighted balanced warming” on the loss function." ] ]
b8fdc600f9e930133bb3ec8fbcc9c600d60d24b0
What was the baseline?
[ "Unanswerable" ]
[ [] ]
bdc93ac1b8643617c966e91d09c01766f7503872
What is the size of the second dataset?
[ "1 000 labeled dialogues for training and 240 unlabeled dialogues for evaluation" ]
[ [ "The two datasets used for the challenge are Friends and EmotionPush, part of the EmotionLines corpus BIBREF4. The datasets contain English-language dialogues of varying lengths. For the competition, we provided 1,000 labeled dialogues from each dataset for training, and 240 unlabeled dialogues from each dataset for evaluation. The Friends dialogues are scripts taken from the American TV sitcom (1994-2004). The EmotionPush dialogues are from Facebook Messenger chats by real users which have been anonymized to ensure user privacy. For both datasets, dialogue lengths range from 5 to 24 lines each. A breakdown of the lengths of the dialogues is shown in Table ." ] ]
4ca0d52f655bb9b4bc25310f3a76c5d744830043
How large is the first dataset?
[ "1 000 labeled dialogues for training and 240 unlabeled dialogues for evaluation" ]
[ [ "The two datasets used for the challenge are Friends and EmotionPush, part of the EmotionLines corpus BIBREF4. The datasets contain English-language dialogues of varying lengths. For the competition, we provided 1,000 labeled dialogues from each dataset for training, and 240 unlabeled dialogues from each dataset for evaluation. The Friends dialogues are scripts taken from the American TV sitcom (1994-2004). The EmotionPush dialogues are from Facebook Messenger chats by real users which have been anonymized to ensure user privacy. For both datasets, dialogue lengths range from 5 to 24 lines each. A breakdown of the lengths of the dialogues is shown in Table ." ] ]
d2fbf34cf4b5b1fd82394124728b03003884409c
Who was the top-scoring team?
[ "IDEA" ]
[ [ "The submissions and the final results are summarized in Tables and . Two of the submissions did not follow up with technical papers and thus they do not appear in this summary. We note that the top-performing models used BERT, reflecting the recent state-of-the-art performance of this model in many NLP tasks. For Friends and EmotionPush the top micro-F1 scores were 81.5% and 88.5% respectively." ] ]
4c71ed7d30ee44cf85ffbd7756b985e32e8e07da
What supervised learning tasks are attempted with these representations?
[ "document categorization, regression tasks" ]
[ [ "In this paper, I introduce an interpretable word embedding model, and an associated topic model, which are designed to work well when trained on a small to medium-sized corpus of interest. The primary insight is to use a data-efficient parameter sharing scheme via mixed membership modeling, with inspiration from topic models. Mixed membership models provide a flexible yet efficient latent representation, in which entities are associated with shared, global representations, but to uniquely varying degrees. I identify the skip-gram word2vec model of BIBREF0 , BIBREF1 as corresponding to a certain naive Bayes topic model, which leads to mixed membership extensions, allowing the use of fewer vectors than words. I show that this leads to better modeling performance without big data, as measured by predictive performance (when the context is leveraged for prediction), as well as to interpretable latent representations that are highly valuable for computational social science applications. The interpretability of the representations arises from defining embeddings for words (and hence, documents) in terms of embeddings for topics. My experiments also shed light on the relative merits of training embeddings on generic big data corpora versus domain-specific data.", "I tested the performance of the representations as features for document categorization and regression tasks. The results are given in Table TABREF26 . For document categorization, I used three standard benchmark datasets: 20 Newsgroups (19,997 newsgroup posts), Reuters-150 newswire articles (15,500 articles and 150 classes), and Ohsumed medical abstracts on 23 cardiovascular diseases (20,000 articles). I held out 4,000 test documents for 20 Newsgroups, and used the standard train/test splits from the literature in the other corpora (e.g. for Ohsumed, 50% of documents were assigned to training and to test sets). I obtained document embeddings for the MMSG, in the same latent space as the topic embeddings, by summing the posterior mean vectors INLINEFORM0 for each token. Vector addition was similarly used to construct document vectors for the other embedding models. All vectors were normalized to unit length. I also considered a tf-idf baseline. Logistic regression models were trained on the features extracted on the training set for each method.", "I also analyzed the regression task of predicting the year of a state of the Union address based on its text information. I used lasso-regularized linear regression models, evaluated via a leave-one-out cross-validation experimental setup. Root-mean-square error (RMSE) results are reported in Table TABREF26 (bottom). Unlike for the other tasks, the Google big data vectors were the best individual features in this case, outperforming the domain-specific SG and MMSG embeddings individually. On the other hand, SG+MMSG+Google performed the best overall, showing that domain-specific embeddings can improve performance even when big data embeddings are successful. The tf-idf baseline was beaten by all of the embedding models on this task." ] ]