Dataset schema (per record):
  question_id : string (fixed length 40)
  question    : string (length 4 to 171 characters)
  answer      : sequence of reference answer strings
  evidence    : sequence of evidence-passage lists
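The records below follow this schema: a 40-character question_id, a free-text question, a sequence of reference answers, and a sequence of evidence-passage lists (empty when the question is unanswerable). The following is a minimal loading sketch using the Hugging Face `datasets` library; the repository id is a placeholder assumption, not the confirmed path of this dataset.

```python
# Minimal sketch of reading records with this schema. "org/paper-qa" is a
# hypothetical repository id used only for illustration; substitute the
# actual path of this dataset.
from datasets import load_dataset

ds = load_dataset("org/paper-qa", split="train")

for record in ds.select(range(3)):
    qid = record["question_id"]    # fixed-length 40-character identifier
    question = record["question"]  # free-text question (4-171 characters)
    answers = record["answer"]     # sequence of reference answer strings
    evidence = record["evidence"]  # one list of evidence passages per answer
    print(qid, question)
    print("  answers :", answers)
    print("  evidence passages:", sum(len(group) for group in evidence))
```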
08333e4dd1da7d6b5e9b645d40ec9d502823f5d7
How much performance gap between their approach and the strong handcrafted method?
[ "0.007 MAP on Task A, 0.032 MAP on Task B, 0.055 MAP on Task C" ]
[ [] ]
bc1bc92920a757d5ec38007a27d0f49cb2dde0d1
What is a strong feature-based method?
[ "Unanswerable" ]
[ [] ]
942eb1f7b243cdcfd47f176bcc71de2ef48a17c4
Did they experimnet in other languages?
[ "Yes" ]
[ [ "Moreover, our approach is also language independent. We have also performed preliminary experiments on the Arabic portion of the SemEval-2016 cQA task. The results are competitive with a hand-tuned strong baseline from SemEval-2015." ] ]
9bffc9a9c527e938b2a95ba60c483a916dbd1f6b
Do they use multi-attention heads?
[ "Yes" ]
[ [ "The attentional encoder layer is a parallelizable and interactive alternative of LSTM and is applied to compute the hidden states of the input embeddings. This layer consists of two submodules: the Multi-Head Attention (MHA) and the Point-wise Convolution Transformation (PCT)." ] ]
8434974090491a3c00eed4f22a878f0b70970713
How big is their model?
[ "Proposed model has 1.16 million parameters and 11.04 MB." ]
[ [ "To figure out whether the proposed AEN-GloVe is a lightweight alternative of recurrent models, we study the model size of each model on the Restaurant dataset. Statistical results are reported in Table TABREF37 . We implement all the compared models base on the same source code infrastructure, use the same hyperparameters, and run them on the same GPU ." ] ]
b67420da975689e47d3ea1c12b601851018c4071
How is their model different from BERT?
[ "overall architecture of the proposed Attentional Encoder Network (AEN), which mainly consists of an embedding layer, an attentional encoder layer, a target-specific attention layer, and an output layer." ]
[ [ "Figure FIGREF9 illustrates the overall architecture of the proposed Attentional Encoder Network (AEN), which mainly consists of an embedding layer, an attentional encoder layer, a target-specific attention layer, and an output layer. Embedding layer has two types: GloVe embedding and BERT embedding. Accordingly, the models are named AEN-GloVe and AEN-BERT." ] ]
01d91d356568fca79e47873bd0541bd22ba66ec0
What datasets were used?
[ "datasets given on the shared task, without using any additional external data" ]
[ [ "JESSI is trained using only the datasets given on the shared task, without using any additional external data. Despite this, JESSI performs second on Subtask A with an F1 score of 77.78% among 33 other team submissions. It also performs well on Subtask B with an F1 score of 79.59%." ] ]
37e45a3439b048a80c762418099a183b05772e6a
How did they do compared to other teams?
[ "second on Subtask A with an F1 score of 77.78% among 33 other team submissions, performs well on Subtask B with an F1 score of 79.59%" ]
[ [ "JESSI is trained using only the datasets given on the shared task, without using any additional external data. Despite this, JESSI performs second on Subtask A with an F1 score of 77.78% among 33 other team submissions. It also performs well on Subtask B with an F1 score of 79.59%." ] ]
a4e66e842be1438e5cd8d7cb2a2c589f494aee27
Which tested technique was the worst performer?
[ "Depeche + SVM" ]
[ [ "We computed bag-of-words-based benchmarks using the following methods:", "Classification with TF-IDF + Linear SVM (TF-IDF + SVM)", "Classification with Depeche++ Emotion lexicons BIBREF12 + Linear SVM (Depeche + SVM)", "Classification with NRC Emotion lexicons BIBREF13, BIBREF14 + Linear SVM (NRC + SVM)", "Combination of TF-IDF and NRC Emotion lexicons (TF-NRC + SVM)" ] ]
cb78e280e3340b786e81636431834b75824568c3
How many emotions do they look at?
[ "9" ]
[ [ "The final annotation categories for the dataset are: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Love, Disgust, Neutral." ] ]
2941874356e98eb2832ba22eae9cb08ec8ce0308
What are the baseline benchmarks?
[ "TF-IDF + SVM, Depeche + SVM, NRC + SVM, TF-NRC + SVM, Doc2Vec + SVM, Hierarchical RNN, BiRNN + Self-Attention, ELMo + BiRNN, Fine-tuned BERT" ]
[ [ "We computed bag-of-words-based benchmarks using the following methods:", "Classification with TF-IDF + Linear SVM (TF-IDF + SVM)", "Classification with Depeche++ Emotion lexicons BIBREF12 + Linear SVM (Depeche + SVM)", "Classification with NRC Emotion lexicons BIBREF13, BIBREF14 + Linear SVM (NRC + SVM)", "Combination of TF-IDF and NRC Emotion lexicons (TF-NRC + SVM)", "Benchmarks ::: Doc2Vec + SVM", "We also used simple classification models with learned embeddings. We trained a Doc2Vec model BIBREF15 using the dataset and used the embedding document vectors as features for a linear SVM classifier.", "Benchmarks ::: Hierarchical RNN", "For this benchmark, we considered a Hierarchical RNN, following BIBREF16. We used two BiLSTMs BIBREF17 with 256 units each to model sentences and documents. The tokens of a sentence were processed independently of other sentence tokens. For each direction in the token-level BiLSTM, the last outputs were concatenated and fed into the sentence-level BiLSTM as inputs.", "The outputs of the BiLSTM were connected to 2 dense layers with 256 ReLU units and a Softmax layer. We initialized tokens with publicly available embeddings trained with GloVe BIBREF18. Sentence boundaries were provided by SpaCy. Dropout was applied to the dense hidden layers during training.", "Benchmarks ::: Bi-directional RNN and Self-Attention (BiRNN + Self-Attention)", "One challenge with RNN-based solutions for text classification is finding the best way to combine word-level representations into higher-level representations.", "Self-attention BIBREF19, BIBREF20, BIBREF21 has been adapted to text classification, providing improved interpretability and performance. We used BIBREF20 as the basis of this benchmark.", "The benchmark used a layered Bi-directional RNN (60 units) with GRU cells and a dense layer. Both self-attention layers were 60 units in size and cross-entropy was used as the cost function.", "Note that we have omitted the orthogonal regularizer term, since this dataset is relatively small compared to the traditional datasets used for training such a model. We did not observe any significant performance gain while using the regularizer term in our experiments.", "Benchmarks ::: ELMo embedding and Bi-directional RNN (ELMo + BiRNN)", "Deep Contextualized Word Representations (ELMo) BIBREF22 have shown recent success in a number of NLP tasks. The unsupervised nature of the language model allows it to utilize a large amount of available unlabelled data in order to learn better representations of words.", "We used the pre-trained ELMo model (v2) available on Tensorhub for this benchmark. We fed the word embeddings of ELMo as input into a one layer Bi-directional RNN (16 units) with GRU cells (with dropout) and a dense layer. Cross-entropy was used as the cost function.", "Benchmarks ::: Fine-tuned BERT", "Bidirectional Encoder Representations from Transformers (BERT) BIBREF11 has achieved state-of-the-art results on several NLP tasks, including sentence classification.", "We used the fine-tuning procedure outlined in the original work to adapt the pre-trained uncased BERT$_\\textrm {{\\scriptsize LARGE}}$ to a multi-class passage classification task. This technique achieved the best result among our benchmarks, with an average micro-F1 score of 60.4%." ] ]
4e50e9965059899d15d3c3a0c0a2d73e0c5802a0
What is the size of this dataset?
[ "9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words" ]
[ [ "The dataset contains a total of 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words." ] ]
67d8e50ddcc870db71c94ad0ad7f8a59a6c67ca6
How many annotators were there?
[ "3 " ]
[ [ "We required all annotators have a `master' MTurk qualification. Each passage was labelled by 3 unique annotators. Only passages with a majority agreement between annotators were accepted as valid. This is equivalent to a Fleiss's $\\kappa $ score of greater than $0.4$." ] ]
aecb485ea7d501094e50ad022ade4f0c93088d80
Can SCRF be used to pretrain the model?
[ "No" ]
[ [ "One of the major drawbacks of SCRF models is their high computational cost. In our experiments, the CTC model is around 3–4 times faster than the SRNN model that uses the same RNN encoder. The joint model by multitask learning is slightly more expensive than the stand-alone SRNN model. To cut down the computational cost, we investigated if CTC can be used to pretrain the RNN encoder to speed up the training of the joint model. This is analogous to sequence training of HMM acoustic models, where the network is usually pretrained by the frame-level CE criterion. Figure 2 shows the convergence curves of the joint model with and without CTC pretraining, and we see pretraining indeed improves the convergence speed of the joint model." ] ]
2fea3c955ff78220b2c31a8ad1322bc77f6706f8
What conclusions are drawn from the syntactic analysis?
[ " our method enables to control the morphological realization of first and second-person pronouns, together with verbs and adjectives related to them" ]
[ [ "We highlight the problem of translating between languages with different morphological systems, in which the target translation must contain gender and number information that is not available in the source. We propose a method for injecting such information into a pre-trained NMT model in a black-box setting. We demonstrate the effectiveness of this method by showing an improvement of 2.3 BLEU in an English-to-Hebrew translation setting where the speaker and audience gender can be inferred. We also perform a fine-grained syntactic analysis that shows how our method enables to control the morphological realization of first and second-person pronouns, together with verbs and adjectives related to them. In future work we would like to explore automatic generation of the injected context, or the use of cross-sentence context to infer the injected information." ] ]
faa4f28a2f2968cecb770d9379ab2cfcaaf5cfab
What type of syntactic analysis is performed?
[ "Speaker's Gender Effects, Interlocutors' Gender and Number Effects" ]
[ [ "The BLEU score is an indication of how close the automated translation is to the reference translation, but does not tell us what exactly changed concerning the gender and number properties we attempt to control. We perform a finer-grained analysis focusing on the relation between the injected speaker and audience information, and the morphological realizations of the corresponding elements. We parse the translations and the references using a Hebrew dependency parser. In addition to the parse structure, the parser also performs morphological analysis and tagging of the individual tokens. We then perform the following analysis.", "Speaker's Gender Effects: We search for first-person singular pronouns with subject case (ani, unmarked for gender, corresponding to the English I), and consider the gender of its governing verb (or adjectives in copular constructions such as `I am nice'). The possible genders are `masculine', `feminine' and `both', where the latter indicates a case where the none-diacriticized written form admits both a masculine and a feminine reading. We expect the gender to match the ones requested in the prefix.", "Interlocutors' Gender and Number Effects: We search for second-person pronouns and consider their gender and number. For pronouns in subject position, we also consider the gender and number of their governing verbs (or adjectives in copular constructions). For a singular audience, we expect the gender and number to match the requested ones. For a plural audience, we expect the masculine-plural forms." ] ]
da068b20988883bc324e55c073fb9c1a5c39be33
How is it demonstrated that the correct gender and number information is injected using this system?
[ " correct information substantially improves it - we see an increase of up to 2.3 BLEU over the baseline, Finally, the “She said” prefixes substantially increase the number of feminine-marked verbs, bringing the proportion much closer to that of the reference" ]
[ [ "We compare the different conditions by comparing BLEU BIBREF5 with respect to the reference Hebrew translations. We use the multi-bleu.perl script from the Moses toolkit BIBREF6 . Table shows BLEU scores for the different prefixes. The numbers match our expectations: Generally, providing an incorrect speaker and/or audience information decreases the BLEU scores, while providing the correct information substantially improves it - we see an increase of up to 2.3 BLEU over the baseline. We note the BLEU score improves in all cases, even when given the wrong gender of either the speaker or the audience. We hypothesise this improvement stems from the addition of the word “said” which hints the model to generate a more “spoken” language which matches the tested scenario. Providing correct information for both speaker and audience usually helps more than providing correct information to either one of them individually. The one outlier is providing “She” for the speaker and “her” for the audience. While this is not the correct scenario, we hypothesise it gives an improvement in BLEU as it further reinforces the female gender in the sentence.", "Results: Speaker. Figure FIGREF3 shows the result for controlling the morphological properties of the speaker ({he, she, I} said). It shows the proportion of gender-inflected verbs for the various conditions and the reference. We see that the baseline system severely under-predicts the feminine form of verbs as compared to the reference. The “He said” conditions further decreases the number of feminine verbs, while the “I said” conditions bring it back to the baseline level. Finally, the “She said” prefixes substantially increase the number of feminine-marked verbs, bringing the proportion much closer to that of the reference (though still under-predicting some of the feminine cases)." ] ]
0d6d5b6c00551dd0d2519f117ea81d1e9e8785ec
Which neural machine translation system is used?
[ "Google's machine translation system (GMT)" ]
[ [ "To demonstrate our method in a black-box setting, we focus our experiments on Google's machine translation system (GMT), accessed through its Cloud API. To test the method on real-world sentences, we consider a monologue from the stand-up comedy show “Sarah Silverman: A Speck of Dust”. The monologue consists of 1,244 English sentences, all by a female speaker conveyed to a plural, gender-neutral audience. Our parallel corpora consists of the 1,244 English sentences from the transcript, and their corresponding Hebrew translations based on the Hebrew subtitles. We translate the monologue one sentence at a time through the Google Cloud API. Eyeballing the results suggest that most of the translations use the incorrect, but default, masculine and singular forms for the speaker and the audience, respectively. We expect that by adding the relevant condition of “female speaking to an audience” we will get better translations, affecting both the gender of the speaker and the number of the audience." ] ]
edcde2b675cf8a362a63940b2bbdf02c150fe01f
What are the components of the black-box context injection system?
[ "supply an NMT system with knowledge regarding the speaker and interlocutor of first-person sentences" ]
[ [ "Our goal is to supply an NMT system with knowledge regarding the speaker and interlocutor of first-person sentences, in order to produce the desired target-side morphology when the information is not available in the source sentence. The approach we take in the current work is that of black-box injection, in which we attempt to inject knowledge to the input in order to influence the output of a trained NMT system, without having access to its internals or its training procedure as proposed by vanmassenhove-hardmeier-way:2018:EMNLP.", "To verify this, we experiment with translating the sentences with the following variations: No Prefix—The baseline translation as returned by the GMT system. “He said:”—Signaling a male speaker. We expect to further skew the system towards masculine forms. “She said:”—Signaling a female speaker and unknown audience. As this matches the actual speaker's gender, we expect an improvement in translation of first-person pronouns and verbs with first-person pronouns as subjects. “I said to them:”—Signaling an unknown speaker and plural audience. “He said to them:”—Masculine speaker and plural audience. “She said to them:”—Female speaker and plural audience—the complete, correct condition. We expect the best translation accuracy on this setup. “He/she said to him/her”—Here we set an (incorrect) singular gender-marked audience, to investigate our ability to control the audience morphology." ] ]
d20d6c8ecd7cb0126479305d27deb0c8b642b09f
What normalization techniques are mentioned?
[ "FBanks with cepstral mean normalization (CMN), variance with mean normalization (CMVN)" ]
[ [ "We explored different normalization techniques. FBanks with cepstral mean normalization (CMN) perform better than raw FBanks. We found using variance with mean normalization (CMVN) unnecessary for the task. Using deltas and delta-deltas improves model, so we used them in other experiments. Models trained with spectrogram features converge slower and to worse minimum, but the difference when using CMN is not very big compared to FBanks." ] ]
11e6b79f1f48ddc6c580c4d0a3cb9bcb42decb17
What features do they experiment with?
[ "40 mel-scaled log filterbank enegries (FBanks) computed every 10 ms with 25 ms window, deltas and delta-deltas (120 features in vector), spectrogram" ]
[ [ "All models are trained with CTC-loss. Input features are 40 mel-scaled log filterbank enegries (FBanks) computed every 10 ms with 25 ms window, concatenated with deltas and delta-deltas (120 features in vector). We also tried to use spectrogram and experimented with different normalization techniques." ] ]
2677b88c2def3ed94e25a776599555a788d197f2
Which architecture is their best model?
[ "6-layer bLSTM with 1024 hidden units" ]
[ [ "To train our best model we chose the best network from our experiments (6-layer bLSTM with 1024 hidden units), trained it with Adam optimizer and fine-tuned with SGD with momentum using exponential learning rate decay. The best model trained with speed and volume perturbation BIBREF24 achieved 45.8% WER, which is the best published end-to-end result on Babel Turkish dataset using in-domain data. For comparison, WER of model trained using in-domain data in BIBREF18 is 53.1%, using 4 additional languages (including English Switchboard dataset) – 48.7%. It is also not far from Kaldi DNN-HMM system BIBREF22 with 43.8% WER." ] ]
8ca31caa34cc5b65dc1d01d0d1f36bf8c4928805
What kind of spontaneous speech is used?
[ "Unanswerable" ]
[ [] ]
9ab43f941c11a4b09a0e4aea61b4a5b4612e7933
What approach did previous models use for multi-span questions?
[ "Only MTMSM specifically tried to tackle the multi-span questions. Their approach consisted of two parts: first train a dedicated categorical variable to predict the number of spans to extract and the second was to generalize the single-span head method of extracting a span" ]
[ [ "MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. Their approach consisted of two parts. The first was to train a dedicated categorical variable to predict the number of spans to extract. The second was to generalize the single-span head method of extracting a span, by utilizing the non-maximum suppression (NMS) algorithm BIBREF7 to find the most probable set of non-overlapping spans. The number of spans to extract was determined by the aforementioned categorical variable." ] ]
5a02a3dd26485a4e4a77411b50b902d2bda3731b
How they use sequence tagging to answer multi-span questions?
[ "To model an answer which is a collection of spans, the multi-span head uses the $\\mathtt {BIO}$ tagging format BIBREF8: $\\mathtt {B}$ is used to mark the beginning of a span, $\\mathtt {I}$ is used to mark the inside of a span and $\\mathtt {O}$ is used to mark tokens not included in a span" ]
[ [ "To model an answer which is a collection of spans, the multi-span head uses the $\\mathtt {BIO}$ tagging format BIBREF8: $\\mathtt {B}$ is used to mark the beginning of a span, $\\mathtt {I}$ is used to mark the inside of a span and $\\mathtt {O}$ is used to mark tokens not included in a span. In this way, we get a sequence of chunks that can be decoded to a final answer - a collection of spans." ] ]
579941de2838502027716bae88e33e79e69997a6
What is difference in peformance between proposed model and state-of-the art on other question types?
[ "For single-span questions, the proposed LARGE-SQUAD improve performance of the MTMSNlarge baseline for 2.1 EM and 1.55 F1.\nFor number type question, MTMSNlarge baseline have improvement over LARGE-SQUAD for 3,11 EM and 2,98 F1. \nFor date question, LARGE-SQUAD have improvements in 2,02 EM but MTMSNlarge have improvement of 4,39 F1." ]
[ [] ]
9a65cfff4d99e4f9546c72dece2520cae6231810
What is the performance of proposed model on entire DROP dataset?
[ "The proposed model achieves EM 77,63 and F1 80,73 on the test and EM 76,95 and F1 80,25 on the dev" ]
[ [ "Table TABREF25 shows the results on DROP's test set, with our model being the best overall as of the time of writing, and not just on multi-span questions." ] ]
a9def7958eac7b9a780403d4f136927f756bab83
What is the previous model that attempted to tackle multi-span questions as a part of its design?
[ "MTMSN BIBREF4" ]
[ [ "MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. Their approach consisted of two parts. The first was to train a dedicated categorical variable to predict the number of spans to extract. The second was to generalize the single-span head method of extracting a span, by utilizing the non-maximum suppression (NMS) algorithm BIBREF7 to find the most probable set of non-overlapping spans. The number of spans to extract was determined by the aforementioned categorical variable." ] ]
547be35cff38028648d199ad39fb48236cfb99ee
How much more data does the model trained using XR loss have access to, compared to the fully supervised model?
[ "Unanswerable" ]
[ [] ]
47a30eb4d0d6f5f2ff4cdf6487265a25c1b18fd8
Does the system trained only using XR loss outperform the fully supervised neural system?
[ "Yes" ]
[ [] ]
e42fbf6c183abf1c6c2321957359c7683122b48e
How accurate is the aspect based sentiment classifier trained only using the XR loss?
[ "BiLSTM-XR-Dev Estimation accuracy is 83.31 for SemEval-15 and 87.68 for SemEval-16.\nBiLSTM-XR accuracy is 83.31 for SemEval-15 and 88.12 for SemEval-16.\n" ]
[ [] ]
e574f0f733fb98ecef3c64044004aa7a320439be
How is the expectation regularization loss defined?
[ "DISPLAYFORM0" ]
[ [ "Since INLINEFORM0 is constant, we only need to minimize INLINEFORM1 , therefore the loss function becomes: DISPLAYFORM0" ] ]
b65b1c366c8bcf544f1be5710ae1efc6d2b1e2f1
What were the non-neural baselines used for the task?
[ "The Lemming model in BIBREF17" ]
[ [ "BIBREF17: The Lemming model is a log-linear model that performs joint morphological tagging and lemmatization. The model is globally normalized with the use of a second order linear-chain CRF. To efficiently calculate the partition function, the choice of lemmata are pruned with the use of pre-extracted edit trees." ] ]
bd3ccb63fd8ce5575338d7332e96def7a3fabad6
Which publicly available NLU dataset is used?
[ "ROMULUS dataset, NLU-Benchmark dataset" ]
[ [ "We tested the system on two datasets, different in size and complexity of the addressed language.", "Experimental Evaluation ::: Datasets ::: NLU-Benchmark dataset", "The first (publicly available) dataset, NLU-Benchmark (NLU-BM), contains $25,716$ utterances annotated with targeted Scenario, Action, and involved Entities. For example, “schedule a call with Lisa on Monday morning” is labelled to contain a calendar scenario, where the set_event action is instantiated through the entities [event_name: a call with Lisa] and [date: Monday morning]. The Intent is then obtained by concatenating scenario and action labels (e.g., calendar_set_event). This dataset consists of multiple home assistant task domains (e.g., scheduling, playing music), chit-chat, and commands to a robot BIBREF7.", "Experimental Evaluation ::: Datasets ::: ROMULUS dataset", "The second dataset, ROMULUS, is composed of $1,431$ sentences, for each of which dialogue acts, semantic frames, and corresponding frame elements are provided. This dataset is being developed for modelling user utterances to open-domain conversational systems for robotic platforms that are expected to handle different interaction situations/patterns – e.g., chit-chat, command interpretation. The corpus is composed of different subsections, addressing heterogeneous linguistic phenomena, ranging from imperative instructions (e.g., “enter the bedroom slowly, turn left and turn the lights off ”) to complex requests for information (e.g., “good morning I want to buy a new mobile phone is there any shop nearby?”) or open-domain chit-chat (e.g., “nope thanks let's talk about cinema”). A considerable number of utterances in the dataset is collected through Human-Human Interaction studies in robotic domain ($\\approx $$70\\%$), though a small portion has been synthetically generated for balancing the frame distribution." ] ]
7c794fa0b2818d354ca666969107818a2ffdda0c
What metrics other than entity tagging are compared?
[ "We also report the metrics in BIBREF7 for consistency, we report the span F1, Exact Match (EM) accuracy of the entire sequence of labels, metric that combines intent and entities" ]
[ [ "Following BIBREF7, we then evaluated a metric that combines intent and entities, computed by simply summing up the two confusion matrices (Table TABREF23). Results highlight the contribution of the entity tagging task, where HERMIT outperforms the other approaches. Paired-samples t-tests were conducted to compare the HERMIT combined F1 against the other systems. The statistical analysis shows a significant improvement over Rasa $[Z=-2.803, p = .005]$, Dialogflow $[Z=-2.803, p = .005]$, LUIS $[Z=-2.803, p = .005]$ and Watson $[Z=-2.803, p = .005]$.", "In this section we report the experiments performed on the ROMULUS dataset (Table TABREF27). Together with the evaluation metrics used in BIBREF7, we report the span F1, computed using the CoNLL-2000 shared task evaluation script, and the Exact Match (EM) accuracy of the entire sequence of labels. It is worth noticing that the EM Combined score is computed as the conjunction of the three individual predictions – e.g., a match is when all the three sequences are correct.", "Results in terms of EM reflect the complexity of the different tasks, motivating their position within the hierarchy. Specifically, dialogue act identification is the easiest task ($89.31\\%$) with respect to frame ($82.60\\%$) and frame element ($79.73\\%$), due to the shallow semantics it aims to catch. However, when looking at the span F1, its score ($89.42\\%$) is lower than the frame element identification task ($92.26\\%$). What happens is that even though the label set is smaller, dialogue act spans are supposed to be longer than frame element ones, sometimes covering the whole sentence. Frame elements, instead, are often one or two tokens long, that contribute in increasing span based metrics. Frame identification is the most complex task for several reasons. First, lots of frame spans are interlaced or even nested; this contributes to increasing the network entropy. Second, while the dialogue act label is highly related to syntactic structures, frame identification is often subject to the inherent ambiguity of language (e.g., get can evoke both Commerce_buy and Arriving). We also report the metrics in BIBREF7 for consistency. For dialogue act and frame tasks, scores provide just the extent to which the network is able to detect those labels. In fact, the metrics do not consider any span information, essential to solve and evaluate our tasks. However, the frame element scores are comparable to the benchmark, since the task is very similar." ] ]
1ef5fc4473105f1c72b4d35cf93d312736833d3d
Do they provide decision sequences as supervision while training models?
[ "No" ]
[ [ "The key idea behind our proposed interactive MRC (iMRC) is to restrict the document context that a model observes at one time. Concretely, we split a supporting document into its component sentences and withhold these sentences from the model. Given a question, the model must issue commands to observe sentences in the withheld set; we equip models with actions such as Ctrl+F (search for token) and stop for searching through partially observed documents. A model searches iteratively, conditioning each command on the input question and the sentences it has observed previously. Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL)." ] ]
5f9bd99a598a4bbeb9d2ac46082bd3302e961a0f
What are the models evaluated on?
[ "They evaluate F1 score and agent's test performance on their own built interactive datasets (iSQuAD and iNewsQA)" ]
[ [ "We build the iSQuAD and iNewsQA datasets based on SQuAD v1.1 BIBREF0 and NewsQA BIBREF1. Both original datasets share similar properties. Specifically, every data-point consists of a tuple, $\\lbrace p, q, a\\rbrace $, where $p$ represents a paragraph, $q$ a question, and $a$ is the answer. The answer is a word span defined by head and tail positions in $p$. NewsQA is more difficult than SQuAD because it has a larger vocabulary, more difficult questions, and longer source documents.", "iMRC: Making MRC Interactive ::: Evaluation Metric", "Since iMRC involves both MRC and RL, we adopt evaluation metrics from both settings. First, as a question answering task, we use $\\text{F}_1$ score to compare predicted answers against ground-truth, as in previous works. When there exist multiple ground-truth answers, we report the max $\\text{F}_1$ score. Second, mastering multiple games remains quite challenging for RL agents. Therefore, we evaluate an agent's performance during both its training and testing phases. During training, we report training curves averaged over 3 random seeds. During test, we follow common practice in supervised learning tasks where we report the agent's test performance corresponding to its best validation performance ." ] ]
b2fab9ffbcf1d6ec6d18a05aeb6e3ab9a4dbf2ae
How do they train models in this setup?
[ "Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL)." ]
[ [ "The key idea behind our proposed interactive MRC (iMRC) is to restrict the document context that a model observes at one time. Concretely, we split a supporting document into its component sentences and withhold these sentences from the model. Given a question, the model must issue commands to observe sentences in the withheld set; we equip models with actions such as Ctrl+F (search for token) and stop for searching through partially observed documents. A model searches iteratively, conditioning each command on the input question and the sentences it has observed previously. Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL)." ] ]
e9cf1b91f06baec79eb6ddfd91fc5d434889f652
What commands does their setup provide to models seeking information?
[ "previous, next, Ctrl+F $<$query$>$, stop" ]
[ [ "Information Gathering: At step $t$ of the information gathering phase, the agent can issue one of the following four actions to interact with the paragraph $p$, where $p$ consists of $n$ sentences and where the current observation corresponds to sentence $s_k,~1 \\le k \\le n$:", "previous: jump to $ \\small {\\left\\lbrace \\begin{array}{ll} s_n & \\text{if $k = 1$,}\\\\ s_{k-1} & \\text{otherwise;} \\end{array}\\right.} $", "next: jump to $ \\small {\\left\\lbrace \\begin{array}{ll} s_1 & \\text{if $k = n$,}\\\\ s_{k+1} & \\text{otherwise;} \\end{array}\\right.} $", "Ctrl+F $<$query$>$: jump to the sentence that contains the next occurrence of “query”;", "stop: terminate information gathering phase." ] ]
6976296126e4a5c518e6b57de70f8dc8d8fde292
What models do they propose?
[ "Feature Concatenation Model (FCM), Spatial Concatenation Model (SCM), Textual Kernels Model (TKM)" ]
[ [ "The objective of this work is to build a hate speech detector that leverages both textual and visual data and detects hate speech publications based on the context given by both data modalities. To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM). All of them are CNN+RNN models with three inputs: the tweet image, the tweet text and the text appearing in the image (if any)." ] ]
53640834d68cf3b86cf735ca31f1c70aa0006b72
Are all tweets in English?
[ "Unanswerable" ]
[ [] ]
b2b0321b0aaf58c3aa9050906ade6ef35874c5c1
How large is the dataset?
[ " $150,000$ tweets" ]
[ [ "Existing hate speech datasets contain only textual data. Moreover, a reference benchmark does not exists. Most of the published datasets are crawled from Twitter and distributed as tweet IDs but, since Twitter removes reported user accounts, an important amount of their hate tweets is no longer accessible. We create a new manually annotated multimodal hate speech dataset formed by $150,000$ tweets, each one of them containing text and an image. We call the dataset MMHS150K, and made it available online . In this section, we explain the dataset creation steps." ] ]
4e9684fd68a242cb354fa6961b0e3b5c35aae4b6
What is the results of multimodal compared to unimodal models?
[ "Unimodal LSTM vs Best Multimodal (FCM)\n- F score: 0.703 vs 0.704\n- AUC: 0.732 vs 0.734 \n- Mean Accuracy: 68.3 vs 68.4 " ]
[ [ "Table TABREF31 shows the F-score, the Area Under the ROC Curve (AUC) and the mean accuracy (ACC) of the proposed models when different inputs are available. $TT$ refers to the tweet text, $IT$ to the image text and $I$ to the image. It also shows results for the LSTM, for the Davison method proposed in BIBREF7 trained with MMHS150K, and for random scores. Fig. FIGREF32 shows the Precision vs Recall plot and the ROC curve (which plots the True Positive Rate vs the False Positive Rate) of the different models." ] ]
2e632eb5ad611bbd16174824de0ae5efe4892daf
What is author's opinion on why current multimodal models cannot outperform models analyzing only text?
[ "Noisy data, Complexity and diversity of multimodal relations, Small set of multimodal examples" ]
[ [ "Despite the model trained only with images proves that they are useful for hate speech detection, the proposed multimodal models are not able to improve the detection compared to the textual models. Besides the different architectures, we have tried different training strategies, such as initializing the CNN weights with a model already trained solely with MMHS150K images or using dropout to force the multimodal models to use the visual information. Eventually, though, these models end up using almost only the text input for the prediction and producing very similar results to those of the textual models. The proposed multimodal models, such as TKM, have shown good performance in other tasks, such as VQA. Next, we analyze why they do not perform well in this task and with this data:", "[noitemsep,leftmargin=*]", "Noisy data. A major challenge of this task is the discrepancy between annotations due to subjective judgement. Although this affects also detection using only text, its repercussion is bigger in more complex tasks, such as detection using images or multimodal detection.", "Complexity and diversity of multimodal relations. Hate speech multimodal publications employ a lot of background knowledge which makes the relations between visual and textual elements they use very complex and diverse, and therefore difficult to learn by a neural network.", "Small set of multimodal examples. Fig. FIGREF5 shows some of the challenging multimodal hate examples that we aimed to detect. But although we have collected a big dataset of $150K$ tweets, the subset of multimodal hate there is still too small to learn the complex multimodal relations needed to identify multimodal hate." ] ]
d1ff6cba8c37e25ac6b261a25ea804d8e58e09c0
What metrics are used to benchmark the results?
[ "F-score, Area Under the ROC Curve (AUC), mean accuracy (ACC), Precision vs Recall plot, ROC curve (which plots the True Positive Rate vs the False Positive Rate)" ]
[ [ "Table TABREF31 shows the F-score, the Area Under the ROC Curve (AUC) and the mean accuracy (ACC) of the proposed models when different inputs are available. $TT$ refers to the tweet text, $IT$ to the image text and $I$ to the image. It also shows results for the LSTM, for the Davison method proposed in BIBREF7 trained with MMHS150K, and for random scores. Fig. FIGREF32 shows the Precision vs Recall plot and the ROC curve (which plots the True Positive Rate vs the False Positive Rate) of the different models." ] ]
24c0f3d6170623385283dfda7f2b6ca2c7169238
How is data collected, manual collection or Twitter api?
[ "Twitter API" ]
[ [ "We used the Twitter API to gather real-time tweets from September 2018 until February 2019, selecting the ones containing any of the 51 Hatebase terms that are more common in hate speech tweets, as studied in BIBREF9. We filtered out retweets, tweets containing less than three words and tweets containing porn related terms. From that selection, we kept the ones that included images and downloaded them. Twitter applies hate speech filters and other kinds of content control based on its policy, although the supervision is based on users' reports. Therefore, as we are gathering tweets from real-time posting, the content we get has not yet passed any filter." ] ]
21a9f1cddd7cb65d5d48ec4f33fe2221b2a8f62e
How many tweats does MMHS150k contains, 150000?
[ "$150,000$ tweets" ]
[ [ "Existing hate speech datasets contain only textual data. Moreover, a reference benchmark does not exists. Most of the published datasets are crawled from Twitter and distributed as tweet IDs but, since Twitter removes reported user accounts, an important amount of their hate tweets is no longer accessible. We create a new manually annotated multimodal hate speech dataset formed by $150,000$ tweets, each one of them containing text and an image. We call the dataset MMHS150K, and made it available online . In this section, we explain the dataset creation steps." ] ]
a0ef0633d8b4040bf7cdc5e254d8adf82c8eed5e
What unimodal detection models were used?
[ " single layer LSTM with a 150-dimensional hidden state for hate / not hate classification" ]
[ [ "We train a single layer LSTM with a 150-dimensional hidden state for hate / not hate classification. The input dimensionality is set to 100 and GloVe BIBREF26 embeddings are used as word input representations. Since our dataset is not big enough to train a GloVe word embedding model, we used a pre-trained model that has been trained in two billion tweets. This ensures that the model will be able to produce word embeddings for slang and other words typically used in Twitter. To process the tweets text before generating the word embeddings, we use the same pipeline as the model authors, which includes generating symbols to encode Twitter special interactions such as user mentions (@user) or hashtags (#hashtag). To encode the tweet text and input it later to multimodal models, we use the LSTM hidden state after processing the last tweet word. Since the LSTM has been trained for hate speech classification, it extracts the most useful information for this task from the text, which is encoded in the hidden state after inputting the last tweet word." ] ]
b0799e26152197aeb3aa3b11687a6cc9f6c31011
What different models for multimodal detection were proposed?
[ "Feature Concatenation Model (FCM), Spatial Concatenation Model (SCM), Textual Kernels Model (TKM)" ]
[ [ "The objective of this work is to build a hate speech detector that leverages both textual and visual data and detects hate speech publications based on the context given by both data modalities. To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM). All of them are CNN+RNN models with three inputs: the tweet image, the tweet text and the text appearing in the image (if any)." ] ]
4ce4db7f277a06595014db181342f8cb5cb94626
What annotations are available in the dataset - tweat used hate speach or not?
[ "No attacks to any community, racist, sexist, homophobic, religion based attacks, attacks to other communities" ]
[ [ "We annotate the gathered tweets using the crowdsourcing platform Amazon Mechanical Turk. There, we give the workers the definition of hate speech and show some examples to make the task clearer. We then show the tweet text and image and we ask them to classify it in one of 6 categories: No attacks to any community, racist, sexist, homophobic, religion based attacks or attacks to other communities. Each one of the $150,000$ tweets is labeled by 3 different workers to palliate discrepancies among workers." ] ]
62a6382157d5f9c1dce6e6c24ac5994442053002
What were the evaluation metrics used?
[ "accuracy, normalized mutual information" ]
[ [ "The clustering performance is evaluated by comparing the clustering results of texts with the tags/labels provided by the text corpus. Two metrics, the accuracy (ACC) and the normalized mutual information metric (NMI), are used to measure the clustering performance BIBREF38 , BIBREF48 . Given a text INLINEFORM0 , let INLINEFORM1 and INLINEFORM2 be the obtained cluster label and the label provided by the corpus, respectively. Accuracy is defined as: DISPLAYFORM0" ] ]
9e04730907ad728d62049f49ac828acb4e0a1a2a
What were their performance results?
[ "On SearchSnippets dataset ACC 77.01%, NMI 62.94%, on StackOverflow dataset ACC 51.14%, NMI 49.08%, on Biomedical dataset ACC 43.00%, NMI 38.18%" ]
[ [] ]
5a0841cc0628e872fe473874694f4ab9411a1d10
By how much did they outperform the other methods?
[ "on SearchSnippets dataset by 6.72% in ACC, by 6.94% in NMI; on Biomedical dataset by 5.77% in ACC, 3.91% in NMI" ]
[ [] ]
a5dd569e6d641efa86d2c2b2e970ce5871e0963f
Which popular clustering methods did they experiment with?
[ "K-means, Skip-thought Vectors, Recursive Neural Network and Paragraph Vector based clustering methods" ]
[ [ "In our experiment, some widely used text clustering methods are compared with our approach. Besides K-means, Skip-thought Vectors, Recursive Neural Network and Paragraph Vector based clustering methods, four baseline clustering methods are directly based on the popular unsupervised dimensionality reduction methods as described in Section SECREF11 . We further compare our approach with some other non-biased neural networks, such as bidirectional RNN. More details are listed as follows:" ] ]
785c054f6ea04701f4ab260d064af7d124260ccc
What datasets did they use?
[ "SearchSnippets, StackOverflow, Biomedical" ]
[ [ "We test our proposed approach on three public short text datasets. The summary statistics and semantic topics of these datasets are described in Table TABREF24 and Table TABREF25 .", "SearchSnippets. This dataset was selected from the results of web search transaction using predefined phrases of 8 different domains by Phan et al. BIBREF41 .", "StackOverflow. We use the challenge data published in Kaggle.com. The raw dataset consists 3,370,528 samples through July 31st, 2012 to August 14, 2012. In our experiments, we randomly select 20,000 question titles from 20 different tags as in Table TABREF25 .", "Biomedical. We use the challenge data published in BioASQ's official website. In our experiments, we randomly select 20, 000 paper titles from 20 different MeSH major topics as in Table TABREF25 . As described in Table TABREF24 , the max length of selected paper titles is 53." ] ]
3f6610d1d68c62eddc2150c460bf1b48a064e5e6
Does pre-training on general text corpus improve performance?
[ "No" ]
[ [ "Our attempt at language pre-training fell short of our expectations in all but one tested dataset. We had hoped that more stable language understanding would improve results in general. As previously mentioned, using more general and comprehensive corpora of language could help grow semantic ability.", "Our pre-training was unsuccessful in improving accuracy, even when applied to networks larger than those reported. We may need to use more inclusive language, or pre-train on very math specific texts to be successful. Our results support our thesis of infix limitation." ] ]
4c854d33a832f3f729ce73b206ff90677e131e48
What neural configurations are explored?
[ "tried many configurations of our network models, but report results with only three configurations, Transformer Type 1, Transformer Type 2, Transformer Type 3" ]
[ [ "We compare medium-sized, small, and minimal networks to show if network size can be reduced to increase training and testing efficiency while retaining high accuracy. Networks over six layers have shown to be non-effective for this task. We tried many configurations of our network models, but report results with only three configurations of Transformers.", "Transformer Type 1: This network is a small to medium-sized network consisting of 4 Transformer layers. Each layer utilizes 8 attention heads with a depth of 512 and a feed-forward depth of 1024.", "Transformer Type 2: The second model is small in size, using 2 Transformer layers. The layers utilize 8 attention heads with a depth of 256 and a feed-forward depth of 1024.", "Transformer Type 3: The third type of model is minimal, using only 1 Transformer layer. This network utilizes 8 attention heads with a depth of 256 and a feed-forward depth of 512." ] ]
163c15da1aa0ba370a00c5a09294cd2ccdb4b96d
Are the Transformers masked?
[ "Yes" ]
[ [ "We calculate the loss in training according to a mean of the sparse categorical cross-entropy formula. Sparse categorical cross-entropy BIBREF23 is used for identifying classes from a feature set, which assumes a large target classification set. Evaluation between the possible translation classes (all vocabulary subword tokens) and the produced class (predicted token) is the metric of performance here. During each evaluation, target terms are masked, predicted, and then compared to the masked (known) value. We adjust the model's loss according to the mean of the translation accuracy after predicting every determined subword in a translation." ] ]
90dd5c0f5084a045fd6346469bc853c33622908f
How is this problem evaluated?
[ "BLEU-2, average accuracies over 3 test trials on different randomly sampled test sets" ]
[ [ "Approach ::: Method: Training and Testing ::: Experiment 1: Representation", "Some of the problems encountered by prior approaches seem to be attributable to the use of infix notation. In this experiment, we compare translation BLEU-2 scores to spot the differences in representation interpretability. Traditionally, a BLEU score is a metric of translation quality BIBREF24. Our presented BLEU scores represent an average of scores a given model received over each of the target test sets. We use a standard bi-gram weight to show how accurate translations are within a window of two adjacent terms. After testing translations, we calculate an average BLEU-2 score per test set, which is related to the success over that data. An average of the scores for each dataset become the presented value.", "Approach ::: Method: Training and Testing ::: Experiment 2: State-of-the-art", "This experiment compares our networks to recent previous work. We count a given test score by a simple “correct versus incorrect\" method. The answer to an expression directly ties to all of the translation terms being correct, which is why we do not consider partial precision. We compare average accuracies over 3 test trials on different randomly sampled test sets from each MWP dataset. This calculation more accurately depicts the generalization of our networks." ] ]
095888f6e10080a958d9cd3f779a339498f3a109
What datasets do they use?
[ "AI2 BIBREF2, CC BIBREF19, IL BIBREF4, MAWPS BIBREF20" ]
[ [ "We work with four individual datasets. The datasets contain addition, subtraction, multiplication, and division word problems.", "AI2 BIBREF2. AI2 is a collection of 395 addition and subtraction problems, containing numeric values, where some may not be relevant to the question.", "CC BIBREF19. The Common Core dataset contains 600 2-step questions. The Cognitive Computation Group at the University of Pennsylvania gathered these questions.", "IL BIBREF4. The Illinois dataset contains 562 1-step algebra word questions. The Cognitive Computation Group compiled these questions also.", "MAWPS BIBREF20. MAWPS is a relatively large collection, primarily from other MWP datasets. We use 2,373 of 3,915 MWPs from this set. The problems not used were more complex problems that generate systems of equations. We exclude such problems because generating systems of equations is not our focus." ] ]
57e783f00f594e08e43a31939aedb235c9d5a102
What evaluation metrics were used?
[ "AUC-ROC" ]
[ [ "A two-step validation was conducted for English-speaking customers. An initial A/B testing for the LR model in a production setting was performed to compare the labelling strategies. A second offline comparison of the models was conducted on historical data and a selected labelling strategy. One month of data and a subset of the customers was used (approx. eighty thousand). The sampled dataset presents a fraction of positive labels of approximately 0.5 for reuse and 0.2 for one-day return. Importantly, since this evaluation is done on a subset of users, the dataset characteristic's do not necessarily represent real production traffic. The joke corpus in this dataset contains thousands of unique jokes of different categories (sci-fi, sports, etc) and types (puns, limerick, etc). The dataset was split timewise into training/validation/test sets, and hyperparameters were optimized to maximize the AUC-ROC on the validation set. As a benchmark, we also consider two additional methods: a non-personalized popularity model and one that follows BIBREF16, replacing the transformer joke encoder with a CNN network (the specialized loss and other characteristics of the DL model are kept)." ] ]
9646fa1abbe3102a0364f84e0a55d107d45c97f0
Where did the real production data come from?
[ " jokes of different categories (sci-fi, sports, etc) and types (puns, limerick, etc)" ]
[ [ "A two-step validation was conducted for English-speaking customers. An initial A/B testing for the LR model in a production setting was performed to compare the labelling strategies. A second offline comparison of the models was conducted on historical data and a selected labelling strategy. One month of data and a subset of the customers was used (approx. eighty thousand). The sampled dataset presents a fraction of positive labels of approximately 0.5 for reuse and 0.2 for one-day return. Importantly, since this evaluation is done on a subset of users, the dataset characteristic's do not necessarily represent real production traffic. The joke corpus in this dataset contains thousands of unique jokes of different categories (sci-fi, sports, etc) and types (puns, limerick, etc). The dataset was split timewise into training/validation/test sets, and hyperparameters were optimized to maximize the AUC-ROC on the validation set. As a benchmark, we also consider two additional methods: a non-personalized popularity model and one that follows BIBREF16, replacing the transformer joke encoder with a CNN network (the specialized loss and other characteristics of the DL model are kept)." ] ]
29983f4bc8a5513a198755e474361deee93d4ab6
What feedback labels are used?
[ "five-minute reuse and one-day return" ]
[ [ "The proposed models use binary classifiers to perform point-wise ranking, and therefore require a labelled dataset. To generate it, we explore two implicit user-feedback labelling strategies: five-minute reuse and one-day return. Online A/B testing is used to determine if these labelling strategies are suited to optimize the desired user-satisfaction metrics, and offline data to evaluated and compared the system's performance." ] ]
6c0f97807cd83a94a4d26040286c6f89c4a0f8e0
What representations for textual documents do they use?
[ "finite sequence of terms" ]
[ [ "A document $d$ can be defined as a finite sequence of terms (independent textual entities within a document, for example, words), namely $d=(t_1,t_2,\\dots ,t_n)$. A general idea is to associate weight to each term $t_i$ within $d$, such that" ] ]
13ca4bf76565564c8ec3238c0cbfacb0b41e14d2
Which dataset(s) do they use?
[ "14 TDs, BIBREF15" ]
[ [ "We have used a dataset of 14 TDs to conduct our experiments. There are several subjects on which their content is based: (aliens, stories, law, news) BIBREF15." ] ]
70797f66d96aa163a3bee2be30a328ba61c40a18
How do they evaluate knowledge extraction performance?
[ "SRCC" ]
[ [ "The results in Table TABREF7 show that SRCC performs much better in knowledge extraction. The two documents' contents contain the same idea expressed by terms in a different order that John had asked Mary to marry him before she left. It is obvious that cosine similarity cannot recognize this association, but SRCC has successfully recognized it and produced a similarity value of -0.285714." ] ]
71f2b368228a748fd348f1abf540236568a61b07
What is CamemBERT trained on?
[ "unshuffled version of the French OSCAR corpus" ]
[ [ "We use the unshuffled version of the French OSCAR corpus, which amounts to 138GB of uncompressed text and 32.7B SentencePiece tokens." ] ]
d3d4eef047aa01391e3e5d613a0f1f786ae7cfc7
Which tasks does CamemBERT not improve on?
[ "its performance still lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa" ]
[ [ "Experiments ::: Results ::: Natural Language Inference: XNLI", "On the XNLI benchmark, CamemBERT obtains improved performance over multilingual language models on the TRANSLATE-TRAIN setting (81.2 vs. 80.2 for XLM) while using less than half the parameters (110M vs. 250M). However, its performance still lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa. It should be noted that CamemBERT uses far fewer parameters than RoBERTa (110M vs. 355M parameters)." ] ]
63723c6b398100bba5dc21754451f503cb91c9b8
What is the state of the art?
[ "POS and DP task: CONLL 2018\nNER task: (no extensive work) Strong baselines CRF and BiLSTM-CRF\nNLI task: mBERT or XLM (not clear from text)" ]
[ [ "We will compare to the more recent cross-lingual language model XLM BIBREF12, as well as the state-of-the-art CoNLL 2018 shared task results with predicted tokenisation and segmentation in an updated version of the paper.", "In French, no extensive work has been done due to the limited availability of NER corpora. We compare our model with the strong baselines settled by BIBREF49, who trained both CRF and BiLSTM-CRF architectures on the FTB and enhanced them using heuristics and pretrained word embeddings.", "In the TRANSLATE-TRAIN setting, we report the best scores from previous literature along with ours. BiLSTM-max is the best model in the original XNLI paper, mBERT which has been reported in French in BIBREF52 and XLM (MLM+TLM) is the best-presented model from BIBREF50." ] ]
5471766ca7c995dd7f0f449407902b32ac9db269
How much better was results of CamemBERT than previous results on these tasks?
[ "2.36 point increase in the F1 score with respect to the best SEM architecture, on the TRANSLATE-TRAIN setting (81.2 vs. 80.2 for XLM), lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa, For POS tagging, we observe error reductions of respectively 0.71% for GSD, 0.81% for Sequoia, 0.7% for Spoken and 0.28% for ParTUT, For parsing, we observe error reductions in LAS of 2.96% for GSD, 3.33% for Sequoia, 1.70% for Spoken and 1.65% for ParTUT" ]
[ [ "CamemBERT also demonstrates higher performances than mBERT on those tasks. We observe a larger error reduction for parsing than for tagging. For POS tagging, we observe error reductions of respectively 0.71% for GSD, 0.81% for Sequoia, 0.7% for Spoken and 0.28% for ParTUT. For parsing, we observe error reductions in LAS of 2.96% for GSD, 3.33% for Sequoia, 1.70% for Spoken and 1.65% for ParTUT.", "On the XNLI benchmark, CamemBERT obtains improved performance over multilingual language models on the TRANSLATE-TRAIN setting (81.2 vs. 80.2 for XLM) while using less than half the parameters (110M vs. 250M). However, its performance still lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa. It should be noted that CamemBERT uses far fewer parameters than RoBERTa (110M vs. 355M parameters).", "For named entity recognition, our experiments show that CamemBERT achieves a slightly better precision than the traditional CRF-based SEM architectures described above in Section SECREF25 (CRF and Bi-LSTM+CRF), but shows a dramatic improvement in finding entity mentions, raising the recall score by 3.5 points. Both improvements result in a 2.36 point increase in the F1 score with respect to the best SEM architecture (BiLSTM-CRF), giving CamemBERT the state of the art for NER on the FTB. One other important finding is the results obtained by mBERT. Previous work with this model showed increased performance in NER for German, Dutch and Spanish when mBERT is used as contextualised word embedding for an NER-specific model BIBREF48, but our results suggest that the multilingual setting in which mBERT was trained is simply not enough to use it alone and fine-tune it for French NER, as it shows worse performance than even simple CRF models, suggesting that monolingual models could be better at NER." ] ]
dc49746fc98647445599da9d17bc004bafdc4579
Was CamemBERT compared against multilingual BERT on these tasks?
[ "Yes" ]
[ [ "To demonstrate the value of building a dedicated version of BERT for French, we first compare CamemBERT to the multilingual cased version of BERT (designated as mBERT). We then compare our models to UDify BIBREF36. UDify is a multitask and multilingual model based on mBERT that is near state-of-the-art on all UD languages including French for both POS tagging and dependency parsing." ] ]
8720c096c8b990c7b19f956ee4930d5f2c019e2b
How long was CamemBERT trained?
[ "Unanswerable" ]
[ [] ]
b573b36936ffdf1d70e66f9b5567511c989b46b2
What data is used for training CamemBERT?
[ "unshuffled version of the French OSCAR corpus" ]
[ [ "We use the unshuffled version of the French OSCAR corpus, which amounts to 138GB of uncompressed text and 32.7B SentencePiece tokens." ] ]
bf25a202ac713a34e09bf599b3601058d9cace46
What are the state of the art measures?
[ "Randomwalk, Walktrap, Louvain clustering" ]
[ [ "As Garimella et al. BIBREF23 have made their code public , we reproduced their best method Randomwalk on our datasets and measured the AUC ROC, obtaining a score of 0.935. An interesting finding was that their method had a poor performance over their own datasets. This was due to the fact (already explained in Section SECREF4) that it was not possible to retrieve the complete discussions, moreover, in no case could we restore more than 50% of the tweets. So we decided to remove these discussions and measure again the AUC ROC of this method, obtaining a 0.99 value. Our hypothesis is that the performance of that method was seriously hurt by the incompleteness of the data. We also tested our method on these datasets, obtaining a 0.99 AUC ROC with Walktrap and 0.989 with Louvain clustering." ] ]
abebf9c8c9cf70ae222ecb1d3cabf8115b9fc8ac
What controversial topics are experimented with?
[ "political events such as elections, corruption cases or justice decisions" ]
[ [ "To define the amount of data needed to run our method we established that the Fasttext model has to predict at least one user of each community with a probability greater or equal than 0.9 during ten different trainings. If that is not the case, we are not able to use DPC method. This decision made us consider only a subset of the datasets used in BIBREF23, because due to the time elapsed since their work, many tweets had been deleted and consequently the volume of the data was not enough for our framework. To enlarge our experiment base we added new debates, more detailed information about each one is shown in Table TABREF24 in UNKREF6. To select new discussions and to determine if they are controversial or not we looked for topics widely covered by mainstream media, and that have generated ample discussion, both online and offline. For non-controversy discussions we focused on “soft news\" and entertainment, but also to events that, while being impactful and/or dramatic, did not generate large controversies. To validate that intuition, we manually checked a sample of tweets, being unable to identify any clear instance of controversy On the other side, for controversial debates we focused on political events such as elections, corruption cases or justice decisions." ] ]
2df910c9806f0c379d7bb1bc2be2610438e487dc
What datasets did they use?
[ "BIBREF32, BIBREF23, BIBREF33, discussions in four different languages: English, Portuguese, Spanish and French, occurring in five regions over the world: South and North America, Western Europe, Central and Southern Asia. " ]
[ [ "The method we propose to measure the controversy equates in accuracy the one developed by Garimella et al.BIBREF23 and improves considerably computing time and robustness wrt the amount of data needed to effectively apply it. Our method is also based on a graph approach but it has its main focus on the vocabulary. We first train an NLP classifier that estimates opinion polarity of main users, then we run label-propagation BIBREF31 on the endorsement graph to get polarity of the whole network. Finally we compute the controversy score through a computation inspired in Dipole Moment, a measure used in physics to estimate electric polarity on a system. In our experiments we use the same data-sets from other works BIBREF32, BIBREF23, BIBREF33 as well as other datasets that we collected by us using a similar criterion (described in Section SECREF4).", "In this section we detail the discussions we use to test our metric and how we determine the ground truth (i.e. if the discussion is controversial or not). We use thirty different discussions that took place between March 2015 and June 2019, half of them with controversy and half without it. We considered discussions in four different languages: English, Portuguese, Spanish and French, occurring in five regions over the world: South and North America, Western Europe, Central and Southern Asia. We also studied these discussions taking first 140 characters and then 280 from each tweet to analyze the difference in performance and computing time wrt the length of the posts." ] ]
a2a3af59f3f18a28eb2ca7055e1613948f395052
What social media platform is observed?
[ "Twitter" ]
[ [ "Having this in mind and if we draw from the premise that when a discussion has a high controversy it is in general due to the presence of two principal communities fighting each other (or, conversely, that when there is no controversy there is just one principal community the members of which share a common point of view), we can measure the controversy by detecting if the discussion has one or two principal jargons in use. Our method is tested on Twitter datasets. This microblogging platform has been widely used to analyze discussions and polarization BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF2. It is a natural choice for these kind of problems, as it represents one of the main fora for public debate in online social media BIBREF15, it is a common destination for affiliative expressions BIBREF16 and is often used to report and read news about current events BIBREF17. An extra advantage of Twitter for this kind of studies is the availability of real-time data generated by millions of users. Other social media platforms offer similar data-sharing services, but few can match the amount of data and the accompanied documentation provided by Twitter. One last asset of Twitter for our work is given by retweets, whom typically indicate endorsement BIBREF18 and hence become a useful concept to model discussions as we can set “who is with who\". However, our method has a general approach and it could be used a priori in any social network. In this work we report excellent result tested on Twitter but in future work we are going to test it in other social networks." ] ]
d92f1c15537b33b32bfc436e6d017ae7d9d6c29a
How many languages do they experiment with?
[ "four different languages: English, Portuguese, Spanish and French" ]
[ [ "In this section we detail the discussions we use to test our metric and how we determine the ground truth (i.e. if the discussion is controversial or not). We use thirty different discussions that took place between March 2015 and June 2019, half of them with controversy and half without it. We considered discussions in four different languages: English, Portuguese, Spanish and French, occurring in five regions over the world: South and North America, Western Europe, Central and Southern Asia. We also studied these discussions taking first 140 characters and then 280 from each tweet to analyze the difference in performance and computing time wrt the length of the posts." ] ]
fa3663567c48c27703e09c42930e51bacfa54905
What is the current SOTA for sentiment analysis on Twitter at the time of writing?
[ "deep convolutional networks BIBREF53 , BIBREF54" ]
[ [ "Supervised learning. Traditionally, the above features were fed into classifiers such as Maximum Entropy (MaxEnt) and Support Vector Machines (SVM) with various kernels. However, observation over the SemEval Twitter sentiment task in recent years shows growing interest in, and by now clear dominance of methods based on deep learning. In particular, the best-performing systems at SemEval-2015 and SemEval-2016 used deep convolutional networks BIBREF53 , BIBREF54 . Conversely, kernel machines seem to be less frequently used than in the past, and the use of learning methods other than the ones mentioned above is at this point scarce. All these models are examples of supervised learning as they need labeled training data." ] ]
7997b9971f864a504014110a708f215c84815941
What difficulties does sentiment analysis on Twitter have, compared to sentiment analysis in other domains?
[ "Tweets noisy nature, use of creative spelling and punctuation, misspellings, slang, new words, URLs, and genre-specific terminology and abbreviations, short (length limited) text" ]
[ [ "Pre-processing. Tweets are subject to standard preprocessing steps for text such as tokenization, stemming, lemmatization, stop-word removal, and part-of-speech tagging. Moreover, due to their noisy nature, they are also processed using some Twitter-specific techniques such as substitution/removal of URLs, of user mentions, of hashtags, and of emoticons, spelling correction, elongation normalization, abbreviation lookup, punctuation removal, detection of amplifiers and diminishers, negation scope detection, etc. For this, one typically uses Twitter-specific NLP tools such as part-of-speech and named entity taggers, syntactic parsers, etc. BIBREF47 , BIBREF48 , BIBREF49 .", "Despite all these opportunities, the rise of social media has also presented new challenges for natural language processing (NLP) applications, which had largely relied on NLP tools tuned for formal text genres such as newswire, and thus were not readily applicable to the informal language and style of social media. That language proved to be quite challenging with its use of creative spelling and punctuation, misspellings, slang, new words, URLs, and genre-specific terminology and abbreviations, e.g., RT for re-tweet and #hashtags. In addition to the genre difference, there is also a difference in length: social media messages are generally short, often length-limited by design as in Twitter, i.e., a sentence or a headline rather than a full document. How to handle such challenges has only recently been the subject of thorough research BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 ." ] ]
0d1408744651c3847469c4a005e4a9dccbd89cf1
What are the metrics to evaluate sentiment analysis on Twitter?
[ "Unanswerable" ]
[ [] ]
a3d83c2a1b98060d609e7ff63e00112d36ce2607
How many sentence transformations on average are available per unique sentence in dataset?
[ "27.41 transformation on average of single seed sentence is available in dataset." ]
[ [ "In the second round, we collected 293 annotations from 12 annotators. After Korektor, there are 4262 unique sentences (including 150 seed sentences) that form the COSTRA 1.0 dataset. Statistics of individual annotators are available in tab:statistics." ] ]
aeda22ae760de7f5c0212dad048e4984cd613162
What annotations are available in the dataset?
[ "For each source sentence, transformation sentences that are transformed according to some criteria (paraphrase, minimal change etc.)" ]
[ [ "We asked for two distinct paraphrases of each sentence because we believe that a good sentence embedding should put paraphrases close together in vector space.", "Several modification types were specifically selected to constitute a thorough test of embeddings. In different meaning, the annotators should create a sentence with some other meaning using the same words as the original sentence. Other transformations which should be difficult for embeddings include minimal change, in which the sentence meaning should be significantly changed by using only very small modification, or nonsense, in which words of the source sentence should be shuffled so that it is grammatically correct, but without any sense." ] ]
d5fa26a2b7506733f3fa0973e2fe3fc1bbd1a12d
How are possible sentence transformations represented in dataset, as new sentences?
[ "Yes, as new sentences." ]
[ [ "In the second round, we collected 293 annotations from 12 annotators. After Korektor, there are 4262 unique sentences (including 150 seed sentences) that form the COSTRA 1.0 dataset. Statistics of individual annotators are available in tab:statistics." ] ]
2d536961c6e1aec9f8491e41e383dc0aac700e0a
What are all 15 types of modifications ilustrated in the dataset?
[ "- paraphrase 1\n- paraphrase 2\n- different meaning\n- opposite meaning\n- nonsense\n- minimal change\n- generalization\n- gossip\n- formal sentence\n- non-standard sentence\n- simple sentence\n- possibility\n- ban\n- future\n- past" ]
[ [ "We selected 15 modifications types to collect COSTRA 1.0. They are presented in annotationinstructions." ] ]
18482658e0756d69e39a77f8fcb5912545a72b9b
Is this dataset publicly available?
[ "Yes" ]
[ [ "The corpus is freely available at the following link:", "http://hdl.handle.net/11234/1-3123" ] ]
9d336c4c725e390b6eba8bb8fe148997135ee981
Are some baseline models trained on this dataset?
[ "Yes" ]
[ [ "We embedded COSTRA sentences with LASER BIBREF15, the method that performed very well in revealing linear relations in BaBo2019. Having browsed a number of 2D visualizations (PCA and t-SNE) of the space, we have to conclude that visually, LASER space does not seem to exhibit any of the desired topological properties discussed above, see fig:pca for one example." ] ]
016b59daa84269a93ce821070f4f5c1a71752a8a
Do they do any analysis of of how the modifications changed the starting set of sentences?
[ "Yes" ]
[ [ "The lack of semantic relations in the LASER space is also reflected in vector similarities, summarized in similarities. The minimal change operation substantially changed the meaning of the sentence, and yet the embedding of the transformation lies very closely to the original sentence (average similarity of 0.930). Tense changes and some form of negation or banning also keep the vectors very similar." ] ]
771b373d09e6eb50a74fffbf72d059ad44e73ab0
How do they introduce language variation?
[ " we were looking for original and uncommon sentence change suggestions" ]
[ [ "We acquired the data in two rounds of annotation. In the first one, we were looking for original and uncommon sentence change suggestions. In the second one, we collected sentence alternations using ideas from the first round. The first and second rounds of annotation could be broadly called as collecting ideas and collecting data, respectively." ] ]
efb52bda7366d2b96545cf927f38de27de3b5b77
Do they use external resources to make modifications to sentences?
[ "No" ]
[ [] ]
1a7d28c25bb7e7202230e1b70a885a46dac8a384
How big is dataset domain-specific embedding are trained on?
[ "Unanswerable" ]
[ [] ]
6bc45d4f908672945192390642da5a2760971c40
How big is unrelated corpus universal embedding is traned on?
[ "Unanswerable" ]
[ [] ]
48cc41c372d44b69a477998be449f8b81384786b
How better are state-of-the-art results than this model?
[ "we achieve better results than GCN+PADG but without any use of domain-specific hand-crafted features, RegSum achieves a similar ROUGE-2 score" ]
[ [ "Table TABREF32 depicts models producing 100 words summaries, all depending on hand-crafted features. We use as baselines FreqSum BIBREF22 ; TsSum BIBREF23 ; traditional graph-based approaches such as Cont. LexRank BIBREF9 ; Centroid BIBREF24 ; CLASSY04 BIBREF25 ; its improved version CLASSY11 BIBREF26 and the greedy model GreedyKL BIBREF27. All of these models are significantly underperforming compared to SemSentSum. In addition, we include state-of-the-art models : RegSum BIBREF0 and GCN+PADG BIBREF3. We outperform both in terms of ROUGE-1. For ROUGE-2 scores we achieve better results than GCN+PADG but without any use of domain-specific hand-crafted features and a much smaller and simpler model. Finally, RegSum achieves a similar ROUGE-2 score but computes sentence saliences based on word scores, incorporating a rich set of word-level and domain-specific features. Nonetheless, our model is competitive and does not depend on hand-crafted features due to its full data-driven nature and thus, it is not limited to a single domain." ] ]
efb3a87845460655c53bd7365bcb8393c99358ec
What were their results on the three datasets?
[ "accuracy of 86.63 on STS, 85.14 on Sanders and 80.9 on HCR" ]
[ [] ]
0619fc797730a3e59ac146a5a4575c81517cc618
What was the baseline?
[ "We compare our model performance with the approaches of BIBREF0 BIBREF5 on STS Corpus. BIBREF0 reported the results of Maximum Entropy (MaxEnt), NB, SVM on STS Corpus having good performance in previous time. The model of BIBREF5 is a state-of-the-art so far by using a CharSCNN., we compare results with the model of BIBREF14 that used a ensemble of multiple base classifiers (ENS) such as NB, Random Forest (RF), SVM and Logistic Regression (LR). The ENS model is combined with bag-of-words (BoW), feature hashing (FH) and lexicons. The model of BIBREF14 is a state-of-the-art on Sanders and HCR datasets. " ]
[ [ "Table IV shows the result of our model for sentiment classification against other models. We compare our model performance with the approaches of BIBREF0 BIBREF5 on STS Corpus. BIBREF0 reported the results of Maximum Entropy (MaxEnt), NB, SVM on STS Corpus having good performance in previous time. The model of BIBREF5 is a state-of-the-art so far by using a CharSCNN. As can be seen, 86.63 is the best prediction accuracy of our model so far for the STS Corpus.", "For Sanders and HCR datasets, we compare results with the model of BIBREF14 that used a ensemble of multiple base classifiers (ENS) such as NB, Random Forest (RF), SVM and Logistic Regression (LR). The ENS model is combined with bag-of-words (BoW), feature hashing (FH) and lexicons. The model of BIBREF14 is a state-of-the-art on Sanders and HCR datasets. Our models outperform the model of BIBREF14 for the Sanders dataset and HCR dataset." ] ]
846a1992d66d955fa1747bca9a139141c19908e8
Which datasets did they use?
[ "Stanford - Twitter Sentiment Corpus (STS Corpus), Sanders - Twitter Sentiment Corpus, Health Care Reform (HCR)" ]
[ [ "Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .", "Sanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.", "Health Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 ." ] ]
1ef8d1cb1199e1504b6b0daea52f2e4bd2ef7023
Are results reported only on English datasets?
[ "Yes" ]
[ [ "Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .", "Sanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.", "Health Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 .", "Table IV shows the result of our model for sentiment classification against other models. We compare our model performance with the approaches of BIBREF0 BIBREF5 on STS Corpus. BIBREF0 reported the results of Maximum Entropy (MaxEnt), NB, SVM on STS Corpus having good performance in previous time. The model of BIBREF5 is a state-of-the-art so far by using a CharSCNN. As can be seen, 86.63 is the best prediction accuracy of our model so far for the STS Corpus.", "For Sanders and HCR datasets, we compare results with the model of BIBREF14 that used a ensemble of multiple base classifiers (ENS) such as NB, Random Forest (RF), SVM and Logistic Regression (LR). The ENS model is combined with bag-of-words (BoW), feature hashing (FH) and lexicons. The model of BIBREF14 is a state-of-the-art on Sanders and HCR datasets. Our models outperform the model of BIBREF14 for the Sanders dataset and HCR dataset." ] ]
12d77ac09c659d2e04b5e3955a283101c3ad1058
Which three Twitter sentiment classification datasets are used for experiments?
[ "Stanford - Twitter Sentiment Corpus (STS Corpus), Sanders - Twitter Sentiment Corpus, Health Care Reform (HCR)" ]
[ [ "Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .", "Sanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.", "Health Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 ." ] ]
d60a3887a0d434abc0861637bbcd9ad0c596caf4
What semantic rules are proposed?
[ "rules that compute polarity of words after POS tagging or parsing steps" ]
[ [ "In Twitter social networking, people express their opinions containing sub-sentences. These sub-sentences using specific PoS particles (Conjunction and Conjunctive adverbs), like \"but, while, however, despite, however\" have different polarities. However, the overall sentiment of tweets often focus on certain sub-sentences. For example:", "@lonedog bwahahah...you are amazing! However, it was quite the letdown.", "@kirstiealley my dentist is great but she's expensive...=(", "In two tweets above, the overall sentiment is negative. However, the main sentiment is only in the sub-sentences following but and however. This inspires a processing step to remove unessential parts in a tweet. Rule-based approach can assists these problems in handling negation and dealing with specific PoS particles led to effectively affect the final output of classification BIBREF11 BIBREF16 . BIBREF11 summarized a full presentation of their semantic rules approach and devised ten semantic rules in their hybrid approach based on the presentation of BIBREF16 . We use five rules in the semantic rules set because other five rules are only used to compute polarity of words after POS tagging or Parsing steps. We follow the same naming convention for rules utilized by BIBREF11 to represent the rules utilized in our proposed method. The rules utilized in the proposed method are displayed in Table TABREF15 in which is included examples from STS Corpus and output after using the rules. Table TABREF16 illustrates the number of processed sentences on each dataset." ] ]
69a7a6675c59a4c5fb70006523b9fe0f01ca415c
Which knowledge graph completion tasks do they experiment with?
[ "link prediction , triplet classification" ]
[ [ "We evaluate the effectiveness of our LAN model on two typical knowledge graph completion tasks, i.e., link prediction and triplet classification. We compare our LAN with two baseline aggregators, MEAN and LSTM, as described in the Encoder section. MEAN is used on behalf of pooling functions since it leads to the best performance in BIBREF9 ijcai2017-250. LSTM is used due to its large expressive capability BIBREF26 ." ] ]