Fields: question_id (string, 40 characters), question (string, 4 to 171 characters), answer (sequence), evidence (sequence)
8bfbf78ea7fae0c0b8a510c9a8a49225bbdb5383
Does the paper motivate the use of CRF as the baseline model?
[ "the task of detecting anglicisms can be approached as a sequence labeling problem where only certain spans of texts will be labeled as anglicism (in a similar way to an NER task). The chosen model was conditional random field model (CRF), which was also the most popular model in both Shared Tasks on Language Identification for Code-Switched Data" ]
[ [ "A baseline model for automatic extraction of anglicisms was created using the annotated corpus we just presented as training material. As mentioned in Section 3, the task of detecting anglicisms can be approached as a sequence labeling problem where only certain spans of texts will be labeled as anglicism (in a similar way to an NER task). The chosen model was conditional random field model (CRF), which was also the most popular model in both Shared Tasks on Language Identification for Code-Switched Data BIBREF23, BIBREF24." ] ]
97757a69d9fc28b260e68284fd300726fbe358d0
What are the handcrafted features used?
[ "Bias feature, Token feature, Uppercase feature (y/n), Titlecase feature (y/n), Character trigram feature, Quotation feature (y/n), Word suffix feature (last three characters), POS tag (provided by spaCy utilities), Word shape (provided by spaCy utilities), Word embedding (see Table TABREF26)" ]
[ [ "The following handcrafted features were used for the model:", "Bias feature", "Token feature", "Uppercase feature (y/n)", "Titlecase feature (y/n)", "Character trigram feature", "Quotation feature (y/n)", "Word suffix feature (last three characters)", "POS tag (provided by spaCy utilities)", "Word shape (provided by spaCy utilities)", "Word embedding (see Table TABREF26)" ] ]
41830ebb8369a24d490e504b7cdeeeaa9b09fd9c
What is state of the art method?
[ "Unanswerable" ]
[ [] ]
4904ef32a8f84cf2f53b1532ccf7aa77273b3d19
By how much do proposed architectures outperform state-of-the-art?
[ "Unanswerable" ]
[ [] ]
45b28a6ce2b0f1a8b703a3529fd1501f465f3fdf
What are three new proposed architectures?
[ "special dedicated discriminator is added to the model to control that the latent representation does not contain stylistic information, shifted autoencoder or SAE, combination of both approaches" ]
[ [ "Let us propose two further extensions of this baseline architecture. To improve reproducibility of the research the code of the studied models is open. Both extensions aim to improve the quality of information decomposition within the latent representation. In the first one, shown in Figure FIGREF12, a special dedicated discriminator is added to the model to control that the latent representation does not contain stylistic information. The loss of this discriminator is defined as", "The second extension of the baseline architecture does not use an adversarial component $D_z$ that is trying to eradicate information on $c$ from component $z$. Instead, the system, shown in Figure FIGREF16 feeds the \"soft\" generated sentence $\\tilde{G}$ into encoder $E$ and checks how close is the representation $E(\\tilde{G} )$ to the original representation $z = E(x)$ in terms of the cosine distance. We further refer to it as shifted autoencoder or SAE. Ideally, both $E(\\tilde{G} (E(x), c))$ and $E(\\tilde{G} (E(x), \\bar{c}))$, where $\\bar{c}$ denotes an inverse style code, should be both equal to $E(x)$. The loss of the shifted autoencoder is", "We also study a combination of both approaches described above, shown on Figure FIGREF17." ] ]
d6a27c41c81f12028529e97e255789ec2ba39eaa
How much does the standard metrics for style accuracy vary on different re-runs?
[ "accuracy can change up to 5 percentage points, whereas BLEU can vary up to 8 points" ]
[ [ "On Figure FIGREF1 one can see that the outcomes for every single rerun differ significantly. Namely, accuracy can change up to 5 percentage points, whereas BLEU can vary up to 8 points. This variance can be partially explained with the stochasticity incurred due to sampling from the latent variables. However, we show that results for state of the art models sometimes end up within error margins from one another, so one has to report the margins to compare the results rigorously. More importantly, one can see that there is an inherent trade-off between these two performance metrics. This trade-off is not only visible across models but is also present for the same retrained architecture. Therefore, improving one of the two metrics is not enough to confidently state that one system solves the style-transfer problem better than the other. One has to report error margins after several consecutive retrains and instead of comparing one of the two metrics has to talk about Pareto-like optimization that would show confident improvement of both." ] ]
2d3bf170c1647c5a95abae50ee3ef3b404230ce4
Which baseline methods are used?
[ "standard parametrized attention and a non-attention baseline" ]
[ [ "Table 1 shows the BLEU scores of our model on different sequence lengths while varying $K$ . This is a study of the trade-off between computational time and representational power. A large $K$ allows us to compute complex source representations, while a $K$ of 1 limits the source representation to a single vector. We can see that performance consistently increases with $K$ up to a point that depends on the data length, with longer sequences requiring more complex representations. The results with and without position encodings are almost identical on the toy data. Our technique learns to fit the data as well as the standard attention mechanism despite having less representational power. Both beat the non-attention baseline by a significant margin.", "All models are implemented using TensorFlow based on the seq2seq implementation of BIBREF15 and trained on a single machine with a Nvidia K40m GPU. We use a 2-layer 256-unit, a bidirectional LSTM BIBREF16 encoder, a 2-layer 256-unit LSTM decoder, and 256-dimensional embeddings. For the attention baseline, we use the standard parametrized attention BIBREF2 . Dropout of 0.2 (0.8 keep probability) is applied to the input of each cell and we optimize using Adam BIBREF17 at a learning rate of 0.0001 and batch size 128. We train for at most 200,000 steps (see Figure 3 for sample learning curves). BLEU scores are calculated on tokenized data using the multi-bleu.perl script in Moses. We decode using beam search with a beam" ] ]
6e8c587b6562fafb43a7823637b84cd01487059a
How much is the BLEU score?
[ "Ranges from 44.22 to 100.00 depending on K and the sequence length." ]
[ [] ]
ab9453fa2b927c97b60b06aeda944ac5c1bfef1e
Which datasets are used in experiments?
[ "Sequence Copy Task and WMT'17" ]
[ [ "Due to the reduction of computational time complexity we expect our method to yield performance gains especially for longer sequences and tasks where the source can be compactly represented in a fixed-size memory matrix. To investigate the trade-off between speed and performance, we compare our technique to standard models with and without attention on a Sequence Copy Task of varying length like in BIBREF14 . We generated 4 training datasets of 100,000 examples and a validation dataset of 1,000 examples. The vocabulary size was 20. For each dataset, the sequences had lengths randomly chosen between 0 to $L$ , for $L\\in \\lbrace 10, 50, 100, 200\\rbrace $ unique to each dataset.", "Next, we explore if the memory-based attention mechanism is able to fit complex real-world datasets. For this purpose we use 4 large machine translation datasets of WMT'17 on the following language pairs: English-Czech (en-cs, 52M examples), English-German (en-de, 5.9M examples), English-Finish (en-fi, 2.6M examples), and English-Turkish (en-tr, 207,373 examples). We used the newly available pre-processed datasets for the WMT'17 task. Note that our scores may not be directly comparable to other work that performs their own data pre-processing. We learn shared vocabularies of 16,000 subword units using the BPE algorithm BIBREF19 . We use newstest2015 as a validation set, and report BLEU on newstest2016." ] ]
3a8d65eb8e1dbb995981a0e02d86ebf3feab107a
What regularizers were used to encourage consistency in back translation cycles?
[ "an adversarial loss ($\\ell _{adv}$) for each model as in the baseline, a cycle consistency loss ($\\ell _{cycle}$) on each side" ]
[ [ "We train $\\mathcal {F}$ and $\\mathcal {G}$ jointly and introduce two regularizers. Formally, we hope that $\\mathcal {G}(\\mathcal {F}(X))$ is similar to $X$ and $\\mathcal {F}(\\mathcal {G}(Y))$ is similar to $Y$. We implement this constraint as a cycle consistency loss. As a result, the proposed model has two learning objectives: i) an adversarial loss ($\\ell _{adv}$) for each model as in the baseline. ii) a cycle consistency loss ($\\ell _{cycle}$) on each side to avoid $\\mathcal {F}$ and $\\mathcal {G}$ from contradicting each other. The overall architecture of our model is illustrated in Figure FIGREF4." ] ]
d0c79f4a5d5c45fe673d9fcb3cd0b7dd65df7636
What are new best results on standard benchmark?
[ "New best results of accuracy (P@1) on Vecmap:\nOurs-GeoMMsemi: EN-IT 50.00 IT-EN 42.67 EN-DE 51.60 DE-EN 47.22 FI-EN 39.62 EN-ES 39.47 ES-EN 36.43" ]
[ [ "Table TABREF15 shows the final results on Vecmap. We first compare our model with the state-of-the-art unsupervised methods. Our model based on procrustes (Ours-Procrustes) outperforms Sinkhorn-BT on all test language pairs, and shows better performance than Adv-C-Procrustes on most language pairs. Adv-C-Procrustes gives very low precision on DE-EN, FI-EN and ES-EN, while Ours-Procrustes obtains reasonable results consistently. A possible explanation is that dual learning is helpful for providing good initiations, so that the procrustes solution is not likely to fall in poor local optima. The reason why Unsup-SL gives strong results on all language pairs is that it uses a robust self-learning framework, which contains several techniques to avoid poor local optima." ] ]
54c7fc08598b8b91a8c0399f6ab018c45e259f79
How better is performance compared to competitive baselines?
[ "Proposed method vs best baseline result on Vecmap (Accuracy P@1):\nEN-IT: 50 vs 50\nIT-EN: 42.67 vs 42.67\nEN-DE: 51.6 vs 51.47\nDE-EN: 47.22 vs 46.96\nEN-FI: 35.88 vs 36.24\nFI-EN: 39.62 vs 39.57\nEN-ES: 39.47 vs 39.30\nES-EN: 36.43 vs 36.06" ]
[ [ "Table TABREF15 shows the final results on Vecmap. We first compare our model with the state-of-the-art unsupervised methods. Our model based on procrustes (Ours-Procrustes) outperforms Sinkhorn-BT on all test language pairs, and shows better performance than Adv-C-Procrustes on most language pairs. Adv-C-Procrustes gives very low precision on DE-EN, FI-EN and ES-EN, while Ours-Procrustes obtains reasonable results consistently. A possible explanation is that dual learning is helpful for providing good initiations, so that the procrustes solution is not likely to fall in poor local optima. The reason why Unsup-SL gives strong results on all language pairs is that it uses a robust self-learning framework, which contains several techniques to avoid poor local optima." ] ]
5112bbf13c7cf644bf401daecb5e3265889a4bfc
How big is data used in experiments?
[ "Unanswerable" ]
[ [] ]
03ce42ff53aa3f1775bc57e50012f6eb1998c480
What 6 language pairs is experimented on?
[ "EN<->ES\nEN<->DE\nEN<->IT\nEN<->EO\nEN<->MS\nEN<->FI" ]
[ [ "Table TABREF13 shows the inconsistency rates of back translation between Adv-C and our method on MUSE. Compared with Adv-C, our model significantly reduces the inconsistency rates on all language pairs, which explains the overall improvement in Table TABREF12. Table TABREF14 gives several word translation examples. In the first three cases, our regularizer successfully fixes back translation errors. In the fourth case, ensuring cycle consistency does not lead to the correct translation, which explains some errors by our system. In the fifth case, our model finds a related word but not the same word in the back translation, due to the use of cosine similarity for regularization." ] ]
ebeedbb8eecdf118d543fdb5224ae610eef212c8
What are current state-of-the-art methods that consider the two tasks independently?
[ "Procrustes, GPA, GeoMM, GeoMM$_{semi}$, Adv-C-Procrustes, Unsup-SL, Sinkhorn-BT" ]
[ [ "In this section, we compare our model with state-of-the-art systems, including those with different degrees of supervision. The baselines include: (1) Procrustes BIBREF11, which learns a linear mapping through Procrustes Analysis BIBREF36. (2) GPA BIBREF37, an extension of Procrustes Analysis. (3) GeoMM BIBREF38, a geometric approach which learn a Mahalanobis metric to refine the notion of similarity. (4) GeoMM$_{semi}$, iterative GeoMM with weak supervision. (5) Adv-C-Procrustes BIBREF11, which refines the mapping learned by Adv-C with iterative Procrustes, which learns the new mapping matrix by constructing a bilingual lexicon iteratively. (6) Unsup-SL BIBREF13, which integrates a weak unsupervised mapping with a robust self-learning. (7) Sinkhorn-BT BIBREF28, which combines sinkhorn distance BIBREF29 and back-translation. For fair comparison, we integrate our model with two iterative refinement methods (Procrustes and GeoMM$_{semi}$)." ] ]
9efd025cfa69c6ff2777528bd158f79ead9353d1
How big is their training set?
[ "Unanswerable" ]
[ [] ]
559c1307610a15427caeb8aff4d2c01ae5c9de20
What baseline do they compare to?
[ "For the entailment classifier we compare Decomposable Attention BIBREF2 , BIBREF3 as implemented in the official baseline, ESIM BIBREF4 , and a transformer network with pre-trained weights BIBREF5 ." ]
[ [ "Our approach to FEVER is to fix the most obvious shortcomings of the baseline approaches to retrieval and entailment, and to train a sharp entailment classifier that can be used to filter a broad set of retrieved potential evidence. For the entailment classifier we compare Decomposable Attention BIBREF2 , BIBREF3 as implemented in the official baseline, ESIM BIBREF4 , and a transformer network with pre-trained weights BIBREF5 . The transformer network naturally supports out-of-vocabulary words and gives substantially higher performance than the other methods." ] ]
4ecb6674bcb4162bf71aea8d8b82759255875df3
Which pre-trained transformer do they use?
[ "BIBREF5" ]
[ [ "Our approach to FEVER is to fix the most obvious shortcomings of the baseline approaches to retrieval and entailment, and to train a sharp entailment classifier that can be used to filter a broad set of retrieved potential evidence. For the entailment classifier we compare Decomposable Attention BIBREF2 , BIBREF3 as implemented in the official baseline, ESIM BIBREF4 , and a transformer network with pre-trained weights BIBREF5 . The transformer network naturally supports out-of-vocabulary words and gives substantially higher performance than the other methods." ] ]
eacc1eb65daad055df934e0e878f417b73b2ecc1
What is the FEVER task?
[ "tests a combination of retrieval and textual entailment capabilities. To verify a claim in the dataset as supported, refuted, or undecided, a system must retrieve relevant articles and sentences from Wikipedia. Then it must decide whether each of those sentences, or some combination of them, entails or refutes the claim, which is an entailment problem" ]
[ [ "The release of the FEVER fact extraction and verification dataset BIBREF0 provides a large-scale challenge that tests a combination of retrieval and textual entailment capabilities. To verify a claim in the dataset as supported, refuted, or undecided, a system must retrieve relevant articles and sentences from Wikipedia. Then it must decide whether each of those sentences, or some combination of them, entails or refutes the claim, which is an entailment problem. Systems are evaluated on the accuracy of the claim predictions, with credit only given when correct evidence is submitted.", "As entailment data, premises in FEVER data differ substantially from those in the image caption data used as the basis for the Stanford Natural Language Inference (SNLI) BIBREF1 dataset. Sentences are longer (31 compared to 14 words on average), vocabulary is more abstract, and the prevalence of named entities and out-of-vocabulary terms is higher.", "The retrieval aspect of FEVER is not straightforward either. A claim may have small word overlap with the relevant evidence, especially if the claim is refuted by the evidence." ] ]
d353a6bbdc66be9298494d0c853e0d8d752dec4b
How is correctness of automatic derivation proved?
[ "empirically compare automatic differentiation (AD, our implementation based on Clad) and numerical differentiation (ND, based on finite difference method)" ]
[ [ "In this section, we empirically compare automatic differentiation (AD, our implementation based on Clad) and numerical differentiation (ND, based on finite difference method) in ROOT. We show that AD can drastically improve accuracy and performance of derivative evaluation, compared to ND." ] ]
e2cfaa2ec89b944bbc46e5edf7753b3018dbdc8f
Is this AD implementation used in any deep learning framework?
[ "Unanswerable" ]
[ [] ]
22c36082b00f677e054f0f0395ed685808965a02
Do they conduct any human evaluation?
[ "Yes" ]
[ [ "We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source. Our ultimate goal is to generate translations whose length is not longer than that of the source string (see example in Table FIGREF1). While generating translations that are just a few words shorter might appear as a simple task, it actually implies good control of the target language. As the reported examples show, the network has to implicitly apply strategies such as choosing shorter rephrasing, avoiding redundant adverbs and adjectives, using different verb tenses, etc. We report MT performance results under two training data conditions, small and large, which show limited degradation in BLEU score and n-gram precision as we vary the target length ratio of our models. We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01." ] ]
85a7dbf6c2e21bfb7a3a938381890ac0ec2a19e0
What dataset do they use for experiments?
[ "English$\\rightarrow $Italian/German portions of the MuST-C corpus, As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De)" ]
[ [ "Our experiments are run using the English$\\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De). While our main goal is to verify our hypotheses on a large data condition, thus the need to include proprietary data, for the sake of reproducibility in both languages we also provide results with systems only trained on TED Talks (small data condition). When training on large scale data we use Transformer with layer size of 1024, hidden size of 4096 on feed forward layers, 16 heads in the multi-head attention, and 6 layers in both encoder and decoder. When training only on TED talks, we set layer size of 512, hidden size of 2048 for the feed forward layers, multi-head attention with 8 heads and again 6 layers in both encoder and decoder." ] ]
90bc60320584ebba11af980ed92a309f0c1b5507
How do they enrich the positional embedding with length information
[ "They introduce new trigonometric encoding which besides information about position uses additional length information (abs or relative)." ]
[ [ "Methods ::: Length Encoding Method", "Inspired by BIBREF11, we use length encoding to provide the network with information about the remaining sentence length during decoding. We propose two types of length encoding: absolute and relative. Let pos and len be, respectively, a token position and the end of the sequence, both expressed in terms of number characters. Then, the absolute approach encodes the remaining length:", "where $i=1,\\ldots ,d/2$.", "Similarly, the relative difference encodes the relative position to the end. This representation is made consistent with the absolute encoding by quantizing the space of the relative positions into a finite set of $N$ integers:", "where $q_N: [0, 1] \\rightarrow \\lbrace 0, 1, .., N\\rbrace $ is simply defined as $q_N(x) = \\lfloor {x \\times N}\\rfloor $. As we are interested in the character length of the target sequence, len and pos are given in terms of characters, but we represent the sequence as a sequence of BPE-segmented subwords BIBREF17. To solve the ambiguity, len is the character length of the sequence, while pos is the character count of all the preceding tokens. We prefer a representation based on BPE, unlike BIBREF11, as it leads to better translations with less training time BIBREF18, BIBREF19. During training, len is the observed length of the target sentence, while at inference time it is the length of the source sentence, as it is the length that we aim to match. The process is exemplified in Figure FIGREF9." ] ]
f52b2ca49d98a37a6949288ec5f281a3217e5ae8
How do they condition the output to a given target-source class?
[ "They use three groups short/normal/long translation classes to learn length token, which is in inference used to bias network to generate desired length group." ]
[ [ "Methods ::: Length Token Method", "Our first approach to control the length is inspired by target forcing in multilingual NMT BIBREF15, BIBREF16. We first split the training sentence pairs into three groups according to the target/source length ratio (in terms of characters). Ideally, we want a group where the target is shorter than the source (short), one where they are equally-sized (normal) and a last group where the target is longer than the source (long). In practice, we select two thresholds $t_\\text{min}$ and $t_\\text{max}$ according to the length ratio distribution. All the sentence pairs with length ratio between $t_\\text{min}$ and $t_\\text{max}$ are in the normal group, the ones with ratio below $t_\\text{min}$ in short and the remaining in long. At training time we prepend a length token to each source sentence according to its group ($<$short$>$, $<$normal$>$, or $<$long$>$), in order to let a single network to discriminate between the groups (see Figure FIGREF2). At inference time, the length token is used to bias the network to generate a translation that belongs to the desired length group." ] ]
228425783a4830e576fb98696f76f4c7c0a1b906
Which languages do they focus on?
[ "two translation directions (En-It and En-De)" ]
[ [ "We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source. Our ultimate goal is to generate translations whose length is not longer than that of the source string (see example in Table FIGREF1). While generating translations that are just a few words shorter might appear as a simple task, it actually implies good control of the target language. As the reported examples show, the network has to implicitly apply strategies such as choosing shorter rephrasing, avoiding redundant adverbs and adjectives, using different verb tenses, etc. We report MT performance results under two training data conditions, small and large, which show limited degradation in BLEU score and n-gram precision as we vary the target length ratio of our models. We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01." ] ]
9d1135303212356f3420ed010dcbe58203cc7db4
What dataset do they use?
[ "English$\\rightarrow $Italian/German portions of the MuST-C corpus, As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De)" ]
[ [ "Our experiments are run using the English$\\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De). While our main goal is to verify our hypotheses on a large data condition, thus the need to include proprietary data, for the sake of reproducibility in both languages we also provide results with systems only trained on TED Talks (small data condition). When training on large scale data we use Transformer with layer size of 1024, hidden size of 4096 on feed forward layers, 16 heads in the multi-head attention, and 6 layers in both encoder and decoder. When training only on TED talks, we set layer size of 512, hidden size of 2048 for the feed forward layers, multi-head attention with 8 heads and again 6 layers in both encoder and decoder." ] ]
d8bf4a29c7af213a9a176eb1503ec97d01cc8f51
Do they experiment with combining both methods?
[ "Yes" ]
[ [ "Methods ::: Combining the two methods", "We further propose to use the two methods together to combine their strengths. In fact, while the length token acts as a soft constraint to bias NMT to produce short or long translation with respect to the source, actually no length information is given to the network. On the other side, length encoding leverages information about the target length, but it is agnostic of the source length." ] ]
73abb173a3cc973ab229511cf53b426865a2738b
What state-of-the-art models are compared against?
[ "a deep neural network (DNN) architecture proposed in BIBREF24 , maximum entropy (MaxEnt) proposed in BIBREF23 type of discriminative model" ]
[ [ "As a comparison to the state of the art methods, Table 1 presents accuracy results of the best Collective Matrix Factorization model, with a latent space dimension of 350, which has been determined by cross-validation on a development set, where the value of each slot is instantiated as the most probable w.r.t the inference procedure presented in Section \"Spectral decomposition model for state tracking in slot-filling dialogs\" . In our experiments, the variance is estimated using standard dataset reshuffling. The same results are obtained for several state of the art methods of generative and discriminative state tracking on this dataset using the publicly available results as reported in BIBREF22 . More precisely, as provided by the state-of-the-art approaches, the accuracy scores computes $p(s^*_{t+1}|s_t,z_t)$ commonly name the joint goal. Our proposition is compared to the 4 baseline trackers provided by the DSTC organisers. They are the baseline tracker (Baseline), the focus tracker (Focus), the HWU tracker (HWU) and the HWU tracker with “original” flag set to (HWU+) respectively. Then a comparison to a maximum entropy (MaxEnt) proposed in BIBREF23 type of discriminative model and finally a deep neural network (DNN) architecture proposed in BIBREF24 as reported also in BIBREF22 is presented." ] ]
1d9b953a324fe0cfbe8e59dcff7a44a2f93c568d
Does API provide ability to connect to models written in some other deep learning framework?
[ "Yes" ]
[ [ "The library design of Torch-Struct follows the distributions API used by both TensorFlow and PyTorch BIBREF29. For each structured model in the library, we define a conditional random field (CRF) distribution object. From a user's standpoint, this object provides all necessary distributional properties. Given log-potentials (scores) output from a deep network $\\ell $, the user can request samples $z \\sim \\textsc {CRF}(\\ell )$, probabilities $\\textsc {CRF}(z;\\ell )$, modes $\\arg \\max _z \\textsc {CRF}(\\ell )$, or other distributional properties such as $\\mathbb {H}(\\textsc {CRF}(\\ell ))$. The library is agnostic to how these are utilized, and when possible, they allow for backpropagation to update the input network. The same distributional object can be used for standard output prediction as for more complex operations like attention or reinforcement learning." ] ]
093039f974805952636c19c12af3549aa422ec43
Is this library implemented into Torch or is framework agnostic?
[ "It uses deep learning framework (pytorch)" ]
[ [ "With this challenge in mind, we introduce Torch-Struct with three specific contributions:", "Modularity: models are represented as distributions with a standard flexible API integrated into a deep learning framework.", "Completeness: a broad array of classical algorithms are implemented and new models can easily be added in Python.", "Efficiency: implementations target computational/memory efficiency for GPUs and the backend includes extensions for optimization." ] ]
8df89988adff57279db10992846728ec4f500eaa
What baselines are used in experiments?
[ "Typical implementations of dynamic programming algorithms are serial in the length of the sequence, Computational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized, Unfortunately for other semirings, such as log and max, these operations are either slow or very memory inefficient" ]
[ [ "Optimizations ::: a) Parallel Scan Inference", "The commutative properties of semiring algorithms allow flexibility in the order in which we compute $A(\\ell )$. Typical implementations of dynamic programming algorithms are serial in the length of the sequence. On parallel hardware, an appealing approach is a parallel scan ordering BIBREF35, typically used for computing prefix sums. To compute, $A(\\ell )$ in this manner we first pad the sequence length $T$ out to the nearest power of two, and then compute a balanced parallel tree over the parts, shown in Figure FIGREF21. Concretely each node layer would compute a semiring matrix multiplication, e.g. $ \\bigoplus _c \\ell _{t, \\cdot , c} \\otimes \\ell _{t^{\\prime }, c, \\cdot }$. Under this approach, we only need $O(\\log N)$ steps in Python and can use parallel GPU operations for the rest. Similar parallel approach can also be used for computing sequence alignment and semi-Markov models.", "Optimizations ::: b) Vectorized Parsing", "Computational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized. The log-partition for parsing is computed with the Inside algorithm. This algorithm must compute each width from 1 through T in serial; however it is important to parallelize each inner step. Assuming we have computed all inside spans of width less than $d$, computing the inside span of width $d$ requires computing for all $i$,", "Optimizations ::: c) Semiring Matrix Operations", "The two previous optimizations reduce most of the cost to semiring matrix multiplication. In the specific case of the $(\\sum , \\times )$ semiring these can be computed very efficiently using matrix multiplication, which is highly-tuned on GPU hardware. Unfortunately for other semirings, such as log and max, these operations are either slow or very memory inefficient. For instance, for matrices $T$ and $U$ of sized $N \\times M$ and $M \\times O$, we can broadcast with $\\otimes $ to a tensor of size $N \\times M \\times O$ and then reduce dim $M$ by $\\bigoplus $ at a huge memory cost. To avoid this issue, we implement custom CUDA kernels targeting fast and memory efficient tensor operations. For log, this corresponds to computing," ] ]
94edac71eea1e78add678fb5ed2d08526b51016b
What general-purpose optimizations are included?
[ "Parallel Scan Inference, Vectorized Parsing, Semiring Matrix Operations" ]
[ [ "Optimizations ::: a) Parallel Scan Inference", "Optimizations ::: b) Vectorized Parsing", "Optimizations ::: c) Semiring Matrix Operations", "Torch-Struct aims for computational and memory efficiency. Implemented naively, dynamic programming algorithms in Python are prohibitively slow. As such Torch-Struct provides key primitives to help batch and vectorize these algorithms to take advantage of GPU computation and to minimize the overhead of backpropagating through chart-based dynamic programmming. Figure FIGREF17 shows the impact of these optimizations on the core algorithms." ] ]
9c4ed8ca59ba6d240f031393b01f634a9dc3615d
what baseline do they compare to?
[ "VecMap, Muse, Barista" ]
[ [ "We compare Blse (Sections UID23 – UID30 ) to VecMap, Muse, and Barista (Section \"Previous Work\" ) as baselines, which have similar data requirements and to machine translation (MT) and monolingual (Mono) upper bounds which request more resources. For all models (Mono, MT, VecMap, Muse, Barista), we take the average of the word embeddings in the source-language training examples and train a linear SVM. We report this instead of using the same feed-forward network as in Blse as it is the stronger upper bound. We choose the parameter $c$ on the target language development set and evaluate on the target language test set." ] ]
ca7e71131219252d1fab69865804b8f89a2c0a8f
How does this compare to traditional calibration methods like Platt Scaling?
[ "No reliability diagrams are provided and no explicit comparison is made between confidence scores or methods." ]
[ [ "Compared to using external models for confidence modeling, an advantage of the proposed method is that the base model does not change: the binary classification loss just provides additional supervision. Ideally, the resulting model after one-round of training becomes better not only at confidence modeling, but also at assertion generation, suggesting that extractions of higher quality can be added as training samples to continue this training process iteratively. The resulting iterative learning procedure (alg:iter) incrementally includes extractions generated by the current model as training samples to optimize the binary classification loss to obtain a better model, and this procedure is continued until convergence. [t] training data $\\mathcal {D}$ , initial model $\\theta ^{(0)}$ model after convergence $\\theta $ $t \\leftarrow 0$ # iteration", "A key step in open IE is confidence modeling, which ranks a list of candidate extractions based on their estimated quality. This is important for downstream tasks, which rely on trade-offs between the precision and recall of extracted assertions. For instance, an open IE-powered medical question answering (QA) system may require its assertions in higher precision (and consequently lower recall) than QA systems for other domains. For supervised open IE systems, the confidence score of an assertion is typically computed based on its extraction likelihood given by the model BIBREF3 , BIBREF5 . However, we observe that this often yields sub-optimal ranking results, with incorrect extractions of one sentence having higher likelihood than correct extractions of another sentence. We hypothesize this is due to the issue of a disconnect between training and test-time objectives. Specifically, the system is trained solely to raise likelihood of gold-standard extractions, and during training the model is not aware of its test-time behavior of ranking a set of system-generated assertions across sentences that potentially include incorrect extractions.", "We follow the evaluation metrics described by Stanovsky:2016:OIE2016: area under the precision-recall curve (AUC) and F1 score. An extraction is judged as correct if the predicate and arguments include the syntactic head of the gold standard counterparts." ] ]
d77c9ede2727c28e0b5a240b2521fd49a19442e0
What's the input representation of OpenIE tuples into the model?
[ "word embeddings" ]
[ [ "Our training method in sec:ours could potentially be used with any probabilistic open IE model, since we make no assumptions about the model and only the likelihood of the extraction is required for iterative rank-aware learning. As a concrete instantiation in our experiments, we use RnnOIE BIBREF3 , BIBREF9 , a stacked BiLSTM with highway connections BIBREF10 , BIBREF11 and recurrent dropout BIBREF12 . Input of the model is the concatenation of word embedding and another embedding indicating whether this word is predicate: $ \\mathbf {x}_t = [\\mathbf {W}_{\\text{emb}}(w_t), \\mathbf {W}_{\\text{mask}}(w_t = v)]. $" ] ]
a9610cbcca813f4376fbfbf21cc14689c7fbd677
What statistics on the VIST dataset are reported?
[ "In the overall available data there are 40,071 training, 4,988 validation, and 5,050 usable testing stories." ]
[ [ "We used the Visual storytelling (VIST) dataset comprising of image sequences obtained from Flickr albums and respective annotated descriptions collected through Amazon Mechanical Turk BIBREF1. Each sequence has 5 images with corresponding descriptions that together make up for a story. Furthermore, for each Flickr album there are 5 permutations of a selected set of its images. In the overall available data there are 40,071 training, 4,988 validation, and 5,050 usable testing stories." ] ]
64ab2b92e986e0b5058bf4f1758e849f6a41168b
What is the performance difference in performance in unsupervised feature learning between adverserial training and FHVAE-based disentangled speech represenation learning?
[ "Unanswerable" ]
[ [] ]
bcd6befa65cab3ffa6334c8ecedd065a4161028b
What are puns?
[ "a form of wordplay jokes in which one sign (e.g. a word or a phrase) suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect" ]
[ [ "Puns are a form of wordplay jokes in which one sign (e.g. a word or a phrase) suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect BIBREF3 . Puns where the two meanings share the same pronunciation are known as homophonic or perfect puns, while those relying on similar but non-identical sounding words are known as heterophonic BIBREF4 or imperfect puns BIBREF5 . In this paper, we study automatic target recoverability of English-Hindi code mixed puns - which are more commonly imperfect puns, but may also be perfect puns in some cases." ] ]
479fc9e6d6d80e69f425d9e82e618e6b7cd12764
What are the categories of code-mixed puns?
[ "intra-sequential and intra-word" ]
[ [ "With India being a diverse linguistic region, there is an ever increasing usage of code-mixed Hindi-English language (along with various others) because bilingualism and even multilingualism are quite common. Consequently, we have also seen an increase in the usage of code-mixed language in online forums, advertisements etc. Code-mixed humour, especially puns have become increasingly popular because being able to use the same punning techniques but with two languages in play has opened up numerous avenues for new and interesting wordplays. With the increasing popularity and acceptance for the usage of code-mixed language, it has become important that computers are also able to process it and even decipher complex phenomena like humour. Traditional Word Sense Disambiguation (WSD) based methods cannot be used in target recovery of code-mixed puns, because they are no longer about multiple senses of a single word but about two words from two different languages. Code-switching comes with no markers, and the punning word may not even be a word in either of the languages being used. Sometimes words from the two languages can be combined to form a word which only a bilingual speaker would understand. Hence, this task on such data calls for a different set of strategies altogether. We approach this problem in two parts. First, we analyze the types of structures in code-mixed puns and classify them into two categories namely intra-sequential and intra-word. Second, we develop a four stage pipeline to achieve our goal - Language Identification, Pun Candidate Identification, Context Lookup and Phonetic Distance Minimization. We then test our approach on a small dataset and note that our method is successfully able to recover targets for a majority of the puns." ] ]
bc26eee4ef1c8eff2ab8114a319901695d044edb
How is dialogue guided to avoid interactions that breach procedures and processes only known to experts?
[ "pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction" ]
[ [ "In this paper, we provide a brief survey of existing datasets and describe the CRWIZ framework for pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction. We then perform a data collection and compare our dataset to a similar dataset collected in a more controlled lab setting with a single Wizard BIBREF4 and discuss the advantages/disadvantages of both approaches. Finally, we present future work. Our contributions are as follows:", "Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions." ] ]
9c94ff8c99d3e51c256f2db78c34b2361f26b9c2
What is meant by semiguided dialogue, what part of dialogue is guided?
[ "The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard." ]
[ [ "The solution we propose in this paper tries to minimise these costs by increasing the pool of Wizards to anyone wanting to collaborate in the data collection, by providing them the necessary guidance to generate the desired dialogue behaviour. This is a valuable solution for collecting dialogues in domains where specific expertise is required and the cost of training capable Wizards is high. We required fine-grained control over the Wizard interface so as to be able to generate more directed dialogues for specialised domains, such as emergency response for offshore facilities. By providing the Wizard with several dialogue options (aside from free text), we guided the conversation and could introduce actions that change an internal system state. This proposes several advantages:", "A guided dialogue allows for set procedures to be learned and reduces the amount of data needed for a machine learning model for dialogue management to converge.", "Providing several dialogue options to the Wizard increases the pace of the interaction and allows them to understand and navigate more complex scenarios.", "Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.", "The dialogue structure for the Emergency Assistant (the Wizard) followed a dialogue flow previously used for the original lab-based Wizard-of-Oz study BIBREF4 but which was slightly modified and simplified for this crowdsourced data collection. In addition to the transitions that the FSM provides, there are other fixed dialogue options always available such as “Hold on, 2 seconds”, “Okay” or “Sorry, can you repeat that?” as a shortcut for commonly used dialogue acts, as well as the option to type a message freely.", "The dialogue has several paths to reach the same states with varying levels of Operator control or engagement that enriched the heterogeneity of conversations. The Emergency Assistant dialogue options show various speaking styles, with a more assertive tone (“I am sending Husky 1 to east tower”) or others with more collaborative connotations (“Which robot do you want to send?” or “Husky 1 is available to send to east tower”). Refer to BIBREF4 for more details. Furthermore, neither participants were restricted in the number of messages that they could send and we did not require a balanced number of turns between them. However, there were several dialogue transitions that required an answer or authorisation from the Operator, so the FSM would lock the dialogue state until the condition was met. As mentioned earlier, the commands to control the robots are also transitions of the FSM, so they were not always available." ] ]
8e9de181fa7d96df9686d0eb2a5c43841e6400fa
Is CRWIZ already used for data collection, what are the results?
[ "Yes, CRWIZ has been used for data collection and its initial use resulted in 145 dialogues. The average time taken for the task was close to the estimate of 10 minutes, 14 dialogues (9.66%) resolved the emergency in the scenario, and these dialogues rated consistently higher in subjective and objective ratings than those which did not resolve the emergency. Qualitative results showed that participants believed that they were interacting with an automated assistant." ]
[ [ "For the intitial data collection using the CRWIZ platform, 145 unique dialogues were collected (each dialogue consists of a conversation between two participants). All the dialogues were manually checked by one of the authors and those where the workers were clearly not partaking in the task or collaborating were removed from the dataset. The average time per assignment was 10 minutes 47 seconds, very close to our initial estimate of 10 minutes, and the task was available for 5 days in AMT. Out of the 145 dialogues, 14 (9.66%) obtained the bonus of $0.2 for resolving the emergency. We predicted that only a small portion of the participants would be able to resolve the emergency in less than 6 minutes, thus it was framed as a bonus challenge rather than a requirement to get paid. The fastest time recorded to resolve the emergency was 4 minutes 13 seconds with a mean of 5 minutes 8 seconds. Table TABREF28 shows several interaction statistics for the data collected compared to the single lab-based WoZ study BIBREF4.", "Data Analysis ::: Subjective Data", "Table TABREF33 gives the results from the post-task survey. We observe, that subjective and objective task success are similar in that the dialogues that resolved the emergency were rated consistently higher than the rest.", "Mann-Whitney-U one-tailed tests show that the scores of the Emergency Resolved Dialogues for Q1 and Q2 were significantly higher than the scores of the Emergency Not Resolved Dialogues at the 95% confidence level (Q1: $U = 1654.5$, $p < 0.0001$; Q2: $U = 2195$, $p = 0.009$, both $p < 0.05$). This indicates that effective collaboration and information ease are key to task completion in this setting.", "Regarding the qualitative data, one of the objectives of the Wizard-of-Oz technique was to make the participant believe that they are interacting with an automated agent and the qualitative feedback seemed to reflect this: “The AI in the game was not helpful at all [...]” or “I was talking to Fred a bot assistant, I had no other partner in the game“.", "Data Analysis ::: Single vs Multiple Wizards", "In Table TABREF28, we compare various metrics from the dialogues collected with crowdsourcing with the dialogues previously collected in a lab environment for a similar task. Most figures are comparable, except the number of emergency assistant turns (and consequently the total number of turns). To further understand these differences, we have first grouped the dialogue acts in four different broader types: Updates, Actions, Interactions and Requests, and computed the relative frequency of each of these types in both data collections. In addition, Figures FIGREF29 and FIGREF30 show the distribution of the most frequent dialogue acts in the different settings. It is visible that in the lab setting where the interaction was face-to-face with a robot, the Wizard used more Interaction dialogue acts (Table TABREF32). These were often used in context where the Wizard needed to hold the turn while looking for the appropriate prompt or waiting for the robot to arrive at the specified goal in the environment. 
On the other hand, in the crowdsourced data collection utterances, the situation updates were a more common choice while the assistant was waiting for the robot to travel to the specified goal in the environment.", "Perhaps not surprisingly, the data shows a medium strong positive correlation between task success and the number of Action type dialogue acts the Wizard performs, triggering events in the world leading to success ($R=0.475$). There is also a positive correlation between task success and the number of Request dialogue acts requesting confirmation before actions ($R=0.421$), e.g., “Which robot do you want to send?”. As Table 3 shows, these are relatively rare but perhaps reflect a level of collaboration needed to further the task to completion. Table TABREF40 shows one of the dialogues collected where the Emergency Assistant continuously engaged with the Operator through these types of dialogue acts.", "The task success rate was also very different between the two set-ups. In experiments reported in BIBREF4, 96% of the dialogues led to the extinction of the fire whereas in the crowdsourcing setting only 9.66% achieved the same goal. In the crowdsourced setting, the robots were slower moving at realistic speeds unlike the lab setting. A higher bonus and more time for the task might lead to a higher task success rate.", "Data Analysis ::: Limitations", "It is important to consider the number of available participants ready and willing to perform the task at any one time. This type of crowdsourcing requires two participants to connect within a few minutes of each other to be partnered together. As mentioned above, there were some issues with participants not collaborating and these dialogues had to be discarded as they were not of use." ] ]
ff1595a388769c6429423a75b6e1734ef88d3e46
How does framework made sure that dialogue will not breach procedures?
[ "The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions." ]
[ [ "Wizard interface: the interface shown to participants with the Wizard role provides possible actions on the right-hand side of the browser window. These actions could be verbal, such as sending a message, or non-verbal, such as switching on/off a button to activate a robot. Figure FIGREF11 shows this interface with several actions available to be used in our data collection.", "Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.", "System-changing actions: actions trigger transitions between the states in the FSM. We differentiate two types of actions:", "Verbal actions, such as the dialogue options available at that moment. The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions.", "Non-verbal actions, such as commands to trigger events. These can take any form, but we used buttons to control robots in our data collection.", "Submitting an action would change the dialogue state in the FSM, altering the set of actions available in the subsequent turn visible to the Wizard. Some dialogue options are only possible at certain states, in a similar way as to how non-verbal actions are enabled or disabled depending on the state. This is reflected in the Wizard interface." ] ]
dd2046f5481f11b7639a230e8ca92904da75feed
How do they combine the models?
[ "maximum of two scores assigned by the two separate models, average score" ]
[ [ "Table TABREF24 shows performance of ensemble models by combining prediction results of the best context-aware logistic regression model and the best context-aware neural network model. We used two strategies in combining prediction results of two types of models. Specifically, the Max Score Ensemble model made the final decisions based on the maximum of two scores assigned by the two separate models; instead, the Average Score Ensemble model used the average score to make final decisions." ] ]
47e6c3e6fcc9be8ca2437f41a4fef58ef4c02579
What is their baseline?
[ "Logistic regression model with character-level n-gram features" ]
[ [ "For logistic regression model implementation, we use l2 loss. We adopt the balanced class weight as described in Scikit learn. Logistic regression model with character-level n-gram features is presented as a strong baseline for comparison since it was shown very effective. BIBREF0 , BIBREF9" ] ]
569ad21441e99ae782d325d5f5e1ac19e08d5e76
What context do they use?
[ "title of the news article, screen name of the user" ]
[ [ "In logistic regression models, we extract four types of features, word-level and character-level n-gram features as well as two types of lexicon derived features. We extract these four types of features from the target comment first. Then we extract these features from two sources of context texts, specifically the title of the news article that the comment was posted for and the screen name of the user who posted the comment." ] ]
90741b227b25c42e0b81a08c279b94598a25119d
What is their definition of hate speech?
[ "language which explicitly or implicitly threatens or demeans a person or a group based upon a facet of their identity such as gender, ethnicity, or sexual orientation" ]
[ [ "Our annotation guidelines are similar to the guidelines used by BIBREF9 . We define hateful speech to be the language which explicitly or implicitly threatens or demeans a person or a group based upon a facet of their identity such as gender, ethnicity, or sexual orientation. The labeling of hateful speech in our corpus is binary. A comment will be labeled as hateful or non-hateful." ] ]
1d739bb8e5d887fdfd1f4b6e39c57695c042fa25
What architecture has the neural network?
[ "three parallel LSTM BIBREF21 layers" ]
[ [ "Our neural network model mainly consists of three parallel LSTM BIBREF21 layers. It has three different inputs, including the target comment, its news title and its username. Comment and news title are encoded into a sequence of word embeddings. We use pre-trained word embeddings in word2vec. Username is encoded into a sequence of characters. We use one-hot encoding of characters." ] ]
5c70fdd3d6b67031768d3e28336942e49bf9a500
How is human interaction consumed by the model?
[ "displays three different versions of a story written by three distinct models for a human to compare, human can select the model to interact with (potentially after having chosen it via cross-model), and can collaborate at all stages" ]
[ [ "gordon2009sayanything use an information retrieval based system to write by alternating turns between a human and their system. clark2018mil use a similar turn-taking approach to interactivity, but employ a neural model for generation and allow the user to edit the generated sentence before accepting it. They find that users prefer a full-sentence collaborative setup (vs. shorter fragments) but are mixed with regard to the system-driven approach to interaction. roemmele2017eval experiment with a user-driven setup, where the machine doesn't generate until the user requests it to, and then the user can edit or delete at will. They leverage user-acceptance or rejection of suggestions as a tool for understanding the characteristics of a helpful generation. All of these systems involve the user in the story-writing process, but lack user involvement in the story-planning process, and so they lean on the user's ability to knit a coherent overall story together out of locally related sentences. They also do not allow a user to control the novelty or “unexpectedness” of the generations, which clark2018mil find to be a weakness. Nor do they enable iteration; a user cannot revise earlier sentences and have the system update later generations. We develop a system that allows a user to interact in all of these ways that were limitations in previous systems; it enables involvement in planning, editing, iterative revising, and control of novelty. We conduct experiments to understand which types of interaction are most effective for improving stories and for making users satisfied and engaged. We have two main interfaces that enable human interaction with the computer. There is cross-model interaction, where the machine does all the composition work, and displays three different versions of a story written by three distinct models for a human to compare. The user guides generation by providing a topic for story-writing and by tweaking decoding parameters to control novelty, or diversity. The second interface is intra-model interaction, where a human can select the model to interact with (potentially after having chosen it via cross-model), and can collaborate at all stages to jointly create better stories. The full range of interactions available to a user is: select a model, provide a topic, change diversity of content, collaborate on the planning for the story, and collaborate on the story sentences. It is entirely user-driven, as the users control how much is their own work and how much is the machine's at every stage. It supports revision; a user can modify an earlier part of a written story or of the story plan at any point, and observe how this affects later generations." ] ]
f27502c3ece9ade265389d5ace90ca9ca42b46f3
How do they evaluate generated stories?
[ "separate set of Turkers to rate the stories for overall quality and the three improvement areas" ]
[ [ "We then ask a separate set of Turkers to rate the stories for overall quality and the three improvement areas. All ratings are on a five-point scale. We collect two ratings per story, and throw out ratings that disagree by more than 2 points. A total of 11% of ratings were thrown out, leaving four metrics across 241 stories for analysis." ] ]
ffb7a12dfe069ab7263bb7dd366817a9d22b8ef2
Do they evaluate in other languages apart from English?
[ "Unanswerable" ]
[ [] ]
aa4b38f601cc87bf93849245d5f65124da3dc112
What are the baselines?
[ "Title-to-Story system" ]
[ [ "The Title-to-Story system is a baseline, which generates directly from topic." ] ]
08b87a90139968095433f27fc88f571d939cd433
What is used as a baseline?
[ "As the baseline, we simply judge the input token as IOCs on the basis of the spelling features described in BIBREF12" ]
[ [ "As shown in TABLE TABREF24 , we report the micro average of precision, recall and F1-score for all 11 types of labels for a baseline as well as the proposed model. As the baseline, we simply judge the input token as IOCs on the basis of the spelling features described in BIBREF12 . As presented in TABLE TABREF24 , the score obtained by the proposed model is clearly higher than the baseline. Here, as described in Section SECREF14 , the sizes of window and lower bounds of frequency for selecting contextual keywords are tuned as 4 and 7 throughout the evaluation of English dataset, and tuned as 3 and 4 throughout the evaluation of Chinese dataset. The number of extracted contextual keywords from the English dataset is 1,328, and from the Chinese dataset is 331." ] ]
ef872807cb0c9974d18bbb886a7836e793727c3d
What contextual features are used?
[ "The words that can indicate the characteristics of the neighbor words as contextual keywords and generate it from the automatically extracted contextual keywords." ]
[ [ "IOCs in cybersecurity articles are often described in a predictable way: being connected to a set of contextual keywords BIBREF16 , BIBREF1 . For example, a human user can infer that the word “ntdll.exe” is the name of a malicious file on the basis of the words “download” and “compromised” from the text shown in Fig. FIGREF1 . By analyzing the whole corpus, it is interesting that malicious file names tends to co-occur with words such as \"download\", \"malware\", \"malicious\", etc. In this work, we consider words that can indicate the characteristics of the neighbor words as contextual keywords and develop an approach to generate features from the automatically extracted contextual keywords." ] ]
4db3c2ca6ddc87209c31b20763b7a3c1c33387bc
Where are the cybersecurity articles used in the model sourced from?
[ " from a collection of advanced persistent threats (APT) reports which are published from 2008 to 2018" ]
[ [ "For English dataset, we crawl 687 cybersecurity articles from a collection of advanced persistent threats (APT) reports which are published from 2008 to 2018. All of these cybersecurity articles are used to train the English word embedding. Afterwards, we randomly select 370 articles, and manually annotate the IOCs contained in the articles. Among the selected articles, we randomly select 70 articles as the validation set and 70 articles as the test set; the remaining articles are used for training." ] ]
63337fd803f6fdd060ebd0f53f9de79d451810cd
What type of hand-crafted features are used in state of the art IOC detection systems?
[ "Unanswerable" ]
[ [] ]
63496705fff20c55d4b3d8cdf4786f93e742dd3d
Do they compare DeepER against other approaches?
[ "Yes" ]
[ [] ]
7b44bee49b7cb39cb7d5eec79af5773178c27d4d
How is the data in RAFAEL labelled?
[ "Using a set of annotation tools such as Morfeusz, PANTERA, Spejd, NERF and Liner" ]
[ [ "Secondly, texts go through a cascade of annotation tools, enriching it with the following information:", "Morphosyntactic interpretations (sets of tags), using Morfeusz 0.82 BIBREF25 ,", "Tagging (selection of the most probable interpretation), using a transformation-based learning tagger, PANTERA 0.9.1 BIBREF26 ,", "Syntactic groups (possibly nested) with syntactic and semantic heads, using a rule-based shallow parser Spejd 1.3.7 BIBREF27 with a Polish grammar, including improved version of modifications by BIBREF28 , enabling lemmatisation of nominal syntactic groups,", "Named entities, using two available tools: NERF 0.1 BIBREF29 and Liner2 2.3 BIBREF30 ." ] ]
6d54bad91b6ccd1108d1ddbff1d217c6806e0842
How do they handle polysemous words in their entity library?
[ "only the first word sense (usually the most common) is taken into account" ]
[ [ "Figure FIGREF54 shows an exemplary process of converting the first paragraph of a Polish Wikipedia entry, describing former Polish president Lech Wałęsa, into a list of WordNet synsets. First, we omit all unessential parts of the paragraph (1). This includes text in brackets or quotes, but also introductory expressions like jeden z (one of) or typ (type of). Then, an entity name is detached from the text by matching one of definition patterns (2). In the example we can see the most common one, a dash (–). Next, all occurrences of separators (full stops, commas and semicolons) are used to divide the text into separate chunks (3). The following step employs shallow parsing annotation – only nominal groups that appear at the beginning of the chunks are passed on (4). The first chunk that does not fulfil this requirement and all its successors get excluded from further analysis (4.1). Finally, we split the coordination groups and check, whether their lemmas correspond to any lexemes in WordNet (5). If not, the process repeats with the group replaced by its semantic head. In case of polysemous words, only the first word sense (usually the most common) is taken into account." ] ]
238ec3c1e1093ce2f5122ee60209b969f7669fae
How is the fluctuation in the sense of the word and its neighbors measured?
[ "Our method performs a statistical test to determine whether a given word is used polysemously in the text, according to the following steps:\n1) Setting N, the size of the neighbor.\n2) Choosing N neighboring words ai in the order whose angle with the vector of the given word w is the smallest.\n3) Computing the surrounding uniformity for ai(0 < i ≤ N) and w.\n4) Computing the mean m and the sample variance σ for the uniformities of ai .\n5) Checking whether the uniformity of w is less than m − 3σ. If the value is less than m − 3σ, we may regard w as a polysemic word." ]
[ [ "Distributed representation of word sense provides us with the ability to perform several operations on the word. One of the most important operations on a word is to obtain the set of words whose meaning is similar to the word, or whose usage in text is similar to the word. We call this set the neighbor of the word. When a word has several senses, it is called a polysemic word. When a word has only one sense, it is called a monosemic word. We have observed that the neighbor of a polysemic word consists of words that resemble the primary sense of the polysemic word. We can explain this fact as follows. Even though a word may be a polysemic, it usually corresponds to a single vector in distributed representation. This vector is primarily determined by the major sense, which is most frequently used. The information about a word's minor sense is subtle, and the effect of a minor sense is difficult to distinguish from statistical fluctuation.", "To measure the effect of a minor sense, this paper proposes to use the concept of surrounding uniformity. The surrounding uniformity roughly corresponds to statistical fluctuation in the vectors that correspond to the words in the neighbor. We have found that there is a difference in the surrounding uniformity between a monosemic word and a polysemic word. This paper describes how to compute surrounding uniformity for a given word, and discuss the relationship between surrounding uniformity and polysemy.", "We choose the uniformity of vectors, which can be regarded as general case of triangle inequality. The uniformity of a set of vectors is a ratio, i.e., the size of the vector of the vector addition of the vectors divided by the scalar sum of the sizes of the vectors. If and only if all directions of the vectors are the same, the uniformity becomes 1.0. We compute this uniformity for the neighbors, including the word itself. Surrounding Uniformity (SU) can be expressed as follows: $SU(\\vec{w}) = \\frac{|\\vec{s}(\\vec{w})|}{|\\vec{w}| + \\sum _{i}^{N}|\\vec{a_i}(\\vec{w})|}$", "where $\\vec{s}(\\vec{w}) = \\vec{w} + \\sum _{i}^{N} \\vec{a_i}(\\vec{w}).$" ] ]
f704d182c9e01a2002381b76bf21e4bb3c0d3efc
Among various transfer learning techniques, which technique yields the best performance?
[ "Unanswerable" ]
[ [] ]
da544015511e535503dee2eaf4912a5e36c806cd
What is the architecture of the model?
[ "BIBREF5 to train neural sequence-to-sequence, NMF topic model with scikit-learn BIBREF14" ]
[ [ "We use the method of BIBREF5 to train neural sequence-to-sequence Spanish-English ST models. As in that study, before training ST, we pre-train the models using English ASR data from the Switchboard Telephone speech corpus BIBREF7, which consists of around 300 hours of English speech and transcripts. This was reported to substantially improve translation quality when the training set for ST was only tens of hours.", "Obtaining gold topic labels for our data would require substantial manual annotation, so we instead use the human translations from the 1K (train20h) training set utterances to train the NMF topic model with scikit-learn BIBREF14, and then use this model to infer topics on the evaluation set. These silver topics act as an oracle: they tell us what a topic model would infer if it had perfect translations. NMF and model hyperparameters are described in Appendix SECREF7." ] ]
7bc993b32484d6ae3c86d0b351a68e59fd2757a5
What language do they look at?
[ "Spanish" ]
[ [ "We use the Fisher Spanish speech corpus BIBREF11, which consists of 819 phone calls, with an average duration of 12 minutes, amounting to a total of 160 hours of data. We discard the associated transcripts and pair the speech with English translations BIBREF12, BIBREF13. To simulate a low-resource scenario, we sampled 90 calls (20h) of data (train20h) to train both ST and topic models, reserving 450 calls (100h) to evaluate topic models (eval100h). Our experiments required ST models of varying quality, so we also trained models with decreasing amounts of data: ST-10h, ST-5h, and ST-2.5h are trained on 10, 5, and 2.5 hours of data respectively, sampled from train20h. To evaluate ST only, we use the designated Fisher test set, as in previous work." ] ]
da495e2f99ee2d5db9cc17eca5517ddaa5ea8e42
Where does the vocabulary come from?
[ "LDC corpus" ]
[ [ "Our training data consists of 2.09M sentence pairs extracted from LDC corpus. Table 1 shows the detailed statistics of our training data. To test different approaches on Chinese-to-English translation task, we use NIST 2003(MT03) dataset as the validation set, and NIST 2004(MT04), NIST 2005(MT05), NIST 2006(MT06) datasets as our test sets. For English-to-Chinese translation task, we also use NIST 2003(MT03) dataset as the validation set, and NIST 2008(MT08) will be used as test set." ] ]
e44a5514d7464993997212341606c2c0f3a72eb4
What is the worst performing translation granularity?
[ "Unanswerable" ]
[ [] ]
310e61b9dd4d75bc1bebbcb1dae578f55807cd04
What dataset did they use?
[ "LDC corpus, NIST 2003(MT03), NIST 2004(MT04), NIST 2005(MT05), NIST 2006(MT06), NIST 2008(MT08)" ]
[ [ "Our training data consists of 2.09M sentence pairs extracted from LDC corpus. Table 1 shows the detailed statistics of our training data. To test different approaches on Chinese-to-English translation task, we use NIST 2003(MT03) dataset as the validation set, and NIST 2004(MT04), NIST 2005(MT05), NIST 2006(MT06) datasets as our test sets. For English-to-Chinese translation task, we also use NIST 2003(MT03) dataset as the validation set, and NIST 2008(MT08) will be used as test set." ] ]
bdc6664cec2b94b0b3769bc70a60914795f39574
How do they measure performance?
[ "average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values" ]
[ [ "For DQA four participants answered each question, therefore we took the average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values over the four evaluators as the result per question. The detailed answers by the participants and available online." ] ]
e40df8c685a28b98006c47808f506def68f30e26
Do they measure the performance of a combined approach?
[ "Unanswerable" ]
[ [] ]
9653c89a93ac5c717a0a26cf80e9aa98a5ccf910
Which four QA systems do they use?
[ "WDAqua BIBREF0 , QAKiS BIBREF7 , gAnswer BIBREF6 and Platypus BIBREF8" ]
[ [ "The same 36 questions were answered using four QALD tools: WDAqua BIBREF0 , QAKiS BIBREF7 , gAnswer BIBREF6 and Platypus BIBREF8 ." ] ]
b921a1771ed0ba9dbeff9da000336ecf2bb38322
How many iterations of visual search are done on average until an answer is found?
[ "Unanswerable" ]
[ [] ]
412aff0b2113b7d61c914edf90b90f2994390088
Do they test performance of their approaches using human judgements?
[ "Yes" ]
[ [ "For DQA four participants answered each question, therefore we took the average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values over the four evaluators as the result per question. The detailed answers by the participants and available online.", "To assess the correctness of the answers given both by participants in the DQA experiments, and by the QALD system, we use the classic information retrieval metrics of precision (P), recall (R), and F1. INLINEFORM0 measures the fraction of relevant (correct) answer (items) given versus all answers (answer items) given. INLINEFORM1 is the faction of correct answer (parts) given divided by all correct ones in the gold answer, and INLINEFORM2 is the harmonic mean of INLINEFORM3 and INLINEFORM4 . As an example, if the question is “Where was Albert Einstein born?” (gold answer: “Ulm”), and the system gives two answers “Ulm” and “Bern”, then INLINEFORM5 , INLINEFORM6 and INLINEFORM7 ." ] ]
010e3793eb1342225857d3f95e147d8f8467192a
What are the sizes of both datasets?
[ "The Dutch section consists of 2,333,816 sentences and 53,487,257 words., The SONAR500 corpus consists of more than 500 million words obtained from different domains." ]
[ [ "The datasets used for training, validation and testing contain sentences extracted from the Europarl corpus BIBREF1 and SoNaR corpus BIBREF2. The Europarl corpus is an open-source parallel corpus containing proceedings of the European Parliament. The Dutch section consists of 2,333,816 sentences and 53,487,257 words. The SoNaR corpus comprises two corpora: SONAR500 and SONAR1. The SONAR500 corpus consists of more than 500 million words obtained from different domains. Examples of text types are newsletters, newspaper articles, legal texts, subtitles and blog posts. All texts except for texts from social media have been automatically tokenized, POS tagged and lemmatized. It contains significantly more data and more varied data than the Europarl corpus. Due to the high amount of data in the corpus, only three subparts are used: Wikipedia texts, reports and newspaper articles. These subparts are chosen because the number of wrongly used die and dat is expected to be low." ] ]
c20bb0847ced490a793657fbaf6afb5ef54dad81
Why are the scores for predicting perceived musical hardness and darkness extracted only for a subsample of 503 songs?
[ "Unanswerable" ]
[ [] ]
ff8557d93704120b65d9b597a4fab40b49d24b6d
How long is the model trained?
[ "Unanswerable" ]
[ [] ]
447eb98e602616c01187960c9c3011c62afd7c27
What lyrical topics are present in the metal genre?
[ "Table TABREF10 displays the twenty resulting topics" ]
[ [ "Table TABREF10 displays the twenty resulting topics found within the text corpus using LDA. The topics are numbered in descending order according to their prevalence (weight) in the text corpus. For each topic, a qualitative interpretation is given along with the 10 most salient terms." ] ]
f398587b9a0008628278a5ea858e01d3f5559f65
By how much does SPNet outperform state-of-the-art abstractive summarization methods on evaluation metrics?
[ "SPNet vs best baseline:\nROUGE-1: 90.97 vs 90.68\nCIC: 70.45 vs 70.25" ]
[ [ "We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but a relative low CIC scores. It suggests that the baselines have more room for improvement on preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but is having a higher cost on training time and computing resources. We observe that SPNet outperforms other methods in all the automatic evaluation metrics with a big margin, as it incorporates all the three semantic scaffolds. Semantic slot contributes the most to SPNet's increased performance, bringing the largest increase on all automatic evaluation metrics." ] ]
d5f8707ddc21741d52b3c2a9ab1af2871dc6c90b
What automatic and human evaluation metrics are used to compare SPNet to its counterparts?
[ "ROUGE and CIC, relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair" ]
[ [ "We also perform human evaluation to verify if our method's increased performance on automatic evaluation metrics entails better human perceived quality. We randomly select 100 test samples from MultiWOZ test set for evaluation. We recruit 150 crowd workers from Amazon Mechanical Turk. For each sample, we show the conversation, reference summary, as well as summaries generated by Pointer-Generator and SPNet to three different participants. The participants are asked to score each summary on three indicators: relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair (tie allowed).", "We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but a relative low CIC scores. It suggests that the baselines have more room for improvement on preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but is having a higher cost on training time and computing resources. We observe that SPNet outperforms other methods in all the automatic evaluation metrics with a big margin, as it incorporates all the three semantic scaffolds. Semantic slot contributes the most to SPNet's increased performance, bringing the largest increase on all automatic evaluation metrics." ] ]
58f3bfbd01ba9768172be45a819faaa0de2ddfa4
Is the proposed abstractive dialog summarization dataset open source?
[ "Unanswerable" ]
[ [] ]
73633afbefa191b36cca594977204c6511f9dad4
Is it expected to have speaker role, semantic slot and dialog domain annotations in real world datasets?
[ "Not at the moment, but summaries can be additionaly extended with this annotations." ]
[ [ "Moreover, we can easily extend SPNet to other summarization tasks. We plan to apply semantic slot scaffold to news summarization. Specifically, we can annotate the critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries." ] ]
db39a71080e323ba2ddf958f93778e2b875dcd24
How does SPNet utilize additional speaker role, semantic slot and dialog domain annotations?
[ "Our encoder-decoder framework employs separate encoding for different speakers in the dialog., We integrate semantic slot scaffold by performing delexicalization on original dialogs., We integrate dialog domain scaffold through a multi-task framework." ]
[ [ "Our encoder-decoder framework employs separate encoding for different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$ . The attention distributions and context vectors are calculated as described in section SECREF1. In order to merge these two encoders in our framework, the decoder's hidden state $s_0$ is initialized as:", "We integrate semantic slot scaffold by performing delexicalization on original dialogs. Delexicalization is a common pre-processing step in dialog modeling. Specifically, delexicalization replaces the slot values with its semantic slot name(e.g. replace 18:00 with [time]). It is easier for the language modeling to process delexicalized texts, as they have a reduced vocabulary size. But these generated sentences lack the semantic information due to the delexicalization. Some previous dialog system research ignored this issue BIBREF30 or completed single delexicalized utterance BIBREF31 as generated response. We propose to perform delexicalization in dialog summary, since delexicalized utterances can simplify dialog modeling. We fill the generated templates with slots with the copy and pointing mechanism.", "We integrate dialog domain scaffold through a multi-task framework. Dialog domain indicates different conversation task content, for example, booking hotel, restaurant and taxi in MultiWOZ dataset. Generally, the content in different domains varies so multi-domain task summarization is more difficult than single-domain. We include domain classification as the auxiliary task to incorporate the prior that different domains have different content. Feedback from the domain classification task provides domain specific information for the encoder to learn better representations. For domain classification, we feed the concatenated encoder hidden state through a binary classifier with two linear layers, producing domain probability $d$. The $i^{th}$ element $d_i$ in $d$ represents the probability of the $i^{th}$ domain:" ] ]
6da2cb3187d3f28b75ac0a61f6562a8adf716109
What previous state-of-the-art document summarization methods are used?
[ "Pointer-Generator, Transformer" ]
[ [ "To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, Pointer-Generator BIBREF5 and Transformer BIBREF6. Pointer-Generator is the state-of-the-art method in abstractive document summarization. In inference, we use length penalty and coverage penalty mentioned in BIBREF36. The hyperparameters in the original implementation BIBREF5 were used. Transformer uses attention mechanisms to replace recurrence for sequence transduction. Transformer generalizes well to many sequence-to-sequence problems, so we adapt it to our task, following the implementation in the official OpenNMT-py documentation." ] ]
c47e87efab11f661993a14cf2d7506be641375e4
How does the new evaluation metric consider critical informative entities?
[ "Answer with content missing: (formula for CIC) it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities" ]
[ [ "In this case, the summary has a high ROUGE score, as it has a considerable proportion of word overlap with the reference summary. However, it still has poor relevance and readability, for leaving out one of the most critical information: [time]. ROUGE treats each word equally in computing n-gram overlap while the informativeness actually varies: common words or phrases (e.g. “You are going to\") significantly contribute to the ROUGE score and readability, but they are almost irrelevant to essential contents. The semantic slot values (e.g. [restaurant_name], [time]) are more essential compared to other words in the summary. However, ROUGE did not take this into consideration. To address this drawback in ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows:", "where $V$ stands for a set of delexicalized values in the reference summary, $Count_{match}(v)$ is the number of values co-occurring in the candidate summary and reference summary, and $m$ is the number of values in set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to retain the overall performance.", "CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain." ] ]
14684ad200915ff1e3fc2a89cb614e472a1a2854
Is the new evaluation metric an extension of ROUGE?
[ "No" ]
[ [ "CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain." ] ]
8d1f9d3aa2cc2e2e58d3da0f5edfc3047978f3ee
What measures were used for human evaluation?
[ "To have an estimation about human performance in each metric, we iteratively treat every reference sentence in dev/test data as the prediction to be compared with all references (including itself)." ]
[ [ "To have an estimation about human performance in each metric, we iteratively treat every reference sentence in dev/test data as the prediction to be compared with all references (including itself). That is, if a model has the same reasoning ability with average performance of our crowd workers, its results should exceed this “human bound”." ] ]
5065ff56d3c295b8165cb20d8bcfcf3babe9b1b8
What automatic metrics are used for this task?
[ "BLEU-3/4, ROUGE-2/L, CIDEr, SPICE, BERTScore" ]
[ [ "For automatically evaluating our methods, we propose to use widely used metric for image/video captioning. This is because the proposed CommonGen task can be regarded as also a caption task where the context are incomplete scenes with given concept-sets. Therefore, we choose BLEU-3/4 BIBREF28, ROUGE-2/L BIBREF29, CIDEr BIBREF30, and SPICE BIBREF31 as the main metrics. Apart from these classic metrics, we also include a novel embedding-based metric named BERTScore BIBREF32. To make the comparisons more clear, we show the delta of BERTScore results by subtracting the score of merely using input concept-sets as target sentences, named $\\triangle $BERTS." ] ]
c34a15f1d113083da431e4157aceb11266e9a1b2
Are the models required to also generate rationales?
[ "No" ]
[ [ "We explore how to utilize additional commonsense knowledge (i.e. rationales) as the input to the task. Like we mentioned in Section SECREF6, we search relevant sentences from the OMCS corpus as the additional distant rationales, and ground truth rationale sentences for dev/test data. The inputs are no longer the concept-sets themselves, but in a form of “[rationales$|$concept-set]” (i.e. concatenating the rationale sentences and original concept-set strings)." ] ]
061682beb3dbd7c76cfa26f7ae650e548503d977
Are the rationales generated after the sentences were written?
[ "Yes" ]
[ [ "We collect more human-written scenes for each concept-set in dev and test set through crowd-sourcing via the Amazon Mechanical Turk platform. Each input concept-set is annotated by at least three different humans. The annotators are also required to give sentences as the rationales, which further encourage them to use common sense in creating their scenes. The crowd-sourced sentences correlate well with the associated captions, meaning that it is reasonable to use caption sentences as training data although they can be partly noisy. Additionally, we utilize a search engine over the OMCS corpus BIBREF16 for retrieving relevant propositions as distant rationales in training data." ] ]
3518d8eb84f6228407cfabaf509fd63d60351203
Are the sentences in the dataset written by humans who were shown the concept-sets?
[ "Yes" ]
[ [ "It is true that the above-mentioned associated caption sentences for each concept-set are human-written and do describe scenes that cover all given concepts. However, they are created under specific contexts (i.e. an image or a video) and thus might be less representative for common sense. To better measure the quality and interpretability of generative reasoners, we need to evaluate them with scenes and rationales created by using concept-sets only as the signals for annotators.", "We collect more human-written scenes for each concept-set in dev and test set through crowd-sourcing via the Amazon Mechanical Turk platform. Each input concept-set is annotated by at least three different humans. The annotators are also required to give sentences as the rationales, which further encourage them to use common sense in creating their scenes. The crowd-sourced sentences correlate well with the associated captions, meaning that it is reasonable to use caption sentences as training data although they can be partly noisy. Additionally, we utilize a search engine over the OMCS corpus BIBREF16 for retrieving relevant propositions as distant rationales in training data." ] ]
617c77a600be5529b3391ab0c21504cd288cc7c7
Where do the concept sets come from?
[ "These concept-sets are sampled from several large corpora of image/video captions" ]
[ [ "Towards empowering machines with the generative commonsense reasoning ability, we create a large-scale dataset, named CommonGen, for the constrained text generation task. We collect $37,263$ concept-sets as the inputs, each of which contains three to five common concepts. These concept-sets are sampled from several large corpora of image/video captions, such that the concepts inside them are more likely to co-occur in natural scenes. Through crowd-sourcing via Amazon Mechanical Turk (AMT), we finally obtain $89,028$ human-written sentences as expected outputs. We investigate the performance of sophisticated sequence generation methods for the proposed task with both automatic metrics and human evaluation. The experiments show that all methods are far from human performance in generative commonsense reasoning. Our main contributions are as follows: 1) We introduce the first large-scale constrained text generation dataset targeting at generative commonsense reasoning; 2) We systematically compare methods for this (lexically) constrained text generation with extensive experiments and evaluation. 3) Our code and data are publicly available (w/ the URL in the abstract), so future research in this direction can be directly developed in a unified framework.", "Following the general definition in the largest commonsense knowledge graph, ConceptNet BIBREF11, we understand a concept as a common noun or verb. We aim to test the ability of generating natural scenes with a given set of concepts. The expected concept-sets in our task are supposed to be likely co-occur in natural, daily-life scenes . The concepts in images/videos captions, which usually describe scenes in our daily life, thus possess the desired property. We therefore collect a large amount of caption sentences from a variety of datasets, including VATEX BIBREF4, LSMDC BIBREF12, ActivityNet BIBREF13, and SNLI BIBREF15, forming 1,040,330 sentences in total." ] ]
53d6cbee3606dd106494e2e98aa93fdd95920375
How big are improvements of MMM over state of the art?
[ "test accuracy of 88.9%, which exceeds the previous best by 16.9%" ]
[ [ "We first evaluate our method on the DREAM dataset. The results are summarized in Table TABREF16. In the table, we first report the accuracy of the SOTA models in the leaderboard. We then report the performance of our re-implementation of fine-tuned models as another set of strong baselines, among which the RoBERTa-Large model has already surpassed the previous SOTA. For these baselines, the top-level classifier is a two-layer FCNN for BERT-based models and a one-layer FCNN for the RoBERTa-Large model. Lastly, we report model performances that use all our proposed method, MMM (MAN classifier + speaker normalization + two stage learning strategies). As direct comparisons, we also list the accuracy increment between MMM and the baseline with the same sentence encoder marked by the parentheses, from which we can see that the performance augmentation is over 9% for BERT-Base and BERT-Large. Although the RoBERTa-Large baseline has already outperformed the BERT-Large baseline by around 18%, MMM gives us another $\\sim $4% improvement, pushing the accuracy closer to the human performance. Overall, MMM has achieved a new SOTA, i.e., test accuracy of 88.9%, which exceeds the previous best by 16.9%." ] ]
9dc844f82f520daf986e83466de0c84d93953754
What out-of-domain datasets did the authors use for the coarse-tuning stage?
[ "MultiNLI BIBREF15 and SNLI BIBREF16 " ]
[ [ "We use four MCQA datasets as the target datasets: DREAM BIBREF6, MCTest BIBREF9, TOEFL BIBREF5, and SemEval-2018 Task 11 BIBREF14, which are summarized in Table TABREF11. For the first coarse-tuning stage with NLI tasks, we use MultiNLI BIBREF15 and SNLI BIBREF16 as the out-of-domain source datasets. For the second stage, we use the current largest MCQA dataset, i.e., RACE BIBREF7 as in-domain source dataset. For all datasets, we use the official train/dev/test splits." ] ]
9fe4a2a5b9e5cf29310ab428922cc8e7b2fc1d11
What are state of the art methods MMM is compared to?
[ "FTLM++, BERT-large, XLNet" ]
[ [] ]
36d892460eb863220cd0881d5823d73bbfda172c
What four representative datasets are used for the benchmark?
[ "DREAM, MCTest, TOEFL, and SemEval-2018 Task 11" ]
[ [ "Recently large and powerful pre-trained language models such as BERT BIBREF8 have been achieving the state-of-the-art (SOTA) results on various tasks, however, its potency on MCQA datasets has been severely limited by the data insufficiency. For example, the MCTest dataset has two variants: MC160 and MC500, which are curated in a similar way, and MC160 is considered easier than MC500 BIBREF9. However, BERT-based models perform much worse on MC160 compared with MC500 (8–10% gap) since the data size of the former is about three times smaller. To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models with the constraint of limited training data using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11." ] ]
4cbc56d0d53c4c03e459ac43e3c374b75fd48efe
What baselines did they consider?
[ "LSTM, SCIBERT" ]
[ [ "In this work we investigate state-of-the-art methods for language modelling and sentence classification. Our contributions are centred around developing transformer-based fine-tuning approaches tailored to SR tasks. We compare our sentence classification with the LSTM baseline and evaluate the biggest set of PICO sentence data available at this point BIBREF13. We demonstrate that models based on the BERT architecture solve problems related to ambiguous sentence labels by learning to predict multiple labels reliably. Further, we show that the improved feature representation and contextualization of embeddings lead to improved performance in biomedical data extraction tasks. These fine-tuned models show promising results while providing a level of flexibility to suit reviewing tasks, such as the screening of studies for inclusion in reviews. By predicting on multilingual and full text contexts we showed that the model's capabilities for transfer learning can be useful when dealing with diverse, real-world data.", "Figure FIGREF23 shows the same set of sentences, represented by concatenations of SCIBERT outputs. SCIBERT was chosen as an additional baseline model for fine-tuning because it provided the best representation of embedded PICO sentences. When clustered, its embeddings yielded an adjusted rand score of 0.57 for a concatenation of the two layers, compared with 0.25 for BERT-base." ] ]
e5a965e7a109ae17a42dd22eddbf167be47fca75
What are the problems related to ambiguity in PICO sentence prediction tasks?
[ "Some sentences are associated to ambiguous dimensions in the hidden state output" ]
[ [ "Sentences 1 and 2 are labelled incorrectly, and clearly appear far away from the population class centroid. Sentence 3 is an example of an ambiguous case. It appears very close to the population centroid, but neither its label nor its position reflect the intervention content. This supports a need for multiple tags per sentence, and the fine-tuning of weights within the network." ] ]
082c88e132b4f1bf68abdc3a21ac4af180de1113
How is knowledge retrieved in the memory?
[ "the memory slots correspond to entities and the edges correspond to relationships between entities, each represented as a vector." ]
[ [ "Recently, BIBREF17 proposed a dynamic memory based neural network for implicitly modeling the state of entities present in the text for question answering. However, this model lacks any module for relational reasoning. In response, we propose RelNet, which extends memory-augmented neural networks with a relational memory to reason about relationships between multiple entities present within the text. Our end-to-end method reads text, and writes to both memory slots and edges between them. Intuitively, the memory slots correspond to entities and the edges correspond to relationships between entities, each represented as a vector. The only supervision signal for our method comes from answering questions on the text." ] ]
74091e10f596428135b0ab06008608e09c051565
How is knowledge stored in the memory?
[ "entity memory and relational memory." ]
[ [ "There are three main components to the model: 1) input encoder 2) dynamic memory, and 3) output module. We will describe these three modules in details. The input encoder and output module implementations are similar to the Entity Network BIBREF17 and main novelty lies in the dynamic memory. We describe the operations executed by the network for a single example consisting of a document with $T$ sentences, where each sentence consists of a sequence of words represented with $K$ -dimensional word embeddings $\\lbrace e_1, \\ldots , e_N\\rbrace $ , a question on the document represented as another sequence of words and an answer to the question." ] ]
43b4f7eade7a9bcfaf9cc0edba921a41d6036e9c
What are the relative improvements observed over existing methods?
[ "The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks." ]
[ [ "The results are shown in Table 1 . The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks." ] ]
a75861e6dd72d69fdf77ebd81c78d26c6f7d0864
What is the architecture of the neural network?
[ "extends memory-augmented neural networks with a relational memory to reason about relationships between multiple entities present within the text. , The model is sequential in nature, consisting of the following steps: read text, process it into a dynamic relational memory and then attention conditioned on the question generates the answer. We model the dynamic memory in a fashion similar to Recurrent Entity Networks BIBREF17 and then equip it with an additional relational memory." ]
[ [ "We describe the RelNet model in this section. Figure 1 provides a high-level view of the model. The model is sequential in nature, consisting of the following steps: read text, process it into a dynamic relational memory and then attention conditioned on the question generates the answer. We model the dynamic memory in a fashion similar to Recurrent Entity Networks BIBREF17 and then equip it with an additional relational memory." ] ]