Dataset columns (string length ranges): paper (0–839), paper_id (1–12), table_caption (3–2.35k), table_column_names (13–1.76k), table_content_values (2–11.9k), text (69–2.82k).
Lattice-Based Unsupervised Test-Time Adaptation of Neural Network Acoustic Models
1906.11521
Table 6: WER for adaptation of the TED-LIUM model without i-vectors and the Somali model using best path as a supervision with varying fractions of the adaptation data.
['[EMPTY]', 'TED-LIUM dev', 'TED-LIUM test', 'Somali NB', 'Somali WB']
[['[BOLD] baseline', '10.0', '10.6', '53.7', '57.3'], ['[BOLD] ALL-LAT 100%', '9.1', '9.0', '53.0', '56.5'], ['[BOLD] ALL-LAT 75%', '9.2', '8.8', '53.3', '56.2'], ['[BOLD] ALL-LAT 50%', '9.4', '9.0', '53.8', '56.5'], ['[BOLD] ALL-LAT 25%', '9.7', '9.5', '56.0', '57.0'], ['[BOLD] ALL-BP 100%', '9.9', '10.6', '54.5', '58.2'], ['[BOLD] ALL-BP 75%', '9.6', '9.7', '53.8', '57.8'], ['[BOLD] ALL-BP 50%', '9.4', '9.4', '53.7', '57.1'], ['[BOLD] ALL-BP 25%', '9.6', '9.6', '56.0', '57.2']]
This filtering can be done by using a hard threshold, or by using only the fraction of utterances with the highest confidences. Either way, one extra hyper-parameter is introduced. We experiment with the TED-LIUM model without i-vectors, and with the Somali model. As can be seen from the table, filtering utterances improves results when using best path supervision. The biggest improvement is achieved when using only 50% of the adaptation data. Even then, the TED-LIUM model does not reach the performance obtained when adapting with lattice supervision. Furthermore, adaptation of the Somali model using best path supervision only barely matches the unadapted baseline. This is probably because the WER of the initial Somali model is high, and the lattice provides much more information than a combination of best path supervision and the corresponding confidences. We also performed the same filtering experiment with lattices as supervision and found that keeping 75%–100% of the adaptation data achieves the best results. Overall, adaptation with lattice supervision does not benefit from filtering utterances as much as adaptation with best path supervision.
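A minimal sketch of the fraction-based filtering described above (not the authors' code; utterance IDs and confidence values are hypothetical): keep only the most confident fraction of utterances before adaptation.

```python
from typing import List, Tuple

def filter_by_confidence(utterances: List[Tuple[str, float]], fraction: float) -> List[str]:
    """Keep the `fraction` of utterances with the highest decoder confidence.

    `utterances` is a list of (utterance_id, confidence) pairs; the confidence
    could be, e.g., an average per-frame posterior of the best path.
    """
    ranked = sorted(utterances, key=lambda x: x[1], reverse=True)
    n_keep = max(1, int(round(fraction * len(ranked))))
    return [utt_id for utt_id, _ in ranked[:n_keep]]

# Example: keep the top 50% before best-path adaptation (cf. ALL-BP 50% in Table 6).
adaptation_set = filter_by_confidence(
    [("utt1", 0.93), ("utt2", 0.61), ("utt3", 0.88), ("utt4", 0.42)], fraction=0.5
)
print(adaptation_set)  # ['utt1', 'utt3']
```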
Lattice-Based Unsupervised Test-Time Adaptation of Neural Network Acoustic Models
1906.11521
Table 4: WER for adaptation of the MGB model to episodes in the longitudinal eval data.
['[EMPTY]', 'eval']
[['[BOLD] baseline', '19.9'], ['[BOLD] LHUC-LAT', '19.4'], ['[BOLD] LHUC-BP', '19.5'], ['[BOLD] ALL-LAT', '19.2'], ['[BOLD] ALL-BP', '19.7']]
This provides more adaptation data (30–45 minutes per episode), but perhaps at the cost of losing finer granularity for adaptation. Using the best path with all parameters yields almost no gains (∼1%). When only adapting a subset of the parameters with LHUC, the results are more stable, but this does not perform as well as adapting all parameters with lattice supervision.
The Perceptimatic English Benchmark for Speech Perception Models
2005.03418
Table 1: Percent accuracies for humans (PEB) and models (the bigger the better). GMM is for DPGMM, DS for DeepSpeech. BEnM, BEnT and BMu are (in order) for monophone English, triphone English and multilingual bottleneck models. Art is for articulatory reconstruction.
['[EMPTY]', 'PEB', 'GMM', 'DS', 'BEnM', 'BEnT', 'BMu', 'Art', 'MFCC']
[['En', '79.5', '88.3', '89.5', '91.2', '90.3', '88.9', '77.3', '78.6'], ['Fr', '76.7', '82.0', '80.2', '87.6', '88.8', '88.5', '70.1', '78.3']]
This implies that, to the extent that any of these models accurately captures listeners’ perceived discriminability, listeners’ behaviour on the task, unsurprisingly, cannot correspond to a hard decision at the optimal decision threshold. The results also indicate, as expected, a small native language effect: a decrease in listeners’ discrimination accuracy for the non-English stimuli. Such an effect is also captured by all the models trained on English. We observe that some models show native language effects numerically much larger than human listeners, a point we return to below.
Generating Informative and Diverse Conversational Responses via Adversarial Information Maximization
1809.05972
Table 4: Quantitative evaluation on the Twitter dataset.
['Models', 'Relevance BLEU', 'Relevance ROUGE', 'Relevance Greedy', 'Relevance Average', 'Relevance Extreme', 'Diversity Dist-1', 'Diversity Dist-2', 'Diversity Ent-4']
[['seq2seq', '0.64', '0.62', '1.669', '0.54', '0.34', '0.020', '0.084', '6.427'], ['cGAN', '0.62', '0.61', '1.68', '0.536', '0.329', '0.028', '0.102', '6.631'], ['AIM', '[BOLD] 0.85', '[BOLD] 0.82', '[BOLD] 1.960', '[BOLD] 0.645', '[BOLD] 0.370', '0.030', '0.092', '7.245'], ['DAIM', '0.81', '0.77', '1.845', '0.588', '0.344', '[BOLD] 0.032', '[BOLD] 0.137', '[BOLD] 7.907'], ['MMI', '0.80', '0.75', '1.876', '0.591', '0.348', '0.028', '0.105', '7.156']]
We further compared our methods on the Twitter dataset. We treated all dialog history before the last response in a multi-turn conversation session as the source sentence, and used the last response as the target to form our dataset. We employed a CNN as our encoder because a CNN-based encoder is presumably advantageous in tracking long dialog history compared to an LSTM encoder. We truncated the vocabulary to contain only the 20k most frequent words due to limited flash memory capacity. We evaluated each method on 2k test examples.
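A rough sketch of the data preparation described above (hypothetical tokenized sessions; not the paper's preprocessing script): all turns before the last response form the source, the last response is the target, and the vocabulary is truncated to the 20k most frequent words.

```python
from collections import Counter
from typing import List, Tuple

def build_pairs(sessions: List[List[str]]) -> List[Tuple[str, str]]:
    """Turn each multi-turn session into (source, target): all history vs. last response."""
    pairs = []
    for turns in sessions:
        if len(turns) < 2:
            continue
        pairs.append((" ".join(turns[:-1]), turns[-1]))
    return pairs

def build_vocab(pairs, max_size=20000):
    """Keep only the `max_size` most frequent words; everything else maps to <unk>."""
    counts = Counter()
    for src, tgt in pairs:
        counts.update(src.split())
        counts.update(tgt.split())
    vocab = {"<unk>": 0}
    for word, _ in counts.most_common(max_size):
        vocab[word] = len(vocab)
    return vocab

sessions = [["hi there", "how are you ?", "fine , thanks !"]]
pairs = build_pairs(sessions)
vocab = build_vocab(pairs, max_size=20000)
print(pairs[0])  # ('hi there how are you ?', 'fine , thanks !')
```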
Generating Informative and Diverse Conversational Responses via Adversarial Information Maximization
1809.05972
Table 1: Quantitative evaluation on the Reddit dataset. (∗ is implemented based on [5].)
['Models', 'Relevance BLEU', 'Relevance ROUGE', 'Relevance Greedy', 'Relevance Average', 'Relevance Extreme', 'Diversity Dist-1', 'Diversity Dist-2', 'Diversity Ent-4']
[['seq2seq', '1.85', '0.9', '1.845', '0.591', '0.342', '0.040', '0.153', '6.807'], ['cGAN', '1.83', '0.9', '1.872', '0.604', '0.357', '0.052', '0.199', '7.864'], ['AIM', '[BOLD] 2.04', '[BOLD] 1.2', '[BOLD] 1.989', '[BOLD] 0.645', '0.362', '0.050', '0.205', '8.014'], ['DAIM', '1.93', '1.1', '1.945', '0.632', '[BOLD] 0.366', '[BOLD] 0.054', '[BOLD] 0.220', '[BOLD] 8.128'], ['MMI∗', '1.87', '1.1', '1.864', '0.596', '0.353', '0.046', '0.127', '7.142'], ['Human', '-', '-', '-', '-', '-', '0.129', '0.616', '9.566']]
Quantitative evaluation. We first evaluated our methods on the Reddit dataset using the relevance and diversity metrics. We truncated the vocabulary to contain only the most frequent 20,000 words. We observe that incorporating the adversarial loss improves the diversity of the generated responses (cGAN vs. seq2seq). The relevance under most metrics (except BLEU) increases by a small amount.
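For reference, the Dist-n and Ent-n diversity metrics reported in these tables can be computed roughly as below (a sketch; the paper may use a different log base or tokenization).

```python
import math
from collections import Counter

def distinct_n(responses, n):
    """Dist-n: ratio of unique n-grams to total n-grams over all generated responses."""
    ngrams = []
    for resp in responses:
        toks = resp.split()
        ngrams.extend(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return len(set(ngrams)) / max(1, len(ngrams))

def entropy_n(responses, n=4):
    """Ent-n: entropy of the n-gram frequency distribution (higher = more diverse)."""
    counts = Counter()
    for resp in responses:
        toks = resp.split()
        counts.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values()) if total else 0.0

responses = ["i do not know", "i love this song", "that sounds great to me"]
print(distinct_n(responses, 1), distinct_n(responses, 2), entropy_n(responses, 4))
```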
Efficient and Robust Question Answering from Minimal Context over Documents
1805.08092
Table 4: Results of sentence selection on the dev set of SQuAD and NewsQA. (Top) We compare different models and training methods. We report Top 1 accuracy (Top 1) and Mean Average Precision (MAP). Our selector outperforms the previous state-of-the-art Tan et al. (2018). (Bottom) We compare different selection methods. We report the number of selected sentences (N sent) and the accuracy of sentence selection (Acc). ‘T’, ‘M’ and ‘N’ are training techniques described in Section 3.2 (weight transfer, data modification and score normalization, respectively).
['Selection method', 'SQuAD N sent', 'SQuAD Acc', 'NewsQA N sent', 'NewsQA Acc']
[['Top k\xa0(T+M)', '1', '91.2', '1', '70.9'], ['Top k\xa0(T+M)', '2', '97.2', '3', '89.7'], ['Top k\xa0(T+M)', '3', '98.9', '4', '92.5'], ['Dyn\xa0(T+M)', '1.5', '94.7', '2.9', '84.9'], ['Dyn\xa0(T+M)', '1.9', '96.5', '3.9', '89.4'], ['Dyn\xa0(T+M+N)', '1.5', '98.3', '2.9', '91.8'], ['Dyn\xa0(T+M+N)', '1.9', '[BOLD] 99.3', '3.9', '[BOLD] 94.6']]
We introduce 3 techniques to train the model. (i) As the encoder module of our model is identical to that of S-Reader, we transfer the weights to the encoder module from the QA model trained on the single oracle sentence (Oracle). (ii) We modify the training data by treating a sentence as a wrong sentence if the QA model gets 0 F1 on it, even if it is the oracle sentence. (iii) We normalize the selection scores across sentences (score normalization, described in Section 3.2). Turning to the results: first, our selector outperforms the TF-IDF method and the previous state of the art by a large margin (up to 2.9% MAP).
Efficient and Robust Question Answering from Minimal Context over Documents
1805.08092
Table 4: Results of sentence selection on the dev set of SQuAD and NewsQA. (Top) We compare different models and training methods. We report Top 1 accuracy (Top 1) and Mean Average Precision (MAP). Our selector outperforms the previous state-of-the-art Tan et al. (2018). (Bottom) We compare different selection methods. We report the number of selected sentences (N sent) and the accuracy of sentence selection (Acc). ‘T’, ‘M’ and ‘N’ are training techniques described in Section 3.2 (weight transfer, data modification and score normalization, respectively).
['Model', 'SQuAD Top 1', 'SQuAD MAP', 'NewsQA Top 1', 'NewsQA Top 3', 'NewsQA MAP']
[['TF-IDF', '81.2', '89.0', '49.8', '72.1', '63.7'], ['Our selector', '85.8', '91.6', '63.2', '85.1', '75.5'], ['Our selector\xa0(T)', '90.0', '94.3', '67.1', '87.9', '78.5'], ['Our selector\xa0(T+M, T+M+N)', '[BOLD] 91.2', '[BOLD] 95.0', '[BOLD] 70.9', '[BOLD] 89.7', '[BOLD] 81.1'], ['Tan et\xa0al. ( 2018 )', '-', '92.1', '-', '-', '-']]
We introduce 3 techniques to train the model. (i) As the encoder module of our model is identical to that of S-Reader, we transfer the weights to the encoder module from the QA model trained on the single oracle sentence (Oracle). (ii) We modify the training data by treating a sentence as a wrong sentence if the QA model gets 0 F1 on it, even if it is the oracle sentence. (iii) We normalize the selection scores across sentences (score normalization, described in Section 3.2). Turning to the results: first, our selector outperforms the TF-IDF method and the previous state of the art by a large margin (up to 2.9% MAP).
Efficient and Robust Question Answering from Minimal Context over Documents
1805.08092
Table 8: Results on the dev-full set of TriviaQA (Wikipedia) and the dev set of SQuAD-Open. Full results (including the dev-verified set on TriviaQA) are in Appendix C. For training Full and Minimal on TriviaQA, we use 10 paragraphs and 20 sentences, respectively. For training Full and Minimal on SQuAD-Open, we use 20 paragraphs and 20 sentences, respectively. For evaluating Full and Minimal, we use 40 paragraphs and 5-20 sentences, respectively. ‘n sent’ indicates the number of sentences used during inference. ‘Acc’ indicates accuracy of whether answer text is contained in selected context. ‘Sp’ indicates inference speed. We compare with the results from the sentences selected by TF-IDF method and our selector (Dyn). We also compare with published Rank1-3 models. For TriviaQA(Wikipedia), they are Neural Casecades Swayamdipta et al. (2018), Reading Twice for Natural Language Understanding Weissenborn (2017) and Mnemonic Reader Hu et al. (2017). For SQuAD-Open, they are DrQA Chen et al. (2017) (Multitask), R3 Wang et al. (2018) and DrQA (Plain).
['[EMPTY]', '[EMPTY]', 'TriviaQA (Wikipedia) n sent', 'TriviaQA (Wikipedia) Acc', 'TriviaQA (Wikipedia) Sp', 'TriviaQA (Wikipedia) F1', 'TriviaQA (Wikipedia) EM', 'SQuAD-Open n sent', 'SQuAD-Open Acc', 'SQuAD-Open Sp', 'SQuAD-Open F1', 'SQuAD-Open EM']
[['Full', 'Full', '69', '95.9', 'x1.0', '59.6', '53.5', '124', '76.9', 'x1.0', '41.0', '33.1'], ['Minimal', 'TF-IDF', '5', '73.0', 'x13.8', '51.9', '45.8', '5', '46.1', 'x12.4', '36.6', '29.6'], ['Minimal', 'TF-IDF', '10', '79.9', 'x6.9', '57.2', '51.5', '10', '54.3', 'x6.2', '39.8', '32.5'], ['Minimal', 'Our', '5.0', '84.9', '[BOLD] x13.8', '59.5', '54.0', '5.3', '58.9', '[BOLD] x11.7', '[BOLD] 42.3', '[BOLD] 34.6'], ['Minimal', 'Selector', '10.5', '90.9', 'x6.6', '[BOLD] 60.5', '[BOLD] 54.9', '10.7', '64.0', 'x5.8', '[BOLD] 42.5', '[BOLD] 34.7'], ['Rank 1', 'Rank 1', '-', '-', '-', '56.0', '51.6', '2376', '77.8', '-', '-', '29.8'], ['Rank 2', 'Rank 2', '-', '-', '-', '55.1', '48.6', '-', '-', '-', '37.5', '29.1'], ['Rank 3', 'Rank 3', '-', '-', '-', '52.9', '46.9', '2376', '77.8', '-', '-', '28.4']]
First, Minimal obtains higher F1 and EM than Full, with an inference speedup of up to 13.8×. Second, the model with our sentence selector (Dyn) achieves higher F1 and EM than the model with the TF-IDF selector. For example, on the development-full set, with 5 sentences per question on average, the model with Dyn achieves 59.5 F1 while the model with the TF-IDF method achieves 51.9 F1. Third, we outperform the published state of the art on both datasets.
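A hedged sketch of a Dyn-style selection rule (the paper's exact scoring and threshold may differ): sentences are added in decreasing selector-score order until a cumulative score mass is reached, so the number of selected sentences varies per question.

```python
def dyn_select(sentence_scores, threshold=0.9, max_sents=20):
    """Dynamically pick a variable number of sentences per question:
    take sentences in decreasing score order until their cumulative
    (normalized) score mass exceeds `threshold`."""
    order = sorted(range(len(sentence_scores)), key=lambda i: sentence_scores[i], reverse=True)
    total = sum(sentence_scores) or 1.0
    selected, mass = [], 0.0
    for i in order:
        selected.append(i)
        mass += sentence_scores[i] / total
        if mass >= threshold or len(selected) >= max_sents:
            break
    return sorted(selected)  # keep document order for the reader model

print(dyn_select([0.05, 0.7, 0.1, 0.15], threshold=0.8))  # [1, 3]
```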
Simple and Effective Text Matching with Richer Alignment Features
1908.00300
Table 7: Robustness checks on dev sets of the corresponding datasets.
['[EMPTY]', '[BOLD] SNLI', '[BOLD] Quora', '[BOLD] Scitail']
[['1 block', '88.1±0.1', '88.7±0.1', '88.3±0.8'], ['2 blocks', '88.9±0.2', '89.2±0.2', '[BOLD] 88.9±0.3'], ['3 blocks', '88.9±0.1', '89.4±0.1', '88.8±0.5'], ['4 blocks', '[BOLD] 89.0±0.1', '[BOLD] 89.5±0.1', '88.7±0.5'], ['5 blocks', '89.0±0.2', '89.2±0.2', '88.5±0.5'], ['1 enc. layer', '88.6±0.2', '88.9±0.2', '88.1±0.4'], ['2 enc. layers', '88.9±0.2', '89.2±0.2', '88.9±0.3'], ['3 enc. layers', '[BOLD] 89.2±0.1', '[BOLD] 89.2±0.1', '88.7±0.6'], ['4 enc. layers', '89.1±0.0', '89.1±0.1', '88.7±0.5'], ['5 enc. layers', '89.0±0.1', '89.0±0.2', '[BOLD] 89.1±0.3']]
The number of blocks is tuned in a range from 1 to 3, and the number of layers of the convolutional encoder is tuned from 1 to 3. Although the robustness checks validate with up to 5 blocks and layers, in all other experiments we deliberately limit the maximum number of blocks and layers to 3 to control the size of the model. The initial learning rate is tuned from 0.0001 to 0.003. The batch size is tuned from 64 to 512. The threshold for gradient clipping is set to 5. For all the experiments except for the comparison of ensemble models, we report the average score and the standard deviation of 10 runs. Robustness checks. To check whether our proposed method is robust to different variants of structural hyperparameters, we experiment with (1) the number of blocks varying from 1 to 5 with the number of encoder layers set to 2; (2) the number of encoder layers varying from 1 to 5 with the number of blocks set to 2. Robustness checks are performed on the development sets of SNLI, Quora and Scitail. We can see in the table that fewer blocks or layers may not be sufficient, but adding more blocks or layers than necessary hardly harms the performance. On the WikiQA dataset, our method does not seem to be robust to structural hyperparameter changes. We leave further investigation of the high variance on the WikiQA dataset for future work.
Simple and Effective Text Matching with Richer Alignment Features
1908.00300
Table 6: Ablation study on dev sets of the corresponding datasets.
['[EMPTY]', '[BOLD] SNLI', '[BOLD] Quora', '[BOLD] Scitail', '[BOLD] WikiQA']
[['original', '88.9', '89.4', '88.9', '0.7740'], ['w/o enc-in', '87.2', '85.7', '78.1', '0.7146'], ['residual conn.', '88.9', '89.2', '87.4', '0.7640'], ['simple fusion', '88.8', '88.3', '87.5', '0.7345'], ['alignment alt.', '88.7', '89.3', '88.2', '0.7702'], ['prediction alt.', '88.9', '89.2', '88.8', '0.7558'], ['parallel blocks', '88.8', '88.6', '87.6', '0.7607']]
The first ablation baseline shows that without richer features as the alignment input, the performance on all datasets degrades significantly. This is the key component in the whole model. The results of the second baseline show that vanilla residual connections without direct access to the original point-wise features are not enough to model the relations in many text matching tasks. The simpler implementation of the fusion layer leads to evidently worse performance, indicating that the fusion layer cannot be further simplified. On the other hand, the alignment layer and the prediction layer can be simplified on some of the datasets. In the last ablation study, we can see that parallel blocks perform worse than stacked blocks, which supports the preference for deeper models over wider ones.
Normalized and Geometry-Aware Self-Attention Network for Image Captioning
2003.08897
Table 3: Comparison of normalizing query and key in N-SAN.
['Query', 'Key', 'B@4', 'M', 'R', 'C', 'S']
[['✗', '✗', '38.4', '28.6', '58.4', '128.6', '22.6'], ['✓', '✗', '39.3', '[BOLD] 29.1', '[BOLD] 58.9', '[BOLD] 130.8', '23.0'], ['✗', '✓', '39.2', '29.0', '58.8', '130.1', '22.8'], ['✓', '✓', '[BOLD] 39.4', '[BOLD] 29.1', '58.8', '130.7', '[BOLD] 23.1']]
What if we normalize the keys in addition to the queries? We have the following observations. 1) Normalizing either of Q and K could increase the performance. 2) The performances of normalizing both Q and K and normalizing Q alone are very similar, and are both significantly higher than that of SAN. 3) Normalizing K alone is inferior to normalizing Q alone. The reason is that normalizing K is equivalent to normalizing Θ in Eqn.
Normalized and Geometry-Aware Self-Attention Network for Image Captioning
2003.08897
Table 2: Comparison of using various normalization methods in NSA.
['Approach', 'B@4', 'M', 'R', 'C', 'S']
[['SAN', '38.4', '28.6', '58.4', '128.6', '22.6'], ['LN', '38.5', '28.6', '58.3', '128.2', '22.5'], ['BN', '38.8', '28.9', '58.7', '129.4', '22.8'], ['IN', '[BOLD] 39.4', '[BOLD] 29.2', '[BOLD] 59.0', '130.7', '[BOLD] 23.0'], ['IN w/o [ITALIC] γ, [ITALIC] β', '39.3', '29.1', '58.9', '[BOLD] 130.8', '[BOLD] 23.0']]
Since we introduced IN into the NSA module for normalization, an intuitive question to ask is whether we can replace IN with other normalization methods. We have the following observations. 1) Using LN slightly decreases the performance. We conjecture that this is because LN normalizes activations of all channels with the same normalization terms (μ and σ), thus limiting the expression capacity of each channel when calculating attention weights. 2) IN and IN w/o γ,β significantly outperform SAN and all the other normalization methods. Meanwhile, the extra affine transformations (γ and β) are not necessary. 3) Applying BN outperforms SAN but is inferior to adopting IN. BN has a similar effect to IN in reducing the internal covariate shift by fixing the distribution of the queries. However, as is described in Sec.
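A small numpy sketch of the normalized self-attention idea, under the assumption that IN normalizes each query channel across the query positions of one image; the weight matrices and dimensions are illustrative, not the paper's configuration.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Normalize each channel of x (shape [n_queries, d]) across the query positions."""
    return (x - x.mean(axis=0, keepdims=True)) / (x.std(axis=0, keepdims=True) + eps)

def normalized_self_attention(X, Wq, Wk, Wv):
    """Self-attention where only the queries are normalized before the scaled
    dot product (the best-performing setting of Table 3 keeps K untouched)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    Q = instance_norm(Q)                                  # the only change vs. vanilla SAN
    logits = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(36, 64))                             # 36 region features of dim 64
Wq, Wk, Wv = (rng.normal(size=(64, 64)) for _ in range(3))
print(normalized_self_attention(X, Wq, Wk, Wv).shape)     # (36, 64)
```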
Normalized and Geometry-Aware Self-Attention Network for Image Captioning
2003.08897
Table 4: Comparison of various variants of GSA.
['Approach', '#params', 'B@4', 'M', 'R', 'C', 'S']
[['SAN', '40.2M', '38.4', '28.6', '58.4', '128.6', '22.6'], ['absolute', '40.2M', '38.3', '28.5', '58.4', '128.4', '22.6'], ['content-independent', '40.2M', '39.2', '29.1', '58.9', '131.0', '22.9'], ['key-dependent', '41.5M', '38.9', '29.0', '58.8', '129.5', '22.8'], ['query-dependent', '41.5M', '[BOLD] 39.3', '[BOLD] 29.2', '[BOLD] 59.0', '[BOLD] 131.4', '[BOLD] 23.0']]
"absolute" denotes adding the absolute geometry information of each individual object to its input representation at the bottom of the encoder. We have the following findings. 1) Adding the absolute geometry information ("absolute") is not beneficial to the performance. That is probably because it is too complex for SA to infer the 2D layout of objects from their absolute geometry information. 2) All the proposed variants of GSA improve the performance of SAN, showing the advantage of using relative geometry information. 3) "query-dependent" brings the best performance and outperforms the content-independent variant, proving that incorporating the content information of the associated query can help infer a better geometric bias. 4) "key-dependent" is inferior to "query-dependent". That is because, when using a key-dependent geometric bias, the scores ϕ3ij = K′j⊤Gij are conditioned on different keys K′j, so the differences in Gij may be overwhelmed by the differences in K′j when performing the softmax over the keys' dimension. In comparison, when using a query-dependent geometric bias, the effect of Gij can be highlighted since the scores are conditioned on a common query Q′i when performing the softmax. We did not observe further improvement when combining these variants into ϕ in Eq.
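A sketch of a query-dependent geometric bias added to the content logits; the 1/√d scaling of the bias and the tensor shapes are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def geometry_aware_attention(Q, K, V, G):
    """Attention with a query-dependent geometric bias.

    Q, K, V: [n, d] content projections.
    G:       [n, n, d] relative-geometry embeddings G_ij (e.g. from box offsets).
    The bias phi_ij = Q_i . G_ij is added to the content logits before the softmax.
    """
    d = Q.shape[-1]
    content = Q @ K.T / np.sqrt(d)                       # [n, n]
    geo_bias = np.einsum("id,ijd->ij", Q, G) / np.sqrt(d)
    return softmax(content + geo_bias, axis=-1) @ V

rng = np.random.default_rng(0)
n, dim = 5, 16
Q, K, V = (rng.normal(size=(n, dim)) for _ in range(3))
G = rng.normal(size=(n, n, dim))
print(geometry_aware_attention(Q, K, V, G).shape)        # (5, 16)
```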
Normalized and Geometry-Aware Self-Attention Network for Image Captioning
2003.08897
Table 7: Video captioning results on VATEX dataset.
['Model', 'B@4', 'M', 'R', 'C']
[['VATEX ', '28.2', '21.7', '46.9', '45.7'], ['Transformer (Ours)', '30.6', '22.3', '48.4', '53.4'], ['+NSA', '[BOLD] 31.0', '[BOLD] 22.7', '[BOLD] 49.0', '[BOLD] 57.1']]
We see that the performance of Transformer strongly exceeds that of VATEX, which adopts an LSTM-based architecture. Our Transformer+NSA method consistently improves over Transformer on all metrics. Particularly, our method improves the CIDEr score by 3.7 points when compared to Transformer, and significantly improves the CIDEr score by 11.4 points when compared to VATEX baseline.
Why Comparing Single Performance Scores Does Not Allow to Draw Conclusions About Machine Learning Approaches
1803.09578
Table 1: The same BiLSTM-CRF approach was evaluated twice under Evaluation 1. The threshold column depicts the average difference in percentage points F1-score for statistical significance with 0.04
['[BOLD] Task', '[BOLD] Threshold [ITALIC] τ', '[BOLD] % significant', 'Δ( [ITALIC] test)95', 'Δ( [ITALIC] test) [ITALIC] Max']
[['ACE 2005 - Entities', '0.65', '28.96%', '1.21', '2.53'], ['ACE 2005 - Events', '1.97', '34.48%', '4.32', '9.04'], ['CoNLL 2000 - Chunking', '0.20', '18.36%', '0.30', '0.56'], ['CoNLL 2003 - NER-En', '0.42', '31.02%', '0.83', '1.69'], ['CoNLL 2003 - NER-De', '0.78', '33.20%', '1.61', '3.36'], ['GermEval 2014 - NER-De', '0.60', '26.80%', '1.12', '2.38'], ['TempEval 3 - Events', '1.19', '10.72%', '1.48', '2.99']]
For the ACE 2005 - Events task, we observe in 34.48% of the cases a significant difference between the models A_i^(j) and Ã_i^(j). For the other tasks, we observe similar results: between 10.72% and 33.20% of the cases are statistically significant.
Why Comparing Single Performance Scores Does Not Allow to Draw Conclusions About Machine Learning Approaches
1803.09578
Table 2: The same BiLSTM-CRF approach was evaluated twice under Evaluation 2. The threshold column depicts the average difference in percentage points F1-score for statistical significance with 0.04
['[BOLD] Task', '[BOLD] Spearman [ITALIC] ρ', '[BOLD] Threshold [ITALIC] τ', '[BOLD] % significant', 'Δ( [ITALIC] dev)95', 'Δ( [ITALIC] test)95', 'Δ( [ITALIC] test) [ITALIC] Max']
[['ACE 2005 - Entities', '0.153', '0.65', '24.86%', '0.42', '1.04', '1.66'], ['ACE 2005 - Events', '0.241', '1.97', '29.08%', '1.29', '3.73', '7.98'], ['CoNLL 2000 - Chunking', '0.262', '0.20', '15.84%', '0.10', '0.29', '0.49'], ['CoNLL 2003 - NER-En', '0.234', '0.42', '21.72%', '0.27', '0.67', '1.12'], ['CoNLL 2003 - NER-De', '0.422', '0.78', '25.68%', '0.58', '1.44', '2.22'], ['GermEval 2014 - NER-De', '0.333', '0.60', '16.72%', '0.48', '0.90', '1.63'], ['TempEval 3 - Events', '-0.017', '1.19', '9.38%', '0.74', '1.41', '2.57']]
For all tasks, we observe a small Spearman’s rank correlation ρ between the development and the test score. The low correlation indicates that a run with a high development score does not necessarily yield a high test score. The value 3.68 for the ACE 2005 - Events task indicates that, given two models with the same performance on the development set, the test performance can vary by up to 3.68 percentage points F1-score (95% interval).
Why Comparing Single Performance Scores Does Not Allow to Draw Conclusions About Machine Learning Approaches
1803.09578
Table 5: 95% percentile of Δ(test) after averaging.
['[BOLD] Task', 'Δ( [ITALIC] test)95 [BOLD] for [ITALIC] n scores [BOLD] 1', 'Δ( [ITALIC] test)95 [BOLD] for [ITALIC] n scores [BOLD] 3', 'Δ( [ITALIC] test)95 [BOLD] for [ITALIC] n scores [BOLD] 5', 'Δ( [ITALIC] test)95 [BOLD] for [ITALIC] n scores [BOLD] 10', 'Δ( [ITALIC] test)95 [BOLD] for [ITALIC] n scores [BOLD] 20']
[['ACE-Ent.', '1.21', '0.72', '0.51', '0.38', '0.26'], ['ACE-Ev.', '4.32', '2.41', '1.93', '1.39', '0.97'], ['Chk.', '0.30', '0.16', '0.14', '0.09', '0.06'], ['NER-En', '0.83', '0.45', '0.35', '0.26', '0.18'], ['NER-De', '1.61', '0.94', '0.72', '0.51', '0.37'], ['GE 14', '1.12', '0.64', '0.48', '0.34', '0.25'], ['TE 3', '1.48', '0.81', '0.63', '0.48', '0.32']]
For increasing n the value Δ(test)95 decreases, i.e. the mean score becomes more stable. However, for the CoNLL 2003 NER-En task we still observe a difference of 0.26 percentage points F1-score between the mean scores for n=10. For the ACE 2005 Events dataset, the value is as high as 1.39 percentage points F1-score.
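A hedged sketch of how a Δ(test)95-style quantity can be estimated from repeated runs (the paper's exact sampling procedure may differ): draw two groups of n scores, average each, and take the 95th percentile of the absolute difference of the two means.

```python
import numpy as np

def delta_test_95(test_scores, n=1, n_pairs=10000, seed=0):
    """Estimate Delta(test)_95: the 95th percentile of |mean_A - mean_B| when two
    'approaches' are each represented by the average of n randomly drawn test scores."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(test_scores, dtype=float)
    a = rng.choice(scores, size=(n_pairs, n)).mean(axis=1)
    b = rng.choice(scores, size=(n_pairs, n)).mean(axis=1)
    return np.percentile(np.abs(a - b), 95)

# Toy example: 50 F1 scores from repeated runs of the same model.
rng = np.random.default_rng(1)
runs = rng.normal(loc=90.0, scale=0.4, size=50)
for n in (1, 3, 5, 10):
    print(n, round(delta_test_95(runs, n=n), 3))  # the estimate shrinks as n grows
```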
Locally Adaptive Translation for Knowledge Graph Embedding
1512.01370
Table 1: Different choices of optimal loss functions and the predictive performances over three data sets Subset1, Subset2 and FB15K, where fr(h,t)=∥h+r−t∥22, (h,r,t) is a triple in knowledge graph, and (h′,r,t′) is incorrect triple.
['Data sets', 'Optimal loss function', 'Mean Rank Raw', 'Mean Rank Filter']
[['Subset1', '[ITALIC] fr( [ITALIC] h, [ITALIC] t)+3− [ITALIC] fr( [ITALIC] h′, [ITALIC] t′)', '339', '240'], ['Subset2', '[ITALIC] fr( [ITALIC] h, [ITALIC] t)+2− [ITALIC] fr( [ITALIC] h′, [ITALIC] t′)', '500', '365'], ['FB15K', '[ITALIC] fr( [ITALIC] h, [ITALIC] t)+1− [ITALIC] fr( [ITALIC] h′, [ITALIC] t′)', '243', '125']]
To verify this, we construct knowledge graphs with different locality. We simply partition a knowledge graph into different subgraphs in a uniform manner. Each subgraph contains different types of relations and their corresponding entities. Moreover, all subgraphs have an identical number of relations so that the numbers of entities are balanced. We claim that, over different subgraphs, the optimal margin-based loss function may differ in terms of the margin. To validate this point, we apply the embedding method to each subgraph. We partition FB15K into five subsets with an equal number of relations. For example, one subset, named Subset1, contains 13,666 entities and 269 relations; another subset, named Subset2, has 13,603 entities and 269 relations. More precisely, their optimal margins take different values, 3 and 2, respectively. This suggests that knowledge embedding of different knowledge graphs with a global setting of the loss function cannot represent the locality of knowledge graphs well, and that it is indispensable to propose a locality-sensitive loss function with different margins.
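For concreteness, a minimal sketch of the margin-based loss discussed here, max(0, f_r(h,t) + γ − f_r(h′,t′)) with f_r(h,t) = ‖h+r−t‖²₂ (toy random embeddings; not the authors' training code).

```python
import numpy as np

def f_r(h, r, t):
    """Translation-based score f_r(h, t) = ||h + r - t||_2^2."""
    return float(np.sum((h + r - t) ** 2))

def margin_loss(h, r, t, h_neg, t_neg, margin):
    """Margin-based ranking loss max(0, f_r(h,t) + margin - f_r(h',t'))."""
    return max(0.0, f_r(h, r, t) + margin - f_r(h_neg, r, t_neg))

rng = np.random.default_rng(0)
h, r, t, h_neg, t_neg = (rng.normal(size=50) for _ in range(5))
# Different subgraphs prefer different margins (3, 2 and 1 in Table 1):
for margin in (3.0, 2.0, 1.0):
    print(margin, round(margin_loss(h, r, t, h_neg, t_neg, margin), 3))
```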
Locally Adaptive Translation for Knowledge Graph Embedding
1512.01370
Table 3: Evaluation results on link prediction.
['Data sets Metric', 'WN18 Mean Rank', 'WN18 Mean Rank', 'FB15K Mean Rank', 'FB15K Mean Rank']
[['Metric', 'Raw', 'Filter', 'Raw', 'Filter'], ['Unstructured', '315', '304', '1,074', '979'], ['RESCAL', '1,180', '1,163', '828', '683'], ['SE', '1,011', '985', '273', '162'], ['SME(linear)', '545', '533', '274', '154'], ['SME(bilinear)', '526', '509', '284', '158'], ['LFM', '469', '456', '283', '164'], ['TransE', '263', '251', '243', '125'], ['TransH(bern)', '401', '388', '212', '87'], ['TransH(unif)', '318', '303', '211', '84'], ['TransA', '165', '153', '164', '58']]
All parameters are determined on the validation set. It can be seen that on both data sets, TransA obtains the lowest mean rank. Furthermore, on WN18, Unstructured and TransH(unif) perform best among the baselines, but TransA decreases the mean rank by about 150 compared with both of them. On FB15K, TransH(unif) is the best baseline; TransA decreases its mean rank by 30–50. Notice that the decreases on WN18 and FB15K are different, because the number of relations in WN18 is quite small and the relation-specific margin is very small too. In this case, the optimal margin is almost equal to the entity-specific margin. On FB15K, by contrast, the number of relations is 1,345, and the optimal margin is the combination of the entity-specific margin and the relation-specific margin.
Locally Adaptive Translation for Knowledge Graph Embedding
1512.01370
Table 4: Evaluation results of triple classification. (%)
['Data sets', 'WN11', 'FB13', 'FB15K']
[['SE', '53.0', '75.2', '-'], ['SME(linear)', '70.0', '63.7', '-'], ['SLM', '69.9', '85.3', '-'], ['LFM', '73.8', '84.3', '-'], ['NTN', '70.4', '87.1', '68.5'], ['TransH(unif)', '77.7', '76.5', '79.0'], ['TransH(bern)', '78.8', '83.3', '80.2'], ['TransA', '93.2', '82.8', '87.7']]
All parameters are determined on the validation set. The optimal settings are: λ=0.001, d=220, B=120, μ=0.5, with L1 as the dissimilarity measure on WN11; and λ=0.001, d=50, B=480, μ=0.5, with L1 as the dissimilarity measure on FB13. On WN11, TransA outperforms the other methods. On FB13, the method NTN proves more powerful, which is consistent with the results in previous literature. On FB15K, TransA also performs best. Since FB13 is much denser than FB15K, NTN is more expressive on dense graphs; on sparse graphs, TransA is superior to the other state-of-the-art embedding methods.
Evaluating Dialogue Generation Systems via Response Selection
2004.14302
Table 3: Correlations between the ground-truth system ranking and the rankings by automatic evaluation.
['Metrics', 'Spearman', 'p-value']
[['BLEU-1', '−0.36', '0.30'], ['BLEU-2', '0.085', '0.82'], ['METEOR', '0.073', '0.84'], ['ROUGE-L', '0.35', '0.33'], ['RANDOM', '0.43', '-'], ['[BOLD] CHOSEN', '[BOLD] 0.48', '[BOLD] 0.19'], ['HUMAN', '0.87', '0.0038']]
First, we established the human upper bound: we evaluated the correlation between the rankings made by different annotators (HUMAN). We randomly divided the human evaluation into two groups and made two rankings. The correlation coefficient between the two rankings was 0.87. Second, we found that the rankings made using existing automatic evaluation metrics correlate poorly with the ground-truth ranking. BLEU, often used to evaluate generation systems, does not correlate with human evaluation at all. One exception is ROUGE-L; however, its correlation coefficient is lower than 0.4, the level usually taken to indicate a reasonable correlation. Third, we found that the ranking made using our test set correlates reasonably with the ground-truth ranking compared with the other metrics, and its correlation coefficient (CHOSEN) is higher than 0.4.
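Computing the ranking correlation used here is a one-liner with SciPy (the rankings below are hypothetical, for illustration only).

```python
from scipy.stats import spearmanr

# Hypothetical example: ground-truth ranking of 10 systems (1 = best) vs. the
# ranking induced by an automatic metric such as CHOSEN.
ground_truth = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
metric_rank  = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]

rho, p_value = spearmanr(ground_truth, metric_rank)
print(round(rho, 3), round(p_value, 4))
```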
A Wind of Change: Detecting and Evaluating Lexical Semantic Change across Times and Domains
1906.02979
Table 5: ρ for SGNS+OP+CD (L/P, win=2, k=1, t=None) before (ORG) and after time-shuffling (SHF) and downsampling them to the same frequency (+DWN).
['[BOLD] Dataset', '[BOLD] ORG', '[BOLD] SHF', '[BOLD] +DWN']
[['[BOLD] DURel', '[BOLD] 0.816', '0.180', '0.372'], ['[BOLD] SURel', '[BOLD] 0.767', '0.763', '0.576']]
As we saw, dispersion measures are sensitive to frequency. In order to test for this influence within our datasets we follow Dubossarsky et al. For each target word we merge all sentences from the two corpora Ca and Cb containing it, shuffle them, split them again into two sets while holding their frequencies from the original corpora approximately stable, and merge them again with the original corpora. This reduces the target words’ mean degree of LSC between Ca and Cb significantly. Accordingly, the mean degree of LSC predicted by the models should reduce significantly if the models measure LSC (and not some other controlled property of the dataset such as frequency). We find that the mean prediction on a result sample (L/P, win=2) indeed reduces from 0.5 to 0.36 on DURel and from 0.53 to 0.44 on SURel. Moreover, shuffling should reduce the correlation of individual model predictions with the gold rank, as many items in the gold rank have a high degree of LSC, supposedly being canceled out by the shuffling and hence randomizing the ranking. Testing this on a result sample (SGNS+OP+CD, L/P, win=2, k=1, t=None), the correlation indeed drops from 0.816 (ORG) to 0.180 on the shuffled (SHF) corpora for DURel, but not for SURel, where the correlation remains stable (0.767 vs. 0.763). We hypothesize that the latter may be due to SURel’s frequency properties and find that downsampling all target words to approximately the same frequency in both corpora (≈50) reduces the correlation (+DWN). However, there is still a rather high correlation left (0.576). Presumably, other factors play a role: (i) Time-shuffling may not totally randomize the rankings because words with a high change still end up having slightly different meaning distributions in the two corpora than words with no change at all. Combined with the fact that the SURel rank is less uniformly distributed than DURel, this may lead to a rough preservation of the SURel rank after shuffling. (ii) For words with a strong change the shuffling creates two equally polysemous sets of word uses from two monosemous sets. The models may be sensitive to the different variances in these sets, and hence predict stronger change for more polysemous sets of uses. Overall, our findings demonstrate that much more work has to be done to understand the effects of time-shuffling as well as sensitivity effects of LSC detection models to frequency and polysemy.
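A simplified sketch of the time-shuffling control described above (the actual procedure also merges the re-split sets back with the original corpora; sentence strings are toy placeholders).

```python
import random

def time_shuffle(sentences_a, sentences_b, seed=0):
    """Shuffle the uses of one target word across two corpora while keeping its
    frequency in each corpus unchanged."""
    merged = list(sentences_a) + list(sentences_b)
    random.Random(seed).shuffle(merged)
    n_a = len(sentences_a)
    return merged[:n_a], merged[n_a:]

# Toy target word with 3 uses in corpus A and 2 uses in corpus B.
a = ["use a1", "use a2", "use a3"]
b = ["use b1", "use b2"]
new_a, new_b = time_shuffle(a, b)
print(len(new_a), len(new_b))  # 3 2  (frequencies preserved, meanings mixed)
```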
A Wind of Change: Detecting and Evaluating Lexical Semantic Change across Times and Domains
1906.02979
Table 3: Best and mean ρ scores across similarity measures (CD, LND, JSD) on semantic representations.
['[BOLD] Dataset', '[BOLD] Representation', '[BOLD] best', '[BOLD] mean']
[['[BOLD] DURel', 'raw count', '0.639', '0.395'], ['[BOLD] DURel', 'PPMI', '0.670', '0.489'], ['[BOLD] DURel', 'SVD', '0.728', '0.498'], ['[BOLD] DURel', 'RI', '0.601', '0.374'], ['[BOLD] DURel', 'SGNS', '[BOLD] 0.866', '[BOLD] 0.502'], ['[BOLD] DURel', 'SCAN', '0.327', '0.156'], ['[BOLD] SURel', 'raw count', '0.599', '0.120'], ['[BOLD] SURel', 'PPMI', '0.791', '0.500'], ['[BOLD] SURel', 'SVD', '0.639', '0.300'], ['[BOLD] SURel', 'RI', '0.622', '0.299'], ['[BOLD] SURel', 'SGNS', '[BOLD] 0.851', '[BOLD] 0.520'], ['[BOLD] SURel', 'SCAN', '0.082', '-0.244']]
SGNS is clearly the best vector space model, even though its mean performance does not outperform other representations as clearly as its best performance. Regarding count models, PPMI and SVD show the best results.
A Wind of Change: Detecting and Evaluating Lexical Semantic Change across Times and Domains
1906.02979
Table 4: Mean ρ scores for CD across the alignments. Applies only to RI, SVD and SGNS.
['[BOLD] Dataset', '[BOLD] OP', 'OP−', 'OP+', '[BOLD] WI', '[BOLD] None']
[['[BOLD] DURel', '0.618', '0.557', '[BOLD] 0.621', '0.468', '0.254'], ['[BOLD] SURel', '[BOLD] 0.590', '0.514', '0.401', '0.492', '0.285']]
OP+ has the best mean performance on DURel, but performs poorly on SURel. Artetxe et al. show that the additional pre- and post-processing steps of OP+ can be harmful in certain conditions. We tested the influence of the different steps and identified the non-orthogonal whitening transformation as the main reason for a performance drop of ≈20%. As expected, the mean performance drops considerably. However, it remains positive, which suggests that the spaces learned in the models are not random but rather slightly rotated variants. Especially interesting is the comparison of Word Injection (WI) where one common vector space is learned against the OP-models where two separately learned vector spaces are aligned. We found that OP profits from mean-centering in the pre-processing step: applying mean-centering to WI matrices improves the performance by 3% on WI+SGNS+CD.
Retrofitting Word Vectors to Semantic Lexicons
1411.4166
Table 3: Absolute performance changes for including PPDB information while training LBL vectors. Spearman’s correlation (3 left columns) and accuracy (3 right columns) on different tasks. Bold indicates greatest improvement.
['Method', '[ITALIC] k, [ITALIC] γ', 'MEN-3k', 'RG-65', 'WS-353', 'TOEFL', 'SYN-REL', 'SA']
[['LBL (Baseline)', '[ITALIC] k=∞, [ITALIC] γ=0', '58.0', '42.7', '53.6', '66.7', '31.5', '72.5'], ['[BOLD] LBL + Lazy', '[ITALIC] γ=1', '–0.4', '4.2', '0.6', '–0.1', '0.6', '1.2'], ['[BOLD] LBL + Lazy', '[ITALIC] γ=0.1', '0.7', '8.1', '0.4', '–1.4', '0.7', '0.8'], ['[BOLD] LBL + Lazy', '[ITALIC] γ=0.01', '0.7', '9.5', '1.7', '2.6', '1.9', '0.4'], ['[BOLD] LBL + Periodic', '[ITALIC] k=100M', '3.8', '18.4', '3.6', '12.0', '4.8', '1.3'], ['[BOLD] LBL + Periodic', '[ITALIC] k=50M', '3.4', '[BOLD] 19.5', '4.4', '18.6', '0.6', '[BOLD] 1.9'], ['[BOLD] LBL + Periodic', '[ITALIC] k=25M', '0.5', '18.1', '2.7', '[BOLD] 21.3', '–3.7', '0.8'], ['[BOLD] LBL + Retrofitting', '–', '[BOLD] 5.7', '15.6', '[BOLD] 5.5', '18.6', '[BOLD] 14.7', '0.9']]
Results. For lazy, γ=0.01 performs best, but the method is in most cases not highly sensitive to γ’s value. For periodic, which overall leads to greater improvements over the baseline than lazy, k=50M performs best, although all other values of k also outperform the baseline. Retrofitting, which can be applied to any word vectors regardless of how they are trained, is competitive and sometimes better.
Retrofitting Word Vectors to Semantic Lexicons
1411.4166
Table 2: Absolute performance changes with retrofitting. Spearman’s correlation (3 left columns) and accuracy (3 right columns) on different tasks. Higher scores are always better. Bold indicates greatest improvement for a vector type.
['Lexicon', 'MEN-3k', 'RG-65', 'WS-353', 'TOEFL', 'SYN-REL', 'SA']
[['Glove', '73.7', '76.7', '60.5', '89.7', '67.0', '79.6'], ['+PPDB', '1.4', '2.9', '–1.2', '[BOLD] 5.1', '–0.4', '[BOLD] 1.6'], ['+WN [ITALIC] syn', '0.0', '2.7', '0.5', '[BOLD] 5.1', '–12.4', '0.7'], ['+WN [ITALIC] all', '[BOLD] 2.2', '[BOLD] 7.5', '[BOLD] 0.7', '2.6', '–8.4', '0.5'], ['+FN', '–3.6', '–1.0', '–5.3', '2.6', '–7.0', '0.0'], ['SG', '67.8', '72.8', '65.6', '85.3', '73.9', '81.2'], ['+PPDB', '[BOLD] 5.4', '3.5', '[BOLD] 4.4', '[BOLD] 10.7', '–2.3', '[BOLD] 0.9'], ['+WN [ITALIC] syn', '0.7', '3.9', '0.0', '9.3', '–13.6', '0.7'], ['+WN [ITALIC] all', '2.5', '[BOLD] 5.0', '1.9', '9.3', '–10.7', '–0.3'], ['+FN', '–3.2', '2.6', '–4.9', '1.3', '–7.3', '0.5'], ['GC', '31.3', '62.8', '62.3', '60.8', '10.9', '67.8'], ['+PPDB', '[BOLD] 7.0', '6.1', '2.0', '[BOLD] 13.1', '[BOLD] 5.3', '[BOLD] 1.1'], ['+WN [ITALIC] syn', '3.6', '6.4', '0.6', '7.3', '–1.7', '0.0'], ['+WN [ITALIC] all', '6.7', '[BOLD] 10.2', '[BOLD] 2.3', '4.4', '–0.6', '0.2'], ['+FN', '1.8', '4.0', '0.0', '4.4', '–0.6', '0.2'], ['Multi', '75.8', '75.5', '68.1', '84.0', '45.5', '81.0'], ['+PPDB', '[BOLD] 3.8', '4.0', '[BOLD] 6.0', '[BOLD] 12.0', '[BOLD] 4.3', '0.6'], ['+WN [ITALIC] syn', '1.2', '0.2', '2.2', '6.6', '–12.3', '[BOLD] 1.4'], ['+WN [ITALIC] all', '2.9', '[BOLD] 8.5', '4.3', '6.6', '–10.6', '[BOLD] 1.4'], ['+FN', '1.8', '4.0', '0.0', '4.4', '–0.6', '0.2']]
All of the lexicons offer high improvements on the word similarity tasks (the first three columns). On the TOEFL task, we observe large improvements of the order of 10 absolute points in accuracy for all lexicons except for FrameNet. FrameNet’s performance is weaker, in some cases leading to worse performance (e.g., with Glove and SG vectors). For the extrinsic sentiment analysis task, we observe improvements using all the lexicons and gain 1.4% (absolute) in accuracy for the Multi vectors over the baseline. This increase is statistically significant (p<0.01, McNemar).
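A compact sketch of the retrofitting update, with the neighbour weights β_ij set to the inverse neighbour count as is common; hyperparameters and lexicon handling in the released tool may differ.

```python
import numpy as np

def retrofit(vectors, lexicon, n_iters=10, alpha=1.0):
    """Iterative retrofitting: pull each word vector towards the average of its
    lexicon neighbours while staying close to its original distributional vector.
    `vectors`: {word: np.array}, `lexicon`: {word: [neighbour words]}."""
    new_vecs = {w: v.copy() for w, v in vectors.items()}
    for _ in range(n_iters):
        for w, neighbours in lexicon.items():
            nbrs = [n for n in neighbours if n in new_vecs]
            if w not in new_vecs or not nbrs:
                continue
            beta = 1.0 / len(nbrs)
            num = alpha * vectors[w] + sum(beta * new_vecs[n] for n in nbrs)
            new_vecs[w] = num / (alpha + beta * len(nbrs))
    return new_vecs

vecs = {"happy": np.array([1.0, 0.0]), "glad": np.array([0.0, 1.0]), "sad": np.array([-1.0, 0.0])}
lex = {"happy": ["glad"], "glad": ["happy"]}
print(retrofit(vecs, lex)["happy"])  # moved towards "glad", away from its original position
```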
Searching for Effective Neural Extractive Summarization: What Works and What’s Next
1907.03491
Table 5: Results of different architectures with different pre-trained knowledge on CNN/DailyMail, where Enc. and Dec. represent document encoder and decoder respectively.
['[BOLD] Model [BOLD] Dec.', '[BOLD] Model [BOLD] Enc.', '[BOLD] R-1 [BOLD] Baseline', '[BOLD] R-2 [BOLD] Baseline', '[BOLD] R-L [BOLD] Baseline', '[BOLD] R-1 [BOLD] + GloVe', '[BOLD] R-2 [BOLD] + GloVe', '[BOLD] R-L [BOLD] + GloVe', '[BOLD] R-1 [BOLD] + BERT', '[BOLD] R-2 [BOLD] + BERT', '[BOLD] R-L [BOLD] + BERT', '[BOLD] R-1 [BOLD] + Newsroom', '[BOLD] R-2 [BOLD] + Newsroom', '[BOLD] R-L [BOLD] + Newsroom']
[['SeqLab', 'LSTM', '41.22', '18.72', '37.52', '[BOLD] 41.33', '[BOLD] 18.78', '[BOLD] 37.64', '42.18', '19.64', '38.53', '41.48', '[BOLD] 18.95', '37.78'], ['SeqLab', 'Transformer', '41.31', '[BOLD] 18.85', '37.63', '40.19', '18.67', '37.51', '42.28', '[BOLD] 19.73', '38.59', '41.32', '18.83', '37.63'], ['Pointer', 'LSTM', '[BOLD] 41.56', '18.77', '[BOLD] 37.83', '41.15', '18.38', '37.43', '[BOLD] 42.39', '19.51', '[BOLD] 38.69', '41.35', '18.59', '37.61'], ['Pointer', 'Transformer', '41.36', '18.59', '37.67', '41.10', '18.38', '37.41', '42.09', '19.31', '38.41', '[BOLD] 41.54', '18.73', '[BOLD] 37.83']]
As shown in Tab. 5, pre-trained GloVe embeddings bring little benefit; however, when the models are equipped with BERT, the performances of all types of architectures improve by a large margin. Specifically, the CNN-LSTM-Pointer model achieves a new state of the art with 42.11 on R-1, surpassing existing models dramatically.
Enriching Neural Models with Targeted Features for Dementia Detection
1906.05483
Table 3: Performance of evaluated models.
['[BOLD] Approach', '[BOLD] Accuracy', '[BOLD] Precision', '[BOLD] Recall', '[BOLD] F1', '[BOLD] AUC', '[BOLD] TN', '[BOLD] FP', '[BOLD] FN', '[BOLD] TP']
[['C-LSTM', '0.8384', '0.8683', '0.9497', '0.9058', '0.9057', '6.3', '15.6', '5.3', '102.6'], ['C-LSTM-Att', '0.8333', '0.8446', '0.9778', '0.9061', '0.9126', '2.6', '19.3', '2.3', '105.6'], ['C-LSTM-Att-w', '0.8512', '0.9232', '0.8949', '0.9084', '0.9139', '14.0', '8.0', '11.3', '96.6'], ['OURS', '0.8495', '0.8508', '[BOLD] 0.9965', '0.9178', '0.9207', '1.0', '16.6', '0.3', '95.0'], ['OURS-Att', '0.8466', '0.8525', '0.9895', '0.9158', '[BOLD] 0.9503', '1.3', '16.3', '1.0', '94.3'], ['OURS-Att-w', '[BOLD] 0.8820', '[BOLD] 0.9312', '0.9298', '[BOLD] 0.9305', '0.9498', '11.0', '6.6', '6.6', '88.6']]
As is demonstrated, our proposed model achieves the highest performance in Accuracy, Precision, Recall, F1, and AUC. It outperforms the state of the art (C-LSTM) by 5.2%, 7.1%, 4.9%, 2.6%, and 3.7%, respectively.
A Corpus for Modeling Word Importance in Spoken Dialogue Transcripts
1801.09746
Table 2: Model performance in terms of RMS deviation and macro-averaged F1 score, with best results in bold font.
['[BOLD] Model', '[BOLD] RMS', '[ITALIC] F1 [BOLD] (macro)']
[['LSTM-CRF', '0.154', '[BOLD] 0.60'], ['LSTM-SIG', '[BOLD] 0.120', '0.519']]
While the LSTM-CRF had a better (higher) F-score on the classification task, its RMS score was worse (higher) than the LSTM-SIG model, which may be due to the limitation of the model as discussed in Section 5.
Weak Supervision Enhanced Generative Network for Question Generation
1907.00607
Table 1: Comparison with other methods on SQuAD dataset. We demonstrate automatic evaluation results on BLEU 1-4, ROUGE-L, METEOR metrics. The best performance for each column is highlighted in boldface. The WeGen without pre-training means the pipeline of Answer-Related Encoder and Transferred Interaction module are not used and the control gate is abandoned. Refer to Section 4.4 for more details.
['Model', 'BLEU 1', 'BLEU 2', 'BLEU 3', 'BLEU 4', 'ROUGE-L', 'METEOR']
[['Vanilla Seq2Seq', '17.13', '8.28', '4.74', '2.94', '18.92', '7.23'], ['Seq2Seq+Attention', '17.90', '9.64', '5.68', '3.34', '19.95', '8.63'], ['Transformer', '15.14', '7.27', '3.94', '1.61', '16.47', '5.93'], ['Seq2Seq+Attention+Copy', '29.17', '19.45', '12.63', '10.43', '28.97', '17.63'], ['[BOLD] WeGen', '[BOLD] 32.65', '[BOLD] 22.14', '[BOLD] 15.86', '[BOLD] 12.03', '[BOLD] 32.36', '[BOLD] 20.25'], ['WeGen w/o pre-training', '31.14', '20.26', '13.87', '11.25', '31.04', '18.42']]
The experimental results reveal a number of interesting points. The copy mechanism improves the results significantly. It uses an attentive read of the word embeddings of the encoder sequence and a selective read of location-aware hidden states to enhance the capability of the decoder, and it demonstrates the effectiveness of the repeat pattern in human communication. The Transformer structure performs badly, achieving even worse results than Vanilla Seq2Seq. This suggests that pure attention-based models are not sufficient for question generation, and that the local features of sequences and the variant semantic relations should be modelled more effectively.
The IBM 2015 English Conversational Telephone Speech Recognition System
1505.05899
Table 5: Comparison of word error rates for different language models.
['LM', 'WER SWB', 'WER CH']
[['Baseline 4M 4-gram', '9.3', '15.6'], ['37M 4-gram (n-gram)', '8.8', '15.3'], ['n-gram + model M', '8.4', '14.3'], ['n-gram + model M + NNLM', '8.0', '14.1']]
This new n-gram LM was used in combination with our best acoustic model to decode and generate word lattices for further LM rescoring experiments. The WER improved by 0.5% for SWB and 0.3% for CallHome. Part of this improvement (0.1-0.2%) was due to also using a larger beam for decoding. We built a model M LM on each corpus and interpolated the models, together with the 37M n-gram LM. We built two NNLMs for interpolation. One was trained on just the most relevant data: the 24M word corpus (Switchboard/Fisher/CallHome acoustic transcripts).
The IBM 2015 English Conversational Telephone Speech Recognition System
1505.05899
Table 1: Word error rates of sigmoid vs. Maxout networks trained with annealed dropout (Maxout-AD) for ST CNNs, DNNs and score fusion on Hub5’00 SWB. Note that all networks are trained only on the SWB-1 data (262 hours).
['Model', 'WER SWB (ST) sigmoid', 'WER SWB (ST) Maxout-AD']
[['DNN', '11.9', '11.0'], ['CNN', '11.8', '11.6'], ['DNN+CNN', '10.5', '10.2']]
All Maxout networks utilize 2 filters per hidden unit, and the same number of layers and roughly the same number of parameters per layer as the sigmoid-based DNN/CNN counterparts. Parameter equalization is achieved by having a factor of √2 more neurons per hidden layer for the maxout nets, since the maxout operation reduces the number of outputs by a factor of 2. Note that ReLU networks, in our experience, perform on par with sigmoid-based DNNs in this data regime. Maxout networks trained with AD (Maxout-AD), on the other hand, show a clear advantage over our traditional networks. Also, note that the convolutional layers of the Maxout-AD CNN have only 128 and 256 feature map outputs, whereas those of the sigmoid CNN have 512/512 outputs. Training of the Maxout-AD CNN with a 512/512 filter configuration is in progress.
The IBM 2015 English Conversational Telephone Speech Recognition System
1505.05899
Table 2: Comparison of word error rates for CE-trained DNNs with different number of outputs and phonetic context size on Hub5’00 SWB.
['Nb. outputs', 'Phonetic ctx.', 'WER SWB (CE)']
[['16000', '±2', '12.0'], ['16000', '±3', '11.8'], ['32000', '±2', '11.7'], ['64000', '±2', '11.9']]
When training on 2000 hours of data, we found it beneficial to increase the number of context-dependent HMM output targets to values that are far larger than commonly reported. We conjecture that this is because GMMs are a distributed model and require more data for each state to reliably estimate the mixture components, whereas the DNN output layer is shared between states. This allows DNNs to have a much richer target space. Additionally, we experimented with growing acoustic decision trees where the phonetic context is increased to heptaphones (±3 phones within words and ±2 phones across words).
The IBM 2015 English Conversational Telephone Speech Recognition System
1505.05899
Table 3: Comparison of word error rates for CE and ST CNN, DNN, RNN and various score fusions on Hub5’00.
['Model', 'WER SWB CE', 'WER SWB ST', 'WER CH CE', 'WER CH ST']
[['CNN', '12.6', '10.4', '18.4', '17.9'], ['DNN', '11.7', '10.3', '18.5', '17.0'], ['RNN', '11.5', '9.9', '17.7', '16.3'], ['DNN+CNN', '11.3', '9.6', '17.4', '16.3'], ['RNN+CNN', '11.2', '9.4', '17.0', '16.1'], ['DNN+RNN+CNN', '11.1', '9.4', '17.1', '15.9']]
All nets are trained with 10-15 passes of cross-entropy on 2000 hours of audio and 30 iterations of sequence training. For score fusion, we decode with a frame-level sum of the outputs of the nets prior to the softmax, with uniform weights.
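A small sketch of the score-fusion rule described above; uniform averaging of the pre-softmax outputs is an assumption about the exact weighting.

```python
import numpy as np

def fused_posteriors(logit_streams, weights=None):
    """Frame-level score fusion: combine the nets' outputs before the softmax
    (uniform weights by default), then apply a single softmax."""
    if weights is None:
        weights = [1.0 / len(logit_streams)] * len(logit_streams)
    fused = sum(w * logits for w, logits in zip(weights, logit_streams))
    e = np.exp(fused - fused.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
frames, states = 4, 10
dnn, cnn, rnn = (rng.normal(size=(frames, states)) for _ in range(3))
post = fused_posteriors([dnn, cnn, rnn])
print(post.shape, round(float(post[0].sum()), 6))  # (4, 10) 1.0
```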
The IBM 2015 English Conversational Telephone Speech Recognition System
1505.05899
Table 4: Comparison of word error rates for CE and sequence trained unfolded RNN and DNN with score fusion and joint modeling on Hub5’00. The WERs for the joint models are after sequence training.
['RNN/CNN combination', 'WER SWB', 'WER CH']
[['score fusion of CE models', '11.2', '17.0'], ['score fusion of ST models', '9.4', '16.1'], ['joint model from CE models (ST)', '9.3', '15.6'], ['joint model from ST models (ST)', '9.4', '15.7']]
Two experimental scenarios were considered. The first is where the joint model was initialized with the fusion of the cross-entropy trained RNN and CNN whereas the second uses ST models as the starting point.
The IBM 2015 English Conversational Telephone Speech Recognition System
1505.05899
Table 6: Comparison of word error rates on Hub5’00 (SWB and CH) for existing systems (∗ note that the 19.1% CallHome WER is not reported in [13]).
['System', 'AM training data', 'SWB', 'CH']
[['Vesely et al.\xa0', 'SWB', '12.6', '24.1'], ['Seide et al.\xa0', 'SWB+Fisher+other', '13.1', '–'], ['Hannun et al.\xa0', 'SWB+Fisher', '12.6', '19.3'], ['Zhou et al.\xa0', 'SWB', '14.2', '–'], ['Maas et al.\xa0', 'SWB', '14.3', '26.0'], ['Maas et al.\xa0', 'SWB+Fisher', '15.0', '23.0'], ['Soltau et al.\xa0', 'SWB', '10.4', '19.1∗'], ['This system', 'SWB+Fisher+CH', '8.0', '14.1']]
Since Switchboard is such a well-studied corpus, we thought we would take a step back and reflect on how far we have come in terms of speech recognition technology. At the height of technological development for GMM-based systems, the winning IBM submission scored 15.2% WER during the 2004 DARPA EARS evaluation. For clarity, we also specify the type of training data that was used for acoustic modeling in each case.
Thematically Reinforced Explicit Semantic Analysis
1405.4364
Table 2: Evaluation results (ordered by decreasing precision)
['[ITALIC] λ1', '[ITALIC] λ2', '[ITALIC] λ3', '[ITALIC] λ4', '[ITALIC] λ5', 'C', '# SVs', 'Precision']
[['1.5', '0', '0.5', '0.25', '0.125', '3.0', '786', '[BOLD] 75.015%'], ['1', '0', '0.5', '0.25', '0.125', '3.0', '709', '74.978%'], ['1.5', '1', '0.5', '0.25', '0.125', '3.0', '827', '74.899%'], ['0.25', '1.5', '0.5', '0.25', '0.125', '3.0', '761', '74.87%'], ['0.5', '0', '0.5', '0.25', '0.125', '3.0', '698', '74.867%'], ['1', '0.5', '0.25', '0.125', '0.0625', '3.0', '736', '74.845%'], ['0.5', '1', '0.5', '0.25', '0.125', '3.0', '736', '74.795%'], ['1', '1.5', '0.5', '0.25', '0.125', '3.0', '865', '74.791%'], ['0.5', '0.5', '0.5', '0.25', '0.125', '3.0', '682', '74.789%'], ['0.5', '1.5', '0.5', '0.25', '0.125', '3.0', '778', '74.814%'], ['1.5', '0.5', '0.2', '0.1', '0.05', '3.0', '775', '74.780%']]
The results show a significant improvement over the standard ESA version (which corresponds to λi=0 for all i). This confirms our approach. In the figure, the reader can see the precision obtained as a function of the first two parameters λ1 and λ2, as well as the number of support vectors used. We notice that the precision varies only slightly (between 74.36% and 75.015%, i.e., by less than 1%) as long as λ1 or λ2 is nonzero, and drops abruptly to 65.58% when both are zero. For nonzero values of λi the variation of precision follows no recognizable pattern. On the other hand, the number of support vectors shows a pattern: it is clearly correlated with λ1 and λ2, the highest value being 995 support vectors, used when both λ1 and λ2 take their highest values. Since CPU time is roughly proportional to the number of support vectors, it is most interesting to take small (but nonzero) values of λi so that precision remains high while the number of support vectors (and hence CPU time) is kept small.
Thematically Reinforced Explicit Semantic Analysis
1405.4364
Table 2: Evaluation results (ordered by decreasing precision)
['[ITALIC] λ1', '[ITALIC] λ2', '[ITALIC] λ3', '[ITALIC] λ4', '[ITALIC] λ5', 'C', '# SVs', 'Precision']
[['0', '1', '0.5', '0.25', '0.125', '3.0', '710', '74.716%'], ['2', '1', '0.5', '0.25', '0.125', '3.0', '899', '74.705%'], ['2', '0', '0.5', '0.25', '0.125', '3.0', '852', '74.675%'], ['0.5', '0.25', '0.125', '0.0625', '0.0312', '3.0', '653', '74.67%'], ['2', '0.5', '0.5', '0.25', '0.125', '3.0', '899', '74.641%'], ['0.25', '0.125', '0.0625', '0.0312', '0.015', '3.0', '615', '74.613%'], ['1', '1', '1', '0.5', '0.25', '3.0', '796', '74.61%'], ['0', '1.5', '1', '0.5', '0.25', '3.0', '792', '74.548%'], ['1.5', '1.5', '1', '0.75', '0.25', '3.0', '900', '74.471%'], ['2', '1.5', '1', '0.5', '0.25', '3.0', '[BOLD] 995', '74.36%'], ['0', '0', '0', '0', '0', '3.0', '324', '65.58%']]
The results show a significant improvement over the standard ESA version (which corresponds to λi=0 for all i). This confirms our approach. In the figure, the reader can see the precision obtained as a function of the first two parameters λ1 and λ2, as well as the number of support vectors used. We notice that the precision varies only slightly (between 74.36% and 75.015%, i.e., by less than 1%) as long as λ1 or λ2 is nonzero, and drops abruptly to 65.58% when both are zero. For nonzero values of λi the variation of precision follows no recognizable pattern. On the other hand, the number of support vectors shows a pattern: it is clearly correlated with λ1 and λ2, the highest value being 995 support vectors, used when both λ1 and λ2 take their highest values. Since CPU time is roughly proportional to the number of support vectors, it is most interesting to take small (but nonzero) values of λi so that precision remains high while the number of support vectors (and hence CPU time) is kept small.
Essence Knowledge Distillation for Speech Recognition
1906.10834
Table 2: Word error rates of different models trained with a subset of the Switchboard data.
['Acoustic Model', '[ITALIC] k', 'SWB', 'CHE', 'TOTAL']
[['TDNN', '[EMPTY]', '14.1', '26.3', '20.3'], ['TDNN-LSTM', '[EMPTY]', '14.4', '26.2', '20.2'], ['TDNN-LSTM+TDNN (teacher)', '[EMPTY]', '13.2', '25.4', '19.3'], ['[EMPTY]', '1', '13.5', '25.4', '19.6'], ['[EMPTY]', '5', '13.1', '24.6', '18.9'], ['[EMPTY]', '10', '13.0', '24.6', '[BOLD] 18.8'], ['TDNN-LSTM (student)', '20', '12.9', '25.0', '19.0'], ['[EMPTY]', '50', '13.0', '24.9', '18.9'], ['[EMPTY]', '1000', '13.0', '24.8', '19.0'], ['[EMPTY]', '8912', '13.0', '24.6', '18.9']]
A subset consisting of 25% of the training data from the Switchboard data set was used to quickly evaluate the effectiveness of the proposed method and to tune some hyperparameters. As can be seen, the TDNN-LSTM performed better than the TDNN model. The teacher model, which is a fusion of a TDNN model and a TDNN-LSTM model, significantly outperformed either individual model.
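A hedged sketch of a top-k ("essence") soft-target construction, as suggested by the k column in Table 2; the paper's exact loss (e.g. any interpolation with the hard labels) may differ.

```python
import numpy as np

def topk_soft_targets(teacher_post, k):
    """Keep only the k largest teacher posteriors per frame, zero out the rest,
    and renormalize so each frame's targets sum to one."""
    idx = np.argsort(teacher_post, axis=-1)[:, -k:]
    mask = np.zeros_like(teacher_post)
    np.put_along_axis(mask, idx, 1.0, axis=-1)
    clipped = teacher_post * mask
    return clipped / clipped.sum(axis=-1, keepdims=True)

def distillation_loss(student_logprob, teacher_post, k):
    """Cross-entropy between the top-k teacher distribution and the student."""
    targets = topk_soft_targets(teacher_post, k)
    return float(-(targets * student_logprob).sum(axis=-1).mean())

rng = np.random.default_rng(0)
frames, senones = 8, 8912
teacher = rng.dirichlet(np.ones(senones) * 0.1, size=frames)
student_logits = rng.normal(size=(frames, senones))
student_logprob = student_logits - np.log(np.exp(student_logits).sum(axis=-1, keepdims=True))
print(round(distillation_loss(student_logprob, teacher, k=10), 3))
```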
Neural Belief Tracker: Data-Driven Dialogue State Tracking
1606.03777
Table 2: DSTC2 and WOZ 2.0 test set performance (joint goals and requests) of the NBT-CNN model making use of three different word vector collections. The asterisk indicates statistically significant improvement over the baseline xavier (random) word vectors (paired t-test; p<0.05).
['[BOLD] Word Vectors', '[BOLD] DSTC2 [BOLD] Goals', '[BOLD] DSTC2 [BOLD] Requests', '[BOLD] WOZ 2.0 [BOLD] Goals', '[BOLD] WOZ 2.0 [BOLD] Requests']
[['xavier [BOLD] (No Info.)', '64.2', '81.2', '81.2', '90.7'], ['[BOLD] GloVe', '69.0*', '96.4*', '80.1', '91.4'], ['[BOLD] Paragram-SL999', '[BOLD] 73.4*', '[BOLD] 96.5*', '[BOLD] 84.2*', '[BOLD] 91.6']]
The NBT models use the semantic relations embedded in the pre-trained word vectors to handle semantic variation and produce high-quality intermediate representations. We compare three collections of word vectors: 1) random xavier initialisation; 2) distributional GloVe vectors, trained using co-occurrence information in large textual corpora; and 3) semantically specialised Paragram-SL999 vectors (Wieting et al.). Paragram-SL999 vectors (significantly) outperformed GloVe and xavier vectors for goal tracking on both datasets. The gains are particularly robust for the noisy DSTC2 data, where both collections of pre-trained vectors consistently outperformed random initialisation. The gains are weaker for the noise-free WOZ 2.0 dataset, which seems to be large (and clean) enough for the NBT model to learn task-specific rephrasings and compensate for the lack of semantic content in the word vectors. For this dataset, GloVe vectors do not improve over the randomly initialised ones. We believe this happens because distributional models keep related, yet antonymous, words close together (e.g. north and south, expensive and inexpensive), offsetting the useful semantic content embedded in these vector spaces. The NBT-DNN model showed the same trends.
Neural Belief Tracker: Data-Driven Dialogue State Tracking
1606.03777
Table 1: DSTC2 and WOZ 2.0 test set accuracies for: a) joint goals; and b) turn-level requests. The asterisk indicates statistically significant improvement over the baseline trackers (paired t-test; p<0.05).
['[BOLD] DST Model', '[BOLD] DSTC2 [BOLD] Goals', '[BOLD] DSTC2 [BOLD] Requests', '[BOLD] WOZ 2.0 [BOLD] Goals', '[BOLD] WOZ 2.0 [BOLD] Requests']
[['[BOLD] Delexicalisation-Based Model', '69.1', '95.7', '70.8', '87.1'], ['[BOLD] Delexicalisation-Based Model + Semantic Dictionary', '72.9*', '95.7', '83.7*', '87.6'], ['Neural Belief Tracker: NBT-DNN', '72.6*', '96.4', '[BOLD] 84.4*', '91.2*'], ['Neural Belief Tracker: NBT-CNN', '[BOLD] 73.4*', '[BOLD] 96.5', '84.2*', '[BOLD] 91.6*']]
The NBT models outperformed the baseline models in terms of both joint goal and request accuracies. For goals, the gains are always statistically significant (paired t-test, p<0.05). Moreover, there was no statistically significant variation between the NBT and the lexicon-supplemented models, showing that the NBT can handle semantic relations which otherwise had to be explicitly encoded in semantic dictionaries.
Variational Neural Discourse Relation Recognizer
1603.03876
(b) Con vs Other
['[BOLD] Model', '[BOLD] Acc', '[BOLD] P', '[BOLD] R', '[BOLD] F1']
[['[BOLD] (R & X\xa0rutherford-xue:2015:NAACL-HLT)', '-', '-', '-', '53.80'], ['[BOLD] (J & E\xa0TACL536)', '76.95', '-', '-', '52.78'], ['[BOLD] SVM', '62.62', '39.14', '72.40', '50.82'], ['[BOLD] SCNN', '63.00', '39.80', '75.29', '52.04'], ['[BOLD] VarNDRR', '53.82', '35.39', '88.53', '50.56']]
Because the development and test sets are imbalanced in terms of the ratio of positive and negative instances, we chose the widely-used F1 score as our major evaluation metric. In addition, we also provide the precision, recall and accuracy for further analysis. Models are compared according to their F1 scores. Although it fails on Con, VarNDRR achieves the best result on Exp/Com among these three models. Overall, VarNDRR is competitive in comparison with these two baselines. With respect to accuracy, our model does not yield substantial improvements over the two baselines. This may be because we used the F1 score, rather than accuracy, as our selection criterion on the development set. With respect to precision and recall, our model tends to produce relatively lower precision but higher recall. This suggests that the improvements of VarNDRR in terms of F1 score mostly come from the recall values.
Variational Neural Discourse Relation Recognizer
1603.03876
(a) Com vs Other
['[BOLD] Model', '[BOLD] Acc', '[BOLD] P', '[BOLD] R', '[BOLD] F1']
[['[BOLD] R & X\xa0rutherford-xue:2015:NAACL-HLT', '-', '-', '-', '41.00'], ['[BOLD] J & E\xa0TACL536', '70.27', '-', '-', '35.93'], ['[BOLD] SVM', '63.10', '22.79', '64.47', '33.68'], ['[BOLD] SCNN', '60.42', '22.00', '67.76', '33.22'], ['[BOLD] VarNDRR', '63.30', '24.00', '71.05', '35.88']]
Because the development and test sets are imbalanced in terms of the ratio of positive and negative instances, we chose the widely-used F1 score as our major evaluation metric. In addition, we also provide the precision, recall and accuracy for further analysis. Models are compared according to their F1 scores. Although it fails on Con, VarNDRR achieves the best result on Exp/Com among these three models. Overall, VarNDRR is competitive in comparison with these two baselines. With respect to accuracy, our model does not yield substantial improvements over the two baselines. This may be because we used the F1 score, rather than accuracy, as our selection criterion on the development set. With respect to precision and recall, our model tends to produce relatively lower precision but higher recall. This suggests that the improvements of VarNDRR in terms of F1 score mostly come from the recall values.
Variational Neural Discourse Relation Recognizer
1603.03876
(c) Exp vs Other
['[BOLD] Model', '[BOLD] Acc', '[BOLD] P', '[BOLD] R', '[BOLD] F1']
[['[BOLD] (R & X\xa0rutherford-xue:2015:NAACL-HLT)', '-', '-', '-', '69.40'], ['[BOLD] (J & E\xa0TACL536)', '69.80', '-', '-', '80.02'], ['[BOLD] SVM', '60.71', '65.89', '58.89', '62.19'], ['[BOLD] SCNN', '63.00', '56.29', '91.11', '69.59'], ['[BOLD] VarNDRR', '57.36', '56.46', '97.39', '71.48']]
Because the development and test sets are imbalanced in terms of the ratio of positive and negative instances, we chose the widely-used F1 score as our major evaluation metric. In addition, we also provide the precision, recall and accuracy for further analysis. Models are compared according to their F1 scores. Although it fails on Con, VarNDRR achieves the best result on Exp/Com among these three models. Overall, VarNDRR is competitive in comparison with these two baselines. With respect to accuracy, our model does not yield substantial improvements over the two baselines. This may be because we used the F1 score, rather than accuracy, as our selection criterion on the development set. With respect to precision and recall, our model tends to produce relatively lower precision but higher recall. This suggests that the improvements of VarNDRR in terms of F1 score mostly come from the recall values.
Variational Neural Discourse Relation Recognizer
1603.03876
(d) Tem vs Other
['[BOLD] Model', '[BOLD] Acc', '[BOLD] P', '[BOLD] R', '[BOLD] F1']
[['[BOLD] (R & X\xa0rutherford-xue:2015:NAACL-HLT)', '-', '-', '-', '33.30'], ['[BOLD] (J & E\xa0TACL536)', '87.11', '-', '-', '27.63'], ['[BOLD] SVM', '66.25', '15.10', '68.24', '24.73'], ['[BOLD] SCNN', '76.95', '20.22', '62.35', '30.54'], ['[BOLD] VarNDRR', '62.14', '17.40', '97.65', '29.54']]
Because the development and test sets are imbalanced in terms of the ratio of positive and negative instances, we chose the widely-used F1 score as our major evaluation metric. In addition, we also provide the precision, recall and accuracy for further analysis. Models are compared according to their F1 scores. Although it fails on Con, VarNDRR achieves the best result on Exp/Com among these three models. Overall, VarNDRR is competitive in comparison with these two baselines. With respect to accuracy, our model does not yield substantial improvements over the two baselines. This may be because we used the F1 score, rather than accuracy, as our selection criterion on the development set. With respect to precision and recall, our model tends to produce relatively lower precision but higher recall. This suggests that the improvements of VarNDRR in terms of F1 score mostly come from the recall values.
Gated Convolutional Bidirectional Attention-based Model for Off-topic Spoken Response Detection
2004.09036
Table 5: The performance of GCBiA with negative sampling augmentation method conditioned on over 0.999 on-topic recall.
['Model', 'Seen PPR3', 'Seen AOR', 'Unseen PPR3', 'Unseen AOR']
[['GCBiA', '93.6', '79.2', '68.0', '45.0'], ['+ neg sampling', '[BOLD] 94.2', '[BOLD] 88.2', '[BOLD] 79.4', '[BOLD] 69.1']]
To augment the training data and strengthen the generalization of the off-topic response detection model to unseen prompts, we proposed a new and effective negative sampling method for the off-topic response detection task. Compared with the previous method of generating only one negative sample for each positive one, we generated two: the first is chosen randomly as before, and the second consists of the words of the first one shuffled. This method contributes to the diversity of negative samples in the training data. The size of our training data reaches 1.67M, compared with 1.12M under the previous negative sampling method. To keep the training data balanced, we weighted positive and negative samples by 1 and 0.5, respectively. Our model GCBiA equipped with negative sampling augmentation achieves 88.2% and 69.1% average off-topic response recall on seen and unseen prompts respectively, conditioned on 0.999 on-topic recall.
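The negative sampling scheme described above is simple enough to sketch directly. The following Python snippet is a minimal, hedged illustration; the function and variable names are ours, and the exact convention for forming a negative pair (prompt paired with a response drawn from a different prompt) is our assumption, not the paper's released code.

```python
import random

def build_training_examples(on_topic, seed=0):
    """Hedged sketch of the negative-sampling augmentation described above.

    `on_topic` is a list of (prompt, response) pairs. For every positive pair
    we add two negatives: (1) the prompt paired with a response randomly drawn
    from a different example, and (2) the same drawn response with its words
    shuffled. Positives get weight 1.0, negatives 0.5, as stated in the text.
    """
    rng = random.Random(seed)
    responses = [r for _, r in on_topic]
    examples = []  # (prompt, response, label, weight)
    for i, (prompt, response) in enumerate(on_topic):
        examples.append((prompt, response, 1, 1.0))
        # negative 1: a response randomly chosen from another example
        j = rng.choice([k for k in range(len(on_topic)) if k != i])
        neg = responses[j]
        examples.append((prompt, neg, 0, 0.5))
        # negative 2: the first negative with its words shuffled
        words = neg.split()
        rng.shuffle(words)
        examples.append((prompt, " ".join(words), 0, 0.5))
    return examples
```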
Gated Convolutional Bidirectional Attention-based Model for Off-topic Spoken Response Detection
2004.09036
Table 4: The comparison of different models based on over 0.999 on-topic recall on seen and unseen benchmarks. AOR means Average Off-topic Recall (%) and PRR3 means Prompt Ratio over off-topic Recall 0.3 (%).
['Systems', 'Model', 'Seen PPR3', 'Seen AOR', 'Unseen PPR3', 'Unseen AOR']
[['Malinin et\xa0al., 2017', 'Att-RNN', '84.6', '72.2', '32.0', '21.0'], ['Our baseline model', 'G-Att-RNN', '87.8', '76.8', '54.0', '38.1'], ['This work', '+ Bi-Attention', '90.4', '78.3', '56.0', '39.7'], ['This work', '+ RNN→CNN', '89.7', '76.6', '66.0', '43.7'], ['This work', '+ [ITALIC] maxpooling', '92.3', '79.1', '68.0', '42.2'], ['This work', '+ Res-conn in gated unit (GCBiA)', '[BOLD] 93.6', '[BOLD] 79.2', '[BOLD] 68.0', '[BOLD] 45.0']]
As shown in the table, to make the evaluation more convincing, we built a stronger baseline model G-Att-RNN based on Att-RNN by adding residual connections to each layer. Additionally, we added a gated unit as the relevance layer of our baseline model G-Att-RNN. Compared with Att-RNN, our baseline model G-Att-RNN achieved significant improvements on both the seen benchmark (by +3.2 PPR3 points and +4.6 AOR points) and the unseen benchmark (by +22.0 PPR3 points and +17.1 AOR points). Our final model GCBiA outperforms Att-RNN by +36.0 PPR3 points and +24.0 AOR points on the unseen benchmark, as well as +9.0 PPR3 points and +7.0 AOR points on the seen benchmark. Meanwhile, our approach significantly outperforms G-Att-RNN by +14.0 PPR3 points and +6.9 AOR points on the unseen benchmark, as well as +5.8 PPR3 points and +2.4 AOR points on the seen benchmark.
Joint Speaker Counting, Speech Recognition, and Speaker Identification for Overlapped Speech of Any Number of Speakers
2006.10930
Table 1: SER (%), WER (%), and SA-WER (%) for baseline systems and proposed method. The number of profiles per test audio was 8. Each profile was extracted by using 2 utterances (15 sec on average). For random speaker assignment experiment (3rd row), averages of 10 trials were computed. No LM was used in the evaluation.
['ModelEval Set', '1-speaker SER', '1-speaker WER', '1-speaker [BOLD] SA-WER', '2-speaker-mixed SER', '2-speaker-mixed WER', '2-speaker-mixed [BOLD] SA-WER', '3-speaker-mixed SER', '3-speaker-mixed WER', '3-speaker-mixed [BOLD] SA-WER', 'Total SER', 'Total WER', 'Total [BOLD] SA-WER']
[['Single-speaker ASR', '-', '4.7', '-', '-', '66.9', '-', '-', '90.7', '-', '-', '68.4', '-'], ['SOT-ASR', '-', '4.5', '-', '-', '10.3', '-', '-', '19.5', '-', '-', '13.9', '-'], ['SOT-ASR + random speaker assignment', '87.4', '4.5', '[BOLD] 175.2', '82.8', '23.4', '[BOLD] 169.7', '76.1', '39.1', '[BOLD] 165.1', '80.2', '28.1', '[BOLD] 168.3'], ['SOT-ASR + d-vec speaker identification', '0.4', '4.5', '[BOLD] 4.8', '6.4', '10.3', '[BOLD] 16.5', '13.1', '19.5', '[BOLD] 31.7', '8.7', '13.9', '[BOLD] 22.2'], ['Proposed Model', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['SOT-ASR + Spk-Enc + Inv-Attn', '0.3', '4.3', '[BOLD] 4.7', '5.5', '10.4', '[BOLD] 12.2', '14.8', '23.4', '[BOLD] 26.7', '9.3', '15.9', '[BOLD] 18.2'], ['↪ + SpeakerQueryRNN', '0.4', '4.2', '[BOLD] 4.6', '3.0', '9.1', '[BOLD] 10.9', '11.6', '21.5', '[BOLD] 24.7', '6.9', '14.5', '[BOLD] 16.7'], ['↪ + Weighted Profile (¯ [ITALIC] dn)', '0.2', '4.2', '[BOLD] 4.5', '2.5', '8.7', '[BOLD] 9.9', '10.2', '20.2', '[BOLD] 23.1', '6.0', '13.7', '[BOLD] 15.6']]
Baseline results: The first row corresponds to the conventional single-speaker ASR based on AED. As expected, the WER was significantly degraded for overlapped speech. The second row shows the result of the SOT-ASR system that was used for initializing the proposed method in training. SOT-ASR significantly improved the WER for all evaluation settings.
Wav2Letter: an End-to-End ConvNet-based Speech Recognition System
1609.03193
(a)
['[EMPTY]', 'ASG', 'CTC']
[['dev-clean', '10.4', '10.7'], ['test-clean', '10.1', '10.5']]
Our ASG criterion is implemented in C (CPU only), leveraging SSE instructions where possible. Our batching is done with an OpenMP parallel for loop. Both criteria lead to the same LER. For comparing speed, we report performance for the sequence sizes initially reported by Baidu, but also for longer sequence sizes, which correspond to our average use case. ASG appears faster on long sequences, even though it runs on CPU only. Baidu's GPU CTC implementation seems more aimed at larger vocabularies (e.g. 5000 Chinese characters). We have introduced a simple end-to-end automatic speech recognition system, which combines a standard 1D convolutional neural network, a sequence criterion which can infer the segmentation, and a simple beam-search decoder. The decoding results are competitive on the LibriSpeech corpus with MFCC features (7.2% WER), and promising with power spectrum and raw speech (9.4% WER and 10.1% WER respectively). The system is also fast: on average, one LibriSpeech sentence is processed in less than 60 ms by our ConvNet, and the decoder runs at 8.6x on a single thread.
Wav2Letter: an End-to-End ConvNet-based Speech Recognition System
1609.03193
(b)
['batch size', 'CTC CPU', 'CTC GPU', 'ASG CPU']
[['1', '1.9', '5.9', '2.5'], ['4', '2.0', '6.0', '2.8'], ['8', '2.0', '6.1', '2.8']]
Our ASG criterion is implemented in C (CPU only), leveraging SSE instructions where possible. Our batching is done with an OpenMP parallel for loop. Both criteria lead to the same LER. For comparing speed, we report performance for the sequence sizes initially reported by Baidu, but also for longer sequence sizes, which correspond to our average use case. ASG appears faster on long sequences, even though it runs on CPU only. Baidu's GPU CTC implementation seems more aimed at larger vocabularies (e.g. 5000 Chinese characters). We have introduced a simple end-to-end automatic speech recognition system, which combines a standard 1D convolutional neural network, a sequence criterion which can infer the segmentation, and a simple beam-search decoder. The decoding results are competitive on the LibriSpeech corpus with MFCC features (7.2% WER), and promising with power spectrum and raw speech (9.4% WER and 10.1% WER respectively). The system is also fast: on average, one LibriSpeech sentence is processed in less than 60 ms by our ConvNet, and the decoder runs at 8.6x on a single thread.
Wav2Letter: an End-to-End ConvNet-based Speech Recognition System
1609.03193
(c)
['batch size', 'CTC CPU', 'CTC GPU', 'ASG CPU']
[['1', '40.9', '97.9', '16.0'], ['4', '41.6', '99.6', '17.7'], ['8', '41.7', '100.3', '19.2']]
Our ASG criterion is implemented in C (CPU only), leveraging SSE instructions where possible. Our batching is done with an OpenMP parallel for loop. Both criteria lead to the same LER. For comparing speed, we report performance for the sequence sizes initially reported by Baidu, but also for longer sequence sizes, which correspond to our average use case. ASG appears faster on long sequences, even though it runs on CPU only. Baidu's GPU CTC implementation seems more aimed at larger vocabularies (e.g. 5000 Chinese characters). We have introduced a simple end-to-end automatic speech recognition system, which combines a standard 1D convolutional neural network, a sequence criterion which can infer the segmentation, and a simple beam-search decoder. The decoding results are competitive on the LibriSpeech corpus with MFCC features (7.2% WER), and promising with power spectrum and raw speech (9.4% WER and 10.1% WER respectively). The system is also fast: on average, one LibriSpeech sentence is processed in less than 60 ms by our ConvNet, and the decoder runs at 8.6x on a single thread.
Wav2Letter: an End-to-End ConvNet-based Speech Recognition System
1609.03193
Table 2: LER/WER of the best sets of hyper-parameters for each feature types.
['[EMPTY]', 'MFCC LER', 'MFCC WER', 'PS LER', 'PS WER', 'Raw LER', 'Raw WER']
[['dev-clean', '6.9', '[EMPTY]', '9.3', '[EMPTY]', '10.3', '[EMPTY]'], ['test-clean', '6.9', '7.2', '9.1', '9.4', '10.6', '10.1']]
The network produces one output score every 20 ms. We found that one could squeeze out about 1% in performance by refining the time precision of the output. This is efficiently achieved by shifting the input sequence and feeding it to the network several times. Both power spectrum and raw features perform slightly worse than MFCCs; we expect that with more training data the gap would vanish.
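As an illustration of the output-refinement trick mentioned above (shifting the input and interleaving the network outputs), here is a minimal numpy sketch; `net`, `stride` and the number of shifts are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def refined_posteriors(wav, net, stride, n_shifts=4):
    """Interleave outputs from several shifted copies of the input.

    `net(wav)` is assumed to return a (frames, labels) score matrix with one
    frame every `stride` samples. Feeding `n_shifts` shifted copies of the
    waveform and interleaving the resulting frames yields scores every
    stride / n_shifts samples instead of every `stride` samples.
    """
    shift = stride // n_shifts
    outputs = [net(wav[k * shift:]) for k in range(n_shifts)]
    n_frames = min(o.shape[0] for o in outputs)
    fine = np.empty((n_frames * n_shifts, outputs[0].shape[1]))
    for k, o in enumerate(outputs):
        fine[k::n_shifts] = o[:n_frames]  # frame i of shift k -> time i*stride + k*shift
    return fine
```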
EARL: Joint Entity and Relation Linking for Question Answering over Knowledge Graphs
1801.03825
Table 6: Evaluating EARL’s Relation Linking performance
['[BOLD] System', '[BOLD] Accuracy LC-QuAD', '[BOLD] Accuracy - QALD']
[['ReMatch\xa0', '0.12', '0.31'], ['RelMatch\xa0', '0.15', '0.29'], ['EARL without adaptive learning', '0.32', '0.45'], ['EARL with adaptive learning', '[BOLD] 0.36', '[BOLD] 0.47']]
Aim: Given a question, the task is to perform relation linking in the question. This also evaluates our hypothesis H3. We compare against the relation linking systems that we could run on LC-QuAD and QALD. The large difference in relation-linking accuracy between LC-QuAD and QALD is due to the fact that LC-QuAD has 82% of questions with more than one relation, which makes detecting relation phrases in the question more difficult.
EARL: Joint Entity and Relation Linking for Question Answering over Knowledge Graphs
1801.03825
Table 3: Empirical comparison of Connection Density and GTSP: n = number of nodes in graph; L = number of clusters in graph; N = number of nodes per cluster; top K results retrieved from ElasticSearch.
['[BOLD] Approach', '[BOLD] Accuracy (K=30)', '[BOLD] Accuracy (K=10)', '[BOLD] Time Complexity']
[['Brute Force GTSP', '0.61', '0.62', 'O( [ITALIC] n22 [ITALIC] n)'], ['LKH - GTSP', '0.59', '0.58', 'O( [ITALIC] nL2)'], ['Connection Density', '0.61', '0.62', 'O( [ITALIC] N2 [ITALIC] L2)']]
Aim: We evaluate the hypotheses (H1 and H2) that connection density and GTSP can be used for the joint linking task. We also evaluate the LKH approximate solution of GTSP for this task, and compare the time complexity of the three different approaches. Results: Connection density has worse time complexity than the approximate GTSP solver LKH if we assume the best case of equal cluster sizes for LKH. However, it provides better accuracy. Moreover, the average time taken by EARL using connection density (including the candidate generation step) is 0.42 seconds per question. The approximate solution LKH has polynomial run time, but its accuracy drops compared to the brute-force GTSP solution. Moreover, from a question answering perspective the ranked list offered by the connection density approach is useful since it can be presented to the user as a list of possible correct solutions or used by subsequent processing steps of a QA system. Hence, for further experiments in this section we used the connection density approach.
EARL: Joint Entity and Relation Linking for Question Answering over Knowledge Graphs
1801.03825
Table 4: Evaluation of joint linking performance
['[BOLD] Value of k', 'R [ITALIC] f [BOLD] based on R [ITALIC] i', 'R [ITALIC] f [BOLD] based on C,H', 'R [ITALIC] f [BOLD] based on R [ITALIC] i,C,H']
[['[ITALIC] k = 10', '0.543', '0.689', '0.708'], ['[ITALIC] k = 30', '0.544', '0.666', '0.735'], ['[ITALIC] k = 50', '0.543', '0.617', '[BOLD] 0.737'], ['[ITALIC] k = 100', '0.540', '0.534', '0.733'], ['[ITALIC] k∗ = 10', '0.568', '0.864', '[BOLD] 0.905'], ['[ITALIC] k∗ = 30', '0.554', '0.779', '0.864'], ['[ITALIC] k∗ = 50', '0.549', '0.708', '0.852'], ['[ITALIC] k∗ = 100', '0.545', '0.603', '0.817']]
Metrics: We use the mean reciprocal rank of the correct candidate ¯ci for each entity/relation in the query. In the probable candidate list generation step, we fetch a list of top candidates for each identified phrase in a query with k values of 10, 30, 50 and 100, where k is the number of results retrieved from text search for each spotted keyword. To evaluate the robustness of our classifier and features we perform two tests: i) using the candidate lists as retrieved; and ii) artificially inserting the correct candidate into each list to purely test the re-ranking abilities of our system (the bottom half of Table 4, where k∗ denotes the number of items in each candidate list). In the second test we inject the correct URIs at the lowest rank (see k∗) if they were not retrieved in the top k results of the previous step. Results: When the correct URIs were missing from the candidate list and were inserted artificially as the last candidate, the MRR increased from 0.568 to 0.905.
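The MRR evaluation described above, including the k∗ variant that injects the correct URI at the lowest rank, can be sketched in a few lines; the function below is a generic illustration, not EARL's code.

```python
def mean_reciprocal_rank(ranked_lists, gold, inject_missing=False):
    """Mean reciprocal rank of the correct candidate over all spotted phrases.

    `ranked_lists[i]` is the ranked candidate list for the i-th phrase and
    `gold[i]` its correct URI. With `inject_missing=True` the correct URI is
    appended at the lowest rank when it was not retrieved, mirroring the k*
    rows of Table 4.
    """
    total = 0.0
    for candidates, answer in zip(ranked_lists, gold):
        candidates = list(candidates)
        if inject_missing and answer not in candidates:
            candidates.append(answer)
        if answer in candidates:
            total += 1.0 / (candidates.index(answer) + 1)
        # a gold candidate missing from the list contributes 0 to the sum
    return total / len(ranked_lists)
```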
EARL: Joint Entity and Relation Linking for Question Answering over Knowledge Graphs
1801.03825
Table 5: Evaluating EARL’s Entity Linking performance
['[BOLD] System', '[BOLD] Accuracy LC-QuAD', '[BOLD] Accuracy - QALD']
[['FOX\xa0', '0.36', '0.30'], ['DBpediaSpotlight\xa0', '0.40', '0.42'], ['TextRazor', '0.52', '0.53'], ['Babelfy\xa0', '0.56', '0.56'], ['EARL without adaptive learning', '0.61', '0.55'], ['EARL with adaptive learning', '[BOLD] 0.65', '[BOLD] 0.57']]
EARL uses a series of sequential modules with little to no feedback between them; hence errors in one module propagate down the line. To curb this, we implement an adaptive approach, especially for the errors made in the pre-processing modules. While conducting experiments, we observed that most errors occur in the shallow parsing phase, mainly because of grammatical errors in LC-QuAD, which directly affect the subsequent E/R prediction and candidate selection steps. If the E/R prediction is erroneous, the system searches the wrong Elasticsearch index for probable candidate list generation. In such a case none of the candidates ∈ ci for a keyword would contain ¯ci, which is reflected in the probabilities assigned to ci by the re-ranker module. If the maximum probability assigned to ci is less than a very small threshold value, empirically chosen as 0.01, we re-do the steps from E/R prediction onwards after altering the original prediction. Aim: To compare the performance of EARL with other state-of-the-art systems on the entity linking task. This also evaluates our hypothesis H3. Metrics: We report accuracy, defined as the ratio of correctly identified entities to the total number of entities present. Result: The value of k is set to 30 while re-ranking and fetching the most probable entity.
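A minimal sketch of this adaptive fallback follows; all function names are placeholders standing in for EARL's actual modules, and only the 0.01 threshold and the flip-and-retry logic come from the text above.

```python
def link_with_adaptive_fallback(phrase, er_predict, candidates_for, rerank,
                                threshold=0.01):
    """Re-do candidate generation when the re-ranker is unconfident.

    `er_predict(phrase)` labels the phrase as "entity" or "relation",
    `candidates_for(phrase, label)` queries the corresponding index, and
    `rerank(candidates)` returns (best_candidate, probability). If the best
    probability stays below the threshold, the E/R label is flipped and the
    candidate generation and re-ranking steps are repeated.
    """
    label = er_predict(phrase)
    best, prob = rerank(candidates_for(phrase, label))
    if prob < threshold:
        # likely a wrong E/R prediction: alter it and repeat the later steps
        label = "relation" if label == "entity" else "entity"
        best, prob = rerank(candidates_for(phrase, label))
    return best, label, prob
```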
Ask No More: Deciding when to guess in referential visual dialogue
1805.06960
Table 4: Games played by DM with MaxQ=10, and the baseline with 5 fixed questions. Percentages of games (among all games and only decided games) where the DM models ask either fewer or more questions than the baseline. For the decided games, percentages of games where asking fewer/more questions helps (+ Change), hurts (– Change) or does not have an impact on task success w.r.t. the baseline result (No Change).
['DM', 'Decided games + Change', 'Decided games + Change', 'Decided games – Change', 'Decided games – Change', 'Decided games No Change', 'Decided games No Change', 'Decided games Total', 'Decided games Total', 'All games Total', 'All games Total']
[['DM', 'Fewer', 'More', 'Fewer', 'More', 'Fewer', 'More', 'Fewer', 'More', 'Fewer', 'More'], ['DM1', '1.77', '3.46', '2.64', '3.79', '22.58', '50.35', '26.99', '57.6', '22.63', '64.43'], ['DM2', '25.01', '0.16', '13.98', '0.81', '56.18', '3.67', '95.17', '4.64', '14.83', '85.14']]
When considering all the games, we see that the DM models ask many more questions (64.43% for DM1 and 85.14% for DM2) than the baseline. Zooming into decided games thus allows for a more appropriate comparison. For the decided games, we report whether asking fewer or more questions helps (+ Change), hurts (– Change) or does not have an impact on task success (No Change) with respect to the baseline results. We observe that DM2 dramatically decreases the number of questions: in 95.17% of decided games, it asks fewer questions than the baseline; interestingly, in only 13.98% of the cases where it asks fewer questions is its performance worse than the baseline; in all other cases, it either achieves the same success (56.18%) or even improves on the baseline results (25.01%). On the other hand, DM1 does not seem to reduce the number of unnecessary questions in a significant way. Our analyses show that using a decision making component produces dialogues with fewer repeated questions and can reduce the number of unnecessary questions, thus potentially leading to more efficient and less unnatural interactions. Indeed, for some games not correctly resolved by the baseline system, our model is able to guess the right target object by asking fewer questions. By being restricted to a fixed number of questions, the baseline system often introduces noise or apparently forgets important information that was obtained with the initial questions. Qualitative error analysis, however, also shows cases where the DM makes a premature decision to stop asking questions before obtaining enough information. Yet on other occasions, the DM seems to have made a sensible decision, but the inaccuracy of the Oracle or the Guesser components leads to task failure. Further examples are available in Appendix D.
A Fixed-Size Encoding Method for Variable-Length Sequences with its Application to Neural Network Language Models
1505.01504
Table 2: Perplexities on PTB for various LMs.
['Model', 'Test PPL']
[['KN 5-gram ', '141'], ['FNNLM ', '140'], ['RNNLM ', '123'], ['LSTM ', '117'], ['bigram FNNLM', '176'], ['trigram FNNLM', '131'], ['4-gram FNNLM', '118'], ['5-gram FNNLM', '114'], ['6-gram FNNLM', '113'], ['1st-order FOFE-FNNLM', '116'], ['2nd-order FOFE-FNNLM', '[BOLD] 108']]
We first evaluated the performance of traditional FNN-LMs, which take the previous several words as input, denoted as n-gram FNN-LMs here. We trained neural networks with a linear projection layer (of 200 hidden nodes) and two hidden layers (of 400 nodes per layer). All hidden units in the networks use the rectified linear activation function, i.e., f(x)=max(0,x). We use SGD with a mini-batch size of 200 and an initial learning rate of 0.4. The learning rate is kept fixed as long as the perplexity on the validation set decreases by at least 1. After that, we continue for six more epochs of training, where the learning rate is halved after each epoch. The proposed FOFE-FNNLMs significantly outperform the baseline FNN-LMs using the same architecture. For example, the perplexity of the baseline bigram FNNLM is 176, while the FOFE-FNNLM improves this to 116. This indicates that FOFE-FNNLMs can effectively model long-term dependencies in language without using any recurrent feedback. Finally, the 2nd-order FOFE-FNNLM provides a further improvement, yielding a perplexity of 108 on PTB. It also outperforms all higher-order FNN-LMs (4-gram, 5-gram and 6-gram), which are bigger in model size. To our knowledge, this is one of the best reported results on PTB without model combination.
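The fixed-size ordinally-forgetting encoding that replaces the n-gram input can be written down in a few lines; the snippet below follows the recursion z_t = α·z_{t-1} + e(w_t) with forgetting factor α, where the particular α value is illustrative only.

```python
import numpy as np

def fofe_encode(word_ids, vocab_size, alpha=0.7):
    """Fixed-size ordinally-forgetting encoding of a word sequence.

    z_t = alpha * z_{t-1} + e(w_t), with e(w_t) the one-hot vector of the
    t-th word and z_0 = 0; the final z_T is a fixed-size code of the whole
    (variable-length) history. 0 < alpha < 1 is a hyper-parameter.
    """
    z = np.zeros(vocab_size)
    for w in word_ids:
        z = alpha * z
        z[w] += 1.0
    return z

# A 1st-order FOFE-FNNLM would feed this code of the full history into the
# feed-forward network in place of the concatenated n-gram word inputs.
```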
Multimodal Social Media Analysis for Gang Violence Prevention
1807.08465
Table 2. Results for detecting the psychosocial codes: aggression, loss and substance use. For each code we report precision (P), recall (R), F1-scores (F1) and average precision (AP). Numbers shown are mean values of 5-fold cross validation performances. The highest performance (based on AP) for each code is marked with an asterisk. In bold and red we highlight all performances not significantly worse than the highest one (based on statistical testing with 95% confidence intervals).
['[BOLD] Modality', '[BOLD] Features', '[BOLD] Fusion', '[BOLD] Aggression P', '[BOLD] Aggression R', '[BOLD] Aggression F1', '[BOLD] Aggression AP', '[BOLD] Loss P', '[BOLD] Loss R', '[BOLD] Loss F1', '[BOLD] Loss AP', '[BOLD] Substance use P', '[BOLD] Substance use R', '[BOLD] Substance use F1', '[BOLD] Substance use AP', '[BOLD] mAP']
[['-', '- (random baseline)', '-', '0.25', '0.26', '0.26', '0.26', '0.17', '0.17', '0.17', '0.20', '0.18', '0.18', '0.18', '0.20', '0.23'], ['-', '- (positive baseline)', '-', '0.25', '1.00', '0.40', '0.25', '0.21', '1.00', '0.35', '0.22', '0.20', '1.00', '0.33', '0.20', '0.22'], ['text', 'linguistic features', '-', '0.35', '0.34', '0.34', '0.31', '0.71', '0.47', '0.56', '0.51', '0.25', '0.53', '0.34', '0.24', '0.35'], ['text', 'CNN-char', '-', '0.37', '0.47', '0.39', '0.36', '0.75', '0.66', '0.70', ' [BOLD] 0.77', '0.27', '0.32', '0.29', '0.28', '0.45'], ['text', 'CNN-word', '-', '0.39', '0.46', '0.42', '0.41', '0.71', '0.65', '0.68', ' [BOLD] 0.77', '0.28', '0.30', '0.29', '0.31', '0.50'], ['text', 'all textual', 'early', '0.40', '0.46', '0.43', '0.42', '0.70', '0.73', '0.71', ' [BOLD] 0.81', '0.25', '0.37', '0.30', '0.30', '0.51'], ['text', 'all textual', 'late', '0.43', '0.41', '0.42', '0.42', '0.69', '0.65', '0.67', ' [BOLD] 0.79', '0.29', '0.37', '0.32', '0.32', '0.51'], ['image', 'inception global', '-', '0.43', '0.64', '0.51', ' [BOLD] 0.49', '0.38', '0.57', '0.45', '0.43', '0.41', '0.62', '0.49', ' [BOLD] 0.48', '0.47'], ['image', 'Faster R-CNN local (0.1)', '-', '0.43', '0.64', '0.52', ' [BOLD] 0.47', '0.28', '0.56', '0.37', '0.31', '0.44', '0.30', '0.35', '0.37', '0.38'], ['image', 'Faster R-CNN local (0.5)', '-', '0.47', '0.48', '0.47', '0.44', '0.30', '0.39', '0.33', '0.31', '0.46', '0.12', '0.19', '0.30', '0.35'], ['image', 'all visual', 'early', '0.49', '0.62', '0.55', ' [BOLD] 0.55 *', '0.38', '0.57', '0.45', '0.44', '0.41', '0.59', '0.48', ' [BOLD] 0.48', '0.49'], ['image', 'all visual', 'late', '0.48', '0.51', '0.49', ' [BOLD] 0.52', '0.40', '0.51', '0.44', '0.43', '0.47', '0.52', '0.50', ' [BOLD] 0.51 *', '0.49'], ['image+text', 'all textual + visual', 'early', '0.48', '0.51', '0.49', ' [BOLD] 0.53', '0.72', '0.73', '0.73', ' [BOLD] 0.82 *', '0.37', '0.53', '0.43', ' [BOLD] 0.45', ' [BOLD] 0.60'], ['image+text', 'all textual + visual', 'late', '0.48', '0.44', '0.46', ' [BOLD] 0.53', '0.71', '0.67', '0.69', ' [BOLD] 0.80', '0.44', '0.43', '0.43', ' [BOLD] 0.48', ' [BOLD] 0.60 *']]
Our results indicate that image and text features play different roles in detecting different psychosocial codes. Textual information clearly dominates the detection of code loss. We hypothesize that loss is better conveyed textually whereas substance use and aggression are easier to express visually. Qualitatively, the linguistic features with the highest magnitude weights (averaged over all training splits) in a linear SVM bear this out, with the top five features for loss being i) free, ii) miss, iii) bro, iv) love, v) you; the top five features for substance use being i) smoke, ii) cup, iii) drank, iv) @mention, v) purple; and the top five features for aggression being i) Middle Finger Emoji, ii) Syringe Emoji, iii) opps, iv) pipe, v) 2017. The loss features are obviously related to the death or incarceration of a loved one (e.g. miss and free are often used in phrases wishing someone was freed from prison). The top features for aggression and substance use are either emojis, which are themselves pictographic representations, i.e. not a purely textual expression of the code, or words that reference physical objects (e.g. pipe, smoke, cup) which are relatively easy to picture.
Multimodal Social Media Analysis for Gang Violence Prevention
1807.08465
Table 1. Numbers of instances for the different visual concepts and psychosocial codes in our dataset. For the different codes, the first number indicates for how many tweets at least one annotator assigned the corresponding code, numbers in parentheses are based on per-tweet majority votes.
['[BOLD] Concepts/Codes', '[BOLD] Twitter', '[BOLD] Tumblr', '[BOLD] Total']
[['[ITALIC] handgun', '164', '41', '205'], ['[ITALIC] long gun', '15', '105', '116'], ['[ITALIC] joint', '185', '113', '298'], ['[ITALIC] marijuana', '56', '154', '210'], ['[ITALIC] person', '1368', '74', '1442'], ['[ITALIC] tattoo', '227', '33', '260'], ['[ITALIC] hand gesture', '572', '2', '574'], ['[ITALIC] lean', '43', '116', '159'], ['[ITALIC] money', '107', '138', '245'], ['[ITALIC] aggression', '457 (185)', '-', '457 (185)'], ['[ITALIC] loss', '397 (308)', '-', '397 (308)'], ['[ITALIC] substance use', '365 (268)', '-', '365 (268)']]
Note that in order to ensure sufficient quality of the annotations, but also due to the nature of the data, we relied on a special annotation process and kept the total size of the dataset comparatively small. However, crawling images from Tumblr with keywords related to those concepts led us to gather images where the target concept is the main subject of the image, whereas in our Twitter images the concepts appear but are rarely the main element of the picture. Further manual analysis of the images crawled from Twitter and Tumblr confirmed this "domain gap" between the two data sources, which can explain the difference in performance. This highlights the challenges associated with detecting these concepts in our Twitter data. We believe the only solution is therefore to gather additional images from Twitter from similar users. This will be part of the future work of this research.
USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation
2005.00456
Table 2: Average scores for the six different responses, on the six quality: Understandable, Natural, Maintains Context, Interesting, Uses Knowledge and Overall Quality.
['[BOLD] System', '[BOLD] Und (0-1)', '[BOLD] Nat (1-3)', '[BOLD] MCtx (1-3)', '[BOLD] Int (1-3)', '[BOLD] UK (0-1)', '[BOLD] OQ (1-5)']
[['Topical-Chat', 'Topical-Chat', 'Topical-Chat', 'Topical-Chat', 'Topical-Chat', 'Topical-Chat', 'Topical-Chat'], ['Original Ground-Truth', '0.95', '2.72', '2.72', '2.64', '0.72', '4.25'], ['Argmax Decoding', '0.60', '2.08', '2.13', '1.94', '0.47', '2.76'], ['Nucleus Sampling (0.3)', '0.51', '2.02', '1.90', '1.82', '0.42', '2.40'], ['Nucleus Sampling (0.5)', '0.48', '1.92', '1.93', '1.72', '0.34', '2.29'], ['Nucleus Sampling (0.7)', '0.52', '2.01', '1.87', '1.80', '0.37', '2.39'], ['New Human Generated', '[BOLD] 0.99', '[BOLD] 2.92', '[BOLD] 2.93', '[BOLD] 2.90', '[BOLD] 0.96', '[BOLD] 4.80'], ['PersonaChat', 'PersonaChat', 'PersonaChat', 'PersonaChat', 'PersonaChat', 'PersonaChat', 'PersonaChat'], ['Original Ground-Truth', '0.99', '2.89', '2.82', '2.67', '0.56', '4.36'], ['Language Model', '0.97', '2.63', '2.02', '2.24', '0.08', '2.98'], ['LSTM Seq2Seq', '0.92', '2.64', '2.49', '2.29', '0.47', '3.47'], ['KV-MemNN', '0.93', '2.70', '2.18', '2.56', '0.17', '3.25'], ['New Human Generated', '[BOLD] 1.00', '[BOLD] 2.97', '[BOLD] 2.88', '[BOLD] 2.87', '[BOLD] 0.96', '[BOLD] 4.80']]
Across both datasets and all qualities, the new human generated response strongly outperforms all other response types, even the original ground truth. This may be because the new human generated response was written with this quality annotation in mind, and as such is optimized for turn-level evaluation. On the other hand, the workers who produced the original ground-truth response, were more concerned with the quality of the overall dialog than with the quality of each individual response.
USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation
2005.00456
Table 1: Inter-annotator agreement for all the metrics. For all the correlations presented in this table, p<0.01.
['[BOLD] Metric', '[BOLD] Spearman', '[BOLD] Pearson']
[['Topical-Chat', 'Topical-Chat', 'Topical-Chat'], ['Understandable', '0.5102', '0.5102'], ['Natural', '0.4871', '0.4864'], ['Maintains Context', '0.5599', '0.5575'], ['Interesting', '0.5811', '0.5754'], ['Uses Knowledge', '0.7090', '0.7090'], ['Overall Quality', '0.7183', '0.7096'], ['PersonaChat', 'PersonaChat', 'PersonaChat'], ['Understandable', '0.2984', '0.2984'], ['Natural', '0.4842', '0.4716'], ['Maintains Context', '0.6125', '0.6130'], ['Interesting', '0.4318', '0.4288'], ['Uses Knowledge', '0.8115', '0.8115'], ['Overall Quality', '0.6577', '0.6603']]
The correlation between each pair of annotations is computed and the average correlation over all the pairs is reported. Correlation is used instead of Cohen’s Kappa in order to better account for the ordinal nature of the ratings (i.e., 4 should correlate better with 5 than 1), and to maintain consistency with the evaluation of the automatic metrics. Most inter-annotator correlations are above 0.4, which indicates moderate to strong agreement. The low agreement for Understandable on PersonaChat is likely a consequence of the simple language in the dataset. Most responses are understandable, except for those requiring background knowledge (e.g., that ‘cod’ is an acronym for ‘Call of Duty’). Since the annotators have differing background knowledge, the few occasions where they fail to understand an utterance will differ, hence the lower agreement. The agreement for Overall Quality is relatively high (0.71 for Topical-Chat and 0.66 for PersonaChat) which suggests that any ambiguity in the specific dialog qualities is mitigated when the annotator is asked for an overall impression.
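The agreement computation described above (correlation for every annotator pair, averaged over all pairs) can be reproduced with a short script; this is a generic sketch, not the authors' code.

```python
from itertools import combinations
from scipy.stats import spearmanr, pearsonr

def inter_annotator_agreement(ratings):
    """Average pairwise correlation between annotators.

    `ratings` is a list of equal-length score lists, one per annotator, over
    the same items. Correlation is computed for every annotator pair and the
    mean over all pairs is reported, separately for Spearman and Pearson.
    """
    pairs = list(combinations(ratings, 2))
    spear = sum(spearmanr(a, b).correlation for a, b in pairs) / len(pairs)
    pear = sum(pearsonr(a, b)[0] for a, b in pairs) / len(pairs)
    return spear, pear
```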
USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation
2005.00456
Table 3: Turn-level correlations on Topical-Chat. We show: (1) best non-USR metric, (2) best USR sub-metric and (3) USR metric. All measures in this table are statistically significant to p<0.01.
['Metric', 'Spearman', 'Pearson']
[['Understandable', 'Understandable', 'Understandable'], ['BERTScore (base)', '0.2502', '0.2611'], ['USR - MLM', '[BOLD] 0.3268', '[BOLD] 0.3264'], ['USR', '0.3152', '0.2932'], ['Natural', 'Natural', 'Natural'], ['BERTScore (base)', '0.2094', '0.2260'], ['USR - MLM', '[BOLD] 0.3254', '[BOLD] 0.3370'], ['USR', '0.3037', '0.2763'], ['Maintains Context', 'Maintains Context', 'Maintains Context'], ['METEOR', '0.3018', '0.2495'], ['USR - DR (x = c)', '0.3650', '0.3391'], ['USR', '[BOLD] 0.3769', '[BOLD] 0.4160'], ['Interesting', 'Interesting', 'Interesting'], ['BERTScore (base)', '0.4121', '0.3901'], ['USR - DR (x = c)', '[BOLD] 0.4877', '0.3533'], ['USR', '0.4645', '[BOLD] 0.4555'], ['Uses Knowledge', 'Uses Knowledge', 'Uses Knowledge'], ['METEOR', '0.3909', '[BOLD] 0.3328'], ['USR - DR (x = f)', '[BOLD] 0.4468', '0.2220'], ['USR', '0.3353', '0.3175']]
USR is shown to strongly outperform both word-overlap and embedding-based metrics across all of the dialog qualities. Interestingly, the best non-USR metric is consistently either METEOR or BERTScore – possibly because both methods are adept at comparing synonyms during evaluation. For some dialog qualities, the overall USR metric outperforms the best sub-metric. For example, USR does better for Maintains Context than USR-DR. This is likely because the information from the other sub-metrics (e.g., Uses Knowledge) is valuable and effectively leveraged by USR.
USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation
2005.00456
Table 5: Turn-level correlations between all automatic metrics and the Overall Quality ratings for the Topical-Chat corpus. All values with p>0.05 are italicized.
['Metric', 'Spearman', 'Pearson']
[['Word-Overlap Metrics', 'Word-Overlap Metrics', 'Word-Overlap Metrics'], ['F-1', '0.1645', '0.1690'], ['BLEU-1', '0.2728', '0.2876'], ['BLEU-2', '0.2862', '0.3012'], ['BLEU-3', '0.2569', '0.3006'], ['BLEU-4', '0.2160', '0.2956'], ['METEOR', '0.3365', '0.3908'], ['ROUGE-L', '0.2745', '0.2870'], ['Embedding Based Metrics', 'Embedding Based Metrics', 'Embedding Based Metrics'], ['Greedy Matching', '0.1712', '0.1943'], ['Embedding Average', '0.1803', '0.2038'], ['Vector Extrema', '0.2032', '0.2091'], ['Skip-Thought', '[ITALIC] 0.1040', '[ITALIC] 0.1181'], ['BERTScore (base)', '0.3229', '0.3540'], ['BERTScore (large)', '0.2982', '0.3252'], ['Reference Free Metrics', 'Reference Free Metrics', 'Reference Free Metrics'], ['USR - MLM', '0.3086', '0.3345'], ['USR - DR (x = c)', '0.3245', '0.4068'], ['USR - DR (x = f)', '0.1419', '0.3221'], ['USR', '[BOLD] 0.4192', '[BOLD] 0.4220']]
USR shows a strong improvement over all other methods. This strong performance can be attributed to two factors: (1) the ability of MLM and DR to accurately quantify qualities of a generated response without a reference response, and (2) the ability of USR to effectively combine MLM and DR into a better correlated overall metric.
Sequence-to-Sequence Models Can Directly Translate Foreign Speech
1703.08581
Table 3: Speech recognition model performance in WER.
['[EMPTY]', 'Fisher dev', 'Fisher dev2', 'Fisher test', 'Callhome devtest', 'Callhome evltest']
[['Ours', '25.7', '25.1', '23.2', '44.5', '45.3'], ['Post et al. ', '41.3', '40.0', '36.5', '64.7', '65.3'], ['Kumar et al. ', '29.8', '29.8', '25.3', '–', '–']]
We construct a baseline cascade of a Spanish ASR seq2seq model whose output is passed into a Spanish-to-English NMT model. Performance on the Fisher task is significantly better than on Callhome since Fisher contains more formal speech, consisting of conversations between strangers, while Callhome conversations were often between family members.
Sequence-to-Sequence Models Can Directly Translate Foreign Speech
1703.08581
Table 1: Varying number of decoder layers in the speech translation model. BLEU score on the Fisher/dev set.
['Num decoder layers [ITALIC] D 1', 'Num decoder layers [ITALIC] D 2', 'Num decoder layers [ITALIC] D 3', 'Num decoder layers [ITALIC] D 4', 'Num decoder layers [ITALIC] D 5']
[['43.8', '45.1', '45.2', '45.5', '45.3']]
In contrast, seq2seq NMT models often use much deeper decoders. In analogy to a traditional ASR system, one may think of the seq2seq encoder as playing the role of the acoustic model while the decoder acts as the language model. The additional complexity of the translation task compared to monolingual language modeling motivates the use of a higher-capacity decoder network.
Sequence-to-Sequence Models Can Directly Translate Foreign Speech
1703.08581
Table 5: Speech translation model performance in BLEU score.
['Model', 'Fisher dev', 'Fisher dev2', 'Fisher test', 'Callhome devtest', 'Callhome evltest']
[['End-to-end ST 3', '46.5', '47.3', '47.3', '16.4', '16.6'], ['Multi-task ST / ASR 3', '48.3', '49.1', '48.7', '16.8', '17.4'], ['ASR→NMT cascade 3', '45.1', '46.1', '45.5', '16.2', '16.6'], ['Post et al. ', '–', '35.4', '–', '–', '11.7'], ['Kumar et al. ', '–', '40.1', '40.4', '–', '–']]
Despite not having access to source language transcripts at any stage of the training, the end-to-end model outperforms the baseline cascade, which passes the 1-best Spanish ASR output into the NMT model, by about 1.8 BLEU points on the Fisher/test set. We obtain an additional improvement of 1.4 BLEU points or more on all Fisher datasets in the multi-task configuration, in which the Spanish transcripts are used for additional supervision by sharing a single encoder sub-network across independent ASR and ST decoders. The ASR model converged after four days of training (1.5m steps), while the ST and multitask models continued to improve, with the final 1.2 BLEU point improvement taking two more weeks.
TACAM: Topic And Context Aware Argument Mining
1906.00923
Table 1. In-Topic
['[EMPTY]', 'Method', 'F1']
[['two-class', 'BiLSTM', '0.74'], ['two-class', 'BiCLSTM', '0.74'], ['two-class', 'TACAM-WE', '0.74'], ['two-class', 'TACAM-KG', '0.73'], ['two-class', 'CAM-BERT Base', '0.79'], ['two-class', 'TACAM-BERT Base', '[BOLD] 0.81'], ['[EMPTY]', 'CAM-BERT Large', '0.80'], ['[EMPTY]', 'TACAM-BERT Large', '[BOLD] 0.81'], ['three-class', 'BiLSTM', '0.56'], ['three-class', 'BiCLSTM', '0.53'], ['three-class', 'TACAM-WE', '0.54'], ['three-class', 'TACAM-KG', '0.56'], ['three-class', 'CAM-BERT Base', '0.65'], ['three-class', 'TACAM-BERT Base', '0.66'], ['[EMPTY]', 'CAM-BERT Large', '0.67'], ['[EMPTY]', 'TACAM-BERT Large', '[BOLD] 0.69']]
In this setting we do not expect a large improvement from providing topic information, since the models have already been trained on arguments from the same topics as in the training set. However, we see a relative increase of about 10% for the two-class and 20% for the three-class classification problem by using context information from transfer learning. Therefore, we conclude that contextual information about potential arguments is important and, since the topics are diverse, the model is able to learn the argument structure for each topic.
TACAM: Topic And Context Aware Argument Mining
1906.00923
Table 2. Cross-Topic
['[EMPTY]', 'Method', 'Topics Abortion', 'Topics Cloning', 'Topics Death penalty', 'Topics Gun control', 'Topics Marij. legal.', 'Topics Min. wage', 'Topics Nucl. energy', 'Topics School unif.', '\\diameter']
[['two-classes', 'BiLSTM', '0.61', '0.72', '0.70', '0.75', '0.64', '0.62', '0.67', '0.54', '0.66'], ['two-classes', 'BiCLSTM', '0.67', '0.71', '0.71', '0.73', '0.69', '0.75', '0.71', '0.58', '0.70'], ['two-classes', 'TACAM-WE', '0.64', '0.71', '0.70', '0.74', '0.64', '0.63', '0.68', '0.55', '0.66'], ['two-classes', 'TACAM-KG', '0.62', '0.69', '0.70', '0.75', '0.64', '0.76', '0.71', '0.56', '0.68'], ['two-classes', 'CAM-BERT Base', '0.61', '0.77', '0.74', '0.76', '0.74', '0.61', '0.76', '0.73', '0.72'], ['two-classes', 'CAM-BERT Large', '0.62', '0.79', '0.75', '0.77', '0.77', '0.65', '0.75', '0.73', '0.73'], ['[EMPTY]', 'TACAM-BERT Base', '0.78', '0.77', '[BOLD] 0.78', '0.80', '[BOLD] 0.79', '0.83', '0.80', '[BOLD] 0.83', '0.80'], ['[EMPTY]', 'TACAM-BERT Large', '[BOLD] 0.79', '[BOLD] 0.78', '[BOLD] 0.78', '[BOLD] 0.81', '[BOLD] 0.79', '[BOLD] 0.84', '[BOLD] 0.83', '0.82', '[BOLD] 0.80'], ['three-classes', 'BiLSTM', '0.47', '0.52', '0.48', '0.48', '0.44', '0.42', '0.48', '0.42', '0.46'], ['three-classes', 'BiCLSTM', '0.49', '0.52', '0.46', '0.51', '0.46', '0.44', '0.47', '0.42', '0.47'], ['three-classes', 'TACAM-WE', '0.47', '0.52', '0.47', '0.48', '0.46', '0.46', '0.48', '0.41', '0.47'], ['three-classes', 'TACAM-KG', '0.46', '0.51', '0.47', '0.47', '0.46', '0.48', '0.47', '0.41', '0.47'], ['three-classes', 'CAM-BERT Base', '0.38', '0.63', '0.53', '0.49', '0.54', '0.54', '0.61', '0.50', '0.53'], ['three-classes', 'TACAM-BERT Base', '0.42', '0.68', '0.54', '0.50', '0.60', '0.49', '0.64', '[BOLD] 0.69', '0.57'], ['[EMPTY]', 'CAM-BERT Large', '0.53', '0.67', '0.56', '0.53', '0.59', '0.66', '0.67', '0.66', '0.61'], ['[EMPTY]', 'TACAM-BERT Large', '[BOLD] 0.54', '[BOLD] 0.69', '[BOLD] 0.59', '[BOLD] 0.55', '[BOLD] 0.63', '[BOLD] 0.69', '[BOLD] 0.71', '[BOLD] 0.69', '[BOLD] 0.64']]
In this experiment, which reflects a real-life argument search scenario, we want to test our two hypotheses: when classifying potential arguments, it is advantageous to take information about the topic into account; and the context of an argument and the topic context are important for the classification decision. On the whole, we can see that both hypotheses are confirmed. In the two-class scenario the recurrent model improves if topic information is provided via knowledge graph embeddings. By using attention-based models with pre-trained weights we observe a significant performance boost of eleven score points on average when considering topic information. However, the same model without topic information performs only slightly better than the recurrent models. Therefore, we conclude that both topic information and the contexts of topic and argument are important for the correct decision about a potential argument. We observe similar effects in the three-class scenario. Although on average different contexts have a similar effect for the recurrent model, we can clearly observe that taking topic information into account improves classification results by one score point. The combination of transfer learning for context and topic information again outperforms all other approaches by far. At the same time, the pre-trained model without topic information achieves a macro-F1 score of 0.61, which is 3 points lower than with topic information.
TACAM: Topic And Context Aware Argument Mining
1906.00923
Table 4. Topic dependent cross-topic classification results
['[EMPTY]', 'Method', 'Topics Abortion', 'Topics Cloning', 'Topics Death penalty', 'Topics Gun control', 'Topics Marij. legal.', 'Topics Min. wage', 'Topics Nucl. energy', 'Topics School unif.', '\\diameter']
[['two-classes', 'BiLSTM', '0.57', '0.59', '0.53', '0.59', '0.62', '0.62', '0.59', '0.57', '0.58'], ['two-classes', 'BiCLSTM', '0.62', '0.72', '0.46', '0.46', '0.76', '0.60', '0.69', '0.45', '0.60'], ['two-classes', 'CAM-BERT Base', '0.56', '0.63', '0.60', '0.62', '0.61', '0.55', '0.60', '0.53', '0.59'], ['two-classes', 'TACAM-BERT Base', '[BOLD] 0.68', '[BOLD] 0.77', '[BOLD] 0.78', '[BOLD] 0.79', '[BOLD] 0.82', '[BOLD] 0.85', '[BOLD] 0.79', '[BOLD] 0.58', '[BOLD] 0.76'], ['three-classes', 'BiLSTM', '0.39', '0.39', '0.37', '0.36', '0.39', '0.42', '0.40', '0.39', '0.39'], ['three-classes', 'BiCLSTM', '[BOLD] 0.46', '0.34', '0.29', '0.35', '0.42', '0.29', '0.47', '0.30', '0.36'], ['three-classes', 'CAM-BERT Base', '0.42', '0.50', '0.42', '0.42', '0.48', '0.51', '0.50', '0.49', '0.47'], ['three-classes', 'TACAM-BERT Base', '0.44', '[BOLD] 0.60', '[BOLD] 0.52', '[BOLD] 0.49', '[BOLD] 0.61', '[BOLD] 0.65', '[BOLD] 0.62', '[BOLD] 0.55', '[BOLD] 0.56']]
For the two-class problem we observe a massive performance drop of ten macro-F1 points for the BiCLSTM model. Nonetheless, the model still makes use of topic information and outperforms the standard BiLSTM by two macro-F1 points. Our approach TACAM-BERT Base is more robust: its performance falls by a moderate four points, and the gap to the counterpart model without topic information is a remarkable 17 points. We observe similar behaviour in the three-class scenario. Our TACAM-BERT Base approach achieves roughly the same average score as in the original cross-topic task. In contrast, the performance of the BiCLSTM model drops by 11 points and it even performs worse than the same model without topic information on this more complex task. Thus we conclude that, unlike previous models, our approaches are indeed able to grasp the context of the argument and the topic and to relate them to each other.
A Bayesian Model for Generative Transition-based Dependency Parsing
1506.04334
Table 6: Language modelling test results. Above, training and testing on WSJ. Below, training semi-supervised and testing on WMT.
['Model', 'Perplexity']
[['HPYP 5-gram', '147.22'], ['ChelbaJ00', '146.1'], ['EmamiJ05', '131.3'], ['[BOLD] HPYP-DP', '[BOLD] 145.54'], ['HPYP 5-gram', '178.13'], ['[BOLD] HPYP-DP', '[BOLD] 163.96']]
We note that the perplexities reported are upper bounds on the true perplexity of the model, as it is intractable to sum over all possible parses of a sentence to compute the marginal probability of the words. As an approximation we sum over the final beam after decoding.
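A small sketch of this approximation: summing the joint probabilities of the parses left on the final beam lower-bounds the marginal word probability, so the perplexity computed from it upper-bounds the true perplexity. The snippet below assumes per-sentence arrays of joint log-probabilities; it is an illustration, not the paper's code.

```python
import numpy as np
from scipy.special import logsumexp

def beam_perplexity_upper_bound(beam_logprobs_per_sentence, total_words):
    """Perplexity computed from the final beam of a generative parser.

    Each element of `beam_logprobs_per_sentence` holds the joint
    log-probabilities log p(words, parse) of the parses on the final beam
    for one sentence. Their log-sum-exp lower-bounds log p(words), so the
    resulting perplexity is an upper bound on the true perplexity.
    """
    total_logprob = sum(logsumexp(lp) for lp in beam_logprobs_per_sentence)
    return np.exp(-total_logprob / total_words)
```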
A Bayesian Model for Generative Transition-based Dependency Parsing
1506.04334
Table 3: Effect of including elements in the model conditioning contexts. Results are given on the YM development set.
['Context elements', 'UAS', 'LAS']
[['[ITALIC] σ1. [ITALIC] t, [ITALIC] σ2. [ITALIC] t', '73.25', '70.14'], ['+ [ITALIC] rc1( [ITALIC] σ1). [ITALIC] t', '80.21', '76.64'], ['+ [ITALIC] lc1( [ITALIC] σ1). [ITALIC] t', '85.18', '82.03'], ['+ [ITALIC] σ3. [ITALIC] t', '87.23', '84.26'], ['+ [ITALIC] rc1( [ITALIC] σ2). [ITALIC] t', '87.95', '85.04'], ['+ [ITALIC] σ1. [ITALIC] w', '88.53', '86.11'], ['+ [ITALIC] σ2. [ITALIC] w', '88.93', '86.57']]
The first modelling choice is the selection and ordering of elements in the conditioning contexts of the HPYP priors. The first two words on the stack are the most important, but insufficient – second-order dependencies and further elements on the stack should also be included in the contexts. The challenge is that the back-off structure of each HPYP specifies an ordering of the elements based on their importance in the prediction. We are therefore much more restricted than classifiers with large, sparse feature-sets which are commonly used in transition-based parsers. Due to sparsity, the word types are the first elements to be dropped in the back-off structure, and elements such as third-order dependencies, which have been shown to improve parsing performance, cannot be included successfully in our model.
A Bayesian Model for Generative Transition-based Dependency Parsing
1506.04334
Table 5: Parsing accuracies on the YM test set. compared against previous published results. TitovH07 was retrained to enable direct comparison.
['Model', 'UAS', 'LAS']
[['Eisner96', '80.7', '-'], ['WallachSM08', '85.7', '-'], ['TitovH07', '89.36', '87.65'], ['[BOLD] HPYP-DP', '[BOLD] 88.47', '[BOLD] 86.13'], ['MaltParser', '88.88', '87.41'], ['ZhangN11', '92.9', '91.8'], ['ChoiM13', '92.96', '91.93']]
Our HPYP model performs much better than Eisner's generative model as well as the Bayesian version of that model proposed by WallachSM08 (the result for Eisner's model is given as reported by WallachSM08 on the WSJ). The accuracy of our model is only 0.8 UAS below the generative model of TitovH07, despite that model being much more powerful. The Titov and Henderson model takes 3 days to train, and its decoding speed is around 1 sentence per second.
ESPnet: End-to-End Speech Processing Toolkit
1804.00015
Table 2: Comparisons (CER, WER, and training time) of the WSJ task with other end-to-end ASR systems.
['Method', 'Wall Clock Time', '# GPUs']
[['ESPnet (Chainer)', '20 hours', '1'], ['ESPnet (PyTorch)', '5 hours', '1'], ['seq2seq + CNN ', '120 hours', '10']]
The use of a deeper encoder network, the integration of a character-based LSTMLM, and joint CTC/attention decoding steadily improved the performance. Comparing with these prior studies, we can nevertheless state that ESPnet provides reasonable performance.
ESPnet: End-to-End Speech Processing Toolkit
1804.00015
Table 2: Comparisons (CER, WER, and training time) of the WSJ task with other end-to-end ASR systems.
['Method', 'Metric', 'dev93', 'eval92']
[['ESPnet with VGG2-BLSTM', 'CER', '10.1', '7.6'], ['+ BLSTM layers (4 → 6)', 'CER', '8.5', '5.9'], ['+ char-LSTMLM', 'CER', '8.3', '5.2'], ['+ joint decoding', 'CER', '5.5', '3.8'], ['+ label smoothing', 'CER', '5.3', '3.6'], ['[EMPTY]', 'WER', '12.4', '8.9'], ['seq2seq + CNN (no LM) ', 'WER', '[EMPTY]', '10.5'], ['seq2seq + FST word LM ', 'CER', '[EMPTY]', '3.9'], ['[EMPTY]', 'WER', '[EMPTY]', '9.3'], ['CTC + FST word LM ', 'WER', '[EMPTY]', '7.3']]
The use of a deeper encoder network, the integration of a character-based LSTMLM, and joint CTC/attention decoding steadily improved the performance. Comparing with these prior studies, we can nevertheless state that ESPnet provides reasonable performance.
Automatic Speech Recognition with Very Large Conversational Finnish and Estonian Vocabularies
1707.04227
TABLE III: Comparison of uniform data processing, random sampling of web data by 20 %, and weighted parameter updates from web data by a factor of 0.4, in NNLM training. The models were trained using normal softmax. Includes development set perplexity, word error rate (%), and word error rate after interpolation with the n-gram model.
['Subset Processing', 'Training Time', 'Perplexity', 'WER', '+NGram']
[['[BOLD] Finnish, 5k classes', '[BOLD] Finnish, 5k classes', '[BOLD] Finnish, 5k classes', '[BOLD] Finnish, 5k classes', '[BOLD] Finnish, 5k classes'], ['Uniform', '143 h', '511', '26.0', '25.6'], ['Sampling', '128 h', '505', '26.2', '25.6'], ['Weighting', '101 h', '521', '26.4', '25.5'], ['[BOLD] Finnish, 42.5k subwords', '[BOLD] Finnish, 42.5k subwords', '[BOLD] Finnish, 42.5k subwords', '[BOLD] Finnish, 42.5k subwords', '[BOLD] Finnish, 42.5k subwords'], ['Uniform', '360 h', '679', '25.2', '[BOLD] 24.6'], ['Sampling', '360 h', '671', '25.5', '25.0'], ['Weighting', '360 h', '672', '[BOLD] 25.1', '[BOLD] 24.6'], ['[BOLD] Finnish, 468k subwords, 5k classes', '[BOLD] Finnish, 468k subwords, 5k classes', '[BOLD] Finnish, 468k subwords, 5k classes', '[BOLD] Finnish, 468k subwords, 5k classes', '[BOLD] Finnish, 468k subwords, 5k classes'], ['Uniform', '141 h', '790', '26.0', '25.0'], ['Sampling', '119 h', '761', '25.9', '25.1'], ['[BOLD] Estonian, 5k classes', '[BOLD] Estonian, 5k classes', '[BOLD] Estonian, 5k classes', '[BOLD] Estonian, 5k classes', '[BOLD] Estonian, 5k classes'], ['Uniform', '86 h', '339', '[BOLD] 19.8', '19.9'], ['Sampling', '87 h', '311', '20.2', '19.9'], ['Weighting', '105 h', '335', '20.0', '[BOLD] 19.6'], ['[BOLD] Estonian, 212k subwords, 5k classes', '[BOLD] Estonian, 212k subwords, 5k classes', '[BOLD] Estonian, 212k subwords, 5k classes', '[BOLD] Estonian, 212k subwords, 5k classes', '[BOLD] Estonian, 212k subwords, 5k classes'], ['Uniform', '187 h', '424', '20.0', '19.7'], ['Sampling', '130 h', '397', '20.0', '19.8'], ['Weighting', '187 h', '409', '19.9', '[BOLD] 19.6']]
Optimizing the weights for neural network training is more difficult than for the n-gram mixture models. As we do not have a computational method for optimizing the weights, we tried a few values, observing the development set perplexity during training. Sampling 20 % of the web data on each iteration, or weighting the web data by a factor of 0.4, seemed to work reasonably well. We used a slightly higher learning rate when weighting the web data to compensate for the fact that the updates are smaller on average. Uniform means that the web data is processed just like the other data sets, sampling means that a subset of the web data is randomly sampled before each epoch, and weighting means that the parameter updates are given a smaller weight when the mini-batch contains web sentences. Sampling seems to improve perplexity, but not word error rate. Because sampling usually speeds up training considerably and our computational resources were limited, the rest of the experiments were done using sampling.
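The two subset-processing schemes compared above can be sketched as a simple data pipeline; the snippet is a hedged illustration with made-up function boundaries, not the toolkit's actual training loop (the weight would be applied to the gradient step for the mini-batch).

```python
import random

def epoch_batches(main_data, web_data, mode="sampling",
                  sample_fraction=0.2, web_weight=0.4, batch_size=128):
    """Yield (sentence, update_weight) mini-batches for one training epoch.

    mode="sampling": a fresh random 20% subset of the web sentences is drawn
    before each epoch and mixed with the in-domain data at full weight.
    mode="weighting": all web sentences are used, but their mini-batch
    updates carry a weight of 0.4 instead of 1.0.
    """
    if mode == "sampling":
        web = random.sample(web_data, int(sample_fraction * len(web_data)))
        data = [(s, 1.0) for s in main_data] + [(s, 1.0) for s in web]
    else:  # "weighting"
        data = [(s, 1.0) for s in main_data] + [(s, web_weight) for s in web_data]
    random.shuffle(data)
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]
```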
Automatic Generation of Language-Independent Featuresfor Cross-Lingual Classification
1802.04028
Table 4: Results for CLTC1, CLTC2, CLTC3 and UCLTC
['Setup', 'Source', 'Target', 'Examples', 'LIFG (%)']
[['CLTC1', 'F', 'F', '150', '65.70'], ['CLTC1', 'E', 'E', '150', '67.60'], ['CLTC1', 'G', 'G', '150', '67.10'], ['CLTC2', 'E', 'F', '150', '62.00'], ['CLTC2', 'G', 'F', '150', '59.60'], ['CLTC2', 'F', 'E', '150', '60.50'], ['CLTC2', 'G', 'E', '150', '61.80'], ['CLTC2', 'F', 'G', '150', '60.90'], ['CLTC2', 'E', 'G', '150', '63.30'], ['CLTC3', 'F, E', 'F', '300', '73.50'], ['CLTC3', 'F, G', 'F', '300', '71.40'], ['CLTC3', 'E, G', 'F', '300', '69.60'], ['CLTC3', 'F, E, G', 'F', '450', '77.30'], ['CLTC3', 'F, E', 'E', '300', '74.60'], ['CLTC3', 'F, G', 'E', '300', '67.40'], ['CLTC3', 'E, G', 'E', '300', '75.20'], ['CLTC3', 'F, E, G', 'E', '450', '78.40'], ['CLTC3', 'F, E', 'G', '300', '66.80'], ['CLTC3', 'F, G', 'G', '300', '73.00'], ['CLTC3', 'E, G', 'G', '300', '74.80'], ['[EMPTY]', 'F, E, G', 'G', '450', '77.90'], ['UCLTC', 'F', 'F, E', '150', '63.10'], ['UCLTC', 'E', 'F, E', '150', '64.80'], ['UCLTC', 'G', 'F, E', '150', '60.70'], ['UCLTC', 'F, E', 'F, E', '300', '74.00'], ['UCLTC', 'F, G', 'F, E', '300', '69.40'], ['UCLTC', 'E, G', 'F, E', '300', '72.40'], ['UCLTC', 'F, E, G', 'F, E', '450', '77.60'], ['UCLTC', 'F', 'F, G', '150', '63.30'], ['UCLTC', 'E', 'F, G', '150', '62.70'], ['UCLTC', 'G', 'F, G', '150', '63.40'], ['UCLTC', 'F, G', 'F, G', '300', '70.20'], ['UCLTC', 'F, G', 'F, G', '300', '72.20'], ['UCLTC', 'E, G', 'F, G', '300', '72.20'], ['UCLTC', 'F, E, G', 'F, G', '450', '77.60'], ['UCLTC', 'F', 'E, G', '150', '60.70'], ['UCLTC', 'E', 'E, G', '150', '65.50'], ['UCLTC', 'G', 'E, G', '150', '64.50'], ['UCLTC', 'F, E', 'E, G', '300', '70.70'], ['UCLTC', 'F, G', 'E, G', '300', '71.10'], ['UCLTC', 'E, G', 'E, G', '300', '75.00'], ['UCLTC', 'F, E, G', 'E, G', '450', '78.20'], ['UCLTC', 'F', 'F, E, G', '150', '62.40'], ['UCLTC', 'E', 'F, E, G', '150', '64.30'], ['UCLTC', 'G', 'F, E, G', '150', '62.80'], ['UCLTC', 'F, E', 'F, E, G', '300', '71.60'], ['UCLTC', 'F, G', 'F, E, G', '300', '70.60'], ['UCLTC', 'E, G', 'F, E, G', '300', '73.20'], ['UCLTC', 'F, E, G', 'F, E, G', '450', '77.90']]
We tested all combinations of source and target languages for all the CLTC setups. We can see patterns similar to those shown above. With every source language added to the training set, performance on the test set (now in two or three target languages) improves.
Automatic Generation of Language-Independent Featuresfor Cross-Lingual Classification
1802.04028
Table 1: CLTC Results on the Webis-CLS-10C Dataset
['Baseline', 'Source', 'Target', 'Baseline Results', 'LIFG']
[['SHFR-ECOC', 'E', 'F', '62.09', '90.00'], ['SHFR-ECOC', 'E', 'G', '65.22', '91.29'], ['Inverted', 'E', 'G', '49.00', '91.00'], ['DCI', 'E', 'F', '83.80', '90.38'], ['DCI', 'E', 'G', '83.80', '92.07']]
Following the baselines, we report accuracy, except when comparing with Inverted, which reported F1; in that case we report F1 as well.
Automatic Generation of Language-Independent Featuresfor Cross-Lingual Classification
1802.04028
Table 2: CLTC Results on Reuters RCV1/RCV2 Dataset
['Baseline', 'Source', 'Target', 'Baseline Results', 'LIFG']
[['SHFR-ECOC', 'E', 'S', '72.79', '85.70'], ['SHFR-ECOC', 'F', 'S', '73.82', '85.95'], ['[EMPTY]', 'G', 'S', '74.15', '87.16'], ['Inverted', 'E', 'G', '55.00', '89.00'], ['SHFA', 'E', 'S', '76.40', '85.70'], ['SHFA', 'F', 'S', '76.80', '85.95'], ['[EMPTY]', 'G', 'S', '77.10', '87.16'], ['DMMC', 'E', 'F', '65.52', '88.63'], ['DMMC', 'E', 'G', '58.23', '89.44'], ['DMMC', 'E', 'S', '62.64', '85.70'], ['BRAVE', 'E', 'F', '82.50', '89.39'], ['BRAVE', 'E', 'G', '89.70', '90.76'], ['BRAVE', 'E', 'S', '60.20', '86.78'], ['[EMPTY]', 'F', 'E', '79.50', '89.09'], ['[EMPTY]', 'G', 'E', '80.10', '89.25'], ['[EMPTY]', 'S', 'E', '70.40', '86.61']]
Following the baselines, we report accuracy, except when comparing with Inverted, which reported F1; in that case we report F1 as well.
Automatic Generation of Language-Independent Featuresfor Cross-Lingual Classification
1802.04028
Table 3: The Effect of Hierarchical Feature Generation
['Source', 'Target', 'LIFG – w/o [ITALIC] CMeta', 'LIFG – w/ [ITALIC] CMeta']
[['E', 'F', '52.63', '62.03'], ['E', 'G', '55.19', '63.34'], ['F', 'E', '50.87', '60.49'], ['F', 'G', '49.32', '60.88'], ['G', 'E', '51.06', '59.61'], ['G', 'F', '50.01', '61.84']]
As can be seen, the improvement is substantial: about 10% on average. Clearly, abstract features contribute significantly to performance and should therefore be used when available.
Query-Reduction Networksfor Question Answering
1606.04582
Figure 3: (top) bAbI QA dataset (Weston et al., 2016) visualization of update and reset gates in QRN ‘2r’ model (bottom two) bAbI dialog and DSTC2 dialog dataset (Bordes and Weston, 2016) visualization of update and reset gates in QRN ‘2r’ model. Note that the stories can have as many as 800+ sentences; we only show part of them here. More visualizations are shown in Figure 4 (bAbI QA) and Figure 5 (dialog datasets).
['Task 3: Displaying options', 'Layer 1 [ITALIC] z1', 'Layer 1 → [ITALIC] r1', 'Layer 1 ← [ITALIC] r1', 'Layer 2 [ITALIC] z2']
[['resto-paris-expen-frech-8stars?', '0.00', '1.00', '0.96', '0.91'], ['Do you have something else?', '0.41', '0.99', '0.00', '0.00'], ['Sure let me find another option.', '1.00', '0.00', '0.00', '0.12'], ['resto-paris-expen-frech-5stars?', '0.00', '1.00', '0.96', '0.91'], ['No this does not work for me.', '0.00', '0.00', '0.14', '0.00'], ['Sure let me find an other option.', '1.00', '0.00', '0.00', '0.12'], ['What do you think of this? resto-paris-expen-french-4stars', 'What do you think of this? resto-paris-expen-french-4stars', 'What do you think of this? resto-paris-expen-french-4stars', 'What do you think of this? resto-paris-expen-french-4stars', 'What do you think of this? resto-paris-expen-french-4stars']]
In the QA Task 2 example (top left), we observe high update gate values in the first layer on facts stating who has the apple, and in the second layer on facts stating where that person went. We also observe that the forward reset gate at t=2 in the first layer (→r12) is low, signifying that the apple no longer belongs to Sandra. In dialog Task 3 (bottom left), the model is able to infer that three restaurants have already been recommended, so it can recommend another one. In dialog Task 6 (bottom), the model focuses on the sentences containing Spanish and does not concentrate much on other facts such as “I don’t care”.
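The gate values visualized in these tables come from a gated query-reduction step of roughly the following form. This is a schematic NumPy sketch, assuming sentence and query vectors of equal dimensionality and simple sigmoid/tanh gating over their concatenation; it is not the authors' exact QRN parameterization, and the weight matrices are placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def qrn_step(x_t, q_prev, W_z, W_r, W_h):
    """One schematic query-reduction step: the update gate z_t decides how much
    the query is rewritten by sentence x_t, and the reset gate r_t decides how
    much of the candidate passes through (illustrative, not the exact QRN)."""
    xq = np.concatenate([x_t, q_prev])
    z_t = sigmoid(W_z @ xq)          # update gate (per-dimension or scalar)
    r_t = sigmoid(W_r @ xq)          # reset gate
    q_cand = np.tanh(W_h @ xq)       # candidate reduced query
    q_t = z_t * (r_t * q_cand) + (1.0 - z_t) * q_prev
    return q_t, z_t, r_t
```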
Query-Reduction Networksfor Question Answering
1606.04582
Figure 3: (top) bAbI QA dataset (Weston et al., 2016) visualization of update and reset gates in QRN ‘2r’ model (bottom two) bAbI dialog and DSTC2 dialog dataset (Bordes and Weston, 2016) visualization of update and reset gates in QRN ‘2r’ model. Note that the stories can have as many as 800+ sentences; we only show part of them here. More visualizations are shown in Figure 4 (bAbI QA) and Figure 5 (dialog datasets).
['Task 2: Two Supporting Facts', 'Layer 1 [ITALIC] z1', 'Layer 1 → [ITALIC] r1', 'Layer 1 ← [ITALIC] r1', 'Layer 2 [ITALIC] z2']
[['Sandra picked up the apple there.', '0.95', '0.89', '0.98', '0.00'], ['Sandra dropped the apple.', '0.83', '0.05', '0.92', '0.01'], ['Daniel grabbed the apple there.', '0.88', '0.93', '0.98', '0.00'], ['Sandra travelled to the bathroom.', '0.01', '0.18', '0.63', '0.02'], ['Daniel went to the hallway.', '0.01', '0.24', '0.62', '0.83'], ['Where is the apple? hallway', 'Where is the apple? hallway', 'Where is the apple? hallway', 'Where is the apple? hallway', 'Where is the apple? hallway']]
In the QA Task 2 example (top left), we observe high update gate values in the first layer on facts stating who has the apple, and in the second layer on facts stating where that person went. We also observe that the forward reset gate at t=2 in the first layer (→r12) is low, signifying that the apple no longer belongs to Sandra. In dialog Task 3 (bottom left), the model is able to infer that three restaurants have already been recommended, so it can recommend another one. In dialog Task 6 (bottom), the model focuses on the sentences containing Spanish and does not concentrate much on other facts such as “I don’t care”.
Query-Reduction Networksfor Question Answering
1606.04582
Figure 3: (top) bAbI QA dataset (Weston et al., 2016) visualization of update and reset gates in QRN ‘2r’ model (bottom two) bAbI dialog and DSTC2 dialog dataset (Bordes and Weston, 2016) visualization of update and reset gates in QRN ‘2r’ model. Note that the stories can have as many as 800+ sentences; we only show part of them here. More visualizations are shown in Figure 4 (bAbI QA) and Figure 5 (dialog datasets).
['Task 15: Deduction', 'Layer 1 [ITALIC] z1', 'Layer 1 → [ITALIC] r1', 'Layer 1 ← [ITALIC] r1', 'Layer 2 [ITALIC] z2']
[['Mice are afraid of wolves.', '0.11', '0.99', '0.13', '0.78'], ['Gertrude is a mouse.', '0.77', '0.99', '0.96', '0.00'], ['Cats are afraid of sheep.', '0.01', '0.99', '0.07', '0.03'], ['Winona is a mouse.', '0.14', '0.85', '0.77', '0.05'], ['Sheep are afraid of wolves.', '0.02', '0.98', '0.27', '0.05'], ['What is Gertrude afraid of? wolf', 'What is Gertrude afraid of? wolf', 'What is Gertrude afraid of? wolf', 'What is Gertrude afraid of? wolf', 'What is Gertrude afraid of? wolf']]
In the QA Task 2 example (top left), we observe high update gate values in the first layer on facts stating who has the apple, and in the second layer on facts stating where that person went. We also observe that the forward reset gate at t=2 in the first layer (→r12) is low, signifying that the apple no longer belongs to Sandra. In dialog Task 3 (bottom left), the model is able to infer that three restaurants have already been recommended, so it can recommend another one. In dialog Task 6 (bottom), the model focuses on the sentences containing Spanish and does not concentrate much on other facts such as “I don’t care”.
Query-Reduction Networksfor Question Answering
1606.04582
Figure 3: (top) bAbI QA dataset (Weston et al., 2016) visualization of update and reset gates in QRN ‘2r’ model (bottom two) bAbI dialog and DSTC2 dialog dataset (Bordes and Weston, 2016) visualization of update and reset gates in QRN ‘2r’ model. Note that the stories can have as many as 800+ sentences; we only show part of them here. More visualizations are shown in Figure 4 (bAbI QA) and Figure 5 (dialog datasets).
['Task 6: DSTC2 dialog', 'Layer 1 [ITALIC] z1', 'Layer 1 → [ITALIC] r1', 'Layer 1 ← [ITALIC] r1', 'Layer 2 [ITALIC] z2']
[['Spanish food.', '0.84', '0.07', '0.00', '0.82'], ['You are lookng for a spanish restaurant right?', '0.98', '0.02', '0.49', '0.75'], ['Yes.', '0.01', '1.00', '0.33', '0.13'], ['What part of town do you have in mind?', '0.20', '0.73', '0.41', '0.11'], ['I don’t care.', '0.00', '1.00', '0.02', '0.00'], ['What price range would you like?', '0.72', '0.46', '0.52', '0.72'], ['I don’t care. API CALL spanish R-location R-price', 'I don’t care. API CALL spanish R-location R-price', 'I don’t care. API CALL spanish R-location R-price', 'I don’t care. API CALL spanish R-location R-price', 'I don’t care. API CALL spanish R-location R-price']]
In the QA Task 2 example (top left), we observe high update gate values in the first layer on facts stating who has the apple, and in the second layer on facts stating where that person went. We also observe that the forward reset gate at t=2 in the first layer (→r12) is low, signifying that the apple no longer belongs to Sandra. In dialog Task 3 (bottom left), the model is able to infer that three restaurants have already been recommended, so it can recommend another one. In dialog Task 6 (bottom), the model focuses on the sentences containing Spanish and does not concentrate much on other facts such as “I don’t care”.
Query-Reduction Networksfor Question Answering
1606.04582
Table 2: bAbI QA dataset [Weston et al., 2016] error rates (%) of QRN and previous work: LSTM [Weston et al., 2016], End-to-end Memory Networks (N2N) [Sukhbaatar et al., 2015], Dynamic Memory Networks (DMN+) [Xiong et al., 2016], Gated End-to-end Memory Networks(GMemN2N) [Perez and Liu, 2016]. Results within each task of Differentiable Neural Computer(DNC) were not provided in its paper Graves et al. [2016]). For QRN, a number in the front (1, 2, 3, 6) indicates the number of layers. A number in the back (200) indicates the dimension of hidden vector, while the default value is 50. ‘r’ indicates that the reset gate is used, and ‘v’ indicates that the gates were vectorized. ‘*’ indicates joint training.
['Task', '1k Previous works', '1k Previous works', '1k Previous works', '1k Previous works', '1k QRN', '1k QRN', '1k QRN', '1k QRN', '1k QRN', '1k QRN', '10k Previous works', '10k Previous works', '10k Previous works', '10k QRN', '10k QRN', '10k QRN', '10k QRN']
[['Task', 'LSTM', 'N2N', 'DMN+', 'GMemN2N', '1r', '2', '2r', '3r', '6r', '6r200*', 'N2N', 'DMN+', 'GMemN2N', '2r', '2rv', '3r', '6r200'], ['1: Single supporting fact', '50.0', '0.1', '1.3', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '13.1', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0'], ['2: Two supporting facts', '80.0', '18.8', '72.3', '8.1', '65.7', '1.2', '0.7', '0.5', '1.5', '15.3', '0.3', '0.3', '0.0', '0.4', '0.8', '0.4', '0.0'], ['3: Three supporting facts', '80.0', '31.7', '73.3', '38.7', '68.2', '17.5', '5.7', '1.2', '15.3', '13.8', '2.1', '1.1', '4.5', '0.4', '1.4', '0.0', '0.0'], ['4: Two arg relations', '39.0', '17.5', '26.9', '0.4', '0.0', '0.0', '0.0', '0.7', '9.0', '13.6', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0'], ['5: Three arg relations', '30.0', '12.9', '25.6', '1.0', '1.0', '1.1', '1.1', '1.2', '1.3', '12.5', '0.8', '0.5', '0.2', '0.5', '0.2', '0.3', '0.0'], ['6: Yes/no questions', '52.0', '2.0', '28.5', '8.4', '0.1', '0.0', '0.9', '1.2', '50.6', '15.5', '0.1', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0'], ['7: Counting', '51.0', '10.1', '21.9', '17.8', '10.9', '11.1', '9.6', '9.4', '13.1', '15.3', '2.0', '2.4', '1.8', '1.0', '0.7', '0.7', '0.0'], ['8: Lists/sets', '55.0', '6.1', '21.9', '12.5', '6.8', '5.7', '5.6', '3.7', '7.8', '15.1', '0.9', '0.0', '0.3', '1.4', '0.6', '0.8', '0.4'], ['9 : Simple negation', '36.0', '1.5', '42.9', '10.7', '0.0', '0.6', '0.0', '0.0', '32.7', '13.0', '0.3', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0'], ['10: Indefinite knowledge', '56.0', '2.6', '23.1', '16.5', '0.8', '0.6', '0.0', '0.0', '3.5', '12.9', '0.0', '0.0', '0.2', '0.0', '0.0', '0.0', '0.0'], ['11: Basic coreference', '38.0', '3.3', '4.3', '0.0', '11.3', '0.5', '0.0', '0.0', '0.9', '14.7', '0.1', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0'], ['12: Conjunction', '26.0', '0.0', '3.5', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '15.1', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0'], ['13: Compound coreference', '6.0', '0.5', '7.8', '0.0', '5.3', '5.5', '0.0', '0.3', '8.9', '13.7', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0'], ['14: Time reasoning', '73.0', '2.0', '61.9', '1.2', '20.2', '1.3', '0.8', '3.8', '18.2', '14.5', '0.1', '0.0', '0.0', '0.2', '0.0', '0.0', '0.1'], ['15: Basic deduction', '79.0', '1.8', '47.6', '0.0', '39.4', '0.0', '0.0', '0.0', '0.1', '14.7', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0'], ['16: Basic induction', '77.0', '51.0', '54.4', '0.1', '50.6', '54.8', '53.0', '53.4', '53.5', '15.5', '51.8', '45.3', '0.0', '49.4', '50.4', '49.1', '0.0'], ['17: Positional reasoning', '49.0', '42.6', '44.1', '41.7', '40.6', '36.5', '34.4', '51.8', '52.0', '13.0', '18.6', '4.2', '27.8', '0.9', '0.0', '5.8', '4.1'], ['18: Size reasoning', '48.0', '9.2', '9.1', '9.2', '8.2', '8.6', '7.9', '8.8', '47.5', '14.9', '5.3', '2.1', '8.5', '1.6', '8.4', '1.8', '0.7'], ['19: Path finding', '92.0', '90.6', '90.8', '88.5', '88.8', '89.8', '78.7', '90.7', '88.6', '13.6', '2.3', '0.0', '31.0', '36.1', '1.0', '27.9', '0.1'], ['20: Agents motivations', '9.0', '0.2', '2.2', '0.0', '0.0', '0.0', '0.2', '0.3', '5.5', '14.6', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0'], ['# Failed', '20', '10', '16', '10', '12', '8', '7', '[BOLD] 5', '13', '20', '3', '1', '3', '2', '2', '3', '[BOLD] 0'], ['Average error rates (%)', '51.3', '15.2', '33.2', '12.7', '20.1', '11.7', '[BOLD] 9.9', '11.3', '20.5', '14.2', '4.2', '2.8', '3.7', '4.6', '3.2', '4.3', '[BOLD] 0.3']]
On the 1k data, QRN’s ‘2r’ (2 layers + reset gate + d=50) outperforms all other models by a large margin (2.8+%). On the 10k data, QRN’s ‘6r200’ (6 layers + reset gate + d=200) model outperforms all previous models by a large margin (2.5+%) in average accuracy, achieving a nearly perfect score of 99.7%.
Video Object Grounding using Semantic Roles in Language Description
2003.10606
Table 4: Evaluation of VOGNet in GT5 setting by training (first column) and testing (top row) on SVSQ, TEMP, SPAT respectively
['[EMPTY]', 'SVSQ Acc', 'SVSQ SAcc', 'TEMP Acc', 'TEMP SAcc', 'SPAT Acc', 'SPAT SAcc']
[['SVSQ', '76.38', '59.58', '1.7', '0.42', '2.27', '0.6'], ['TEMP', '75.4', '57.38', '23.07', '12.06', '18.03', '8.16'], ['SPAT', '75.15', '57.02', '22.6', '11.04', '23.53', '11.58']]
However, the reverse is not true, i.e., models trained on SVSQ fail miserably on SPAT and TEMP (accuracy is <3%). This suggests that both TEMP and SPAT moderately counter the bias caused by having a single object instance in a video. Interestingly, while VOGNet trained on TEMP doesn’t perform well on SPAT (performance is worse than VidGrnd trained on SPAT), VOGNet trained on SPAT and tested on TEMP significantly outperforms VidGrnd trained on TEMP. This asymmetry is possibly because the multi-modal transformer is applied to individual frames.
Video Object Grounding using Semantic Roles in Language Description
2003.10606
Table 3: Comparison of VOGNet against ImgGrnd and VidGrnd. GT5 and P100 use 5 and 100 proposals per frame. Here, Acc: Grounding Accuracy, VAcc: Video accuracy, Cons: Consistency, SAcc: Strict Accuracy (see Section 4.3 for details). On the challenging evaluation metrics of TEMP and SPAT, VOGNet (ours) shows significant improvement over competitive image and video grounding baselines.
['[EMPTY]', 'Model', 'SVSQ Acc', 'SVSQ SAcc', 'SEP Acc', 'SEP VAcc', 'SEP SAcc', 'TEMP Acc', 'TEMP VAcc', 'TEMP Cons', 'TEMP SAcc', 'SPAT Acc', 'SPAT VAcc', 'SPAT Cons', 'SPAT SAcc']
[['GT5', 'ImgGrnd', '75.31', '56.53', '39.78', '51.14', '30.34', '17.02', '7.24', '34.73', '7.145', '16.93', '9.38', '49.21', '7.02'], ['GT5', 'VidGrnd', '75.42', '57.16', '41.59', '54.16', '31.22', '19.92', '8.83', '31.70', '8.67', '20.18', '11.39', '49.01', '8.64'], ['GT5', 'VOGNet', '[BOLD] 76.34', '[BOLD] 58.85', '[BOLD] 42.82', '[BOLD] 55.64', '[BOLD] 32.46', '[BOLD] 23.38', '[BOLD] 12.17', '[BOLD] 39.14', '[BOLD] 12.01', '[BOLD] 23.11', '[BOLD] 14.79', '[BOLD] 57.26', '[BOLD] 11.90'], ['P100', 'ImgGrnd', '[BOLD] 55.22', '[BOLD] 32.7', '26.29', '46.9', '15.4', '9.71', '3.59', '22.97', '3.49', '7.39', '4.02', '[BOLD] 37.15', '2.72'], ['P100', 'VidGrnd', '53.30', '30.90', '25.99', '47.07', '14.79', '10.56', '4.04', '[BOLD] 29.47', '3.98', '8.54', '4.33', '36.26', '3.09'], ['P100', 'VOGNet', '53.77', '31.9', '[BOLD] 29.32', '[BOLD] 51.2', '[BOLD] 17.17', '[BOLD] 12.68', '[BOLD] 5.37', '25.03', '[BOLD] 5.17', '[BOLD] 9.91', '[BOLD] 5.08', '34.93', '[BOLD] 3.59']]
VOGNet shows consistent improvements over the image and video grounding baselines across GT5 (5 proposal boxes per frame) and P100 (100 proposal boxes per frame). In practice, the SPAT and TEMP strategies, when applied to contrastive videos from ActivityNet, are effective proxies for obtaining naturally occurring contrastive examples from the web.
Video Object Grounding using Semantic Roles in Language Description
2003.10606
Table 7: Ablative study comparing gains from Multi-Modal Transformer (MTx) and Object Transformer (OTx) and Relative Position Encoding (RPE). L: Number of Layers, H: Number of Heads in the Transformer. Note that VOGNet = ImgGrnd +MTx(1L,3H) +OTx(1L,3H) + RPE
['SPAT', 'Acc', 'VAcc', 'Cons', 'SAcc']
[['ImgGrnd', '17.03', '9.71', '50.41', '7.14'], ['+OTx(1L, 3H)', '19.8', '10.91', '48.34', '8.45'], ['+RPE', '20.2', '11.66', '49.21', '9.28'], ['+MTx(1L, 3H)', '19.23', '10.49', '48.19', '8.14'], ['+RPE', '19.09', '10.46', '50.09', '8.23'], ['+OTx(3L, 6H)', '21.14', '12.1', '49.66', '9.52'], ['+OTx + MTx + RPE', '[BOLD] 23.53', '[BOLD] 14.22', '[BOLD] 56.5', '[BOLD] 11.58'], ['VOGNet', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['+MTx(3L,6H)', '24.24', '[BOLD] 15.36', '57.37', '12.52'], ['+OTx(3L,6H)', '[BOLD] 24.99', '7.33', '[BOLD] 66.29', '[BOLD] 14.47']]
Ablation Study: We observe that (i) self-attention over objects is an effective way to encode object relations across frames; (ii) the multi-modal transformer applied to individual frames gives modest gains but falls short of the object transformer due to the lack of temporal information; (iii) relative position encoding (RPE) boosts strict accuracy for both transformers; (iv) the object transformer with 3 layers and 6 heads performs worse than using a single multi-modal transformer, i.e., adding more layers and attention heads to the object transformer is not enough; (v) using both object and multi-modal transformers with more layers and more heads gives the best-performing model.
Video Object Grounding using Semantic Roles in Language Description
2003.10606
Table 3: Total number of lemmatized words (with at least 20 occurrence) in the train set of ASRL.
['V', 'Arg0', 'Arg1', 'Arg2', 'ArgM-LOC']
[['338', '93', '281', '114', '59']]
[Figures: distributions of Verb, Arg1, and Arg2 lemmas.] In comparison, Arg0 is highly unbalanced, as agents are mostly restricted to “people”. We also observe that “man” appears much more often than “woman”/“she”, which indicates gender bias in video curation or video description. Another interesting observation is that the “person” class dominates in each of the argument roles, which suggests that “person-person” interactions are more commonly described than “person-object” interactions.
Video Object Grounding using Semantic Roles in Language Description
2003.10606
Table 4: Comparing models trained with GT5 and P100. All models are tested in P100 setting.
['Model', 'Train', 'SVSQ Acc', 'SVSQ SAcc', 'SEP Acc', 'SEP VAcc', 'SEP SAcc', 'TEMP Acc', 'TEMP VAcc', 'TEMP Cons', 'TEMP SAcc', 'SPAT Acc', 'SPAT VAcc', 'SPAT Cons', 'SPAT SAcc']
[['ImgGrnd', 'GT5', '46.31', '24.83', '20.55', '47.49', '9.92', '8.06', '2.68', '25.35', '2.68', '4.64', '2.47', '34.17', '1.31'], ['ImgGrnd', 'P100', '55.22', '32.7', '26.29', '46.9', '15.4', '9.71', '3.59', '22.97', '3.49', '7.39', '4.02', '37.15', '2.72'], ['VidGrnd', 'GT5', '43.37', '22.64', '22.67', '49.6', '11.67', '9.35', '3.37', '28.47', '3.29', '5.1', '2.66', '33.6', '1.74'], ['VidGrnd', 'P100', '53.30', '30.90', '25.99', '47.07', '14.79', '10.56', '4.04', '29.47', '3.98', '8.54', '4.33', '36.26', '3.09'], ['VOGNet', 'GT5', '46.25', '24.61', '24.05', '51.07', '12.51', '9.72', '3.41', '26.34', '3.35', '6.21', '3.40', '39.81', '2.18'], ['VOGNet', 'P100', '53.77', '31.9', '29.32', '51.2', '17.17', '12.68', '5.37', '25.03', '5.17', '9.91', '5.08', '34.93', '3.59']]
GT5 models in the P100 setting: While testing in P100, for TEMP and SPAT we set the threshold for models trained in GT5 to 0.5, which is higher than the threshold used when testing in GT5 (0.2). This is expected, as a lower threshold would imply a higher chance of a false positive.
Video Object Grounding using Semantic Roles in Language Description
2003.10606
Table 5: Ablative study layers and heads of Transformers.
['SPAT', 'Acc', 'VAcc', 'Cons', 'SAcc']
[['ImgGrnd', '17.03', '9.71', '50.41', '7.14'], ['+OTx (1L, 3H)', '19.8', '10.91', '48.34', '8.45'], ['+OTx (2L, 3H)', '20.8', '11.38', '49.45', '9.17'], ['+OTx (2L, 6H)', '[BOLD] 21.16', '[BOLD] 12.2', '48.86', '[BOLD] 9.58'], ['+OTx (3L, 3H)', '20.68', '11.34', '48.66', '9.19'], ['+OTx (3L, 6H)', '21.14', '12.1', '[BOLD] 49.66', '9.52'], ['VOGNet', '23.53', '14.22', '56.5', '11.58'], ['+MTx (2L,3H)', '23.38', '14.78', '55.5', '11.9'], ['+MTx (2L,6H)', '23.96', '14.44', '55.5', '11.59'], ['+MTx (3L,3H)', '24.53', '14.84', '56.19', '12.37'], ['+MTx (3L,6H)', '24.24', '15.36', '57.37', '12.52'], ['+OTx(3L,6H)', '[BOLD] 24.99', '[BOLD] 17.33', '[BOLD] 66.29', '[BOLD] 14.47']]
Transformer Ablation: It is interesting to note that adding more heads helps more than adding more layers for the object transformer, while for the multi-modal transformer both more heads and more layers help. Finally, we find that simply adding more layers and heads to the object transformer is insufficient, as a multi-modal transformer with 1 layer and 3 heads performs significantly better than the object transformer with 3 layers and 6 heads.
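The relative position encoding (RPE) ablated above can be illustrated by the common trick of adding learned relative-distance embeddings to the attention logits. The sketch below is a generic formulation in the style of Shaw et al. (2018), not necessarily VOGNet's exact implementation; the function and argument names are assumptions.

```python
import numpy as np

def attention_logits_with_rpe(Q, K, rel_emb, max_dist):
    """Scaled dot-product attention logits with an additive relative-position
    term. rel_emb has shape (2 * max_dist + 1, d) and is indexed by the clipped
    pairwise distance between positions (generic sketch)."""
    n, d = Q.shape
    logits = Q @ K.T / np.sqrt(d)
    idx = np.arange(n)
    dist = np.clip(idx[None, :] - idx[:, None], -max_dist, max_dist) + max_dist
    # rel_emb[dist] has shape (n, n, d); add its dot product with each query.
    logits += np.einsum("id,ijd->ij", Q, rel_emb[dist]) / np.sqrt(d)
    return logits
```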
Tracking Amendments to Legislation and Other Political Texts with a Novel Minimum-Edit-Distance Algorithm: DocuToads.
1608.06459
(e) Transposition
['[EMPTY]', '[BOLD] A', '[BOLD] simple', 'minimum', 'edit', 'distance', 'algorithm']
[['minimum', '0', '0', '1', '0', '0', '0'], ['edit', '0', '0', '0', '2', '0', '0'], ['distance', '0', '0', '0', '0', '3', '0'], ['algorithm', '0', '0', '0', '0', '0', '4'], ['[BOLD] A', '[BOLD] 1', '0', '0', '0', '0', '0'], ['[BOLD] simple', '0', '[BOLD] 2', '0', '0', '0', '0']]
The most common source of disagreement in our sample of articles can be traced primarily to the addition of new articles. In such cases, new text was added between two previously existing articles in the reference document. DocuToads, however, has to record the additions at an index position in the reference text which will correspond to either the preceding or following article. There is in other words a random component which may cause DocuToads to disagree with human coders who can implement any single rule on the placement of new articles. This constitutes a form of negative serial correlation, a common concern found in time-series analysis. Users of DocuToads are advised to consider this issue if correlating article-level results with other article-level variables assigned by human coders. This source of disagreement has no effect on the document-level results.
Tracking Amendments to Legislation and Other Political Texts with a Novel Minimum-Edit-Distance Algorithm: DocuToads.
1608.06459
(a) Identical text sequences
['[EMPTY]', 'A', 'simple', 'minimum', 'edit', 'distance', 'algorithm']
[['A', '[BOLD] 1', '0', '0', '0', '0', '0'], ['simple', '0', '[BOLD] 2', '0', '0', '0', '0'], ['minimum', '0', '0', '[BOLD] 3', '0', '0', '0'], ['edit', '0', '0', '0', '[BOLD] 4', '0', '0'], ['distance', '0', '0', '0', '0', '[BOLD] 5', '0'], ['algorithm', '0', '0', '0', '0', '0', '[BOLD] 6']]
The most common source of disagreement in our sample of articles can be traced primarily to the addition of new articles. In such cases, new text was added between two previously existing articles in the reference document. DocuToads, however, has to record the additions at an index position in the reference text which will correspond to either the preceding or following article. There is in other words a random component which may cause DocuToads to disagree with human coders who can implement any single rule on the placement of new articles. This constitutes a form of negative serial correlation, a common concern found in time-series analysis. Users of DocuToads are advised to consider this issue if correlating article-level results with other article-level variables assigned by human coders. This source of disagreement has no effect on the document-level results.
Tracking Amendments to Legislation and Other Political Texts with a Novel Minimum-Edit-Distance Algorithm: DocuToads.
1608.06459
(b) Deletion
['[EMPTY]', 'A', '[BOLD] simple', 'minimum', 'edit', 'distance', 'algorithm']
[['A', '1', '[BOLD] 0', '0', '0', '0', '0'], ['minimum', '0', '[BOLD] 0', '1', '0', '0', '0'], ['edit', '0', '[BOLD] 0', '0', '2', '0', '0'], ['distance', '0', '[BOLD] 0', '0', '0', '3', '0'], ['algorithm', '0', '[BOLD] 0', '0', '0', '0', '4']]
The most common source of disagreement in our sample of articles can be traced primarily to the addition of new articles. In such cases, new text was added between two previously existing articles in the reference document. DocuToads, however, has to record the additions at an index position in the reference text which will correspond to either the preceding or following article. There is in other words a random component which may cause DocuToads to disagree with human coders who can implement any single rule on the placement of new articles. This constitutes a form of negative serial correlation, a common concern found in time-series analysis. Users of DocuToads are advised to consider this issue if correlating article-level results with other article-level variables assigned by human coders. This source of disagreement has no effect on the document-level results.
Tracking Amendments to Legislation and Other Political Texts with a Novel Minimum-Edit-Distance Algorithm: DocuToads.
1608.06459
(c) Addition
['[EMPTY]', 'A', 'simple', 'minimum', 'edit', 'distance', 'algorithm']
[['A', '1', '0', '0', '0', '0', '0'], ['simple', '0', '2', '0', '0', '0', '0'], ['[BOLD] new', '[BOLD] 0', '[BOLD] 0', '[BOLD] 0', '[BOLD] 0', '[BOLD] 0', '[BOLD] 0'], ['minimum', '0', '0', '1', '0', '0', '0'], ['edit', '0', '0', '0', '2', '0', '0'], ['distance', '0', '0', '0', '0', '3', '0'], ['algorithm', '0', '0', '0', '0', '0', '4']]
The most common source of disagreement in our sample of articles can be traced primarily to the addition of new articles. In such cases, new text was added between two previously existing articles in the reference document. DocuToads, however, has to record the additions at an index position in the reference text which will correspond to either the preceding or following article. There is in other words a random component which may cause DocuToads to disagree with human coders who can implement any single rule on the placement of new articles. This constitutes a form of negative serial correlation, a common concern found in time-series analysis. Users of DocuToads are advised to consider this issue if correlating article-level results with other article-level variables assigned by human coders. This source of disagreement has no effect on the document-level results.
Tracking Amendments to Legislation and Other Political Texts with a Novel Minimum-Edit-Distance Algorithm: DocuToads.
1608.06459
(d) Substitution
['[EMPTY]', 'A', '[BOLD] simple', 'minimum', 'edit', 'distance', 'algorithm']
[['A', '1', '[BOLD] 0', '0', '0', '0', '0'], ['[BOLD] new', '[BOLD] 0', '[BOLD] 0', '[BOLD] 0', '[BOLD] 0', '[BOLD] 0', '[BOLD] 0'], ['minimum', '0', '[BOLD] 0', '1', '0', '0', '0'], ['edit', '0', '[BOLD] 0', '0', '2', '0', '0'], ['distance', '0', '[BOLD] 0', '0', '0', '3', '0'], ['algorithm', '0', '[BOLD] 0', '0', '0', '0', '4']]
The most common source of disagreement in our sample of articles can be traced primarily to the addition of new articles. In such cases, new text was added between two previously existing articles in the reference document. DocuToads, however, has to record the additions at an index position in the reference text which will correspond to either the preceding or following article. There is in other words a random component which may cause DocuToads to disagree with human coders who can implement any single rule on the placement of new articles. This constitutes a form of negative serial correlation, a common concern found in time-series analysis. Users of DocuToads are advised to consider this issue if correlating article-level results with other article-level variables assigned by human coders. This source of disagreement has no effect on the document-level results.
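The matrices shown in panels (a)-(e) can be reproduced with a simple word-level match counter: each cell holds the length of the contiguous match ending at that word pair, so unchanged runs appear as rising diagonals while additions, deletions, substitutions, and transpositions break or shift them. The sketch below is a minimal illustration of that bookkeeping, not the full DocuToads minimum-edit-distance algorithm.

```python
def match_matrix(reference, amended):
    """Cell [i][j] counts the length of the contiguous word match ending at
    (amended[i], reference[j]); all-zero rows or columns mark edits, and a
    diagonal restarting at 1 marks relocated or newly placed text."""
    m = [[0] * len(reference) for _ in amended]
    for i, a_word in enumerate(amended):
        for j, r_word in enumerate(reference):
            if a_word == r_word:
                m[i][j] = m[i - 1][j - 1] + 1 if i > 0 and j > 0 else 1
    return m

ref = "A simple minimum edit distance algorithm".split()
amd = "A new minimum edit distance algorithm".split()  # substitution, as in panel (d)
matrix = match_matrix(ref, amd)
```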
Explaining Question Answering Models through Text Generation
2004.05569
Table 10: Results of End2End compared to our model (with GS and ST variants) on hypernym extraction.
['Model', 'Accuracy']
[['+GS +ST', '84.0'], ['+GS -ST', '61.0'], ['-GS +ST', '84.7'], ['-GS -ST', '54.7'], ['End2End', '86.5']]
We report results on the synthetic hypernym extraction task with and without the Gumbel-softmax trick and the ST estimator. We observe that the ST estimator is crucial even on such a simple task, which aligns with prior observations (havrylov2017emergence) that ST helps overcome the discrepancy between training time and test time. GS improved results without ST, but had little effect with ST.
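For reference, the Gumbel-softmax (GS) relaxation and the straight-through (ST) estimator ablated above are commonly combined as in the following PyTorch sketch; the temperature value is illustrative and this is not necessarily the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_st(logits, tau=1.0, hard=True):
    """Sample a (relaxed) one-hot vector from `logits`. With hard=True the
    forward pass is discrete while gradients flow through the soft sample
    (straight-through estimator)."""
    gumbels = -torch.empty_like(logits).exponential_().log()   # Gumbel(0, 1) noise
    y_soft = F.softmax((logits + gumbels) / tau, dim=-1)
    if not hard:
        return y_soft                                           # GS without ST
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(logits).scatter_(-1, index, 1.0)
    return y_hard - y_soft.detach() + y_soft                    # discrete forward, soft backward
```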
Explaining Question Answering Models through Text Generation
2004.05569
Table 3: Human-evaluation results for how reasonable hypotheses are (CSQA development set). Each rater determined whether a hypothesis is reasonable (1 point), somewhat reasonable (0.5 point) or not reasonable (0 points). The score is the average rating across raters and examples.
['Model', 'Score']
[['| [ITALIC] c|=3+KLD+REP', '0.72'], ['Top- [ITALIC] K=5 ST', '[BOLD] 0.74'], ['SupGen | [ITALIC] c|=3', '0.60'], ['SupGen | [ITALIC] c|=30', '0.55']]
Top-K=5 ST achieved the highest score of 0.74. While SupGen models produce more natural texts, they are judged to be less reasonable in the context of the question.
Capsule-Transformer for Neural Machine Translation
2004.14649
Table 2: Effect in encoder and decoder.
['[BOLD] #', '[ITALIC] Layers', '[BOLD] BLEU']
[['1', '-', '24.28'], ['2', '1-3', '24.64'], ['3', '4-6', '24.48'], ['4', '1-6', '24.87']]
Effect on Transformer Components: To evaluate the effect of the capsule routing SAN in the encoder and decoder, we perform an ablation study. Notably, the modified decoder still outperforms the baseline even when we remove the vertical routing part, which demonstrates the effectiveness of our model. Row 4 shows the complementarity of the encoder and decoder with capsule routing SAN.
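The capsule routing referred to here is based on dynamic routing by agreement. The sketch below is the generic procedure of Sabour et al. (2017) over pre-computed prediction vectors, not the paper's exact routing over self-attention outputs; the shapes and iteration count are assumptions for illustration.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule non-linearity: shrink short vectors toward zero, long ones toward unit length."""
    norm2 = np.sum(s * s, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def dynamic_routing(u_hat, n_iters=3):
    """u_hat: prediction vectors of shape (n_in, n_out, d). Returns output
    capsules v of shape (n_out, d) after routing by agreement."""
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                               # routing logits
    for _ in range(n_iters):
        e = np.exp(b - b.max(axis=1, keepdims=True))
        c = e / e.sum(axis=1, keepdims=True)                  # coupling coefficients
        s = np.einsum("ij,ijd->jd", c, u_hat)                 # weighted sum per output capsule
        v = squash(s)                                         # squashed output capsules
        b += np.einsum("ijd,jd->ij", u_hat, v)                # agreement update
    return v
```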
Capsule-Transformer for Neural Machine Translation
2004.14649
Table 1: Comparing with existing NMT systems on WMT17 Chinese-to-English (Zh-En) and WMT14 English-to-German (En-De) tasks.
['[BOLD] System', '[BOLD] Architecture', '[BOLD] Zh-En', '[BOLD] En-De']
[['[ITALIC] Existing NMT Systems', '[ITALIC] Existing NMT Systems', '[ITALIC] Existing NMT Systems', '[ITALIC] Existing NMT Systems'], ['Wu et al. ( 2016 )', 'RNN with 8 layers', '-', '26.30'], ['Gehring et al. ( 2017 )', 'CNN with 15 layers', '-', '26.36'], ['Vaswani et al. ( 2017 )', 'Transformer- [ITALIC] Base', '-', '27.30'], ['Vaswani et al. ( 2017 )', 'Transformer- [ITALIC] Big', '-', '28.40'], ['Hassan et al. ( 2018 )', 'Transformer- [ITALIC] Big', '24.20', '-'], ['Li et al. ( 2019 )', 'Transformer- [ITALIC] Base + Effective Aggregation', '24.68', '27.98'], ['Li et al. ( 2019 )', 'Transformer- [ITALIC] Big + Effective Aggregation', '25.00', '28.96'], ['[ITALIC] Our NMT Systems', '[ITALIC] Our NMT Systems', '[ITALIC] Our NMT Systems', '[ITALIC] Our NMT Systems'], ['[ITALIC] this work', 'Transformer- [ITALIC] Base', '24.28', '27.43'], ['[ITALIC] this work', 'capsule-Transformer- [ITALIC] Base', '25.02', '28.04'], ['[ITALIC] this work', 'Transformer- [ITALIC] Big', '24.71', '28.42'], ['[ITALIC] this work', 'capsule-Transformer- [ITALIC] Big', '25.14', '28.71']]
As shown in the table, our capsule-Transformer model consistently improves performance across both language pairs and model variants, which shows the effectiveness and generalization ability of our approach. For the WMT17 Zh-En task, our model outperforms all the models listed above; notably, even the capsule-Transformer-Base model achieves a higher score than the other Big models. For the WMT14 En-De task, our model outperforms the corresponding baseline but is inferior to the Big model proposed by Li et al. Considering that their model introduces over 33M new parameters (while for our Big model this number is 1.6K) and uses a much larger batch size than ours (4096 vs. 1024) during training, it is reasonable to believe that our model would achieve a more promising score under the same conditions.