Schema: each record below lists the following 21 fields, one value per line, in this order. String fields are summarized by their observed length range, class-valued string fields by their number of distinct values, and int32 fields by their observed value range.

table_id_paper: string, length 15 to 15
caption: string, length 14 to 1.88k
row_header_level: int32, values 1 to 9
row_headers: large_string, length 15 to 1.75k
column_header_level: int32, values 1 to 6
column_headers: large_string, length 7 to 1.01k
contents: large_string, length 18 to 2.36k
metrics_loc: string, 2 distinct values
metrics_type: large_string, length 5 to 532
target_entity: large_string, length 2 to 330
table_html_clean: large_string, length 274 to 7.88k
table_name: string, 9 distinct values
table_id: string, 9 distinct values
paper_id: string, length 8 to 8
page_no: int32, values 1 to 13
dir: string, 8 distinct values
description: large_string, length 103 to 3.8k
class_sentence: string, length 3 to 120
sentences: large_string, length 110 to 3.92k
header_mention: string, length 12 to 1.8k
valid: int32, values 0 to 1
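As a reading aid, here is a minimal sketch of how one such record could be inspected with the Hugging Face datasets library, assuming the corpus is hosted under a placeholder repository id (the real id is not given in this dump); the field list simply mirrors the schema summary above.

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual one for this corpus.
ds = load_dataset("someuser/scientific-table-to-text", split="train")

# Field names in the same order as the schema summary above.
FIELDS = [
    "table_id_paper", "caption", "row_header_level", "row_headers",
    "column_header_level", "column_headers", "contents", "metrics_loc",
    "metrics_type", "target_entity", "table_html_clean", "table_name",
    "table_id", "paper_id", "page_no", "dir", "description",
    "class_sentence", "sentences", "header_mention", "valid",
]

record = ds[0]
for name in FIELDS:
    value = str(record[name])
    # Truncate long fields (HTML, description, sentences) for readability.
    print(f"{name}: {value[:80]}{' ...' if len(value) > 80 else ''}")
```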
D16-1007table_2
Comparison of different position features.
2
[['Position Feature', 'plain text PF'], ['Position Feature', 'TPF1'], ['Position Feature', 'TPF2']]
1
[['F1']]
[['83.21'], ['83.99'], ['83.90']]
column
['F1']
['Position Feature']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Position Feature || plain text PF</td> <td>83.21</td> </tr> <tr> <td>Position Feature || TPF1</td> <td>83.99</td> </tr> <tr> <td>Position Feature || TPF2</td> <td>83.90</td> </tr> </tbody></table>
Table 2
table_2
D16-1007
8
emnlp2016
Table 2 summarizes the performances of proposed model when different position features are exploited. To concentrate on studying the effect of position features, we do not involve lexical features in this section. As the table shows, the position feature on plain text is still effective in our model and we accredit its satisfactory result to the dependency information and tree-based kernels. The F1 scores of tree-based position features are higher since they are “specially designed” for our model. Contrary to our expectation, the more fine-grained TPF2 does not yield a better performance than TPF1, and two kinds of TPF give fairly close results. One possible reason is that the influence of a more elaborated definition of relative position is minimal. As most sentences in this dataset are of short length and their dependency trees are not so complicated, replacing TPF1 with TPF2 usually brings little new structural information and thus results in a similar F1 score. However, though the performances of different position features are close, tree-based position feature is an essential part of our model. The F1 score is severely reduced to 75.22 when we remove the tree-based position feature in PECNN.
[1, 2, 1, 1, 1, 2, 2, 0, 0]
['Table 2 summarizes the performances of proposed model when different position features are exploited.', 'To concentrate on studying the effect of position features, we do not involve lexical features in this section.', 'As the table shows, the position feature on plain text is still effective in our model and we accredit its satisfactory result to the dependency information and tree-based kernels.', 'The F1 scores of tree-based position features are higher since they are “specially designed” for our model.', 'Contrary to our expectation, the more fine-grained TPF2 does not yield a better performance than TPF1, and two kinds of TPF give fairly close results.', 'One possible reason is that the influence of a more elaborated definition of relative position is minimal.', 'As most sentences in this dataset are of short length and their dependency trees are not so complicated, replacing TPF1 with TPF2 usually brings little new structural information and thus results in a similar F1 score.', 'However, though the performances of different position features are close, tree-based position feature is an essential part of our model.', 'The F1 score is severely reduced to 75.22 when we remove the tree-based position feature in PECNN.']
[None, None, ['plain text PF', 'TPF1', 'TPF2'], ['TPF1', 'TPF2'], ['TPF1', 'TPF2'], None, ['TPF1', 'TPF2'], None, None]
1
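In each record, row_headers, column_headers, and contents are stringified Python lists, and table_html_clean renders the same table with multi-level header paths joined by ' || '. Below is a minimal sketch of rebuilding that table as a pandas DataFrame, using the values of the record above (D16-1007, Table 2); the literal strings stand in for the corresponding fields of a loaded record.

```python
import ast

import pandas as pd

# Field values copied from the D16-1007 Table 2 record above.
row_headers = ast.literal_eval(
    "[['Position Feature', 'plain text PF'], "
    "['Position Feature', 'TPF1'], ['Position Feature', 'TPF2']]"
)
column_headers = ast.literal_eval("[['F1']]")
contents = ast.literal_eval("[['83.21'], ['83.99'], ['83.90']]")

# Flatten multi-level header paths with ' || ', matching table_html_clean.
index = [" || ".join(path) for path in row_headers]
columns = [" || ".join(path) for path in column_headers]

df = pd.DataFrame(contents, index=index, columns=columns)
print(df)  # one column 'F1'; rows 'Position Feature || plain text PF', '... || TPF1', '... || TPF2'
```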
D16-1010table_3
Pearson correlation values between human and model preferences for each construction and the verb-bias score; training on raw frequencies and 2 constructions. All correlations significant with p-value < 0.001, except the one value with *. Best result for each row is marked in boldface.
1
[['DO'], ['PD'], ['DO-PD']]
2
[['AB (Connectionist)', '-'], ['BFS (Bayesian)', 'Level 1'], ['BFS (Bayesian)', 'Level 2']]
[['0.06*', '0.23', '0.25'], ['0.33', '0.38', '0.32'], ['0.39', '0.53', '0.59']]
column
['correlation', 'correlation', 'correlation']
['AB (Connectionist)', 'BFS (Bayesian)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AB (Connectionist) || -</th> <th>BFS (Bayesian) || Level 1</th> <th>BFS (Bayesian) || Level 2</th> </tr> </thead> <tbody> <tr> <td>DO</td> <td>[0.06]</td> <td>0.23</td> <td>0.25</td> </tr> <tr> <td>PD</td> <td>0.33</td> <td>0.38</td> <td>0.32</td> </tr> <tr> <td>DO-PD</td> <td>0.39</td> <td>0.53</td> <td>0.59</td> </tr> </tbody></table>
Table 3
table_3
D16-1010
8
emnlp2016
Table 3 presents the correlation results for the two models’ preferences for each construction and the verb bias score. The AB model does not correlate with the judgments for the DO. However, the model produces significant positive correlations with the PD judgments and with the verb bias score. The BFS model, on the other hand, achieves significant positive correlations on all measures, by both levels. As in the earlier experiments, the best correlation with the verb bias score is produced by the second level of the BFS model, as Figure 3 demonstrates.
[1, 1, 1, 1, 1]
['Table 3 presents the correlation results for the two models’ preferences for each construction and the verb bias score.', 'The AB model does not correlate with the judgments for the DO.', 'However, the model produces significant positive correlations with the PD judgments and with the verb bias score.', 'The BFS model, on the other hand, achieves significant positive correlations on all measures, by both levels.', 'As in the earlier experiments, the best correlation with the verb bias score is produced by the second level of the BFS model.']
[['AB (Connectionist)', 'BFS (Bayesian)'], ['AB (Connectionist)', 'DO'], ['AB (Connectionist)', 'PD'], ['DO', 'PD', 'DO-PD', 'AB (Connectionist)', 'BFS (Bayesian)'], ['Level 2', 'BFS (Bayesian)']]
1
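class_sentence, sentences, and header_mention are parallel, sentence-level annotations of the description: one label, one sentence, and one list of referenced header cells (or None) per sentence. A small sketch that pairs them up, using the values of the record above (D16-1010, Table 3); the meaning of the 0/1/2 labels is not documented in this dump, so they are printed as-is.

```python
import ast

# Parallel per-sentence annotations from the D16-1010 Table 3 record above.
class_sentence = ast.literal_eval("[1, 1, 1, 1, 1]")
header_mention = ast.literal_eval(
    "[['AB (Connectionist)', 'BFS (Bayesian)'], ['AB (Connectionist)', 'DO'], "
    "['AB (Connectionist)', 'PD'], "
    "['DO', 'PD', 'DO-PD', 'AB (Connectionist)', 'BFS (Bayesian)'], "
    "['Level 2', 'BFS (Bayesian)']]"
)

for i, (label, mentions) in enumerate(zip(class_sentence, header_mention)):
    # 'mentions' lists the table header cells referenced by sentence i of the
    # description (None when no header cell is referenced).
    print(f"sentence {i}: label={label}, header mentions={mentions}")
```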
D16-1011table_4
Comparison between rationale models (middle and bottom rows) and the baselines using full title or body (top row).
1
[['Full title'], ['Full body'], ['Independent'], ['Independent'], ['Dependent'], ['Dependent']]
1
[['MAP (dev)'], ['MAP (test)'], ['% words']]
[['56.5', '60.0', '10.1'], ['54.2', '53.0', '89.9'], ['55.7', '53.6', '9.7'], ['56.3', '52.6', '19.7'], ['56.1', '54.6', '11.6'], ['56.5', '55.6', '32.8']]
column
['MAP (dev)', 'MAP (test)', '% words']
['Independent', 'Dependent']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MAP (dev)</th> <th>MAP (test)</th> <th>% words</th> </tr> </thead> <tbody> <tr> <td>Full title</td> <td>56.5</td> <td>60.0</td> <td>10.1</td> </tr> <tr> <td>Full body</td> <td>54.2</td> <td>53.0</td> <td>89.9</td> </tr> <tr> <td>Independent</td> <td>55.7</td> <td>53.6</td> <td>9.7</td> </tr> <tr> <td>Independent</td> <td>56.3</td> <td>52.6</td> <td>19.7</td> </tr> <tr> <td>Dependent</td> <td>56.1</td> <td>54.6</td> <td>11.6</td> </tr> <tr> <td>Dependent</td> <td>56.5</td> <td>55.6</td> <td>32.8</td> </tr> </tbody></table>
Table 4
table_4
D16-1011
8
emnlp2016
Results. Table 4 presents the results of our rationale model. We explore a range of hyper-parameter values. We include two runs for each version. The first one achieves the highest MAP on the development set, The second run is selected to compare the models when they use roughly 10% of question text (7 words on average). We also show the results of different runs in Figure 6. The rationales achieve the MAP up to 56.5%, getting close to using the titles. The models also outperform the baseline of using the noisy question bodies, indicating the the models’ capacity of extracting short but important fragments.
[2, 1, 2, 2, 1, 2, 1, 1]
['Results.', 'Table 4 presents the results of our rationale model.', 'We explore a range of hyper-parameter values.', 'We include two runs for each version.', 'The first one achieves the highest MAP on the development set, The second run is selected to compare the models when they use roughly 10% of question text (7 words on average).', 'We also show the results of different runs in Figure 6.', 'The rationales achieve the MAP up to 56.5%, getting close to using the titles.', 'The models also outperform the baseline of using the noisy question bodies, indicating the the models’ capacity of extracting short but important fragments.']
[None, None, None, None, ['Independent', 'Dependent', 'MAP (dev)'], None, ['Dependent', 'MAP (dev)', 'Full title'], ['Independent', 'Dependent', 'Full title', 'Full body']]
1
D16-1018table_2
Spearman’s rank correlation results on the SCWS dataset
4
[['Model', 'Huang', 'Similarity Metrics', 'AvgSim'], ['Model', 'Huang', 'Similarity Metrics', 'AvgSimC'], ['Model', 'Chen', 'Similarity Metrics', 'AvgSim'], ['Model', 'Chen', 'Similarity Metrics', 'AvgSimC'], ['Model', 'Neelakantan', 'Similarity Metrics', 'AvgSim'], ['Model', 'Neelakantan', 'Similarity Metrics', 'AvgSimC'], ['Model', 'Li', 'Similarity Metrics', '-'], ['Model', 'Tian', 'Similarity Metrics', 'Model_M'], ['Model', 'Tian', 'Similarity Metrics', 'Model_W'], ['Model', 'Bartunov', 'Similarity Metrics', 'AvgSimC'], ['Model', 'Ours + CBOW', 'Similarity Metrics', 'HardSim'], ['Model', 'Ours + CBOW', 'Similarity Metrics', 'SoftSim'], ['Model', 'Ours + Skip-gram', 'Similarity Metrics', 'HardSim'], ['Model', 'Ours + Skip-gram', 'Similarity Metrics', 'SoftSim']]
1
[['ρ × 100']]
[['62.8'], ['65.7'], ['66.2'], ['68.9'], ['67.2'], ['69.2'], ['69.7'], ['63.6'], ['65.4'], ['61.2'], ['64.3'], ['65.6'], ['64.9'], ['66.1']]
column
['correlation']
['Ours + CBOW', 'Ours + Skip-gram']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ρ × 100</th> </tr> </thead> <tbody> <tr> <td>Model || Huang || Similarity Metrics || AvgSim</td> <td>62.8</td> </tr> <tr> <td>Model || Huang || Similarity Metrics || AvgSimC</td> <td>65.7</td> </tr> <tr> <td>Model || Chen || Similarity Metrics || AvgSim</td> <td>66.2</td> </tr> <tr> <td>Model || Chen || Similarity Metrics || AvgSimC</td> <td>68.9</td> </tr> <tr> <td>Model || Neelakantan || Similarity Metrics || AvgSim</td> <td>67.2</td> </tr> <tr> <td>Model || Neelakantan || Similarity Metrics || AvgSimC</td> <td>69.2</td> </tr> <tr> <td>Model || Li || Similarity Metrics || -</td> <td>69.7</td> </tr> <tr> <td>Model || Tian || Similarity Metrics || Model_M</td> <td>63.6</td> </tr> <tr> <td>Model || Tian || Similarity Metrics || Model_W</td> <td>65.4</td> </tr> <tr> <td>Model || Bartunov || Similarity Metrics || AvgSimC</td> <td>61.2</td> </tr> <tr> <td>Model || Ours + CBOW || Similarity Metrics || HardSim</td> <td>64.3</td> </tr> <tr> <td>Model || Ours + CBOW || Similarity Metrics || SoftSim</td> <td>65.6</td> </tr> <tr> <td>Model || Ours + Skip-gram || Similarity Metrics || HardSim</td> <td>64.9</td> </tr> <tr> <td>Model || Ours + Skip-gram || Similarity Metrics || SoftSim</td> <td>66.1</td> </tr> </tbody></table>
Table 2
table_2
D16-1018
7
emnlp2016
Table 2 shows the results of our contextdependent sense embedding models on the SCWS dataset. In this table, ρ refers to the Spearman’s rank correlation and a higher value of ρ indicates better performance. The baseline performances are from Huang et al. (2012), Chen et al. (2014), Neelakantan et al. (2014), Li and Jurafsky (2015), Tian et al. (2014) and Bartunov et al. (2016). Here Ours + CBOW denotes our model with a CBOW based energy function and Ours + Skip-gram denotes our model with a Skip-gram based energy function. The results above the thick line are the models based on context clustering methods and the results below the thick line are the probabilistic models including ours. The similarity metrics of context clustering based models are AvgSim and AvgSimC proposed by Reisinger and Mooney (2010). Tian et al. (2014) propose two metrics Model_M and Model_W which are similar to our HardSim and SoftSim metrics. From Table 2, we can observe that our model outperforms the other probabilistic models and is not as good as the best context clustering based model. The context clustering based models are overall better than the probabilistic models on this task. A possible reason is that most context clustering based methods make use of more external knowledge than probabilistic models. However, note that Faruqui et al. (2016) presented several problems associated with the evaluation of word vectors on word similarity datasets and pointed out that the use of word similarity tasks for evaluation of word vectors is not sustainable. Bartunov et al. (2016) also suggest that SCWS should be of limited use for evaluating word representation models. Therefore, the results on this task shall be taken with caution. We consider that more realistic natural language processing tasks like word sense induction are better for evaluating sense embedding models.
[1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2]
['Table 2 shows the results of our contextdependent sense embedding models on the SCWS dataset.', 'In this table, ρ refers to the Spearman’s rank correlation and a higher value of ρ indicates better performance.', 'The baseline performances are from Huang et al. (2012), Chen et al. (2014), Neelakantan et al. (2014), Li and Jurafsky (2015), Tian et al. (2014) and Bartunov et al. (2016).', 'Here Ours + CBOW denotes our model with a CBOW based energy function and Ours + Skip-gram denotes our model with a Skip-gram based energy function.', 'The results above the thick line are the models based on context clustering methods and the results below the thick line are the probabilistic models including ours.', 'The similarity metrics of context clustering based models are AvgSim and AvgSimC proposed by Reisinger and Mooney (2010).', 'Tian et al. (2014) propose two metrics Model_M and Model_W which are similar to our HardSim and SoftSim metrics.', 'From Table 2, we can observe that our model outperforms the other probabilistic models and is not as good as the best context clustering based model.', 'The context clustering based models are overall better than the probabilistic models on this task.', 'A possible reason is that most context clustering based methods make use of more external knowledge than probabilistic models.', 'However, note that Faruqui et al. (2016) presented several problems associated with the evaluation of word vectors on word similarity datasets and pointed out that the use of word similarity tasks for evaluation of word vectors is not sustainable.', 'Bartunov et al. (2016) also suggest that SCWS should be of limited use for evaluating word representation models.', 'Therefore, the results on this task shall be taken with caution.', 'We consider that more realistic natural language processing tasks like word sense induction are better for evaluating sense embedding models.']
[None, None, ['Huang', 'Chen', 'Neelakantan', 'Li', 'Tian', 'Bartunov'], ['Ours + CBOW', 'Ours + Skip-gram'], None, ['AvgSim', 'AvgSimC'], ['Model_M', 'Model_W', 'HardSim', 'SoftSim'], ['Model'], ['Model'], ['Model'], None, None, None, None]
1
D16-1021table_4
Examples of attention weights in different hops for aspect level sentiment classification. The model only uses content attention. The hop columns show the weights of context words in each hop, indicated by values and gray color. This example shows the results of sentence “great food but the service was dreadful!” with “food” and “service” as the aspects.
1
[['great'], ['food'], ['but'], ['the'], ['was'], ['dreadful'], ['!']]
2
[['hop 1', 'service'], ['hop 1', 'food'], ['hop 2', 'service'], ['hop 2', 'food'], ['hop 3', 'service'], ['hop 3', 'food'], ['hop 4', 'service'], ['hop 4', 'food'], ['hop 5', 'service'], ['hop 5', 'food']]
[['0.20', '0.22', '0.15', '0.12', '0.14', '0.14', '0.13', '0.12', '0.23', '0.20'], ['0.11', '0.21', '0.07', '0.11', '0.08', '0.10', '0.12', '0.11', '0.06', '0.12'], ['0.20', '0.03', '0.10', '0.11', '0.10', '0.08', '0.12', '0.11', '0.13', '0.06'], ['0.03', '0.11', '0.07', '0.11', '0.08', '0.08', '0.12', '0.11', '0.06', '0.06'], ['0.08', '0.04', '0.07', '0.11', '0.08', '0.08', '0.12', '0.11', '0.06', '0.06'], ['0.20', '0.22', '0.45', '0.32', '0.45', '0.45', '0.28', '0.32', '0.40', '0.43'], ['0.19', '0.16', '0.08', '0.11', '0.08', '0.08', '0.12', '0.11', '0.07', '0.07']]
column
['weights', 'weights', 'weights', 'weights', 'weights', 'weights', 'weights', 'weights', 'weights', 'weights']
['service', 'food']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>hop 1 || service</th> <th>hop 1 || food</th> <th>hop 2 || service</th> <th>hop 2 || food</th> <th>hop 3 || service</th> <th>hop 3 || food</th> <th>hop 4 || service</th> <th>hop 4 || food</th> <th>hop 5 || service</th> <th>hop 5 || food</th> </tr> </thead> <tbody> <tr> <td>great</td> <td>0.20</td> <td>0.22</td> <td>0.15</td> <td>0.12</td> <td>0.14</td> <td>0.14</td> <td>0.13</td> <td>0.12</td> <td>0.23</td> <td>0.20</td> </tr> <tr> <td>food</td> <td>0.11</td> <td>0.21</td> <td>0.07</td> <td>0.11</td> <td>0.08</td> <td>0.10</td> <td>0.12</td> <td>0.11</td> <td>0.06</td> <td>0.12</td> </tr> <tr> <td>but</td> <td>0.20</td> <td>0.03</td> <td>0.10</td> <td>0.11</td> <td>0.10</td> <td>0.08</td> <td>0.12</td> <td>0.11</td> <td>0.13</td> <td>0.06</td> </tr> <tr> <td>the</td> <td>0.03</td> <td>0.11</td> <td>0.07</td> <td>0.11</td> <td>0.08</td> <td>0.08</td> <td>0.12</td> <td>0.11</td> <td>0.06</td> <td>0.06</td> </tr> <tr> <td>was</td> <td>0.08</td> <td>0.04</td> <td>0.07</td> <td>0.11</td> <td>0.08</td> <td>0.08</td> <td>0.12</td> <td>0.11</td> <td>0.06</td> <td>0.06</td> </tr> <tr> <td>dreadful</td> <td>0.20</td> <td>0.22</td> <td>0.45</td> <td>0.32</td> <td>0.45</td> <td>0.45</td> <td>0.28</td> <td>0.32</td> <td>0.40</td> <td>0.43</td> </tr> <tr> <td>!</td> <td>0.19</td> <td>0.16</td> <td>0.08</td> <td>0.11</td> <td>0.08</td> <td>0.08</td> <td>0.12</td> <td>0.11</td> <td>0.07</td> <td>0.07</td> </tr> </tbody></table>
Table 4
table_4
D16-1021
7
emnlp2016
From Table 4, we can find that in the first hop the context words “great”, “but” and “dreadful” contribute equally to the aspect “service”. While after the second hop, the weight of “dreadful” increases and finally the model correctly predict the polarity towards “service” as negative. This case shows the effects of multiple hops. However, for food aspect, the content-based model also gives a larger weight to “dreadful” when the target we focus on is “food”. As a result, the model incorrectly predicts the polarity towards “food” as negative. This phenomenon might be caused by the neglect of location information.
[1, 1, 1, 1, 1, 2]
['From Table 4, we can find that in the first hop the context words “great”, “but” and “dreadful” contribute equally to the aspect “service”.', 'While after the second hop, the weight of “dreadful” increases and finally the model correctly predict the polarity towards “service” as negative.', 'This case shows the effects of multiple hops.', 'However, for food aspect, the content-based model also gives a larger weight to “dreadful” when the target we focus on is “food”.', 'As a result, the model incorrectly predicts the polarity towards “food” as negative.', 'This phenomenon might be caused by the neglect of location information.']
[['great', 'but', 'dreadful', 'service'], None, ['dreadful', 'service'], ['dreadful', 'food'], ['food'], None]
1
D16-1025table_2
Overall results on the HE Set: BLEU, computed against the original reference translation, and TER, computed with respect to the targeted post-edit (HTER) and multiple postedits (mTER).
2
[['system', 'PBSY'], ['system', 'HPB'], ['system', 'SPB'], ['system', 'NMT']]
1
[['BLEU'], ['HTER'], ['mTER']]
[['25.3', '28.0', '21.8'], ['24.6', '29.9', '23.4'], ['25.8', '29.0', '22.7'], ['31.1*', '21.1*', '16.2*']]
column
['BLEU', 'HTER', 'mTER']
['NMT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>HTER</th> <th>mTER</th> </tr> </thead> <tbody> <tr> <td>system || PBSY</td> <td>25.3</td> <td>28.0</td> <td>21.8</td> </tr> <tr> <td>system || HPB</td> <td>24.6</td> <td>29.9</td> <td>23.4</td> </tr> <tr> <td>system || SPB</td> <td>25.8</td> <td>29.0</td> <td>22.7</td> </tr> <tr> <td>system || NMT</td> <td>31.1*</td> <td>21.1*</td> <td>16.2*</td> </tr> </tbody></table>
Table 2
table_2
D16-1025
4
emnlp2016
4 Overall Translation Quality. Table 2 presents overall system results according to HTER and mTER, as well as BLEU computed against the original TED Talks reference translation. We can see that NMT clearly outperforms all other approaches both in terms of BLEU and TER scores. Focusing on mTER results, the gain obtained by NMT over the second best system (PBSY) amounts to 26%. It is also worth noticing that mTER is considerably lower than HTER for each system. This reduction shows that exploiting all the available postedits as references for TER is a viable way to control and overcome post-editors variability, thus ensuring a more reliable and informative evaluation about the real overall performance of MT systems. For this reason, the two following analyses rely on mTER. In particular, we investigate how specific characteristics of input documents affect the system’s overall translation quality, focusing on (i) sentence length and (ii) the different talks composing the dataset.
[2, 1, 1, 1, 1, 2, 0, 0]
['4 Overall Translation Quality.', 'Table 2 presents overall system results according to HTER and mTER, as well as BLEU computed against the original TED Talks reference translation.', 'We can see that NMT clearly outperforms all other approaches both in terms of BLEU and TER scores.', 'Focusing on mTER results, the gain obtained by NMT over the second best system (PBSY) amounts to 26%.', 'It is also worth noticing that mTER is considerably lower than HTER for each system.', 'This reduction shows that exploiting all the available postedits as references for TER is a viable way to control and overcome post-editors variability, thus ensuring a more reliable and informative evaluation about the real overall performance of MT systems.', 'For this reason, the two following analyses rely on mTER.', 'In particular, we investigate how specific characteristics of input documents affect the system’s overall translation quality, focusing on (i) sentence length and (ii) the different talks composing the dataset.']
[None, ['system', 'HTER', 'mTER', 'BLEU'], ['NMT', 'BLEU', 'HTER', 'mTER'], ['mTER', 'NMT', 'PBSY'], ['mTER', 'HTER'], ['HTER', 'mTER'], None, None]
1
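row_header_level and column_header_level give the depth of the header paths; when a level is greater than 1, the paths can also be kept hierarchical rather than flattened with ' || '. A minimal sketch using the record above (D16-1025, Table 2), whose row headers have two levels, to build a pandas MultiIndex; again, the literals stand in for fields of a loaded record.

```python
import ast

import pandas as pd

# Field values copied from the D16-1025 Table 2 record above.
row_headers = ast.literal_eval(
    "[['system', 'PBSY'], ['system', 'HPB'], ['system', 'SPB'], ['system', 'NMT']]"
)
column_headers = ast.literal_eval("[['BLEU'], ['HTER'], ['mTER']]")
contents = ast.literal_eval(
    "[['25.3', '28.0', '21.8'], ['24.6', '29.9', '23.4'], "
    "['25.8', '29.0', '22.7'], ['31.1*', '21.1*', '16.2*']]"
)

# Two-level row headers become a hierarchical index instead of a joined string.
index = pd.MultiIndex.from_tuples([tuple(path) for path in row_headers])
columns = [path[0] for path in column_headers]
df = pd.DataFrame(contents, index=index, columns=columns)
print(df.loc[("system", "NMT")])  # BLEU 31.1*, HTER 21.1*, mTER 16.2*
```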
D16-1025table_4
Word reordering evaluation in terms of shift operations in HTER calculation and of KRS. For each system, the number of generated words, the number of shift errors and their corresponding percentages are reported.
2
[['system', 'PBSY'], ['system', 'HPB'], ['system', 'SPB'], ['system', 'NMT']]
1
[['#words'], ['#shifts'], ['%shifts'], ['KRS']]
[['11517', '354', '3.1', '84.6'], ['11417', '415', '3.6', '84.3'], ['11420', '398', '3.5', '84.5'], ['11284', '173', '1.5*', '88.3*']]
column
['#words', '#shifts', '%shifts', 'KRS']
['NMT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>#words</th> <th>#shifts</th> <th>%shifts</th> <th>KRS</th> </tr> </thead> <tbody> <tr> <td>system || PBSY</td> <td>11517</td> <td>354</td> <td>3.1</td> <td>84.6</td> </tr> <tr> <td>system || HPB</td> <td>11417</td> <td>415</td> <td>3.6</td> <td>84.3</td> </tr> <tr> <td>system || SPB</td> <td>11420</td> <td>398</td> <td>3.5</td> <td>84.5</td> </tr> <tr> <td>system || NMT</td> <td>11284</td> <td>173</td> <td>1.5*</td> <td>88.3*</td> </tr> </tbody></table>
Table 4
table_4
D16-1025
7
emnlp2016
5.3 Word order errors. To analyse reordering errors, we start by focusing on shift operations identified by the HTER metrics. The first three columns of Table 4 show, respectively: (i) the number of words generated by each system (ii) the number of shifts required to align each system output to the corresponding post-edit; and (iii) the corresponding percentage of shift errors. Notice that the shift error percentages are incorporated in the HTER scores reported in Table 2. We can see in Table 4 that shift errors in NMT translations are definitely less than in the other systems. The error reduction of NMT with respect to the second best system (PBSY) is about 50% (173 vs. 354). It should be recalled that these numbers only refer to shifts detected by HTER, that is (groups of) words of the MT output and corresponding post-edit that are identical but occurring in different positions. Words that had to be moved and modified at the same time (for instance replaced by a synonym or a morphological variant) are not counted in HTER shift figures, but are detected as substitution, insertion or deletion operations. To ensure that our reordering evaluation is not biased towards the alignment between the MT output and the post-edit performed by HTER, we run an additional assessment using KRS – Kendall Reordering Score (Birch et al., 2010) – which measures the similarity between the source-reference reorderings and the source-MT output reorderings. Being based on bilingual word alignment via the source sentence, KRS detects reordering errors also when post-edit and MT words are not identical. Also unlike TER, KRS is sensitive to the distance between the position of a word in the MT output and that in the reference. Looking at the last column of Table 4, we can say that our observations on HTER are confirmed by the KRS results: the reorderings performed by NMT are much more accurate than those performed by any PBMT system. Moreover, according to the approximate randomization test, KRS differences are statistically significant between NMT and all other systems, but not among the three PBMT systems.
[2, 2, 2, 1, 1, 2, 2, 2, 2, 2, 1, 1]
['5.3 Word order errors.', 'To analyse reordering errors, we start by focusing on shift operations identified by the HTER metrics.', 'The first three columns of Table 4 show, respectively: (i) the number of words generated by each system (ii) the number of shifts required to align each system output to the corresponding post-edit; and (iii) the corresponding percentage of shift errors. Notice that the shift error percentages are incorporated in the HTER scores reported in Table 2.', 'We can see in Table 4 that shift errors in NMT translations are definitely less than in the other systems.', 'The error reduction of NMT with respect to the second best system (PBSY) is about 50% (173 vs. 354).', 'It should be recalled that these numbers only refer to shifts detected by HTER, that is (groups of) words of the MT output and corresponding post-edit that are identical but occurring in different positions.', 'Words that had to be moved and modified at the same time (for instance replaced by a synonym or a morphological variant) are not counted in HTER shift figures, but are detected as substitution, insertion or deletion operations.', 'To ensure that our reordering evaluation is not biased towards the alignment between the MT output and the post-edit performed by HTER, we run an additional assessment using KRS – Kendall Reordering Score (Birch et al., 2010) – which measures the similarity between the source-reference reorderings and the source-MT output reorderings.', 'Being based on bilingual word alignment via the source sentence, KRS detects reordering errors also when post-edit and MT words are not identical.', 'Also unlike TER, KRS is sensitive to the distance between the position of a word in the MT output and that in the reference.', 'Looking at the last column of Table 4, we can say that our observations on HTER are confirmed by the KRS results: the reorderings performed by NMT are much more accurate than those performed by any PBMT system.', 'Moreover, according to the approximate randomization test, KRS differences are statistically significant between NMT and all other systems, but not among the three PBMT systems.']
[None, None, ['#words', '#shifts', '%shifts'], ['NMT', 'system', '#shifts', '%shifts'], ['NMT', 'PBSY'], None, None, ['KRS'], ['KRS'], ['KRS'], ['KRS', 'NMT'], ['KRS', 'NMT', 'system']]
1
D16-1032table_2
Human evaluation results on the generated and true recipes. Scores range in [1, 5].
2
[['Model', 'Attention'], ['Model', 'EncDec'], ['Model', 'NN'], ['Model', 'NN-Swap'], ['Model', 'Checklist'], ['Model', 'Checklist+'], ['Model', 'Truth']]
1
[['Syntax'], ['Ingredient use'], ['Follows goal']]
[['4.47', '3.02', '3.47'], ['4.58', '3.29', '3.61'], ['4.22', '3.02', '3.36'], ['4.11', '3.51', '3.78'], ['4.58', '3.80', '3.94'], ['4.39', '3.95', '4.10'], ['4.39', '4.03', '4.34']]
column
['Syntax', 'Ingridient use', 'Follows goal']
['Checklist', 'Checklist+']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Syntax</th> <th>Ingredient use</th> <th>Follows goal</th> </tr> </thead> <tbody> <tr> <td>Model || Attention</td> <td>4.47</td> <td>3.02</td> <td>3.47</td> </tr> <tr> <td>Model || EncDec</td> <td>4.58</td> <td>3.29</td> <td>3.61</td> </tr> <tr> <td>Model || NN</td> <td>4.22</td> <td>3.02</td> <td>3.36</td> </tr> <tr> <td>Model || NN-Swap</td> <td>4.11</td> <td>3.51</td> <td>3.78</td> </tr> <tr> <td>Model || Checklist</td> <td>4.58</td> <td>3.80</td> <td>3.94</td> </tr> <tr> <td>Model || Checklist+</td> <td>4.39</td> <td>3.95</td> <td>4.10</td> </tr> <tr> <td>Model || Truth</td> <td>4.39</td> <td>4.03</td> <td>4.34</td> </tr> </tbody></table>
Table 2
table_2
D16-1032
8
emnlp2016
Table 2 shows the averaged scores over the responses. The checklist models outperform all baselines in generating recipes that follow the provided agenda closely and accomplish the desired goal, where NN in particular often generates the wrong dish. Perhaps surprisingly, both the Attention and EncDec baselines and the Checklist model beat the true recipes in terms of having better grammar. This can partly be attributed to noise in the parsing of the true recipes, and partly because the neural models tend to generate shorter, simpler texts.
[1, 1, 1, 2]
['Table 2 shows the averaged scores over the responses.', 'The checklist models outperform all baselines in generating recipes that follow the provided agenda closely and accomplish the desired goal, where NN in particular often generates the wrong dish.', 'Perhaps surprisingly, both the Attention and EncDec baselines and the Checklist model beat the true recipes in terms of having better grammar.', 'This can partly be attributed to noise in the parsing of the true recipes, and partly because the neural models tend to generate shorter, simpler texts.']
[None, ['Checklist', 'Checklist+', 'NN', 'Model'], ['Attention', 'EncDec', 'Checklist'], None]
1
D16-1035table_4
Performance comparison with other state-of-the-art systems on RST-DT.
2
[['System', 'Joty et al. (2013)'], ['System', 'Ji and Eisenstein. (2014)'], ['System', 'Feng and Hirst. (2014)'], ['System', 'Li et al. (2014a)'], ['System', 'Li et al. (2014b)'], ['System', 'Heilman and Sagae. (2015)'], ['System', 'Ours'], ['System', 'Human']]
1
[['S'], ['N'], ['R']]
[['82.7', '68.4', '55.7'], ['82.1', '71.1', '61.6'], ['85.7', '71.0', '58.2'], ['84.0', '70.8', '58.6'], ['83.4', '73.8', '57.8'], ['83.5', '68.1', '55.1'], ['85.8', '71.1', '58.9'], ['88.7', '77.7', '65.8']]
column
['S', 'N', 'R']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>S</th> <th>N</th> <th>R</th> </tr> </thead> <tbody> <tr> <td>System || Joty et al. (2013)</td> <td>82.7</td> <td>68.4</td> <td>55.7</td> </tr> <tr> <td>System || Ji and Eisenstein. (2014)</td> <td>82.1</td> <td>71.1</td> <td>61.6</td> </tr> <tr> <td>System || Feng and Hirst. (2014)</td> <td>85.7</td> <td>71.0</td> <td>58.2</td> </tr> <tr> <td>System || Li et al. (2014a)</td> <td>84.0</td> <td>70.8</td> <td>58.6</td> </tr> <tr> <td>System || Li et al. (2014b)</td> <td>83.4</td> <td>73.8</td> <td>57.8</td> </tr> <tr> <td>System || Heilman and Sagae. (2015)</td> <td>83.5</td> <td>68.1</td> <td>55.1</td> </tr> <tr> <td>System || Ours</td> <td>85.8</td> <td>71.1</td> <td>58.9</td> </tr> <tr> <td>System || Human</td> <td>88.7</td> <td>77.7</td> <td>65.8</td> </tr> </tbody></table>
Table 4
table_4
D16-1035
8
emnlp2016
Table 4 shows the performance for our system and those systems. Our system achieves the best result in span and relatively lower performance in nucleus and relation identification comparing with the corresponding best results but still better than most systems. No system achieves the best result on all three metrics. To further show the effectiveness of the deep learning model itself without handcrafted features, we compare the performance between our model and the model proposed by Li et al. (2014a) without handcrafted features and the results are shown in Table 5. It shows our overall performance outperforms the model proposed by Li et al. (2014a) which illustrates our model is effective.
[1, 1, 1, 0, 0]
['Table 4 shows the performance for our system and those systems.', 'Our system achieves the best result in span and relatively lower performance in nucleus and relation identification comparing with the corresponding best results but still better than most systems.', 'No system achieves the best result on all three metrics.', 'To further show the effectiveness of the deep learning model itself without handcrafted features, we compare the performance between our model and the model proposed by Li et al. (2014a) without handcrafted features and the results are shown in Table 5.', 'It shows our overall performance outperforms the model proposed by Li et al. (2014a) which illustrates our model is effective.']
[None, ['Ours', 'System'], ['System', 'S', 'N', 'R'], None, None]
1
D16-1038table_7
Domain Transfer Results. We conduct the evaluation on TAC-KBP corpus with the split of newswire (NW) and discussion form (DF) documents. Here, we choose MSEP-EMD and MSEP-CorefESA+AUG+KNOW as the MSEP approach for event detection and co-reference respectively. We use SSED and SupervisedBase as the supervised modules for comparison. For event detection, we compare F1 scores of span plus type match while we report the average F1 scores for event co-reference.
3
[['Event Detection', 'In Domain', 'Train NW Test NW'], ['Event Detection', 'Out of Domain', 'Train DF Test NW'], ['Event Detection', 'In Domain', 'Train DF Test DF'], ['Event Detection', 'Out of Domain', 'Train NW Test DF'], ['Event Co-reference', 'In Domain', 'Train NW Test NW'], ['Event Co-reference', 'Out of Domain', 'Train DF Test NW'], ['Event Co-reference', 'In Domain', 'Train DF Test DF'], ['Event Co-reference', 'Out of Domain', 'Train NW Test DF']]
1
[['MSEP'], ['Supervised']]
[['58.5', '63.7'], ['55.1', '54.8'], ['57.9', '62.6'], ['52.8', '52.3'], ['73.2', '73.6'], ['71', '70.1'], ['68.6', '68.9'], ['67.9', '67']]
column
['F1', 'F1']
['MSEP']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MSEP</th> <th>Supervised</th> </tr> </thead> <tbody> <tr> <td>Event Detection || In Domain || Train NW Test NW</td> <td>58.5</td> <td>63.7</td> </tr> <tr> <td>Event Detection || Out of Domain || Train DF Test NW</td> <td>55.1</td> <td>54.8</td> </tr> <tr> <td>Event Detection || In Domain || Train DF Test DF</td> <td>57.9</td> <td>62.6</td> </tr> <tr> <td>Event Detection || Out of Domain || Train NW Test DF</td> <td>52.8</td> <td>52.3</td> </tr> <tr> <td>Event Co-reference || In Domain || Train NW Test NW</td> <td>73.2</td> <td>73.6</td> </tr> <tr> <td>Event Co-reference || Out of Domain || Train DF Test NW</td> <td>71</td> <td>70.1</td> </tr> <tr> <td>Event Co-reference || In Domain || Train DF Test DF</td> <td>68.6</td> <td>68.9</td> </tr> <tr> <td>Event Co-reference || Out of Domain || Train NW Test DF</td> <td>67.9</td> <td>67</td> </tr> </tbody></table>
Table 7
table_7
D16-1038
9
emnlp2016
4.7 Domain Transfer Evaluation. To demonstrate the superiority of the adaptation capabilities of the proposed MSEP system, we test its performance on new domains and compare with the supervised system. TAC-KBP corpus contains two genres: newswire (NW) and discussion forum (DF), and they have roughly equal number of documents. When trained on NW and tested on DF, supervised methods encounter out-of-domain situations. However, the MSEP system can adapt well. Table 7 shows that MSEP outperforms supervised methods in out-of-domain situations for both tasks. The differences are statistically significant with p < 0.05.
[2, 2, 2, 1, 1, 1, 2]
['4.7 Domain Transfer Evaluation.', 'To demonstrate the superiority of the adaptation capabilities of the proposed MSEP system, we test its performance on new domains and compare with the supervised system.', 'TAC-KBP corpus contains two genres: newswire (NW) and discussion forum (DF), and they have roughly equal number of documents.', 'When trained on NW and tested on DF, supervised methods encounter out-of-domain situations.', 'However, the MSEP system can adapt well.', 'Table 7 shows that MSEP outperforms supervised methods in out-of-domain situations for both tasks.', 'The differences are statistically significant with p < 0.05.']
[None, ['MSEP', 'Supervised'], None, ['Train NW Test DF'], ['MSEP'], ['MSEP', 'Supervised', 'Out of Domain', 'Event Detection', 'Event Co-reference'], None]
1
D16-1039table_2
Performance results for the BLESS and ENTAILMENT datasets.
4
[['Model', 'SVM+Yu', 'Dataset', 'BLESS'], ['Model', 'SVM+Word2Vecshort', 'Dataset', 'BLESS'], ['Model', 'SVM+Word2Vec', 'Dataset', 'BLESS'], ['Model', 'SVM+Ourshort', 'Dataset', 'BLESS'], ['Model', 'SVM+Our', 'Dataset', 'BLESS'], ['Model', 'SVM+Yu', 'Dataset', 'ENTAIL'], ['Model', 'SVM+Word2Vecshort', 'Dataset', 'ENTAIL'], ['Model', 'SVM+Word2Vec', 'Dataset', 'ENTAIL'], ['Model', 'SVM+Ourshort', 'Dataset', 'ENTAIL'], ['Model', 'SVM+Our', 'Dataset', 'ENTAIL']]
1
[['Accuracy']]
[['90.4%'], ['83.8%'], ['84.0%'], ['91.1%'], ['93.6%'], ['87.5%'], ['82.8%'], ['83.3%'], ['88.2%'], ['91.7%']]
column
['accuracy']
['SVM+Our']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Model || SVM+Yu || Dataset || BLESS</td> <td>90.4%</td> </tr> <tr> <td>Model || SVM+Word2Vecshort || Dataset || BLESS</td> <td>83.8%</td> </tr> <tr> <td>Model || SVM+Word2Vec || Dataset || BLESS</td> <td>84.0%</td> </tr> <tr> <td>Model || SVM+Ourshort || Dataset || BLESS</td> <td>91.1%</td> </tr> <tr> <td>Model || SVM+Our || Dataset || BLESS</td> <td>93.6%</td> </tr> <tr> <td>Model || SVM+Yu || Dataset || ENTAIL</td> <td>87.5%</td> </tr> <tr> <td>Model || SVM+Word2Vecshort || Dataset || ENTAIL</td> <td>82.8%</td> </tr> <tr> <td>Model || SVM+Word2Vec || Dataset || ENTAIL</td> <td>83.3%</td> </tr> <tr> <td>Model || SVM+Ourshort || Dataset || ENTAIL</td> <td>88.2%</td> </tr> <tr> <td>Model || SVM+Our || Dataset || ENTAIL</td> <td>91.7%</td> </tr> </tbody></table>
Table 2
table_2
D16-1039
7
emnlp2016
Table 2 shows the performance of the three supervised models in Experiment 1. Our approach achieves significantly better performance than Yu’s method and Word2Vec method in terms of accuracy (t-test, p-value < 0.05) for both BLESS and ENTAILMENT datasets. Specifically, our approach improves the average accuracy by 4% compared to Yu’s method, and by 9% compared to the Word2Vec method. The Word2Vec embeddings have the worst result because it is based only on co-occurrence based similarity, which is not effective for the classifier to accurately recognize all the taxonomic relations. Our approach performs better than Yu’s method and it shows that our approach can learn embeddings more effectively. Our approach encodes not only hypernym and hyponym terms but also the contextual information between them, while Yu’s method ignores the contextual information for taxonomic relation identification. Moreover, from the experimental results of SVM+Our and SVM+Ourshort, we can observe that the offset vector between hypernym and hyponym, which captures the contextual information, plays an important role in our approach as it helps to improve the performance in both datasets. However, the offset feature is not so important for the Word2Vec model. The reason is that the Word2Vec model is targeted for the analogy task rather than taxonomic relation identification.
[1, 1, 1, 1, 1, 2, 1, 2, 2]
['Table 2 shows the performance of the three supervised models in Experiment 1.', 'Our approach achieves significantly better performance than Yu’s method and Word2Vec method in terms of accuracy (t-test, p-value < 0.05) for both BLESS and ENTAILMENT datasets.', 'Specifically, our approach improves the average accuracy by 4% compared to Yu’s method, and by 9% compared to the Word2Vec method.', 'The Word2Vec embeddings have the worst result because it is based only on co-occurrence based similarity, which is not effective for the classifier to accurately recognize all the taxonomic relations.', 'Our approach performs better than Yu’s method and it shows that our approach can learn embeddings more effectively.', 'Our approach encodes not only hypernym and hyponym terms but also the contextual information between them, while Yu’s method ignores the contextual information for taxonomic relation identification.', 'Moreover, from the experimental results of SVM+Our and SVM+Ourshort, we can observe that the offset vector between hypernym and hyponym, which captures the contextual information, plays an important role in our approach as it helps to improve the performance in both datasets.', 'However, the offset feature is not so important for the Word2Vec model.', 'The reason is that the Word2Vec model is targeted for the analogy task rather than taxonomic relation identification.']
[None, ['SVM+Ourshort', 'SVM+Our', 'BLESS', 'ENTAIL', 'Accuracy'], ['SVM+Ourshort', 'SVM+Our', 'SVM+Yu', 'SVM+Word2Vecshort', 'SVM+Word2Vec'], ['SVM+Word2Vecshort', 'SVM+Word2Vec'], ['SVM+Ourshort', 'SVM+Our', 'SVM+Yu'], ['SVM+Yu'], ['SVM+Ourshort', 'SVM+Our'], ['SVM+Word2Vecshort', 'SVM+Word2Vec'], ['SVM+Word2Vecshort', 'SVM+Word2Vec']]
1
D16-1039table_3
Performance results for the general domain datasets when using one domain for training and another domain for testing.
6
[['Model', 'SVM+Yu', 'Training', 'BLESS', 'Testing', 'ENTAIL'], ['Model', 'SVM+Word2Vecshort', 'Training', 'BLESS', 'Testing', 'ENTAIL'], ['Model', 'SVM+Word2Vec', 'Training', 'BLESS', 'Testing', 'ENTAIL'], ['Model', 'SVM+Ourshort', 'Training', 'BLESS', 'Testing', 'ENTAIL'], ['Model', 'SVM+Our', 'Training', 'BLESS', 'Testing', 'ENTAIL'], ['Model', 'SVM+Yu', 'Training', 'ENTAIL', 'Testing', 'BLESS'], ['Model', 'SVM+Word2Vecshort', 'Training', 'ENTAIL', 'Testing', 'BLESS'], ['Model', 'SVM+Word2Vec', 'Training', 'ENTAIL', 'Testing', 'BLESS'], ['Model', 'SVM+Ourshort', 'Training', 'ENTAIL', 'Testing', 'BLESS'], ['Model', 'SVM+Our', 'Training', 'ENTAIL', 'Testing', 'BLESS']]
1
[['Accuracy']]
[['83.7%'], ['76.5%'], ['77.1%'], ['85.8%'], ['89.4%'], ['87.1%'], ['78.0%'], ['78.9%'], ['87.1%'], ['90.6%']]
column
['accuracy']
['SVM+Our']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Model || SVM+Yu || Training || BLESS || Testing || ENTAIL</td> <td>83.7%</td> </tr> <tr> <td>Model || SVM+Word2Vecshort || Training || BLESS || Testing || ENTAIL</td> <td>76.5%</td> </tr> <tr> <td>Model || SVM+Word2Vec || Training || BLESS || Testing || ENTAIL</td> <td>77.1%</td> </tr> <tr> <td>Model || SVM+Ourshort || Training || BLESS || Testing || ENTAIL</td> <td>85.8%</td> </tr> <tr> <td>Model || SVM+Our || Training || BLESS || Testing || ENTAIL</td> <td>89.4%</td> </tr> <tr> <td>Model || SVM+Yu || Training || ENTAIL || Testing || BLESS</td> <td>87.1%</td> </tr> <tr> <td>Model || SVM+Word2Vecshort || Training || ENTAIL || Testing || BLESS</td> <td>78.0%</td> </tr> <tr> <td>Model || SVM+Word2Vec || Training || ENTAIL || Testing || BLESS</td> <td>78.9%</td> </tr> <tr> <td>Model || SVM+Ourshort || Training || ENTAIL || Testing || BLESS</td> <td>87.1%</td> </tr> <tr> <td>Model || SVM+Our || Training || ENTAIL || Testing || BLESS</td> <td>90.6%</td> </tr> </tbody></table>
Table 3
table_3
D16-1039
7
emnlp2016
Experiment 2. This experiment aims to evaluate the generalization capability of our extracted term embeddings. In the experiment, we train the classifier on the BLESS dataset, test it on the ENTAILMENT dataset and vice versa. Similarly, we exclude from the training set any pair of terms that has one term appearing in the testing set. The experimental results in Table 3 show that our term embedding learning approach performs better than other methods in accuracy. It also shows that the taxonomic properties identified by our term embedding learning approach have great generalization capability (i.e. less dependent on the training set), and can be used generically for representing taxonomic relations.
[2, 2, 2, 2, 1, 1]
['Experiment 2.', 'This experiment aims to evaluate the generalization capability of our extracted term embeddings.', 'In the experiment, we train the classifier on the BLESS dataset, test it on the ENTAILMENT dataset and vice versa.', 'Similarly, we exclude from the training set any pair of terms that has one term appearing in the testing set.', 'The experimental results in Table 3 show that our term embedding learning approach performs better than other methods in accuracy.', 'It also shows that the taxonomic properties identified by our term embedding learning approach have great generalization capability (i.e. less dependent on the training set), and can be used generically for representing taxonomic relations.']
[None, None, ['BLESS', 'ENTAIL'], None, ['SVM+Our', 'Model'], ['SVM+Our']]
1
D16-1043table_5
Performance on common coverage subsets of the datasets (MEN* and SimLex*).
3
[['Source', 'Wikipedia', 'Text'], ['Source', 'Google', 'Visual'], ['Source', 'Google', 'MM'], ['Source', 'Bing', 'Visual'], ['Source', 'Bing', 'MM'], ['Source', 'Flickr', 'Visual'], ['Source', 'Flickr', 'MM'], ['Source', 'ImageNet', 'Visual'], ['Source', 'ImageNet', 'MM'], ['Source', 'ESPGame', 'Visual'], ['Source', 'ESPGame', 'MM']]
6
[['Arch.', 'AlexNet', 'Agg.', 'Mean', 'Type/Eval', 'SL'], ['Arch.', 'AlexNet', 'Agg.', 'Mean', 'Type/Eval', 'MEN'], ['Arch.', 'AlexNet', 'Agg.', 'Max', 'Type/Eval', 'SL'], ['Arch.', 'AlexNet', 'Agg.', 'Max', 'Type/Eval', 'MEN'], ['Arch.', 'GoogLeNet', 'Agg.', 'Mean', 'Type/Eval', 'SL'], ['Arch.', 'GoogLeNet', 'Agg.', 'Mean', 'Type/Eval', 'MEN'], ['Arch.', 'GoogLeNet', 'Agg.', 'Max', 'Type/Eval', 'SL'], ['Arch.', 'GoogLeNet', 'Agg.', 'Max', 'Type/Eval', 'MEN'], ['Arch.', 'VGGNet', 'Agg.', 'Mean', 'Type/Eval', 'SL'], ['Arch.', 'VGGNet', 'Agg.', 'Mean', 'Type/Eval', 'MEN'], ['Arch.', 'VGGNet', 'Agg.', 'Max', 'Type/Eval', 'SL'], ['Arch.', 'VGGNet', 'Agg.', 'Max', 'Type/Eval', 'MEN']]
[['0.248', '0.654', '0.248', '0.654', '0.248', '0.654', '0.248', '0.654', '0.248', '0.654', '0.248', '0.654'], ['0.406', '0.549', '0.402', '0.552', '0.420', '0.570', '0.434', '0.579', '0.430', '0.576', '0.406', '0.560'], ['0.366', '0.691', '0.344', '0.693', '0.366', '0.701', '0.342', '0.699', '0.378', '0.701', '0.341', '0.693'], ['0.431', '0.613', '0.425', '0.601', '0.410', '0.612', '0.414', '0.603', '0.400', '0.611', '0.398', '0.569'], ['0.384', '0.715', '0.355', '0.708', '0.374', '0.725', '0.343', '0.712', '0.363', '0.720', '0.340', '0.705'], ['0.382', '0.577', '0.371', '0.544', '0.378', '0.547', '0.354', '0.518', '0.378', '0.567', '0.340', '0.511'], ['0.372', '0.725', '0.344', '0.712', '0.367', '0.728', '0.336', '0.716', '0.370', '0.726', '0.330', '0.711'], ['0.316', '0.560', '0.316', '0.560', '0.347', '0.538', '0.423', '0.600', '0.412', '0.581', '0.413', '0.574'], ['0.348', '0.711', '0.348', '0.711', '0.364', '0.717', '0.394', '0.729', '0.418', '0.724', '0.405', '0.721'], ['0.037', '0.431', '0.039', '0.347', '0.104', '0.501', '0.125', '0.438', '0.188', '0.514', '0.125', '0.460'], ['0.179', '0.666', '0.147', '0.651', '0.224', '0.692', '0.226', '0.683', '0.268', '0.697', '0.222', '0.688']]
column
['similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity']
['VGGNet']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Arch. || AlexNet || Agg. || Mean || Type/Eval || SL</th> <th>Arch. || AlexNet || Agg. || Mean || Type/Eval || MEN</th> <th>Arch. || AlexNet || Agg. || Max || Type/Eval || SL</th> <th>Arch. || AlexNet || Agg. || Max || Type/Eval || MEN</th> <th>Arch. || GoogLeNet || Agg. || Mean || Type/Eval || SL</th> <th>Arch. || GoogLeNet || Agg. || Mean || Type/Eval || MEN</th> <th>Arch. || GoogLeNet || Agg. || Max || Type/Eval || SL</th> <th>Arch. || GoogLeNet || Agg. || Max || Type/Eval || MEN</th> <th>Arch. || VGGNet || Agg. || Mean || Type/Eval || SL</th> <th>Arch. || VGGNet || Agg. || Mean || Type/Eval || MEN</th> <th>Arch. || VGGNet || Agg. || Max || Type/Eval || SL</th> <th>Arch. || VGGNet || Agg. || Max || Type/Eval || MEN</th> </tr> </thead> <tbody> <tr> <td>Source || Wikipedia || Text</td> <td>0.248</td> <td>0.654</td> <td>0.248</td> <td>0.654</td> <td>0.248</td> <td>0.654</td> <td>0.248</td> <td>0.654</td> <td>0.248</td> <td>0.654</td> <td>0.248</td> <td>0.654</td> </tr> <tr> <td>Source || Google || Visual</td> <td>0.406</td> <td>0.549</td> <td>0.402</td> <td>0.552</td> <td>0.420</td> <td>0.570</td> <td>0.434</td> <td>0.579</td> <td>0.430</td> <td>0.576</td> <td>0.406</td> <td>0.560</td> </tr> <tr> <td>Source || Google || MM</td> <td>0.366</td> <td>0.691</td> <td>0.344</td> <td>0.693</td> <td>0.366</td> <td>0.701</td> <td>0.342</td> <td>0.699</td> <td>0.378</td> <td>0.701</td> <td>0.341</td> <td>0.693</td> </tr> <tr> <td>Source || Bing || Visual</td> <td>0.431</td> <td>0.613</td> <td>0.425</td> <td>0.601</td> <td>0.410</td> <td>0.612</td> <td>0.414</td> <td>0.603</td> <td>0.400</td> <td>0.611</td> <td>0.398</td> <td>0.569</td> </tr> <tr> <td>Source || Bing || MM</td> <td>0.384</td> <td>0.715</td> <td>0.355</td> <td>0.708</td> <td>0.374</td> <td>0.725</td> <td>0.343</td> <td>0.712</td> <td>0.363</td> <td>0.720</td> <td>0.340</td> <td>0.705</td> </tr> <tr> <td>Source || Flickr || Visual</td> <td>0.382</td> <td>0.577</td> <td>0.371</td> <td>0.544</td> <td>0.378</td> <td>0.547</td> <td>0.354</td> <td>0.518</td> <td>0.378</td> <td>0.567</td> <td>0.340</td> <td>0.511</td> </tr> <tr> <td>Source || Flickr || MM</td> <td>0.372</td> <td>0.725</td> <td>0.344</td> <td>0.712</td> <td>0.367</td> <td>0.728</td> <td>0.336</td> <td>0.716</td> <td>0.370</td> <td>0.726</td> <td>0.330</td> <td>0.711</td> </tr> <tr> <td>Source || ImageNet || Visual</td> <td>0.316</td> <td>0.560</td> <td>0.316</td> <td>0.560</td> <td>0.347</td> <td>0.538</td> <td>0.423</td> <td>0.600</td> <td>0.412</td> <td>0.581</td> <td>0.413</td> <td>0.574</td> </tr> <tr> <td>Source || ImageNet || MM</td> <td>0.348</td> <td>0.711</td> <td>0.348</td> <td>0.711</td> <td>0.364</td> <td>0.717</td> <td>0.394</td> <td>0.729</td> <td>0.418</td> <td>0.724</td> <td>0.405</td> <td>0.721</td> </tr> <tr> <td>Source || ESPGame || Visual</td> <td>0.037</td> <td>0.431</td> <td>0.039</td> <td>0.347</td> <td>0.104</td> <td>0.501</td> <td>0.125</td> <td>0.438</td> <td>0.188</td> <td>0.514</td> <td>0.125</td> <td>0.460</td> </tr> <tr> <td>Source || ESPGame || MM</td> <td>0.179</td> <td>0.666</td> <td>0.147</td> <td>0.651</td> <td>0.224</td> <td>0.692</td> <td>0.226</td> <td>0.683</td> <td>0.268</td> <td>0.697</td> <td>0.222</td> <td>0.688</td> </tr> </tbody></table>
Table 5
table_5
D16-1043
6
emnlp2016
5.2 Common subset comparison. Table 5 shows the results on the common subset of the evaluation datasets, where all word pairs have images in each of the data sources. First, note the same patterns as before: multi-modal representations perform better than linguistic ones. Even for the poorly performing ESP Game dataset, the VGGNet representations perform better on both SimLex and MEN (bottom right of the table). Visual representations from Google, Bing, Flickr and ImageNet all perform much better than ESP Game on this common covered subset. In a sense, the fullcoverage datasets were “punished” for their ability to return images for abstract words in the previous experiment: on this subset, which is more concrete, the search engines do much better. To a certain extent, including linguistic information is actually detrimental to performance, with multi-modal performing worse than purely visual. Again, we see the marked improvement with VGGNet for ImageNet, while Google, Bing and Flickr all do very well, regardless of the architecture.
[2, 1, 1, 1, 1, 1, 1, 1]
['5.2 Common subset comparison.', 'Table 5 shows the results on the common subset of the evaluation datasets, where all word pairs have images in each of the data sources.', 'First, note the same patterns as before: multi-modal representations perform better than linguistic ones.', 'Even for the poorly performing ESP Game dataset, the VGGNet representations perform better on both SimLex and MEN (bottom right of the table).', 'Visual representations from Google, Bing, Flickr and ImageNet all perform much better than ESP Game on this common covered subset.', 'In a sense, the fullcoverage datasets were “punished” for their ability to return images for abstract words in the previous experiment: on this subset, which is more concrete, the search engines do much better.', 'To a certain extent, including linguistic information is actually detrimental to performance, with multi-modal performing worse than purely visual.', 'Again, we see the marked improvement with VGGNet for ImageNet, while Google, Bing and Flickr all do very well, regardless of the architecture.']
[None, None, None, ['ESPGame', 'VGGNet', 'SL', 'MEN'], ['Google', 'Bing', 'Flickr', 'ImageNet', 'ESPGame'], None, None, ['VGGNet', 'ImageNet', 'Google', 'Bing', 'Flickr']]
1
D16-1044table_1
Comparison of multimodal pooling methods. Models are trained on the VQA train split and tested on test-dev.
2
[['Method', 'Element-wise Sum'], ['Method', 'Concatenation'], ['Method', 'Concatenation + FC'], ['Method', 'Concatenation + FC + FC'], ['Method', 'Element-wise Product'], ['Method', 'Element-wise Product + FC'], ['Method', 'Element-wise Product + FC + FC'], ['Method', 'MCB (2048 × 2048 → 16K)'], ['Method', 'Full Bilinear (128 × 128 → 16K)'], ['Method', 'MCB (128 × 128 → 4K)'], ['Method', 'Element-wise Product with VGG-19'], ['Method', 'MCB (d = 16K) with VGG-19'], ['Method', 'Concatenation + FC with Attention'], ['Method', 'MCB (d = 16K) with Attention']]
1
[['Accuracy']]
[['56.50'], ['57.49'], ['58.40'], ['57.10'], ['58.57'], ['56.44'], ['57.88'], ['59.83'], ['58.46'], ['58.69'], ['55.97'], ['57.05'], ['58.36'], ['62.50']]
column
['accuracy']
['MCB (2048 × 2048 → 16K)', 'MCB (128 × 128 → 4K)', 'MCB (d = 16K) with VGG-19', 'MCB (d = 16K) with Attention']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Method || Element-wise Sum</td> <td>56.50</td> </tr> <tr> <td>Method || Concatenation</td> <td>57.49</td> </tr> <tr> <td>Method || Concatenation + FC</td> <td>58.40</td> </tr> <tr> <td>Method || Concatenation + FC + FC</td> <td>57.10</td> </tr> <tr> <td>Method || Element-wise Product</td> <td>58.57</td> </tr> <tr> <td>Method || Element-wise Product + FC</td> <td>56.44</td> </tr> <tr> <td>Method || Element-wise Product + FC + FC</td> <td>57.88</td> </tr> <tr> <td>Method || MCB (2048 × 2048 → 16K)</td> <td>59.83</td> </tr> <tr> <td>Method || Full Bilinear (128 × 128 → 16K)</td> <td>58.46</td> </tr> <tr> <td>Method || MCB (128 × 128 → 4K)</td> <td>58.69</td> </tr> <tr> <td>Method || Element-wise Product with VGG-19</td> <td>55.97</td> </tr> <tr> <td>Method || MCB (d = 16K) with VGG-19</td> <td>57.05</td> </tr> <tr> <td>Method || Concatenation + FC with Attention</td> <td>58.36</td> </tr> <tr> <td>Method || MCB (d = 16K) with Attention</td> <td>62.50</td> </tr> </tbody></table>
Table 1
table_1
D16-1044
6
emnlp2016
4.3 Ablation Results. We compare the performance of non-bilinear and bilinear pooling methods in Table 1. We see that MCB pooling outperforms all non-bilinear pooling methods, such as eltwise sum, concatenation, and eltwise product. One could argue that the compact bilinear method simply has more parameters than the non-bilinear pooling methods, which contributes to its performance. We compensated for this by stacking fully connected layers (with 4096 units per layer, ReLU activation, and dropout) after the non-bilinear pooling methods to increase their number of parameters. However, even with similar parameter budgets, nonbilinear methods could not achieve the same accuracy as the MCB method. For example, the “Concatenation + FC + FC” pooling method has approximately 40962 + 40962 + 4096 × 3000 ≈ 46 million parameters, which matches the 48 million parameters available in MCB with d = 16000. However, the performance of the “Concatenation + FC + FC” method is only 57.10% compared to MCB’s 59.83%. Section 2 in Table 1 also shows that compact bilinear pooling has no impact on accuracy compared to full bilinear pooling. Section 3 in Table 1 demonstrates that the MCB brings improvements regardless of the image CNN used. We primarily use ResNet152 in this paper, but MCB also improves performance if VGG-19 is used. Section 4 in Table 1 shows that our soft attention model works best with MCB pooling. In fact, attending to the Concatenation + FC layer has the same performance as not using attention at all, while attending to the MCB layer improves performance by 2.67 points.
[2, 1, 1, 1, 2, 1, 2, 2, 1, 1, 1, 1]
['4.3 Ablation Results.', 'We compare the performance of non-bilinear and bilinear pooling methods in Table 1.', 'We see that MCB pooling outperforms all non-bilinear pooling methods, such as eltwise sum, concatenation, and eltwise product.', 'One could argue that the compact bilinear method simply has more parameters than the non-bilinear pooling methods, which contributes to its performance.', 'We compensated for this by stacking fully connected layers (with 4096 units per layer, ReLU activation, and dropout) after the non-bilinear pooling methods to increase their number of parameters.', 'However, even with similar parameter budgets, non-bilinear methods could not achieve the same accuracy as the MCB method.', 'For example, the “Concatenation + FC + FC” pooling method has approximately 4096² + 4096² + 4096 × 3000 ≈ 46 million parameters, which matches the 48 million parameters available in MCB with d = 16000.', 'However, the performance of the “Concatenation + FC + FC” method is only 57.10% compared to MCB’s 59.83%.', 'Section 2 in Table 1 also shows that compact bilinear pooling has no impact on accuracy compared to full bilinear pooling.', 'Section 3 in Table 1 demonstrates that the MCB brings improvements regardless of the image CNN used.', 'We primarily use ResNet152 in this paper, but MCB also improves performance if VGG-19 is used. Section 4 in Table 1 shows that our soft attention model works best with MCB pooling.', 'In fact, attending to the Concatenation + FC layer has the same performance as not using attention at all, while attending to the MCB layer improves performance by 2.67 points.']
[None, None, ['MCB (2048 × 2048 → 16K)', 'MCB (128 × 128 → 4K)', 'MCB (d = 16K) with VGG-19', 'MCB (d = 16K) with Attention', 'Method'], None, None, ['MCB (2048 × 2048 → 16K)'], ['MCB (2048 × 2048 → 16K)'], ['MCB (2048 × 2048 → 16K)'], None, ['MCB (d = 16K) with VGG-19'], ['MCB (d = 16K) with Attention'], ['MCB (d = 16K) with Attention']]
1
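The parameter counts quoted in the record above (4096² + 4096² + 4096 × 3000 ≈ 46M for “Concatenation + FC + FC” versus roughly 48M for MCB with d = 16000) can be sanity-checked with a few lines of arithmetic. This is only an illustrative sketch; the classifier size of 3000 is inferred from the 4096 × 3000 term in the quoted text, not taken from the paper's code.

```python
# Rough parameter-count check for the figures quoted in the record above (a sketch,
# not code from the paper). The 3000 factor is assumed to be the answer vocabulary
# size implied by the "4096 x 3000" term in the description.
fc_units = 4096      # units per stacked fully connected layer
num_answers = 3000   # assumed classifier output size
mcb_d = 16000        # MCB output dimension d

concat_fc_fc = fc_units**2 + fc_units**2 + fc_units * num_answers
mcb_classifier = mcb_d * num_answers

print(f"Concatenation + FC + FC: ~{concat_fc_fc / 1e6:.1f}M parameters")   # ~45.8M
print(f"MCB (d = 16K) classifier: ~{mcb_classifier / 1e6:.1f}M parameters")  # ~48.0M
```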
D16-1045table_1
Overall Synthetic Data Results. A and B denote an aggressive and a balanced approach, respectively. Acc. (std) is the average and the standard deviation of the accuracy across 10 test sets. # Wins is the number of test sets on which the SWVP algorithm outperforms CSP. Gener. is the number of times the best β hyper-parameter value on the development set is also the best value on the test set, or the test set accuracy with the best development set β is at most 0.5% lower than that with the best test set β.
2
[['Model', 'B-WM'], ['Model', 'B-WMR'], ['Model', 'A-WM'], ['Model', 'A-WMR'], ['Model', 'CSP']]
2
[['simple(++), learnable(+++)', 'Acc. (std)'], ['simple(++), learnable(+++)', '# Wins'], ['simple(++), learnable(+++)', 'Gener.'], ['simple(++), learnable(++)', 'Acc. (std)'], ['simple(++), learnable(++)', '# Wins'], ['simple(++), learnable(++)', 'Gener.'], ['simple(+), learnable(+)', 'Acc. (std)'], ['simple(+), learnable(+)', '# Wins'], ['simple(+), learnable(+)', 'Gener.']]
[['75.47(3.05)', '9/10', '10/10', '63.18 (1.32)', '9/10', '10/10', '28.48 (1.9)', '5/10', '10/10'], ['75.96 (2.42)', '8/10', '10/10', '63.02 (2.49)', '9/10', '10/10', '24.31 (5.2)', '4/10', '10/10'], ['74.18 (2.16)', '7/10', '10/10', '61.65 (2.30)', '9/10', '10/10', '30.45 (1.0)', '6/10', '10/10'], ['75.17 (3.07)', '7/10', '10/10', '61.02 (1.93)', '8/10', '10/10', '25.8 (3.18)', '2/10', '10/10'], ['72.24 (3.45)', 'NA', 'NA', '57.89 (2.85)', 'NA', 'NA', '25.27(8.55)', 'NA', 'NA']]
column
['Acc. (std)', '# Wins', 'Gener.', 'Acc. (std)', '# Wins', 'Gener.', 'Acc. (std)', '# Wins', 'Gener.']
['B-WM', 'B-WMR', 'A-WM', 'A-WMR']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>simple(++), learnable(+++) || Acc. (std)</th> <th>simple(++), learnable(+++) || # Wins</th> <th>simple(++), learnable(+++) || Gener.</th> <th>simple(++), learnable(++) || Acc. (std)</th> <th>simple(++), learnable(++) || # Wins</th> <th>simple(++), learnable(++) || Gener.</th> <th>simple(+), learnable(+) || Acc. (std)</th> <th>simple(+), learnable(+) || # Wins</th> <th>simple(+), learnable(+) || Gener.</th> </tr> </thead> <tbody> <tr> <td>Model || B-WM</td> <td>75.47(3.05)</td> <td>9/10</td> <td>10/10</td> <td>63.18 (1.32)</td> <td>9/10</td> <td>10/10</td> <td>28.48 (1.9)</td> <td>5/10</td> <td>10/10</td> </tr> <tr> <td>Model || B-WMR</td> <td>75.96 (2.42)</td> <td>8/10</td> <td>10/10</td> <td>63.02 (2.49)</td> <td>9/10</td> <td>10/10</td> <td>24.31 (5.2)</td> <td>4/10</td> <td>10/10</td> </tr> <tr> <td>Model || A-WM</td> <td>74.18 (2.16)</td> <td>7/10</td> <td>10/10</td> <td>61.65 (2.30)</td> <td>9/10</td> <td>10/10</td> <td>30.45 (1.0)</td> <td>6/10</td> <td>10/10</td> </tr> <tr> <td>Model || A-WMR</td> <td>75.17 (3.07)</td> <td>7/10</td> <td>10/10</td> <td>61.02 (1.93)</td> <td>8/10</td> <td>10/10</td> <td>25.8 (3.18)</td> <td>2/10</td> <td>10/10</td> </tr> <tr> <td>Model || CSP</td> <td>72.24 (3.45)</td> <td>NA</td> <td>NA</td> <td>57.89 (2.85)</td> <td>NA</td> <td>NA</td> <td>25.27(8.55)</td> <td>NA</td> <td>NA</td> </tr> </tbody></table>
Table 1
table_1
D16-1045
8
emnlp2016
Synthetic Data. Table 1 presents our results. In all three setups an SWVP algorithm is superior. Averaged accuracy differences between the best performing algorithms and CSP are: 3.72 (B-WMR, (simple(++), learnable(+++))), 5.29 (B-WM, (simple(++), learnable(++))) and 5.18 (A-WM, (simple(+), learnable(+))). In all setups SWVP outperforms CSP in terms of averaged performance (except for B-WMR for (simple(+), learnable(+))). Moreover, the weighted models are more stable than CSP, as indicated by the lower standard deviation of their accuracy scores. Finally, for the simpler and more learnable datasets the SWVP models outperform CSP in the majority of cases (7-10/10).
[2, 1, 1, 1, 1, 1, 1]
['Synthetic Data.', 'Table 1 presents our results.', 'In all three setups an SWVP algorithm is superior.', 'Averaged accuracy differences between the best performing algorithms and CSP are: 3.72 (B-WMR, (simple(++), learnable(+++))), 5.29 (B-WM, (simple(++), learnable(++))) and 5.18 (A-WM, (simple(+), learnable(+))).', 'In all setups SWVP outperforms CSP in terms of averaged performance (except for B-WMR for (simple(+), learnable(+))).', 'Moreover, the weighted models are more stable than CSP, as indicated by the lower standard deviation of their accuracy scores.', 'Finally, for the simpler and more learnable datasets the SWVP models outperform CSP in the majority of cases (7-10/10).']
[None, None, ['B-WM', 'B-WMR', 'A-WM', 'A-WMR'], ['B-WM', 'B-WMR', 'A-WM', 'A-WMR'], ['B-WM', 'B-WMR', 'A-WM', 'A-WMR', 'CSP'], ['CSP'], ['B-WM', 'B-WMR', 'A-WM', 'A-WMR', 'CSP']]
1
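As a sketch of how the accuracy gaps quoted in the record above can be re-derived from its contents field, the snippet below parses the 'Acc. (std)' cells and compares each setup's best SWVP variant with CSP (standard library only; not code from the paper).

```python
import re

# Re-derive the quoted gaps over CSP (3.72 for B-WMR, 5.29 for B-WM, 5.18 for A-WM)
# from the Acc. (std) cells of this record.
rows = {
    "B-WM":  ["75.47(3.05)", "63.18 (1.32)", "28.48 (1.9)"],
    "B-WMR": ["75.96 (2.42)", "63.02 (2.49)", "24.31 (5.2)"],
    "A-WM":  ["74.18 (2.16)", "61.65 (2.30)", "30.45 (1.0)"],
    "A-WMR": ["75.17 (3.07)", "61.02 (1.93)", "25.8 (3.18)"],
    "CSP":   ["72.24 (3.45)", "57.89 (2.85)", "25.27(8.55)"],
}

def acc(cell):
    # "75.47(3.05)" -> 75.47; tolerates an optional space before the parenthesis
    return float(re.match(r"\s*([\d.]+)\s*\(", cell).group(1))

for setup in range(3):
    best = max((m for m in rows if m != "CSP"), key=lambda m: acc(rows[m][setup]))
    gap = acc(rows[best][setup]) - acc(rows["CSP"][setup])
    print(f"setup {setup}: best model = {best}, gap over CSP = {gap:.2f}")
```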
D16-1048table_2
The performance of cross-lingually similarized Chinese dependency grammars with different configurations.
2
[['Grammar', 'baseline'], ['Grammar', 'proj : fixed'], ['Grammar', 'proj : proj'], ['Grammar', 'proj : nonproj'], ['Grammar', 'nonproj : fixed'], ['Grammar', 'nonproj : proj'], ['Grammar', 'nonproj : nonproj']]
1
[['Similarity (%)'], ['Dep. P (%)'], ['Ada. P (%)'], ['BLEU-4 (%)']]
[['34.2', '84.5', '84.5', '24.6'], ['46.3', '54.1', '82.3', '25.8 (+1.2)'], ['63.2', '72.2', '84.6', '26.1 (+1.5)'], ['64.3', '74.6', '84.7', '26.2 (+1.6)'], ['48.4', '56.1', '82.6', '20.1 (−4.5)'], ['63.6', '71.4', '84.4', '22.9 (−1.7)'], ['64.1', '73.9', '84.9', '20.7 (−3.9)']]
column
['Similarity (%)', 'Dep. P (%)', 'Ada. P (%)', 'BLEU-4 (%)']
['Grammar']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Similarity (%)</th> <th>Dep. P (%)</th> <th>Ada. P (%)</th> <th>BLEU-4 (%)</th> </tr> </thead> <tbody> <tr> <td>Grammar || baseline</td> <td>34.2</td> <td>84.5</td> <td>84.5</td> <td>24.6</td> </tr> <tr> <td>Grammar || proj : fixed</td> <td>46.3</td> <td>54.1</td> <td>82.3</td> <td>25.8 (+1.2)</td> </tr> <tr> <td>Grammar || proj : proj</td> <td>63.2</td> <td>72.2</td> <td>84.6</td> <td>26.1 (+1.5)</td> </tr> <tr> <td>Grammar || proj : nonproj</td> <td>64.3</td> <td>74.6</td> <td>84.7</td> <td>26.2 (+1.6)</td> </tr> <tr> <td>Grammar || nonproj : fixed</td> <td>48.4</td> <td>56.1</td> <td>82.6</td> <td>20.1 (−4.5)</td> </tr> <tr> <td>Grammar || nonproj : proj</td> <td>63.6</td> <td>71.4</td> <td>84.4</td> <td>22.9 (−1.7)</td> </tr> <tr> <td>Grammar || nonproj : nonproj</td> <td>64.1</td> <td>73.9</td> <td>84.9</td> <td>20.7 (−3.9)</td> </tr> </tbody></table>
Table 2
table_2
D16-1048
8
emnlp2016
5.2.2 Selection of Searching Modes. With the hyper-parameters given by the developing procedures, cross-lingual similarization is conducted on the whole FBIS dataset. All the searching mode configurations are tried and 6 pairs of grammars are generated. For each of the 6 Chinese dependency grammars, we also give the three indicators as described before. Table 2 shows that, cross-lingual similarization results in grammars with much higher cross-lingual similarity, and the adaptive accuracies given by the adapted grammars approach those of the original grammars. It indicates that the proposed algorithm improves the cross-lingual similarity without losing syntactic knowledge. To determine the best searching mode for tree-based machine translation, we use the Chinese-English FBIS dataset as the small-scale bilingual corpus. A 4-gram language model is trained on the Xinhua portion of the Gigaword corpus with the SRILM toolkit (Stolcke and Andreas, 2002). For the analysis given by non-projective similarized grammars, the projective transformation should be conducted in order to produce projective dependency structures for rule extraction and translation decoding. In detail, the projective transformation first traverses the non-projective dependency structures just as they are projective, then adjusts the order of the nodes according to the traversed word sequences. We take NIST MT Evaluation testing set 2002 (NIST 02) for development, and use the case-sensitive BLEU (Papineni et al., 2002) to measure the translation accuracy. The last column of Table 2 shows the performance of the grammars on machine translation. The cross-lingually similarized grammars corresponding to the configurations with projective searching for Chinese always improve the translation performance, while non-projective grammars always hurt the performance. It probably can be attributed to the low performance of non-projective parsing as well as the inappropriateness of the simple projective transformation method. In the final application in machine translation, we adopted the similarized grammar corresponding to the configuration with projective searching on the source side and non-projective searching on the target side.
[0, 0, 0, 0, 1, 1, 2, 2, 2, 2, 2, 1, 1, 2, 0]
['5.2.2 Selection of Searching Modes.', 'With the hyper-parameters given by the developing procedures, cross-lingual similarization is conducted on the whole FBIS dataset.', 'All the searching mode configurations are tried and 6 pairs of grammars are generated.', 'For each of the 6 Chinese dependency grammars, we also give the three indicators as described before.', 'Table 2 shows that, cross-lingual similarization results in grammars with much higher cross-lingual similarity, and the adaptive accuracies given by the adapted grammars approach those of the original grammars.', 'It indicates that the proposed algorithm improves the cross-lingual similarity without losing syntactic knowledge.', 'To determine the best searching mode for tree-based machine translation, we use the Chinese-English FBIS dataset as the small-scale bilingual corpus.', 'A 4-gram language model is trained on the Xinhua portion of the Gigaword corpus with the SRILM toolkit (Stolcke and Andreas, 2002).', 'For the analysis given by non-projective similarized grammars, the projective transformation should be conducted in order to produce projective dependency structures for rule extraction and translation decoding.', 'In detail, the projective transformation first traverses the non-projective dependency structures just as they are projective, then adjusts the order of the nodes according to the traversed word sequences.', 'We take NIST MT Evaluation testing set 2002 (NIST 02) for development, and use the case-sensitive BLEU (Papineni et al., 2002) to measure the translation accuracy.', 'The last column of Table 2 shows the performance of the grammars on machine translation.', 'The cross-lingually similarized grammars corresponding to the configurations with projective searching for Chinese always improve the translation performance, while non-projective grammars always hurt the performance.', 'It probably can be attributed to the low performance of non-projective parsing as well as the inappropriateness of the simple projective transformation method.', 'In the final application in machine translation, we adopted the similarized grammar corresponding to the configuration with projective searching on the source side and non-projective searching on the target side.']
[None, None, None, None, None, None, None, None, None, None, None, None, None, None, None]
1
D16-1048table_3
The performance of the cross-lingually similarized grammar on dependency tree-based translation, compared with related work.
2
[['System', '(Liu et al. 2006)'], ['System', '(Chiang 2007)'], ['System', '(Xie et al. 2011)'], ['System', 'Original Grammar'], ['System', 'Similarized Grammar']]
1
[['NIST 04'], ['NIST 05']]
[['34.55', '31.94'], ['35.29', '33.22'], ['35.82', '33.62'], ['35.44', '33.08'], ['36.78', '35.12']]
column
['BLEU', 'BLEU']
['Original Grammar', 'Similarized Grammar']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NIST 04</th> <th>NIST 05</th> </tr> </thead> <tbody> <tr> <td>System || (Liu et al. 2006)</td> <td>34.55</td> <td>31.94</td> </tr> <tr> <td>System || (Chiang 2007)</td> <td>35.29</td> <td>33.22</td> </tr> <tr> <td>System || (Xie et al. 2011)</td> <td>35.82</td> <td>33.62</td> </tr> <tr> <td>System || Original Grammar</td> <td>35.44</td> <td>33.08</td> </tr> <tr> <td>System || Similarized Grammar</td> <td>36.78</td> <td>35.12</td> </tr> </tbody></table>
Table 3
table_3
D16-1048
8
emnlp2016
Table 3 shows the performance of the cross-lingually similarized grammar on dependency tree-based translation, compared with previous work (Xie et al., 2011). We also give the performance of constituency tree-based translation (Liu et al., 2006) and formal syntax-based translation (Chiang, 2007). The original grammar performs slightly worse than the previous work in dependency tree-based translation; this can be ascribed to the difference between the implementation of the original grammar and the dependency parser used in the previous work. However, the similarized grammar achieves a very significant improvement over the original grammar, and also significantly surpasses the previous work. Note that there is no other modification on the translation model besides the replacement of the source parser.
[1, 2, 1, 1, 2]
['Table 3 shows the performance of the cross-lingually similarized grammar on dependency tree-based translation, compared with previous work (Xie et al., 2011).', 'We also give the performance of constituency tree-based translation (Liu et al., 2006) and formal syntax-based translation (Chiang, 2007).', 'The original grammar performs slightly worse than the previous work in dependency tree-based translation; this can be ascribed to the difference between the implementation of the original grammar and the dependency parser used in the previous work.', 'However, the similarized grammar achieves a very significant improvement over the original grammar, and also significantly surpasses the previous work.', 'Note that there is no other modification on the translation model besides the replacement of the source parser.']
[['Similarized Grammar', 'Original Grammar', '(Xie et al. 2011)'], ['(Liu et al. 2006)', '(Chiang 2007)'], ['Original Grammar', '(Xie et al. 2011)'], ['Similarized Grammar', 'Original Grammar', '(Xie et al. 2011)'], None]
1
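Since every record in this file carries its table as plain HTML in the table_html_clean field, it can be loaded directly into a DataFrame. The sketch below (assuming pandas with an HTML parser backend such as lxml is installed, and using a trimmed excerpt of the table above) reproduces the BLEU gains of the similarized grammar over the original grammar.

```python
from io import StringIO
import pandas as pd  # requires an HTML parser backend such as lxml

# Trimmed excerpt of this record's table_html_clean field (three of the five rows).
html = """<table border='1' class='dataframe'><thead><tr>
<th></th><th>NIST 04</th><th>NIST 05</th></tr></thead><tbody>
<tr><td>System || (Xie et al. 2011)</td><td>35.82</td><td>33.62</td></tr>
<tr><td>System || Original Grammar</td><td>35.44</td><td>33.08</td></tr>
<tr><td>System || Similarized Grammar</td><td>36.78</td><td>35.12</td></tr>
</tbody></table>"""

df = pd.read_html(StringIO(html))[0]
df.columns = ["System", "NIST 04", "NIST 05"]
t = df.set_index("System")
gain = t.loc["System || Similarized Grammar"] - t.loc["System || Original Grammar"]
print(gain)  # roughly +1.34 BLEU on NIST 04 and +2.04 BLEU on NIST 05
```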
D16-1050table_1
BLEU scores on the NIST Chinese-English translation task. AVG = average BLEU scores on test sets. We highlight the best results in bold for each test set. “↑/⇑”: significantly better than Moses (p < 0.05/p < 0.01); “+/++”: significantly better than GroundHog (p < 0.05/p < 0.01);
2
[['System', 'Moses'], ['System', 'GroundHog'], ['System', 'VNMT w/o KL'], ['System', 'VNMT']]
1
[['MT05'], ['MT02'], ['MT03'], ['MT04'], ['MT06'], ['MT08'], ['AVG']]
[['33.68', '34.19', '34.39', '35.34', '29.20', '22.94', '31.21'], ['31.38', '33.32', '32.59', '35.05', '29.80', '22.82', '30.72'], ['31.40', '33.50', '32.92', '34.95', '28.74', '22.07', '30.44'], ['32.25', '34.50++', '33.78++', '36.72⇑++', '30.92⇑++', '24.41↑++', '32.07']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU']
['VNMT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT05</th> <th>MT02</th> <th>MT03</th> <th>MT04</th> <th>MT06</th> <th>MT08</th> <th>AVG</th> </tr> </thead> <tbody> <tr> <td>System || Moses</td> <td>33.68</td> <td>34.19</td> <td>34.39</td> <td>35.34</td> <td>29.20</td> <td>22.94</td> <td>31.21</td> </tr> <tr> <td>System || GroundHog</td> <td>31.38</td> <td>33.32</td> <td>32.59</td> <td>35.05</td> <td>29.80</td> <td>22.82</td> <td>30.72</td> </tr> <tr> <td>System || VNMT w/o KL</td> <td>31.40</td> <td>33.50</td> <td>32.92</td> <td>34.95</td> <td>28.74</td> <td>22.07</td> <td>30.44</td> </tr> <tr> <td>System || VNMT</td> <td>32.25</td> <td>34.50++</td> <td>33.78++</td> <td>36.72⇑++</td> <td>30.92⇑++</td> <td>24.41↑++</td> <td>32.07</td> </tr> </tbody></table>
Table 1
table_1
D16-1050
6
emnlp2016
Table 1 summarizes the BLEU scores of different systems on the Chinese-English translation tasks. Clearly VNMT significantly improves translation quality in terms of BLEU in most cases, and obtains the best average results that gain 0.86 and 1.35 BLEU points over Moses and GroundHog respectively. Besides, without the KL objective, VNMT w/o KL obtains even worse results than GroundHog. These results indicate the following two points: 1) explicitly modeling underlying semantics by a latent variable indeed benefits neural machine translation, and 2) the improvements of our model are not from enlarging the network.
[1, 1, 1, 2]
['Table 1 summarizes the BLEU scores of different systems on the Chinese-English translation tasks.', 'Clearly VNMT significantly improves translation quality in terms of BLEU in most cases, and obtains the best average results that gain 0.86 and 1.35 BLEU points over Moses and GroundHog respectively.', 'Besides, without the KL objective, VNMT w/o KL obtains even worse results than GroundHog.', 'These results indicate the following two points: 1) explicitly modeling underlying semantics by a latent variable indeed benefits neural machine translation, and 2) the improvements of our model are not from enlarging the network.']
[None, ['VNMT', 'Moses', 'GroundHog'], ['VNMT w/o KL', 'GroundHog'], None]
1
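The AVG column of this record appears to be the mean BLEU over the five test sets MT02/03/04/06/08 (MT05, listed first, is presumably the development set), and the 0.86 / 1.35 point gains quoted in the description follow from those averages. A small verification sketch (standard library only; significance markers such as “++” are stripped before parsing):

```python
import re

scores = {
    "Moses":     ["33.68", "34.19", "34.39", "35.34", "29.20", "22.94"],
    "GroundHog": ["31.38", "33.32", "32.59", "35.05", "29.80", "22.82"],
    "VNMT":      ["32.25", "34.50++", "33.78++", "36.72⇑++", "30.92⇑++", "24.41↑++"],
}

def bleu(cell):
    # keep only the leading number, dropping markers such as "++", "↑", "⇑"
    return float(re.match(r"[\d.]+", cell).group(0))

# Average over the five test sets (all columns except the first, MT05).
avg = {sys: round(sum(bleu(c) for c in cells[1:]) / 5, 2) for sys, cells in scores.items()}
print(avg)  # {'Moses': 31.21, 'GroundHog': 30.72, 'VNMT': 32.07}
print(round(avg["VNMT"] - avg["Moses"], 2), round(avg["VNMT"] - avg["GroundHog"], 2))  # 0.86 1.35
```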
D16-1051table_1
Alignment quality results for IBM2-HMM (2H) and its convex relaxation (2HC) using either HMM-style dynamic programming or “Joint” decoding. The first and last columns above are for the GIZA++ HMM initialized either with IBM Model 1 or Model 1 followed by Model 2. FA above refers to the improved IBM Model 2 (FastAlign) of (Dyer et al., 2013).
2
[['Iteration', '1'], ['Iteration', '2'], ['Iteration', '3'], ['Iteration', '4'], ['Iteration', '5'], ['Iteration', '6'], ['Iteration', '7'], ['Iteration', '8'], ['Iteration', '9'], ['Iteration', '10']]
5
[['Training', '2H', 'Decoding', 'HMM', 'AER'], ['Training', '2H', 'Decoding', 'HMM', 'F-Measure'], ['Training', '2H', 'Decoding', 'Joint', 'AER'], ['Training', '2H', 'Decoding', 'Joint', 'F-Measure'], ['Training', '2HC', 'Decoding', 'HMM', 'AER'], ['Training', '2HC', 'Decoding', 'HMM', 'F-Measure'], ['Training', '2HC', 'Decoding', 'Joint', 'AER'], ['Training', '2HC', 'Decoding', 'Joint', 'F-Measure'], ['Training', 'FA', 'Decoding', 'IBM2', 'AER'], ['Training', 'FA', 'Decoding', 'IBM2', 'F-Measure'], ['Training', '1-2H', 'Decoding', 'HMM', 'AER'], ['Training', '1-2H', 'Decoding', 'HMM', 'F-Measure']]
[['0.0956', '0.7829', '0.1076', '0.7797', '0.1538', '0.7199', '0.1814', '0.6914', '0.5406', '0.2951', '0.1761', '0.7219'], ['0.0884', '0.7854', '0.0943', '0.7805', '0.1093', '0.7594', '0.1343', '0.733', '0.1625', '0.7111', '0.0873', '0.8039'], ['0.0844', '0.7899', '0.0916', '0.7806', '0.1023', '0.7651', '0.1234', '0.7427', '0.1254', '0.7484', '0.0786', '0.8112'], ['0.0828', '0.7908', '0.0904', '0.7813', '0.0996', '0.7668', '0.1204', '0.7457', '0.1169', '0.7589', '0.0753', '0.8094'], ['0.0808', '0.7928', '0.0907', '0.7806', '0.0992', '0.7673', '0.1197', '0.7461', '0.1131', '0.7624', '0.0737', '0.8058'], ['0.0804', '0.7928', '0.0906', '0.7807', '0.0989', '0.7678', '0.1199', '0.7457', '0.1128', '0.763', '0.0719', '0.8056'], ['0.0795', '0.7939', '0.091', '0.7817', '0.0986', '0.7679', '0.1197', '0.7457', '0.1116', '0.7633', '0.0717', '0.8046'], ['0.0789', '0.7942', '0.09', '0.7814', '0.0988', '0.7679', '0.1195', '0.7458', '0.1086', '0.7658', '0.0725', '0.8024'], ['0.0793', '0.7937', '0.0904', '0.7813', '0.0986', '0.768', '0.1195', '0.7457', '0.1076', '0.7672', '0.0738', '0.8007'], ['0.0793', '0.7927', '0.0902', '0.7816', '0.0986', '0.768', '0.1195', '0.7457', '0.1072', '0.7679', '0.0734', '0.801']]
column
['AER', 'F-Measure', 'AER', 'F-Measure', 'AER', 'F-Measure', 'AER', 'F-Measure', 'AER', 'F-Measure', 'AER', 'F-Measure']
['HMM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Training || 15210H || Decoding || HMM || AER</th> <th>Training || 15210H || Decoding || HMM || F-Measure</th> <th>Training || 15210H || Decoding || Joint || AER</th> <th>Training || 15210H || Decoding || Joint || F-Measure</th> <th>Training || 210HC || Decoding || HMM || AER</th> <th>Training || 210HC || Decoding || HMM || F-Measure</th> <th>Training || 210HC || Decoding || Joint || AER</th> <th>Training || 210HC || Decoding || Joint || F-Measure</th> <th>Training || FA10 || Decoding || IBM2 || AER</th> <th>Training || FA10 || Decoding || IBM2 || F-Measure</th> <th>Training || 1525H10 || Decoding || HMM || AER</th> <th>Training || 1525H10 || Decoding || HMM || F-Measure</th> </tr> </thead> <tbody> <tr> <td>Iteration || 1</td> <td>0.0956</td> <td>0.7829</td> <td>0.1076</td> <td>0.7797</td> <td>0.1538</td> <td>0.7199</td> <td>0.1814</td> <td>0.6914</td> <td>0.5406</td> <td>0.2951</td> <td>0.1761</td> <td>0.7219</td> </tr> <tr> <td>Iteration || 2</td> <td>0.0884</td> <td>0.7854</td> <td>0.0943</td> <td>0.7805</td> <td>0.1093</td> <td>0.7594</td> <td>0.1343</td> <td>0.733</td> <td>0.1625</td> <td>0.7111</td> <td>0.0873</td> <td>0.8039</td> </tr> <tr> <td>Iteration || 3</td> <td>0.0844</td> <td>0.7899</td> <td>0.0916</td> <td>0.7806</td> <td>0.1023</td> <td>0.7651</td> <td>0.1234</td> <td>0.7427</td> <td>0.1254</td> <td>0.7484</td> <td>0.0786</td> <td>0.8112</td> </tr> <tr> <td>Iteration || 4</td> <td>0.0828</td> <td>0.7908</td> <td>0.0904</td> <td>0.7813</td> <td>0.0996</td> <td>0.7668</td> <td>0.1204</td> <td>0.7457</td> <td>0.1169</td> <td>0.7589</td> <td>0.0753</td> <td>0.8094</td> </tr> <tr> <td>Iteration || 5</td> <td>0.0808</td> <td>0.7928</td> <td>0.0907</td> <td>0.7806</td> <td>0.0992</td> <td>0.7673</td> <td>0.1197</td> <td>0.7461</td> <td>0.1131</td> <td>0.7624</td> <td>0.0737</td> <td>0.8058</td> </tr> <tr> <td>Iteration || 6</td> <td>0.0804</td> <td>0.7928</td> <td>0.0906</td> <td>0.7807</td> <td>0.0989</td> <td>0.7678</td> <td>0.1199</td> <td>0.7457</td> <td>0.1128</td> <td>0.763</td> <td>0.0719</td> <td>0.8056</td> </tr> <tr> <td>Iteration || 7</td> <td>0.0795</td> <td>0.7939</td> <td>0.091</td> <td>0.7817</td> <td>0.0986</td> <td>0.7679</td> <td>0.1197</td> <td>0.7457</td> <td>0.1116</td> <td>0.7633</td> <td>0.0717</td> <td>0.8046</td> </tr> <tr> <td>Iteration || 8</td> <td>0.0789</td> <td>0.7942</td> <td>0.09</td> <td>0.7814</td> <td>0.0988</td> <td>0.7679</td> <td>0.1195</td> <td>0.7458</td> <td>0.1086</td> <td>0.7658</td> <td>0.0725</td> <td>0.8024</td> </tr> <tr> <td>Iteration || 9</td> <td>0.0793</td> <td>0.7937</td> <td>0.0904</td> <td>0.7813</td> <td>0.0986</td> <td>0.768</td> <td>0.1195</td> <td>0.7457</td> <td>0.1076</td> <td>0.7672</td> <td>0.0738</td> <td>0.8007</td> </tr> <tr> <td>Iteration || 10</td> <td>0.0793</td> <td>0.7927</td> <td>0.0902</td> <td>0.7816</td> <td>0.0986</td> <td>0.768</td> <td>0.1195</td> <td>0.7457</td> <td>0.1072</td> <td>0.7679</td> <td>0.0734</td> <td>0.801</td> </tr> </tbody></table>
Table 1
table_1
D16-1051
9
emnlp2016
Table 1 shows the alignment summary statistics for the 447 sentences present in the Hansard test data. We present alignment quality scores using either the FastAlign IBM Model 2, the GIZA++ HMM, and our model and its relaxation using either the “HMM” or “Joint” decoding. First, we note that in deciding the decoding style for IBM2-HMM, the HMM method is better than the Joint method. We expected this type of performance since HMM decoding introduces positional dependence among the entire set of words in the sentence, which is shown to be a good modeling assumption (Vogel et al., 1996). From the results in Table 1 we see that the HMM outperforms all other models, including IBM2-HMM and its convex relaxation. However, IBM2-HMM is not far in AER performance from the HMM and both it and its relaxation do better than FastAlign or IBM Model 3 (the results for IBM Model 3 are not presented; a one-directional English-French run of 1 52 53 15 gave AER and F-Measure numbers of 0.1768 and 0.6588, respectively, and this was behind both the IBM Model 2 FastAlign and our models).
[1, 2, 1, 2, 1, 1]
['Table 1 shows the alignment summary statistics for the 447 sentences present in the Hansard test data.', 'We present alignment quality scores using either the FastAlign IBM Model 2, the GIZA++ HMM, and our model and its relaxation using either the “HMM” or “Joint” decoding.', 'First, we note that in deciding the decoding style for IBM2-HMM, the HMM method is better than the Joint method.', 'We expected this type of performance since HMM decoding introduces positional dependence among the entire set of words in the sentence, which is shown to be a good modeling assumption (Vogel et al., 1996).', 'From the results in Table 1 we see that the HMM outperforms all other models, including IBM2-HMM and its convex relaxation.', 'However, IBM2-HMM is not far in AER performance from the HMM and both it and its relaxation do better than FastAlign.']
[None, ['Training', 'HMM', 'Joint', 'Decoding'], ['2H', 'HMM', 'Joint'], ['HMM'], ['HMM', '2H', '2HC'], ['2H', 'IBM2', 'AER', 'HMM', 'F-Measure', 'FA']]
1
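For readers unfamiliar with the AER and F-Measure columns in this record, the snippet below sketches the commonly used alignment error rate definition (Och and Ney, 2003) on a made-up sentence pair; the link sets are purely illustrative and not taken from the Hansard data.

```python
# Toy illustration of alignment error rate and precision/recall over sure (S) and
# possible (P) gold links; A is a hypothetical set of predicted links.
S = {(0, 0), (1, 2), (2, 1)}          # sure gold alignment links (i, j)
P = S | {(3, 3)}                      # possible gold links, a superset of S
A = {(0, 0), (1, 2), (3, 3), (4, 4)}  # predicted links

aer = 1 - (len(A & S) + len(A & P)) / (len(A) + len(S))
precision = len(A & P) / len(A)
recall = len(A & S) / len(S)
print(round(aer, 3), round(precision, 3), round(recall, 3))  # 0.286 0.75 0.667
```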
D16-1062table_3
Comparison of Fleiss’ κ scores with scores from SNLI quality control sentence pairs.
2
[['Fleiss’κ', 'Contradiction'], ['Fleiss’κ', 'Entailment'], ['Fleiss’κ', 'Neutral'], ['Fleiss’κ', 'Overall']]
1
[['4GS'], ['5GS'], ['Bowman et al. 2015']]
[['0.37', '0.59', '0.77'], ['0.48', '0.63', '0.72'], ['0.41', '0.54', '0.6'], ['0.43', '0.6', '0.7']]
column
['Fleiss’κ', 'Fleiss’κ', 'Fleiss’κ']
['4GS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>4GS</th> <th>5GS</th> <th>Bowman et al. 2015</th> </tr> </thead> <tbody> <tr> <td>Fleiss’κ || Contradiction</td> <td>0.37</td> <td>0.59</td> <td>0.77</td> </tr> <tr> <td>Fleiss’κ || Entailment</td> <td>0.48</td> <td>0.63</td> <td>0.72</td> </tr> <tr> <td>Fleiss’κ || Neutral</td> <td>0.41</td> <td>0.54</td> <td>0.6</td> </tr> <tr> <td>Fleiss’κ || Overall</td> <td>0.43</td> <td>0.6</td> <td>0.7</td> </tr> </tbody></table>
Table 3
table_3
D16-1062
6
emnlp2016
Table 3 shows that the level of agreement as measured by the Fleiss’κ score is much lower when the number of annotators is increased, particularly for the 4GS set of sentence pairs, as compared to scores noted in Bowman et al. (2015). The decrease in agreement is particularly large with regard to contradiction. This could occur for a number of reasons. Recognizing entailment is an inherently difficult task, and classifying a correct label, particularly for contradiction and neutral, can be difficult due to an individual’s interpretation of the sentences and assumptions that an individual makes about the key facts of each sentence (e.g. coreference). It may also be the case that the individuals tasked with creating the sentence pairs on AMT created sentences that appeared to contradict a premise text, but can be interpreted differently given a different context.
[1, 1, 2, 2, 2]
['Table 3 shows that the level of agreement as measured by the Fleiss’κ score is much lower when the number of annotators is increased, particularly for the 4GS set of sentence pairs, as compared to scores noted in Bowman et al. (2015).', 'The decrease in agreement is particularly large with regard to contradiction.', 'This could occur for a number of reasons.', 'Recognizing entailment is an inherently difficult task, and classifying a correct label, particularly for contradiction and neutral, can be difficult due to an individual’s interpretation of the sentences and assumptions that an individual makes about the key facts of each sentence (e.g. coreference).', 'It may also be the case that the individuals tasked with creating the sentence pairs on AMT created sentences that appeared to contradict a premise text, but can be interpreted differently given a different context.']
[['Fleiss’κ', '4GS', 'Bowman et al. 2015'], ['Contradiction'], None, None, None]
1
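The agreement numbers in the record above are Fleiss' κ scores; the sketch below shows the computation on a small made-up annotation matrix (five sentence pairs, five hypothetical annotators, three labels), not on the actual SNLI annotations.

```python
import numpy as np

# counts[i, j] = number of annotators assigning label j to item i
# (columns: entailment, neutral, contradiction); toy data only.
counts = np.array([
    [5, 0, 0],
    [3, 2, 0],
    [0, 4, 1],
    [1, 1, 3],
    [0, 0, 5],
])

n = counts.sum(axis=1)[0]                  # annotators per item (assumed constant)
p_j = counts.sum(axis=0) / counts.sum()    # overall label proportions
P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement
P_bar, Pe_bar = P_i.mean(), np.square(p_j).sum()
kappa = (P_bar - Pe_bar) / (1 - Pe_bar)
print(round(kappa, 3))  # 0.487 on this toy matrix
```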
D16-1062table_5
Theta scores and area under curve percentiles for LSTM trained on SNLI and tested on GSIRT. We also report the accuracy for the same LSTM tested on all SNLI quality control items (see Section 3.1). All performance is based on binary classification for each label.
4
[['Item', 'Set', '5GS', 'Entailment'], ['Item', 'Set', '5GS', 'Contradiction'], ['Item', 'Set', '5GS', 'Neutral'], ['Item', 'Set', '4GS', 'Contradiction'], ['Item', 'Set', '4GS', 'Neutral']]
1
[['Theta Score'], ['Percentile'], ['Test Acc.']]
[['-0.133', '44.83%', '96.5%'], ['1.539', '93.82%', '87.9%'], ['0.423', '66.28%', '88%'], ['1.777', '96.25%', '78.9%'], ['0.441', '67%', '83%']]
column
['Theta Score', 'Percentile', 'Test Acc.']
['4GS', '5GS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Theta Score</th> <th>Percentile</th> <th>Test Acc.</th> </tr> </thead> <tbody> <tr> <td>Item || Set || 5GS || Entailment</td> <td>-0.133</td> <td>44.83%</td> <td>96.5%</td> </tr> <tr> <td>Item || Set || 5GS || Contradiction</td> <td>1.539</td> <td>93.82%</td> <td>87.9%</td> </tr> <tr> <td>Item || Set || 5GS || Neutral</td> <td>0.423</td> <td>66.28%</td> <td>88%</td> </tr> <tr> <td>Item || Set || 4GS || Contradiction</td> <td>1.777</td> <td>96.25%</td> <td>78.9%</td> </tr> <tr> <td>Item || Set || 4GS || Neutral</td> <td>0.441</td> <td>67%</td> <td>83%</td> </tr> </tbody></table>
Table 5
table_5
D16-1062
8
emnlp2016
The theta scores from IRT in Table 5 show that, compared to AMT users, the system performed well above average for contradiction items compared to human performance, and performed around the average for entailment and neutral items. For both the neutral and contradiction items, the theta scores are similar across the 4GS and 5GS sets, whereas the accuracy of the more difficult 4GS items is consistently lower. This shows the advantage of IRT to account for item characteristics in its ability estimates. A similar theta score across sets indicates that we can measure the “ability level” regardless of whether the test set is easy or hard. Theta score is a consistent measurement, compared to accuracy which varies with the difficulty of the dataset.
[1, 1, 2, 2, 2]
['The theta scores from IRT in Table 5 show that, compared to AMT users, the system performed well above average for contradiction items compared to human performance, and performed around the average for entailment and neutral items.', 'For both the neutral and contradiction items, the theta scores are similar across the 4GS and 5GS sets, whereas the accuracy of the more difficult 4GS items is consistently lower.', 'This shows the advantage of IRT to account for item characteristics in its ability estimates.', 'A similar theta score across sets indicates that we can measure the “ability level” regardless of whether the test set is easy or hard.', 'Theta score is a consistent measurement, compared to accuracy which varies with the difficulty of the dataset.']
[['Theta Score', 'Contradiction', 'Entailment', 'Neutral'], ['Neutral', 'Contradiction', 'Theta Score', '4GS', '5GS', 'Test Acc.'], None, ['Theta Score'], ['Theta Score', 'Test Acc.']]
1
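The 'Percentile' column in the record above looks consistent with interpreting the theta score as a standard normal ability estimate and taking the area under the curve up to that score, i.e. percentile ≈ Φ(θ). A quick check (standard library only; this is an observation about the numbers, not a claim about the authors' exact procedure):

```python
from math import erf, sqrt

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1 + erf(x / sqrt(2)))

for theta in (-0.133, 1.539, 0.423, 1.777, 0.441):
    print(theta, f"{100 * norm_cdf(theta):.1f}%")
# Prints roughly 44.7%, 93.8%, 66.4%, 96.2%, 67.0% -- close to the table's
# 44.83%, 93.82%, 66.28%, 96.25%, 67% (the small gaps are presumably rounding).
```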
D16-1063table_2
Performance of different rho functions on Text8 dataset with 17M tokens.
2
[['Task', 'Similarity'], ['Task', 'Analogy']]
2
[['Robi', '-'], ['ρ0', 'off'], ['ρ0', 'on'], ['ρ1', 'off'], ['ρ1', 'on'], ['ρ2', 'off'], ['ρ2', 'on'], ['ρ3', 'off'], ['ρ3', 'on']]
[['41.2', '69.0', '71.0', '66.7', '70.4', '66.8', '70.8', '68.1', '68.0'], ['22.7', '24.9', '31.9', '34.3', '44.5', '32.3', '40.4', '33.6', '42.9']]
column
['Robi', 'ρ0', 'ρ0', 'ρ1', 'ρ1', 'ρ2', 'ρ2', 'ρ3', 'ρ3']
['Similarity', 'Analogy']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Robi || -</th> <th>ρ0 || off</th> <th>ρ0 || on</th> <th>ρ1 || off</th> <th>ρ1 || on</th> <th>ρ2 || off</th> <th>ρ2 || on</th> <th>ρ3 || off</th> <th>ρ3 || on</th> </tr> </thead> <tbody> <tr> <td>Task || Similarity</td> <td>41.2</td> <td>69.0</td> <td>71.0</td> <td>66.7</td> <td>70.4</td> <td>66.8</td> <td>70.8</td> <td>68.1</td> <td>68.0</td> </tr> <tr> <td>Task || Analogy</td> <td>22.7</td> <td>24.9</td> <td>31.9</td> <td>34.3</td> <td>44.5</td> <td>32.3</td> <td>40.4</td> <td>33.6</td> <td>42.9</td> </tr> </tbody></table>
Table 2
table_2
D16-1063
7
emnlp2016
It can be seen from Table 2 that adding the weight r_{w,c} improves performance in all the cases, especially on the word analogy task. Among the four ρ functions, ρ0 performs the best on the word similarity task but suffers notably on the analogy task, while ρ1 = log performs the best overall. Given these observations, which are consistent with the results on large scale datasets, in the experiments that follow we only report WordRank with the best configuration, i.e., using ρ1 with the weight r_{w,c} as defined in (4).
[1, 1, 2]
['It can be seen from Table 2 that adding the weight r_{w,c} improves performance in all the cases, especially on the word analogy task.', 'Among the four ρ functions, ρ0 performs the best on the word similarity task but suffers notably on the analogy task, while ρ1 = log performs the best overall.', 'Given these observations, which are consistent with the results on large scale datasets, in the experiments that follow we only report WordRank with the best configuration, i.e., using ρ1 with the weight r_{w,c} as defined in (4).']
[['Analogy', 'Similarity'], ['ρ0', 'ρ1', 'Similarity', 'Analogy'], ['ρ1']]
1
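The analogy numbers in the record above come from a word-analogy benchmark; as background, the snippet below sketches the standard vector-offset prediction (a : b :: c : ?) that such evaluations typically use, on made-up two-dimensional embeddings. This is a generic illustration, not the paper's exact evaluation protocol.

```python
import numpy as np

# Toy embeddings; in a real evaluation these would be the trained word vectors.
emb = {
    "king":  np.array([0.8, 0.9]),
    "queen": np.array([0.2, 0.9]),
    "man":   np.array([0.7, 0.1]),
    "woman": np.array([0.1, 0.1]),
    "apple": np.array([0.9, 0.2]),
}

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# "man is to king as woman is to ?": rank candidates by cosine to king - man + woman.
target = emb["king"] - emb["man"] + emb["woman"]
pred = max((w for w in emb if w not in {"king", "man", "woman"}),
           key=lambda w: cos(emb[w], target))
print(pred)  # queen
```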
D16-1065table_3
Comparison between our joint approaches and the pipelined counterparts.
4
[['Dataset', 'LDC2013E117', 'System', 'JAMR (fixed)'], ['Dataset', 'LDC2013E117', 'System', 'System 1'], ['Dataset', 'LDC2013E117', 'System', 'System 2'], ['Dataset', 'LDC2014T12', 'System', 'JAMR (fixed)'], ['Dataset', 'LDC2014T12', 'System', 'System 1'], ['Dataset', 'LDC2014T12', 'System', 'System 2']]
1
[['P'], ['R'], ['F1']]
[['0.67', '0.58', '0.62'], ['0.72', '0.65', '0.68'], ['0.73', '0.69', '0.71'], ['0.68', '0.59', '0.63'], ['0.74', '0.63', '0.68'], ['0.73', '0.68', '0.71']]
column
['P', 'R', 'F1']
['System 1', 'System 2']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Dataset || LDC2013E117 || System || JAMR(fixed)</td> <td>0.67</td> <td>0.58</td> <td>0.62</td> </tr> <tr> <td>Dataset || LDC2013E117 || System || System 1</td> <td>0.72</td> <td>0.65</td> <td>0.68</td> </tr> <tr> <td>Dataset || LDC2013E117 || System || System 2</td> <td>0.73</td> <td>0.69</td> <td>0.71</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || JAMR(fixed)</td> <td>0.68</td> <td>0.59</td> <td>0.63</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || System 1</td> <td>0.74</td> <td>0.63</td> <td>0.68</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || System 2</td> <td>0.73</td> <td>0.68</td> <td>0.71</td> </tr> </tbody></table>
Table 3
table_3
D16-1065
8
emnlp2016
4.4 Joint Model vs. Pipelined Model. In this section, we compare the overall performance of our joint model to the pipelined model, JAMR. To give a fair comparison, we first implemented system 1 only using the same features (i.e., features 1-4 in Table 1) as JAMR for concept fragments. Table 3 gives the results on the two datasets. In terms of F-measure, we gain a 6% absolute improvement, and a 5% absolute improvement over the results of JAMR on the two different experimental setups respectively. Next, we implemented system 2 by using more lexical features to capture the association between concept and the context (i.e., features 5-16 in Table 1). Intuitively, these lexical contextual features should be helpful in identifying concepts in the parsing process. As expected, the results in Table 3 show that we gain a 3% improvement on the two different datasets respectively, by adding only some additional lexical features.
[2, 2, 2, 1, 1, 2, 2, 1]
['4.4 Joint Model vs. Pipelined Model.', 'In this section, we compare the overall performance of our joint model to the pipelined model, JAMR.', 'To give a fair comparison, we first implemented system 1 only using the same features (i.e., features 1-4 in Table 1) as JAMR for concept fragments.', 'Table 3 gives the results on the two datasets.', 'In terms of F-measure, we gain a 6% absolute improvement, and a 5% absolute improvement over the results of JAMR on the two different experimental setups respectively.', 'Next, we implemented system 2 by using more lexical features to capture the association between concept and the context (i.e., features 5-16 in Table 1).', 'Intuitively, these lexical contextual features should be helpful in identifying concepts in the parsing process.', 'As expected, the results in Table 3 show that we gain a 3% improvement on the two different datasets respectively, by adding only some additional lexical features.']
[None, None, None, None, ['F1', 'System 1', 'JAMR (fixed)'], ['System 2'], None, ['System 2']]
1
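The F1 column in the record above is the harmonic mean of the precision and recall columns; a two-line check against a couple of rows (values copied from the contents field):

```python
# F1 as the harmonic mean of precision and recall.
def f1(p, r):
    return 2 * p * r / (p + r)

print(round(f1(0.73, 0.69), 2))  # 0.71  (System 2, LDC2013E117)
print(round(f1(0.67, 0.58), 2))  # 0.62  (JAMR (fixed), LDC2013E117)
```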
D16-1065table_4
Final results of various methods.
4
[['Dataset', 'LDC2013E117', 'System', 'CAMR*'], ['Dataset', 'LDC2013E117', 'System', 'CAMR'], ['Dataset', 'LDC2013E117', 'System', 'Our approach'], ['Dataset', 'LDC2014T12', 'System', 'CAMR*'], ['Dataset', 'LDC2014T12', 'System', 'CAMR'], ['Dataset', 'LDC2014T12', 'System', 'CCG-based'], ['Dataset', 'LDC2014T12', 'System', 'Our approach']]
1
[['P'], ['R'], ['F1']]
[['.69', '.67', '.68'], ['.71', '.69', '.70'], ['.73', '.69', '.71'], ['.70', '.66', '.68'], ['.72', '.67', '.70'], ['.67', '.66', '.66'], ['.73', '.68', '.71']]
column
['P', 'R', 'F1']
['Our approach']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Dataset || LDC2013E117 || System || CAMR*</td> <td>.69</td> <td>.67</td> <td>.68</td> </tr> <tr> <td>Dataset || LDC2013E117 || System || CAMR</td> <td>.71</td> <td>.69</td> <td>.70</td> </tr> <tr> <td>Dataset || LDC2013E117 || System || Our approach</td> <td>.73</td> <td>.69</td> <td>.71</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || CAMR*</td> <td>.70</td> <td>.66</td> <td>.68</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || CAMR</td> <td>.72</td> <td>.67</td> <td>.70</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || CCG-based</td> <td>.67</td> <td>.66</td> <td>.66</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || Our approach</td> <td>.73</td> <td>.68</td> <td>.71</td> </tr> </tbody></table>
Table 4
table_4
D16-1065
8
emnlp2016
We give a comparison between our approach and other state-of-the-art AMR parsers, including CCG-based parser (Artzi et al., 2015) and dependency-based parser (Wang et al., 2015b). For comparison purposes, we give two results from two different versions of dependency-based AMR parser: CAMR* and CAMR. Compared to the latter, the former denotes the system that does not use the extended features generated from the semantic role labeling system, word sense disambiguation system and so on, which is directly comparable to our system. From Table 4 we can see that our parser achieves better performance than other approaches, even without utilizing any external semantic resources.
[2, 2, 2, 1]
['We give a comparison between our approach and other state-of-the-art AMR parsers, including CCG-based parser (Artzi et al., 2015) and dependency-based parser (Wang et al., 2015b).', 'For comparison purposes, we give two results from two different versions of dependency-based AMR parser: CAMR* and CAMR.', 'Compared to the latter, the former denotes the system that does not use the extended features generated from the semantic role labeling system, word sense disambiguation system and so on, which is directly comparable to our system.', 'From Table 4 we can see that our parser achieves better performance than other approaches, even without utilizing any external semantic resources.']
[None, ['CAMR*', 'CAMR'], None, ['Our approach', 'System']]
1
D16-1065table_5
Final results on the full LDC2014T12 dataset.
4
[['Dataset', 'LDC2014T12', 'System', 'JAMR (fixed)'], ['Dataset', 'LDC2014T12', 'System', 'CAMR*'], ['Dataset', 'LDC2014T12', 'System', 'CAMR'], ['Dataset', 'LDC2014T12', 'System', 'SMBT-based'], ['Dataset', 'LDC2014T12', 'System', 'Our approach']]
1
[['P'], ['R'], ['F1']]
[['.64', '.53', '.58'], ['.68', '.60', '.64'], ['.70', '.62', '.66'], ['-', '-', '.67'], ['.70', '.62', '.66']]
column
['P', 'R', 'F1']
['Our approach']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Dataset || LDC2014T12 || System || JAMR (fixed)</td> <td>.64</td> <td>.53</td> <td>.58</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || CAMR*</td> <td>.68</td> <td>.60</td> <td>.64</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || CAMR</td> <td>.70</td> <td>.62</td> <td>.66</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || SMBT-based</td> <td>-</td> <td>-</td> <td>.67</td> </tr> <tr> <td>Dataset || LDC2014T12 || System || Our approach</td> <td>.70</td> <td>.62</td> <td>.66</td> </tr> </tbody></table>
Table 5
table_5
D16-1065
8
emnlp2016
We also evaluate our parser on the full LDC2014T12 dataset. We use the training/development/test split recommended in the release: 10,312 sentences for training, 1368 sentences for development and 1371 sentences for testing. For comparison, we include the results of JAMR, CAMR*, CAMR and SMBT-based parser (Pust et al., 2015), which are also trained on the same dataset. The results in Table 5 show that our approach outperforms CAMR*, and obtains comparable performance with CAMR. However, our approach achieves slightly lower performance, compared to the SMBT-based parser, which adds data and features drawn from various external semantic resources.
[2, 2, 2, 1, 1]
['We also evaluate our parser on the full LDC2014T12 dataset.', 'We use the training/development/test split recommended in the release: 10,312 sentences for training, 1368 sentences for development and 1371 sentences for testing.', 'For comparison, we include the results of JAMR, CAMR*, CAMR and SMBT-based parser (Pust et al., 2015), which are also trained on the same dataset.', 'The results in Table 5 show that our approach outperforms CAMR*, and obtains comparable performance with CAMR.', 'However, our approach achieves slightly lower performance, compared to the SMBT-based parser, which adds data and features drawn from various external semantic resources.']
[['LDC2014T12'], None, ['JAMR (fixed)', 'CAMR*', 'CAMR', 'SMBT-based', 'Our approach'], ['Our approach', 'CAMR*', 'CAMR'], ['Our approach', 'SMBT-based']]
1
D16-1068table_2
Per language UAS for the fully supervised setup. Model names are as in Table 1, ‘e’ stands for ensemble. Best results for each language and parsing model order are highlighted in bold.
2
[['language', 'swedish'], ['language', 'bulgarian'], ['language', 'chinese'], ['language', 'czech'], ['language', 'dutch'], ['language', 'japanese'], ['language', 'catalan'], ['language', 'english']]
2
[['First Order', 'TurboParser'], ['First Order', 'BGI-PP'], ['First Order', 'BGI-PP+i+b'], ['First Order', 'BGI-PP+i+b+e'], ['Second Order', 'TurboParser'], ['Second Order', 'BGI-PP'], ['Second Order', 'BGI-PP+i+b'], ['Second Order', 'BGI-PP+i+b+e']]
[['87.12', '86.35', '86.93', '87.12', '88.65', '86.14', '87.85', '89.29'], ['90.66', '90.22', '90.42', '90.66', '92.43', '89.73', '91.50', '92.58'], ['84.88', '83.89', '84.17', '84.17', '86.53', '81.33', '85.18', '86.59'], ['83.53', '83.46', '83.44', '83.44', '86.35', '84.91', '86.26', '87.50'], ['88.48', '88.56', '88.43', '88.43', '91.30', '89.64', '90.49', '91.34'], ['93.03', '93.18', '93.27', '93.27', '93.83', '93.78', '94.01', '94.01'], ['88.94', '88.50', '88.67', '88.93', '92.25', '89.3', '90.46', '92.24'], ['87.18', '86.94', '86.84', '87.18', '90.70', '86.52', '88.24', '90.66']]
column
['UAS', 'UAS', 'UAS', 'UAS', 'UAS', 'UAS', 'UAS', 'UAS']
['BGI-PP+i+b', 'BGI-PP+i+b+e']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>First Order || TurboParser</th> <th>First Order || BGI-PP</th> <th>First Order || BGI-PP+i+b</th> <th>First Order || BGI-PP+i+b+e</th> <th>Second Order || TurboParser</th> <th>Second Order || BGI-PP</th> <th>Second Order || BGI-PP+i+b</th> <th>Second Order || BGI-PP+i+b+e</th> </tr> </thead> <tbody> <tr> <td>language || swedish</td> <td>87.12</td> <td>86.35</td> <td>86.93</td> <td>87.12</td> <td>88.65</td> <td>86.14</td> <td>87.85</td> <td>89.29</td> </tr> <tr> <td>language || bulgarian</td> <td>90.66</td> <td>90.22</td> <td>90.42</td> <td>90.66</td> <td>92.43</td> <td>89.73</td> <td>91.50</td> <td>92.58</td> </tr> <tr> <td>language || chinese</td> <td>84.88</td> <td>83.89</td> <td>84.17</td> <td>84.17</td> <td>86.53</td> <td>81.33</td> <td>85.18</td> <td>86.59</td> </tr> <tr> <td>language || czech</td> <td>83.53</td> <td>83.46</td> <td>83.44</td> <td>83.44</td> <td>86.35</td> <td>84.91</td> <td>86.26</td> <td>87.50</td> </tr> <tr> <td>language || dutch</td> <td>88.48</td> <td>88.56</td> <td>88.43</td> <td>88.43</td> <td>91.30</td> <td>89.64</td> <td>90.49</td> <td>91.34</td> </tr> <tr> <td>language || japanese</td> <td>93.03</td> <td>93.18</td> <td>93.27</td> <td>93.27</td> <td>93.83</td> <td>93.78</td> <td>94.01</td> <td>94.01</td> </tr> <tr> <td>language || catalan</td> <td>88.94</td> <td>88.50</td> <td>88.67</td> <td>88.93</td> <td>92.25</td> <td>89.3</td> <td>90.46</td> <td>92.24</td> </tr> <tr> <td>language || english</td> <td>87.18</td> <td>86.94</td> <td>86.84</td> <td>87.18</td> <td>90.70</td> <td>86.52</td> <td>88.24</td> <td>90.66</td> </tr> </tbody></table>
Table 2
table_2
D16-1068
8
emnlp2016
Table 2 complements our results, providing UAS values for each of the 8 languages participating in this setup. The UAS differences between BGI-PP+i+b and the TurboParser are (+0.24)-(-0.71) in first order parsing and (+0.18)-(-2.46) in second order parsing. In the latter case, combining these two models (BGI-PP+i+b+e) yields improvements over the TurboParser in 6 out of 8 languages.
[1, 1, 1]
['Table 2 complements our results, providing UAS values for each of the 8 languages participating in this setup.', 'The UAS differences between BGI-PP+i+b and the TurboParser are (+0.24)-(-0.71) in first order parsing and (+0.18)-(-2.46) in second order parsing.', 'In the latter case, combining these two models (BGI-PP+i+b+e) yields improvements over the TurboParser in 6 out of 8 languages.']
[['language', 'First Order', 'Second Order'], ['BGI-PP+i+b', 'TurboParser', 'First Order', 'Second Order'], ['BGI-PP+i+b+e', 'TurboParser', 'language']]
1
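The difference ranges and the "6 out of 8" count quoted in the record above can be re-derived from its per-language UAS values; a verification sketch with the numbers copied from the contents field:

```python
# Per-language UAS copied from the record above (sv, bg, zh, cs, nl, ja, ca, en).
turbo1 = [87.12, 90.66, 84.88, 83.53, 88.48, 93.03, 88.94, 87.18]  # TurboParser, 1st order
bgi1   = [86.93, 90.42, 84.17, 83.44, 88.43, 93.27, 88.67, 86.84]  # BGI-PP+i+b, 1st order
turbo2 = [88.65, 92.43, 86.53, 86.35, 91.30, 93.83, 92.25, 90.70]  # TurboParser, 2nd order
bgi2   = [87.85, 91.50, 85.18, 86.26, 90.49, 94.01, 90.46, 88.24]  # BGI-PP+i+b, 2nd order
ens2   = [89.29, 92.58, 86.59, 87.50, 91.34, 94.01, 92.24, 90.66]  # BGI-PP+i+b+e, 2nd order

d1 = [round(b - t, 2) for b, t in zip(bgi1, turbo1)]
d2 = [round(b - t, 2) for b, t in zip(bgi2, turbo2)]
print(max(d1), min(d1))  # 0.24 -0.71
print(max(d2), min(d2))  # 0.18 -2.46
print(sum(e > t for e, t in zip(ens2, turbo2)), "of 8 languages improved by the ensemble")  # 6
```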
D16-1071table_3
Word relation results. MRR per language and POS type for all models. unfiltered is the unfiltered nearest neighbor search space; filtered is the nearest neighbor search space that contains only one POS. ‡ (resp. †): significantly worse than LAMB (sign test, p < .01, resp. p < .05). Best unfiltered/filtered result per row is in bold.
4
[['lang', 'cz', 'POS', 'a'], ['lang', 'cz', 'POS', 'n'], ['lang', 'cz', 'POS', 'v'], ['lang', 'cz', 'POS', 'all'], ['lang', 'de', 'POS', 'a'], ['lang', 'de', 'POS', 'n'], ['lang', 'de', 'POS', 'v'], ['lang', 'de', 'POS', 'all'], ['lang', 'en', 'POS', 'a'], ['lang', 'en', 'POS', 'n'], ['lang', 'en', 'POS', 'v'], ['lang', 'en', 'POS', 'all'], ['lang', 'es', 'POS', 'a'], ['lang', 'es', 'POS', 'n'], ['lang', 'es', 'POS', 'v'], ['lang', 'es', 'POS', 'all'], ['lang', 'hu', 'POS', 'a'], ['lang', 'hu', 'POS', 'n'], ['lang', 'hu', 'POS', 'v'], ['lang', 'hu', 'POS', 'all']]
3
[['unfiltered', 'form', 'real'], ['unfiltered', 'form', 'opt'], ['unfiltered', 'form', 'sum'], ['unfiltered', 'STEM', 'real'], ['unfiltered', 'STEM', 'opt'], ['unfiltered', 'STEM', 'sum'], ['unfiltered', '-', 'LAMB'], ['filtered', 'form', 'real'], ['filtered', 'form', 'opt'], ['filtered', 'form', 'sum'], ['filtered', 'STEM', 'real'], ['filtered', 'STEM', 'opt'], ['filtered', 'STEM', 'sum'], ['filtered', 'LAMB', '-']]
[['0.03', '0.04', '0.05', '0.02', '0.05', '0.05', '0.06', '0.03‡', '0.05†', '0.07', '0.04†', '0.08', '0.08', '0.09'], ['0.15‡', '0.21‡', '0.24‡', '0.18‡', '0.27‡', '0.26‡', '0.30', '0.17‡', '0.23‡', '0.26‡', '0.20‡', '0.29‡', '0.28‡', '0.32'], ['0.07‡', '0.13‡', '0.16†', '0.08‡', '0.14‡', '0.16‡', '0.18', '0.09‡', '0.15‡', '0.17‡', '0.09‡', '0.17†', '0.18', '0.20'], ['0.12‡', '0.18‡', '0.20‡', '0.14‡', '0.22‡', '0.21‡', '0.25', '-', '-', '-', '-', '-', '-', '-'], ['0.14‡', '0.22‡', '0.25†', '0.17‡', '0.26', '0.21‡', '0.27', '0.17‡', '0.25‡', '0.27‡', '0.23‡', '0.33', '0.33', '0.33'], ['0.23‡', '0.35‡', '0.30‡', '0.28‡', '0.35†', '0.33‡', '0.36', '0.24‡', '0.36‡', '0.31‡', '0.28‡', '0.36', '0.35‡', '0.37'], ['0.11‡', '0.19‡', '0.18‡', '0.11‡', '0.22', '0.18‡', '0.23', '0.13‡', '0.20‡', '0.21‡', '0.13‡', '0.24‡', '0.23‡', '0.26'], ['0.21‡', '0.32‡', '0.28‡', '0.24‡', '0.33†', '0.30‡', '0.34', '-', '-', '-', '-', '-', '-', '-'], ['0.22‡', '0.25‡', '0.24‡', '0.16‡', '0.26‡', '0.25‡', '0.28', '0.25‡', '0.28‡', '0.28‡', '0.18‡', '0.29‡', '0.32', '0.31'], ['0.24‡', '0.27‡', '0.28‡', '0.22‡', '0.30', '0.28‡', '0.30', '0.25‡', '0.28‡', '0.29‡', '0.23‡', '0.31†', '0.31‡', '0.32'], ['0.29‡', '0.35‡', '0.37', '0.17‡', '0.35', '0.24‡', '0.37', '0.33‡', '0.39‡', '0.42‡', '0.21‡', '0.42†', '0.39‡', '0.44'], ['0.23‡', '0.26‡', '0.27‡', '0.20‡', '0.28‡', '0.25‡', '0.29', '-', '-', '-', '-', '-', '-', '-'], ['0.20‡', '0.23‡', '0.23‡', '0.08‡', '0.21‡', '0.18‡', '0.27', '0.21‡', '0.25‡', '0.26‡', '0.10‡', '0.26‡', '0.26‡', '0.30'], ['0.21‡', '0.25‡', '0.25‡', '0.16‡', '0.25‡', '0.23‡', '0.29', '0.22‡', '0.26‡', '0.27‡', '0.17‡', '0.27‡', '0.26‡', '0.30'], ['0.19‡', '0.35†', '0.36', '0.11‡', '0.29‡', '0.19‡', '0.38', '0.22‡', '0.36‡', '0.36‡', '0.16‡', '0.36‡', '0.33‡', '0.42'], ['0.20‡', '0.26‡', '0.26‡', '0.14‡', '0.24‡', '0.21‡', '0.30', '-', '-', '-', '-', '-', '-', '-'], ['0.02‡', '0.06‡', '0.06‡', '0.05‡', '0.08', '0.08', '0.09', '0.04‡', '0.08‡', '0.08‡', '0.06‡', '0.12', '0.11', '0.12'], ['0.01‡', '0.04‡', '0.05‡', '0.03‡', '0.07', '0.06‡', '0.07', '0.01‡', '0.04‡', '0.05‡', '0.04‡', '0.07†', '0.06‡', '0.07'], ['0.04‡', '0.11‡', '0.13‡', '0.07‡', '0.14‡', '0.15', '0.17', '0.05‡', '0.13‡', '0.14‡', '0.07‡', '0.15‡', '0.16†', '0.19'], ['0.02‡', '0.05‡', '0.06‡', '0.04‡', '0.08‡', '0.07‡', '0.09', '-', '-', '-', '-', '-', '-', '-']]
column
['MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR']
['LAMB']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>unfiltered || form || real</th> <th>unfiltered || form || opt</th> <th>unfiltered || form || sum</th> <th>unfiltered || STEM || real</th> <th>unfiltered || STEM || opt</th> <th>unfiltered || STEM || sum</th> <th>unfiltered || - || LAMB</th> <th>filtered || form || real</th> <th>filtered || form || opt</th> <th>filtered || form || sum</th> <th>filtered || STEM || real</th> <th>filtered || STEM || opt</th> <th>filtered || STEM || sum</th> <th>filtered || - || LAMB</th> </tr> </thead> <tbody> <tr> <td>lang || cz || POS || a</td> <td>0.03</td> <td>0.04</td> <td>0.05</td> <td>0.02</td> <td>0.05</td> <td>0.05</td> <td>0.06</td> <td>0.03‡</td> <td>0.05†</td> <td>0.07</td> <td>0.04†</td> <td>0.08</td> <td>0.08</td> <td>0.09</td> </tr> <tr> <td>lang || cz || POS || n</td> <td>0.15‡</td> <td>0.21‡</td> <td>0.24‡</td> <td>0.18‡</td> <td>0.27‡</td> <td>0.26‡</td> <td>0.30</td> <td>0.17‡</td> <td>0.23‡</td> <td>0.26‡</td> <td>0.20‡</td> <td>0.29‡</td> <td>0.28‡</td> <td>0.32</td> </tr> <tr> <td>lang || cz || POS || v</td> <td>0.07‡</td> <td>0.13‡</td> <td>0.16†</td> <td>0.08‡</td> <td>0.14‡</td> <td>0.16‡</td> <td>0.18</td> <td>0.09‡</td> <td>0.15‡</td> <td>0.17‡</td> <td>0.09‡</td> <td>0.17†</td> <td>0.18</td> <td>0.20</td> </tr> <tr> <td>lang || cz || POS || all</td> <td>0.12‡</td> <td>0.18‡</td> <td>0.20‡</td> <td>0.14‡</td> <td>0.22‡</td> <td>0.21‡</td> <td>0.25</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>lang || de || POS || a</td> <td>0.14‡</td> <td>0.22‡</td> <td>0.25†</td> <td>0.17‡</td> <td>0.26</td> <td>0.21‡</td> <td>0.27</td> <td>0.17‡</td> <td>0.25‡</td> <td>0.27‡</td> <td>0.23‡</td> <td>0.33</td> <td>0.33</td> <td>0.33</td> </tr> <tr> <td>lang || de || POS || n</td> <td>0.23‡</td> <td>0.35‡</td> <td>0.30‡</td> <td>0.28‡</td> <td>0.35†</td> <td>0.33‡</td> <td>0.36</td> <td>0.24‡</td> <td>0.36‡</td> <td>0.31‡</td> <td>0.28‡</td> <td>0.36</td> <td>0.35‡</td> <td>0.37</td> </tr> <tr> <td>lang || de || POS || v</td> <td>0.11‡</td> <td>0.19‡</td> <td>0.18‡</td> <td>0.11‡</td> <td>0.22</td> <td>0.18‡</td> <td>0.23</td> <td>0.13‡</td> <td>0.20‡</td> <td>0.21‡</td> <td>0.13‡</td> <td>0.24‡</td> <td>0.23‡</td> <td>0.26</td> </tr> <tr> <td>lang || de || POS || all</td> <td>0.21‡</td> <td>0.32‡</td> <td>0.28‡</td> <td>0.24‡</td> <td>0.33†</td> <td>0.30‡</td> <td>0.34</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>lang || en || POS || a</td> <td>0.22‡</td> <td>0.25‡</td> <td>0.24‡</td> <td>0.16‡</td> <td>0.26‡</td> <td>0.25‡</td> <td>0.28</td> <td>0.25‡</td> <td>0.28‡</td> <td>0.28‡</td> <td>0.18‡</td> <td>0.29‡</td> <td>0.32</td> <td>0.31</td> </tr> <tr> <td>lang || en || POS || n</td> <td>0.24‡</td> <td>0.27‡</td> <td>0.28‡</td> <td>0.22‡</td> <td>0.30</td> <td>0.28‡</td> <td>0.30</td> <td>0.25‡</td> <td>0.28‡</td> <td>0.29‡</td> <td>0.23‡</td> <td>0.31†</td> <td>0.31‡</td> <td>0.32</td> </tr> <tr> <td>lang || en || POS || v</td> <td>0.29‡</td> <td>0.35‡</td> <td>0.37</td> <td>0.17‡</td> <td>0.35</td> <td>0.24‡</td> <td>0.37</td> <td>0.33‡</td> <td>0.39‡</td> <td>0.42‡</td> <td>0.21‡</td> <td>0.42†</td> <td>0.39‡</td> <td>0.44</td> </tr> <tr> <td>lang || en || POS || all</td> <td>0.23‡</td> <td>0.26‡</td> <td>0.27‡</td> <td>0.20‡</td> <td>0.28‡</td> <td>0.25‡</td> <td>0.29</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>lang || es || POS || a</td> 
<td>0.20‡</td> <td>0.23‡</td> <td>0.23‡</td> <td>0.08‡</td> <td>0.21‡</td> <td>0.18‡</td> <td>0.27</td> <td>0.21‡</td> <td>0.25‡</td> <td>0.26‡</td> <td>0.10‡</td> <td>0.26‡</td> <td>0.26‡</td> <td>0.30</td> </tr> <tr> <td>lang || es || POS || n</td> <td>0.21‡</td> <td>0.25‡</td> <td>0.25‡</td> <td>0.16‡</td> <td>0.25‡</td> <td>0.23‡</td> <td>0.29</td> <td>0.22‡</td> <td>0.26‡</td> <td>0.27‡</td> <td>0.17‡</td> <td>0.27‡</td> <td>0.26‡</td> <td>0.30</td> </tr> <tr> <td>lang || es || POS || v</td> <td>0.19‡</td> <td>0.35†</td> <td>0.36</td> <td>0.11‡</td> <td>0.29‡</td> <td>0.19‡</td> <td>0.38</td> <td>0.22‡</td> <td>0.36‡</td> <td>0.36‡</td> <td>0.16‡</td> <td>0.36‡</td> <td>0.33‡</td> <td>0.42</td> </tr> <tr> <td>lang || es || POS || all</td> <td>0.20‡</td> <td>0.26‡</td> <td>0.26‡</td> <td>0.14‡</td> <td>0.24‡</td> <td>0.21‡</td> <td>0.30</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>lang || hu || POS || a</td> <td>0.02‡</td> <td>0.06‡</td> <td>0.06‡</td> <td>0.05‡</td> <td>0.08</td> <td>0.08</td> <td>0.09</td> <td>0.04‡</td> <td>0.08‡</td> <td>0.08‡</td> <td>0.06‡</td> <td>0.12</td> <td>0.11</td> <td>0.12</td> </tr> <tr> <td>lang || hu || POS || n</td> <td>0.01‡</td> <td>0.04‡</td> <td>0.05‡</td> <td>0.03‡</td> <td>0.07</td> <td>0.06‡</td> <td>0.07</td> <td>0.01‡</td> <td>0.04‡</td> <td>0.05‡</td> <td>0.04‡</td> <td>0.07†</td> <td>0.06‡</td> <td>0.07</td> </tr> <tr> <td>lang || hu || POS || v</td> <td>0.04‡</td> <td>0.11‡</td> <td>0.13‡</td> <td>0.07‡</td> <td>0.14‡</td> <td>0.15</td> <td>0.17</td> <td>0.05‡</td> <td>0.13‡</td> <td>0.14‡</td> <td>0.07‡</td> <td>0.15‡</td> <td>0.16†</td> <td>0.19</td> </tr> <tr> <td>lang || hu || POS || all</td> <td>0.02‡</td> <td>0.05‡</td> <td>0.06‡</td> <td>0.04‡</td> <td>0.08‡</td> <td>0.07‡</td> <td>0.09</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> </tbody></table>
Table 3
table_3
D16-1071
7
emnlp2016
Results. The MRR results in the left half of Table 3 (“unfiltered”) show that for all languages and for all POS, form real has the worst performance among the form models. This comes as no surprise since this model barely knows anything about word forms and lemmata. The form opt model improves these results based on the additional information it has access to (the mapping from lemma to its most frequent form). form sum performs similarly to form opt. For Czech, Hungarian and Spanish it is slightly better (or equally good), whereas for English and German there is no clear trend. There is a large difference between these two models on German nouns, with form sum performing considerably worse. We attribute this to the fact that many German noun forms are rare compounds and therefore lead to badly trained form embeddings, which summed up do not lead to high-quality embeddings either. Among the stemming models, stem real is also the worst-performing model. We can further see that for all languages and almost all POS, stem sum performs worse than stem opt. That indicates that stemming leads to many low-frequency stems or many words sharing the same stem. This is especially apparent in Spanish verbs. There, the stemming models are clearly inferior to form models. Overall, LAMB performs best for all languages and POS types. Most improvements of LAMB are significant. The improvement over the best form model reaches up to 6 points (e.g., Czech nouns). In contrast to form sum, LAMB improves over form opt on German nouns. This indicates that the sparsity issue is successfully addressed by LAMB.
[2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
['Results.', 'The MRR results in the left half of Table 3 (“unfiltered”) show that for all languages and for all POS, form real has the worst performance among the form models.', 'This comes as no surprise since this model barely knows anything about word forms and lemmata.', 'The form opt model improves these results based on the additional information it has access to (the mapping from lemma to its most frequent form).', 'form sum performs similarly to form opt.', 'For Czech, Hungarian and Spanish it is slightly better (or equally good), whereas for English and German there is no clear trend.', 'There is a large difference between these two models on German nouns, with form sum performing considerably worse.', 'We attribute this to the fact that many German noun forms are rare compounds and therefore lead to badly trained form embeddings, which summed up do not lead to high-quality embeddings either.', 'Among the stemming models, stem real is also the worst-performing model.', 'We can further see that for all languages and almost all POS, stem sum performs worse than stem opt.', 'That indicates that stemming leads to many low-frequency stems or many words sharing the same stem.', 'This is especially apparent in Spanish verbs.', 'There, the stemming models are clearly inferior to form models.', 'Overall, LAMB performs best for all languages and POS types.', 'Most improvements of LAMB are significant.', 'The improvement over the best form model reaches up to 6 points (e.g., Czech nouns).', 'In contrast to form sum, LAMB improves over form opt on German nouns.', 'This indicates that the sparsity issue is successfully addressed by LAMB.']
[None, ['lang', 'POS', 'unfiltered'], None, ['form'], ['form', 'sum', 'opt'], ['lang'], ['form', 'sum', 'de'], ['de'], ['STEM'], ['lang', 'POS', 'STEM', 'sum', 'opt'], ['STEM'], ['es'], ['STEM'], ['LAMB', 'lang', 'POS'], ['LAMB'], ['cz'], ['LAMB', 'form', 'sum', 'opt', 'de'], ['LAMB']]
1
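The MRR figures discussed in the record above are mean reciprocal rank scores. As a point of reference, a minimal Python sketch of how such a score is typically computed is given below; the ranked-candidate input format and the function name are assumptions made for illustration, not the authors' evaluation code.

def mean_reciprocal_rank(ranked_lists, gold_sets):
    # ranked_lists: one list of candidates per query, best-ranked first.
    # gold_sets:    one set of correct answers per query.
    total = 0.0
    for candidates, gold in zip(ranked_lists, gold_sets):
        for rank, cand in enumerate(candidates, start=1):
            if cand in gold:
                total += 1.0 / rank  # reciprocal rank of the first hit
                break
    return total / len(ranked_lists)

# Example: first query hits at rank 2, second query misses -> (0.5 + 0.0) / 2 = 0.25
print(mean_reciprocal_rank([["a", "b"], ["x", "y"]], [{"b"}, {"z"}]))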
D16-1071table_5
Polarity classification results. Bold is best per language and column.
4
[['lang', 'cz', 'features', 'Brychcin et al. (2013)'], ['lang', 'cz', 'features', 'form'], ['lang', 'cz', 'features', 'STEM'], ['lang', 'cz', 'features', 'LAMB'], ['lang', 'en', 'features', 'Hagen et al. (2015)'], ['lang', 'en', 'features', 'form'], ['lang', 'en', 'features', 'STEM'], ['lang', 'en', 'features', 'LAMB']]
1
[['acc'], ['F1']]
[['-', '81.53'], ['80.86', '80.75'], ['81.51', '81.39'], ['81.21', '81.09'], ['-', '64.84'], ['66.78', '62.21'], ['66.95', '62.06'], ['67.49', '63.01']]
column
['acc', 'F1']
['LAMB']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>acc</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>lang || cz || features || Brychcin et al. (2013)</td> <td>-</td> <td>81.53</td> </tr> <tr> <td>lang || cz || features || form</td> <td>80.86</td> <td>80.75</td> </tr> <tr> <td>lang || cz || features || STEM</td> <td>81.51</td> <td>81.39</td> </tr> <tr> <td>lang || cz || features || LAMB</td> <td>81.21</td> <td>81.09</td> </tr> <tr> <td>lang || en || features || Hagen et al. (2015)</td> <td>-</td> <td>64.84</td> </tr> <tr> <td>lang || en || features || form</td> <td>66.78</td> <td>62.21</td> </tr> <tr> <td>lang || en || features || STEM</td> <td>66.95</td> <td>62.06</td> </tr> <tr> <td>lang || en || features || LAMB</td> <td>67.49</td> <td>63.01</td> </tr> </tbody></table>
Table 5
table_5
D16-1071
8
emnlp2016
Results. Table 5 lists the 10-fold cross-validation results (accuracy and macro F1) on the CSFD dataset. LAMB/STEM results are consistently better than form results. In our analysis, we found the following example for the benefit of normalization: “popis a název zajímavý a film je taková filmařská prasárna” (engl. “description and title are interesting, but it is bad film-making”). This example is correctly classified as negative by the LAMB model because it has an embedding for “prasárna” (bad, smut) whereas the form model does not. The out-of-vocabulary counts for form and LAMB on the first fold of the CSFD experiment are 26.3k and 25.5k, respectively. The similarity of these two numbers suggests that the quality of the word embeddings (form vs. LAMB) is responsible for the performance gain. On the SemEval data, LAMB improves the results over form and stem (cf. Table 5). Hence, LAMB can still pick up additional information despite the simple morphology of English. This is probably due to better embeddings for rare words. The SemEval 2015 winner (Hagen et al., 2015) is a highly domain-dependent and specialized system that we do not outperform.
[2, 1, 1, 2, 2, 2, 2, 1, 1, 2, 1]
['Results.', 'Table 5 lists the 10-fold cross-validation results (accuracy and macro F1) on the CSFD dataset.', 'LAMB/STEM results are consistently better than form results.', 'In our analysis, we found the following example for the benefit of normalization: “popis a název zajímavý a film je taková filmařská prasárna” (engl. “description and title are interesting, but it is bad film-making”).', 'This example is correctly classified as negative by the LAMB model because it has an embedding for “prasárna” (bad, smut) whereas the form model does not.', 'The out-of-vocabulary counts for form and LAMB on the first fold of the CSFD experiment are 26.3k and 25.5k, respectively.', 'The similarity of these two numbers suggests that the quality of the word embeddings (form vs. LAMB) is responsible for the performance gain.', 'On the SemEval data, LAMB improves the results over form and stem (cf. Table 5).', 'Hence, LAMB can still pick up additional information despite the simple morphology of English.', 'This is probably due to better embeddings for rare words.', 'The SemEval 2015 winner (Hagen et al., 2015) is a highly domain-dependent and specialized system that we do not outperform.']
[None, ['acc', 'F1'], ['LAMB', 'STEM'], None, ['LAMB'], ['form', 'LAMB'], ['form', 'LAMB'], ['LAMB', 'form', 'STEM'], ['LAMB', 'en'], None, ['Hagen et al. (2015)']]
1
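The acc and F1 columns in the record above are accuracy and macro-averaged F1 over the cross-validation folds. A small self-contained sketch of those two metrics for plain label lists follows; it is illustrative only and assumes simple gold/predicted label sequences rather than the authors' scripts.

def accuracy(gold, pred):
    # Fraction of positions where the predicted label matches the gold label.
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def macro_f1(gold, pred):
    # Average the per-class F1 scores, giving every class equal weight.
    labels = sorted(set(gold) | set(pred))
    f1_scores = []
    for lab in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == lab and p == lab)
        fp = sum(1 for g, p in zip(gold, pred) if g != lab and p == lab)
        fn = sum(1 for g, p in zip(gold, pred) if g == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1_scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1_scores) / len(f1_scores)

gold = ["pos", "neg", "neg", "pos"]
pred = ["pos", "neg", "pos", "pos"]
print(accuracy(gold, pred), macro_f1(gold, pred))  # 0.75 and the macro-averaged F1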
D16-1072table_2
POS tagging performance of online and offline pruning with different r and λ on CTB5 and PD.
5
[['Online Pruning', 'r', '2', 'λ', '0.98'], ['Online Pruning', 'r', '4', 'λ', '0.98'], ['Online Pruning', 'r', '8', 'λ', '0.98'], ['Online Pruning', 'r', '16', 'λ', '0.98'], ['Online Pruning', 'r', '8', 'λ', '0.90'], ['Online Pruning', 'r', '8', 'λ', '0.95'], ['Online Pruning', 'r', '8', 'λ', '0.99'], ['Online Pruning', 'r', '8', 'λ', '1.00'], ['Offline Pruning', 'r', '8', 'λ', '0.9999'], ['Offline Pruning', 'r', '16', 'λ', '0.9999'], ['Offline Pruning', 'r', '32', 'λ', '0.9999'], ['Offline Pruning', 'r', '16', 'λ', '0.99'], ['Offline Pruning', 'r', '16', 'λ', '0.999'], ['Offline Pruning', 'r', '16', 'λ', '0.99999']]
2
[['Accuracy (%)', 'CTB5-dev'], ['Accuracy (%)', 'PD-dev'], ['#Tags (pruned)', 'CTB-side'], ['#Tags (pruned)', 'PD-side']]
[['94.25', '95.03', '2.0', '2.0'], ['95.06', '95.66', '3.9', '4.0'], ['95.14', '95.83', '6.3', '7.4'], ['95.12', '95.81', '7.8', '14.1'], ['95.15', '95.79', '3.7', '6.3'], ['95.13', '95.82', '5.1', '7.1'], ['95.15', '95.74', '7.4', '7.9'], ['95.15', '95.76', '8.0', '8.0'], ['94.95', '96.05', '4.1', '5.1'], ['95.15', '96.09', '5.2', '7.6'], ['95.13', '96.09', '5.5', '9.3'], ['94.42', '95.77', '1.6', '2.2'], ['95.02', '96.10', '2.6', '4.0'], ['95.10', '96.09', '6.8', '8.9']]
column
['Accuracy (%)', 'Accuracy (%)', '#Tags (pruned)', '#Tags (pruned)']
['Online Pruning', 'Offline Pruning']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%) || CTB5-dev</th> <th>Accuracy (%) || PD-dev</th> <th>#Tags (pruned) || CTB-side</th> <th>#Tags (pruned) || PD-side</th> </tr> </thead> <tbody> <tr> <td>Online Pruning || r || 2 || λ || 0.98</td> <td>94.25</td> <td>95.03</td> <td>2.0</td> <td>2.0</td> </tr> <tr> <td>Online Pruning || r || 4 || λ || 0.98</td> <td>95.06</td> <td>95.66</td> <td>3.9</td> <td>4.0</td> </tr> <tr> <td>Online Pruning || r || 8 || λ || 0.98</td> <td>95.14</td> <td>95.83</td> <td>6.3</td> <td>7.4</td> </tr> <tr> <td>Online Pruning || r || 16 || λ || 0.98</td> <td>95.12</td> <td>95.81</td> <td>7.8</td> <td>14.1</td> </tr> <tr> <td>Online Pruning || r || 8 || λ || 0.90</td> <td>95.15</td> <td>95.79</td> <td>3.7</td> <td>6.3</td> </tr> <tr> <td>Online Pruning || r || 8 || λ || 0.95</td> <td>95.13</td> <td>95.82</td> <td>5.1</td> <td>7.1</td> </tr> <tr> <td>Online Pruning || r || 8 || λ || 0.99</td> <td>95.15</td> <td>95.74</td> <td>7.4</td> <td>7.9</td> </tr> <tr> <td>Online Pruning || r || 8 || λ || 1.00</td> <td>95.15</td> <td>95.76</td> <td>8.0</td> <td>8.0</td> </tr> <tr> <td>Offline Pruning || r || 8 || λ || 0.9999</td> <td>94.95</td> <td>96.05</td> <td>4.1</td> <td>5.1</td> </tr> <tr> <td>Offline Pruning || r || 16 || λ || 0.9999</td> <td>95.15</td> <td>96.09</td> <td>5.2</td> <td>7.6</td> </tr> <tr> <td>Offline Pruning || r || 32 || λ || 0.9999</td> <td>95.13</td> <td>96.09</td> <td>5.5</td> <td>9.3</td> </tr> <tr> <td>Offline Pruning || r || 16 || λ || 0.99</td> <td>94.42</td> <td>95.77</td> <td>1.6</td> <td>2.2</td> </tr> <tr> <td>Offline Pruning || r || 16 || λ || 0.999</td> <td>95.02</td> <td>96.10</td> <td>2.6</td> <td>4.0</td> </tr> <tr> <td>Offline Pruning || r || 16 || λ || 0.99999</td> <td>95.10</td> <td>96.09</td> <td>6.8</td> <td>8.9</td> </tr> </tbody></table>
Table 2
table_2
D16-1072
5
emnlp2016
5 Experiments on POS Tagging. 5.1 Parameter Tuning. For both online and offline pruning, we need to decide the maximum number of single-side tag candidates r and the accumulative probability threshold λ for further truncating the candidates. Table 2 shows the tagging accuracies and the averaged numbers of single-side tags for each token after pruning. The first major row tunes the two hyperparameters for online pruning. We first fix λ = 0.98 and increase r from 2 to 8, leading to consistently improved accuracies on both CTB5-dev and PD-dev. No further improvement is gained with r = 16, indicating that tags below the top-8 are mostly very unlikely ones and thus insignificant for computing feature expectations. Then we fix r = 8 and try different λ. We find that λ has little effect on tagging accuracies but influences the numbers of remaining single-side tags. We choose r = 8 and λ = 0.98 for final evaluation. The second major row tunes r and λ for offline pruning. Different from online pruning, λ has a much greater effect on the number of remaining single-side tags. Under λ = 0.9999, increasing r from 8 to 16 leads to a 0.20% accuracy improvement on CTB5-dev, but using r = 32 has no further gain. Then we fix r = 16 and vary λ from 0.99 to 0.99999. We choose r = 16 and λ = 0.9999 for offline pruning for final evaluation, which leaves each word with about 5.2 CTB-tags and 7.6 PD-tags on average.
[2, 2, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
['5 Experiments on POS Tagging.', '5.1 Parameter Tuning.', 'For both online and offline pruning, we need to decide the maximum number of single-side tag candidates r and the accumulative probability threshold λ for further truncating the candidates.', 'Table 2 shows the tagging accuracies and the averaged numbers of single-side tags for each token after pruning.', 'The first major row tunes the two hyperparameters for online pruning.', 'We first fix λ = 0.98 and increase r from 2 to 8, leading to consistently improved accuracies on both CTB5-dev and PD-dev.', 'No further improvement is gained with r = 16, indicating that tags below the top-8 are mostly very unlikely ones and thus insignificant for computing feature expectations.', 'Then we fix r = 8 and try different λ.', 'We find that λ has little effect on tagging accuracies but influences the numbers of remaining single-side tags.', 'We choose r = 8 and λ = 0.98 for final evaluation.', 'The second major row tunes r and λ for offline pruning.', 'Different from online pruning, λ has a much greater effect on the number of remaining single-side tags.', 'Under λ = 0.9999, increasing r from 8 to 16 leads to a 0.20% accuracy improvement on CTB5-dev, but using r = 32 has no further gain.', 'Then we fix r = 16 and vary λ from 0.99 to 0.99999.', 'We choose r = 16 and λ = 0.9999 for offline pruning for final evaluation, which leaves each word with about 5.2 CTB-tags and 7.6 PD-tags on average.']
[None, None, ['Online Pruning', 'Offline Pruning', 'λ', 'r'], None, ['Online Pruning'], ['λ', 'r', 'CTB5-dev', 'PD-dev'], ['r'], ['λ', 'r'], ['λ'], ['r', 'λ'], ['r', 'λ'], ['λ'], ['r', 'λ', 'CTB5-dev'], ['r', 'λ'], ['r', 'λ', 'CTB-side', 'PD-side']]
1
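The tuning described in the record above keeps at most r single-side tag candidates per token and then truncates them with an accumulative probability threshold λ. The following Python sketch shows one plausible reading of that truncation rule; the function name, the probability-dictionary input and the concrete numbers are assumptions for illustration, not the authors' implementation.

def prune_tags(tag_probs, r, lam):
    # Rank tags by probability, keep at most r of them, and stop as soon as
    # their accumulated probability mass reaches the threshold lam.
    ranked = sorted(tag_probs.items(), key=lambda kv: kv[1], reverse=True)[:r]
    kept, cumulative = [], 0.0
    for tag, prob in ranked:
        kept.append(tag)
        cumulative += prob
        if cumulative >= lam:
            break
    return kept

# Hypothetical marginal probabilities for one token:
probs = {"NN": 0.70, "VV": 0.25, "JJ": 0.04, "AD": 0.01}
print(prune_tags(probs, r=8, lam=0.90))  # ['NN', 'VV'] -- 0.70 + 0.25 already covers 0.90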
D16-1072table_3
POS tagging performance of different approaches on CTB5 and PD.
1
[['Coupled (Offline)'], ['Coupled (Online)'], ['Coupled (No Prune)'], ['Coupled (Relaxed)'], ['Guide-feature'], ['Baseline'], ['Li et al. (2012b)']]
2
[['Accuracy (%)', 'CTB5-test'], ['Accuracy (%)', 'PD-test'], ['Speed', 'Toks/Sec']]
[['94.83', '95.90', '246'], ['94.74', '95.95', '365'], ['94.58', '95.79', '3'], ['94.63', '95.87', '127'], ['94.35', '95.63', '584'], ['94.07', '95.82', '1573'], ['94.60', '—', '—']]
column
['Accuracy (%)', 'Accuracy (%)', 'Speed']
['Coupled (Offline)', 'Coupled (Online)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%) || CTB5-test</th> <th>Accuracy (%) || PD-test</th> <th>Speed || Toks/Sec</th> </tr> </thead> <tbody> <tr> <td>Coupled (Offline)</td> <td>94.83</td> <td>95.90</td> <td>246</td> </tr> <tr> <td>Coupled (Online)</td> <td>94.74</td> <td>95.95</td> <td>365</td> </tr> <tr> <td>Coupled (No Prune)</td> <td>94.58</td> <td>95.79</td> <td>3</td> </tr> <tr> <td>Coupled (Relaxed)</td> <td>94.63</td> <td>95.87</td> <td>127</td> </tr> <tr> <td>Guide-feature</td> <td>94.35</td> <td>95.63</td> <td>584</td> </tr> <tr> <td>Baseline</td> <td>94.07</td> <td>95.82</td> <td>1573</td> </tr> <tr> <td>Li et al. (2012b)</td> <td>94.60</td> <td>—</td> <td>—</td> </tr> </tbody></table>
Table 3
table_3
D16-1072
6
emnlp2016
5.2 Main Results. Table 3 summarizes the accuracies on the test data and the tagging speed during the test phase. “Coupled (No Prune)” refers to the coupled model with complete mapping in Li et al. (2015), which maps each one-side tag to all the-other-side tags. “Coupled (Relaxed)” refers to the coupled model with relaxed mapping in Li et al. (2015), which maps a one-side tag to a manually-designed small set of the-other-side tags. Li et al. (2012b) report the state-of-the-art accuracy on this CTB data, with a joint model of Chinese POS tagging and dependency parsing. It is clear that both online and offline pruning greatly improve the efficiency of the coupled model by about two orders of magnitude, without the need for a carefully predefined set of tag-to-tag mapping rules. Moreover, the coupled model with offline pruning achieves a 0.76% accuracy improvement on CTB5-test over the baseline model, and 0.48% over our reimplemented guide-feature approach of Jiang et al. (2009). The gains on PD-test are marginal, possibly due to the large size of PD-train, similar to the results in Li et al. (2015).
[0, 1, 2, 2, 2, 1, 1, 1]
['5.2 Main Results.', 'Table 3 summarizes the accuracies on the test data and the tagging speed during the test phase.', '“Coupled (No Prune)” refers to the coupled model with complete mapping in Li et al. (2015), which maps each one-side tag to all the-other-side tags.', '“Coupled (Relaxed)” refers to the coupled model with relaxed mapping in Li et al. (2015), which maps a one-side tag to a manually-designed small set of the-other-side tags.', 'Li et al. (2012b) report the state-of-the-art accuracy on this CTB data, with a joint model of Chinese POS tagging and dependency parsing.', 'It is clear that both online and offline pruning greatly improve the efficiency of the coupled model by about two orders of magnitude, without the need for a carefully predefined set of tag-to-tag mapping rules.', 'Moreover, the coupled model with offline pruning achieves a 0.76% accuracy improvement on CTB5-test over the baseline model, and 0.48% over our reimplemented guide-feature approach of Jiang et al. (2009).', 'The gains on PD-test are marginal, possibly due to the large size of PD-train, similar to the results in Li et al. (2015).']
[None, None, ['Coupled (No Prune)'], ['Coupled (Relaxed)'], ['Li et al. (2012b)'], ['Coupled (Offline)', 'Coupled (Online)'], ['Coupled (Offline)', 'CTB5-test'], ['PD-test']]
1
D16-1072table_4
WS&POS tagging performance of online and offline pruning with different r and λ on CTB5 and PD.
5
[['Online Pruning', 'r', '8', 'λ', '1.00'], ['Online Pruning', 'r', '16', 'λ', '0.95'], ['Online Pruning', 'r', '16', 'λ', '0.99'], ['Online Pruning', 'r', '16', 'λ', '1.00'], ['Offline Pruning', 'r', '16', 'λ', '0.99']]
2
[['Accuracy (%)', 'CTB5-dev'], ['Accuracy (%)', 'PD-dev'], ['#Tags (pruned)', 'CTB-side'], ['#Tags (pruned)', 'PD-side']]
[['90.41', '89.91', '8.0', '8.0'], ['90.65', '90.22', '15.9', '16.0'], ['90.77', '90.49', '16.0', '16.0'], ['90.79', '90.49', '16.0', '16.0'], ['91.64', '91.92', '2.5', '3.5']]
column
['Accuracy (%)', 'Accuracy (%)', '#Tags (pruned)', '#Tags (pruned)']
['Online Pruning']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%) || CTB5-dev</th> <th>Accuracy (%) || PD-dev</th> <th>#Tags (pruned) || CTB-side</th> <th>#Tags (pruned) || PD-side</th> </tr> </thead> <tbody> <tr> <td>Online Pruning || r || 8 || λ || 1.00</td> <td>90.41</td> <td>89.91</td> <td>8.0</td> <td>8.0</td> </tr> <tr> <td>Online Pruning || r || 16 || λ || 0.95</td> <td>90.65</td> <td>90.22</td> <td>15.9</td> <td>16.0</td> </tr> <tr> <td>Online Pruning || r || 16 || λ || 0.99</td> <td>90.77</td> <td>90.49</td> <td>16.0</td> <td>16.0</td> </tr> <tr> <td>Online Pruning || r || 16 || λ || 1.00</td> <td>90.79</td> <td>90.49</td> <td>16.0</td> <td>16.0</td> </tr> <tr> <td>Offline Pruning || r || 8 || λ || 0.995</td> <td>91.22</td> <td>91.62</td> <td>2.6</td> <td>3.1</td> </tr> <tr> <td>Offline Pruning || r || 16 || λ || 0.995</td> <td>91.66</td> <td>91.85</td> <td>3.2</td> <td>4.3</td> </tr> <tr> <td>Offline Pruning || r || 32 || λ || 0.995</td> <td>91.67</td> <td>91.87</td> <td>3.5</td> <td>5.6</td> </tr> <tr> <td>Offline Pruning || r || 16 || λ || 0.95</td> <td>90.69</td> <td>91.30</td> <td>1.6</td> <td>2.1</td> </tr> <tr> <td>Offline Pruning || r || 16 || λ || 0.99</td> <td>91.64</td> <td>91.92</td> <td>2.5</td> <td>3.5</td> </tr> <tr> <td>Offline Pruning || r || 16 || λ || 0.999</td> <td>91.62</td> <td>91.75</td> <td>5.1</td> <td>6.4</td> </tr> </tbody></table>
Table 4
table_4
D16-1072
6
emnlp2016
Table 4 shows results for tuning r and λ. From the results, we can see that in the online pruning method, λ seems useless and r becomes the only threshold for pruning unlikely single-side tags. The accuracies are much inferior to those from the offline pruning approach. We believe that the accuracies can be further improved with larger r, which would nevertheless lead to a severe inefficiency issue. Based on the results, we choose r = 16 and λ = 1.00 for final evaluation.
[1, 1, 1, 2, 1]
['Table 4 shows results for tuning r and λ.', 'From the results, we can see that in the online pruning method, λ seems useless and r becomes the only threshold for pruning unlikely single-side tags.', 'The accuracies are much inferior to those from the offline pruning approach.', 'We believe that the accuracies can be further improved with larger r, which would nevertheless lead to a severe inefficiency issue.', 'Based on the results, we choose r = 16 and λ = 1.00 for final evaluation.']
[None, ['Online Pruning', 'λ'], ['Online Pruning', 'Offline Pruning'], ['r'], ['r', 'λ']]
1
D16-1072table_5
WS&POS tagging performance of different approaches on CTB5 and PD.
1
[['Coupled (Offline)'], ['Coupled (Online)'], ['Guide-feature'], ['Baseline']]
2
[['F (%) on CTB5-test', 'Only WS'], ['F (%) on CTB5-test', 'Joint WS&POS'], ['F (%) on PD-test', 'Only WS'], ['F (%) on PD-test', 'Joint WS&POS'], ['Speed (Char/Sec)', '-']]
[['95.55', '90.58', '96.12', '92.44', '115'], ['94.94', '89.58', '95.60', '91.56', '26'], ['95.07', '89.79', '95.66', '91.61', '27'], ['94.88', '89.49', '96.28', '92.47', '119']]
column
['F (%) on CTB5-test', 'F (%) on CTB5-test', 'F (%) on PD-test', 'F (%) on PD-test', 'Speed (Char/Sec)']
['Coupled (Offline)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P/R/F (%) on CTB5-test || Only WS</th> <th>P/R/F (%) on CTB5-test || Joint WS&amp;POS</th> <th>P/R/F (%) on PD-test || Only WS</th> <th>P/R/F (%) on PD-test || Joint WS&amp;POS</th> <th>Speed || Char/Sec</th> </tr> </thead> <tbody> <tr> <td>Coupled (Offline)</td> <td>95.65/95.46/95.55</td> <td>90.68/90.49/90.58</td> <td>96.39/95.86/96.12</td> <td>92.70/92.19/92.44</td> <td>115</td> </tr> <tr> <td>Coupled (Online)</td> <td>95.17/94.71/94.94</td> <td>89.80/89.37/89.58</td> <td>95.76/95.45/95.60</td> <td>91.71/91.41/91.56</td> <td>26</td> </tr> <tr> <td>Guide-feature</td> <td>95.26/94.89/95.07</td> <td>89.96/89.61/89.79</td> <td>95.99/95.33/95.66</td> <td>91.92/91.30/91.61</td> <td>27</td> </tr> <tr> <td>Baseline</td> <td>95.00/94.77/94.88</td> <td>89.60/89.38/89.49</td> <td>96.56/96.00/96.28</td> <td>92.74/92.20/92.47</td> <td>119</td> </tr> </tbody></table>
Table 5
table_5
D16-1072
7
emnlp2016
6.2 Main Results. Table 5 summarizes the accuracies on the test data and the tagging speed (characters per second) during the test phase. “Coupled (No Prune)” is not tried due to the prohibitive tag set size in joint WS&POS tagging, and “Coupled (Relaxed)” is also skipped since it seems impossible to manually design reasonable tag-to-tag mapping rules in this case. In terms of efficiency, the coupled model with offline pruning is on par with the baseline single-side tagging model. In terms of F-score, the coupled model with offline pruning achieves 0.67% (WS) and 1.09% (WS&POS) gains on CTB5-test over the baseline model, and 0.48% (WS) and 0.79% (WS&POS) over our reimplemented guide-feature approach of Jiang et al. (2009). Similar to the case of POS tagging, the baseline model is very competitive on PD-test due to the large scale of PD-train.
[2, 1, 2, 1, 1, 2]
['6.2 Main Results.', 'Table 5 summarizes the accuracies on the test data and the tagging speed (characters per second) during the test phase.', '“Coupled (No Prune)” is not tried due to the prohibitive tag set size in joint WS&POS tagging, and “Coupled (Relaxed)” is also skipped since it seems impossible to manually design reasonable tag-to-tag mapping rules in this case.', 'In terms of efficiency, the coupled model with offline pruning is on par with the baseline single-side tagging model.', 'In terms of F-score, the coupled model with offline pruning achieves 0.67% (WS) and 1.09% (WS&POS) gains on CTB5-test over the baseline model, and 0.48% (WS) and 0.79% (WS&POS) over our reimplemented guide-feature approach of Jiang et al. (2009).', 'Similar to the case of POS tagging, the baseline model is very competitive on PD-test due to the large scale of PD-train.']
[None, None, None, ['Speed (Char/Sec)'], ['F (%) on CTB5-test', 'Coupled (Offline)'], None]
1
D16-1072table_6
WS&POS tagging performance of different approaches on CTB5X and PD.
1
[['Coupled (Offline)'], ['Guide-feature'], ['Baseline'], ['Sun and Wan (2012)'], ['Jiang et al. (2009)']]
2
[['F (%) on CTB5X-test', 'Only WS'], ['F (%) on CTB5X-test', 'Joint WS&POS']]
[['98.01', '94.39'], ['97.96', '94.06'], ['97.37', '93.23'], ['—', '94.36'], ['98.23', '94.03']]
column
['F (%) on CTB5X-test', 'F (%) on CTB5X-test']
['Coupled (Offline)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F (%) on CTB5X-test || Only WS</th> <th>F (%) on CTB5X-test || Joint WS&amp;POS</th> </tr> </thead> <tbody> <tr> <td>Coupled (Offline)</td> <td>98.01</td> <td>94.39</td> </tr> <tr> <td>Guide-feature</td> <td>97.96</td> <td>94.06</td> </tr> <tr> <td>Baseline</td> <td>97.37</td> <td>93.23</td> </tr> <tr> <td>Sun and Wan (2012)</td> <td>—</td> <td>94.36</td> </tr> <tr> <td>Jiang et al. (2009)</td> <td>98.23</td> <td>94.03</td> </tr> </tbody></table>
Table 6
table_6
D16-1072
8
emnlp2016
6.4 Comparison with Previous Work. In order to compare with previous work, we also run our models on CTB5X and PD, where CTB5X adopts a different data split of CTB5 and is widely used in previous research on joint WS&POS tagging (Jiang et al., 2009; Sun and Wan, 2012). CTB5X-dev/test only contain 352/348 sentences respectively. Table 6 presents the F scores on CTB5X-test. We can see that the coupled model with offline pruning achieves 0.64% (WS) and 1.16% (WS&POS) F-score improvements over the baseline model, and 0.05% (WS) and 0.33% (WS&POS) over the guide-feature approach. The original guide-feature method in Jiang et al. (2009) achieves 98.23% and 94.03% F-score, which is very close to the results of our reimplemented model. The sub-word stacking approach of Sun and Wan (2012) can be understood as a more complex variant of the basic guide-feature method.
[2, 2, 2, 1, 1, 1, 2]
['6.4 Comparison with Previous Work.', 'In order to compare with previous work, we also run our models on CTB5X and PD, where CTB5X adopts a different data split of CTB5 and is widely used in previous research on joint WS&POS tagging (Jiang et al., 2009; Sun and Wan, 2012).', 'CTB5X-dev/test only contain 352/348 sentences respectively.', 'Table 6 presents the F scores on CTB5X-test.', 'We can see that the coupled model with offline pruning achieves 0.64% (WS) and 1.16% (WS&POS) F-score improvements over the baseline model, and 0.05% (WS) and 0.33% (WS&POS) over the guide-feature approach.', 'The original guide-feature method in Jiang et al. (2009) achieves 98.23% and 94.03% F-score, which is very close to the results of our reimplemented model.', 'The sub-word stacking approach of Sun and Wan (2012) can be understood as a more complex variant of the basic guide-feature method.']
[None, None, None, ['F (%) on CTB5X-test'], ['Coupled (Offline)', 'Guide-feature', 'Baseline', 'F (%) on CTB5X-test'], ['Jiang et al. (2009)', 'F (%) on CTB5X-test', 'Coupled (Offline)'], ['Sun and Wan (2012)']]
1
D16-1075table_3
Performance of various approaches on stream summarization on five topics.
1
[['Random'], ['NB'], ['B-HAC'], ['TaHBM'], ['Ge et al. (2015b)'], ['BINet-NodeRank'], ['BINet-AreaRank']]
2
[['sports', 'P@50'], ['sports', 'P@100'], ['politics', 'P@50'], ['politics', 'P@100'], ['disaster', 'P@50'], ['disaster', 'P@100'], ['military', 'P@50'], ['military', 'P@100'], ['comprehensive', 'P@50'], ['comprehensive', 'P@100']]
[['0.02', '0.08', '0', '0', '0.02', '0.04', '0', '0', '0.02', '0.03'], ['0.08', '0.12', '0.18', '0.19', '0.42', '0.36', '0.18', '0.17', '0.38', '0.31'], ['0.10', '0.13', '0.30', '0.26', '0.50', '0.47', '0.30', '0.22', '0.36', '0.32'], ['0.18', '0.15', '0.30', '0.29', '0.50', '0.43', '0.46', '0.36', '0.38', '0.33'], ['0.20', '0.15', '0.38', '0.36', '0.64', '0.53', '0.54', '0.41', '0.40', '0.33'], ['0.24', '0.20', '0.38', '0.30', '0.54', '0.51', '0.48', '0.43', '0.36', '0.33'], ['0.40', '0.33', '0.40', '0.34', '0.80', '0.62', '0.50', '0.49', '0.32', '0.30']]
column
['P@50', 'P@100', 'P@50', 'P@100', 'P@50', 'P@100', 'P@50', 'P@100', 'P@50', 'P@100']
['BINet-NodeRank', 'BINet-AreaRank']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>sports || P@50</th> <th>sports || P@100</th> <th>politics || P@50</th> <th>politics || P@100</th> <th>disaster || P@50</th> <th>disaster || P@100</th> <th>military || P@50</th> <th>military || P@100</th> <th>comprehensive || P@50</th> <th>comprehensive || P@100</th> </tr> </thead> <tbody> <tr> <td>Random</td> <td>0.02</td> <td>0.08</td> <td>0</td> <td>0</td> <td>0.02</td> <td>0.04</td> <td>0</td> <td>0</td> <td>0.02</td> <td>0.03</td> </tr> <tr> <td>NB</td> <td>0.08</td> <td>0.12</td> <td>0.18</td> <td>0.19</td> <td>0.42</td> <td>0.36</td> <td>0.18</td> <td>0.17</td> <td>0.38</td> <td>0.31</td> </tr> <tr> <td>B-HAC</td> <td>0.10</td> <td>0.13</td> <td>0.30</td> <td>0.26</td> <td>0.50</td> <td>0.47</td> <td>0.30</td> <td>0.22</td> <td>0.36</td> <td>0.32</td> </tr> <tr> <td>TaHBM</td> <td>0.18</td> <td>0.15</td> <td>0.30</td> <td>0.29</td> <td>0.50</td> <td>0.43</td> <td>0.46</td> <td>0.36</td> <td>0.38</td> <td>0.33</td> </tr> <tr> <td>Ge et al. (2015b)</td> <td>0.20</td> <td>0.15</td> <td>0.38</td> <td>0.36</td> <td>0.64</td> <td>0.53</td> <td>0.54</td> <td>0.41</td> <td>0.40</td> <td>0.33</td> </tr> <tr> <td>BINet-NodeRank</td> <td>0.24</td> <td>0.20</td> <td>0.38</td> <td>0.30</td> <td>0.54</td> <td>0.51</td> <td>0.48</td> <td>0.43</td> <td>0.36</td> <td>0.33</td> </tr> <tr> <td>BINet-AreaRank</td> <td>0.40</td> <td>0.33</td> <td>0.40</td> <td>0.34</td> <td>0.80</td> <td>0.62</td> <td>0.50</td> <td>0.49</td> <td>0.32</td> <td>0.30</td> </tr> </tbody></table>
Table 3
table_3
D16-1075
7
emnlp2016
The results are shown in Table 3. It can be clearly observed that BINet-based approaches outperform baselines and perform comparably to the state-of-the-art model on generating the summaries on most topics: AreaRank achieves a significant improvement over the state-of-the-art model on sports and disasters, and performs comparably on politics and military, and NodeRank achieves performance comparable to the previous state-of-the-art model though it is inferior to AreaRank on most topics. Among these five topics, almost all models perform well on disaster and military topics because disaster and military reference summaries have more entries than topics such as politics and sports, and the topics of event entries in the summaries are focused. The high-quality training data benefits models’ performance, especially for AreaRank, which is purely data-driven. In contrast, on sports and politics, the number of entries in the reference summaries is small, which results in weaker supervision and affects the performance of the models. It is notable that AreaRank does not perform well on generating the comprehensive summary, in which topics of event entries are miscellaneous. The reason for the undesirable performance is that the topics of event entries in the comprehensive reference summary are not focused, which results in very few reference (positive) examples for each topic. As a result, the miscellaneousness of topics of positive examples makes them tend to be overwhelmed by large numbers of negative examples during training the model, leading to very weak supervision and making it difficult for AreaRank to learn the patterns of positive examples. Compared to AreaRank, the strategy of selecting documents for generating event entries in other baselines and NodeRank uses more or less heuristic knowledge, which makes these models perform stably even if the training examples are not sufficient.
[1, 1, 1, 1, 1, 1, 2, 2, 2]
['The results are shown in Table 3.', 'It can be clearly observed that BINet-based approaches outperform baselines and perform comparably to the state-of-the-art model on generating the summaries on most topics: AreaRank achieves a significant improvement over the state-of-the-art model on sports and disasters, and performs comparably on politics and military, and NodeRank achieves performance comparable to the previous state-of-the-art model though it is inferior to AreaRank on most topics.', 'Among these five topics, almost all models perform well on disaster and military topics because disaster and military reference summaries have more entries than topics such as politics and sports, and the topics of event entries in the summaries are focused.', 'The high-quality training data benefits models’ performance, especially for AreaRank, which is purely data-driven.', 'In contrast, on sports and politics, the number of entries in the reference summaries is small, which results in weaker supervision and affects the performance of the models.', 'It is notable that AreaRank does not perform well on generating the comprehensive summary, in which topics of event entries are miscellaneous.', 'The reason for the undesirable performance is that the topics of event entries in the comprehensive reference summary are not focused, which results in very few reference (positive) examples for each topic.', 'As a result, the miscellaneousness of topics of positive examples makes them tend to be overwhelmed by large numbers of negative examples during training the model, leading to very weak supervision and making it difficult for AreaRank to learn the patterns of positive examples.', 'Compared to AreaRank, the strategy of selecting documents for generating event entries in other baselines and NodeRank uses more or less heuristic knowledge, which makes these models perform stably even if the training examples are not sufficient.']
[None, ['BINet-NodeRank', 'BINet-AreaRank'], ['sports', 'politics', 'disaster', 'military', 'comprehensive'], ['BINet-AreaRank'], ['sports', 'politics'], ['BINet-AreaRank'], None, ['BINet-AreaRank'], ['BINet-NodeRank', 'BINet-AreaRank']]
1
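P@50 and P@100 in the record above are precision-at-k values over the ranked event entries. A tiny illustrative helper is sketched below; the entry and relevance representation is an assumption made for the example, not the authors' code.

def precision_at_k(ranked_entries, relevant, k):
    # Fraction of the k highest-ranked entries that are judged relevant.
    top_k = ranked_entries[:k]
    return sum(1 for entry in top_k if entry in relevant) / k

ranked = ["event1", "event2", "event3", "event4"]
gold = {"event2", "event4"}
print(precision_at_k(ranked, gold, k=2))  # 0.5: only event2 is relevant in the top 2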
D16-1078table_2
The performances on the Abstracts sub-corpus.
3
[['Speculation', 'Systems', 'Baseline'], ['Speculation', 'Systems', 'CNN_C'], ['Speculation', 'Systems', 'CNN_D'], ['Negation', 'Systems', 'Baseline'], ['Negation', 'Systems', 'CNN_C'], ['Negation', 'Systems', 'CNN_D']]
1
[['P (%)'], ['R (%)'], ['F1'], ['PCLB (%)'], ['PCRB (%)'], ['PCS (%)']]
[['94.71', '90.54', '92.56', '84.81', '85.11', '72.47'], ['95.95', '95.19', '95.56', '93.16', '91.50', '85.75'], ['92.25', '94.98', '93.55', '86.39', '84.50', '74.43'], ['85.46', '72.95', '78.63', '84.00', '58.29', '46.42'], ['85.10', '92.74', '89.64', '81.04', '87.73', '70.86'], ['89.49', '90.54', '89.91', '91.91', '83.54', '77.14']]
column
['P (%)', 'R (%)', 'F1', 'PCLB (%)', 'PCRB (%)', 'PCS (%)']
['CNN_C', 'CNN_D']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P (%)</th> <th>R (%)</th> <th>F1</th> <th>PCLB (%)</th> <th>PCRB (%)</th> <th>PCS (%)</th> </tr> </thead> <tbody> <tr> <td>Speculation || Systems || Baseline</td> <td>94.71</td> <td>90.54</td> <td>92.56</td> <td>84.81</td> <td>85.11</td> <td>72.47</td> </tr> <tr> <td>Speculation || Systems || CNN_C</td> <td>95.95</td> <td>95.19</td> <td>95.56</td> <td>93.16</td> <td>91.50</td> <td>85.75</td> </tr> <tr> <td>Speculation || Systems || CNN_D</td> <td>92.25</td> <td>94.98</td> <td>93.55</td> <td>86.39</td> <td>84.50</td> <td>74.43</td> </tr> <tr> <td>Negation || Systems || Baseline</td> <td>85.46</td> <td>72.95</td> <td>78.63</td> <td>84.00</td> <td>58.29</td> <td>46.42</td> </tr> <tr> <td>Negation || Systems || CNN_C</td> <td>85.10</td> <td>92.74</td> <td>89.64</td> <td>81.04</td> <td>87.73</td> <td>70.86</td> </tr> <tr> <td>Negation || Systems || CNN_D</td> <td>89.49</td> <td>90.54</td> <td>89.91</td> <td>91.91</td> <td>83.54</td> <td>77.14</td> </tr> </tbody></table>
Table 2
table_2
D16-1078
7
emnlp2016
4.3 Experimental Results on Abstracts. Table 2 summarizes the performances of scope detection on Abstracts. In Table 2, CNN_C and CNN_D refer to the CNN-based model with constituency paths and dependency paths, respectively (the same below). It shows that our CNN-based models (both CNN_C and CNN_D) can achieve better performances than the baseline in most measurements. This indicates that our CNN-based models can better extract and model effective features. Besides, compared to the baseline, our CNN-based models consider fewer features and need less human intervention. It also manifests that our CNN-based models improve significantly more on negation scope detection than on speculation scope detection. Much of this is due to the better ability of our CNN-based models in identifying the right boundaries of scopes than the left ones on negation scope detection, with the huge gains of 29.44% and 25.25% on PCRB using CNN_C and CNN_D, respectively. Table 2 illustrates that the performance of speculation scope detection is higher than that of negation (Best PCS: 85.75% vs 77.14%). It is mainly attributed to the shorter scopes of negation cues. Under the circumstances that the average length of negation sentences is almost as long as that of speculation ones (29.28 vs 29.77), shorter negation scopes mean that more tokens do not belong to the scopes, indicating more negative instances. The imbalance between positive and negative instances has negative effects on both the baseline and the CNN-based models for negation scope detection. Table 2 also shows that our CNN_D outperforms CNN_C in negation scope detection (PCS: 77.14% vs 70.86%), while our CNN_C performs better than CNN_D in speculation scope detection (PCS: 85.75% vs 74.43%). To explore the results of our CNN-based models in detail, we present the analysis of top 10 speculative and negative cues below on CNN_C and CNN_D, respectively.
[2, 1, 2, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 0]
['4.3 Experimental Results on Abstracts.', 'Table 2 summarizes the performances of scope detection on Abstracts.', 'In Table 2, CNN_C and CNN_D refer to the CNN-based model with constituency paths and dependency paths, respectively (the same below).', 'It shows that our CNN-based models (both CNN_C and CNN_D) can achieve better performances than the baseline in most measurements.', 'This indicates that our CNN-based models can better extract and model effective features.', 'Besides, compared to the baseline, our CNN-based models consider fewer features and need less human intervention.', 'It also manifests that our CNN-based models improve significantly more on negation scope detection than on speculation scope detection.', 'Much of this is due to the better ability of our CNN-based models in identifying the right boundaries of scopes than the left ones on negation scope detection, with the huge gains of 29.44% and 25.25% on PCRB using CNN_C and CNN_D, respectively.', 'Table 2 illustrates that the performance of speculation scope detection is higher than that of negation (Best PCS: 85.75% vs 77.14%).', 'It is mainly attributed to the shorter scopes of negation cues.', 'Under the circumstances that the average length of negation sentences is almost as long as that of speculation ones (29.28 vs 29.77), shorter negation scopes mean that more tokens do not belong to the scopes, indicating more negative instances.', 'The imbalance between positive and negative instances has negative effects on both the baseline and the CNN-based models for negation scope detection.', 'Table 2 also shows that our CNN_D outperforms CNN_C in negation scope detection (PCS: 77.14% vs 70.86%), while our CNN_C performs better than CNN_D in speculation scope detection (PCS: 85.75% vs 74.43%).', 'To explore the results of our CNN-based models in detail, we present the analysis of top 10 speculative and negative cues below on CNN_C and CNN_D, respectively.']
[None, None, ['CNN_C', 'CNN_D'], ['CNN_C', 'CNN_D', 'Baseline'], ['CNN_C', 'CNN_D'], ['CNN_C', 'CNN_D', 'Baseline'], ['CNN_C', 'CNN_D'], ['CNN_C', 'CNN_D', 'PCRB (%)'], ['PCS (%)'], None, None, ['Negation', 'CNN_C', 'CNN_D'], ['Negation', 'CNN_C', 'CNN_D'], None]
1
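PCLB, PCRB and PCS in the record above are usually read as the percentage of cues whose predicted scope has the correct left boundary, the correct right boundary, and the exactly correct scope, respectively. A rough sketch under that reading is given below; the (left, right) token-index span format is an assumption, not the authors' evaluation code.

def scope_metrics(gold_spans, pred_spans):
    # gold_spans / pred_spans: one (left, right) token-index pair per cue.
    n = len(gold_spans)
    pclb = sum(g[0] == p[0] for g, p in zip(gold_spans, pred_spans)) / n
    pcrb = sum(g[1] == p[1] for g, p in zip(gold_spans, pred_spans)) / n
    pcs = sum(g == p for g, p in zip(gold_spans, pred_spans)) / n
    return 100 * pclb, 100 * pcrb, 100 * pcs

# Two cues: the first predicted scope misses its right boundary, the second is exact.
print(scope_metrics([(2, 9), (4, 7)], [(2, 8), (4, 7)]))  # (100.0, 50.0, 50.0)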
D16-1078table_4
Comparison of our CNN-based model with the state-of-the-art systems.
3
[['Spe', 'System', 'Morante (2009a)'], ['Spe', 'System', 'Özgür (2009)'], ['Spe', 'System', 'Velldal (2012)'], ['Spe', 'System', 'Zou (2013)'], ['Spe', 'System', 'Ours'], ['Neg', 'System', 'Morante (2008)'], ['Neg', 'System', 'Morante (2009b)'], ['Neg', 'System', 'Li (2010)'], ['Neg', 'System', 'Velldal (2012)'], ['Neg', 'System', 'Zou (2013)'], ['Neg', 'System', 'Ours']]
1
[['Abstracts'], ['Cli'], ['Papers']]
[['77.13', '60.59', '47.94'], ['79.89', 'N/A', '61.13'], ['79.56', '78.69', '75.15'], ['84.21', '72.92', '67.24'], ['85.75', '73.92', '59.82'], ['57.33', 'N/A', 'N/A'], ['73.36', '87.27', '50.26'], ['81.84', '89.79', '64.02'], ['74.35', '90.74', '70.21'], ['76.90', '85.31', '61.19'], ['77.14', '89.66', '55.32']]
column
['PCS', 'PCS', 'PCS']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Abstracts</th> <th>Cli</th> <th>Papers</th> </tr> </thead> <tbody> <tr> <td>Spe || System || Morante (2009a)</td> <td>77.13</td> <td>60.59</td> <td>47.94</td> </tr> <tr> <td>Spe || System || Özgür (2009)</td> <td>79.89</td> <td>N/A</td> <td>61.13</td> </tr> <tr> <td>Spe || System || Velldal (2012)</td> <td>79.56</td> <td>78.69</td> <td>75.15</td> </tr> <tr> <td>Spe || System || Zou (2013)</td> <td>84.21</td> <td>72.92</td> <td>67.24</td> </tr> <tr> <td>Spe || System || Ours</td> <td>85.75</td> <td>73.92</td> <td>59.82</td> </tr> <tr> <td>Neg || System || Morante (2008)</td> <td>57.33</td> <td>N/A</td> <td>N/A</td> </tr> <tr> <td>Neg || System || Morante (2009b)</td> <td>73.36</td> <td>87.27</td> <td>50.26</td> </tr> <tr> <td>Neg || System || Li (2010)</td> <td>81.84</td> <td>89.79</td> <td>64.02</td> </tr> <tr> <td>Neg || System || Velldal (2012)</td> <td>74.35</td> <td>90.74</td> <td>70.21</td> </tr> <tr> <td>Neg || System || Zou (2013)</td> <td>76.90</td> <td>85.31</td> <td>61.19</td> </tr> <tr> <td>Neg || System || Ours</td> <td>77.14</td> <td>89.66</td> <td>55.32</td> </tr> </tbody></table>
Table 4
table_4
D16-1078
9
emnlp2016
Table 4 compares our CNN-based models with the state-of-the-art systems. It shows that our CNN-based models can achieve higher PCSs (+1.54%) than those of the state-of-the-art systems for speculation scope detection and the second highest PCS for negation scope detection on Abstracts, and can get comparable PCSs on Clinical Records (73.92% vs 78.69% for speculation scopes, 89.66% vs 90.74% for negation scopes). It is worth noting that Abstracts and Clinical Records come from different genres. It also displays that our CNN-based models perform worse than the state-of-the-art on Full Papers due to the complex syntactic structures of the sentences and the cross-domain nature of our evaluation. Although our evaluation on Clinical Records is cross-domain, the sentences in Clinical Records are much simpler and the results on Clinical Records are satisfactory. Recall that our CNN-based models are all trained on Abstracts. Another reason is that those state-of-the-art systems on Full Papers (e.g., Li et al., 2010; Velldal et al., 2012) are tree-based, instead of token-based. Li et al. (2010) proposed a semantic parsing framework and focused on determining whether a constituent, rather than a word, is in the scope of a negative cue. Velldal et al. (2012) presented a hybrid framework, combining a rule-based approach using dependency structures and a data-driven approach for selecting appropriate subtrees in constituent structures. Normally, tree-based models can better capture long-distance syntactic dependency than token-based ones. Compared to those tree-based models, however, our CNN-based model needs less manual intervention. To improve the performance of the scope detection task, we will explore this alternative in our future work.
[1, 1, 2, 1, 2, 2, 2, 2, 2, 2, 2, 0]
['Table 4 compares our CNN-based models with the state-of-the-art systems.', 'It shows that our CNN-based models can achieve higher PCSs (+1.54%) than those of the state-of-the-art systems for speculation scope detection and the second highest PCS for negation scope detection on Abstracts, and can get comparable PCSs on Clinical Records (73.92% vs 78.69% for speculation scopes, 89.66% vs 90.74% for negation scopes).', 'It is worth noting that Abstracts and Clinical Records come from different genres.', 'It also displays that our CNN-based models perform worse than the state-of-the-art on Full Papers due to the complex syntactic structures of the sentences and the cross-domain nature of our evaluation.', 'Although our evaluation on Clinical Records is cross-domain, the sentences in Clinical Records are much simpler and the results on Clinical Records are satisfactory.', 'Recall that our CNN-based models are all trained on Abstracts.', 'Another reason is that those state-of-the-art systems on Full Papers (e.g., Li et al., 2010; Velldal et al., 2012) are tree-based, instead of token-based.', 'Li et al. (2010) proposed a semantic parsing framework and focused on determining whether a constituent, rather than a word, is in the scope of a negative cue.', 'Velldal et al. (2012) presented a hybrid framework, combining a rule-based approach using dependency structures and a data-driven approach for selecting appropriate subtrees in constituent structures.', 'Normally, tree-based models can better capture long-distance syntactic dependency than token-based ones.', 'Compared to those tree-based models, however, our CNN-based model needs less manual intervention.', 'To improve the performance of the scope detection task, we will explore this alternative in our future work.']
[['System'], ['Ours', 'Abstracts', 'Cli'], ['Abstracts', 'Cli'], ['Ours', 'System'], ['Cli'], ['Ours', 'Abstracts'], None, ['Li (2010)'], ['Velldal (2012)'], None, ['Ours'], None]
1
D16-1080table_4
Effects of embedding on performance. WEU, WENU, REU and RENU represent word embedding update, word embedding without update, random embedding update and random embedding without update respectively.
1
[['WEU'], ['WENU'], ['REU'], ['RENU']]
1
[['P'], ['R'], ['F1']]
[['80.74%', '81.19%', '80.97%'], ['74.10%', '69.30%', '71.62%'], ['79.01%', '79.75%', '79.38%'], ['78.16%', '64.55%', '70.70%']]
column
['P', 'R', 'F1']
['WEU']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>WEU</td> <td>80.74%</td> <td>81.19%</td> <td>80.97%</td> </tr> <tr> <td>WENU</td> <td>74.10%</td> <td>69.30%</td> <td>71.62%</td> </tr> <tr> <td>REU</td> <td>79.01%</td> <td>79.75%</td> <td>79.38%</td> </tr> <tr> <td>RENU</td> <td>78.16%</td> <td>64.55%</td> <td>70.70%</td> </tr> </tbody></table>
Table 4
table_4
D16-1080
7
emnlp2016
Table 4 lists the effects of word embedding. We can see that the performance when updating the word embedding is better than when not updating, and the performance of word embedding is a little better than random word embedding. The main reason is that the vocabulary size is 147,377, but the number of words from tweets that exist in the word embedding trained on the Google News dataset is just 35,133. This means that 76.2% of the words are missing. This also confirms that the proposed joint-layer RNN is more suitable for keyphrase extraction on Twitter.
[1, 1, 2, 2, 2]
['Table 4 lists the effects of word embedding.', 'We can see that the performance when updating the word embedding is better than when not updating, and the performance of word embedding is a little better than random word embedding.', 'The main reason is that the vocabulary size is 147,377, but the number of words from tweets that exist in the word embedding trained on the Google News dataset is just 35,133.', 'This means that 76.2% of the words are missing.', 'This also confirms that the proposed joint-layer RNN is more suitable for keyphrase extraction on Twitter.']
[None, ['WEU', 'WENU', 'REU', 'RENU'], None, None, None]
1
D16-1083table_3
Classification results across the behavioral features (BF), the reviewer embeddings (RE), product embeddings (PE) and bigram of the review texts. Training uses balanced data (50:50). Testing uses two class distributions (C.D.): 50:50 (balanced) and Natural Distribution (N.D.). Improvements of our method are statistically significant with p<0.005 based on paired t-test.
3
[['Method', 'SPEAGLE+(80%)', '50.50.00'], ['Method', 'SPEAGLE+(80%)', 'N.D.'], ['Method', 'Mukherjee_BF', '50.50.00'], ['Method', 'Mukherjee_BF', 'N.D.'], ['Method', 'Mukherjee_BF+Bigram', '50.50.00'], ['Method', 'Mukherjee_BF+Bigram', 'N.D.'], ['Method', 'Ours_RE', '50.50.00'], ['Method', 'Ours_RE', 'N.D.'], ['Method', 'Ours_RE+PE', '50.50.00'], ['Method', 'Ours_RE+PE', 'N.D.'], ['Method', 'Ours_RE+PE+Bigram', '50.50.00'], ['Method', 'Ours_RE+PE+Bigram', 'N.D.']]
2
[['P', 'Hotel'], ['P', 'Restaurant'], ['R', 'Hotel'], ['R', 'Restaurant'], ['F1', 'Hotel'], ['F1', 'Restaurant'], ['A', 'Hotel'], ['A', 'Restaurant']]
[['75.7', '80.5', '83', '83.2', '79.1', '81.8', '81', '82.5'], ['26.5', '50.1', '56', '70.5', '36', '58.6', '80.4', '82'], ['82.4', '82.8', '85.2', '88.5', '83.7', '85.6', '83.8', '83.3'], ['41.4', '48.2', '84.6', '87.9', '55.6', '62.3', '82.4', '78.6'], ['82.8', '84.5', '86.9', '87.8', '84.8', '86.1', '85.1', '86.5'], ['46.5', '48.9', '82.5', '87.3', '59.4', '62.7', '84.9', '82.3'], ['83.3', '85.4', '88.1', '90.2', '85.6', '87.7', '85.5', '87.4'], ['47.1', '56.9', '83.5', '90.1', '60.2', '69.8', '85', '85.8'], ['83.6', '86', '89', '90.7', '86.2', '88.3', '85.7', '88'], ['47.5', '57.4', '84.1', '89.9', '60.7', '70.1', '85.3', '86.1'], ['84.2', '86.8', '89.9', '91.8', '87', '89.2', '86.5', '89.9'], ['48.2', '58.2', '85', '90.3', '61.5', '70.8', '85.9', '87.8']]
column
['P', 'P', 'R', 'R', 'F1', 'F1', 'A', 'A']
['Ours_RE', 'Ours_RE+PE', 'Ours_RE+PE+Bigram']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P || Hotel</th> <th>P || Restaurant</th> <th>R || Hotel</th> <th>R || Restaurant</th> <th>F1 || Hotel</th> <th>F1 || Restaurant</th> <th>A || Hotel</th> <th>A || Restaurant</th> </tr> </thead> <tbody> <tr> <td>Method || SPEAGLE+(80%) || 50.50.00</td> <td>75.7</td> <td>80.5</td> <td>83</td> <td>83.2</td> <td>79.1</td> <td>81.8</td> <td>81</td> <td>82.5</td> </tr> <tr> <td>Method || SPEAGLE+(80%) || N.D.</td> <td>26.5</td> <td>50.1</td> <td>56</td> <td>70.5</td> <td>36</td> <td>58.6</td> <td>80.4</td> <td>82</td> </tr> <tr> <td>Method || Mukherjee_BF || 50.50.00</td> <td>82.4</td> <td>82.8</td> <td>85.2</td> <td>88.5</td> <td>83.7</td> <td>85.6</td> <td>83.8</td> <td>83.3</td> </tr> <tr> <td>Method || Mukherjee_BF || N.D.</td> <td>41.4</td> <td>48.2</td> <td>84.6</td> <td>87.9</td> <td>55.6</td> <td>62.3</td> <td>82.4</td> <td>78.6</td> </tr> <tr> <td>Method || Mukherjee_BF+Bigram || 50.50.00</td> <td>82.8</td> <td>84.5</td> <td>86.9</td> <td>87.8</td> <td>84.8</td> <td>86.1</td> <td>85.1</td> <td>86.5</td> </tr> <tr> <td>Method || Mukherjee_BF+Bigram || N.D.</td> <td>46.5</td> <td>48.9</td> <td>82.5</td> <td>87.3</td> <td>59.4</td> <td>62.7</td> <td>84.9</td> <td>82.3</td> </tr> <tr> <td>Method || Ours_RE || 50.50.00</td> <td>83.3</td> <td>85.4</td> <td>88.1</td> <td>90.2</td> <td>85.6</td> <td>87.7</td> <td>85.5</td> <td>87.4</td> </tr> <tr> <td>Method || Ours_RE || N.D.</td> <td>47.1</td> <td>56.9</td> <td>83.5</td> <td>90.1</td> <td>60.2</td> <td>69.8</td> <td>85</td> <td>85.8</td> </tr> <tr> <td>Method || Ours_RE+PE || 50.50.00</td> <td>83.6</td> <td>86</td> <td>89</td> <td>90.7</td> <td>86.2</td> <td>88.3</td> <td>85.7</td> <td>88</td> </tr> <tr> <td>Method || Ours_RE+PE || N.D.</td> <td>47.5</td> <td>57.4</td> <td>84.1</td> <td>89.9</td> <td>60.7</td> <td>70.1</td> <td>85.3</td> <td>86.1</td> </tr> <tr> <td>Method || Ours_RE+PE+Bigram || 50.50.00</td> <td>84.2</td> <td>86.8</td> <td>89.9</td> <td>91.8</td> <td>87</td> <td>89.2</td> <td>86.5</td> <td>89.9</td> </tr> <tr> <td>Method || Ours_RE+PE+Bigram || N.D.</td> <td>48.2</td> <td>58.2</td> <td>85</td> <td>90.3</td> <td>61.5</td> <td>70.8</td> <td>85.9</td> <td>87.8</td> </tr> </tbody></table>
Table 3
table_3
D16-1083
7
emnlp2016
The comparison results are shown in Table 3. We utilize our learnt embeddings of reviewers (Ours RE), and both reviewers’ and products’ embeddings (Ours RE+PE), respectively. Moreover, to perform a fair comparison, like Mukherjee et al. (2013b), we add representations of the review text in the classifier (Ours RE+PE+Bigram). From the results, we can observe that our method could outperform all state-of-the-art methods in both the hotel and restaurant domains. It proves that our method is effective. Furthermore, the improvements in both the hotel and restaurant domains prove that our model possesses preferable domain-adaptability. It could represent the reviews more accurately and globally by learning from the original data, rather than from the experts’ knowledge or assumptions.
[1, 2, 2, 1, 1, 1, 2]
['The comparison results are shown in Table 3.', 'We utilize our learnt embeddings of reviewers (Ours RE), and both reviewers’ and products’ embeddings (Ours RE+PE), respectively.', 'Moreover, to perform a fair comparison, like Mukherjee et al. (2013b), we add representations of the review text in the classifier (Ours RE+PE+Bigram).', 'From the results, we can observe that our method could outperform all state-of-the-art methods in both the hotel and restaurant domains.', 'It proves that our method is effective.', 'Furthermore, the improvements in both the hotel and restaurant domains prove that our model possesses preferable domain-adaptability.', 'It could represent the reviews more accurately and globally by learning from the original data, rather than from the experts’ knowledge or assumptions.']
[None, ['Ours_RE', 'Ours_RE+PE'], ['Ours_RE+PE+Bigram'], ['Hotel', 'Restaurant', 'Ours_RE', 'Ours_RE+PE', 'Ours_RE+PE+Bigram'], ['Ours_RE+PE', 'Ours_RE+PE+Bigram'], ['Hotel', 'Restaurant', 'Ours_RE', 'Ours_RE+PE', 'Ours_RE+PE+Bigram'], None]
1
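The caption of the record above reports significance with a paired t-test at p<0.005. When per-fold scores of two systems on the same folds are available, such a test can be run with scipy.stats.ttest_rel, as in the sketch below; the fold scores shown are made-up numbers, not values from the paper.

from scipy import stats

# Hypothetical macro-F1 scores of two systems on the same five CV folds.
ours = [87.0, 86.4, 87.3, 86.8, 87.1]
baseline = [85.6, 85.1, 86.0, 85.4, 85.7]

t_stat, p_value = stats.ttest_rel(ours, baseline)  # paired (dependent) t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")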
D16-1083table_4
SVM 5-fold CV classification results by dropping relations from our method utilizing RE+PE+Bigram. Both training and testing use balanced data (50:50). Differences in classification metrics for each dropped relation are statistically significant with p<0.01 based on paired t-test.
2
[['Dropped Relation', '1'], ['Dropped Relation', '2'], ['Dropped Relation', '3'], ['Dropped Relation', '4'], ['Dropped Relation', '5'], ['Dropped Relation', '6'], ['Dropped Relation', '7'], ['Dropped Relation', '8'], ['Dropped Relation', '9'], ['Dropped Relation', '10'], ['Dropped Relation', '11']]
2
[['Hotel', 'F1'], ['Hotel', 'A'], ['Restaurant', 'F1'], ['Restaurant', 'A']]
[['-2.1', '-2.0', '-2.0', '-3.1'], ['-2.3', '-2.1', '-1.9', '-2.9'], ['-3.9', '-4.0', '-4.0', '-6.3'], ['-3.7', '-3.5', '-3.6', '-5.5'], ['-3.5', '-3.6', '-2.8', '-4.5'], ['-2.5', '-2.5', '-3.4', '-5.2'], ['-3.2', '-3.2', '-3.3', '-5.0'], ['-2.8', '-2.6', '-3.0', '-4.6'], ['-4.0', '-3.7', '-3.7', '-5.4'], ['-2.2', '-2.4', '-1.8', '-2.8'], ['-2.6', '-2.4', '-2.7', '-4.4']]
column
['F1', 'A', 'F1', 'A']
['Dropped Relation']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Hotel || F1</th> <th>Hotel || A</th> <th>Restaurant || F1</th> <th>Restaurant || A</th> </tr> </thead> <tbody> <tr> <td>Dropped Relation || 1</td> <td>-2.1</td> <td>-2.0</td> <td>-2.0</td> <td>-3.1</td> </tr> <tr> <td>Dropped Relation || 2</td> <td>-2.3</td> <td>-2.1</td> <td>-1.9</td> <td>-2.9</td> </tr> <tr> <td>Dropped Relation || 3</td> <td>-3.9</td> <td>-4.0</td> <td>-4.0</td> <td>-6.3</td> </tr> <tr> <td>Dropped Relation || 4</td> <td>-3.7</td> <td>-3.5</td> <td>-3.6</td> <td>-5.5</td> </tr> <tr> <td>Dropped Relation || 5</td> <td>-3.5</td> <td>-3.6</td> <td>-2.8</td> <td>-4.5</td> </tr> <tr> <td>Dropped Relation || 6</td> <td>-2.5</td> <td>-2.5</td> <td>-3.4</td> <td>-5.2</td> </tr> <tr> <td>Dropped Relation || 7</td> <td>-3.2</td> <td>-3.2</td> <td>-3.3</td> <td>-5.0</td> </tr> <tr> <td>Dropped Relation || 8</td> <td>-2.8</td> <td>-2.6</td> <td>-3.0</td> <td>-4.6</td> </tr> <tr> <td>Dropped Relation || 9</td> <td>-4.0</td> <td>-3.7</td> <td>-3.7</td> <td>-5.4</td> </tr> <tr> <td>Dropped Relation || 10</td> <td>-2.2</td> <td>-2.4</td> <td>-1.8</td> <td>-2.8</td> </tr> <tr> <td>Dropped Relation || 11</td> <td>-2.6</td> <td>-2.4</td> <td>-2.7</td> <td>-4.4</td> </tr> </tbody></table>
Table 4
table_4
D16-1083
7
emnlp2016
3.5 The Effects of Different Relations. We also drop relations of our method with a graceful degradation. Table 4 shows the performances of our method utilizing BF+PE+Bigram for hotel and restaurant domains. We found that dropping Relations 1, 2 and 10 results in a relatively gentle reduction (about 2.2%) in F1-score. According to our survey, the sparseness of the slices generated by Relation 1, 2 and 10 is about 99.9%. For this reason, the result is a relatively gentle reduction. Dropping other relations also result in a 2.5-4.0% performance reduction. It proves that each relation has an influence on the learning to represent reviews.
[2, 2, 1, 1, 2, 2, 1, 2]
['3.5 The Effects of Different Relations.', 'We also drop relations of our method with a graceful degradation.', 'Table 4 shows the performances of our method utilizing BF+PE+Bigram for hotel and restaurant domains.', 'We found that dropping Relations 1, 2 and 10 results in a relatively gentle reduction (about 2.2%) in F1-score.', 'According to our survey, the sparseness of the slices generated by Relation 1, 2 and 10 is about 99.9%.', 'For this reason, the result is a relatively gentle reduction.', 'Dropping other relations also result in a 2.5-4.0% performance reduction.', 'It proves that each relation has an influence on the learning to represent reviews.']
[None, None, ['Hotel', 'Restaurant'], ['F1', '1', '2', '10'], None, None, None, None]
1
D16-1084table_4
Results for the unseen target stance detection development setup using BiCond, with single vs separate embeddings matrices for tweet and target and different initialisations
6
[['EmbIni', 'Random', 'NumMatr', 'Sing', 'Stance', 'FAVOR'], ['EmbIni', 'Random', 'NumMatr', 'Sing', 'Stance', 'AGAINST'], ['EmbIni', 'Random', 'NumMatr', 'Sing', 'Stance', 'Macro'], ['EmbIni', 'Random', 'NumMatr', 'Sep', 'Stance', 'FAVOR'], ['EmbIni', 'Random', 'NumMatr', 'Sep', 'Stance', 'AGAINST'], ['EmbIni', 'Random', 'NumMatr', 'Sep', 'Stance', 'Macro'], ['PreFixed', 'Random', 'NumMatr', 'Sing', 'Stance', 'FAVOR'], ['PreFixed', 'Random', 'NumMatr', 'Sing', 'Stance', 'AGAINST'], ['PreFixed', 'Random', 'NumMatr', 'Sing', 'Stance', 'Macro'], ['PreFixed', 'Random', 'NumMatr', 'Sep', 'Stance', 'FAVOR'], ['PreFixed', 'Random', 'NumMatr', 'Sep', 'Stance', 'AGAINST'], ['PreFixed', 'Random', 'NumMatr', 'Sep', 'Stance', 'Macro'], ['PreCont', 'Random', 'NumMatr', 'Sing', 'Stance', 'FAVOR'], ['PreCont', 'Random', 'NumMatr', 'Sing', 'Stance', 'AGAINST'], ['PreCont', 'Random', 'NumMatr', 'Sing', 'Stance', 'Macro'], ['PreCont', 'Random', 'NumMatr', 'Sep', 'Stance', 'FAVOR'], ['PreCont', 'Random', 'NumMatr', 'Sep', 'Stance', 'AGAINST'], ['PreCont', 'Random', 'NumMatr', 'Sep', 'Stance', 'Macro']]
1
[['P'], ['R'], ['F1']]
[['0.1982', '0.3846', '0.2616'], ['0.6263', '0.5929', '0.6092'], ['-', '-', '0.4354'], ['0.2278', '0.5043', '0.3138'], ['0.6706', '0.4300', '0.5240'], ['-', '-', '0.4189'], ['0.6000', '0.0513', '0.0945'], ['0.5761', '0.9440', '0.7155'], ['-', '-', '0.4050'], ['0.1429', '0.0342', '0.0552'], ['0.5707', '0.9033', '0.6995'], ['-', '-', '0.3773'], ['0.2588', '0.3761', '0.3066'], ['0.7081', '0.5802', '0.6378'], ['-', '-', '0.4722'], ['0.2243', '0.4103', '0.2900'], ['0.6185', '0.5445', '0.5792'], ['-', '-', '0.4346']]
column
['P', 'R', 'F1']
['PreFixed', 'PreCont']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>EmbIni || Random || NumMatr || Sing || Stance || FAVOR</td> <td>0.1982</td> <td>0.3846</td> <td>0.2616</td> </tr> <tr> <td>EmbIni || Random || NumMatr || Sing || Stance || AGAINST</td> <td>0.6263</td> <td>0.5929</td> <td>0.6092</td> </tr> <tr> <td>EmbIni || Random || NumMatr || Sing || Stance || Macro</td> <td>-</td> <td>-</td> <td>0.4354</td> </tr> <tr> <td>EmbIni || Random || NumMatr || Sep || Stance || FAVOR</td> <td>0.2278</td> <td>0.5043</td> <td>0.3138</td> </tr> <tr> <td>EmbIni || Random || NumMatr || Sep || Stance || AGAINST</td> <td>0.6706</td> <td>0.4300</td> <td>0.5240</td> </tr> <tr> <td>EmbIni || Random || NumMatr || Sep || Stance || Macro</td> <td>-</td> <td>-</td> <td>0.4189</td> </tr> <tr> <td>PreFixed || Random || NumMatr || Sing || Stance || FAVOR</td> <td>0.6000</td> <td>0.0513</td> <td>0.0945</td> </tr> <tr> <td>PreFixed || Random || NumMatr || Sing || Stance || AGAINST</td> <td>0.5761</td> <td>0.9440</td> <td>0.7155</td> </tr> <tr> <td>PreFixed || Random || NumMatr || Sing || Stance || Macro</td> <td>-</td> <td>-</td> <td>0.4050</td> </tr> <tr> <td>PreFixed || Random || NumMatr || Sep || Stance || FAVOR</td> <td>0.1429</td> <td>0.0342</td> <td>0.0552</td> </tr> <tr> <td>PreFixed || Random || NumMatr || Sep || Stance || AGAINST</td> <td>0.5707</td> <td>0.9033</td> <td>0.6995</td> </tr> <tr> <td>PreFixed || Random || NumMatr || Sep || Stance || Macro</td> <td>-</td> <td>-</td> <td>0.3773</td> </tr> <tr> <td>PreCont || Random || NumMatr || Sing || Stance || FAVOR</td> <td>0.2588</td> <td>0.3761</td> <td>0.3066</td> </tr> <tr> <td>PreCont || Random || NumMatr || Sing || Stance || AGAINST</td> <td>0.7081</td> <td>0.5802</td> <td>0.6378</td> </tr> <tr> <td>PreCont || Random || NumMatr || Sing || Stance || Macro</td> <td>-</td> <td>-</td> <td>0.4722</td> </tr> <tr> <td>PreCont || Random || NumMatr || Sep || Stance || FAVOR</td> <td>0.2243</td> <td>0.4103</td> <td>0.2900</td> </tr> <tr> <td>PreCont || Random || NumMatr || Sep || Stance || AGAINST</td> <td>0.6185</td> <td>0.5445</td> <td>0.5792</td> </tr> <tr> <td>PreCont || Random || NumMatr || Sep || Stance || Macro</td> <td>-</td> <td>-</td> <td>0.4346</td> </tr> </tbody></table>
Table 4
table_4
D16-1084
6
emnlp2016
Pre-Training. Table 4 shows the effect of unsupervised pre-training of word embeddings with a word2vec skip-gram model, and furthermore, the results of sharing of these representations between the tweets and targets, on the development set. The first set of results is with a uniformly Random embedding initialisation in [−0.1, 0.1]. PreFixed uses the pre-trained skip-gram word embeddings, whereas PreCont initialises the word embeddings with ones from SkipGram and continues training them during LSTM training. Our results show that, in the absence of a large labelled training dataset, pretraining of word embeddings is more helpful than random initialisation of embeddings. Sing vs Sep shows the difference between using shared vs two separate embeddings matrices for looking up the word embeddings. Sing means the word representations for tweet and target vocabularies are shared, whereas Sep means they are different. Using shared embeddings performs better, which we hypothesise is because the tweets contain some mentions of targets that are tested.
[2, 1, 1, 2, 1, 2, 2, 2]
['Pre-Training.', 'Table 4 shows the effect of unsupervised pre-training of word embeddings with a word2vec skip-gram model, and furthermore, the results of sharing of these representations between the tweets and targets, on the development set.', 'The first set of results is with a uniformly Random embedding initialisation in [−0.1, 0.1].', 'PreFixed uses the pre-trained skip-gram word embeddings, whereas PreCont initialises the word embeddings with ones from SkipGram and continues training them during LSTM training.', 'Our results show that, in the absence of a large labelled training dataset, pretraining of word embeddings is more helpful than random initialisation of embeddings.', 'Sing vs Sep shows the difference between using shared vs two separate embeddings matrices for looking up the word embeddings.', 'Sing means the word representations for tweet and target vocabularies are shared, whereas Sep means they are different.', 'Using shared embeddings performs better, which we hypothesise is because the tweets contain some mentions of targets that are tested.']
[None, None, ['Random'], ['PreFixed', 'PreCont'], ['Random', 'PreCont', 'PreFixed'], ['Sing', 'Sep'], ['Sing', 'Sep'], None]
1
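The Sing vs Sep distinction in this record (one shared embedding matrix for tweet and target words vs two separate ones) can be pictured with a minimal numpy sketch; this is an illustrative assumption, not the authors' implementation, and the [-0.1, 0.1] initialisation mirrors the Random setting above.

import numpy as np

rng = np.random.default_rng(0)
dim, vocab = 4, {"climate": 0, "change": 1, "is": 2, "real": 3}

# Sing: one embedding matrix; tweet words and target words share rows.
shared = rng.uniform(-0.1, 0.1, size=(len(vocab), dim))
tweet_vecs_sing = shared[[vocab[w] for w in ["climate", "change", "is", "real"]]]
target_vecs_sing = shared[[vocab[w] for w in ["climate", "change"]]]

# Sep: two matrices; "climate" in a tweet and "climate" in a target
# are looked up in different tables and get unrelated vectors.
tweet_emb, target_emb = (rng.uniform(-0.1, 0.1, size=(len(vocab), dim)) for _ in range(2))
tweet_vecs_sep = tweet_emb[[vocab[w] for w in ["climate", "change", "is", "real"]]]
target_vecs_sep = target_emb[[vocab[w] for w in ["climate", "change"]]]

print(np.allclose(tweet_vecs_sing[:2], target_vecs_sing))  # True: shared rows
print(np.allclose(tweet_vecs_sep[:2], target_vecs_sep))    # False: separate tables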
D16-1084table_7
Stance Detection test results, compared against the state of the art. SVM-ngrams-comb and Majority baseline are reported in Mohammad et al. (2016), pkudblab in Wei et al. (2016), LitisMind in Zarrella and Marsh (2016), INF-UFRGS in Dias and Becker (2016)
4
[['Method', 'SVM-ngrams-comb (Unseen Target)', 'Stance', 'FAVOR'], ['Method', 'SVM-ngrams-comb (Unseen Target)', 'Stance', 'AGAINST'], ['Method', 'SVM-ngrams-comb (Unseen Target)', 'Stance', 'Macro'], ['Method', 'Majority baseline (Unseen Target)', 'Stance', 'FAVOR'], ['Method', 'Majority baseline (Unseen Target)', 'Stance', 'AGAINST'], ['Method', 'Majority baseline (Unseen Target)', 'Stance', 'Macro'], ['Method', 'BiCond (Unseen Target)', 'Stance', 'FAVOR'], ['Method', 'BiCond (Unseen Target)', 'Stance', 'AGAINST'], ['Method', 'BiCond (Unseen Target)', 'Stance', 'Macro']]
1
[['F1']]
[['0.1842'], ['0.3845'], ['0.2843'], ['0.0'], ['0.5944'], ['0.2972'], ['0.3902'], ['0.5899'], ['0.4901']]
column
['F1']
['BiCond (Unseen Target)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || SVM-ngrams-comb (Unseen Target) || Stance || FAVOR</td> <td>0.1842</td> </tr> <tr> <td>Method || SVM-ngrams-comb (Unseen Target) || Stance || AGAINST</td> <td>0.3845</td> </tr> <tr> <td>Method || SVM-ngrams-comb (Unseen Target) || Stance || Macro</td> <td>0.2843</td> </tr> <tr> <td>Method || Majority baseline (Unseen Target) || Stance || FAVOR</td> <td>0.0</td> </tr> <tr> <td>Method || Majority baseline (Unseen Target) || Stance || AGAINST</td> <td>0.5944</td> </tr> <tr> <td>Method || Majority baseline (Unseen Target) || Stance || Macro</td> <td>0.2972</td> </tr> <tr> <td>Method || BiCond (Unseen Target) || Stance || FAVOR</td> <td>0.3902</td> </tr> <tr> <td>Method || BiCond (Unseen Target) || Stance || AGAINST</td> <td>0.5899</td> </tr> <tr> <td>Method || BiCond (Unseen Target) || Stance || Macro</td> <td>0.4901</td> </tr> <tr> <td>Method || INF-UFRGS (Weakly Supervised*) || Stance || FAVOR</td> <td>0.3256</td> </tr> <tr> <td>Method || INF-UFRGS (Weakly Supervised*) || Stance || AGAINST</td> <td>0.5209</td> </tr> <tr> <td>Method || INF-UFRGS (Weakly Supervised*) || Stance || Macro</td> <td>0.4232</td> </tr> <tr> <td>Method || LitisMind (Weakly Supervised*) || Stance || FAVOR</td> <td>0.3004</td> </tr> <tr> <td>Method || LitisMind (Weakly Supervised*) || Stance || AGAINST</td> <td>0.5928</td> </tr> <tr> <td>Method || LitisMind (Weakly Supervised*) || Stance || Macro</td> <td>0.4466</td> </tr> <tr> <td>Method || pkudblab (Weakly Supervised*) || Stance || FAVOR</td> <td>0.5739</td> </tr> <tr> <td>Method || pkudblab (Weakly Supervised*) || Stance || AGAINST</td> <td>0.5517</td> </tr> <tr> <td>Method || pkudblab (Weakly Supervised*) || Stance || Macro</td> <td>0.5628</td> </tr> <tr> <td>Method || BiCond (Weakly Supervised) || Stance || FAVOR</td> <td>0.6138</td> </tr> <tr> <td>Method || BiCond (Weakly Supervised) || Stance || AGAINST</td> <td>0.5468</td> </tr> <tr> <td>Method || BiCond (Weakly Supervised) || Stance || Macro</td> <td>0.5803</td> </tr> </tbody></table>
Table 7
table_7
D16-1084
8
emnlp2016
Table 7 shows all our results, including those using the unseen target setup, compared against the state-of-the-art on the stance detection corpus. Table 7 further lists baselines reported by Mohammad et al. (2016), namely a majority class baseline (Majority baseline), and a method using 1 to 3-gram bag-of-word and character n-gram features (SVM-ngrams-comb), which are extracted from the tweets and used to train a 3-way SVM classifier. Bag-of-word baselines (BoWV, SVM-ngrams-comb) achieve results comparable to the majority baseline (F1 of 0.2972), which shows how difficult the task is. The baselines which only extract features from the tweets, SVM-ngrams-comb and TweetOnly perform worse than the baselines which also learn representations for the targets (BoWV, Concat). By training conditional encoding models on automatically labelled stance detection data we achieve state-of-the-art results. The best result (F1 of 0.5803) is achieved with the bi-directional conditional encoding model (BiCond). This shows that such models are suitable for unseen, as well as seen target stance detection.
[1, 1, 1, 2, 1, 1, 1]
['Table 7 shows all our results, including those using the unseen target setup, compared against the state-of-the-art on the stance detection corpus.', 'Table 7 further lists baselines reported by Mohammad et al. (2016), namely a majority class baseline (Majority baseline), and a method using 1 to 3-gram bag-of-word and character n-gram features (SVM-ngrams-comb), which are extracted from the tweets and used to train a 3-way SVM classifier.', 'Bag-of-word baselines (BoWV, SVM-ngrams-comb) achieve results comparable to the majority baseline (F1 of 0.2972), which shows how difficult the task is.', 'The baselines which only extract features from the tweets, SVM-ngrams-comb and TweetOnly perform worse than the baselines which also learn representations for the targets (BoWV, Concat).', 'By training conditional encoding models on automatically labelled stance detection data we achieve state-of-the-art results.', 'The best result (F1 of 0.5803) is achieved with the bi-directional conditional encoding model (BiCond).', 'This shows that such models are suitable for unseen, as well as seen target stance detection.']
[None, ['Majority baseline (Unseen Target)', 'SVM-ngrams-comb (Unseen Target)'], ['SVM-ngrams-comb (Unseen Target)', 'Majority baseline (Unseen Target)'], None, ['BiCond (Unseen Target)'], ['BiCond (Unseen Target)'], ['BiCond (Unseen Target)', 'Stance']]
1
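In this benchmark the Macro row is, per the SemEval-2016 stance task convention, the unweighted mean of the FAVOR and AGAINST F1 scores; a small check against the rows above (illustrative arithmetic, not code from the paper):

# Macro stance F1 = average of the FAVOR and AGAINST F1 scores.
def macro_f1(f1_favor, f1_against):
    return (f1_favor + f1_against) / 2.0

print(macro_f1(0.3902, 0.5899))  # 0.49005, matching the 0.4901 Macro of BiCond (Unseen Target) up to rounding
print(macro_f1(0.0, 0.5944))     # 0.2972, the Majority baseline Macro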
D16-1088table_4
Event Recognition Performance Before/After Incorporating Subevents
1
[['(Huang and Riloff 2013)'], ['+Subevents']]
1
[['Recall'], ['Precision'], ['F1-score']]
[['71', '88', '79'], ['81', '83', '82']]
column
['Recall', 'Precision', 'F1-score']
['+Subevents']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Recall</th> <th>Precision</th> <th>F1-score</th> </tr> </thead> <tbody> <tr> <td>(Huang and Riloff 2013)</td> <td>71</td> <td>88</td> <td>79</td> </tr> <tr> <td>+Subevents</td> <td>81</td> <td>83</td> <td>82</td> </tr> </tbody></table>
Table 4
table_4
D16-1088
5
emnlp2016
4 Evaluation. We show that our acquired subevent phrases are useful to discover articles that describe the main event and therefore improve event detection performance. For direct comparisons, we tested our subevents using the same test data and the same evaluation setting as the previous multi-faceted event recognition research by (Huang and Riloff, 2013). Specifically, they have annotated 300 new articles that each contains a civil unrest keyword and only 101 of them are actually civil unrest stories. They have shown that the multi-faceted event recognition approach can accurately identify civil unrest documents, by identifying a sentence in the documents where two types of facet phrases or one facet phrase and a main event expression were matched. The first row of Table 4 shows their multi-faceted event recognition performance. We compared our learned subevent phrases with the event phrases learned by (Huang and Riloff, 2013) and found that 559 out of our 610 unique phrases are not in their list. We augmented their provided event phrase list with our newly acquired subevent phrases and then used the exactly same evaluation procedure. Essentially, we used a longer event phrase dictionary which is a combination of main event expressions resulted from the previous research by (Huang and Riloff, 2013) and our learned subevent phrases. Row 2 shows the event recognition performance using the extended event phrase list. We can see that after incorporating subevent phrases, additional 10% of civil unrest stories were discovered, with a small precision loss, the F1-score on event detection was improved by 3%.
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1]
['4 Evaluation.', 'We show that our acquired subevent phrases are useful to discover articles that describe the main event and therefore improve event detection performance.', 'For direct comparisons, we tested our subevents using the same test data and the same evaluation setting as the previous multi-faceted event recognition research by (Huang and Riloff, 2013).', 'Specifically, they have annotated 300 new articles that each contains a civil unrest keyword and only 101 of them are actually civil unrest stories.', 'They have shown that the multi-faceted event recognition approach can accurately identify civil unrest documents, by identifying a sentence in the documents where two types of facet phrases or one facet phrase and a main event expression were matched.', 'The first row of Table 4 shows their multi-faceted event recognition performance.', 'We compared our learned subevent phrases with the event phrases learned by (Huang and Riloff, 2013) and found that 559 out of our 610 unique phrases are not in their list.', 'We augmented their provided event phrase list with our newly acquired subevent phrases and then used the exactly same evaluation procedure.', 'Essentially, we used a longer event phrase dictionary which is a combination of main event expressions resulted from the previous research by (Huang and Riloff, 2013) and our learned subevent phrases.', 'Row 2 shows the event recognition performance using the extended event phrase list.', 'We can see that after incorporating subevent phrases, additional 10% of civil unrest stories were discovered, with a small precision loss, the F1-score on event detection was improved by 3%.']
[None, None, ['(Huang and Riloff 2013)'], None, None, ['(Huang and Riloff 2013)'], ['+Subevents', '(Huang and Riloff 2013)'], None, ['(Huang and Riloff 2013)'], ['+Subevents'], ['F1-score', '+Subevents', 'Precision']]
1
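Each record stores the table twice, as flattened header/content lists and as table_html_clean; a minimal sketch (assuming pandas plus an HTML parser such as lxml is installed, which is not part of the dataset itself) recovers a DataFrame from the HTML above and checks that the F1-score column is the harmonic mean of precision and recall.

from io import StringIO
import pandas as pd

html = """<table border='1' class='dataframe'><thead><tr><th></th><th>Recall</th>
<th>Precision</th><th>F1-score</th></tr></thead><tbody>
<tr><td>(Huang and Riloff 2013)</td><td>71</td><td>88</td><td>79</td></tr>
<tr><td>+Subevents</td><td>81</td><td>83</td><td>82</td></tr></tbody></table>"""

df = pd.read_html(StringIO(html))[0]   # read_html returns a list of tables
p, r = df.loc[1, "Precision"], df.loc[1, "Recall"]
print(round(2 * p * r / (p + r)))      # 82, matching the +Subevents F1-score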
D16-1089table_1
Zero-shot recognition results on AWA (% accuracy).
2
[['Dataset', 'AWA']]
2
[['Vector space models', 'LinReg'], ['Vector space models', 'NLinReg'], ['Vector space models', 'CME'], ['Vector space models', 'ES-ZSL'], ['Ours', 'Gaussian']]
[['44.0', '48.4', '43.1', '58.2', '65.4']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['Ours', 'Gaussian']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Vector space models || LinReg</th> <th>Vector space models || NLinReg</th> <th>Vector space models || CME</th> <th>Vector space models || ES-ZSL</th> <th>Ours || Gaussian</th> </tr> </thead> <tbody> <tr> <td>Dataset || AWA</td> <td>44.0</td> <td>48.4</td> <td>43.1</td> <td>58.2</td> <td>65.4</td> </tr> </tbody></table>
Table 1
table_1
D16-1089
4
emnlp2016
3.2 Results. Table 1 compares our results on the AWA benchmark against alternatives using the same visual features, and word vectors trained on the same corpus. We observe that: (i) Our Gaussian-embedding obtains the best performance overall. (ii) Our method outperforms CME which shares an objective function and optimisation strategy with ours, but operates on vectors rather than Gaussians. This suggests that our new distribution rather than vector embedding does indeed bring significant benefit.
[2, 1, 1, 1, 1]
['3.2 Results.', 'Table 1 compares our results on the AWA benchmark against alternatives using the same visual features, and word vectors trained on the same corpus.', 'We observe that: (i) Our Gaussian-embedding obtains the best performance overall.', '(ii) Our method outperforms CME which shares an objective function and optimisation strategy with ours, but operates on vectors rather than Gaussians.', 'This suggests that our new distribution rather than vector embedding does indeed bring significant benefit.']
[None, ['Ours', 'Gaussian', 'Vector space models'], ['Ours', 'Gaussian'], ['Ours', 'Gaussian', 'Vector space models'], ['Ours', 'Gaussian', 'Vector space models']]
1
D16-1096table_1
Single system results in terms of (TER-BLEU)/2 (the lower the better) on 5 million Chinese to English training set. NMT results are on a large vocabulary (300k) and with UNK replaced. UGRU : updating with a GRU; USub: updating as a subtraction; UGRU + USub: combination of two methods (do not share coverage embedding vectors); +Obj.: UGRU + USub with an additional objective in Equation 6, we have two λs for UGRU and USub separately, and we test λGRU = 1 × 10−4 and λSub = 1 × 10−2.
3
[['single system', '-', 'Tree-to-string'], ['single system', '-', 'LVNMT'], ['single system', 'Ours', 'UGRU'], ['single system', 'Ours', 'USub'], ['single system', 'Ours', 'UGRU+USub'], ['single system', 'Ours', '+Obj.']]
3
[['MT06', '-', 'BP'], ['MT06', '-', 'BLEU'], ['MT06', '-', 'T-B'], ['MT08', 'News', 'BP'], ['MT08', 'News', 'BLEU'], ['MT08', 'News', 'T-B'], ['MT08', 'Web', 'BP'], ['MT08', 'Web', 'BLEU'], ['MT08', 'Web', 'T-B'], ['avg.', '-', 'T-B']]
[['0.95', '34.93', '9.45', '0.94', '31.12', '12.90', '0.90', '23.45', '17.72', '13.36'], ['0.96', '34.53', '12.25', '0.93', '28.86', '17.40', '0.97', '26.78', '17.57', '15.74'], ['0.92', '35.59', '10.71', '0.89', '30.18', '15.33', '0.97', '27.48', '16.67', '14.24'], ['0.91', '35.90', '10.29', '0.88', '30.49', '15.23', '0.96', '27.63', '16.12', '13.88'], ['0.92', '36.60', '9.36', '0.89', '31.86', '13.69', '0.95', '27.12', '16.37', '13.14'], ['0.93', '36.80', '9.78', '0.90', '31.83', '14.20', '0.95', '28.28', '15.73', '13.24']]
column
['BP', 'BLEU', 'T-B', 'BP', 'BLEU', 'T-B', 'BP', 'BLEU', 'T-B', 'T-B']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT06 || - || BP</th> <th>MT06 || - || BLEU</th> <th>MT06 || - || T-B</th> <th>MT08 || News || BP</th> <th>MT08 || News || BLEU</th> <th>MT08 || News || T-B</th> <th>MT08 || Web || BP</th> <th>MT08 || Web || BLEU</th> <th>MT08 || Web || T-B</th> <th>avg. || - || T-B</th> </tr> </thead> <tbody> <tr> <td>single system || - || Tree-to-string</td> <td>0.95</td> <td>34.93</td> <td>9.45</td> <td>0.94</td> <td>31.12</td> <td>12.90</td> <td>0.90</td> <td>23.45</td> <td>17.72</td> <td>13.36</td> </tr> <tr> <td>single system || - || LVNMT</td> <td>0.96</td> <td>34.53</td> <td>12.25</td> <td>0.93</td> <td>28.86</td> <td>17.40</td> <td>0.97</td> <td>26.78</td> <td>17.57</td> <td>15.74</td> </tr> <tr> <td>single system || Ours || UGRU</td> <td>0.92</td> <td>35.59</td> <td>10.71</td> <td>0.89</td> <td>30.18</td> <td>15.33</td> <td>0.97</td> <td>27.48</td> <td>16.67</td> <td>14.24</td> </tr> <tr> <td>single system || Ours || USub</td> <td>0.91</td> <td>35.90</td> <td>10.29</td> <td>0.88</td> <td>30.49</td> <td>15.23</td> <td>0.96</td> <td>27.63</td> <td>16.12</td> <td>13.88</td> </tr> <tr> <td>single system || Ours || UGRU+USub</td> <td>0.92</td> <td>36.60</td> <td>9.36</td> <td>0.89</td> <td>31.86</td> <td>13.69</td> <td>0.95</td> <td>27.12</td> <td>16.37</td> <td>13.14</td> </tr> <tr> <td>single system || Ours || +Obj.</td> <td>0.93</td> <td>36.80</td> <td>9.78</td> <td>0.90</td> <td>31.83</td> <td>14.20</td> <td>0.95</td> <td>28.28</td> <td>15.73</td> <td>13.24</td> </tr> </tbody></table>
Table 1
table_1
D16-1096
5
emnlp2016
5.2 Translation Results. Table 1 shows the results of all systems on 5 million training set. The traditional syntax-based system achieves 9.45, 12.90, and 17.72 on MT06, MT08 News, and MT08 Web sets respectively, and 13.36 on average in terms of (TER-BLEU)/2. The large vocabulary NMT (LVNMT), our baseline, achieves an average (TER-BLEU)/2 score of 15.74, which is about 2 points worse than the hybrid system. We test four different settings for our coverage embedding models: • UGRU : updating with a GRU; • USub: updating as a subtraction; • UGRU + USub: combination of two methods (do not share coverage embedding vectors); • +Obj.: UGRU + USub plus an additional objective in Equation 6. UGRU improves the translation quality by 1.3 points on average over LVNMT. And UGRU + USub achieves the best average score of 13.14, which is about 2.6 points better than LVNMT. All the improvements of our coverage embedding models over LVNMT are statistically significant with the sign-test of Collins et al. (2005). We believe that we need to explore more hyper-parameters of +Obj. in order to get even better results over UGRU + USub.
[0, 1, 1, 1, 2, 1, 1, 1, 2]
['5.2 Translation Results.', 'Table 1 shows the results of all systems on 5 million training set.', 'The traditional syntax-based system achieves 9.45, 12.90, and 17.72 on MT06, MT08 News, and MT08 Web sets respectively, and 13.36 on average in terms of (TER-BLEU)/2.', 'The large vocabulary NMT (LVNMT), our baseline, achieves an average (TER-BLEU)/2 score of 15.74, which is about 2 points worse than the hybrid system.', 'We test four different settings for our coverage embedding models: • UGRU : updating with a GRU; • USub: updating as a subtraction; • UGRU + USub: combination of two methods (do not share coverage embedding vectors); • +Obj.: UGRU + USub plus an additional objective in Equation 6.', 'UGRU improves the translation quality by 1.3 points on average over LVNMT.', 'And UGRU + USub achieves the best average score of 13.14, which is about 2.6 points better than LVNMT.', 'All the improvements of our coverage embedding models over LVNMT are statistically significant with the sign-test of Collins et al. (2005).', 'We believe that we need to explore more hyper-parameters of +Obj. in order to get even better results over UGRU + USub.']
[None, None, ['MT06', 'T-B', 'MT08', 'News', 'Web', 'avg.'], ['LVNMT', 'avg.', 'T-B'], ['UGRU', 'USub', 'UGRU+USub', '+Obj.'], ['UGRU', 'LVNMT', 'avg.'], ['UGRU+USub', 'avg.', 'LVNMT'], ['LVNMT'], ['UGRU+USub']]
1
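The avg. T-B column in this record is consistent with a plain unweighted mean of the three test-set (TER-BLEU)/2 scores; a quick illustrative check (not code from the paper):

# avg. (TER-BLEU)/2 as the mean over MT06, MT08 News and MT08 Web.
def avg_tb(mt06, mt08_news, mt08_web):
    return (mt06 + mt08_news + mt08_web) / 3.0

print(round(avg_tb(9.36, 13.69, 16.37), 2))   # 13.14, the UGRU+USub avg. T-B
print(round(avg_tb(12.25, 17.40, 17.57), 2))  # 15.74, the LVNMT avg. T-B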
D16-1096table_2
Single system results in terms of (TER-BLEU)/2 on 11 million set. NMT results are on a large vocabulary (500k) and with UNK replaced. Due to the time limitation, we only have the results of UGRU system.
2
[['single system', 'Tree-to-string'], ['single system', 'LVNMT'], ['single system', 'UGRU']]
3
[['MT06', '-', 'BP'], ['MT06', '-', 'T-B'], ['MT08', 'News', 'BP'], ['MT08', 'News', 'T-B'], ['MT08', 'Web', 'BP'], ['MT08', 'Web', 'T-B'], ['avg.', '-', 'T-B']]
[['0.90', '8.70', '0.84', '12.65', '0.84', '17.00', '12.78'], ['0.96', '9.78', '0.94', '14.15', '0.97', '15.89', '13.27'], ['0.97', '8.62', '0.95', '12.79', '0.97', '15.34', '12.31']]
column
['BP', 'T-B', 'BP', 'T-B', 'BP', 'T-B', 'T-B']
['UGRU']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT06 || - || BP</th> <th>MT06 || - || T-B</th> <th>MT08 || News || BP</th> <th>MT08 || News || T-B</th> <th>MT08 || Web || BP</th> <th>MT08 || Web || T-B</th> <th>avg. || - || T-B</th> </tr> </thead> <tbody> <tr> <td>single system || Tree-to-string</td> <td>0.90</td> <td>8.70</td> <td>0.84</td> <td>12.65</td> <td>0.84</td> <td>17.00</td> <td>12.78</td> </tr> <tr> <td>single system || LVNMT</td> <td>0.96</td> <td>9.78</td> <td>0.94</td> <td>14.15</td> <td>0.97</td> <td>15.89</td> <td>13.27</td> </tr> <tr> <td>single system || UGRU</td> <td>0.97</td> <td>8.62</td> <td>0.95</td> <td>12.79</td> <td>0.97</td> <td>15.34</td> <td>12.31</td> </tr> </tbody></table>
Table 2
table_2
D16-1096
5
emnlp2016
Table 2 shows the results of 11 million systems, LVNMT achieves an average (TER-BLEU)/2 of 13.27, which is about 2.5 points better than 5 million LVNMT. The result of our UGRU coverage model gives almost 1 point gain over LVNMT. Those results suggest that the more training data we use, the stronger the baseline system becomes, and the harder to get improvements. In order to get a reasonable or strong NMT system, we have to conduct experiments over a large-scale training set.
[1, 1, 2, 2]
['Table 2 shows the results of 11 million systems, LVNMT achieves an average (TER-BLEU)/2 of 13.27, which is about 2.5 points better than 5 million LVNMT.', 'The result of our UGRU coverage model gives almost 1 point gain over LVNMT.', 'Those results suggest that the more training data we use, the stronger the baseline system becomes, and the harder to get improvements.', 'In order to get a reasonable or strong NMT system, we have to conduct experiments over a large-scale training set.']
[['LVNMT', 'avg.'], ['UGRU', 'LVNMT'], None, None]
1
D16-1099table_2
Average results for DSMs over four different frequency ranges for the items in the TOEFL, ESL, SL, MEN, and RW tests. All DSMs are trained on the 1 billion words data.
2
[['DSM', 'CO'], ['DSM', 'PPMI'], ['DSM', 'TSVD'], ['DSM', 'ISVD'], ['DSM', 'RI'], ['DSM', 'SGNS'], ['DSM', 'CBOW']]
1
[['HIGH'], ['MEDIUM'], ['LOW'], ['MIXED']]
[['32.61 (↑62.5,↓04.6)', '35.77 (↑66.6,↓21.2)', '12.57 (↑35.7,↓00.0)', '27.14 (↑56.6,↓07.9)'], ['55.51 (↑75.3,↓28.0)', '57.83 (↑88.8,↓18.7)', '25.84 (↑50.0,↓00.0)', '47.73 (↑83.3,↓27.1)'], ['50.52 (↑70.9,↓23.2)', '54.75 (↑77.9,↓24.1)', '17.85 (↑50.0,↓00.0)', '41.08 (↑56.6,↓19.6)'], ['63.31 (↑87.5,↓36.5)', '69.25 (↑88.8,↓46.3)', '10.94 (↑16.0,↓00.0)', '57.24 (↑83.3,↓33.0)'], ['53.11 (↑62.5,↓30.1)', '48.02 (↑72.2,↓20.4)', '23.29 (↑39.0,↓00.0)', '46.39 (↑66.6,↓21.0)'], ['68.81 (↑87.5,↓36.4)', '62.00 (↑83.3,↓27.4)', '18.76 (↑42.8,↓00.0)', '56.93 (↑83.3,↓30.2)'], ['62.73 (↑81.2,↓31.9)', '59.50 (↑83.3,↓32.4)', '27.13 (↑78.5,↓00.0)', '52.21 (↑76.6,↓25.9)']]
column
['correlation', 'correlation', 'correlation', 'correlation']
['DSM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>HIGH</th> <th>MEDIUM</th> <th>LOW</th> <th>MIXED</th> </tr> </thead> <tbody> <tr> <td>DSM || CO</td> <td>32.61 (↑62.5,↓04.6)</td> <td>35.77 (↑66.6,↓21.2)</td> <td>12.57 (↑35.7,↓00.0)</td> <td>27.14 (↑56.6,↓07.9)</td> </tr> <tr> <td>DSM || PPMI</td> <td>55.51 (↑75.3,↓28.0)</td> <td>57.83 (↑88.8,↓18.7)</td> <td>25.84 (↑50.0,↓00.0)</td> <td>47.73 (↑83.3,↓27.1)</td> </tr> <tr> <td>DSM || TSVD</td> <td>50.52 (↑70.9,↓23.2)</td> <td>54.75 (↑77.9,↓24.1)</td> <td>17.85 (↑50.0,↓00.0)</td> <td>41.08 (↑56.6,↓19.6)</td> </tr> <tr> <td>DSM || ISVD</td> <td>63.31 (↑87.5,↓36.5)</td> <td>69.25 (↑88.8,↓46.3)</td> <td>10.94 (↑16.0,↓00.0)</td> <td>57.24 (↑83.3,↓33.0)</td> </tr> <tr> <td>DSM || RI</td> <td>53.11 (↑62.5,↓30.1)</td> <td>48.02 (↑72.2,↓20.4)</td> <td>23.29 (↑39.0,↓00.0)</td> <td>46.39 (↑66.6,↓21.0)</td> </tr> <tr> <td>DSM || SGNS</td> <td>68.81 (↑87.5,↓36.4)</td> <td>62.00 (↑83.3,↓27.4)</td> <td>18.76 (↑42.8,↓00.0)</td> <td>56.93 (↑83.3,↓30.2)</td> </tr> <tr> <td>DSM || CBOW</td> <td>62.73 (↑81.2,↓31.9)</td> <td>59.50 (↑83.3,↓32.4)</td> <td>27.13 (↑78.5,↓00.0)</td> <td>52.21 (↑76.6,↓25.9)</td> </tr> </tbody></table>
Table 2
table_2
D16-1099
5
emnlp2016
Table 2 (next side) shows the average results over the different frequency ranges for the various DSMs trained on the 1 billion-word ukWaC data. We also include the highest and lowest individual test scores (signified by ↑ and ↓), in order to get an idea about the consistency of the results. As can be seen in the table, the most consistent model is ISVD, which produces the best results in both the MEDIUM and MIXED frequency ranges. The neural network models SGNS and CBOW produce the best results in the HIGH and LOW range, respectively, with CBOW clearly outperforming SGNS in the latter case. The major difference between these models is that CBOW predicts a word based on a context, while SGNS predicts a context based on a word. Clearly, the former approach is more beneficial for low-frequent items. The PPMI, TSVD and RI models perform similarly across the frequency ranges, with RI producing somewhat lower results in the MEDIUM range, and TSVD producing somewhat lower results in the LOW range. The CO model underperforms in all frequency ranges. Worth noting is the fact that all models that are based on an explicit matrix (i.e. CO, PPMI, TSVD and ISVD) produce better results in the MEDIUM range than in the HIGH range. The arguably most interesting results are in the LOW range. Unsurprisingly, there is a general and significant drop in performance for low frequency items, but with interesting differences among the various models. As already mentioned, the CBOW model produces the best results, closely followed by PPMI and RI. It is noteworthy that the low-dimensional embeddings of the CBOW model only gives a modest improvement over the high-dimensional explicit vectors of PPMI. The worst results are produced by the ISVD model, which scores even lower than the baseline CO model. This might be explained by the fact that ISVD removes the latent dimensions with largest variance, which are arguably the most important dimensions for very low-frequent items. Increasing the number of latent dimensions with high variance in the ISVD model improves the results in the LOW range (16.59 when removing only the top 100 dimensions).
[1, 2, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2]
['Table 2 (next side) shows the average results over the different frequency ranges for the various DSMs trained on the 1 billion-word ukWaC data.', 'We also include the highest and lowest individual test scores (signified by ↑ and ↓), in order to get an idea about the consistency of the results.', 'As can be seen in the table, the most consistent model is ISVD, which produces the best results in both the MEDIUM and MIXED frequency ranges.', 'The neural network models SGNS and CBOW produce the best results in the HIGH and LOW range, respectively, with CBOW clearly outperforming SGNS in the latter case.', 'The major difference between these models is that CBOW predicts a word based on a context, while SGNS predicts a context based on a word.', 'Clearly, the former approach is more beneficial for low-frequent items.', 'The PPMI, TSVD and RI models perform similarly across the frequency ranges, with RI producing somewhat lower results in the MEDIUM range, and TSVD producing somewhat lower results in the LOW range.', 'The CO model underperforms in all frequency ranges.', 'Worth noting is the fact that all models that are based on an explicit matrix (i.e. CO, PPMI, TSVD and ISVD) produce better results in the MEDIUM range than in the HIGH range.', 'The arguably most interesting results are in the LOW range.', 'Unsurprisingly, there is a general and significant drop in performance for low frequency items, but with interesting differences among the various models.', 'As already mentioned, the CBOW model produces the best results, closely followed by PPMI and RI.', 'It is noteworthy that the low-dimensional embeddings of the CBOW model only gives a modest improvement over the high-dimensional explicit vectors of PPMI.', 'The worst results are produced by the ISVD model, which scores even lower than the baseline CO model.', 'This might be explained by the fact that ISVD removes the latent dimensions with largest variance, which are arguably the most important dimensions for very low-frequent items.', 'Increasing the number of latent dimensions with high variance in the ISVD model improves the results in the LOW range (16.59 when removing only the top 100 dimensions).']
[None, None, ['ISVD', 'MEDIUM', 'MIXED'], ['SGNS', 'CBOW', 'HIGH', 'LOW'], ['CBOW', 'SGNS'], None, ['PPMI', 'TSVD', 'RI', 'MEDIUM', 'LOW'], ['CO', 'HIGH', 'MEDIUM', 'LOW', 'MIXED'], ['CO', 'PPMI', 'TSVD', 'ISVD', 'MEDIUM', 'HIGH'], ['LOW'], ['LOW'], ['CBOW', 'PPMI', 'RI'], ['CBOW', 'PPMI'], ['ISVD', 'CO'], ['ISVD'], ['ISVD', 'LOW']]
1
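Cells in this record pack three numbers into one string, e.g. '68.81 (↑87.5,↓36.4)' for the average score plus the highest and lowest individual test scores; a small helper for unpacking that format (an assumption about how one would consume the field, not dataset tooling):

import re

CELL = re.compile(r"([\d.]+)\s*\(↑([\d.]+),↓([\d.]+)\)")

def parse_cell(cell):
    """Split 'avg (↑best,↓worst)' into three floats."""
    avg, best, worst = (float(x) for x in CELL.match(cell).groups())
    return avg, best, worst

print(parse_cell("68.81 (↑87.5,↓36.4)"))  # (68.81, 87.5, 36.4)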
D16-1102table_2
Estimated precision and recall for Tamil, Bengali and Malayalam before and after non-expert curation. We list state-of-the-art results for German and Hindi for comparison.
4
[['LANG.', 'Bengali PROJECTED', 'Match', 'partial'], ['LANG.', 'Bengali PROJECTED', 'Match', 'exact'], ['LANG.', 'Bengali CURATED', 'Match', 'partial'], ['LANG.', 'Bengali CURATED', 'Match', 'exact'], ['LANG.', 'Malayalam PROJECTED', 'Match', 'partial'], ['LANG.', 'Malayalam PROJECTED', 'Match', 'exact'], ['LANG.', 'Malayalam CURATED', 'Match', 'partial'], ['LANG.', 'Malayalam CURATED', 'Match', 'exact'], ['LANG.', 'Tamil PROJECTED', 'Match', 'partial'], ['LANG.', 'Tamil PROJECTED', 'Match', 'exact'], ['LANG.', 'Tamil CURATED', 'Match', 'partial'], ['LANG.', 'Tamil CURATED', 'Match', 'exact'], ['LANG.', 'Chinese (Akbik et al. 2015)', 'Match', 'partial'], ['LANG.', 'Chinese (Akbik et al. 2015)', 'Match', 'exact'], ['LANG.', 'German (Akbik et al. 2015)', 'Match', 'partial'], ['LANG.', 'German (Akbik et al. 2015)', 'Match', 'exact'], ['LANG.', 'Hindi (Akbik et al. 2015)', 'Match', 'partial'], ['LANG.', 'Hindi (Akbik et al. 2015)', 'Match', 'exact']]
2
[['PRED.', 'P'], ['ARGUMENT', 'P'], ['ARGUMENT', 'R'], ['ARGUMENT', 'F1'], ['ARGUMENT', '%Agree']]
[['1.0', '0.84', '0.68', '0.75', '0.67'], ['1.0', '0.83', '0.68', '0.75', '0.67'], ['1.0', '0.88', '0.69', '0.78', '0.67'], ['1.0', '0.87', '0.69', '0.77', '0.67'], ['0.99', '0.87', '0.65', '0.75', '0.65'], ['0.99', '0.79', '0.63', '0.7', '0.65'], ['0.99', '0.92', '0.69', '0.78', '0.65'], ['0.99', '0.84', '0.67', '0.74', '0.65'], ['0.77', '0.49', '0.59', '0.53', '0.75'], ['0.77', '0.45', '0.58', '0.5', '0.75'], ['0.77', '0.62', '0.67', '0.64', '0.75'], ['0.77', '0.58', '0.65', '0.61', '0.75'], ['0.97', '0.93', '0.83', '0.88', '0.92'], ['0.97', '0.83', '0.81', '0.82', '0.92'], ['0.96', '0.95', '0.73', '0.83', '0.92'], ['0.96', '0.91', '0.73', '0.81', '0.92'], ['0.91', '0.93', '0.66', '0.77', '0.81'], ['0.91', '0.58', '0.54', '0.56', '0.81']]
column
['P', 'P', 'R', 'F1', '%Agree']
['LANG.']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PRED. || P</th> <th>ARGUMENT || P</th> <th>ARGUMENT || R</th> <th>ARGUMENT || F1</th> <th>ARGUMENT || %Agree</th> </tr> </thead> <tbody> <tr> <td>LANG. || Bengali PROJECTED || Match || partial</td> <td>1.0</td> <td>0.84</td> <td>0.68</td> <td>0.75</td> <td>0.67</td> </tr> <tr> <td>LANG. || Bengali PROJECTED || Match || exact</td> <td>1.0</td> <td>0.83</td> <td>0.68</td> <td>0.75</td> <td>0.67</td> </tr> <tr> <td>LANG. || Bengali CURATED || Match || partial</td> <td>1.0</td> <td>0.88</td> <td>0.69</td> <td>0.78</td> <td>0.67</td> </tr> <tr> <td>LANG. || Bengali CURATED || Match || exact</td> <td>1.0</td> <td>0.87</td> <td>0.69</td> <td>0.77</td> <td>0.67</td> </tr> <tr> <td>LANG. || Malayalam PROJECTED || Match || partial</td> <td>0.99</td> <td>0.87</td> <td>0.65</td> <td>0.75</td> <td>0.65</td> </tr> <tr> <td>LANG. || Malayalam PROJECTED || Match || exact</td> <td>0.99</td> <td>0.79</td> <td>0.63</td> <td>0.7</td> <td>0.65</td> </tr> <tr> <td>LANG. || Malayalam CURATED || Match || partial</td> <td>0.99</td> <td>0.92</td> <td>0.69</td> <td>0.78</td> <td>0.65</td> </tr> <tr> <td>LANG. || Malayalam CURATED || Match || exact</td> <td>0.99</td> <td>0.84</td> <td>0.67</td> <td>0.74</td> <td>0.65</td> </tr> <tr> <td>LANG. || Tamil PROJECTED || Match || partial</td> <td>0.77</td> <td>0.49</td> <td>0.59</td> <td>0.53</td> <td>0.75</td> </tr> <tr> <td>LANG. || Tamil PROJECTED || Match || exact</td> <td>0.77</td> <td>0.45</td> <td>0.58</td> <td>0.5</td> <td>0.75</td> </tr> <tr> <td>LANG. || Tamil CURATED || Match || partial</td> <td>0.77</td> <td>0.62</td> <td>0.67</td> <td>0.64</td> <td>0.75</td> </tr> <tr> <td>LANG. || Tamil CURATED || Match || exact</td> <td>0.77</td> <td>0.58</td> <td>0.65</td> <td>0.61</td> <td>0.75</td> </tr> <tr> <td>LANG. || Chinese (Akbik et al. 2015) || Match || partial</td> <td>0.97</td> <td>0.93</td> <td>0.83</td> <td>0.88</td> <td>0.92</td> </tr> <tr> <td>LANG. || Chinese (Akbik et al. 2015) || Match || exact</td> <td>0.97</td> <td>0.83</td> <td>0.81</td> <td>0.82</td> <td>0.92</td> </tr> <tr> <td>LANG. || German (Akbik et al. 2015) || Match || partial</td> <td>0.96</td> <td>0.95</td> <td>0.73</td> <td>0.83</td> <td>0.92</td> </tr> <tr> <td>LANG. || German (Akbik et al. 2015) || Match || exact</td> <td>0.96</td> <td>0.91</td> <td>0.73</td> <td>0.81</td> <td>0.92</td> </tr> <tr> <td>LANG. || Hindi (Akbik et al. 2015) || Match || partial</td> <td>0.91</td> <td>0.93</td> <td>0.66</td> <td>0.77</td> <td>0.81</td> </tr> <tr> <td>LANG. || Hindi (Akbik et al. 2015) || Match || exact</td> <td>0.91</td> <td>0.58</td> <td>0.54</td> <td>0.56</td> <td>0.81</td> </tr> </tbody></table>
Table 2
table_2
D16-1102
4
emnlp2016
4.2 Results. The evaluation results are listed in Table 2. For comparison, we include evaluation results reported for three high-resource languages: German and Chinese, representing average high-resource results, as well as Hindi, a below-average outlier. We make the following observations: Lower annotation projection quality. We find that the F1-scores of Bengali, Malayalam and Tamil are 6, 11 and 31 pp below that of an average high-resource language (as exemplified by German in Table 2). Bengali and Malayalam, however, do surpass Hindi, for which only a relatively poor dependency parser was used. This suggests that syntactic annotation projection may be a better method for identifying predicate-argument structures in languages that lack fully developed dependency parsers.
[2, 1, 2, 1, 1, 1, 1]
['4.2 Results.', 'The evaluation results are listed in Table 2.', 'For comparison, we include evaluation results reported for three high-resource languages: German and Chinese, representing average high-resource results, as well as Hindi, a below-average outlier.', 'We make the following observations: Lower annotation projection quality.', 'We find that the F1-scores of Bengali, Malayalam and Tamil are 6, 11 and 31 pp below that of an average high-resource language (as exemplified by German in Table 2).', 'Bengali and Malayalam, however, do surpass Hindi, for which only a relatively poor dependency parser was used.', 'This suggests that syntactic annotation projection may be a better method for identifying predicate-argument structures in languages that lack fully developed dependency parsers.']
[None, None, ['German (Akbik et al. 2015)', 'Chinese (Akbik et al. 2015)', 'Hindi (Akbik et al. 2015)'], None, ['Bengali PROJECTED', 'Malayalam PROJECTED', 'Tamil PROJECTED', 'German (Akbik et al. 2015)'], ['Bengali PROJECTED', 'Malayalam PROJECTED', 'Hindi (Akbik et al. 2015)'], None]
1
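The partial vs exact rows differ only in how strictly predicted argument spans must line up with gold spans; a common operationalisation is sketched below as an assumption, and the exact criterion used by Akbik et al. (2015) may differ.

def exact_match(gold, pred):
    """Spans as (start, end) token offsets, end exclusive; exact = identical boundaries."""
    return gold == pred

def partial_match(gold, pred):
    """Partial = the spans overlap in at least one token."""
    return max(gold[0], pred[0]) < min(gold[1], pred[1])

print(exact_match((3, 7), (3, 7)))     # True
print(partial_match((3, 7), (5, 10)))  # True: tokens 5 and 6 overlap
print(partial_match((3, 7), (8, 10)))  # False: no overlap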
D16-1104table_2
Performance of unigrams versus our similarity-based features using embeddings from Word2Vec
3
[['Features', 'Baseline', 'Unigrams'], ['Features', 'Baseline', 'S'], ['Features', 'Baseline', 'WS'], ['Features', 'Baseline', 'Both']]
1
[['P'], ['R'], ['F']]
[['67.2', '78.8', '72.53'], ['64.6', '75.2', '69.49'], ['67.6', '51.2', '58.26'], ['67', '52.8', '59.05']]
column
['P', 'R', 'F']
['Features']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F</th> </tr> </thead> <tbody> <tr> <td>Features || Baseline || Unigrams</td> <td>67.2</td> <td>78.8</td> <td>72.53</td> </tr> <tr> <td>Features || Baseline || S</td> <td>64.6</td> <td>75.2</td> <td>69.49</td> </tr> <tr> <td>Features || Baseline || WS</td> <td>67.6</td> <td>51.2</td> <td>58.26</td> </tr> <tr> <td>Features || Baseline || Both</td> <td>67</td> <td>52.8</td> <td>59.05</td> </tr> </tbody></table>
Table 2
table_2
D16-1104
3
emnlp2016
6 Results. Table 2 shows performance of sarcasm detection when our word embedding-based features are used on their own i.e, not as augmented features. The embedding in this case is Word2Vec. The four rows show baseline sets of features: unigrams, unweighted similarity using word embeddings (S), weighted similarity using word embeddings (WS) and both (i.e., unweighted plus weighted similarities using word embeddings). Using only unigrams as features gives a F-score of 72.53%, while only unweighted and weighted features gives F-score of 69.49% and 58.26% respectively. This validates our intuition that word embedding-based features alone are not sufficient, and should be augmented with other features.
[2, 1, 2, 2, 1, 2]
['6 Results.', 'Table 2 shows performance of sarcasm detection when our word embedding-based features are used on their own i.e, not as augmented features.', 'The embedding in this case is Word2Vec.', 'The four rows show baseline sets of features: unigrams, unweighted similarity using word embeddings (S), weighted similarity using word embeddings (WS) and both (i.e., unweighted plus weighted similarities using word embeddings).', 'Using only unigrams as features gives a F-score of 72.53%, while only unweighted and weighted features gives F-score of 69.49% and 58.26% respectively.', 'This validates our intuition that word embedding-based features alone are not sufficient, and should be augmented with other features.']
[None, None, None, ['Features'], ['Unigrams', 'F', 'S', 'WS'], ['S', 'WS']]
1
D16-1104table_3
Performance obtained on augmenting word embedding features to features from four prior works, for four word embeddings; L: Liebrecht et al. (2013), G: González-Ibáñez et al. (2011a), B: Buschmeier et al. (2014), J: Joshi et al. (2015)
1
[['L'], ['+S'], ['+WS'], ['+S+WS'], ['G'], ['+S'], ['+WS'], ['+S+WS'], ['B'], ['+S'], ['+WS'], ['+S+WS'], ['J'], ['+S'], ['+WS'], ['+S+WS']]
2
[['LSA', 'P'], ['LSA', 'R'], ['LSA', 'F'], ['GloVe', 'P'], ['GloVe', 'R'], ['GloVe', 'F'], ['Dependency Weights', 'P'], ['Dependency Weights', 'R'], ['Dependency Weights', 'F'], ['Word2Vec', 'P'], ['Word2Vec', 'R'], ['Word2Vec', 'F']]
[['73', '79', '75.8', '73', '79', '75.8', '73', '79', '75.8', '73', '79', '75.8'], ['81.8', '78.2', '79.95', '81.8', '79.2', '80.47', '81.8', '78.8', '80.27', '80.4', '80', '80.2'], ['76.2', '79.8', '77.9', '76.2', '79.6', '77.86', '81.4', '80.8', '81.09', '80.8', '78.6', '79.68'], ['77.6', '79.8', '78.68', '74', '79.4', '76.60', '82', '80.4', '81.19', '81.6', '78.2', '79.86'], ['84.8', '73.8', '78.91', '84.8', '73.8', '78.91', '84.8', '73.8', '78.91', '84.8', '73.8', '78.91'], ['84.2', '74.4', '79', '84', '72.6', '77.8', '84.4', '72', '77.7', '84', '72.8', '78'], ['84.4', '73.6', '78.63', '84', '75.2', '79.35', '84.4', '72.6', '78.05', '83.8', '70.2', '76.4'], ['84.2', '73.6', '78.54', '84', '74', '78.68', '84.2', '72.2', '77.73', '84', '72.8', '78'], ['81.6', '72.2', '76.61', '81.6', '72.2', '76.61', '81.6', '72.2', '76.61', '81.6', '72.2', '76.61'], ['78.2', '75.6', '76.87', '80.4', '76.2', '78.24', '81.2', '74.6', '77.76', '81.4', '72.6', '76.74'], ['75.8', '77.2', '76.49', '76.6', '77', '76.79', '76.2', '76.4', '76.29', '81.6', '73.4', '77.28'], ['74.8', '77.4', '76.07', '76.2', '78.2', '77.18', '75.6', '78.8', '77.16', '81', '75.4', '78.09'], ['85.2', '74.4', '79.43', '85.2', '74.4', '79.43', '85.2', '74.4', '79.43', '85.2', '74.4', '79.43'], ['84.8', '73.8', '78.91', '85.6', '74.8', '79.83', '85.4', '74.4', '79.52', '85.4', '74.6', '79.63'], ['85.6', '75.2', '80.06', '85.4', '72.6', '78.48', '85.4', '73.4', '78.94', '85.6', '73.4', '79.03'], ['84.8', '73.6', '78.8', '85.8', '75.4', '80.26', '85.6', '74.4', '79.6', '85.2', '73.2', '78.74']]
column
['P', 'R', 'F', 'P', 'R', 'F', 'P', 'R', 'F', 'P', 'R', 'F']
['LSA', 'GloVe', 'Dependency Weights', 'Word2Vec']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LSA || P</th> <th>LSA || R</th> <th>LSA || F</th> <th>GloVe || P</th> <th>GloVe || R</th> <th>GloVe || F</th> <th>Dependency Weights || P</th> <th>Dependency Weights || R</th> <th>Dependency Weights || F</th> <th>Word2Vec || P</th> <th>Word2Vec || R</th> <th>Word2Vec || F</th> </tr> </thead> <tbody> <tr> <td>L</td> <td>73</td> <td>79</td> <td>75.8</td> <td>73</td> <td>79</td> <td>75.8</td> <td>73</td> <td>79</td> <td>75.8</td> <td>73</td> <td>79</td> <td>75.8</td> </tr> <tr> <td>+S</td> <td>81.8</td> <td>78.2</td> <td>79.95</td> <td>81.8</td> <td>79.2</td> <td>80.47</td> <td>81.8</td> <td>78.8</td> <td>80.27</td> <td>80.4</td> <td>80</td> <td>80.2</td> </tr> <tr> <td>+WS</td> <td>76.2</td> <td>79.8</td> <td>77.9</td> <td>76.2</td> <td>79.6</td> <td>77.86</td> <td>81.4</td> <td>80.8</td> <td>81.09</td> <td>80.8</td> <td>78.6</td> <td>79.68</td> </tr> <tr> <td>+S+WS</td> <td>77.6</td> <td>79.8</td> <td>78.68</td> <td>74</td> <td>79.4</td> <td>76.60</td> <td>82</td> <td>80.4</td> <td>81.19</td> <td>81.6</td> <td>78.2</td> <td>79.86</td> </tr> <tr> <td>G</td> <td>84.8</td> <td>73.8</td> <td>78.91</td> <td>84.8</td> <td>73.8</td> <td>78.91</td> <td>84.8</td> <td>73.8</td> <td>78.91</td> <td>84.8</td> <td>73.8</td> <td>78.91</td> </tr> <tr> <td>+S</td> <td>84.2</td> <td>74.4</td> <td>79</td> <td>84</td> <td>72.6</td> <td>77.8</td> <td>84.4</td> <td>72</td> <td>77.7</td> <td>84</td> <td>72.8</td> <td>78</td> </tr> <tr> <td>+WS</td> <td>84.4</td> <td>73.6</td> <td>78.63</td> <td>84</td> <td>75.2</td> <td>79.35</td> <td>84.4</td> <td>72.6</td> <td>78.05</td> <td>83.8</td> <td>70.2</td> <td>76.4</td> </tr> <tr> <td>+S+WS</td> <td>84.2</td> <td>73.6</td> <td>78.54</td> <td>84</td> <td>74</td> <td>78.68</td> <td>84.2</td> <td>72.2</td> <td>77.73</td> <td>84</td> <td>72.8</td> <td>78</td> </tr> <tr> <td>B</td> <td>81.6</td> <td>72.2</td> <td>76.61</td> <td>81.6</td> <td>72.2</td> <td>76.61</td> <td>81.6</td> <td>72.2</td> <td>76.61</td> <td>81.6</td> <td>72.2</td> <td>76.61</td> </tr> <tr> <td>+S</td> <td>78.2</td> <td>75.6</td> <td>76.87</td> <td>80.4</td> <td>76.2</td> <td>78.24</td> <td>81.2</td> <td>74.6</td> <td>77.76</td> <td>81.4</td> <td>72.6</td> <td>76.74</td> </tr> <tr> <td>+WS</td> <td>75.8</td> <td>77.2</td> <td>76.49</td> <td>76.6</td> <td>77</td> <td>76.79</td> <td>76.2</td> <td>76.4</td> <td>76.29</td> <td>81.6</td> <td>73.4</td> <td>77.28</td> </tr> <tr> <td>+S+WS</td> <td>74.8</td> <td>77.4</td> <td>76.07</td> <td>76.2</td> <td>78.2</td> <td>77.18</td> <td>75.6</td> <td>78.8</td> <td>77.16</td> <td>81</td> <td>75.4</td> <td>78.09</td> </tr> <tr> <td>J</td> <td>85.2</td> <td>74.4</td> <td>79.43</td> <td>85.2</td> <td>74.4</td> <td>79.43</td> <td>85.2</td> <td>74.4</td> <td>79.43</td> <td>85.2</td> <td>74.4</td> <td>79.43</td> </tr> <tr> <td>+S</td> <td>84.8</td> <td>73.8</td> <td>78.91</td> <td>85.6</td> <td>74.8</td> <td>79.83</td> <td>85.4</td> <td>74.4</td> <td>79.52</td> <td>85.4</td> <td>74.6</td> <td>79.63</td> </tr> <tr> <td>+WS</td> <td>85.6</td> <td>75.2</td> <td>80.06</td> <td>85.4</td> <td>72.6</td> <td>78.48</td> <td>85.4</td> <td>73.4</td> <td>78.94</td> <td>85.6</td> <td>73.4</td> <td>79.03</td> </tr> <tr> <td>+S+WS</td> <td>84.8</td> <td>73.6</td> <td>78.8</td> <td>85.8</td> <td>75.4</td> <td>80.26</td> <td>85.6</td> <td>74.4</td> <td>79.6</td> <td>85.2</td> <td>73.2</td> <td>78.74</td> </tr> </tbody></table>
Table 3
table_3
D16-1104
4
emnlp2016
Table 3 shows results for four kinds of word embeddings. All entries in the tables are higher than the simple unigrams baseline, i.e., F-score for each of the four is higher than unigrams - highlighting that these are better features for sarcasm detection than simple unigrams. Values in bold indicate the best F-score for a given prior work-embedding type combination. In case of Liebrecht et al. (2013) for Word2Vec, the overall improvement in F-score is 4%. Precision increases by 8% while recall remains nearly unchanged. For features given in González-Ibáñez et al. (2011a), there is a negligible degradation of 0.91% when word embedding-based features based on Word2Vec are used. For Buschmeier et al. (2014) for Word2Vec, we observe an improvement in F-score from 76.61% to 78.09%. Precision remains nearly unchanged while recall increases. In case of Joshi et al. (2015) and Word2Vec, we observe a slight improvement of 0.20% when unweighted (S) features are used. This shows that word embedding-based features are useful, across four past works for Word2Vec. Table 3 also shows that the improvement holds across the four word embedding types as well. The maximum improvement is observed in case of Liebrecht et al. (2013). It is around 4% in case of LSA, 5% in case of GloVe, 6% in case of Dependency weight-based and 4% in case of Word2Vec. These improvements are not directly comparable because the four embeddings have different vocabularies (since they are trained on different datasets) and vocabulary sizes, their results cannot be directly compared.
[1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2]
['Table 3 shows results for four kinds of word embeddings.', 'All entries in the tables are higher than the simple unigrams baseline, i.e., F-score for each of the four is higher than unigrams - highlighting that these are better features for sarcasm detection than simple unigrams.', 'Values in bold indicate the best F-score for a given prior work-embedding type combination.', 'In case of Liebrecht et al. (2013) for Word2Vec, the overall improvement in F-score is 4%.', 'Precision increases by 8% while recall remains nearly unchanged.', 'For features given in González-Ibáñez et al. (2011a), there is a negligible degradation of 0.91% when word embedding-based features based on Word2Vec are used.', 'For Buschmeier et al. (2014) for Word2Vec, we observe an improvement in F-score from 76.61% to 78.09%.', 'Precision remains nearly unchanged while recall increases.', 'In case of Joshi et al. (2015) and Word2Vec, we observe a slight improvement of 0.20% when unweighted (S) features are used.', 'This shows that word embedding-based features are useful, across four past works for Word2Vec.', 'Table 3 also shows that the improvement holds across the four word embedding types as well.', 'The maximum improvement is observed in case of Liebrecht et al. (2013).', 'It is around 4% in case of LSA, 5% in case of GloVe, 6% in case of Dependency weight-based and 4% in case of Word2Vec.', 'These improvements are not directly comparable because the four embeddings have different vocabularies (since they are trained on different datasets) and vocabulary sizes, their results cannot be directly compared.']
[None, ['F'], ['F'], ['L', 'F', 'Word2Vec'], ['P'], ['G', 'Word2Vec'], ['B', 'Word2Vec', 'F'], ['P', 'R'], ['J', 'Word2Vec', '+S'], ['Word2Vec'], ['LSA', 'GloVe', 'Dependency Weights', 'Word2Vec'], ['L'], ['LSA', 'GloVe', 'Dependency Weights', 'Word2Vec'], None]
1
D16-1108table_4
Spearman rank correlation of thread s˜i,j with karma scores. (*) indicates statistical significance (p < 0.05).
2
[['subreddit', 'askmen'], ['subreddit', 'askscience'], ['subreddit', 'askwomen'], ['subreddit', 'atheism'], ['subreddit', 'chgmyvw'], ['subreddit', 'fitness'], ['subreddit', 'politics'], ['subreddit', 'worldnews']]
1
[['hyb-500.30'], ['word only'], ['topic-100']]
[['0.392*', '0.222*', '0.055'], ['0.321*', '-0.110', '-0.166*'], ['0.501*', '0.388*', '0.005'], ['0.137*', '-0.229*', '-0.251'], ['0.167*', '-0.121*', '-0.306*'], ['0.130*', '0.017', '-0.313*'], ['0.533*', '0.341*', '0.011'], ['0.374*', '0.148*', '-0.277*']]
column
['correlation', 'correlation', 'correlation']
['hyb-500.30']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>hyb-500.30</th> <th>word only</th> <th>topic-100</th> </tr> </thead> <tbody> <tr> <td>subreddit || askmen</td> <td>0.392*</td> <td>0.222*</td> <td>0.055</td> </tr> <tr> <td>subreddit || askscience</td> <td>0.321*</td> <td>-0.110</td> <td>-0.166*</td> </tr> <tr> <td>subreddit || askwomen</td> <td>0.501*</td> <td>0.388*</td> <td>0.005</td> </tr> <tr> <td>subreddit || atheism</td> <td>0.137*</td> <td>-0.229*</td> <td>-0.251</td> </tr> <tr> <td>subreddit || chgmyvw</td> <td>0.167*</td> <td>-0.121*</td> <td>-0.306*</td> </tr> <tr> <td>subreddit || fitness</td> <td>0.130*</td> <td>0.017</td> <td>-0.313*</td> </tr> <tr> <td>subreddit || politics</td> <td>0.533*</td> <td>0.341*</td> <td>0.011</td> </tr> <tr> <td>subreddit || worldnews</td> <td>0.374*</td> <td>0.148*</td> <td>-0.277*</td> </tr> </tbody></table>
Table 4
table_4
D16-1108
5
emnlp2016
We compute a normalized community similarity score s˜i,j = si,j − si,m, where si,m is the corresponding score from the subreddit merged others. The correlation between s˜i,j and community feedback is reported for three models in Table 4 for the thread level, and in Table 5 for the user level. On the thread level, the hyb-500.30 style model consistently finds positive, statistically significant, correlation between the post’s stylistic similarity score and its karma. This result suggests that language style adaptation does contribute to being well-received by the community. None of the other models explored in the previous section had this property, and for the topic models the correlation is mostly negative. On the user level, all correlations between a user’s k-index and their style/topic match are statistically significant, though the hyb-500.30 style model shows more positive correlation than other models. In both cases, the word_only model gives results between the style and topic models. The hyb-15k model has results that are similar to the word_only model, and the tag_only model has mostly negative correlation.
[2, 1, 1, 1, 1, 0, 0, 0]
['We compute a normalized community similarity score s˜i,j = si,j − si,m, where si,m is the corresponding score from the subreddit merged others.', 'The correlation between s˜i,j and community feedback is reported for three models in Table 4 for the thread level, and in Table 5 for the user level.', 'On the thread level, the hyb-500.30 style model consistently finds positive, statistically significant, correlation between the post’s stylistic similarity score and its karma.', 'This result suggests that language style adaptation does contribute to being well-received by the community.', 'None of the other models explored in the previous section had this property, and for the topic models the correlation is mostly negative.', 'On the user level, all correlations between a user’s k-index and their style/topic match are statistically significant, though the hyb-500.30 style model shows more positive correlation than other models.', 'In both cases, the word_only model gives results between the style and topic models.', 'The hyb-15k model has results that are similar to the word_only model, and the tag_only model has mostly negative correlation.']
[None, None, ['hyb-500.30'], ['hyb-500.30'], None, None, None, None]
1
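The record above reports Spearman rank correlations between a post's normalized style-similarity score and its karma. A minimal sketch of that computation is given below, assuming the per-post scores and karma values for one subreddit are already available as plain lists (the numbers are invented for illustration); scipy.stats.spearmanr returns both the coefficient and the p-value behind the (*) significance marking.

```python
from scipy.stats import spearmanr

# Hypothetical per-post values for one subreddit: the normalized style
# similarity score s~_{i,j} = s_{i,j} - s_{i,m}, and the post's karma.
similarity_scores = [0.12, -0.03, 0.25, 0.08, -0.10, 0.19, 0.02, 0.30]
karma = [5, 1, 14, 3, 0, 9, 2, 21]

# Spearman's rho correlates the ranks of the two variables; the p-value
# backs the significance marking used in the table.
rho, p_value = spearmanr(similarity_scores, karma)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
```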
D16-1122table_2
Event detection performance (nDCG; higher is better) using thirty-nine well-known events that took place between 1973 and 1978. Capsule outperforms all four baseline methods.
2
[['Method', 'Capsule (this paper)'], ['Method', 'term-count deviation + tf-idf (equation (7))'], ['Method', 'term-count deviation (equation (6))'], ['Method', 'random'], ['Method', '“event-only” Capsule (this paper)']]
1
[['nDCG']]
[['0.693'], ['0.652'], ['0.642'], ['0.557'], ['0.426']]
column
['nDCG']
['Capsule (this paper)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>nDCG</th> </tr> </thead> <tbody> <tr> <td>Method || Capsule (this paper)</td> <td>0.693</td> </tr> <tr> <td>Method || term-count deviation + tf-idf (equation (7))</td> <td>0.652</td> </tr> <tr> <td>Method || term-count deviation (equation (6))</td> <td>0.642</td> </tr> <tr> <td>Method || random</td> <td>0.557</td> </tr> <tr> <td>Method || “event-only” Capsule (this paper)</td> <td>0.426</td> </tr> </tbody></table>
Table 2
table_2
D16-1122
8
emnlp2016
Specifically, we used each method to construct a ranked list of time intervals. Then, for each method, we computed the discounted cumulative gain (DCG), which, in this context, is equivalent to computing Σ_{e=1}^{39} 1 / log(rank(e; L_T^method)) (9), where L_T^method is the method's ranked list of time intervals and rank(e; L_T^method) is the rank of the e-th well-known event in L_T^method. Finally, we divided the DCG by the ideal DCG (i.e., Σ_{e=1}^{39} 1 / log(e)) to obtain the normalized DCG (nDCG). Table 2 shows that Capsule outperforms all four baseline methods.
[0, 0, 0, 1]
['Specifically, we used each method to construct a ranked list of time intervals.', 'Then, for each method, we computed the discounted cumulative gain (DCG), which, in this context, is equivalent to computing Σ_{e=1}^{39} 1 / log(rank(e; L_T^method)) (9), where L_T^method is the method's ranked list of time intervals and rank(e; L_T^method) is the rank of the e-th well-known event in L_T^method.', 'Finally, we divided the DCG by the ideal DCG (i.e., Σ_{e=1}^{39} 1 / log(e)) to obtain the normalized DCG (nDCG).', 'Table 2 shows that Capsule outperforms all four baseline methods.']
[None, None, None, ['Capsule (this paper)', 'Method']]
1
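The description in this record defines DCG over thirty-nine well-known events and normalizes it by the ideal DCG. The sketch below illustrates that computation; the event ranks are hypothetical, and the log(1 + rank) discount is the common DCG convention rather than a claim about the exact form of the paper's Equation 9, whose extracted text is ambiguous on this point.

```python
import math

def ndcg(event_ranks, n_events=39):
    """Normalized discounted cumulative gain over a fixed set of events.

    event_ranks[e] is the rank (1 = top) at which the e-th well-known
    event appears in a method's ranked list of time intervals.
    """
    dcg = sum(1.0 / math.log(1 + r) for r in event_ranks)
    ideal_dcg = sum(1.0 / math.log(1 + e) for e in range(1, n_events + 1))
    return dcg / ideal_dcg

# Hypothetical ranks for 39 events under some method (not from the paper).
ranks = list(range(2, 41))
print(round(ndcg(ranks), 3))
```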
D16-1129table_2
Results of multi-label classification from Experiment 1. Hamming-loss and One-Error are shown for two systems – Bidirectional LSTM and Bidirectional LSTM with Convolution and Attention.
2
[['Debate', 'Ban plastic water bottles?'], ['Debate', 'Christianity or Atheism'], ['Debate', 'Evolution vs. Creation'], ['Debate', 'Firefox vs. Internet Explorer'], ['Debate', 'Gay marriage: right or wrong?'], ['Debate', 'Should parents use spanking?'], ['Debate', 'If your spouse committed murder...'], ['Debate', 'India has the potential to lead the world'], ['Debate', 'Is it better to have a lousy father or to be fatherless?'], ['Debate', 'Is porn wrong?'], ['Debate', 'Is the school uniform a good or bad idea?'], ['Debate', 'Pro-choice vs. Pro-life'], ['Debate', 'Should Physical Education be mandatory?'], ['Debate', 'TV is better than books'], ['Debate', 'Personal pursuit or common good?'], ['Debate', 'W. Farquhar ought to be honored...'], ['Debate', 'Average']]
2
[['BLSTM', 'H-loss'], ['BLSTM', 'one-E'], ['BLSTM/CNN/ATT', 'H-loss'], ['BLSTM/CNN/ATT', 'one-E']]
[['0.092', '0.283', '0.090', '0.305'], ['0.105', '0.212', '0.105', '0.218'], ['0.093', '0.196', '0.094', '0.234'], ['0.080', '0.312', '0.078', '0.345'], ['0.095', '0.243', '0.094', '0.270'], ['0.082', '0.312', '0.083', '0.344'], ['0.094', '0.297', '0.094', '0.272'], ['0.088', '0.294', '0.086', '0.322'], ['0.086', '0.367', '0.085', '0.381'], ['0.098', '0.278', '0.100', '0.270'], ['0.081', '0.279', '0.077', '0.406'], ['0.095', '0.218', '0.098', '0.218'], ['0.095', '0.273', '0.095', '0.277'], ['0.091', '0.265', '0.087', '0.300'], ['0.095', '0.328', '0.094', '0.343'], ['0.054', '0.528', '0.052', '0.570'], ['0.089', '0.293', '0.088', '0.317']]
column
['H-loss', 'one-E', 'H-loss', 'one-E']
['BLSTM', 'BLSTM/CNN/ATT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLSTM || H-loss</th> <th>BLSTM || one-E</th> <th>BLSTM/CNN/ATT || H-loss</th> <th>BLSTM/CNN/ATT || one-E</th> </tr> </thead> <tbody> <tr> <td>Debate || Ban plastic water bottles?</td> <td>0.092</td> <td>0.283</td> <td>0.090</td> <td>0.305</td> </tr> <tr> <td>Debate || Christianity or Atheism</td> <td>0.105</td> <td>0.212</td> <td>0.105</td> <td>0.218</td> </tr> <tr> <td>Debate || Evolution vs. Creation</td> <td>0.093</td> <td>0.196</td> <td>0.094</td> <td>0.234</td> </tr> <tr> <td>Debate || Firefox vs. Internet Explorer</td> <td>0.080</td> <td>0.312</td> <td>0.078</td> <td>0.345</td> </tr> <tr> <td>Debate || Gay marriage: right or wrong?</td> <td>0.095</td> <td>0.243</td> <td>0.094</td> <td>0.270</td> </tr> <tr> <td>Debate || Should parents use spanking?</td> <td>0.082</td> <td>0.312</td> <td>0.083</td> <td>0.344</td> </tr> <tr> <td>Debate || If your spouse committed murder...</td> <td>0.094</td> <td>0.297</td> <td>0.094</td> <td>0.272</td> </tr> <tr> <td>Debate || India has the potential to lead the world</td> <td>0.088</td> <td>0.294</td> <td>0.086</td> <td>0.322</td> </tr> <tr> <td>Debate || Is it better to have a lousy father or to be fatherless?</td> <td>0.086</td> <td>0.367</td> <td>0.085</td> <td>0.381</td> </tr> <tr> <td>Debate || Is porn wrong?</td> <td>0.098</td> <td>0.278</td> <td>0.100</td> <td>0.270</td> </tr> <tr> <td>Debate || Is the school uniform a good or bad idea?</td> <td>0.081</td> <td>0.279</td> <td>0.077</td> <td>0.406</td> </tr> <tr> <td>Debate || Pro-choice vs. Pro-life</td> <td>0.095</td> <td>0.218</td> <td>0.098</td> <td>0.218</td> </tr> <tr> <td>Debate || Should Physical Education be mandatory?</td> <td>0.095</td> <td>0.273</td> <td>0.095</td> <td>0.277</td> </tr> <tr> <td>Debate || TV is better than books</td> <td>0.091</td> <td>0.265</td> <td>0.087</td> <td>0.300</td> </tr> <tr> <td>Debate || Personal pursuit or common good?</td> <td>0.095</td> <td>0.328</td> <td>0.094</td> <td>0.343</td> </tr> <tr> <td>Debate || W. Farquhar ought to be honored...</td> <td>0.054</td> <td>0.528</td> <td>0.052</td> <td>0.570</td> </tr> <tr> <td>Debate || Average</td> <td>0.089</td> <td>0.293</td> <td>0.088</td> <td>0.317</td> </tr> </tbody></table>
Table 2
table_2
D16-1129
6
emnlp2016
Results from Table 2 do not show significant differences between the two models. Putting the one-error numbers into human performance context can be done only indirectly, as the data validation presented in Section 3.4 had a different set-up. Here we can see that the error rate of the most confident predicted label is about 30%, while humans performed similarly by choosing from two different label sets in a binary setting, so their task was inherently harder.
[1, 2, 1]
['Results from Table 2 do not show significant differences between the two models.', 'Putting the one-error numbers into human performance context can be done only indirectly, as the data validation presented in Section 3.4 had a different set-up.', 'Here we can see that the error rate of the most confident predicted label is about 30%, while humans performed similarly by choosing from two different label sets in a binary setting, so their task was inherently harder.']
[['BLSTM', 'BLSTM/CNN/ATT'], None, ['BLSTM', 'BLSTM/CNN/ATT']]
1
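The record above reports Hamming loss (H-loss) and one-error (one-E) for multi-label classification. A small self-contained sketch of both metrics is given below, under the assumption that gold labels are binary vectors and the system produces per-label confidence scores; all values are invented for illustration.

```python
def hamming_loss(gold, pred):
    """Fraction of label slots where the binarized prediction disagrees with gold."""
    mismatches = sum(g != p for gs, ps in zip(gold, pred) for g, p in zip(gs, ps))
    return mismatches / (len(gold) * len(gold[0]))

def one_error(gold, scores):
    """Fraction of instances whose single most confident label is not a gold label."""
    errors = 0
    for gs, ss in zip(gold, scores):
        top = max(range(len(ss)), key=lambda i: ss[i])
        errors += gs[top] != 1
    return errors / len(gold)

# Hypothetical 3 instances x 4 labels.
gold   = [[1, 0, 0, 1], [0, 1, 0, 0], [1, 1, 0, 0]]
pred   = [[1, 0, 1, 1], [0, 0, 0, 0], [1, 1, 0, 0]]
scores = [[0.9, 0.1, 0.6, 0.8], [0.2, 0.4, 0.3, 0.5], [0.7, 0.8, 0.1, 0.2]]
print(hamming_loss(gold, pred), one_error(gold, scores))
```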
D16-1132table_2
Results of intra-sentential subject zero anaphora resolution
3
[['Method', 'Ouchi et al. (ACL2015)', '-'], ['Method', 'Iida et al. (EMNLP2015)', '-'], ['Method', 'single column CNN (w/ position vec.)', '-'], ['Method', 'MCNN', 'BASE'], ['Method', 'MCNN', 'BASE+SURFSEQ'], ['Method', 'MCNN', 'BASE+DEPTREE'], ['Method', 'MCNN', 'BASE+SURFSEQ+DEPTREE'], ['Method', 'MCNN', 'BASE+SURFSEQ+PREDCONTEXT'], ['Method', 'MCNN', 'BASE+DEPTREE+PREDCONTEXT'], ['Method', 'MCNN', 'BASE+SURFSEQ+DEPTREE+PREDCONTEXT (Proposed)']]
1
[['#cols.'], ['Recall'], ['Precision'], ['F-score'], ['Avg.P']]
[['—', '0.539', '0.612', '0.573', '0.670'], ['—', '0.484', '0.357', '0.411', '—'], ['1', '0.365', '0.524', '0.430', '0.540'], ['1', '0.446', '0.394', '0.419', '0.448'], ['4', '0.458', '0.597', '0.518', '0.679'], ['5', '0.339', '0.688', '0.454', '0.690'], ['8', '0.417', '0.695', '0.521', '0.730'], ['7', '0.459', '0.631', '0.531', '0.702'], ['8', '0.298', '0.728', '0.422', '0.702'], ['11', '0.418', '0.704', '0.525', '0.732']]
column
['#cols.', 'Recall', 'Precision', 'F-score', 'Avg.P']
['MCNN', 'BASE+SURFSEQ+DEPTREE+PREDCONTEXT (Proposed)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>#cols.</th> <th>Recall</th> <th>Precision</th> <th>F-score</th> <th>Avg.P</th> </tr> </thead> <tbody> <tr> <td>Method || Ouchi et al. (ACL2015) || -</td> <td>—</td> <td>0.539</td> <td>0.612</td> <td>0.573</td> <td>0.670</td> </tr> <tr> <td>Method || Iida et al. (EMNLP2015) || -</td> <td>—</td> <td>0.484</td> <td>0.357</td> <td>0.411</td> <td>—</td> </tr> <tr> <td>Method || single column CNN (w/ position vec.) || -</td> <td>1</td> <td>0.365</td> <td>0.524</td> <td>0.430</td> <td>0.540</td> </tr> <tr> <td>Method || MCNN || BASE</td> <td>1</td> <td>0.446</td> <td>0.394</td> <td>0.419</td> <td>0.448</td> </tr> <tr> <td>Method || MCNN || BASE+SURFSEQ</td> <td>4</td> <td>0.458</td> <td>0.597</td> <td>0.518</td> <td>0.679</td> </tr> <tr> <td>Method || MCNN || BASE+DEPTREE</td> <td>5</td> <td>0.339</td> <td>0.688</td> <td>0.454</td> <td>0.690</td> </tr> <tr> <td>Method || MCNN || BASE+SURFSEQ+DEPTREE</td> <td>8</td> <td>0.417</td> <td>0.695</td> <td>0.521</td> <td>0.730</td> </tr> <tr> <td>Method || MCNN || BASE+SURFSEQ+PREDCONTEXT</td> <td>7</td> <td>0.459</td> <td>0.631</td> <td>0.531</td> <td>0.702</td> </tr> <tr> <td>Method || MCNN || BASE+DEPTREE+PREDCONTEXT</td> <td>8</td> <td>0.298</td> <td>0.728</td> <td>0.422</td> <td>0.702</td> </tr> <tr> <td>Method || MCNN || BASE+SURFSEQ+DEPTREE+PREDCONTEXT (Proposed)</td> <td>11</td> <td>0.418</td> <td>0.704</td> <td>0.525</td> <td>0.732</td> </tr> </tbody></table>
Table 2
table_2
D16-1132
8
emnlp2016
The results in Table 2 show that our method using all the column sets achieved the best average precision among the combination of column sets that include at least the BASE column set. This suggests that all of the clues introduced by our four column sets are effective for performance improvement. Table 2 also demonstrates that our method using all the column sets obtained better average precision than the strongest baseline, Ouchi’s method, in spite of an unfavorable condition for it. The results also show that our method with all of the column sets achieved a better F-score than Iida’s method and the single-column baseline. However, it achieved a lower F-score than Ouchi’s method. This was caused by the choice of different recall levels for computing the F-score. In contrast, the PR curves for these two methods in Figure 5 show that our method obtained higher precision than Ouchi’s method at all recall levels. Particularly, it got high precision in a wide range of recall levels (e.g., around 0.8 in precision at 0.25 in recall and around 0.7 in precision at 0.4 in recall), while the precision obtained by Ouchi’s method at 0.25 in recall was just around 0.65. We believe this difference becomes crucial when using the outputs of each method for developing accurate real-world NLP applications.
[1, 1, 1, 1, 1, 2, 0, 1, 2]
['The results in Table 2 show that our method using all the column sets achieved the best average precision among the combination of column sets that include at least the BASE column set.', 'This suggests that all of the clues introduced by our four column sets are effective for performance improvement.', 'Table 2 also demonstrates that our method using all the column sets obtained better average precision than the strongest baseline, Ouchi’s method, in spite of an unfavorable condition for it.', 'The results also show that our method with all of the column sets achieved a better F-score than Iida’s method and the single-column baseline.', 'However, it achieved a lower F-score than Ouchi’s method.', 'This was caused by the choice of different recall levels for computing the F-score.', 'In contrast, the PR curves for these two methods in Figure 5 show that our method obtained higher precision than Ouchi’s method at all recall levels.', 'Particularly, it got high precision in a wide range of recall levels (e.g., around 0.8 in precision at 0.25 in recall and around 0.7 in precision at 0.4 in recall), while the precision obtained by Ouchi’s method at 0.25 in recall was just around 0.65.', 'We believe this difference becomes crucial when using the outputs of each method for developing accurate real-world NLP applications.']
[['MCNN'], ['BASE+SURFSEQ+DEPTREE+PREDCONTEXT (Proposed)'], ['BASE+SURFSEQ+DEPTREE+PREDCONTEXT (Proposed)', 'Precision', 'Ouchi et al. (ACL2015)'], ['BASE+SURFSEQ+DEPTREE+PREDCONTEXT (Proposed)', 'F-score', 'Iida et al. (EMNLP2015)', 'single column CNN (w/ position vec.)'], ['BASE+SURFSEQ+DEPTREE+PREDCONTEXT (Proposed)', 'F-score', 'Ouchi et al. (ACL2015)'], ['F-score'], None, ['Recall', 'Precision', 'Ouchi et al. (ACL2015)'], None]
1
D16-1136table_4
Spearman’s rank correlation for monolingual similarity measurement on 3 datasets WS-de (353 pairs), WS-en (353 pairs) and RW-en (2034 pairs). We compare against 5 baseline crosslingual word embeddings. The best CLWE performance is bold. For reference, we add the monolingual CBOW with and without embeddings combination, Yih and Qazvinian (2012) and Shazeer et al. (2016) which represents the monolingual state-of-the-art results for WS-en and RW-en.
3
[['Model', 'Baselines', 'Klementiev et al. (2012)'], ['Model', 'Baselines', 'Chandar A P et al. (2014)'], ['Model', 'Baselines', 'Hermann and Blunsom (2014)'], ['Model', 'Baselines', 'Luong et al. (2015)'], ['Model', 'Baselines', 'Gouws and Sogaard (2015)'], ['Model', 'Mono', 'CBOW'], ['Model', 'Mono', '+combine'], ['Model', 'Mono', 'Yih and Qazvinian (2012)'], ['Model', 'Mono', 'Shazeer et al. (2016)'], ['Model', 'Ours', 'Our joint-model'], ['Model', 'Ours', '+combine']]
1
[['WS-de'], ['WS-en'], ['RW-en']]
[['23.8', '13.2', '7.3'], ['34.6', '39.8', '20.5'], ['28.3', '19.8', '13.6'], ['47.4', '49.3', '25.3'], ['67.4', '71.8', '31.0'], ['62.2', '70.3', '42.7'], ['65.8', '74.1', '43.1'], ['-', '81.0', '-'], ['-', '74.8', '48.3'], ['59.3', '68.6', '38.1'], ['71.1', '76.2', '44.0']]
column
['correlation', 'correlation', 'correlation']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WS-de</th> <th>WS-en</th> <th>RW-en</th> </tr> </thead> <tbody> <tr> <td>Model || Baselines || Klementiev et al. (2012)</td> <td>23.8</td> <td>13.2</td> <td>7.3</td> </tr> <tr> <td>Model || Baselines || Chandar A P et al. (2014)</td> <td>34.6</td> <td>39.8</td> <td>20.5</td> </tr> <tr> <td>Model || Baselines || Hermann and Blunsom (2014)</td> <td>28.3</td> <td>19.8</td> <td>13.6</td> </tr> <tr> <td>Model || Baselines || Luong et al. (2015)</td> <td>47.4</td> <td>49.3</td> <td>25.3</td> </tr> <tr> <td>Model || Baselines || Gouws and Sogaard (2015)</td> <td>67.4</td> <td>71.8</td> <td>31.0</td> </tr> <tr> <td>Model || Mono || CBOW</td> <td>62.2</td> <td>70.3</td> <td>42.7</td> </tr> <tr> <td>Model || Mono || +combine</td> <td>65.8</td> <td>74.1</td> <td>43.1</td> </tr> <tr> <td>Model || Mono || Yih and Qazvinian (2012)</td> <td>-</td> <td>81.0</td> <td>-</td> </tr> <tr> <td>Model || Mono || Shazeer et al. (2016)</td> <td>-</td> <td>74.8</td> <td>48.3</td> </tr> <tr> <td>Model || Ours || Our joint-model</td> <td>59.3</td> <td>68.6</td> <td>38.1</td> </tr> <tr> <td>Model || Ours || +combine</td> <td>71.1</td> <td>76.2</td> <td>44.0</td> </tr> </tbody></table>
Table 4
table_4
D16-1136
7
emnlp2016
We train the model as described in §4, which is the combine embeddings setting from Table 3. Since the evaluation involves de and en word similarity, we train the CLWE for en-de pair. Table 4 shows the performance of our combined model compared with several baselines. Our combined model out-performed both Luong et al. (2015) and Gouws and Søgaard (2015) which represent the best published crosslingual embeddings trained on bitext and monolingual data respectively.
[0, 2, 1, 1]
['We train the model as described in §4, which is the combine embeddings setting from Table 3.', 'Since the evaluation involves de and en word similarity, we train the CLWE for en-de pair.', 'Table 4 shows the performance of our combined model compared with several baselines.', 'Our combined model out-performed both Luong et al. (2015) and Gouws and Søgaard (2015) which represent the best published crosslingual embeddings trained on bitext and monolingual data respectively.']
[None, None, ['Model'], ['Ours', '+combine', 'Luong et al. (2015)', 'Gouws and Sogaard (2015)']]
1
D16-1136table_6
CLDC performance for both en → de and de → en direction for many CLWE. The MT baseline uses phrase-based statistical machine translation to translate the source language to target language (Klementiev et al., 2012). The best scores are bold.
2
[['Model', 'MT baseline'], ['Model', 'Klementiev et al. (2012)'], ['Model', 'Gouws et al. (2015)'], ['Model', 'Kočiský et al. (2014)'], ['Model', 'Chandar A P et al. (2014)'], ['Model', 'Hermann and Blunsom (2014)'], ['Model', 'Luong et al. (2015)'], ['Model', 'Our model']]
1
[['en → de'], ['de → en']]
[['68.1', '67.4'], ['77.6', '71.1'], ['86.5', '75.0'], ['83.1', '75.4'], ['91.8', '74.2'], ['86.4', '74.7'], ['88.4', '80.3'], ['86.3', '76.8']]
column
['accuracy', 'accuracy']
['Our model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>en → de</th> <th>de → en</th> </tr> </thead> <tbody> <tr> <td>Model || MT baseline</td> <td>68.1</td> <td>67.4</td> </tr> <tr> <td>Model || Klementiev et al. (2012)</td> <td>77.6</td> <td>71.1</td> </tr> <tr> <td>Model || Gouws et al. (2015)</td> <td>86.5</td> <td>75.0</td> </tr> <tr> <td>Model || Kočiský et al. (2014)</td> <td>83.1</td> <td>75.4</td> </tr> <tr> <td>Model || Chandar A P et al. (2014)</td> <td>91.8</td> <td>74.2</td> </tr> <tr> <td>Model || Hermann and Blunsom (2014)</td> <td>86.4</td> <td>74.7</td> </tr> <tr> <td>Model || Luong et al. (2015)</td> <td>88.4</td> <td>80.3</td> </tr> <tr> <td>Model || Our model</td> <td>86.3</td> <td>76.8</td> </tr> </tbody></table>
Table 6
table_6
D16-1136
8
emnlp2016
Table 6 shows the CLDC results for various CLWE. Despite its simplicity, our model achieves competitive performance. Note that aside from our model, all other models in Table 6 use a large bitext (Europarl) which may not exist for many low-resource languages, limiting their applicability.
[1, 1, 2]
['Table 6 shows the CLDC results for various CLWE.', 'Despite its simplicity, our model achieves competitive performance.', 'Note that aside from our model, all other models in Table 6 use a large bitext (Europarl) which may not exist for many low-resource languages, limiting their applicability.']
[None, ['Our model'], ['Model']]
1
D16-1138table_3
Average accuracy over all the morphological inflection datasets. The baseline results for Seq2Seq variants are taken from (Faruqui et al., 2016).
2
[['Model', 'Seq2Seq'], ['Model', 'Seq2Seq w/ Attention'], ['Model', 'Adapted-seq2seq (FTND16)'], ['Model', 'uniSSNT+'], ['Model', 'biSSNT+']]
1
[['Avg. accuracy']]
[['79.08'], ['95.64'], ['96.20'], ['87.85'], ['95.32']]
column
['Avg. accuracy']
['biSSNT+']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Avg. accuracy</th> </tr> </thead> <tbody> <tr> <td>Model || Seq2Seq</td> <td>79.08</td> </tr> <tr> <td>Model || Seq2Seq w/ Attention</td> <td>95.64</td> </tr> <tr> <td>Model || Adapted-seq2seq (FTND16)</td> <td>96.20</td> </tr> <tr> <td>Model || uniSSNT+</td> <td>87.85</td> </tr> <tr> <td>Model || biSSNT+</td> <td>95.32</td> </tr> </tbody></table>
Table 3
table_3
D16-1138
7
emnlp2016
Table 3 gives the average accuracy of the uniSSNT+, biSSNT+, vanilla encoder-decoder, and attention-based models. The model with the best previous average result, denoted as adapted-seq2seq (FTND16) (Faruqui et al., 2016), is also included for comparison. Our biSSNT+ model outperforms the vanilla encoder-decoder by a large margin and almost matches the state-of-the-art result on this task. As mentioned earlier, a characteristic of these datasets is that the stems and their corresponding inflected forms mostly overlap. Compared to the vanilla encoder-decoder, our model is better at copying and finding correspondences between prefix, stem and suffix segments.
[1, 2, 1, 2, 2]
['Table 3 gives the average accuracy of the uniSSNT+, biSSNT+, vanilla encoder-decoder, and attention-based models.', 'The model with the best previous average result, denoted as adapted-seq2seq (FTND16) (Faruqui et al., 2016), is also included for comparison.', 'Our biSSNT+ model outperforms the vanilla encoder-decoder by a large margin and almost matches the state-of-the-art result on this task.', 'As mentioned earlier, a characteristic of these datasets is that the stems and their corresponding inflected forms mostly overlap.', 'Compared to the vanilla encoder-decoder, our model is better at copying and finding correspondences between prefix, stem and suffix segments.']
[['Seq2Seq', 'Seq2Seq w/ Attention', 'uniSSNT+', 'biSSNT+', 'Avg. accuracy'], ['Adapted-seq2seq (FTND16)'], ['biSSNT+', 'Seq2Seq'], None, ['uniSSNT+', 'biSSNT+', 'Seq2Seq']]
1
D16-1144table_4
Study of typing performance on the three datasets.
2
[['Typing Method', 'CLPL (Cour et al. 2011)'], ['Typing Method', 'PL-SVM (Nguyen and Caruana 2008)'], ['Typing Method', 'FIGER (Ling and Weld 2012)'], ['Typing Method', 'FIGER-Min (Gillick et al. 2014)'], ['Typing Method', 'HYENA (Yosef et al. 2012)'], ['Typing Method', 'HYENA-Min'], ['Typing Method', 'ClusType (Ren et al. 2015)'], ['Typing Method', 'HNM (Dong et al. 2015)'], ['Typing Method', 'DeepWalk (Perozzi et al. 2014)'], ['Typing Method', 'LINE (Tang et al. 2015b)'], ['Typing Method', 'PTE (Tang et al. 2015a)'], ['Typing Method', 'WSABIE (Yogatama et al. 2015)'], ['Typing Method', 'AFET-NoCo'], ['Typing Method', 'AFET-NoPa'], ['Typing Method', 'AFET-CoH'], ['Typing Method', 'AFET']]
2
[['Wiki', 'Acc'], ['Wiki', 'Ma-F1'], ['Wiki', 'Mi-F1'], ['OntoNotes', 'Acc'], ['OntoNotes', 'Ma-F1'], ['OntoNotes', 'Mi-F1'], ['BBN', 'Acc'], ['BBN', 'Ma-F1'], ['BBN', 'Mi-F1']]
[['0.162', '0.431', '0.411', '0.201', '0.347', '0.358', '0.438', '0.603', '0.536'], ['0.428', '0.613', '0.571', '0.225', '0.455', '0.437', '0.465', '0.648', '0.582'], ['0.474', '0.692', '0.655', '0.369', '0.578', '0.516', '0.467', '0.672', '0.612'], ['0.453', '0.691', '0.631', '0.373', '0.570', '0.509', '0.444', '0.671', '0.613'], ['0.288', '0.528', '0.506', '0.249', '0.497', '0.446', '0.523', '0.576', '0.587'], ['0.325', '0.566', '0.536', '0.295', '0.523', '0.470', '0.524', '0.582', '0.595'], ['0.274', '0.429', '0.448', '0.305', '0.468', '0.404', '0.441', '0.498', '0.573'], ['0.237', '0.409', '0.417', '0.122', '0.288', '0.272', '0.551', '0.591', '0.606'], ['0.414', '0.563', '0.511', '0.479', '0.669', '0.611', '0.586', '0.638', '0.628'], ['0.181', '0.480', '0.499', '0.436', '0.634', '0.578', '0.576', '0.687', '0.690'], ['0.405', '0.575', '0.526', '0.436', '0.630', '0.572', '0.604', '0.684', '0.695'], ['0.480', '0.679', '0.657', '0.404', '0.580', '0.527', '0.619', '0.670', '0.680'], ['0.526', '0.693', '0.654', '0.486', '0.652', '0.594', '0.655', '0.711', '0.716'], ['0.513', '0.675', '0.642', '0.463', '0.637', '0.591', '0.669', '0.715', '0.724'], ['0.433', '0.583', '0.551', '0.521', '0.680', '0.609', '0.657', '0.703', '0.712'], ['0.533', '0.693', '0.664', '0.551', '0.711', '0.647', '0.670', '0.727', '0.735']]
column
['Acc', 'Ma-F1', 'Mi-F1', 'Acc', 'Ma-F1', 'Mi-F1', 'Acc', 'Ma-F1', 'Mi-F1']
['AFET']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Wiki || Acc</th> <th>Wiki || Ma-F1</th> <th>Wiki || Mi-F1</th> <th>OntoNotes || Acc</th> <th>OntoNotes || Ma-F1</th> <th>OntoNotes || Mi-F1</th> <th>BBN || Acc</th> <th>BBN || Ma-F1</th> <th>BBN || Mi-F1</th> </tr> </thead> <tbody> <tr> <td>Typing Method || CLPL (Cour et al. 2011)</td> <td>0.162</td> <td>0.431</td> <td>0.411</td> <td>0.201</td> <td>0.347</td> <td>0.358</td> <td>0.438</td> <td>0.603</td> <td>0.536</td> </tr> <tr> <td>Typing Method || PL-SVM (Nguyen and Caruana 2008)</td> <td>0.428</td> <td>0.613</td> <td>0.571</td> <td>0.225</td> <td>0.455</td> <td>0.437</td> <td>0.465</td> <td>0.648</td> <td>0.582</td> </tr> <tr> <td>Typing Method || FIGER (Ling and Weld 2012)</td> <td>0.474</td> <td>0.692</td> <td>0.655</td> <td>0.369</td> <td>0.578</td> <td>0.516</td> <td>0.467</td> <td>0.672</td> <td>0.612</td> </tr> <tr> <td>Typing Method || FIGER-Min (Gillick et al. 2014)</td> <td>0.453</td> <td>0.691</td> <td>0.631</td> <td>0.373</td> <td>0.570</td> <td>0.509</td> <td>0.444</td> <td>0.671</td> <td>0.613</td> </tr> <tr> <td>Typing Method || HYENA (Yosef et al. 2012)</td> <td>0.288</td> <td>0.528</td> <td>0.506</td> <td>0.249</td> <td>0.497</td> <td>0.446</td> <td>0.523</td> <td>0.576</td> <td>0.587</td> </tr> <tr> <td>Typing Method || HYENA-Min</td> <td>0.325</td> <td>0.566</td> <td>0.536</td> <td>0.295</td> <td>0.523</td> <td>0.470</td> <td>0.524</td> <td>0.582</td> <td>0.595</td> </tr> <tr> <td>Typing Method || ClusType (Ren et al. 2015)</td> <td>0.274</td> <td>0.429</td> <td>0.448</td> <td>0.305</td> <td>0.468</td> <td>0.404</td> <td>0.441</td> <td>0.498</td> <td>0.573</td> </tr> <tr> <td>Typing Method || HNM (Dong et al. 2015)</td> <td>0.237</td> <td>0.409</td> <td>0.417</td> <td>0.122</td> <td>0.288</td> <td>0.272</td> <td>0.551</td> <td>0.591</td> <td>0.606</td> </tr> <tr> <td>Typing Method || DeepWalk (Perozzi et al. 2014)</td> <td>0.414</td> <td>0.563</td> <td>0.511</td> <td>0.479</td> <td>0.669</td> <td>0.611</td> <td>0.586</td> <td>0.638</td> <td>0.628</td> </tr> <tr> <td>Typing Method || LINE (Tang et al. 2015b)</td> <td>0.181</td> <td>0.480</td> <td>0.499</td> <td>0.436</td> <td>0.634</td> <td>0.578</td> <td>0.576</td> <td>0.687</td> <td>0.690</td> </tr> <tr> <td>Typing Method || PTE (Tang et al. 2015a)</td> <td>0.405</td> <td>0.575</td> <td>0.526</td> <td>0.436</td> <td>0.630</td> <td>0.572</td> <td>0.604</td> <td>0.684</td> <td>0.695</td> </tr> <tr> <td>Typing Method || WSABIE (Yogatama et al. 2015)</td> <td>0.480</td> <td>0.679</td> <td>0.657</td> <td>0.404</td> <td>0.580</td> <td>0.527</td> <td>0.619</td> <td>0.670</td> <td>0.680</td> </tr> <tr> <td>Typing Method || AFET-NoCo</td> <td>0.526</td> <td>0.693</td> <td>0.654</td> <td>0.486</td> <td>0.652</td> <td>0.594</td> <td>0.655</td> <td>0.711</td> <td>0.716</td> </tr> <tr> <td>Typing Method || AFET-NoPa</td> <td>0.513</td> <td>0.675</td> <td>0.642</td> <td>0.463</td> <td>0.637</td> <td>0.591</td> <td>0.669</td> <td>0.715</td> <td>0.724</td> </tr> <tr> <td>Typing Method || AFET-CoH</td> <td>0.433</td> <td>0.583</td> <td>0.551</td> <td>0.521</td> <td>0.680</td> <td>0.609</td> <td>0.657</td> <td>0.703</td> <td>0.712</td> </tr> <tr> <td>Typing Method || AFET</td> <td>0.533</td> <td>0.693</td> <td>0.664</td> <td>0.551</td> <td>0.711</td> <td>0.647</td> <td>0.670</td> <td>0.727</td> <td>0.735</td> </tr> </tbody></table>
Table 4
table_4
D16-1144
8
emnlp2016
Table 4 shows the results of AFET and its variants. Comparison with the other typing methods. AFET outperforms both FIGER and HYENA systems, demonstrating the predictive power of the learned embeddings, and the effectiveness of modeling type correlation information and noisy candidate types. We also observe that pruning methods do not always improve the performance, since they aggressively filter out rare types in the corpus, which may lead to low Recall. ClusType is not as good as FIGER and HYENA because it is intended for coarse types and only utilizes relation phrases.
[1, 2, 1, 1, 1]
['Table 4 shows the results of AFET and its variants.', 'Comparison with the other typing methods.', 'AFET outperforms both FIGER and HYENA systems, demonstrating the predictive power of the learned embeddings, and the effectiveness of modeling type correlation information and noisy candidate types.', 'We also observe that pruning methods do not always improve the performance, since they aggressively filter out rare types in the corpus, which may lead to low Recall.', 'ClusType is not as good as FIGER and HYENA because it is intended for coarse types and only utilizes relation phrases.']
[['AFET-NoCo', 'AFET-NoPa', 'AFET-CoH', 'AFET'], ['Typing Method'], ['AFET', 'FIGER (Ling and Weld 2012)', 'HYENA (Yosef et al. 2012)'], None, ['ClusType (Ren et al. 2015)', 'FIGER (Ling and Weld 2012)', 'HYENA (Yosef et al. 2012)']]
1
D16-1147table_4
Breakdown of test results (% hits@1) on WIKIMOVIES for Key-Value Memory Networks using different knowledge representations.
2
[['Question Type', 'Writer to Movie'], ['Question Type', 'Tag to Movie'], ['Question Type', 'Movie to Year'], ['Question Type', 'Movie to Writer'], ['Question Type', 'Movie to Tags'], ['Question Type', 'Movie to Language'], ['Question Type', 'Movie to IMDb Votes'], ['Question Type', 'Movie to IMDb Rating'], ['Question Type', 'Movie to Genre'], ['Question Type', 'Movie to Director'], ['Question Type', 'Movie to Actors'], ['Question Type', 'Director to Movie'], ['Question Type', 'Actor to Movie']]
1
[['KB'], ['IE'], ['Doc']]
[['97', '72', '91'], ['85', '35', '49'], ['95', '75', '89'], ['95', '61', '64'], ['94', '47', '48'], ['96', '62', '84'], ['92', '92', '92'], ['94', '75', '92'], ['97', '84', '86'], ['93', '76', '79'], ['91', '64', '64'], ['90', '78', '91'], ['93', '66', '83']]
column
['hits@1', 'hits@1', 'hits@1']
['KB', 'IE', 'Doc']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>KB</th> <th>IE</th> <th>Doc</th> </tr> </thead> <tbody> <tr> <td>Question Type || Writer to Movie</td> <td>97</td> <td>72</td> <td>91</td> </tr> <tr> <td>Question Type || Tag to Movie</td> <td>85</td> <td>35</td> <td>49</td> </tr> <tr> <td>Question Type || Movie to Year</td> <td>95</td> <td>75</td> <td>89</td> </tr> <tr> <td>Question Type || Movie to Writer</td> <td>95</td> <td>61</td> <td>64</td> </tr> <tr> <td>Question Type || Movie to Tags</td> <td>94</td> <td>47</td> <td>48</td> </tr> <tr> <td>Question Type || Movie to Language</td> <td>96</td> <td>62</td> <td>84</td> </tr> <tr> <td>Question Type || Movie to IMDb Votes</td> <td>92</td> <td>92</td> <td>92</td> </tr> <tr> <td>Question Type || Movie to IMDb Rating</td> <td>94</td> <td>75</td> <td>92</td> </tr> <tr> <td>Question Type || Movie to Genre</td> <td>97</td> <td>84</td> <td>86</td> </tr> <tr> <td>Question Type || Movie to Director</td> <td>93</td> <td>76</td> <td>79</td> </tr> <tr> <td>Question Type || Movie to Actors</td> <td>91</td> <td>64</td> <td>64</td> </tr> <tr> <td>Question Type || Director to Movie</td> <td>90</td> <td>78</td> <td>91</td> </tr> <tr> <td>Question Type || Actor to Movie</td> <td>93</td> <td>66</td> <td>83</td> </tr> </tbody></table>
Table 4
table_4
D16-1147
7
emnlp2016
A breakdown by question type comparing the different data sources for KVMemNNs is given in Table 4. IE loses out especially to Doc (and KB) on Writer, Director and Actor to Movie, perhaps because coreference is difficult in these cases – although it has other losses elsewhere too. Note that only 56% of subject-object pairs in IE match the triples in the original KB, so losses are expected. Doc loses out to KB particularly on Tag to Movie, Movie to Tags, Movie to Writer and Movie to Actors.
[1, 1, 2, 1]
['A breakdown by question type comparing the different data sources for KVMemNNs is given in Table 4.', 'IE loses out especially to Doc (and KB) on Writer, Director and Actor to Movie, perhaps because coreference is difficult in these cases – although it has other losses elsewhere too.', 'Note that only 56% of subject-object pairs in IE match the triples in the original KB, so losses are expected.', 'Doc loses out to KB particularly on Tag to Movie, Movie to Tags, Movie to Writer and Movie to Actors.']
[None, ['IE', 'Doc', 'KB', 'Writer to Movie', 'Director to Movie', 'Actor to Movie'], ['IE', 'KB'], ['Doc', 'KB', 'Tag to Movie', 'Movie to Writer', 'Movie to Actors']]
1
D16-1149table_3
Convergence t-values of paired t-tests comparing team-level partner differences (TDiffp) of first 3, 5, 7 minutes vs. last 3, 5, 7 minutes, respectively, and of first vs. second game half, for each game. Positive t-values indicate convergence (i.e., that partner differences in the second interval are smaller than in the first). Negative t-values indicate divergence. Significant convergence results are in bold. * p < .05. n = 62.
2
[['Feature', 'Pitch-min'], ['Feature', 'Pitch-max'], ['Feature', 'Pitch-mean'], ['Feature', 'Pitch-sd'], ['Feature', 'Intensity-mean'], ['Feature', 'Intensity-min'], ['Feature', 'Intensity-max'], ['Feature', 'Shimmer-local'], ['Feature', 'Jitter-local']]
2
[['First vs. last 3 minutes', 'Game1'], ['First vs. last 3 minutes', 'Game2'], ['First vs. last 5 minutes', 'Game1'], ['First vs. last 5 minutes', 'Game2'], ['First vs. last 7 minutes', 'Game1'], ['First vs. last 7 minutes', 'Game2'], ['First vs. second half', 'Game1'], ['First vs. second half', 'Game2']]
[['2.474*', '-0.709', '1.487', '-1.299', '1.359', '-1.622', '0.329', '-0.884'], ['4.947*', '1.260', '1.892', '-0.468', '1.348', '-0.424', '0.457', '0.627'], ['-2.687*', '0.109', '-2.900*', '0.417', '-2.965*', '-0.361', '-1.905', '-0.266'], ['1.364', '0.409', '1.919', '0.591', '1.807', '0.576', '1.271', '0.089'], ['-0.275', '-2.946*', '-0.454', '-2.245*', '-0.229', '-1.825', '-0.360', '-1.540'], ['0.595', '-3.188*', '-0.136', '-4.335*', '0.009', '-3.317*', '-0.972', '-3.324*'], ['0.328', '0.327', '-0.731', '1.081', '-0.140', '0.511', '-0.222', '0.469'], ['2.896*', '-0.476', '3.396*', '-1.941', '3.006*', '-1.704', '2.794*', '-0.914'], ['3.205*', '0.725', '2.796*', '0.242', '2.867*', '0.469', '2.973*', '0.260']]
column
['t-value', 't-value', 't-value', 't-value', 't-value', 't-value', 't-value', 't-value']
['Game1', 'Game2']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>First vs. last 3 minutes || Game1</th> <th>First vs. last 3 minutes || Game2</th> <th>First vs. last 5 minutes || Game1</th> <th>First vs. last 5 minutes || Game2</th> <th>First vs. last 7 minutes || Game1</th> <th>First vs. last 7 minutes || Game2</th> <th>First vs. second half || Game1</th> <th>First vs. second half || Game2</th> </tr> </thead> <tbody> <tr> <td>Feature || Pitch-min</td> <td>2.474*</td> <td>-0.709</td> <td>1.487</td> <td>-1.299</td> <td>1.359</td> <td>-1.622</td> <td>0.329</td> <td>-0.884</td> </tr> <tr> <td>Feature || Pitch-max</td> <td>4.947*</td> <td>1.260</td> <td>1.892</td> <td>-0.468</td> <td>1.348</td> <td>-0.424</td> <td>0.457</td> <td>0.627</td> </tr> <tr> <td>Feature || Pitch-mean</td> <td>-2.687*</td> <td>0.109</td> <td>-2.900*</td> <td>0.417</td> <td>-2.965*</td> <td>-0.361</td> <td>-1.905</td> <td>-0.266</td> </tr> <tr> <td>Feature || Pitch-sd</td> <td>1.364</td> <td>0.409</td> <td>1.919</td> <td>0.591</td> <td>1.807</td> <td>0.576</td> <td>1.271</td> <td>0.089</td> </tr> <tr> <td>Feature || Intensity-mean</td> <td>-0.275</td> <td>-2.946*</td> <td>-0.454</td> <td>-2.245*</td> <td>-0.229</td> <td>-1.825</td> <td>-0.360</td> <td>-1.540</td> </tr> <tr> <td>Feature || Intensity-min</td> <td>0.595</td> <td>-3.188*</td> <td>-0.136</td> <td>-4.335*</td> <td>0.009</td> <td>-3.317*</td> <td>-0.972</td> <td>-3.324*</td> </tr> <tr> <td>Feature || Intensity-max</td> <td>0.328</td> <td>0.327</td> <td>-0.731</td> <td>1.081</td> <td>-0.140</td> <td>0.511</td> <td>-0.222</td> <td>0.469</td> </tr> <tr> <td>Feature || Shimmer-local</td> <td>2.896*</td> <td>-0.476</td> <td>3.396*</td> <td>-1.941</td> <td>3.006*</td> <td>-1.704</td> <td>2.794*</td> <td>-0.914</td> </tr> <tr> <td>Feature || Jitter-local</td> <td>3.205*</td> <td>0.725</td> <td>2.796*</td> <td>0.242</td> <td>2.867*</td> <td>0.469</td> <td>2.973*</td> <td>0.260</td> </tr> </tbody></table>
Table 3
table_3
D16-1149
7
emnlp2016
The convergence results are shown in Table 3 for four different temporal comparison intervals. Comparison of the significant game 1 results shows that teams entrained on pitch min, pitch max, shimmer, and jitter in at least one of the intervals. Both shimmer and jitter converged for all choices of temporal units. For pitch, convergence was instead only seen using the first and last 3 minutes, which are the intervals farthest in the game from each other. The only feature that diverged during game 1 is pitch-mean. The rest of the features did not show significant team-level partner differences during game 1 for any temporal interval and thus exhibited maintenance, meaning that the team members neither converged nor diverged. During game 2, we observed maintenance for all features except for intensity-mean and intensity-min, which diverged. Together our results suggest that when teams in our corpus converged on a feature, they did so earlier in the experiment (namely, just during the first game, and sometimes just in the earliest part of the first game).
[1, 1, 1, 2, 1, 1, 1, 2]
['The convergence results are shown in Table 3 for four different temporal comparison intervals.', 'Comparison of the significant game 1 results shows that teams entrained on pitch min, pitch max, shimmer, and jitter in at least one of the intervals.', 'Both shimmer and jitter converged for all choices of temporal units.', 'For pitch, convergence was instead only seen using the first and last 3 minutes, which are the intervals farthest in the game from each other.', 'The only feature that diverged during game 1 is pitch-mean.', 'The rest of the features did not show significant team-level partner differences during game 1 for any temporal interval and thus exhibited maintenance, meaning that the team members neither converged nor diverged.', 'During game 2, we observed maintenance for all features except for intensity-mean and intensity-min, which diverged.', 'Together our results suggest that when teams in our corpus converged on a feature, they did so earlier in the experiment (namely, just during the first game, and sometimes just in the earliest part of the first game).']
[None, ['Game1', 'Pitch-min', 'Pitch-max', 'Shimmer-local', 'Jitter-local'], ['Shimmer-local', 'Jitter-local'], ['Pitch-min', 'Pitch-max', 'Pitch-mean', 'Pitch-sd'], ['Game1', 'Pitch-mean'], ['Game1', 'Feature'], ['Game2', 'Feature', 'Intensity-mean', 'Intensity-min'], ['Feature']]
1
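The record above reports t-values of paired t-tests comparing team-level partner differences in an early interval against a late interval, with positive t indicating convergence. A minimal sketch of one such test follows, assuming the per-team difference values are already computed (the numbers are invented, not the study's data); scipy.stats.ttest_rel performs the paired test.

```python
from scipy.stats import ttest_rel

# Hypothetical team-level partner differences for one acoustic feature
# (e.g., jitter) in the first vs. last 3 minutes of a game; one value per team.
first_interval = [0.42, 0.55, 0.38, 0.61, 0.47, 0.50]
last_interval = [0.30, 0.51, 0.29, 0.44, 0.45, 0.33]

# ttest_rel(first, last) yields a positive t when the differences shrink
# over time (convergence) and a negative t when they grow (divergence).
t_stat, p_value = ttest_rel(first_interval, last_interval)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```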
D16-1150table_1
Results of the Regression Analysis
1
[['Agreeableness'], ['Conscientiousness'], ['Extroversion'], ['Neurotisim'], ['Openness'], ['Conservation'], ['Hedonism'], ['Openness to change'], ['Self-enhancement'], ['Self-transcendence']]
1
[['Safety'], ['Fuel'], ['Quality'], ['Style'], ['Price'], ['Luxury'], ['Perf'], ['Durab']]
[['0.39', '-0.52', '-0.53', '0.54', '0.81', '0.004', '-0.62', '-0.27'], ['-1.75', '-0.31', '0.80', '0.29', '-0.01', '0.27', '0.83', '-0.12'], ['0.69', '-0.71', '0.008', '-0.25', '-0.37', '0.48', '-0.07', '0.224'], ['1.08', '-0.01', '-0.46', '-0.11', '-0.32', '-0.07', '0.18', '-0.28'], ['1.59', '-0.05', '0.01', '-0.99', '0.36', '-0.53', '-0.46', '0.07'], ['1.99', '-0.99', '-0.66', '0.84', '-1.72', '0.21', '0.38', '-0.03'], ['1.47', '-0.15', '-0.69', '0.16', '0.51', '-0.06', '-0.82', '-0.43'], ['-2.15', '0.08', '0.58', '0.48', '-1.99', '-0.38', '2.29*', '1.07'], ['-1.39', '-1.12', '0.58', '0.47', '-0.31', '2.41', '0.77', '-1.41'], ['1.33', '2.37', '1.36', '-2.47', '-0.91', '-1.01', '-0.33', '-0.32']]
column
['regression', 'regression', 'regression', 'regression', 'regression', 'regression', 'regression', 'regression']
['Safety', 'Fuel', 'Quality', 'Style', 'Price', 'Luxury', 'Perf', 'Durab']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Safety</th> <th>Fuel</th> <th>Quality</th> <th>Style</th> <th>Price</th> <th>Luxury</th> <th>Perf</th> <th>Durab</th> </tr> </thead> <tbody> <tr> <td>Agreeableness</td> <td>0.39</td> <td>-0.52</td> <td>-0.53</td> <td>0.54</td> <td>0.81</td> <td>0.004</td> <td>-0.62</td> <td>-0.27</td> </tr> <tr> <td>Conscientiousness</td> <td>-1.75</td> <td>-0.31</td> <td>0.80</td> <td>0.29</td> <td>-0.01</td> <td>0.27</td> <td>0.83</td> <td>-0.12</td> </tr> <tr> <td>Extroversion</td> <td>0.69</td> <td>-0.71</td> <td>0.008</td> <td>-0.25</td> <td>-0.37</td> <td>0.48</td> <td>-0.07</td> <td>0.224</td> </tr> <tr> <td>Neurotisim</td> <td>1.08</td> <td>-0.01</td> <td>-0.46</td> <td>-0.11</td> <td>-0.32</td> <td>-0.07</td> <td>0.18</td> <td>-0.28</td> </tr> <tr> <td>Openness</td> <td>1.59</td> <td>-0.05</td> <td>0.01</td> <td>-0.99</td> <td>0.36</td> <td>-0.53</td> <td>-0.46</td> <td>0.07</td> </tr> <tr> <td>Conservation</td> <td>1.99</td> <td>-0.99</td> <td>-0.66</td> <td>0.84</td> <td>-1.72</td> <td>0.21</td> <td>0.38</td> <td>-0.03</td> </tr> <tr> <td>Hedonism</td> <td>1.47</td> <td>-0.15</td> <td>-0.69</td> <td>0.16</td> <td>0.51</td> <td>-0.06</td> <td>-0.82</td> <td>-0.43</td> </tr> <tr> <td>Openness to change</td> <td>-2.15</td> <td>0.08</td> <td>0.58</td> <td>0.48</td> <td>-1.99</td> <td>-0.38</td> <td>2.29*</td> <td>1.07</td> </tr> <tr> <td>Self-enhancement</td> <td>-1.39</td> <td>-1.12</td> <td>0.58</td> <td>0.47</td> <td>-0.31</td> <td>2.41</td> <td>0.77</td> <td>-1.41</td> </tr> <tr> <td>Self-transcendence</td> <td>1.33</td> <td>2.37</td> <td>1.36</td> <td>-2.47</td> <td>-0.91</td> <td>-1.01</td> <td>-0.33</td> <td>-0.32</td> </tr> </tbody></table>
Table 1
table_1
D16-1150
6
emnlp2016
In our first study, we employed regression analysis to identify significant correlations between personal traits and aspect ranks. Specifically, we trained eight linear regression models, one for each of the eight car aspects. The dependent variable in each model is the rank of an aspect (from 1 to 8) and the independent variables are the ten user traits. In the regression analysis, we only focused on the main effects since a full interaction model with ten traits will require much more data to train. Since the raw scores of the personality and value traits use different scales, we normalized these scores so that they are all from 0 to 1. Table 1 shows the regression results. Several interesting patterns were discovered in this analysis: (a) a positive correlation between the rank of “luxury” and “self-enhancement”, a trait often associated with people who pursue self-interests and value social status, prestige and personal success (p < 0.0001). This pattern suggests that to promote a car to someone who scores high on “self-enhancement”, we need to highlight the “luxury” aspect of a car. (b) the rank of “safety” is positively correlated with “conservation”, a trait associated with people who conform to tradition and pursue safety, harmony, and stability (p < 0.005). This result suggests that for someone who values “conservation”, it is better to emphasize “car safety” in a personalized sales message. (c) “self-transcendence”, a trait often associated with people who pursue the protection of the welfare of others and the nature, is positively correlated with the rank of “fuel economy” (p < 0.005) but negatively correlated with the rank of “style” (p < 0.005). This suggests that for someone who values “self-transcendence”, it is better to emphasize “fuel economy”, but not so much on “style”. Other significant correlations uncovered in this analysis include a negative correlation between car “price” and “conservation” (p < 0.005), a negative correlation between car “safety” and “conscientiousness” (p < 0.05), and a positive correlation between “openness to change” and car “performance” (p < 0.05).
[2, 2, 2, 2, 2, 1, 1, 2, 1, 2, 1, 2, 1]
['In our first study, we employed regression analysis to identify significant correlations between personal traits and aspect ranks.', 'Specifically, we trained eight linear regression models, one for each of the eight car aspects.', 'The dependent variable in each model is the rank of an aspect (from 1 to 8) and the independent variables are the ten user traits.', 'In the regression analysis, we only focused on the main effects since a full interaction model with ten traits will require much more data to train.', 'Since the raw scores of the personality and value traits use different scales, we normalized these scores so that they are all from 0 to 1.', 'Table 1 shows the regression results.', 'Several interesting patterns were discovered in this analysis: (a) a positive correlation between the rank of “luxury” and “self-enhancement”, a trait often associated with people who pursue self-interests and value social status, prestige and personal success (p < 0.0001).', 'This pattern suggests that to promote a car to someone who scores high on “self-enhancement”, we need to highlight the “luxury” aspect of a car.', '(b) the rank of “safety” is positively correlated with “conservation”, a trait associated with people who conform to tradition and pursue safety, harmony, and stability (p < 0.005).', 'This result suggests that for someone who values “conservation”, it is better to emphasize “car safety” in a personalized sales message.', '(c) “self-transcendence”, a trait often associated with people who pursue the protection of the welfare of others and the nature, is positively correlated with the rank of “fuel economy” (p < 0.005) but negatively correlated with the rank of “style” (p < 0.005).', 'This suggests that for someone who values “self-transcendence”, it is better to emphasize “fuel economy”, but not so much on “style”.', 'Other significant correlations uncovered in this analysis include a negative correlation between car “price” and “conservation” (p < 0.005), a negative correlation between car “safety” and “conscientiousness” (p < 0.05), and a positive correlation between “openness to change” and car “performance” (p < 0.05).']
[None, None, None, None, None, None, ['Luxury', 'Self-enhancement'], None, ['Safety', 'Conservation'], None, ['Self-transcendence', 'Fuel', 'Style'], None, ['Price', 'Conservation', 'Safety', 'Conscientiousness', 'Openness to change', 'Perf']]
1
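The description in this record fits one main-effects linear regression per car aspect, with the aspect's rank as the dependent variable and the normalized traits as predictors. A minimal sketch of one such fit with numpy least squares is given below; the trait matrix and ranks are invented for illustration, and a full OLS fit (e.g., with statsmodels) would additionally provide the t-values reported in the table.

```python
import numpy as np

# Hypothetical data: 6 respondents x 3 traits (already normalized to [0, 1])
# and the rank (1-8) each respondent gave to one aspect, e.g. "luxury".
traits = np.array([
    [0.2, 0.9, 0.1],
    [0.8, 0.3, 0.5],
    [0.5, 0.7, 0.4],
    [0.9, 0.2, 0.8],
    [0.1, 0.8, 0.3],
    [0.6, 0.4, 0.7],
])
luxury_rank = np.array([6.0, 2.0, 4.0, 1.0, 7.0, 3.0])

# Main-effects-only model: rank ~ intercept + traits.
X = np.column_stack([np.ones(len(traits)), traits])
coef, *_ = np.linalg.lstsq(X, luxury_rank, rcond=None)
print("intercept:", round(coef[0], 3), "trait coefficients:", np.round(coef[1:], 3))
```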
D16-1151table_9
Performance of different feature groups for alignment.
3
[['Feature', 'Entailment score only', '-'], ['Feature', 'Entailment score only', '+Lexical'], ['Feature', 'Entailment score only', '+Syntactic'], ['Feature', 'Entailment score only', '+Sentence']]
1
[['P'], ['R'], ['F1']]
[['39.55', '14.59', '21.32'], ['50.75', '26.02', '34.40'], ['62.31', '31.47', '41.82'], ['62.33', '31.41', '41.53']]
column
['P', 'R', 'F1']
['Feature']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Feature || Entailment score only || -</td> <td>39.55</td> <td>14.59</td> <td>21.32</td> </tr> <tr> <td>Feature || Entailment score only || +Lexical</td> <td>50.75</td> <td>26.02</td> <td>34.40</td> </tr> <tr> <td>Feature || Entailment score only || +Syntactic</td> <td>62.31</td> <td>31.47</td> <td>41.82</td> </tr> <tr> <td>Feature || Entailment score only || +Sentence</td> <td>62.33</td> <td>31.41</td> <td>41.53</td> </tr> </tbody></table>
Table 9
table_9
D16-1151
8
emnlp2016
Table 9 shows an ablation of the alignment classifier features. Entailment of arguments is the most informative feature for argument alignment. Adding lexical and syntactic context compatibilities adds significant boosts in precision and recall. Knowing that the arguments are retrieved by the same query pattern (sentence feature) only provides minor improvements. Even though the overall classification performance is far from perfect, cross sentence can benefit from alignment as long as it provides a higher score for argument pairs that should align compared to those that should not.
[1, 1, 1, 1, 2]
['Table 9 shows an ablation of the alignment classifier features.', 'Entailment of arguments is the most informative feature for argument alignment.', 'Adding lexical and syntactic context compatibilities adds significant boosts in precision and recall.', 'Knowing that the arguments are retrieved by the same query pattern (sentence feature) only provides minor improvements.', 'Even though the overall classification performance is far from perfect, cross sentence can benefit from alignment as long as it provides a higher score for argument pairs that should align compared to those that should not.']
[None, ['Entailment score only'], ['+Lexical', '+Syntactic', 'P', 'R'], ['+Sentence'], ['+Sentence']]
1
D16-1152table_4
Evaluation results on the NEEL-test and TACL datasets for different systems. The best results are in bold.
3
[['System', 'Our approach', 'NTEL-nonstruct'], ['System', 'Our approach', 'NTEL'], ['System', 'Our approach', 'NTEL user-entity'], ['System', 'Our approach', 'NTEL mention-entity'], ['System', 'Our approach', 'NTEL user-entity mention-entity'], ['System', 'Best published results', 'S-MART']]
2
[['NEEL -test', 'P'], ['NEEL -test', 'R'], ['NEEL-test', 'F1'], ['TACL', 'P'], ['TACL', 'R'], ['TACL', 'F1'], ['-', 'Avg. F1']]
[['80.0', '68.0', '73.5', '64.7', '62.3', '63.5', '68.5'], ['82.8', '69.3', '75.4', '68.0', '66.0', '67.0', '71.2'], ['82.3', '71.8', '76.7', '66.9', '68.7', '67.8', '72.2'], ['80.2', '75.8', '77.9', '66.9', '69.3', '68.1', '73.0'], ['81.9', '75.6', '78.6', '69.0', '69.0', '69.0', '73.8'], ['80.2', '75.4', '77.7', '60.1', '67.7', '63.6', '70.7']]
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'Avg. F1']
['Our approach']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NEEL-test || P</th> <th>NEEL-test || R</th> <th>NEEL-test || F1</th> <th>TACL || P</th> <th>TACL || R</th> <th>TACL || F1</th> <th>Avg. F1 || -</th> </tr> </thead> <tbody> <tr> <td>System || Our approach || NTEL-nonstruct</td> <td>80.0</td> <td>68.0</td> <td>73.5</td> <td>64.7</td> <td>62.3</td> <td>63.5</td> <td>68.5</td> </tr> <tr> <td>System || Our approach || NTEL</td> <td>82.8</td> <td>69.3</td> <td>75.4</td> <td>68.0</td> <td>66.0</td> <td>67.0</td> <td>71.2</td> </tr> <tr> <td>System || Our approach || NTEL user-entity</td> <td>82.3</td> <td>71.8</td> <td>76.7</td> <td>66.9</td> <td>68.7</td> <td>67.8</td> <td>72.2</td> </tr> <tr> <td>System || Our approach || NTEL mention-entity</td> <td>80.2</td> <td>75.8</td> <td>77.9</td> <td>66.9</td> <td>69.3</td> <td>68.1</td> <td>73.0</td> </tr> <tr> <td>System || Our approach || NTEL user-entity mention-entity</td> <td>81.9</td> <td>75.6</td> <td>78.6</td> <td>69.0</td> <td>69.0</td> <td>69.0</td> <td>73.8</td> </tr> <tr> <td>System || Best published results || S-MART</td> <td>80.2</td> <td>75.4</td> <td>77.7</td> <td>60.1</td> <td>67.7</td> <td>63.6</td> <td>70.7</td> </tr> </tbody></table>
Table 4
table_4
D16-1152
8
emnlp2016
Table 4 summarizes the empirical findings for our approach and S-MART (Yang and Chang, 2015) on the tweet entity linking task. For the systems with user-entity bilinear function, we report results obtained from embeddings trained on RETWEET+ in Table 4, and other results are available in Table 5. The best hyper-parameters are: the number of hidden units for the MLP is 40, the L2 regularization penalty for the composition parameters is 0.005, and the user embedding size is 100. For the word embedding size, we find 600 offers marginal improvements over 400 but requires longer training time. Thus, we choose 400 as the size of word embeddings. As presented in Table 4, NTEL-nonstruct performs 2.7% F1 worse than the NTEL baseline on the two test sets, which indicates the non-overlapping inference improves system performance on the task. With structured inference but without embeddings, NTEL performs roughly the same as S-MART, showing that a feedforward neural network offers similar expressivity to the regression trees employed by Yang and Chang (2015). Performance improves substantially with the incorporation of low-dimensional author, mention, and entity representations. As shown in Table 4, by learning the interactions between mention and entity representations, NTEL with mention-entity bilinear function outperforms the NTEL baseline system by 1.8% F1 on average. Specifically, the bilinear function results in considerable performance gains in recalls, with small compromise in precisions on the datasets.
[1, 1, 2, 2, 2, 1, 1, 2, 1, 1]
['Table 4 summarizes the empirical findings for our approach and S-MART (Yang and Chang, 2015) on the tweet entity linking task.', 'For the systems with user-entity bilinear function, we report results obtained from embeddings trained on RETWEET+ in Table 4, and other results are available in Table 5.', 'The best hyper-parameters are: the number of hidden units for the MLP is 40, the L2 regularization penalty for the composition parameters is 0.005, and the user embedding size is 100.', 'For the word embedding size, we find 600 offers marginal improvements over 400 but requires longer training time.', 'Thus, we choose 400 as the size of word embeddings.', 'As presented in Table 4, NTEL-nonstruct performs 2.7% F1 worse than the NTEL baseline on the two test sets, which indicates the non-overlapping inference improves system performance on the task.', 'With structured inference but without embeddings, NTEL performs roughly the same as S-MART, showing that a feedforward neural network offers similar expressivity to the regression trees employed by Yang and Chang (2015).', 'Performance improves substantially with the incorporation of low-dimensional author, mention, and entity representations.', 'As shown in Table 4, by learning the interactions between mention and entity representations, NTEL with mention-entity bilinear function outperforms the NTEL baseline system by 1.8% F1 on average.', 'Specifically, the bilinear function results in considerable performance gains in recalls, with small compromise in precisions on the datasets.']
[['System'], None, None, None, None, ['NTEL-nonstruct', 'NTEL', 'F1'], ['NTEL', 'S-MART'], None, ['NTEL', 'Avg. F1'], ['R', 'P']]
1
D16-1153table_6
NIST evaluations for Uyghur. * indicates transfer from Uzbek and Turkish
2
[['Model', 'Lample et al. (2016)'], ['Model', 'Our best transfer model']]
1
[['F1']]
[['37.1'], ['51.2']]
column
['F1']
['Our best transfer model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || Lample et al. (2016)</td> <td>37.1</td> </tr> <tr> <td>Model || Our best transfer model</td> <td>51.2</td> </tr> </tbody></table>
Table 6
table_6
D16-1153
7
emnlp2016
Table 6 documents NIST evaluation results on an unseen Uyghur test set (with gold annotations) for the best transfer model configuration jointly trained on Turkish and Uzbek gold annotations and Uyghur training annotations produced by a non-speaker linguist (non-gold). Since Uyghur lacks helpful type-level orthographic features such as capitalization, our transfer model in table 6 does not use any sparse features or attention but benefits from transfer via the phonological character representations we've proposed. Despite the noisy supervision provided in the target language, transferring from Turkish and Uzbek provides a +14.1 F1 improvement over a state of the art monolingual model trained on the same Uyghur annotations. It is worth pointing out that this transfer was achieved across 3 languages each with different scripts, morphology, phonology and lexicons.
[1, 2, 1, 2]
['Table 6 documents NIST evaluation results on an unseen Uyghur test set (with gold annotations) for the best transfer model configuration jointly trained on Turkish and Uzbek gold annotations and Uyghur training annotations produced by a non-speaker linguist (non-gold).', "Since Uyghur lacks helpful type-level orthographic features such as capitalization, our transfer model in table 6 does not use any sparse features or attention but benefits from transfer via the phonological character representations we've proposed.", 'Despite the noisy supervision provided in the target language, transferring from Turkish and Uzbek provides a +14.1 F1 improvement over a state of the art monolingual model trained on the same Uyghur annotations.', 'It is worth pointing out that this transfer was achieved across 3 languages each with different scripts, morphology, phonology and lexicons.']
[None, None, ['F1', 'Model'], None]
1
D16-1154table_3
LMs performance on the LTCB test set.
1
[['KN'], ['KN+cache'], ['FFNN [M*200]-600-600-80k'], ['FOFE [M*200]-600-600-80k'], ['RNN [600]-R600-80k'], ['LSTM [200]-R600-80k'], ['LSTM [200]-R600-R600-80k'], ['LSRC [200]-R600-80k'], ['LSRC [200]-R600-600-80k']]
1
[['Model'], ['Model'], ['Model']]
[['239', '156', '132'], ['188', '127', '109'], ['235', '150', '114'], ['112', '107', '100'], ['85', '85', '85'], ['66', '66', '66'], ['61', '61', '61'], ['63', '63', '63'], ['59', '59', '59']]
column
['perplexity', 'perplexity', 'perplexity']
['LSRC [200]-R600-80k', 'LSRC [200]-R600-600-80k']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Model</th> <th>Model</th> <th>Model</th> <th>NoP</th> </tr> </thead> <tbody> <tr> <td>Context Size M=N-1</td> <td>1</td> <td>2</td> <td>4</td> <td>4</td> </tr> <tr> <td>KN</td> <td>239</td> <td>156</td> <td>132</td> <td>-</td> </tr> <tr> <td>KN+cache</td> <td>188</td> <td>127</td> <td>109</td> <td>-</td> </tr> <tr> <td>FFNN [M*200]-600-600-80k</td> <td>235</td> <td>150</td> <td>114</td> <td>64.84M</td> </tr> <tr> <td>FOFE [M*200]-600-600-80k</td> <td>112</td> <td>107</td> <td>100</td> <td>64.84M</td> </tr> <tr> <td>RNN [600]-R600-80k</td> <td>85</td> <td>85</td> <td>85</td> <td>96.36M</td> </tr> <tr> <td>LSTM [200]-R600-80k</td> <td>66</td> <td>66</td> <td>66</td> <td>65.92M</td> </tr> <tr> <td>LSTM [200]-R600-R600-80k</td> <td>61</td> <td>61</td> <td>61</td> <td>68.80M</td> </tr> <tr> <td>LSRC [200]-R600-80k</td> <td>63</td> <td>63</td> <td>63</td> <td>65.96M</td> </tr> <tr> <td>LSRC [200]-R600-600-80k</td> <td>59</td> <td>59</td> <td>59</td> <td>66.32M</td> </tr> </tbody></table>
Table 3
table_3
D16-1154
8
emnlp2016
The results shown in Table 3 generally confirm the conclusions we drew from the PTB experiments above. In particular, we can see that the proposed LSRC model largely outperforms all other models. In particular, LSRC clearly outperforms LSTM with a negligible increase in the number of parameters (resulting from the additional 200 × 200 = 0.04M local connection weights Ulc) for the single layer results. We can also see that this improvement is maintained for deep models (2 hidden layers), where the LSRC model achieves a slightly better performance while reducing the number of parameters by ≈ 2.5M and speeding up the training time by ≈ 20% compared to deep LSTM.
[1, 1, 1, 1]
['The results shown in Table 3 generally confirm the conclusions we drew from the PTB experiments above.', 'In particular, we can see that the proposed LSRC model largely outperforms all other models.', 'In particular, LSRC clearly outperforms LSTM with a negligible increase in the number of parameters (resulting from the additional 200 × 200 = 0.04M local connection weights Ulc) for the single layer results.', 'We can also see that this improvement is maintained for deep models (2 hidden layers), where the LSRC model achieves a slightly better performance while reducing the number of parameters by ≈ 2.5M and speeding up the training time by ≈ 20% compared to deep LSTM.']
[None, ['LSRC [200]-R600-80k', 'LSRC [200]-R600-600-80k', 'FFNN [M*200]-600-600-80k', 'FOFE [M*200]-600-600-80k', 'RNN [600]-R600-80k', 'LSTM [200]-R600-80k', 'LSTM [200]-R600-R600-80k'], ['LSRC [200]-R600-80k', 'LSRC [200]-R600-600-80k', 'LSTM [200]-R600-80k', 'LSTM [200]-R600-R600-80k'], ['LSRC [200]-R600-80k', 'LSRC [200]-R600-600-80k', 'LSTM [200]-R600-80k', 'LSTM [200]-R600-R600-80k']]
1
D16-1156table_2
Results on our subset of the PASCAL-50S and PASCAL-Context-50S datasets. We are able to significantly outperform the Stanford Parser and make small improvements over DeepLab-CRF for PASCAL-50S.
1
[['DeepLab-CRF'], ['Stanford Parser'], ['Average'], ['Domain Adaptation'], ['Ours CASCADE'], ['Ours MEDIATOR'], ['oracle']]
2
[['PASCAL-50S', 'Instance-Level Jaccard Index'], ['PASCAL-50S', 'PPAR Acc.'], ['PASCAL-50S', 'Average'], ['PASCAL-Context-50S', 'Instance-Level.1 Jaccard Index'], ['PASCAL-Context-50S', 'PPAR Acc.'], ['PASCAL-Context-50S', 'Average']]
[['66.83', '-', '-', '43.94', '-', '-'], ['-', '62.42', '-', '-', '50.75', '-'], ['-', '-', '64.63', '-', '-', '47.345'], ['-', '72.08', '-', '-', '58.32', '-'], ['67.56', '75.00', '71.28', '43.94', '63.58', '53.76'], ['67.58', '80.33', '73.96', '43.94', '63.58', '53.76'], ['69.96', '96.50', '83.23', '49.21', '75.75', '62.48']]
column
['Instance-Level Jaccard Index', 'PPAR Acc.', 'Average', 'Instance-Level.1 Jaccard Index', 'PPAR Acc.', 'Average']
['Ours CASCADE', 'Ours MEDIATOR']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PASCAL-50S || Instance-Level Jaccard Index</th> <th>PASCAL-50S || PPAR Acc.</th> <th>PASCAL-50S || Average</th> <th>PASCAL-Context-50S || Instance-Level.1 Jaccard Index</th> <th>PASCAL-Context-50S || PPAR Acc.</th> <th>PASCAL-Context-50S || Average</th> </tr> </thead> <tbody> <tr> <td>DeepLab-CRF</td> <td>66.83</td> <td>-</td> <td>-</td> <td>43.94</td> <td>-</td> <td>-</td> </tr> <tr> <td>Stanford Parser</td> <td>-</td> <td>62.42</td> <td>-</td> <td>-</td> <td>50.75</td> <td>-</td> </tr> <tr> <td>Average</td> <td>-</td> <td>-</td> <td>64.63</td> <td>-</td> <td>-</td> <td>47.345</td> </tr> <tr> <td>Domain Adaptation</td> <td>-</td> <td>72.08</td> <td>-</td> <td>-</td> <td>58.32</td> <td>-</td> </tr> <tr> <td>Ours CASCADE</td> <td>67.56</td> <td>75.00</td> <td>71.28</td> <td>43.94</td> <td>63.58</td> <td>53.76</td> </tr> <tr> <td>Ours MEDIATOR</td> <td>67.58</td> <td>80.33</td> <td>73.96</td> <td>43.94</td> <td>63.58</td> <td>53.76</td> </tr> <tr> <td>oracle</td> <td>69.96</td> <td>96.50</td> <td>83.23</td> <td>49.21</td> <td>75.75</td> <td>62.48</td> </tr> </tbody></table>
Table 2
table_2
D16-1156
8
emnlp2016
We present our results in Table 2. Our approach significantly outperforms the Stanford Parser (De Marneffe et al., 2006) by 17.91% (28.69% relative) for PASCAL-50S, and 12.83% (25.28% relative) for PASCAL-Context-50S. We also make small improvements over DeepLab-CRF (Chen et al., 2015) in the case of PASCAL-50S. To measure statistical significance of our results, we performed paired t-tests between MEDIATOR and INDEP. For both modules (and average), the null hypothesis (that the accuracies of our approach and baseline come from the same distribution) can be successfully rejected at p-value 0.05. For sake of completeness, we also compared MEDIATOR with our ablated system (CASCADE) and found statistically significant differences only in PPAR.
[1, 1, 1, 2, 2, 2]
['We present our results in Table 2.', 'Our approach significantly outperforms the Stanford Parser (De Marneffe et al., 2006) by 17.91% (28.69% relative) for PASCAL-50S, and 12.83% (25.28% relative) for PASCAL-Context-50S.', 'We also make small improvements over DeepLab-CRF (Chen et al., 2015) in the case of PASCAL-50S.', 'To measure statistical significance of our results, we performed paired t-tests between MEDIATOR and INDEP.', 'For both modules (and average), the null hypothesis (that the accuracies of our approach and baseline come from the same distribution) can be successfully rejected at p-value 0.05.', 'For sake of completeness, we also compared MEDIATOR with our ablated system (CASCADE) and found statistically significant differences only in PPAR.']
[None, ['Ours CASCADE', 'Ours MEDIATOR', 'Stanford Parser', 'PASCAL-50S', 'PASCAL-Context-50S'], ['DeepLab-CRF', 'PASCAL-50S'], None, None, ['Ours CASCADE', 'Ours MEDIATOR', 'PPAR Acc.']]
1
D16-1157table_4
Results on part-of-speech tagging.
2
[['Model', 'charCNN'], ['Model', 'charLSTM'], ['Model', 'CHARAGRAM'], ['Model', 'CHARAGRAM (2-layer)']]
1
[['Accuracy (%)']]
[['97.02'], ['96.90'], ['96.99'], ['97.10']]
column
['Accuracy (%)']
['CHARAGRAM (2-layer)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Model || charCNN</td> <td>97.02</td> </tr> <tr> <td>Model || charLSTM</td> <td>96.90</td> </tr> <tr> <td>Model || CHARAGRAM</td> <td>96.99</td> </tr> <tr> <td>Model || CHARAGRAM (2-layer)</td> <td>97.10</td> </tr> </tbody></table>
Table 4
table_4
D16-1157
6
emnlp2016
The results are shown in Table 4. Performance is similar across models. We found that adding a second fully-connected 150 dimensional layer to the CHARAGRAM model improved results slightly.
[1, 1, 1]
['The results are shown in Table 4.', 'Performance is similar across models.', 'We found that adding a second fully-connected 150 dimensional layer to the CHARAGRAM model improved results slightly.']
[None, ['Accuracy (%)', 'Model'], ['CHARAGRAM (2-layer)', 'Accuracy (%)']]
1
D16-1160table_1
Translation results (BLEU score) for different translation methods. For our methods exploring the source-side monolingual data, we investigate the performance change as we choose different scales of monolingual data (e.g. from top 25% to 100% according to the word coverage of the monolingual sentence in source language vocabulary of bilingual training corpus).
2
[['Method', 'Moses'], ['Method', 'RNNSearch'], ['Method', 'RNNSearch-Mono-SL (25%)'], ['Method', 'RNNSearch-Mono-SL (50%)'], ['Method', 'RNNSearch-Mono-SL (75%)'], ['Method', 'RNNSearch-Mono-SL (100%)'], ['Method', 'RNNSearch-Mono-MTL (25%)'], ['Method', 'RNNSearch-Mono-MTL (50%)'], ['Method', 'RNNSearch-Mono-MTL (75%)'], ['Method', 'RNNSearch-Mono-MTL (100%)'], ['Method', 'RNNSearch-Mono-Autoencoder (50%)'], ['Method', 'RNNSearch-Mono-Autoencoder (100%)']]
1
[['MT03'], ['MT04'], ['MT05'], ['MT06']]
[['30.30', '31.04', '28.19', '30.04'], ['28.38', '30.85', '26.78', '29.27'], ['29.65', '31.92', '28.65', '29.86'], ['32.43', '33.16', '30.43', '32.35'], ['30.24', '31.18', '29.33', '28.82'], ['29.97', '30.78', '26.45', '28.06'], ['31.68', '32.51', '29.8', '31.29'], ['33.38', '34.3', '31.57', '33.4'], ['31.69', '32.83', '28.17', '30.26'], ['30.31', '30.62', '27.23', '28.85'], ['31.55', '32.07', '28.19', '30.85'], ['27.81', '30.32', '25.84', '27.73']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU']
['RNNSearch-Mono-Autoencoder (50%)', 'RNNSearch-Mono-Autoencoder (100%)', 'RNNSearch-Mono-MTL (25%)', 'RNNSearch-Mono-MTL (50%)', 'RNNSearch-Mono-MTL (75%)', 'RNNSearch-Mono-MTL (100%)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT03</th> <th>MT04</th> <th>MT05</th> <th>MT06</th> </tr> </thead> <tbody> <tr> <td>Method || Moses</td> <td>30.30</td> <td>31.04</td> <td>28.19</td> <td>30.04</td> </tr> <tr> <td>Method || RNNSearch</td> <td>28.38</td> <td>30.85</td> <td>26.78</td> <td>29.27</td> </tr> <tr> <td>Method || RNNSearch-Mono-SL (25%)</td> <td>29.65</td> <td>31.92</td> <td>28.65</td> <td>29.86</td> </tr> <tr> <td>Method || RNNSearch-Mono-SL (50%)</td> <td>32.43</td> <td>33.16</td> <td>30.43</td> <td>32.35</td> </tr> <tr> <td>Method || RNNSearch-Mono-SL (75%)</td> <td>30.24</td> <td>31.18</td> <td>29.33</td> <td>28.82</td> </tr> <tr> <td>Method || RNNSearch-Mono-SL (100%)</td> <td>29.97</td> <td>30.78</td> <td>26.45</td> <td>28.06</td> </tr> <tr> <td>Method || RNNSearch-Mono-MTL (25%)</td> <td>31.68</td> <td>32.51</td> <td>29.8</td> <td>31.29</td> </tr> <tr> <td>Method || RNNSearch-Mono-MTL (50%)</td> <td>33.38</td> <td>34.3</td> <td>31.57</td> <td>33.4</td> </tr> <tr> <td>Method || RNNSearch-Mono-MTL (75%)</td> <td>31.69</td> <td>32.83</td> <td>28.17</td> <td>30.26</td> </tr> <tr> <td>Method || RNNSearch-Mono-MTL (100%)</td> <td>30.31</td> <td>30.62</td> <td>27.23</td> <td>28.85</td> </tr> <tr> <td>Method || RNNSearch-Mono-Autoencoder (50%)</td> <td>31.55</td> <td>32.07</td> <td>28.19</td> <td>30.85</td> </tr> <tr> <td>Method || RNNSearch-Mono-Autoencoder (100%)</td> <td>27.81</td> <td>30.32</td> <td>25.84</td> <td>27.73</td> </tr> </tbody></table>
Table 1
table_1
D16-1160
6
emnlp2016
Table 1 reports the translation quality for different methods. Comparing the first two lines in Table 1, it is obvious that the NMT method RNNSearch performs much worse than the SMT model Moses on Chinese-to-English translation. The gap is as large as approximately 2.0 BLEU points (28.38 vs. 30.30). We speculate that the encoder-decoder network models of NMT are not well optimized due to insufficient bilingual training data. The focus of this work is to figure out whether the encoder model of NMT can be improved using source-side monolingual data and further boost the translation quality. The four lines (3-6 in Table 1) show the BLEU scores when applying self-learning algorithm to incorporate the source-side monolingual data. Clearly, RNNSearch-Mono-SL outperforms RNNSearch in most cases. The best performance is obtained if the top 50% monolingual data is used. The biggest improvement is up to 4.05 BLEU points (32.43 vs. 28.38 on MT03) and it also significantly outperforms Moses. When employing our multi-task learning framework to incorporate source-side monolingual data, the translation quality can be further improved (Lines 7-10 in Table 1). For example, RNNSearch-Mono-MTL using the top 50% monolingual data can remarkably outperform the baseline RNNSearch, with an improvement up to 5.0 BLEU points (33.38 vs. 28.38 on MT03). Moreover, it also performs significantly better than the state-of-the-art phrase-based SMT Moses by the largest gains of 3.38 BLEU points (31.57 vs. 28.19 on MT05). The promising results demonstrate that source-side monolingual data can improve neural machine translation and our multi-task learning is more effective. From the last two lines in Table 1, we can see that RNNSearch-Mono-Autoencoder can also improve the translation quality by more than 1.0 BLEU points when using the most related monolingual data. However, it underperforms RNNSearch-Mono-MTL by a large gap. It indicates that sentence reordering model is better than sentence reconstruction model for exploiting the source-side monolingual data.
[1, 1, 1, 2, 2, 2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1]
['Table 1 reports the translation quality for different methods.', 'Comparing the first two lines in Table 1, it is obvious that the NMT method RNNSearch performs much worse than the SMT model Moses on Chinese-to-English translation.', 'The gap is as large as approximately 2.0 BLEU points (28.38 vs. 30.30).', 'We speculate that the encoder-decoder network models of NMT are not well optimized due to insufficient bilingual training data.', 'The focus of this work is to figure out whether the encoder model of NMT can be improved using source-side monolingual data and further boost the translation quality.', 'The four lines (3-6 in Table 1) show the BLEU scores when applying self-learning algorithm to incorporate the source-side monolingual data.', 'Clearly, RNNSearch-Mono-SL outperforms RNNSearch in most cases.', 'The best performance is obtained if the top 50% monolingual data is used.', 'The biggest improvement is up to 4.05 BLEU points (32.43 vs. 28.38 on MT03) and it also significantly outperforms Moses.', 'When employing our multi-task learning framework to incorporate source-side monolingual data, the translation quality can be further improved (Lines 7-10 in Table 1).', 'For example, RNNSearch-Mono-MTL using the top 50% monolingual data can remarkably outperform the baseline RNNSearch, with an improvement up to 5.0 BLEU points (33.38 vs. 28.38 on MT03).', 'Moreover, it also performs significantly better than the state-of-the-art phrase-based SMT Moses by the largest gains of 3.38 BLEU points (31.57 vs. 28.19 on MT05).', 'The promising results demonstrate that source-side monolingual data can improve neural machine translation and our multi-task learning is more effective.', 'From the last two lines in Table 1, we can see that RNNSearch-Mono-Autoencoder can also improve the translation quality by more than 1.0 BLEU points when using the most related monolingual data.', 'However, it underperforms RNNSearch-Mono-MTL by a large gap.', 'It indicates that sentence reordering model is better than sentence reconstruction model for exploiting the source-side monolingual data.']
[None, ['Method', 'RNNSearch', 'Moses'], ['Method', 'RNNSearch', 'Moses'], None, None, ['RNNSearch-Mono-SL (25%)', 'RNNSearch-Mono-SL (50%)', 'RNNSearch-Mono-SL (75%)', 'RNNSearch-Mono-SL (100%)'], ['RNNSearch-Mono-SL (25%)', 'RNNSearch-Mono-SL (50%)', 'RNNSearch-Mono-SL (75%)', 'RNNSearch-Mono-SL (100%)', 'RNNSearch'], ['RNNSearch-Mono-SL (50%)'], ['RNNSearch-Mono-SL (50%)', 'Moses'], ['RNNSearch-Mono-MTL (25%)', 'RNNSearch-Mono-MTL (50%)', 'RNNSearch-Mono-MTL (75%)', 'RNNSearch-Mono-MTL (100%)'], ['RNNSearch-Mono-MTL (50%)', 'RNNSearch', 'MT03'], ['RNNSearch-Mono-MTL (50%)', 'Moses', 'MT05'], ['RNNSearch-Mono-MTL (25%)', 'RNNSearch-Mono-MTL (50%)', 'RNNSearch-Mono-MTL (75%)', 'RNNSearch-Mono-MTL (100%)'], ['RNNSearch-Mono-Autoencoder (50%)', 'RNNSearch-Mono-Autoencoder (100%)'], ['RNNSearch-Mono-Autoencoder (50%)', 'RNNSearch-Mono-Autoencoder (100%)', 'RNNSearch-Mono-MTL (25%)', 'RNNSearch-Mono-MTL (50%)', 'RNNSearch-Mono-MTL (75%)', 'RNNSearch-Mono-MTL (100%)'], ['RNNSearch-Mono-Autoencoder (50%)', 'RNNSearch-Mono-Autoencoder (100%)', 'RNNSearch-Mono-MTL (25%)', 'RNNSearch-Mono-MTL (50%)', 'RNNSearch-Mono-MTL (75%)', 'RNNSearch-Mono-MTL (100%)']]
1
D16-1160table_2
Translation results (BLEU score) for different translation methods in large-scale training data.
2
[['Method', 'RNNSearch'], ['Method', 'RNNSearch-Mono-MTL (50%)'], ['Method', 'RNNSearch-Mono-MTL (100%)']]
1
[['MT03'], ['MT04'], ['MT05'], ['MT06']]
[['35.18', '36.20', '33.21', '32.86'], ['36.32', '37.51', '35.08', '34.26'], ['35.75', '36.74', '34.23', '33.52']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU']
['RNNSearch-Mono-MTL (50%)', 'RNNSearch-Mono-MTL (100%)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT03</th> <th>MT04</th> <th>MT05</th> <th>MT06</th> </tr> </thead> <tbody> <tr> <td>Method || RNNSearch</td> <td>35.18</td> <td>36.20</td> <td>33.21</td> <td>32.86</td> </tr> <tr> <td>Method || RNNSearch-Mono-MTL (50%)</td> <td>36.32</td> <td>37.51</td> <td>35.08</td> <td>34.26</td> </tr> <tr> <td>Method || RNNSearch-Mono-MTL (100%)</td> <td>35.75</td> <td>36.74</td> <td>34.23</td> <td>33.52</td> </tr> </tbody></table>
Table 2
table_2
D16-1160
8
emnlp2016
A natural question arises that is the source-side monolingual data still very helpful when we have much more bilingual training data. We conduct the large-scale experiments using our proposed multitask framework RNNSearch-Mono-MTL. Table 2 reports the results. We can see from the table that closely related source-side monolingual data (the top 50%) can also boost the translation quality on all of the test sets. The performance improvement can be more than 1.0 BLEU points. Compared to the results on small training data, the gains from source-side monolingual data are much smaller. It is reasonable since large-scale training data can make the parameters of the encoder-decoder parameters much stable. We can also observe the similar phenomenon that adding more unrelated monolingual data leads to decreased translation quality.
[0, 2, 1, 1, 1, 1, 2, 2]
['A natural question arises that is the source-side monolingual data still very helpful when we have much more bilingual training data.', 'We conduct the large-scale experiments using our proposed multitask framework RNNSearch-Mono-MTL.', 'Table 2 reports the results.', 'We can see from the table that closely related source-side monolingual data (the top 50%) can also boost the translation quality on all of the test sets.', 'The performance improvement can be more than 1.0 BLEU points.', 'Compared to the results on small training data, the gains from source-side monolingual data are much smaller.', 'It is reasonable since large-scale training data can make the parameters of the encoder-decoder parameters much stable.', 'We can also observe the similar phenomenon that adding more unrelated monolingual data leads to decreased translation quality.']
[None, None, None, ['RNNSearch-Mono-MTL (50%)', 'MT03', 'MT04', 'MT05', 'MT06'], None, ['RNNSearch-Mono-MTL (50%)', 'RNNSearch-Mono-MTL (100%)', 'MT03', 'MT04', 'MT05', 'MT06'], None, None]
1
D16-1161table_4
Best results in restricted setting with added unrestricted language model for original (2014) and extended (2014-10) CoNLL test set (trained with public data only).
3
[['System', 'Baseline', '-'], ['System', 'Baseline', '+CCLM'], ['System', 'Best dense', '-'], ['System', 'Best dense', '+CCLM'], ['System', 'Best parse', '-'], ['System', 'Best parse', '+CCLM']]
2
[['2014', 'Prec.'], ['2014', 'Recall'], ['2014', 'M2'], ['2015', 'Prec..1'], ['2015', 'Recall'], ['2015', 'M2']]
[['48.97', '26.03', '41.63', '69.29', '31.35', '55.78'], ['58.91', '25.05', '46.37', '77.17', '29.38', '58.23'], ['50.94', '26.21', '42.85', '71.21', '31.70', '57.00'], ['59.98', '28.17', '48.93', '79.98', '32.76', '62.08'], ['57.99', '25.11', '45.95', '76.61', '29.74', '58.25'], ['61.27', '27.98', '49.49', '80.93', '32.47', '62.33']]
column
['Prec.', 'Recall', 'M2', 'Prec.', 'Recall', 'M2']
['Best parse']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>2014 || Prec.</th> <th>2014 || Recall</th> <th>2014 || M2</th> <th>2015 || Prec..1</th> <th>2015 || Recall</th> <th>2015 || M2</th> </tr> </thead> <tbody> <tr> <td>System || Baseline || -</td> <td>48.97</td> <td>26.03</td> <td>41.63</td> <td>69.29</td> <td>31.35</td> <td>55.78</td> </tr> <tr> <td>System || Baseline || +CCLM</td> <td>58.91</td> <td>25.05</td> <td>46.37</td> <td>77.17</td> <td>29.38</td> <td>58.23</td> </tr> <tr> <td>System || Best dense || -</td> <td>50.94</td> <td>26.21</td> <td>42.85</td> <td>71.21</td> <td>31.70</td> <td>57.00</td> </tr> <tr> <td>System || Best dense || +CCLM</td> <td>59.98</td> <td>28.17</td> <td>48.93</td> <td>79.98</td> <td>32.76</td> <td>62.08</td> </tr> <tr> <td>System || Best parse || -</td> <td>57.99</td> <td>25.11</td> <td>45.95</td> <td>76.61</td> <td>29.74</td> <td>58.25</td> </tr> <tr> <td>System || Best parse || +CCLM</td> <td>61.27</td> <td>27.98</td> <td>49.49</td> <td>80.93</td> <td>32.47</td> <td>62.33</td> </tr> </tbody></table>
Table 4
table_4
D16-1161
9
emnlp2016
Table 4 summarizes the best results reported in this paper for the CoNLL-2014 test set (column 2014) before and after adding the Common Crawl n-gram language model. The vanilla Moses baseline with the Common Crawl model can be seen as a new simple baseline for unrestricted settings and is ahead of any previously published result. The combination of sparse features and web-scale monolingual data marks our best result, outperforming previously published results by 8% M2 using similar training data. While our sparse features cause a respectable gain when used with the smaller language model, the web-scale language model seems to cancel out part of the effect.
[1, 2, 1, 1]
['Table 4 summarizes the best results reported in this paper for the CoNLL-2014 test set (column 2014) before and after adding the Common Crawl n-gram language model.', 'The vanilla Moses baseline with the Common Crawl model can be seen as a new simple baseline for unrestricted settings and is ahead of any previously published result.', 'The combination of sparse features and web-scale monolingual data marks our best result, outperforming previously published results by 8% M2 using similar training data.', 'While our sparse features cause a respectable gain when used with the smaller language model, the web-scale language model seems to cancel out part of the effect.']
[['2014'], ['Baseline'], ['M2', 'Best parse', '+CCLM'], ['Best parse']]
1
D16-1163table_3
Our transfer method applied to re-scoring output nbest lists from the SBMT system. The first row shows the SBMT performance with no re-scoring and the other 3 rows show the performance after re-scoring with the selected model. Note: the ‘LM’ row shows the results when an RNN LM trained on the large English corpus was used to re-score.
2
[['Re-scorer', 'None'], ['Re-scorer', 'NMT'], ['Re-scorer', 'Xfer'], ['Re-scorer', 'LM']]
2
[['SBMT Decoder', 'Hausa'], ['SBMT Decoder', 'Turkish'], ['SBMT Decoder', 'Uzbek'], ['SBMT Decoder', 'Urdu']]
[['23.7', '20.4', '17.9', '17.9'], ['24.5', '21.4', '19.5', '18.2'], ['24.8', '21.8', '19.5', '19.1'], ['23.6', '21.1', '17.9', '18.2']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU']
['Re-scorer']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SBMT Decoder || Hausa</th> <th>SBMT Decoder || Turkish</th> <th>SBMT Decoder || Uzbek</th> <th>SBMT Decoder || Urdu</th> </tr> </thead> <tbody> <tr> <td>Re-scorer || None</td> <td>23.7</td> <td>20.4</td> <td>17.9</td> <td>17.9</td> </tr> <tr> <td>Re-scorer || NMT</td> <td>24.5</td> <td>21.4</td> <td>19.5</td> <td>18.2</td> </tr> <tr> <td>Re-scorer || Xfer</td> <td>24.8</td> <td>21.8</td> <td>19.5</td> <td>19.1</td> </tr> <tr> <td>Re-scorer || LM</td> <td>23.6</td> <td>21.1</td> <td>17.9</td> <td>18.2</td> </tr> </tbody></table>
Table 3
table_3
D16-1163
3
emnlp2016
We also use the NMT model with transfer learning as a feature when re-scoring output n-best lists (n = 1000) from the SBMT system. Table 3 shows the results of re-scoring. We compare re-scoring with transfer NMT to re-scoring with baseline (i.e. non-transfer) NMT and to re-scoring with a neural language model. The neural language model is an LSTM RNN with 2 layers and 1000 hidden states. It has a target vocabulary of 100K and is trained using noise-contrastive estimation (Mnih and Teh, 2012, Vaswani et al., 2013, Baltescu and Blunsom, 2015, Williams et al., 2015). Additionally, it is trained using dropout with a dropout probability of 0.2 as suggested by Zaremba et al. (2014). Re-scoring with the transfer NMT model yields an improvement of 1.1–1.6 BLEU points above the strong SBMT system; we find that transfer NMT is a better re-scoring feature than baseline NMT or neural language models.
[0, 1, 2, 2, 2, 2, 1]
['We also use the NMT model with transfer learning as a feature when re-scoring output n-best lists (n = 1000) from the SBMT system.', 'Table 3 shows the results of re-scoring.', 'We compare re-scoring with transfer NMT to re-scoring with baseline (i.e. non-transfer) NMT and to re-scoring with a neural language model.', 'The neural language model is an LSTM RNN with 2 layers and 1000 hidden states.', 'It has a target vocabulary of 100K and is trained using noise-contrastive estimation (Mnih and Teh, 2012, Vaswani et al., 2013, Baltescu and Blunsom, 2015, Williams et al., 2015).', 'Additionally, it is trained using dropout with a dropout probability of 0.2 as suggested by Zaremba et al. (2014).', 'Re-scoring with the transfer NMT model yields an improvement of 1.1–1.6 BLEU points above the strong SBMT system; we find that transfer NMT is a better re-scoring feature than baseline NMT or neural language models.']
[None, None, None, None, None, None, ['Xfer']]
1
D16-1165table_2
Results of the ablation study.
3
[['System', 'Full Network', '-'], ['System', 'Full Network', '- Lexical similarity'], ['System', 'Full Network', '- Domain-specific'], ['System', 'Full Network', '- Distributed rep.'], ['System', 'No hidden layer', '-']]
1
[['MAP'], ['AvgRec'], ['MRR'], ['∆MAP']]
[['54.51', '60.93', '62.94', '-'], ['45.89', '51.54', '53.29', '-8.62'], ['48.48', '50.46', '53.78', '-6.03'], ['51.17', '56.63', '56.91', '-3.34'], ['52.19', '58.23', '59.95', '-2.32']]
column
['MAP', 'AvgRec', 'MRR', '∆MAP']
['System']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MAP</th> <th>AvgRec</th> <th>MRR</th> <th>∆MAP</th> </tr> </thead> <tbody> <tr> <td>System || Full Network || -</td> <td>54.51</td> <td>60.93</td> <td>62.94</td> <td>-</td> </tr> <tr> <td>System || Full Network || - Lexical similarity</td> <td>45.89</td> <td>51.54</td> <td>53.29</td> <td>-8.62</td> </tr> <tr> <td>System || Full Network || - Domain-specific</td> <td>48.48</td> <td>50.46</td> <td>53.78</td> <td>-6.03</td> </tr> <tr> <td>System || Full Network || - Distributed rep.</td> <td>51.17</td> <td>56.63</td> <td>56.91</td> <td>-3.34</td> </tr> <tr> <td>System || No hidden layer || -</td> <td>52.19</td> <td>58.23</td> <td>59.95</td> <td>-2.32</td> </tr> </tbody></table>
Table 2
table_2
D16-1165
7
emnlp2016
Table 2 shows the results of an ablation study when removing some groups of features. More specifically, we drop lexical similarities, domain-specific features, and the complex semantic-syntactic interactions modeled in the hidden layer between the embeddings and the domain-specific features. We can see that the lexical similarity features (which we modeled by MT evaluation metrics), have a large impact: excluding them from the network yields a decrease of over eight MAP points. This can be explained as the strong dependence that relatedness has over strict word matching. Since questions are relatively short, a better related question will be one that matches better the original question. As expected, eliminating the domain-specific features also hurts the performance greatly: by six MAP points absolute. Eliminating the use of distributed representation has a lesser impact: 3.3 MAP points absolute. This is in line with our previous findings (Guzmán et al., 2015, Guzmán et al., 2016a, Guzmán et al., 2016b) that semantic and syntactic embeddings are useful to make a fine-grained distinction between comments (relevance, appropriateness), which are usually longer. We have also found that there is an interaction between features and similarity relations. For example, for relatedness, lexical similarity is 2.6 MAP points more informative than distributed representations. In contrast, for relevance, distributed representations are 0.7 MAP points more informative than lexical similarities.
[1, 2, 1, 2, 0, 1, 1, 2, 1, 1, 2]
['Table 2 shows the results of an ablation study when removing some groups of features.', 'More specifically, we drop lexical similarities, domain-specific features, and the complex semantic-syntactic interactions modeled in the hidden layer between the embeddings and the domain-specific features.', 'We can see that the lexical similarity features (which we modeled by MT evaluation metrics), have a large impact: excluding them from the network yields a decrease of over eight MAP points.', 'This can be explained as the strong dependence that relatedness has over strict word matching.', 'Since questions are relatively short, a better related question will be one that matches better the original question.', 'As expected, eliminating the domain-specific features also hurts the performance greatly: by six MAP points absolute.', 'Eliminating the use of distributed representation has a lesser impact: 3.3 MAP points absolute.', 'This is in line with our previous findings (Guzmán et al., 2015, Guzmán et al., 2016a, Guzmán et al., 2016b) that semantic and syntactic embeddings are useful to make a fine-grained distinction between comments (relevance, appropriateness), which are usually longer.', 'We have also found that there is an interaction between features and similarity relations.', 'For example, for relatedness, lexical similarity is 2.6 MAP points more informative than distributed representations.', 'In contrast, for relevance, distributed representations are 0.7 MAP points more informative than lexical similarities.']
[None, None, ['- Lexical similarity', 'MAP'], None, None, ['MAP', '- Domain-specific'], ['MAP', '- Distributed rep.'], None, None, ['- Lexical similarity', '- Distributed rep.'], ['- Distributed rep.', '- Lexical similarity']]
1
D16-1168table_2
Human evaluation results on pairwise-comparisons between FULL and -SYN, and FULL and HUMAN, on STARtest and CARTOON datasets.
3
[['Model', 'FULL', '-'], ['Model', 'FULL', '-SYN'], ['Model', 'FULL', '-'], ['Model', 'FULL', '-SEM'], ['Model', 'FULL', '-'], ['Model', 'HUMAN', '-']]
1
[['STARtest'], ['CARTOON']]
[['65.0', '57.9'], ['35.0', '42.1'], ['68.8', '69.4'], ['31.2', '30.6'], ['17.9', '10.0'], ['82.1', '90.0']]
column
['pairwise-comparisons', 'pairwise-comparisons']
['Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>STARtest</th> <th>CARTOON</th> </tr> </thead> <tbody> <tr> <td>Model || FULL || -</td> <td>65.0</td> <td>57.9</td> </tr> <tr> <td>Model || FULL || -SYN</td> <td>35.0</td> <td>42.1</td> </tr> <tr> <td>Model || FULL || -</td> <td>68.8</td> <td>69.4</td> </tr> <tr> <td>Model || FULL || -SEM</td> <td>31.2</td> <td>30.6</td> </tr> <tr> <td>Model || FULL || -</td> <td>17.9</td> <td>10.0</td> </tr> <tr> <td>Model || HUMAN || -</td> <td>82.1</td> <td>90.0</td> </tr> </tbody></table>
Table 2
table_2
D16-1168
7
emnlp2016
However, a pairwise comparison between FULL and -SYN (Table 2) reveals that human subjects consistently prefer the output of FULL instead of -SYN both for STARtest and CARTOON. Table 2 also reports that HUMAN outperforms the output of the FULL model, and a pairwise comparison of FULL and -SEM which yields a result in line with the METEOR scores.
[1, 2]
['However, a pairwise comparison between FULL and -SYN (Table 2) reveals that human subjects consistently prefer the output of FULL instead of -SYN both for STARtest and CARTOON.', 'Table 2 also reports that HUMAN outperforms the output of the FULL model, and a pairwise comparison of FULL and -SEM which yields a result in line with the METEOR scores.']
[['FULL', 'HUMAN', '-SYN', 'STARtest', 'CARTOON'], ['HUMAN', 'FULL', '-SEM', 'STARtest', 'CARTOON']]
1
D16-1168table_3
Human evaluation results for FULL, -SYN, -SEMand HUMAN on thematicity, coherence and solvability on STARtest.
3
[['Model', 'HUMAN', '-'], ['Model', 'FULL', '-'], ['Model', 'FULL', '-SYN'], ['Model', 'FULL', '-SEM']]
1
[['Thematicity'], ['Coherence'], ['Solvability']]
[['3.7', '3.175', '4.025'], ['3.7', '3.025', '3.9'], ['3.375', '3.075', '3.825'], ['3.325', '2.65', '3.7']]
column
['Thematicity', 'Coherence', 'Solvability']
['Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Thematicity</th> <th>Coherence</th> <th>Solvability</th> </tr> </thead> <tbody> <tr> <td>Model || HUMAN || -</td> <td>3.7</td> <td>3.175</td> <td>4.025</td> </tr> <tr> <td>Model || FULL || -</td> <td>3.7</td> <td>3.025</td> <td>3.9</td> </tr> <tr> <td>Model || FULL || -SYN</td> <td>3.375</td> <td>3.075</td> <td>3.825</td> </tr> <tr> <td>Model || FULL || -SEM</td> <td>3.325</td> <td>2.65</td> <td>3.7</td> </tr> </tbody></table>
Table 3
table_3
D16-1168
7
emnlp2016
Table 3 shows the results of the detailed comparison of Thematicity, Coherence, and Solvability. This table clearly shows the strong contribution of the semantic component of our system. The specific contribution of the syntactic component is to produce overall more solvable and thematically satisfying problems, although it can slightly affect coherence especially when automatic parses fail. Finally, the overall high ratings for human-authored stories across all three dimensions, confirm the high quality of the crowd-sourced stories.
[1, 1, 1, 1]
['Table 3 shows the results of the detailed comparison of Thematicity, Coherence, and Solvability.', 'This table clearly shows the strong contribution of the semantic component of our system.', 'The specific contribution of the syntactic component is to produce overall more solvable and thematically satisfying problems, although it can slightly affect coherence especially when automatic parses fail.', 'Finally, the overall high ratings for human-authored stories across all three dimensions, confirm the high quality of the crowd-sourced stories.']
[['Thematicity', 'Coherence', 'Solvability'], ['-SEM'], ['-SYN'], ['HUMAN', 'Model']]
1
D16-1173table_1
Classification performance on SST2. The top and second blocks use only sentence-level annotations for training, while the bottom block uses both sentenceand phrases-level annotations. We report the accuracy of both the regularized teacher model q and the student model p after distillation.
3
[['Model', 'sentences', 'CNN (Kim 2014)'], ['Model', 'sentences', 'CNN+REL q'], ['Model', 'sentences', 'CNN+REL p'], ['Model', 'sentences', 'CNN+REL+LEX q'], ['Model', 'sentences', 'CNN+REL+LEX p'], ['Model', 'sentences', 'MC-CNN (Kim 2014)'], ['Model', 'sentences', 'Tensor-CNN (Lei et al. 2015)'], ['Model', 'sentences', 'CNN+But-q (Hu et al. 2016)'], ['Model', '+phrases', 'CNN (Kim 2014)'], ['Model', '+phrases', 'Tree-LSTM (Tai et al. 2015)'], ['Model', '+phrases', 'MC-CNN (Kim 2014)'], ['Model', '+phrases', 'CNN+But-q (Hu et al. 2016)'], ['Model', '+phrases', 'MVCNN (Yin and Schutze 2015)']]
1
[['Accuracy (%)']]
[['86.6'], ['87.8'], ['87.1'], ['88.0'], ['87.2'], ['86.8'], ['87.0'], ['87.1'], ['87.2'], ['88.0'], ['88.1'], ['89.2'], ['89.4']]
column
['Accuracy (%)']
['CNN+REL+LEX q', 'CNN+REL+LEX p']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Model || sentences || CNN (Kim 2014)</td> <td>86.6</td> </tr> <tr> <td>Model || sentences || CNN+REL q</td> <td>87.8</td> </tr> <tr> <td>Model || sentences || CNN+REL p</td> <td>87.1</td> </tr> <tr> <td>Model || sentences || CNN+REL+LEX q</td> <td>88.0</td> </tr> <tr> <td>Model || sentences || CNN+REL+LEX p</td> <td>87.2</td> </tr> <tr> <td>Model || sentences || MC-CNN (Kim 2014)</td> <td>86.8</td> </tr> <tr> <td>Model || sentences || Tensor-CNN (Lei et al. 2015)</td> <td>87.0</td> </tr> <tr> <td>Model || sentences || CNN+But-q (Hu et al. 2016)</td> <td>87.1</td> </tr> <tr> <td>Model || +phrases || CNN (Kim 2014)</td> <td>87.2</td> </tr> <tr> <td>Model || +phrases || Tree-LSTM (Tai et al. 2015)</td> <td>88.0</td> </tr> <tr> <td>Model || +phrases || MC-CNN (Kim 2014)</td> <td>88.1</td> </tr> <tr> <td>Model || +phrases || CNN+But-q (Hu et al. 2016)</td> <td>89.2</td> </tr> <tr> <td>Model || +phrases || MVCNN (Yin and Schutze 2015)</td> <td>89.4</td> </tr> </tbody></table>
Table 1
table_1
D16-1173
7
emnlp2016
Table 1 shows the classification performance on the SST2 dataset. From rows 1-3 we see that our proposed sentiment model that integrates the diverse set of knowledge (section 4) significantly outperforms the base CNN (Kim 2014). The improvement of the student network p validates the effectiveness of the iterative mutual distillation process. Consistent with the observations in (Hu et al. 2016), the regularized teacher model q provides further performance boost, though it imposes additional computational overhead for explicit knowledge representations. Note that our models are trained with only sentence-level annotations. Compared with the baselines trained in the same setting (rows 4-6), our model with the full knowledge, CNN+REL+LEX, performs the best. CNN+But-q (row 6) is the base CNN augmented with a logic rule that identifies contrastive sense through explicit occurrence of word “but” (section 3.1) (Hu et al. 2016). Our enhanced framework enables richer knowledge and achieves much better performance. Our method further outperforms the base CNN that is additionally trained with dense phrase-level annotations (row 7), showing improved generalization of the knowledge-enhanced model from limited data. Figure 2 further studies the performance with varying training sizes. We can clearly observe that the incorporated knowledge tends to offer higher improvement with less training data. This property can be particularly desirable in applications of structured predictions where manual annotations are expensive while rich human knowledge is available.
[1, 1, 2, 2, 2, 1, 2, 1, 1, 0, 0, 0]
['Table 1 shows the classification performance on the SST2 dataset.', 'From rows 1-3 we see that our proposed sentiment model that integrates the diverse set of knowledge (section 4) significantly outperforms the base CNN (Kim 2014).', 'The improvement of the student network p validates the effectiveness of the iterative mutual distillation process.', 'Consistent with the observations in (Hu et al. 2016), the regularized teacher model q provides further performance boost, though it imposes additional computational overhead for explicit knowledge representations.', 'Note that our models are trained with only sentence-level annotations.', 'Compared with the baselines trained in the same setting (rows 4-6), our model with the full knowledge, CNN+REL+LEX, performs the best.', 'CNN+But-q (row 6) is the base CNN augmented with a logic rule that identifies contrastive sense through explicit occurrence of word “but” (section 3.1) (Hu et al. 2016).', 'Our enhanced framework enables richer knowledge and achieves much better performance.', 'Our method further outperforms the base CNN that is additionally trained with dense phrase-level annotations (row 7), showing improved generalization of the knowledge-enhanced model from limited data.', 'Figure 2 further studies the performance with varying training sizes.', 'We can clearly observe that the incorporated knowledge tends to offer higher improvement with less training data.', 'This property can be particularly desirable in applications of structured predictions where manual annotations are expensive while rich human knowledge is available.']
[None, ['CNN+REL q', 'CNN+REL p', 'CNN+REL+LEX q', 'CNN+REL+LEX p', 'CNN (Kim 2014)'], None, None, None, ['CNN+REL+LEX q', 'CNN+REL+LEX p'], ['CNN+But-q (Hu et al. 2016)'], ['CNN+REL+LEX q', 'CNN+REL+LEX p'], ['CNN (Kim 2014)'], None, None, None]
1
D16-1173table_2
Classification performance on the CR dataset. We report the average accuracy±one standard deviation with 10fold CV. The top block compares the base CNN (row 1) with the knowledge-enhanced CNNs by our framework.
3
[['Model', '1', 'CNN (Kim, 2014)'], ['Model', '2', 'CNN+REL'], ['Model', '3', 'CNN+REL+LEX'], ['Model', '4', 'MC-CNN (Kim, 2014)'], ['Model', '5', 'Bi-RNN (Lai et al. 2015)'], ['Model', '6', 'CRF-PR (Yang and Cardie, 2014)'], ['Model', '7', 'AdaSent (Zhao et al. 2015)']]
1
[['Accuracy (%)']]
[['84.1±0.2'], ['q: 85.0±0.2, p: 84.7±0.2'], ['q: 85.3±0.3, p: 85.0±0.2'], ['85.0'], ['82.6'], ['82.7'], ['86.3']]
column
['Accuracy (%)']
['CNN+REL+LEX']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Model || 1 || CNN (Kim, 2014)</td> <td>84.1±0.2</td> </tr> <tr> <td>Model || 2 || CNN+REL</td> <td>q: 85.0±0.2, p: 84.7±0.2</td> </tr> <tr> <td>Model || 3 || CNN+REL+LEX</td> <td>q: 85.3±0.3, p: 85.0±0.2</td> </tr> <tr> <td>Model || 4 || MC-CNN (Kim, 2014)</td> <td>85.0</td> </tr> <tr> <td>Model || 5 || Bi-RNN (Lai et al. 2015)</td> <td>82.6</td> </tr> <tr> <td>Model || 6 || CRF-PR (Yang and Cardie, 2014)</td> <td>82.7</td> </tr> <tr> <td>Model || 7 || AdaSent (Zhao et al. 2015)</td> <td>86.3</td> </tr> </tbody></table>
Table 2
table_2
D16-1173
8
emnlp2016
Table 2 shows model performance on the CR dataset. Our model again surpasses the base network and several other competitive neural methods by a large margin. Though falling behind AdaSent (row 7) which has a more specialized and complex architecture than standard convolutional networks, the proposed framework indeed is general enough to apply on top of it for further enhancement. To further evaluate the proposed mutual distillation framework for learning knowledge, we compare to an extensive set of other possible knowledge optimization approaches. Table 3 shows the results. In row 2, the “opt-joint” method optimizes the regularized joint model of Eq.(2) directly in terms of both the neural network and knowledge parameters. Row 3, “opt-knwl-pipeline”, is an approach that first optimizes the standalone knowledge component and then inserts it into the previous framework of (Hu et al. 2016) as a fixed constraint. Without interaction between the knowledge and neural network learning, the pipelined method yields inferior results. Finally, rows 4-5 display a method that adapts the knowledge component at each iteration by optimizing the joint model q in terms of the knowledge parameters. We report the accuracy of both the student network p (row 4) and the joint teacher network q (row 5), and compare with our method in row 6 and 7, respectively. We can see that both models performs poorly, achieving the accuracy of only 68.6% for the knowledge component, similar to the accuracy achieved by the “opt-joint” method.
[1, 1, 2, 2, 0, 0, 0, 0, 0, 0, 0]
['Table 2 shows model performance on the CR dataset.', 'Our model again surpasses the base network and several other competitive neural methods by a large margin.', 'Though falling behind AdaSent (row 7) which has a more specialized and complex architecture than standard convolutional networks, the proposed framework indeed is general enough to apply on top of it for further enhancement.', 'To further evaluate the proposed mutual distillation framework for learning knowledge, we compare to an extensive set of other possible knowledge optimization approaches.', 'Table 3 shows the results.', 'In row 2, the “opt-joint” method optimizes the regularized joint model of Eq.(2) directly in terms of both the neural network and knowledge parameters.', 'Row 3, “opt-knwl-pipeline”, is an approach that first optimizes the standalone knowledge component and then inserts it into the previous framework of (Hu et al. 2016) as a fixed constraint.', 'Without interaction between the knowledge and neural network learning, the pipelined method yields inferior results.', 'Finally, rows 4-5 display a method that adapts the knowledge component at each iteration by optimizing the joint model q in terms of the knowledge parameters.', 'We report the accuracy of both the student network p (row 4) and the joint teacher network q (row 5), and compare with our method in row 6 and 7, respectively.', 'We can see that both models performs poorly, achieving the accuracy of only 68.6% for the knowledge component, similar to the accuracy achieved by the “opt-joint” method.']
[None, ['CNN+REL+LEX', 'Model'], ['AdaSent (Zhao et al. 2015)'], None, None, None, None, None, None, None, None]
1
D16-1174table_5
Evaluation results on the word to sense similarity test dataset of the SemEval-14 task on Cross-Level Semantic Similarity, according to Pearson (r × 100) and Spearman (ρ × 100) correlations. We show results for four similarity computation strategies (see §3.3). The best results per strategy are shown in bold whereas they are underlined for the best strategies per system. Systems marked with ∗ are evaluated on a slightly smaller dataset (474 of the original 500 pairs) so as to have a fair comparison with Rothe and Schütze (2015) and Chen et al. (2014) that use older versions of WordNet (1.7.1 and 1.7, respectively).
2
[['System', 'DECONF*'], ['System', 'Rothe and Schutze (2015)*'], ['System', 'Iacobacci et al. (2015)*'], ['System', 'Chen et al. (2014)*'], ['System', 'DECONF'], ['System', 'Pilehvar and Navigli (2015)'], ['System', 'Iacobacci et al. (2015)']]
2
[['MaxSim', 'r'], ['MaxSim', 'rho'], ['AvgSim', 'r'], ['AvgSim', 'rho'], ['S2W', 'r'], ['S2W', 'rho'], ['S2A', 'r'], ['S2A', 'rho']]
[['36.4', '37.6', '36.8', '38.8', '34.9', '35.6', '37.5', '39.3'], ['34.0', '33.8', '34.1', '33.6', '33.4', '32.0', '35.4', '34.9'], ['19.1', '21.5', '21.3', '24.2', '22.7', '21.7', '19.5', '21.1'], ['17.7', '18.0', '17.2', '16.8', '27.7', '26.7', '17.9', '18.8'], ['35.5', '36.4', '36.2', '38.0', '34.9', '35.6', '36.8', '38.4'], ['19.4', '23.8', '21.2', '26.0', '-', '-', '-', '-'], ['19.0', '21.5', '20.9', '23.2', '22.3', '20.6', '19.2', '20.4']]
column
['r', 'rho', 'r', 'rho', 'r', 'rho', 'r', 'rho']
['DECONF']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MaxSim || r</th> <th>MaxSim || rho</th> <th>AvgSim || r</th> <th>AvgSim || rho</th> <th>S2W || r</th> <th>S2W || rho</th> <th>S2A || r</th> <th>S2A || rho</th> </tr> </thead> <tbody> <tr> <td>System || DECONF*</td> <td>36.4</td> <td>37.6</td> <td>36.8</td> <td>38.8</td> <td>34.9</td> <td>35.6</td> <td>37.5</td> <td>39.3</td> </tr> <tr> <td>System || Rothe and Schutze (2015)*</td> <td>34.0</td> <td>33.8</td> <td>34.1</td> <td>33.6</td> <td>33.4</td> <td>32.0</td> <td>35.4</td> <td>34.9</td> </tr> <tr> <td>System || Iacobacci et al. (2015)*</td> <td>19.1</td> <td>21.5</td> <td>21.3</td> <td>24.2</td> <td>22.7</td> <td>21.7</td> <td>19.5</td> <td>21.1</td> </tr> <tr> <td>System || Chen et al. (2014)*</td> <td>17.7</td> <td>18.0</td> <td>17.2</td> <td>16.8</td> <td>27.7</td> <td>26.7</td> <td>17.9</td> <td>18.8</td> </tr> <tr> <td>System || DECONF</td> <td>35.5</td> <td>36.4</td> <td>36.2</td> <td>38.0</td> <td>34.9</td> <td>35.6</td> <td>36.8</td> <td>38.4</td> </tr> <tr> <td>System || Pilehvar and Navigli (2015)</td> <td>19.4</td> <td>23.8</td> <td>21.2</td> <td>26.0</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || Iacobacci et al. (2015)</td> <td>19.0</td> <td>21.5</td> <td>20.9</td> <td>23.2</td> <td>22.3</td> <td>20.6</td> <td>19.2</td> <td>20.4</td> </tr> </tbody></table>
Table 5
table_5
D16-1174
8
emnlp2016
Table 5 shows the results on the word to sense dataset of the SemEval-2014 CLSS task, according to Pearson (r × 100) and Spearman (rho × 100) correlation scores and for the four strategies. As can be seen from the low overall performance, the task is a very challenging benchmark with many WordNet out-of-vocabulary or slang terms and rare usages. Despite this, DECONF provides consistent improvement over the comparison sense representation techniques according to both measures and for all the strategies. Across the four strategies, S2A proves to be the most effective for DECONF and the representations of Rothe and Schutze (2015). The representations of Chen et al. (2014) perform best with the S2W strategy whereas those of Iacobacci et al. (2015) do not show a consistent trend with relatively low performance across the four strategies. Also, a comparison of our results across the S2W and S2A strategies reveals that a word’s aggregated representation, i.e., the centroid of the representations of its senses, is more accurate than its original word representation.
[1, 2, 1, 1, 1, 1]
['Table 5 shows the results on the word to sense dataset of the SemEval-2014 CLSS task, according to Pearson (r × 100) and Spearman (rho × 100) correlation scores and for the four strategies.', 'As can be seen from the low overall performance, the task is a very challenging benchmark with many WordNet out-of-vocabulary or slang terms and rare usages.', 'Despite this, DECONF provides consistent improvement over the comparison sense representation techniques according to both measures and for all the strategies.', 'Across the four strategies, S2A proves to be the most effective for DECONF and the representations of Rothe and Schutze (2015).', 'The representations of Chen et al. (2014) perform best with the S2W strategy whereas those of Iacobacci et al. (2015) do not show a consistent trend with relatively low performance across the four strategies.', 'Also, a comparison of our results across the S2W and S2A strategies reveals that a word’s aggregated representation, i.e., the centroid of the representations of its senses, is more accurate than its original word representation.']
[['r', 'rho'], None, ['DECONF'], ['S2A', 'DECONF'], ['Chen et al. (2014)*', 'S2W', 'Iacobacci et al. (2015)*'], ['S2W', 'S2A']]
1
D16-1175table_1
Example feature spaces for the lexemes white and clothes extracted from the dependency tree of Figure 1. Not all features are displayed for space reasons. Offsetting amod:shoes by amod results in an empty dependency path, leaving just the word co-occurrence :shoes as feature.
5
[['white', 'Distributional Features', 'amod:shoes', 'Offset Features (by amod)', ':shoes'], ['clothes', 'Distributional Features', 'amod:clean', 'Offset Features (by amod)', '-'], ['white', 'Distributional Features', 'amod:dobj:bought', 'Offset Features (by amod)', 'dobj:bought'], ['clothes', 'Distributional Features', 'dobj:like', 'Offset Features (by amod)', '-'], ['white', 'Distributional Features', 'amod:dobj:folded', 'Offset Features (by amod)', 'dobj:folded'], ['clothes', 'Distributional Features', 'dobj:folded', 'Offset Features (by amod)', '-'], ['white', 'Distributional Features', 'amod:dobj:nsubj:we', 'Offset Features (by amod)', 'dobj:nsubj:we'], ['clothes', 'Distributional Features', 'dobj:nsubj:we', 'Offset Features (by amod)', '-']]
1
[['Co-occurrence Count']]
[['1'], ['1'], ['1'], ['1'], ['1'], ['1'], ['1'], ['1']]
column
['Co-occurrence Count']
[]
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Co-occurrence Count</th> </tr> </thead> <tbody> <tr> <td>white || Distributional Features || amod:shoes || Offset Features (by amod) || :shoes</td> <td>1</td> </tr> <tr> <td>clothes || Distributional Features || amod:clean || Offset Features (by amod) || -</td> <td>1</td> </tr> <tr> <td>white || Distributional Features || amod:dobj:bought || Offset Features (by amod) || dobj:bought</td> <td>1</td> </tr> <tr> <td>clothes || Distributional Features || dobj:like || Offset Features (by amod) || -</td> <td>1</td> </tr> <tr> <td>white || Distributional Features || amod:dobj:folded || Offset Features (by amod) || dobj:folded</td> <td>1</td> </tr> <tr> <td>clothes || Distributional Features || dobj:folded || Offset Features (by amod) || -</td> <td>1</td> </tr> <tr> <td>white || Distributional Features || amod:dobj:nsubj:we || Offset Features (by amod) || dobj:nsubj:we</td> <td>1</td> </tr> <tr> <td>clothes || Distributional Features || dobj:nsubj:we || Offset Features (by amod) || -</td> <td>1</td> </tr> </tbody></table>
Table 1
table_1
D16-1175
4
emnlp2016
Table 1 shows a number of features extracted from the aligned dependency trees in Figure 1 and highlights that adjectives and nouns do not share many features if only first order dependencies would be considered. However through the inclusion of inverse and higher order dependency paths we can observe that the second order features of the adjective align with the first order features of the noun. For composition, the adjective white needs to be offset by its inverse relation to clothes making it distributionally similar to a noun that has been modified by white. Offsetting can be seen as shifting the current viewpoint in the APT data structure and is necessary for aligning the feature spaces for composition (Weir et al., 2016). We are then in a position to compose the offset representation of white with the vector for clothes by the union or the intersection of their features.
[1, 2, 2, 2, 2]
['Table 1 shows a number of features extracted from the aligned dependency trees in Figure 1 and highlights that adjectives and nouns do not share many features if only first order dependencies would be considered.', 'However through the inclusion of inverse and higher order dependency paths we can observe that the second order features of the adjective align with the first order features of the noun.', 'For composition, the adjective white needs to be offset by its inverse relation to clothes making it distributionally similar to a noun that has been modified by white.', 'Offsetting can be seen as shifting the current viewpoint in the APT data structure and is necessary for aligning the feature spaces for composition (Weir et al., 2016).', 'We are then in a position to compose the offset representation of white with the vector for clothes by the union or the intersection of their features.']
[None, None, None, None, None]
0
D16-1175table_3
Effect of the magnitude of the shift parameter k in SPPMI on the word similarity tasks. Boldface means best performance per dateset.
2
[['APTs', 'k = 1'], ['APTs', 'k = 5'], ['APTs', 'k = 10'], ['APTs', 'k = 40'], ['APTs', 'k = 100']]
2
[['MEN', 'without DI'], ['MEN', 'with DI'], ['SimLex-999', 'without DI'], ['SimLex-999', 'with DI'], ['WordSim-353 (rel)', 'without DI'], ['WordSim-353 (rel)', 'with DI'], ['WordSim-353 (sub)', 'without DI'], ['WordSim-353 (sub)', 'with DI']]
[['0.54', '0.52', '0.31', '0.30', '0.34', '0.27', '0.62', '0.60'], ['0.64', '0.65', '0.35', '0.36', '0.56', '0.51', '0.74', '0.73'], ['0.63', '0.66', '0.35', '0.36', '0.56', '0.55', '0.75', '0.74'], ['0.63', '0.68', '0.30', '0.32', '0.55', '0.61', '0.75', '0.76'], ['0.61', '0.67', '0.26', '0.29', '0.47', '0.60', '0.71', '0.72']]
column
['similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity']
['APTs']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MEN || without DI</th> <th>MEN || with DI</th> <th>SimLex-999 || without DI</th> <th>SimLex-999 || with DI</th> <th>WordSim-353 (rel) || without DI</th> <th>WordSim-353 (rel) || with DI</th> <th>WordSim-353 (sub) || without DI</th> <th>WordSim-353 (sub) || with DI</th> </tr> </thead> <tbody> <tr> <td>APTs || k = 1</td> <td>0.54</td> <td>0.52</td> <td>0.31</td> <td>0.30</td> <td>0.34</td> <td>0.27</td> <td>0.62</td> <td>0.60</td> </tr> <tr> <td>APTs || k = 5</td> <td>0.64</td> <td>0.65</td> <td>0.35</td> <td>0.36</td> <td>0.56</td> <td>0.51</td> <td>0.74</td> <td>0.73</td> </tr> <tr> <td>APTs || k = 10</td> <td>0.63</td> <td>0.66</td> <td>0.35</td> <td>0.36</td> <td>0.56</td> <td>0.55</td> <td>0.75</td> <td>0.74</td> </tr> <tr> <td>APTs || k = 40</td> <td>0.63</td> <td>0.68</td> <td>0.30</td> <td>0.32</td> <td>0.55</td> <td>0.61</td> <td>0.75</td> <td>0.76</td> </tr> <tr> <td>APTs || k = 100</td> <td>0.61</td> <td>0.67</td> <td>0.26</td> <td>0.29</td> <td>0.47</td> <td>0.60</td> <td>0.71</td> <td>0.72</td> </tr> </tbody></table>
Table 3
table_3
D16-1175
7
emnlp2016
Table 3 highlights the effect of the SPPMI shift parameter k, while keeping the number of neighbours fixed at 30 and using the static top n neighbour retrieval function. For the APT model, a value of k = 40 performs best (except for SimLex-999, where smaller shifts give better results), with a performance drop-off for larger shifts. In our experiments we find that a shift of k = 1 results in top performance for the untyped vector space model. It appears that shifting the PPMI scores in the APT model has the effect of cleaning the vectors from noisy PPMI artefacts, which reinforces the predominant sense, while other senses get suppressed. Subsequently, this results in a cleaner neighbourhood around the word vector, dominated by a single sense. This explains why distributional inference slightly degrades performance for smaller values of k.
[1, 1, 2, 2, 2, 2]
['Table 3 highlights the effect of the SPPMI shift parameter k, while keeping the number of neighbours fixed at 30 and using the static top n neighbour retrieval function.', 'For the APT model, a value of k = 40 performs best (except for SimLex-999, where smaller shifts give better results), with a performance drop-off for larger shifts.', 'In our experiments we find that a shift of k = 1 results in top performance for the untyped vector space model.', 'It appears that shifting the PPMI scores in the APT model has the effect of cleaning the vectors from noisy PPMI artefacts, which reinforces the predominant sense, while other senses get suppressed.', 'Subsequently, this results in a cleaner neighbourhood around the word vector, dominated by a single sense.', 'This explains why distributional inference slightly degrades performance for smaller values of k.']
[None, ['k = 40', 'MEN', 'SimLex-999', 'WordSim-353 (rel)', 'WordSim-353 (sub)'], ['k = 1'], None, None, None]
1
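The record above varies the SPPMI shift parameter k. As a reference point, here is a minimal sketch of shifted positive PMI in its commonly used form, SPPMI_k(w, c) = max(PMI(w, c) - log k, 0); the toy counts are invented and the exact weighting used in the paper's APT and untyped models is not reproduced here.

```python
import math
from collections import Counter

def sppmi(cooc, k=40):
    """Shifted positive PMI: max(PMI(w, c) - log k, 0), applied to raw
    (word, context) co-occurrence counts."""
    total = sum(cooc.values())
    w_marg, c_marg = Counter(), Counter()
    for (w, c), n in cooc.items():
        w_marg[w] += n
        c_marg[c] += n
    return {(w, c): max(math.log(n * total / (w_marg[w] * c_marg[c])) - math.log(k), 0.0)
            for (w, c), n in cooc.items()}

toy = {("white", "amod:shoes"): 3, ("white", "amod:dress"): 5, ("clean", "amod:shoes"): 2}
print(sppmi(toy, k=1))   # plain PPMI
print(sppmi(toy, k=40))  # a large shift zeroes out the weak associations in this toy data
```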
D16-1175table_4
Neighbour retrieval function comparison. Boldface means best performance on a dataset per VSM type. *) With 3 significant figures, the density window approach (0.713) is slightly better than the baseline without DI (0.708), static top n (0.710) and WordNet (0.710).
2
[['APTs (k = 40)', 'MEN'], ['APTs (k = 40)', 'SimLex-999'], ['APTs (k = 40)', 'WordSim-353 (rel)'], ['APTs (k = 40)', 'WordSim-353 (sub)'], ['Untyped VSM (k = 1)', 'MEN*'], ['Untyped VSM (k = 1)', 'SimLex-999'], ['Untyped VSM (k = 1)', 'WordSim-353 (rel)'], ['Untyped VSM (k = 1)', 'WordSim-353 (sub)']]
1
[['No Distributional Inference'], ['Density Window'], ['Static Top n'], ['WordNet']]
[['0.63', '0.67', '0.68', '0.63'], ['0.3', '0.32', '0.32', '0.38'], ['0.55', '0.62', '0.61', '0.56'], ['0.75', '0.78', '0.76', '0.77'], ['0.71', '0.71', '0.71', '0.71'], ['0.3', '0.29', '0.3', '0.36'], ['0.6', '0.64', '0.64', '0.52'], ['0.7', '0.73', '0.72', '0.67']]
column
['similarity', 'similarity', 'similarity', 'similarity']
['APTs (k = 40)', 'Untyped VSM (k = 1)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>No Distributional Inference</th> <th>Density Window</th> <th>Static Top n</th> <th>WordNet</th> </tr> </thead> <tbody> <tr> <td>APTs (k = 40) || MEN</td> <td>0.63</td> <td>0.67</td> <td>0.68</td> <td>0.63</td> </tr> <tr> <td>APTs (k = 40) || SimLex-999</td> <td>0.3</td> <td>0.32</td> <td>0.32</td> <td>0.38</td> </tr> <tr> <td>APTs (k = 40) || WordSim-353 (rel)</td> <td>0.55</td> <td>0.62</td> <td>0.61</td> <td>0.56</td> </tr> <tr> <td>APTs (k = 40) || WordSim-353 (sub)</td> <td>0.75</td> <td>0.78</td> <td>0.76</td> <td>0.77</td> </tr> <tr> <td>Untyped VSM (k = 1) || MEN*</td> <td>0.71</td> <td>0.71</td> <td>0.71</td> <td>0.71</td> </tr> <tr> <td>Untyped VSM (k = 1) || SimLex-999</td> <td>0.3</td> <td>0.29</td> <td>0.3</td> <td>0.36</td> </tr> <tr> <td>Untyped VSM (k = 1) || WordSim-353 (rel)</td> <td>0.6</td> <td>0.64</td> <td>0.64</td> <td>0.52</td> </tr> <tr> <td>Untyped VSM (k = 1) || WordSim-353 (sub)</td> <td>0.7</td> <td>0.73</td> <td>0.72</td> <td>0.67</td> </tr> </tbody></table>
Table 4
table_4
D16-1175
7
emnlp2016
Table 4 shows that distributional inference successfully infers missing information for both model types, resulting in improved performance over models without the use of DI on all datasets. The improvements are typically larger for the APT model, suggesting that it is missing more distributional knowledge in its elementary representations than untyped models. The density window and static top n neighbour retrieval functions perform very similar, however the static approach is more consistent and never underperforms the baseline for either model type on any dataset. The WordNet based neighbour retrieval function performs particularly well on SimLex-999. This can be explained by the fact that antonyms, which frequently happen to be among the nearest neighbours in distributional vector spaces, are regarded as dissimilar in SimLex-999, whereas the WordNet neighbour retrieval function only returns synonyms. The results furthermore confirm the effect that untyped models perform better on datasets modelling relatedness, whereas typed models work better for substitutability tasks (Baroni and Lenci, 2011).
[1, 1, 1, 1, 2, 2]
['Table 4 shows that distributional inference successfully infers missing information for both model types, resulting in improved performance over models without the use of DI on all datasets.', 'The improvements are typically larger for the APT model, suggesting that it is missing more distributional knowledge in its elementary representations than untyped models.', 'The density window and static top n neighbour retrieval functions perform very similar, however the static approach is more consistent and never underperforms the baseline for either model type on any dataset.', 'The WordNet based neighbour retrieval function performs particularly well on SimLex-999.', 'This can be explained by the fact that antonyms, which frequently happen to be among the nearest neighbours in distributional vector spaces, are regarded as dissimilar in SimLex-999, whereas the WordNet neighbour retrieval function only returns synonyms.', 'The results furthermore confirm the effect that untyped models perform better on datasets modelling relatedness, whereas typed models work better for substitutability tasks (Baroni and Lenci, 2011).']
[None, ['APTs (k = 40)'], ['Density Window', 'Static Top n'], ['WordNet', 'SimLex-999'], ['WordNet', 'SimLex-999'], ['Untyped VSM (k = 1)', 'APTs (k = 40)']]
1
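The records for this paper refer to a "static top n" neighbour retrieval function with the number of neighbours fixed at 30. Below is a rough sketch of that style of distributional inference under the assumption that the target vector is simply enriched with the summed vectors of its n nearest cosine neighbours; the paper's precise inference step and the density-window and WordNet variants are not implemented here, and the vocabulary is random placeholder data.

```python
import numpy as np

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def static_top_n_inference(word, vectors, n=30):
    """Enrich `word`'s vector with its n nearest cosine neighbours.
    Summing the neighbour vectors into the target is an assumption of this
    sketch, not necessarily the paper's exact inference step."""
    target = vectors[word]
    neighbours = sorted((w for w in vectors if w != word),
                        key=lambda w: cosine(target, vectors[w]),
                        reverse=True)[:n]
    enriched = target.copy()
    for w in neighbours:
        enriched = enriched + vectors[w]
    return enriched, neighbours

rng = np.random.default_rng(0)
vocab = {w: rng.random(50) for w in ["car", "automobile", "vehicle", "banana", "truck"]}
enriched, nbrs = static_top_n_inference("car", vocab, n=2)
print(nbrs, enriched.shape)
```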
D16-1175table_6
Neighbour retrieval function. Underlined means best performance per phrase type, boldface means best average performance overall.
2
[['APTs', 'Adjective-Noun'], ['APTs', 'Noun-Noun'], ['APTs', 'Verb-Object'], ['APTs', 'Average']]
2
[['No Distributional Inference', 'intersection'], ['No Distributional Inference', 'union'], ['Density Window', 'intersection'], ['Density Window', 'union'], ['Static Top n', 'intersection'], ['Static Top n', 'union'], ['WordNet', 'intersection'], ['WordNet', 'union']]
[['0.10', '0.41', '0.31', '0.39', '0.25', '0.40', '0.12', '0.41'], ['0.18', '0.42', '0.34', '0.38', '0.37', '0.45', '0.24', '0.36'], ['0.17', '0.36', '0.36', '0.36', '0.34', '0.35', '0.25', '0.36'], ['0.15', '0.40', '0.34', '0.38', '0.32', '0.40', '0.20', '0.38']]
column
['similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity']
['APTs']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>No Distributional Inference || intersection</th> <th>No Distributional Inference || union</th> <th>Density Window || intersection</th> <th>Density Window || union</th> <th>Static Top n || intersection</th> <th>Static Top n || union</th> <th>WordNet || intersection</th> <th>WordNet || union</th> </tr> </thead> <tbody> <tr> <td>APTs || Adjective-Noun</td> <td>0.10</td> <td>0.41</td> <td>0.31</td> <td>0.39</td> <td>0.25</td> <td>0.40</td> <td>0.12</td> <td>0.41</td> </tr> <tr> <td>APTs || Noun-Noun</td> <td>0.18</td> <td>0.42</td> <td>0.34</td> <td>0.38</td> <td>0.37</td> <td>0.45</td> <td>0.24</td> <td>0.36</td> </tr> <tr> <td>APTs || Verb-Object</td> <td>0.17</td> <td>0.36</td> <td>0.36</td> <td>0.36</td> <td>0.34</td> <td>0.35</td> <td>0.25</td> <td>0.36</td> </tr> <tr> <td>APTs || Average</td> <td>0.15</td> <td>0.40</td> <td>0.34</td> <td>0.38</td> <td>0.32</td> <td>0.40</td> <td>0.20</td> <td>0.38</td> </tr> </tbody></table>
Table 6
table_6
D16-1175
9
emnlp2016
Table 6 shows that the static top n and density window neighbour retrieval functions perform very similar again. The density window retrieval function outperforms static top n for composition by intersection and vice versa for composition by union. The WordNet approach is competitive for composition by union, but underperforms the other approaches for composition by intersection significantly. For further experiments we use the static top n approach as it is computationally cheap and easy to interpret due to the fixed number of neighbours. Table 6 also shows that while composition by intersection is significantly improved by distributional inference, composition by union does not appear to benefit from it.
[1, 1, 1, 2, 1]
['Table 6 shows that the static top n and density window neighbour retrieval functions perform very similar again.', 'The density window retrieval function outperforms static top n for composition by intersection and vice versa for composition by union.', 'The WordNet approach is competitive for composition by union, but underperforms the other approaches for composition by intersection significantly.', 'For further experiments we use the static top n approach as it is computationally cheap and easy to interpret due to the fixed number of neighbours.', 'Table 6 also shows that while composition by intersection is significantly improved by distributional inference, composition by union does not appear to benefit from it.']
[['Density Window', 'Static Top n'], ['Density Window', 'Static Top n'], ['WordNet'], None, ['intersection', 'union']]
1
D16-1175table_7
Results for the Mitchell and Lapata (2010) dataset. Results in brackets denote the performance of the respective models without the use of distributional inference. Underlined means best within group, boldfaced means best overall.
2
[['Model', 'APT – union'], ['Model', 'APT – intersect'], ['Model', 'Untyped VSM – addition'], ['Model', 'Untyped VSM – multiplication'], ['Model', 'Mitchell and Lapata (2010) (untyped VSM & multiplication)'], ['Model', 'Blacoe and Lapata (2012) (untyped VSM & multiplication)'], ['Model', 'Hashimoto et al. (2014) (PAS-CLBLM & Addnl)'], ['Model', 'Wieting et al. (2015) (Paragram word embeddings & RNN)'], ['Model', 'Weir et al. (2016) (APT & union)']]
1
[['Adjective-Noun'], ['Noun-Noun'], ['Verb-Object'], ['Average']]
[['0.45 (0.45)', '0.45 (0.43)', '0.38 (0.37)', '0.43 (0.42)'], ['0.50 (0.38)', '0.49 (0.44)', '0.43 (0.36)', '0.47 (0.39)'], ['0.46 (0.46)', '0.40 (0.41)', '0.38 (0.33)', '0.41 (0.40)'], ['0.46 (0.42)', '0.48 (0.45)', '0.40 (0.39)', '0.45 (0.42)'], ['0.46', '0.49', '0.37', '0.44'], ['0.48', '0.50', '0.35', '0.44'], ['0.52', '0.46', '0.45', '0.48'], ['0.51', '0.40', '0.50', '0.47'], ['0.45', '0.42', '0.42', '0.43']]
column
['similarity', 'similarity', 'similarity', 'similarity']
['APT – union', 'Untyped VSM – addition']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Adjective-Noun</th> <th>Noun-Noun</th> <th>Verb-Object</th> <th>Average</th> </tr> </thead> <tbody> <tr> <td>Model || APT – union</td> <td>0.45 (0.45)</td> <td>0.45 (0.43)</td> <td>0.38 (0.37)</td> <td>0.43 (0.42)</td> </tr> <tr> <td>Model || APT – intersect</td> <td>0.50 (0.38)</td> <td>0.49 (0.44)</td> <td>0.43 (0.36)</td> <td>0.47 (0.39)</td> </tr> <tr> <td>Model || Untyped VSM – addition</td> <td>0.46 (0.46)</td> <td>0.40 (0.41)</td> <td>0.38 (0.33)</td> <td>0.41 (0.40)</td> </tr> <tr> <td>Model || Untyped VSM – multiplication</td> <td>0.46 (0.42)</td> <td>0.48 (0.45)</td> <td>0.40 (0.39)</td> <td>0.45 (0.42)</td> </tr> <tr> <td>Model || Mitchell and Lapata (2010) (untyped VSM &amp; multiplication)</td> <td>0.46</td> <td>0.49</td> <td>0.37</td> <td>0.44</td> </tr> <tr> <td>Model || Blacoe and Lapata (2012) (untyped VSM &amp; multiplication)</td> <td>0.48</td> <td>0.50</td> <td>0.35</td> <td>0.44</td> </tr> <tr> <td>Model || Hashimoto et al. (2014) (PAS-CLBLM &amp; Addnl)</td> <td>0.52</td> <td>0.46</td> <td>0.45</td> <td>0.48</td> </tr> <tr> <td>Model || Wieting et al. (2015) (Paragram word embeddings &amp; RNN)</td> <td>0.51</td> <td>0.40</td> <td>0.50</td> <td>0.47</td> </tr> <tr> <td>Model || Weir et al. (2016) (APT &amp; union)</td> <td>0.45</td> <td>0.42</td> <td>0.42</td> <td>0.43</td> </tr> </tbody></table>
Table 7
table_7
D16-1175
9
emnlp2016
Table 7 shows that composition by intersection with distributional inference considerably improves upon the best results for APT models without distributional inference and for untyped count-based models, and is competitive with the state-of-the-art neural network based models of Hashimoto et al. (2014) and Wieting et al. (2015). Distributional inference also improves upon the performance of an untyped VSM where composition by pointwise multiplication is outperforming the models of Mitchell and Lapata (2010), and Blacoe and Lapata (2012). Table 7 furthermore shows that DI has a smaller effect on the APT model based on composition by union and the untyped model based on composition by pointwise addition. The reason, as pointed out in the discussion for Table 5, is that the composition function has no disambiguating effect and thus cannot eliminate unrelated neighbours introduced by distributional inference. An intersective composition function on the other hand is able to perform the disambiguation locally in any given phrasal context. This furthermore suggests that for the APT model it is not necessary to explicitly model different word senses in separate vectors, as composition by intersection is able to disambiguate any word in context individually. Unlike the models of Hashimoto et al. (2014) and Wieting et al. (2015), the elementary word representations, as well as the representations for composed phrases and the composition process in our models are fully interpretable.
[1, 1, 1, 2, 2, 2, 2]
['Table 7 shows that composition by intersection with distributional inference considerably improves upon the best results for APT models without distributional inference and for untyped count-based models, and is competitive with the state-of-the-art neural network based models of Hashimoto et al. (2014) and Wieting et al. (2015).', 'Distributional inference also improves upon the performance of an untyped VSM where composition by pointwise multiplication is outperforming the models of Mitchell and Lapata (2010), and Blacoe and Lapata (2012).', 'Table 7 furthermore shows that DI has a smaller effect on the APT model based on composition by union and the untyped model based on composition by pointwise addition.', 'The reason, as pointed out in the discussion for Table 5, is that the composition function has no disambiguating effect and thus cannot eliminate unrelated neighbours introduced by distributional inference.', 'An intersective composition function on the other hand is able to perform the disambiguation locally in any given phrasal context.', 'This furthermore suggests that for the APT model it is not necessary to explicitly model different word senses in separate vectors, as composition by intersection is able to disambiguate any word in context individually.', 'Unlike the models of Hashimoto et al. (2014) and Wieting et al. (2015), the elementary word representations, as well as the representations for composed phrases and the composition process in our models are fully interpretable.']
[None, None, ['APT – union', 'Untyped VSM – addition'], None, None, None, None]
1
D16-1179table_2
VPE detection results (baseline F1, Machine Learning F1, ML F1 improvement) obtained with 5-fold cross validation.
2
[['Auxiliary', 'Do'], ['Auxiliary', 'Be'], ['Auxiliary', 'Have'], ['Auxiliary', 'Modal'], ['Auxiliary', 'To'], ['Auxiliary', 'So'], ['Auxiliary', 'ALL']]
1
[['Baseline'], ['ML'], ['Change']]
[['0.83', '0.89', '0.06'], ['0.34', '0.63', '0.29'], ['0.43', '0.75', '0.32'], ['0.8', '0.86', '0.06'], ['0.76', '0.79', '0.03'], ['0.67', '0.86', '0.19'], ['0.71', '0.82', '0.11']]
column
['F1', 'F1', 'F1']
['Change']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Baseline</th> <th>ML</th> <th>Change</th> </tr> </thead> <tbody> <tr> <td>Auxiliary || Do</td> <td>0.83</td> <td>0.89</td> <td>0.06</td> </tr> <tr> <td>Auxiliary || Be</td> <td>0.34</td> <td>0.63</td> <td>0.29</td> </tr> <tr> <td>Auxiliary || Have</td> <td>0.43</td> <td>0.75</td> <td>0.32</td> </tr> <tr> <td>Auxiliary || Modal</td> <td>0.8</td> <td>0.86</td> <td>0.06</td> </tr> <tr> <td>Auxiliary || To</td> <td>0.76</td> <td>0.79</td> <td>0.03</td> </tr> <tr> <td>Auxiliary || So</td> <td>0.67</td> <td>0.86</td> <td>0.19</td> </tr> <tr> <td>Auxiliary || ALL</td> <td>0.71</td> <td>0.82</td> <td>0.11</td> </tr> </tbody></table>
Table 2
table_2
D16-1179
4
emnlp2016
Results. Using a standard logistic regression classifier, we achieve an 11% improvement in accuracy over the baseline approach, as can be seen in Table 2. The rule-based approach was insufficient for be and have VPE, where logistic regression provides the largest improvements. Although we improve upon the baseline by 29%, the accuracy achieved for be VPE is still low; this occurs mainly because: (i) be is the most commonly used auxiliary, so the number of negative examples is high compared to the number of positive examples, and, (ii) the analysis of some of the false positives showed that there may have been genuine cases of VPE that were missed by the annotators of the dataset (Bos and Spenader, 2011). For example, this sentence (in file wsj 2057) was missed by the annotators (trigger in bold, antecedent italicized) “Some people tend to ignore that a 50-point move is less in percentage terms than it was when the stock market was lower.”, here it is clear that was is a trigger for VPE.
[2, 1, 1, 1, 2]
['Results.', 'Using a standard logistic regression classifier, we achieve an 11% improvement in accuracy over the baseline approach, as can be seen in Table 2.', 'The rule-based approach was insufficient for be and have VPE, where logistic regression provides the largest improvements.', 'Although we improve upon the baseline by 29%, the accuracy achieved for be VPE is still low; this occurs mainly because: (i) be is the most commonly used auxiliary, so the number of negative examples is high compared to the number of positive examples, and, (ii) the analysis of some of the false positives showed that there may have been genuine cases of VPE that were missed by the annotators of the dataset (Bos and Spenader, 2011).', 'For example, this sentence (in file wsj 2057) was missed by the annotators (trigger in bold, antecedent italicized) “Some people tend to ignore that a 50-point move is less in percentage terms than it was when the stock market was lower.”, here it is clear that was is a trigger for VPE.']
[None, ['ALL', 'Change'], ['Be', 'Have'], ['Be'], None]
1
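The description above attributes the gains to a standard logistic regression classifier trained with oversampled positive examples and syntactic features. The snippet below sketches that general recipe with scikit-learn on random placeholder features; the feature set, class sizes, and hyperparameters are illustrative assumptions rather than the authors' setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

rng = np.random.default_rng(42)
X_neg = rng.normal(size=(500, 20))           # many auxiliaries that are not VPE triggers
X_pos = rng.normal(loc=0.5, size=(40, 20))   # few genuine VPE triggers (placeholder features)

# Oversample the positive class until it matches the negative class in size.
X_pos_over = resample(X_pos, replace=True, n_samples=len(X_neg), random_state=0)
X = np.vstack([X_neg, X_pos_over])
y = np.array([0] * len(X_neg) + [1] * len(X_pos_over))

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X_pos[:5]))
```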
D16-1179table_3
Results (precision, recall, F1) for VPE detection using the train-test split proposed by Bos and Spenader (2011).
2
[['Test Set Results', 'Liu et al. (2016)'], ['Test Set Results', 'This work']]
1
[['P'], ['R'], ['F1']]
[['0.8022', '0.6134', '0.6952'], ['0.7574', '0.8655', '0.8078']]
column
['P', 'R', 'F1']
['This work']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Test Set Results || Liu et al. (2016)</td> <td>0.8022</td> <td>0.6134</td> <td>0.6952</td> </tr> <tr> <td>Test Set Results || This work</td> <td>0.7574</td> <td>0.8655</td> <td>0.8078</td> </tr> </tbody></table>
Table 3
table_3
D16-1179
4
emnlp2016
In Table 3, we compare our results to those achieved by Liu et al. (2016) when using WSJ sets 0-14 for training and sets 20-24 for testing. We improve on their overall accuracy by over 11%, due to the 25% improvement in recall achieved by our method. Our results show that oversampling the positive examples in the dataset and incorporating linguistically motivated syntactic features provide substantial gains for VPE detection. Additionally, we consider every instance of the word to as a potential trigger, while they do not - this lowers their recall because they miss every gold-standard instance of to-VPE. Thus, not only do we improve upon the state-of-the-art accuracy, but we also expand the scope of VPE-detection to include to-VPE without causing a significant decrease in accuracy.
[1, 1, 2, 2, 2]
['In Table 3, we compare our results to those achieved by Liu et al. (2016) when using WSJ sets 0-14 for training and sets 20-24 for testing.', 'We improve on their overall accuracy by over 11%, due to the 25% improvement in recall achieved by our method.', 'Our results show that oversampling the positive examples in the dataset and incorporating linguistically motivated syntactic features provide substantial gains for VPE detection.', 'Additionally, we consider every instance of the word to as a potential trigger, while they do not - this lowers their recall because they miss every gold-standard instance of to-VPE.', 'Thus, not only do we improve upon the state-of-the-art accuracy, but we also expand the scope of VPE-detection to include to-VPE without causing a significant decrease in accuracy.']
[['Liu et al. (2016)', 'This work'], ['R', 'This work'], None, None, None]
1
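The F1 values in the record above follow directly from the precision and recall columns via F1 = 2PR/(P + R); a quick check (allowing for rounding):

```python
def f1(p, r):
    return 2 * p * r / (p + r)

# (precision, recall, reported F1) from Table 3 above
rows = {"Liu et al. (2016)": (0.8022, 0.6134, 0.6952),
        "This work": (0.7574, 0.8655, 0.8078)}
for name, (p, r, reported) in rows.items():
    print(f"{name}: computed F1 = {f1(p, r):.4f}, reported = {reported}")
```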
D16-1179table_6
Feature ablation results (feature set excluded, precision, recall, F1) on VPE detection; obtained with 5-fold cross validation.
2
[['Excluded', 'Auxiliary'], ['Excluded', 'Lexical'], ['Excluded', 'Syntactic'], ['Excluded', 'NONE']]
1
[['P'], ['R'], ['F1']]
[['0.7982', '0.7611', '0.7781'], ['0.6937', '0.8408', '0.7582'], ['0.7404', '0.733', '0.7343'], ['0.8242', '0.812', '0.817']]
column
['P', 'R', 'F1']
['Syntactic']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Excluded || Auxiliary</td> <td>0.7982</td> <td>0.7611</td> <td>0.7781</td> </tr> <tr> <td>Excluded || Lexical</td> <td>0.6937</td> <td>0.8408</td> <td>0.7582</td> </tr> <tr> <td>Excluded || Syntactic</td> <td>0.7404</td> <td>0.733</td> <td>0.7343</td> </tr> <tr> <td>Excluded || NONE</td> <td>0.8242</td> <td>0.812</td> <td>0.817</td> </tr> </tbody></table>
Table 6
table_6
D16-1179
8
emnlp2016
Trigger Detection. In Table 6 we can see that the syntactic features were essential for obtaining the best results, as can be seen by the 8.3% improvement, from 73.4% to 81.7%, obtained from including these features. This shows that notions from theoretical linguistics can prove to be invaluable when approaching the problem of VPE detection and that extracting these features in related problems may improve performance.
[2, 1, 2]
['Trigger Detection.', 'In Table 6 we can see that the syntactic features were essential for obtaining the best results, as can be seen by the 8.3% improvement, from 73.4% to 81.7%, obtained from including these features.', 'This shows that notions from theoretical linguistics can prove to be invaluable when approaching the problem of VPE detection and that extracting these features in related problems may improve performance.']
[None, ['Syntactic', 'F1', 'NONE'], None]
1
D16-1179table_7
Feature ablation results (feature set excluded, precision, recall, F1) on antecedent identification; obtained with 5-fold cross validation.
2
[['Features Excluded', 'Alignment'], ['Features Excluded', 'NP Relation'], ['Features Excluded', 'Syntactic'], ['Features Excluded', 'Matching'], ['Features Excluded', 'NONE']]
1
[['Accuracy']]
[['0.6511'], ['0.6428'], ['0.5495'], ['0.6504'], ['0.6518']]
column
['Accuracy']
['Features Excluded']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Features Excluded || Alignment</td> <td>0.6511</td> </tr> <tr> <td>Features Excluded || NP Relation</td> <td>0.6428</td> </tr> <tr> <td>Features Excluded || Syntactic</td> <td>0.5495</td> </tr> <tr> <td>Features Excluded || Matching</td> <td>0.6504</td> </tr> <tr> <td>Features Excluded || NONE</td> <td>0.6518</td> </tr> </tbody></table>
Table 7
table_7
D16-1179
8
emnlp2016
Antecedent Identification. Table 7 presents the results from a feature ablation study on antecedent identification. The most striking observation is that the alignment features do not add any significant improvement in the results. This is either because there simply is not an inherent parallelism between the trigger site and the antecedent site, or because the other features represent the parallelism adequately without necessitating the addition of the alignment features. The heuristic syntactic features provide a large (10%) accuracy improvement when included. These results show that a dependency-based alignment approach to feature extraction does not represent the parallelism between the trigger and antecedent as well as features based on the lexical and syntactic properties of the two.
[2, 1, 1, 2, 1, 2]
['Antecedent Identification.', 'Table 7 presents the results from a feature ablation study on antecedent identification.', 'The most striking observation is that the alignment features do not add any significant improvement in the results.', 'This is either because there simply is not an inherent parallelism between the trigger site and the antecedent site, or because the other features represent the parallelism adequately without necessitating the addition of the alignment features.', 'The heuristic syntactic features provide a large (10%) accuracy improvement when included.', 'These results show that a dependency-based alignment approach to feature extraction does not represent the parallelism between the trigger and antecedent as well as features based on the lexical and syntactic properties of the two.']
[None, None, ['Alignment', 'Accuracy'], None, ['Syntactic', 'Accuracy'], None]
1
D16-1181table_1
1-best supertagging results on both the dev and test sets. BLSTM is the baseline model without attention; BLSTM-local and -global are the two attention-based models.
2
[['Model', 'C&C'], ['Model', 'Xu et al. (2015)'], ['Model', 'Xu et al. (2016)'], ['Model', 'Lewis et al. (2016)'], ['Model', 'Vaswani et al. (2016)'], ['Model', 'Vaswani et al. (2016) +LM +beam'], ['Model', 'BLSTM'], ['Model', 'BLSTM-local'], ['Model', 'BLSTM-global']]
1
[['Dev'], ['Test']]
[['91.50', '92.02'], ['93.07', '93.00'], ['93.49', '93.52'], ['94.1', '94.3'], ['94.08', '-'], ['94.24', '94.50'], ['94.11', '94.29'], ['94.31', '94.46'], ['94.22', '94.42']]
column
['Accuracy', 'Accuracy']
['BLSTM', 'BLSTM-local', 'BLSTM-global']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>Model || C&amp;C</td> <td>91.50</td> <td>92.02</td> </tr> <tr> <td>Model || Xu et al. (2015)</td> <td>93.07</td> <td>93.00</td> </tr> <tr> <td>Model || Xu et al. (2016)</td> <td>93.49</td> <td>93.52</td> </tr> <tr> <td>Model || Lewis et al. (2016)</td> <td>94.1</td> <td>94.3</td> </tr> <tr> <td>Model || Vaswani et al. (2016)</td> <td>94.08</td> <td>-</td> </tr> <tr> <td>Model || Vaswani et al. (2016) +LM +beam</td> <td>94.24</td> <td>94.50</td> </tr> <tr> <td>Model || BLSTM</td> <td>94.11</td> <td>94.29</td> </tr> <tr> <td>Model || BLSTM-local</td> <td>94.31</td> <td>94.46</td> </tr> <tr> <td>Model || BLSTM-global</td> <td>94.22</td> <td>94.42</td> </tr> </tbody></table>
Table 1
table_1
D16-1181
7
emnlp2016
Table 1 summarizes 1-best supertagging results. Our baseline BLSTM model without attention achieves the same level of accuracy as Lewis et al. (2016) and the baseline BLSTM model of Vaswani et al. (2016). Compared with the latter, our hidden state size is 50% smaller (256 vs. 512). For training and testing the local attention model (BLSTM-local), we used an attention window size of 5 (tuned on the dev set), and it gives an improvement of 0.94% over the BRNN supertagger (Xu et al., 2016), achieving an accuracy on par with the beam-search (size 12) model of Vaswani et al. (2016) that is enhanced with a language model. Despite being able to consider wider contexts than the local model, the global attention model (BLSTM-global) did not show further gains, hence we used BLSTM-local for all parsing experiments below.
[1, 1, 2, 1, 1]
['Table 1 summarizes 1-best supertagging results.', 'Our baseline BLSTM model without attention achieves the same level of accuracy as Lewis et al. (2016) and the baseline BLSTM model of Vaswani et al. (2016).', 'Compared with the latter, our hidden state size is 50% smaller (256 vs. 512).', 'For training and testing the local attention model (BLSTM-local), we used an attention window size of 5 (tuned on the dev set), and it gives an improvement of 0.94% over the BRNN supertagger (Xu et al., 2016), achieving an accuracy on par with the beam-search (size 12) model of Vaswani et al. (2016) that is enhanced with a language model.', 'Despite being able to consider wider contexts than the local model, the global attention model (BLSTM-global) did not show further gains, hence we used BLSTM-local for all parsing experiments below.']
[None, ['BLSTM', 'Lewis et al. (2016)', 'Vaswani et al. (2016)'], None, ['BLSTM-local', 'Xu et al. (2016)', 'Vaswani et al. (2016)'], ['BLSTM-global', 'BLSTM-local']]
1
D16-1181table_4
Parsing results on the dev (Section 00) and test (Section 23) sets with 100% coverage, with all LSTM models using the BLSTM-local supertagging model. All experiments using auto POS. CAT (lexical category assignment accuracy). LSTM-greedy is the full greedy parser.
2
[['Model', 'C&C (normal-form)'], ['Model', 'C&C (dependency hybrid)'], ['Model', 'Zhang and Clark (2011)'], ['Model', 'Xu et al. (2014)'], ['Model', 'Ambati et al. (2016)'], ['Model', 'Xu et al. (2016)-greedy'], ['Model', 'Xu et al. (2016)-XF1'], ['Model', 'LSTM-greedy'], ['Model', 'LSTM-XF1'], ['Model', 'LSTM-XF1']]
2
[['Beam', '-'], ['Section 00', 'LP'], ['Section 00', 'LR'], ['Section 00', 'LF'], ['Section 00', 'CAT'], ['Section 23', 'LP'], ['Section 23', 'LR'], ['Section 23', 'LF'], ['Section 23', 'CAT']]
[['-', '85.18', '82.53', '83.83', '92.39', '85.45', '83.97', '84.70', '92.83'], ['-', '86.07', '82.77', '84.39', '92.57', '86.24', '84.17', '85.19', '93.0'], ['16', '87.15', '82.95', '85.0', '92.77', '87.43', '83.61', '85.48', '93.12'], ['128', '86.29', '84.09', '85.18', '92.75', '87.03', '85.08', '86.04', '93.1'], ['16', '-', '-', '85.69', '93.02', '-', '-', '85.57', '92.86'], ['1', '88.12', '81.38', '84.61', '93.42', '88.53', '81.65', '84.95', '93.57'], ['8', '88.20', '83.40', '85.73', '93.56', '88.74', '84.22', '86.42', '93.87'], ['1', '89.43', '83.86', '86.56', '94.47', '89.75', '84.10', '86.83', '94.63'], ['1', '89.68', '85.29', '87.43', '94.41', '89.85', '85.51', '87.62', '94.53'], ['8', '89.54', '85.46', '87.45', '94.39', '89.81', '85.81', '87.76', '94.57']]
column
['Beam', 'LP', 'LR', 'LF', 'CAT', 'LP', 'LR', 'LF', 'CAT']
['LSTM-greedy', 'LSTM-XF1']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Beam || -</th> <th>Section 00 || LP</th> <th>Section 00 || LR</th> <th>Section 00 || LF</th> <th>Section 00 || CAT</th> <th>Section 23 || LP</th> <th>Section 23 || LR</th> <th>Section 23 || LF</th> <th>Section 23 || CAT</th> </tr> </thead> <tbody> <tr> <td>Model || C&amp;C (normal-form)</td> <td>-</td> <td>85.18</td> <td>82.53</td> <td>83.83</td> <td>92.39</td> <td>85.45</td> <td>83.97</td> <td>84.70</td> <td>92.83</td> </tr> <tr> <td>Model || C&amp;C (dependency hybrid)</td> <td>-</td> <td>86.07</td> <td>82.77</td> <td>84.39</td> <td>92.57</td> <td>86.24</td> <td>84.17</td> <td>85.19</td> <td>93.0</td> </tr> <tr> <td>Model || Zhang and Clark (2011)</td> <td>16</td> <td>87.15</td> <td>82.95</td> <td>85.0</td> <td>92.77</td> <td>87.43</td> <td>83.61</td> <td>85.48</td> <td>93.12</td> </tr> <tr> <td>Model || Xu et al. (2014)</td> <td>128</td> <td>86.29</td> <td>84.09</td> <td>85.18</td> <td>92.75</td> <td>87.03</td> <td>85.08</td> <td>86.04</td> <td>93.1</td> </tr> <tr> <td>Model || Ambati et al. (2016)</td> <td>16</td> <td>-</td> <td>-</td> <td>85.69</td> <td>93.02</td> <td>-</td> <td>-</td> <td>85.57</td> <td>92.86</td> </tr> <tr> <td>Model || Xu et al. (2016)-greedy</td> <td>1</td> <td>88.12</td> <td>81.38</td> <td>84.61</td> <td>93.42</td> <td>88.53</td> <td>81.65</td> <td>84.95</td> <td>93.57</td> </tr> <tr> <td>Model || Xu et al. (2016)-XF1</td> <td>8</td> <td>88.20</td> <td>83.40</td> <td>85.73</td> <td>93.56</td> <td>88.74</td> <td>84.22</td> <td>86.42</td> <td>93.87</td> </tr> <tr> <td>Model || LSTM-greedy</td> <td>1</td> <td>89.43</td> <td>83.86</td> <td>86.56</td> <td>94.47</td> <td>89.75</td> <td>84.10</td> <td>86.83</td> <td>94.63</td> </tr> <tr> <td>Model || LSTM-XF1</td> <td>1</td> <td>89.68</td> <td>85.29</td> <td>87.43</td> <td>94.41</td> <td>89.85</td> <td>85.51</td> <td>87.62</td> <td>94.53</td> </tr> <tr> <td>Model || LSTM-XF1</td> <td>8</td> <td>89.54</td> <td>85.46</td> <td>87.45</td> <td>94.39</td> <td>89.81</td> <td>85.81</td> <td>87.76</td> <td>94.57</td> </tr> </tbody></table>
Table 4
table_4
D16-1181
8
emnlp2016
The XF1 model. Table 4 also shows the results for the XF1 models (LSTM-XF1), which use all four types of embeddings. We used a beam size of 8, and a ? value of 0.06 for both training and testing (tuned on the dev set), and training took 12 epochs to converge (Fig. 4b), with an F1 of 87.45% on the dev set. Decoding the XF1 model with greedy inference only slightly decreased recall and F1, and this resulted in a highly accurate deterministic parser. On the test set, our XF1 greedy model gives 2.67% F1 improvement over the greedy model in Xu et al. (2016), and the beam-search XF1 model achieves an F1 improvement of 1.34% compared with the XF1 model of Xu et al. (2016).
[0, 1, 2, 1, 1]
['The XF1 model.', 'Table 4 also shows the results for the XF1 models (LSTM-XF1), which use all four types of embeddings.', 'We used a beam size of 8, and a ? value of 0.06 for both training and testing (tuned on the dev set), and training took 12 epochs to converge (Fig. 4b), with an F1 of 87.45% on the dev set.', 'Decoding the XF1 model with greedy inference only slightly decreased recall and F1, and this resulted in a highly accurate deterministic parser.', 'On the test set, our XF1 greedy model gives 2.67% F1 improvement over the greedy model in Xu et al. (2016), and the beam-search XF1 model achieves an F1 improvement of 1.34% compared with the XF1 model of Xu et al. (2016).']
[None, ['LSTM-XF1'], ['Beam'], ['LSTM-greedy', 'LR', 'LF'], ['LSTM-greedy', 'LF', 'Xu et al. (2014)']]
1
D16-1182table_1
Parsers’ performance in terms of accuracy and robustness. The best result in each column is given in bold, and the worst result is in italics.
2
[['Parser', 'Malt'], ['Parser', 'Mate'], ['Parser', 'MST'], ['Parser', 'SNN'], ['Parser', 'SyntaxNet'], ['Parser', 'Turbo'], ['Parser', 'Tweebo'], ['Parser', 'Yara']]
3
[['Train on PTB §1-21', 'UAS', 'PTB §23'], ['Train on PTB §1-21', 'Robustness F1', 'ESL'], ['Train on PTB §1-21', 'Robustness F1', 'MT'], ['Train on Tweebanktrain', 'UAF1', 'Tweebanktest'], ['Train on Tweebanktrain', 'Robustness F1', 'ESL'], ['Train on Tweebanktrain', 'Robustness F1', 'MT']]
[['89.58', '93.05', '76.26', '77.48', '94.36', '80.66'], ['93.16', '93.24', '77.07', '76.26', '91.83', '75.74'], ['91.17', '92.80', '76.51', '73.99', '92.37', '77.71'], ['90.70', '93.15', '74.18', '53.4', '88.90', '71.54'], ['93.04', '93.24', '76.39', '75.75', '88.78', '81.87'], ['92.84', '93.72', '77.79', '79.42', '93.28', '78.26'], ['-', '-', '-', '80.91', '93.39', '79.47'], ['93.09', '93.52', '73.15', '78.06', '93.04', '75.83']]
column
['UAS', 'Robustness F1', 'Robustness F1', 'UAF1', 'Robustness F1', 'Robustness F1']
['Parser']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Train on PTB §1-21 || UAS || PTB §23</th> <th>Train on PTB §1-21 || Robustness F1 || ESL</th> <th>Train on PTB §1-21 || Robustness F1 || MT</th> <th>Train on Tweebanktrain || UAF1 || Tweebanktest</th> <th>Train on Tweebanktrain || Robustness F1 || ESL</th> <th>Train on Tweebanktrain || Robustness F1 || MT</th> </tr> </thead> <tbody> <tr> <td>Parser || Malt</td> <td>89.58</td> <td>93.05</td> <td>76.26</td> <td>77.48</td> <td>94.36</td> <td>80.66</td> </tr> <tr> <td>Parser || Mate</td> <td>93.16</td> <td>93.24</td> <td>77.07</td> <td>76.26</td> <td>91.83</td> <td>75.74</td> </tr> <tr> <td>Parser || MST</td> <td>91.17</td> <td>92.80</td> <td>76.51</td> <td>73.99</td> <td>92.37</td> <td>77.71</td> </tr> <tr> <td>Parser || SNN</td> <td>90.70</td> <td>93.15</td> <td>74.18</td> <td>53.4</td> <td>88.90</td> <td>71.54</td> </tr> <tr> <td>Parser || SyntaxNet</td> <td>93.04</td> <td>93.24</td> <td>76.39</td> <td>75.75</td> <td>88.78</td> <td>81.87</td> </tr> <tr> <td>Parser || Turbo</td> <td>92.84</td> <td>93.72</td> <td>77.79</td> <td>79.42</td> <td>93.28</td> <td>78.26</td> </tr> <tr> <td>Parser || Tweebo</td> <td>-</td> <td>-</td> <td>-</td> <td>80.91</td> <td>93.39</td> <td>79.47</td> </tr> <tr> <td>Parser || Yara</td> <td>93.09</td> <td>93.52</td> <td>73.15</td> <td>78.06</td> <td>93.04</td> <td>75.83</td> </tr> </tbody></table>
Table 1
table_1
D16-1182
5
emnlp2016
The overall performances of all parsers are shown in Table 1. Note that the Tweebo Parser’s performance is not trained on the PTB because it is a specialization of the Turbo Parser, designed to parse tweets. Table 1 shows that, for both training conditions, the parser that has the best robustness score in the ESL domain has also high robustness for the MT domain. This suggests that it might be possible to build robust parsers for multiple ungrammatical domains. The training conditions do matter – Malt performs better when trained from Tweebank than from the PTB. In contrast, Tweebank is not a good fit with the neural network parsers due to its small size. Moreover, SNN uses pre-trained word embeddings, and 60% of Tweebank tokens are missing. Next, let us compare parsers within each train/test configuration for their relative robustness. When trained on the PTB, all parsers are comparably robust on ESL data, while they exhibit more differences on the MT data, and, as expected, everyone’s performance is much lower because MT errors are more diverse than ESL errors. We expected that by training on Tweebank, parsers will perform better on ESL data (and maybe even MT data), since Tweebank is arguably more similar to the test domains than the PTB, we also expected Tweebo to outperform others. The results are somewhat surprising. On the one hand, the highest parser score increased from 93.72% (Turbo trained on PTB) to 94.36% (Malt trained on Tweebank), but the two neural network parsers performed significantly worse, most likely due to the small training size of Tweebank. Interestingly, although SyntaxNet has the lowest score on ESL, it has the highest score on MT, showing promise in its robustness.
[1, 2, 1, 2, 1, 2, 2, 2, 1, 1, 2, 1, 1]
['The overall performances of all parsers are shown in Table 1.', 'Note that the Tweebo Parser’s performance is not trained on the PTB because it is a specialization of the Turbo Parser, designed to parse tweets.', 'Table 1 shows that, for both training conditions, the parser that has the best robustness score in the ESL domain has also high robustness for the MT domain.', 'This suggests that it might be possible to build robust parsers for multiple ungrammatical domains.', 'The training conditions do matter – Malt performs better when trained from Tweebank than from the PTB.', 'In contrast, Tweebank is not a good fit with the neural network parsers due to its small size.', 'Moreover, SNN uses pre-trained word embeddings, and 60% of Tweebank tokens are missing.', 'Next, let us compare parsers within each train/test configuration for their relative robustness.', 'When trained on the PTB, all parsers are comparably robust on ESL data, while they exhibit more differences on the MT data, and, as expected, everyone’s performance is much lower because MT errors are more diverse than ESL errors.', 'We expected that by training on Tweebank, parsers will perform better on ESL data (and maybe even MT data), since Tweebank is arguably more similar to the test domains than the PTB, we also expected Tweebo to outperform others.', 'The results are somewhat surprising.', 'On the one hand, the highest parser score increased from 93.72% (Turbo trained on PTB) to 94.36% (Malt trained on Tweebank), but the two neural network parsers performed significantly worse, most likely due to the small training size of Tweebank.', 'Interestingly, although SyntaxNet has the lowest score on ESL, it has the highest score on MT, showing promise in its robustness.']
[['Parser'], None, ['Parser', 'ESL', 'MT', 'Robustness F1'], None, ['Malt', 'Train on Tweebanktrain', 'Train on PTB §1-21'], ['Train on Tweebanktrain'], ['SNN'], None, ['Parser', 'ESL'], ['Train on Tweebanktrain', 'ESL', 'Tweebo', 'Train on PTB §1-21'], None, ['Turbo', 'Train on PTB §1-21', 'Malt', 'Train on Tweebanktrain'], ['SyntaxNet', 'ESL', 'MT']]
1
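UAS and LAS in the record above are the standard attachment scores: the fraction of tokens whose predicted head is correct, and whose head and dependency label are both correct, respectively. A small sketch of that computation is below; the paper-specific Robustness F1 measure is not reproduced here.

```python
def attachment_scores(gold, pred):
    """gold/pred: one (head_index, dependency_label) pair per token.
    UAS counts correct heads; LAS additionally requires the correct label."""
    assert len(gold) == len(pred)
    uas_hits = sum(g[0] == p[0] for g, p in zip(gold, pred))
    las_hits = sum(g == p for g, p in zip(gold, pred))
    return uas_hits / len(gold), las_hits / len(gold)

gold = [(2, "nsubj"), (0, "root"), (2, "dobj")]
pred = [(2, "nsubj"), (0, "root"), (1, "dobj")]
print(attachment_scores(gold, pred))  # (0.67, 0.67): the last token's head is wrong
```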
D16-1183table_2
Test SMATCH results.
2
[['Parser', 'JAMR'], ['Parser', 'CKY (Artzi et al. 2015)'], ['Parser', 'Shift Reduce'], ['Parser', 'Wang et al. (2015a)']]
1
[['P'], ['R'], ['F']]
[['67.8', '59.2', '63.2'], ['66.8', '65.7', '66.3'], ['68.1', '64.2', '66.1'], ['72.0', '67.0', '70.0']]
column
['P', 'R', 'F']
['Shift Reduce']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F</th> </tr> </thead> <tbody> <tr> <td>Parser || JAMR</td> <td>67.8</td> <td>59.2</td> <td>63.2</td> </tr> <tr> <td>Parser || CKY (Artzi et al. 2015)</td> <td>66.8</td> <td>65.7</td> <td>66.3</td> </tr> <tr> <td>Parser || Shift Reduce</td> <td>68.1</td> <td>64.2</td> <td>66.1</td> </tr> <tr> <td>Parser || Wang et al. (2015a)</td> <td>72.0</td> <td>67.0</td> <td>70.0</td> </tr> </tbody></table>
Table 2
table_2
D16-1183
8
emnlp2016
Table 2 shows the test results using our best performing model (ensemble with syntax features). We compare our approach to the CKY parser of Artzi et al. (2015) and JAMR (Flanigan et al., 2014). We also list the results of Wang et al. (2015b), who demonstrated the benefit of auxiliary analyzers and is the current state of the art. Our performance is comparable to the CKY parser of (Artzi et al., 2015), which we use to bootstrap our system. This demonstrates the ability of our parser to match the performance of a dynamic-programming parser, which executes significantly more operations per sentence.
[1, 2, 2, 1, 1]
['Table 2 shows the test results using our best performing model (ensemble with syntax features).', 'We compare our approach to the CKY parser of Artzi et al. (2015) and JAMR (Flanigan et al., 2014).', 'We also list the results of Wang et al. (2015b), who demonstrated the benefit of auxiliary analyzers and is the current state of the art.', 'Our performance is comparable to the CKY parser of (Artzi et al., 2015), which we use to bootstrap our system.', 'This demonstrates the ability of our parser to match the performance of a dynamic-programming parser, which executes significantly more operations per sentence.']
[None, ['CKY (Artzi et al. 2015)', 'JAMR', 'Shift Reduce'], ['Wang et al. (2015a)'], ['CKY (Artzi et al. 2015)', 'Shift Reduce'], ['Shift Reduce']]
1
D16-1184table_6
Parsing performance on web queries
2
[['System', 'Stanford'], ['System', 'MSTParser'], ['System', 'LSTMParser'], ['System', 'QueryParser + label refinement'], ['System', 'QueryParser + word2vec'], ['System', 'QueryParser + label refinement + word2vec']]
2
[['All (n=1000)', 'UAS'], ['All (n=1000)', 'LAS'], ['NoFunc (n=900)', 'UAS'], ['NoFunc (n=900)', 'LAS'], ['Func (n=100)', 'UAS'], ['Func (n=100)', 'LAS']]
[['0.694', '0.602', '0.670', '0.568', '0.834', '0.799'], ['0.699', '0.616', '0.683', '0.691', '0.799', '0.766'], ['0.700', '0.608', '0.679', '0.578', '0.827', '0.790'], ['0.829', '0.769', '0.824', '0.761', '0.858', '0.818'], ['0.843', '0.788', '0.843', '0.784', '0.838', '0.812'], ['0.862', '0.804', '0.858', '0.795', '0.883', '0.854']]
column
['UAS', 'LAS', 'UAS', 'LAS', 'UAS', 'LAS']
['QueryParser + label refinement', 'QueryParser + word2vec', 'QueryParser + label refinement + word2vec']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>All (n=1000) || UAS</th> <th>All (n=1000) || LAS</th> <th>NoFunc (n=900) || UAS</th> <th>NoFunc (n=900) || LAS</th> <th>Func (n=100) || UAS</th> <th>Func (n=100) || LAS</th> </tr> </thead> <tbody> <tr> <td>System || Stanford</td> <td>0.694</td> <td>0.602</td> <td>0.670</td> <td>0.568</td> <td>0.834</td> <td>0.799</td> </tr> <tr> <td>System || MSTParser</td> <td>0.699</td> <td>0.616</td> <td>0.683</td> <td>0.691</td> <td>0.799</td> <td>0.766</td> </tr> <tr> <td>System || LSTMParser</td> <td>0.700</td> <td>0.608</td> <td>0.679</td> <td>0.578</td> <td>0.827</td> <td>0.790</td> </tr> <tr> <td>System || QueryParser + label refinement</td> <td>0.829</td> <td>0.769</td> <td>0.824</td> <td>0.761</td> <td>0.858</td> <td>0.818</td> </tr> <tr> <td>System || QueryParser + word2vec</td> <td>0.843</td> <td>0.788</td> <td>0.843</td> <td>0.784</td> <td>0.838</td> <td>0.812</td> </tr> <tr> <td>System || QueryParser + label refinement + word2vec</td> <td>0.862</td> <td>0.804</td> <td>0.858</td> <td>0.795</td> <td>0.883</td> <td>0.854</td> </tr> </tbody></table>
Table 6
table_6
D16-1184
9
emnlp2016
Table 6 shows the results. We use 3 versions of QueryParser. The first two use random word embedding for initialization, and the first one does not use label refinement. From the results, it can be concluded that QueryParser consistently outperformed competitors on the query parsing task. Pretrained word2vec embeddings improve performance by 3-5 percent, and the postprocess of label refinement also improves the performance by 1-2 percent. Table 6 also shows that conventional dependency parsers trained on sentence datasets rely much more on the syntactic signals in the input. While Stanford parser and MSTParser have similar performance to our parser on Func dataset, the performance drops significantly on All and NoFunc dataset, when the majority of input has no function words.
[1, 2, 1, 1, 1, 1, 1]
['Table 6 shows the results.', 'We use 3 versions of QueryParser.', 'The first two use random word embedding for initialization, and the first one does not use label refinement.', 'From the results, it can be concluded that QueryParser consistently outperformed competitors on the query parsing task.', 'Pretrained word2vec embeddings improve performance by 3-5 percent, and the postprocess of label refinement also improves the performance by 1-2 percent.', 'Table 6 also shows that conventional dependency parsers trained on sentence datasets rely much more on the syntactic signals in the input.', 'While Stanford parser and MSTParser have similar performance to our parser on Func dataset, the performance drops significantly on All and NoFunc dataset, when the majority of input has no function words.']
[None, ['QueryParser + label refinement', 'QueryParser + word2vec', 'QueryParser + label refinement + word2vec'], ['QueryParser + label refinement', 'QueryParser + word2vec', 'QueryParser + label refinement + word2vec'], ['QueryParser + label refinement', 'QueryParser + word2vec', 'QueryParser + label refinement + word2vec', 'System'], ['QueryParser + label refinement', 'QueryParser + word2vec', 'QueryParser + label refinement + word2vec'], ['Stanford', 'MSTParser', 'LSTMParser'], ['Stanford', 'MSTParser', 'Func (n=100)', 'All (n=1000)', 'NoFunc (n=900)']]
1
D16-1185table_2
Experimental results on different methods using descriptions. Contingency-based methods generally outperform summarization-based methods.
1
[['ILP-Ext (Banerjee et al. 2015)'], ['ILP-Abs (Banerjee et al. 2015)'], ['Our approach TREM'], ['w/o SR'], ['w/o CC'], ['w/o SR&CC (summarization only)']]
1
[['ROUGE-1'], ['ROUGE-2'], ['ROUGE-SU4']]
[['0.308', '0.112', '0.091'], ['0.361', '0.158', '0.12'], ['0.405', '0.207', '0.148'], ['0.393', '0.189', '0.144'], ['0.383', '0.171', '0.132'], ['0.374', '0.168', '0.129']]
column
['ROUGE-1', 'ROUGE-2', 'ROUGE-SU4']
['Our approach TREM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-1</th> <th>ROUGE-2</th> <th>ROUGE-SU4</th> </tr> </thead> <tbody> <tr> <td>ILP-Ext (Banerjee et al. 2015)</td> <td>0.308</td> <td>0.112</td> <td>0.091</td> </tr> <tr> <td>ILP-Abs (Banerjee et al. 2015)</td> <td>0.361</td> <td>0.158</td> <td>0.12</td> </tr> <tr> <td>Our approach TREM</td> <td>0.405</td> <td>0.207</td> <td>0.148</td> </tr> <tr> <td>w/o SR</td> <td>0.393</td> <td>0.189</td> <td>0.144</td> </tr> <tr> <td>w/o CC</td> <td>0.383</td> <td>0.171</td> <td>0.132</td> </tr> <tr> <td>w/o SR&amp;CC (summarization only)</td> <td>0.374</td> <td>0.168</td> <td>0.129</td> </tr> </tbody></table>
Table 2
table_2
D16-1185
7
emnlp2016
Table 2 shows our experimental results comparing TREM and baseline models using descriptions. In general, contingency-based methods (TREM, TREM w/o SR and TREM w/o CC) outperform summarization-based methods. Our contingency assumptions are verified as adding CC and SC both improve TREM with summarization component only. Moreover, the best result is achieved by the complete TREM model with both contingency factors. It suggests that these two factors, modeling word-level summarization and sentence-level reconstruction, are complementary. From the summarization-based methods, we can see that our TREM-Summ gets higher ROUGE scores than two ILP approaches. Additionally, we note that the performance of ILP-Ext is poor. This is because ILP-Ext tends to output short sentences, while ROUGE is a recall-oriented measurement.
[1, 1, 1, 1, 2, 1, 1, 2]
['Table 2 shows our experimental results comparing TREM and baseline models using descriptions.', 'In general, contingency-based methods (TREM, TREM w/o SR and TREM w/o CC) outperform summarization-based methods.', 'Our contingency assumptions are verified as adding CC and SC both improve TREM with summarization component only.', 'Moreover, the best result is achieved by the complete TREM model with both contingency factors.', 'It suggests that these two factors, modeling word-level summarization and sentence-level reconstruction, are complementary.', 'From the summarization-based methods, we can see that our TREM-Summ gets higher ROUGE scores than two ILP approaches.', 'Additionally, we note that the performance of ILP-Ext is poor.', 'This is because ILP-Ext tends to output short sentences, while ROUGE is a recall-oriented measurement.']
[None, ['Our approach TREM', 'w/o SR', 'w/o CC'], None, ['Our approach TREM'], None, ['Our approach TREM', 'ILP-Ext (Banerjee et al. 2015)', 'ILP-Abs (Banerjee et al. 2015)', 'ROUGE-1', 'ROUGE-2', 'ROUGE-SU4'], ['ILP-Ext (Banerjee et al. 2015)'], None]
1
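The record above notes that ROUGE is recall-oriented, which is why the short outputs of ILP-Ext score poorly. The sketch below computes a bare-bones ROUGE-N recall as clipped n-gram overlap divided by the number of reference n-grams; real ROUGE adds stemming, multiple references, and the skip-bigram ROUGE-SU4 variant, none of which are covered here, and the example sentences are invented.

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(candidate, reference, n=1):
    """Clipped candidate n-gram matches divided by the number of reference n-grams."""
    cand, ref = ngrams(candidate.split(), n), ngrams(reference.split(), n)
    overlap = sum((cand & ref).values())  # '&' clips each n-gram at the reference count
    return overlap / max(sum(ref.values()), 1)

reference = "the hotel room was clean and quiet"
print(rouge_n_recall("clean quiet room", reference, n=1))          # short output, low recall
print(rouge_n_recall("the hotel room was very clean and quiet", reference, n=2))
```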
D16-1187table_3
Spam and nonspam review detection results in the doctor, hotel, and restaurant review domains.
1
[['SMTL-LLR'], ['MTL-LR'], ['MTRL'], ['TSVM'], ['LR'], ['SVM'], ['PU']]
1
[['Doctor'], ['Hotel'], ['Restaurant'], ['Average']]
[['85.4%', '88.7%', '87.5%', '87.2%'], ['83.1%', '86.7%', '85.7%', '85.2%'], ['82.0%', '85.4%', '84.7%', '84.0%'], ['80.6%', '84.2%', '83.8%', '82.9%'], ['79.8%', '83.5%', '83.1%', '82.1%'], ['79.0%', '83.5%', '82.9%', '81.8%'], ['68.5%', '75.4%', '74.0%', '72.6%']]
column
['Accuracy', 'Accuracy', 'Accuracy', 'Accuracy']
['SMTL-LLR']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Doctor</th> <th>Hotel</th> <th>Restaurant</th> <th>Average</th> </tr> </thead> <tbody> <tr> <td>SMTL-LLR</td> <td>85.4%</td> <td>88.7%</td> <td>87.5%</td> <td>87.2%</td> </tr> <tr> <td>MTL-LR</td> <td>83.1%</td> <td>86.7%</td> <td>85.7%</td> <td>85.2%</td> </tr> <tr> <td>MTRL</td> <td>82.0%</td> <td>85.4%</td> <td>84.7%</td> <td>84.0%</td> </tr> <tr> <td>TSVM</td> <td>80.6%</td> <td>84.2%</td> <td>83.8%</td> <td>82.9%</td> </tr> <tr> <td>LR</td> <td>79.8%</td> <td>83.5%</td> <td>83.1%</td> <td>82.1%</td> </tr> <tr> <td>SVM</td> <td>79.0%</td> <td>83.5%</td> <td>82.9%</td> <td>81.8%</td> </tr> <tr> <td>PU</td> <td>68.5%</td> <td>75.4%</td> <td>74.0%</td> <td>72.6%</td> </tr> </tbody></table>
Table 3
table_3
D16-1187
8
emnlp2016
Table 3 reports the spam and nonspam review detection accuracy of our methods SMTL-LLR and MTL-LR against all other baseline methods. In terms of 5% significance level, the differences between SMTL-LLR and the baseline methods are considered to be statistically significant. Under symmetric multi-task learning setting, our methods SMTL-LLR and MTL-LR outperform all other baselines for identifying spam reviews from nonspam ones. MTL-LR achieves the average accuracy of 85.2% across the three domains, which is 3.1% and 3.4% better than LR and SVM trained in the single task learning setting, and 1.2% higher than MTRL. Training with a large quantity of unlabeled review data in addition to labeled ones, SMTL-LLR improves the performance of MTL-LR, and achieves the best average accuracy of 87.2% across the domains, which is 3.2% better than that of MTRL, and is 4.3% better than TSVM, a semi-supervised single task learning model. PU gives the worst performance, because learning only with partially labeled positive review data (spam) and unlabeled data may not generalize as well as other methods.
[1, 1, 1, 1, 1, 1]
['Table 3 reports the spam and nonspam review detection accuracy of our methods SMTL-LLR and MTL-LR against all other baseline methods.', 'In terms of 5% significance level, the differences between SMTL-LLR and the baseline methods are considered to be statistically significant.', 'Under symmetric multi-task learning setting, our methods SMTL-LLR and MTL-LR outperform all other baselines for identifying spam reviews from nonspam ones.', 'MTL-LR achieves the average accuracy of 85.2% across the three domains, which is 3.1% and 3.4% better than LR and SVM trained in the single task learning setting, and 1.2% higher than MTRL.', 'Training with a large quantity of unlabeled review data in addition to labeled ones, SMTL-LLR improves the performance of MTL-LR, and achieves the best average accuracy of 87.2% across the domains, which is 3.2% better than that of MTRL, and is 4.3% better than TSVM, a semi-supervised single task learning model.', 'PU gives the worst performance, because learning only with partially labeled positive review data (spam) and unlabeled data may not generalize as well as other methods.']
[None, ['SMTL-LLR', 'MTL-LR', 'MTRL', 'TSVM', 'LR', 'SVM', 'PU'], ['SMTL-LLR', 'MTL-LR', 'MTRL', 'TSVM', 'LR', 'SVM', 'PU'], ['MTL-LR', 'LR', 'SVM', 'MTRL', 'Average'], ['SMTL-LLR', 'MTL-LR', 'Average', 'MTRL', 'TSVM'], ['PU']]
1
D16-1194table_5
Evaluation of annotators' performance
2
[['Parameters', 'True-positive'], ['Parameters', 'True-negative'], ['Parameters', 'False-positive'], ['Parameters', 'False-negative'], ['Parameters', 'Precision'], ['Parameters', 'Recall'], ['Parameters', 'Accuracy'], ['Parameters', 'F-Measure']]
1
[['Expert 1'], ['Expert 2'], ['Expert 3']]
[['130', '99', '125'], ['161', '164', '166'], ['20', '51', '25'], ['14', '11', '9'], ['86.67', '66.00', '83.33'], ['90.27', '90.00', '93.28'], ['89.54', '80.92', '89.54'], ['88.43', '76.15', '88.03']]
column
['Kappa', 'Kappa', 'Kappa']
['Parameters']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Expert 1</th> <th>Expert 2</th> <th>Expert 3</th> </tr> </thead> <tbody> <tr> <td>Parameters || True-positive</td> <td>130</td> <td>99</td> <td>125</td> </tr> <tr> <td>Parameters || True-negative</td> <td>161</td> <td>164</td> <td>166</td> </tr> <tr> <td>Parameters || False-positive</td> <td>20</td> <td>51</td> <td>25</td> </tr> <tr> <td>Parameters || False-negative</td> <td>14</td> <td>11</td> <td>9</td> </tr> <tr> <td>Parameters || Precision</td> <td>86.67</td> <td>66.00</td> <td>83.33</td> </tr> <tr> <td>Parameters || Recall</td> <td>90.27</td> <td>90.00</td> <td>93.28</td> </tr> <tr> <td>Parameters || Accuracy</td> <td>89.54</td> <td>80.92</td> <td>89.54</td> </tr> <tr> <td>Parameters || F-Measure</td> <td>88.43</td> <td>76.15</td> <td>88.03</td> </tr> </tbody></table>
Table 5
table_5
D16-1194
7
emnlp2016
As Table 5 shows, the best performance of annotators is highlighted and regarded as the upper bound performance (UB) of the NLD task on our dataset. The state-of-the-art unsupervised PD system named STS (Islam and Inkpen, 2008), as well as the state-of-the-art supervised PD system named RAE (Socher et al., 2011), are utilized to generate the baselines of the NLD task. STS uses the similarity score of 0.5 as the threshold to evaluate their method in the PD task. RAE applies supervised learning to classify a pair as a true or false instance of paraphrasing. These approaches are utilized in our evaluation as baselines for the NLD task.
[1, 2, 2, 2, 2]
['As Table 5 shows, the best performance of annotators is highlighted and regarded as the upper bound performance (UB) of the NLD task on our dataset.', 'The state-of-the-art unsupervised PD system named STS (Islam and Inkpen, 2008), as well as the state-of-the-art supervised PD system named RAE (Socher et al., 2011), are utilized to generate the baselines of the NLD task.', 'STS uses the similarity score of 0.5 as the threshold to evaluate their method in the PD task.', 'RAE applies supervised learning to classify a pair as a true or false instance of paraphrasing.', 'These approaches are utilized in our evaluation as baselines for the NLD task.']
[None, None, None, None, None]
0
D16-1194table_6
Evaluation of NLDS
2
[['Method', 'UB'], ['Method', 'STS'], ['Method', 'RAE'], ['Method', 'Uni-gram'], ['Method', 'Bi-gram'], ['Method', 'Tri-gram'], ['Method', 'POS'], ['Method', 'Lexical'], ['Method', 'Flickr'], ['Method', 'NLDS']]
1
[['R (%)'], ['P (%)'], ['A (%)'], ['F1 (%)']]
[['92.38', '86.67', '89.54', '88.43'], ['100.0', '46.15', '46.15', '63.16'], ['100.0', '46.4', '46.4', '63.39'], ['11.11', '35.29', '52.8', '16.9'], ['44.44', '61.54', '64.0', '51.61'], ['50.0', '62.79', '65.6', '55.67'], ['77.78', '72.77', '78.4', '76.52'], ['85.18', '59.74', '68.8', '70.23'], ['48.96', '94.0', '74.0', '64.38'], ['80.95', '96.22', '88.8', '87.93']]
column
['R (%)', 'P (%)', 'A (%)', 'F1 (%)']
['NLDS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R (%)</th> <th>P (%)</th> <th>A (%)</th> <th>F1 (%)</th> </tr> </thead> <tbody> <tr> <td>Method || UB</td> <td>92.38</td> <td>86.67</td> <td>89.54</td> <td>88.43</td> </tr> <tr> <td>Method || STS</td> <td>100.0</td> <td>46.15</td> <td>46.15</td> <td>63.16</td> </tr> <tr> <td>Method || RAE</td> <td>100.0</td> <td>46.4</td> <td>46.4</td> <td>63.39</td> </tr> <tr> <td>Method || Uni-gram</td> <td>11.11</td> <td>35.29</td> <td>52.8</td> <td>16.9</td> </tr> <tr> <td>Method || Bi-gram</td> <td>44.44</td> <td>61.54</td> <td>64.0</td> <td>51.61</td> </tr> <tr> <td>Method || Tri-gram</td> <td>50.0</td> <td>62.79</td> <td>65.6</td> <td>55.67</td> </tr> <tr> <td>Method || POS</td> <td>77.78</td> <td>72.77</td> <td>78.4</td> <td>76.52</td> </tr> <tr> <td>Method || Lexical</td> <td>85.18</td> <td>59.74</td> <td>68.8</td> <td>70.23</td> </tr> <tr> <td>Method || Flickr</td> <td>48.96</td> <td>94.0</td> <td>74.0</td> <td>64.38</td> </tr> <tr> <td>Method || NLDS</td> <td>80.95</td> <td>96.22</td> <td>88.8</td> <td>87.93</td> </tr> </tbody></table>
Table 6
table_6
D16-1194
8
emnlp2016
To assess the importance of each feature utilized in the proposed framework, we performed a feature ablation study (Cohen and Howe, 1988) on N-gram, POS analysis, lexical analysis (GTM and WordNet), and Flickr, separately on the DStest dataset. The results are listed in Table 6. A series of cross-validation and Student’s t-tests are applied after running NLDS, STS, RAE, and UB methods on the F-measure metric. The tests reveal that the performance of NLDS is significantly better than STS and RAE, while no significant differences could be found between UB and NLDS. These results demonstrate that NLDS would represent an effective approach for NLD that is on par with annotator judgement and outperforms state-of-the-art approaches for related tasks.
[2, 1, 2, 1, 1]
['To assess the importance of each feature utilized in the proposed framework, we performed a feature ablation study (Cohen and Howe, 1988) on N-gram, POS analysis, lexical analysis (GTM and WordNet), and Flickr, separately on the DStest dataset.', 'The results are listed in Table 6.', 'A series of cross-validation and Student’s t-tests are applied after running NLDS, STS, RAE, and UB methods on the F-measure metric.', 'The tests reveal that the performance of NLDS is significantly better than STS and RAE, while no significant differences could be found between UB and NLDS.', 'These results demonstrate that NLDS would represent an effective approach for NLD that is on par with annotator judgement and outperforms state-of-the-art approaches for related tasks.']
[['Uni-gram', 'Bi-gram', 'Tri-gram', 'POS', 'Lexical', 'Flickr'], None, ['NLDS', 'STS', 'RAE', 'UB'], ['NLDS', 'STS', 'RAE', 'UB'], ['NLDS']]
1
D16-1196table_3
Results ILCI corpus (% BLEU). The reported scores are:W: word-level, WX: word-level followed by transliteration of OOV words, M: morph-level, MX: morph-level followed by transliteration of OOV morphemes, C: character-level, O: orthographic syllable. The values marked in bold indicate the best scores for the language pair.
1
[['ben-hin'], ['pan-hin'], ['kok-mar'], ['mal-tam'], ['tel-mal'], ['hin-mal'], ['mal-hin']]
1
[['W'], ['WX'], ['M'], ['MX'], ['C'], ['O']]
[['31.23', '32.79', '32.17', '32.32', '27.95', '33.46'], ['68.96', '71.71', '71.29', '71.42', '71.26', '72.51'], ['21.39', '21.90', '22.81', '22.82', '19.83', '23.53'], ['6.52', '7.01', '7.61', '7.65', '4.50', '7.86'], ['6.62', '6.94', '7.86', '7.89', '6.00', '8.51'], ['8.49', '8.77', '9.23', '9.26', '6.28', '10.45'], ['15.23', '16.26', '17.08', '17.30', '12.33', '18.50']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU']
['O']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>W</th> <th>WX</th> <th>M</th> <th>MX</th> <th>C</th> <th>O</th> </tr> </thead> <tbody> <tr> <td>ben-hin</td> <td>31.23</td> <td>32.79</td> <td>32.17</td> <td>32.32</td> <td>27.95</td> <td>33.46</td> </tr> <tr> <td>pan-hin</td> <td>68.96</td> <td>71.71</td> <td>71.29</td> <td>71.42</td> <td>71.26</td> <td>72.51</td> </tr> <tr> <td>kok-mar</td> <td>21.39</td> <td>21.90</td> <td>22.81</td> <td>22.82</td> <td>19.83</td> <td>23.53</td> </tr> <tr> <td>mal-tam</td> <td>6.52</td> <td>7.01</td> <td>7.61</td> <td>7.65</td> <td>4.50</td> <td>7.86</td> </tr> <tr> <td>tel-mal</td> <td>6.62</td> <td>6.94</td> <td>7.86</td> <td>7.89</td> <td>6.00</td> <td>8.51</td> </tr> <tr> <td>hin-mal</td> <td>8.49</td> <td>8.77</td> <td>9.23</td> <td>9.26</td> <td>6.28</td> <td>10.45</td> </tr> <tr> <td>mal-hin</td> <td>15.23</td> <td>16.26</td> <td>17.08</td> <td>17.30</td> <td>12.33</td> <td>18.50</td> </tr> </tbody></table>
Table 3
table_3
D16-1196
4
emnlp2016
Comparison of Translation Units: Table 3 compares the BLEU scores for various translation systems. The orthographic syllable level system is clearly better than all other systems. It significantly outperforms the character-level system (by 46% on an average). The system also outperforms two strong baselines which address data sparsity: (a) a word-level system with transliteration of OOV words (10% improvement), (b) a morph-level system with transliteration of OOV words (5% improvement). The OS-level representation is more beneficial when morphologically rich languages are involved in translation. Significantly, OS-level translation is also the best system for translation between languages of different language families. The Le-BLEU scores also show the same trend as BLEU scores, but we have not reported it due to space limits. There are a very small number of untranslated OSes, which we handled by simple mapping of untranslated characters from source to target script. This barely increased translation accuracy (0.02% increase in BLEU score).
[1, 1, 1, 1, 1, 1, 1, 1, 1]
['Table 3 compares the BLEU scores for various translation systems.', 'The orthographic syllable level system is clearly better than all other systems.', 'It significantly outperforms the character-level system (by 46% on an average).', 'The system also outperforms two strong baselines which address data sparsity: (a) a word-level system with transliteration of OOV words (10% improvement), (b) a morph-level system with transliteration of OOV words (5% improvement).', 'The OS-level representation is more beneficial when morphologically rich languages are involved in translation.', 'Significantly, OS-level translation is also the best system for translation between languages of different language families.', 'The Le-BLEU scores also show the same trend as BLEU scores, but we have not reported it due to space limits.', 'There are a very small number of untranslated OSes, which we handled by simple mapping of untranslated characters from source to target script.', 'This barely increased translation accuracy (0.02% increase in BLEU score).']
[None, None, None, None, None, None, None, None, None]
1
D16-1200table_3
Results for our system and other participants in the SemEval 2015 Task 4: TimeLine.
2
[['System', 'GPLSIUA 1'], ['System', 'GPLSIUA 2'], ['System', 'HeidelToul 1'], ['System', 'HeidelToul 2'], ['System', 'Our System Binary'], ['System', 'Our System Alignment']]
2
[['Airbus', 'F1'], ['GM', 'F1'], ['Stock', 'F1'], ['Total', 'P'], ['Total', 'R'], ['Total', 'F1']]
[['22.35', '19.28', '33.59', '21.73', '30.46', '25.36'], ['20.47', '16.17', '29.90', '20.08', '26.00', '22.66'], ['19.62', '7.25', '20.37', '20.11', '14.76', '17.03'], ['16.50', '10.82', '25.89', '13.58', '28.23', '18.34'], ['17.99', '20.97', '34.95', '25.97', '24.79', '25.37'], ['25.65', '26.64', '32.35', '29.05', '28.12', '28.58']]
column
['F1', 'F1', 'F1', 'P', 'R', 'F1']
['Our System Binary', 'Our System Alignment']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Airbus || F1</th> <th>GM || F1</th> <th>Stock || F1</th> <th>Total || P</th> <th>Total || R</th> <th>Total || F1</th> </tr> </thead> <tbody> <tr> <td>System || GPLSIUA 1</td> <td>22.35</td> <td>19.28</td> <td>33.59</td> <td>21.73</td> <td>30.46</td> <td>25.36</td> </tr> <tr> <td>System || GPLSIUA 2</td> <td>20.47</td> <td>16.17</td> <td>29.90</td> <td>20.08</td> <td>26.00</td> <td>22.66</td> </tr> <tr> <td>System || HeidelToul 1</td> <td>19.62</td> <td>7.25</td> <td>20.37</td> <td>20.11</td> <td>14.76</td> <td>17.03</td> </tr> <tr> <td>System || HeidelToul 2</td> <td>16.50</td> <td>10.82</td> <td>25.89</td> <td>13.58</td> <td>28.23</td> <td>18.34</td> </tr> <tr> <td>System || Our System Binary</td> <td>17.99</td> <td>20.97</td> <td>34.95</td> <td>25.97</td> <td>24.79</td> <td>25.37</td> </tr> <tr> <td>System || Our System Alignment</td> <td>25.65</td> <td>26.64</td> <td>32.35</td> <td>29.05</td> <td>28.12</td> <td>28.58</td> </tr> </tbody></table>
Table 3
table_3
D16-1200
5
emnlp2016
In Table 3 we compare the binary classification model (Our System Binary) against the alignment model (Our System Alignment) and show that the latter outperforms the former by a margin of 3.2 points in F-score, achieving a micro F1-score of 28.58 across the three test corpora, thus confirming the benefits of joint inference. The only corpus in which joint inference did not help was Stock which has on average shorter event chains per document (Minard et al., 2015) and thus renders joint anchoring less likely to be useful.
[1, 1]
['In Table 3 we compare the binary classification model (Our System Binary) against the alignment model (Our System Alignment) and show that the latter outperforms the former by a margin of 3.2 points in F-score, achieving a micro F1-score of 28.58 across the three test corpora, thus confirming the benefits of joint inference.', 'The only corpus in which joint inference did not help was Stock which has on average shorter event chains per document (Minard et al., 2015) and thus renders joint anchoring less likely to be useful.']
[['Our System Binary', 'Our System Alignment'], None]
1
D16-1204table_1
Youtube dataset: METEOR and BLEU@4 in %, and human ratings (1-5) on relevance and grammar. Best results in bold, * indicates significant over S2VT.
3
[['Model', 'S2VT', '-'], ['Model', 'Early Fusion', '-'], ['Model', 'Late Fusion', '-'], ['Model', 'Deep Fusion', '-'], ['Model', 'Glove', '-'], ['Model', 'Glove+Deep', '- Web Corpus'], ['Model', 'Glove+Deep', '- In-Domain'], ['Model', 'Ensemble', '-']]
1
[['METEOR'], ['B-4'], ['Relevance'], ['Grammar']]
[['29.2', '37.0', '2.06', '3.76'], ['29.6', '37.6', '-', '-'], ['29.4', '37.2', '-', '-'], ['29.6', '39.3', '-', '-'], ['30.0', '37.0', '-', '-'], ['30.3', '38.1', '2.12', '4.05*'], ['30.3', '38.8', '2.21*', '4.17*'], ['31.4', '42.1', '2.24*', '4.20*']]
column
['METEOR', 'B-4', 'Relevance', 'Grammar']
['Ensemble']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>METEOR</th> <th>B-4</th> <th>Relevance</th> <th>Grammar</th> </tr> </thead> <tbody> <tr> <td>Model || S2VT || -</td> <td>29.2</td> <td>37.0</td> <td>2.06</td> <td>3.76</td> </tr> <tr> <td>Model || Early Fusion || -</td> <td>29.6</td> <td>37.6</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || Late Fusion || -</td> <td>29.4</td> <td>37.2</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || Deep Fusion || -</td> <td>29.6</td> <td>39.3</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || Glove || -</td> <td>30.0</td> <td>37.0</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || Glove+Deep || - Web Corpus</td> <td>30.3</td> <td>38.1</td> <td>2.12</td> <td>4.05*</td> </tr> <tr> <td>Model || Glove+Deep || - In-Domain</td> <td>30.3</td> <td>38.8</td> <td>2.21*</td> <td>4.17*</td> </tr> <tr> <td>Model || Ensemble || -</td> <td>31.4</td> <td>42.1</td> <td>2.24*</td> <td>4.20*</td> </tr> </tbody></table>
Table 1
table_1
D16-1204
4
emnlp2016
Comparison of the proposed techniques in Table 1 shows that Deep Fusion performs well on both METEOR and BLEU, incorporating Glove embeddings substantially increases METEOR, and combining them both does best. Our final model is an ensemble (weighted average) of the Glove, and the two Glove+Deep Fusion models trained on the external and in-domain COCO (Lin et al., 2014) sentences. We note here that the state-of-the-art on this dataset is achieved by HRNE (Pan et al., 2015) (METEOR 33.1) which proposes a superior visual processing pipeline using attention to encode the video. Human ratings also correlate well with the METEOR scores, confirming that our methods give a modest improvement in descriptive quality. However, incorporating linguistic knowledge significantly improves the grammaticality of the results, making them more comprehensible to human users.
[1, 2, 2, 2, 2]
['Comparison of the proposed techniques in Table 1 shows that Deep Fusion performs well on both METEOR and BLEU, incorporating Glove embeddings substantially increases METEOR, and combining them both does best.', 'Our final model is an ensemble (weighted average) of the Glove, and the two Glove+Deep Fusion models trained on the external and in-domain COCO (Lin et al., 2014) sentences.', 'We note here that the state-of-the-art on this dataset is achieved by HRNE (Pan et al., 2015) (METEOR 33.1) which proposes a superior visual processing pipeline using attention to encode the video.', 'Human ratings also correlate well with the METEOR scores, confirming that our methods give a modest improvement in descriptive quality.', 'However, incorporating linguistic knowledge significantly improves the grammaticality of the results, making them more comprehensible to human users.']
[['Deep Fusion', 'METEOR', 'B-4', 'Glove+Deep', 'Ensemble'], ['Ensemble'], None, ['Ensemble'], None]
1
D16-1207table_2
Accuracy under cross-domain evaluation; the best result for each dataset is indicated in bold.
2
[['Train/Test', 'Dropout (beta) = 0.3'], ['Train/Test', 'Dropout (beta) = 0.5'], ['Train/Test', 'Dropout (beta) = 0.7'], ['Train/Test', 'Robust Regularization (lambda) = 10^-3'], ['Train/Test', 'Robust Regularization (lambda) = 10^-2'], ['Train/Test', 'Robust Regularization (lambda) = 10^-1'], ['Train/Test', 'Robust Regularization (lambda) = 1'], ['Train/Test', 'Dropout + Robust beta = 0.5 lambda = 10^-2']]
2
[['MR/CR', '67.5'], ['CR/MR', '61.0']]
[['71.6', '62.2'], ['71.0', '62.1'], ['70.9', '62.0'], ['70.8', '61.6'], ['71.1', '62.5'], ['72.0', '62.2'], ['71.8', '62.3'], ['72.0', '62.4']]
column
['accuracy', 'accuracy']
['Robust Regularization (lambda) = 10^-3', 'Robust Regularization (lambda) = 10^-2', 'Robust Regularization (lambda) = 10^-1', 'Robust Regularization (lambda) = 1', 'Dropout + Robust beta = 0.5 lambda = 10^-2']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MR/CR || 67.5</th> <th>CR/MR || 61.0</th> </tr> </thead> <tbody> <tr> <td>Train/Test || Dropout (beta) = 0.3</td> <td>71.6</td> <td>62.2</td> </tr> <tr> <td>Train/Test || Dropout (beta) = 0.5</td> <td>71.0</td> <td>62.1</td> </tr> <tr> <td>Train/Test || Dropout (beta) = 0.7</td> <td>70.9</td> <td>62.0</td> </tr> <tr> <td>Train/Test || Robust Regularization (lambda) = 10^-3</td> <td>70.8</td> <td>61.6</td> </tr> <tr> <td>Train/Test || Robust Regularization (lambda) = 10^-2</td> <td>71.1</td> <td>62.5</td> </tr> <tr> <td>Train/Test || Robust Regularization (lambda) = 10^-1</td> <td>72.0</td> <td>62.2</td> </tr> <tr> <td>Train/Test || Robust Regularization (lambda) = 1</td> <td>71.8</td> <td>62.3</td> </tr> <tr> <td>Train/Test || Dropout + Robust beta = 0.5 lambda = 10^-2</td> <td>72.0</td> <td>62.4</td> </tr> </tbody></table>
Table 2
table_2
D16-1207
5
emnlp2016
Table 2 presents the results of the cross-domain experiment, whereby we train a model on MR and test on CR, and vice versa, to measure the robustness of the different regularization methods in a more real-world setting. Once again, we see that our regularization method is superior to word-level dropout and the baseline CNN, and the techniques combined do very well, consistent with our findings for synthetic noise.
[1, 1]
['Table 2 presents the results of the cross-domain experiment, whereby we train a model on MR and test on CR, and vice versa, to measure the robustness of the different regularization methods in a more real-world setting.', 'Once again, we see that our regularization method is superior to word-level dropout and the baseline CNN, and the techniques combined do very well, consistent with our findings for synthetic noise.']
[None, ['Robust Regularization (lambda) = 10^-3', 'Robust Regularization (lambda) = 10^-2', 'Robust Regularization (lambda) = 10^-1', 'Robust Regularization (lambda) = 1', 'Dropout + Robust beta = 0.5 lambda = 10^-2']]
1
D16-1210table_2
Word alignment performance.
2
[['Method', 'HMM+none'], ['Method', 'HMM+sym'], ['Method', 'HMM+itg'], ['Method', 'IBM Model 4+none'], ['Method', 'IBM Model 4+sym'], ['Method', 'IBM Model 4+itg']]
2
[['Hansard Fr-En', 'F-measure'], ['Hansard Fr-En', 'AER'], ['KFTT Ja-En', 'F-measure'], ['KFTT Ja-En', 'AER'], ['BTEC Ja-En', 'F-measure'], ['BTEC Ja-En', 'AER']]
[['0.7900', '0.0646', '0.4623', '0.5377', '0.4425', '0.5575'], ['0.7923', '0.0597', '0.4678', '0.5322', '0.4534', '0.5466'], ['0.7869', '0.0629', '0.4690', '0.5310', '0.4499', '0.5501'], ['0.7780', '0.0775', '0.5379', '0.4621', '0.4454', '0.5546'], ['0.7800', '0.0693', '0.5545', '0.4455', '0.4761', '0.5239'], ['0.7791', '0.0710', '0.5613', '0.4387', '0.4809', '0.5191']]
column
['F-measure', 'AER', 'F-measure', 'AER', 'F-measure', 'AER']
['HMM+itg']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Hansard Fr-En || F-measure</th> <th>Hansard Fr-En || AER</th> <th>KFTT Ja-En || F-measure</th> <th>KFTT Ja-En || AER</th> <th>BTEC Ja-En || F-measure</th> <th>BTEC Ja-En || AER</th> </tr> </thead> <tbody> <tr> <td>Method || HMM+none</td> <td>0.7900</td> <td>0.0646</td> <td>0.4623</td> <td>0.5377</td> <td>0.4425</td> <td>0.5575</td> </tr> <tr> <td>Method || HMM+sym</td> <td>0.7923</td> <td>0.0597</td> <td>0.4678</td> <td>0.5322</td> <td>0.4534</td> <td>0.5466</td> </tr> <tr> <td>Method || HMM+itg</td> <td>0.7869</td> <td>0.0629</td> <td>0.4690</td> <td>0.5310</td> <td>0.4499</td> <td>0.5501</td> </tr> <tr> <td>Method || IBM Model 4+none</td> <td>0.7780</td> <td>0.0775</td> <td>0.5379</td> <td>0.4621</td> <td>0.4454</td> <td>0.5546</td> </tr> <tr> <td>Method || IBM Model 4+sym</td> <td>0.7800</td> <td>0.0693</td> <td>0.5545</td> <td>0.4455</td> <td>0.4761</td> <td>0.5239</td> </tr> <tr> <td>Method || IBM Model 4+itg</td> <td>0.7791</td> <td>0.0710</td> <td>0.5613</td> <td>0.4387</td> <td>0.4809</td> <td>0.5191</td> </tr> </tbody></table>
Table 2
table_2
D16-1210
4
emnlp2016
Table 2 shows the results of word alignment evaluations, where none denotes that the model has no constraint. In KFTT and BTEC Corpus, itg achieved significant improvement against sym and none on IBM Model 4 (p ≤ 0.05). However, in the Hansard Corpus, itg shows no improvement against sym. This indicates that capturing structural coherence by itg yields a significant benefit to word alignment in a linguistically different language pair such as Ja-En. For example, some function words appear more than once in both a source and target sentence, and they are not symmetrically aligned with each other, especially in regards to the Ja-En language pair. Although the baseline methods tend to be unable to align such long-distance word pairs, the proposed method can correctly catch them because itg can determine the relation of long-distance words. We discuss more details about the effectiveness of the ITG constraint in Section 4.1.
[1, 1, 1, 2, 2, 2, 0]
['Table 2 shows the results of word alignment evaluations, where none denotes that the model has no constraint.', 'In KFTT and BTEC Corpus, itg achieved significant improvement against sym and none on IBM Model 4 (p ≤ 0.05).', 'However, in the Hansard Corpus, itg shows no improvement against sym.', 'This indicates that capturing structural coherence by itg yields a significant benefit to word alignment in a linguistically different language pair such as Ja-En.', 'For example, some function words appear more than once in both a source and target sentence, and they are not symmetrically aligned with each other, especially in regards to the Ja-En language pair.', 'Although the baseline methods tend to be unable to align such long-distance word pairs, the proposed method can correctly catch them because itg can determine the relation of long-distance words.', 'We discuss more details about the effectiveness of the ITG constraint in Section 4.1.']
[None, ['HMM+itg', 'KFTT Ja-En', 'BTEC Ja-En', 'IBM Model 4+none', 'IBM Model 4+sym'], ['Hansard Fr-En', 'HMM+itg', 'IBM Model 4+itg', 'IBM Model 4+sym'], ['HMM+itg'], None, None, None]
1
D16-1220table_2
Performance on the proverb test data. ∗: significantly different from B with p < .001. #: significantly different from N with p < .001.
2
[['Features', 'B#'], ['Features', 'N*'], ['Features', 'N \\ s*'], ['Features', 'B ∪ N*']]
1
[['P'], ['R'], ['F']]
[['0.75', '0.70', '0.73'], ['0.86', '0.83', '0.85'], ['0.82', '0.87', '0.85'], ['0.87', '0.85', '0.86']]
column
['P', 'R', 'F']
['Features']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F</th> </tr> </thead> <tbody> <tr> <td>Features || B#</td> <td>0.75</td> <td>0.70</td> <td>0.73</td> </tr> <tr> <td>Features || N*</td> <td>0.86</td> <td>0.83</td> <td>0.85</td> </tr> <tr> <td>Features || N \ s*</td> <td>0.82</td> <td>0.87</td> <td>0.85</td> </tr> <tr> <td>Features || B ∪ N*</td> <td>0.87</td> <td>0.85</td> <td>0.86</td> </tr> </tbody></table>
Table 2
table_2
D16-1220
4
emnlp2016
We then evaluated the best configuration from the cross-fold validation (N \ s) and the three feature sets B, N and B ∪ N on the held-out test data. The results of this experiment reported in Table 2 are similar to the cross-fold evaluation, and in this case the contribution of N features is even more accentuated. Indeed, the absolute F1 of N and B ∪ N is slightly higher on test data, while the f-measure of B decreases slightly. This might be explained by the low-dimensionality of N, which makes it less prone to overfitting the training data. On test data, N \ s is not found to outperform N. Interestingly, N \ s is the only configuration having higher recall than precision. As shown by the feature ablation experiments, one of the main reasons for the performance difference between N and B is the ability of the former to model domain information. This finding can be further confirmed by inspecting the cases where B misclassifies metaphors that are correctly detected by N. Among these, we can find several examples including words that belong to domains often used as a metaphor source, such as “grist” (domain: “gastronomy”) in “All is grist that comes to the mill”, or “horse” (domain: “animals”) in “You can take a horse to the water, but you can’t make him drink”.
[2, 1, 1, 2, 1, 1, 2, 2, 2]
['We then evaluated the best configuration from the cross-fold validation (N \\ s) and the three feature sets B, N and B ∪ N on the held-out test data.', 'The results of this experiment reported in Table 2 are similar to the cross-fold evaluation, and in this case the contribution of N features is even more accentuated.', 'Indeed, the absolute F1 of N and B ∪ N is slightly higher on test data, while the f-measure of B decreases slightly.', 'This might be explained by the low-dimensionality of N, which makes it less prone to overfitting the training data.', 'On test data, N \\ s is not found to outperform N.', 'Interestingly, N \\ s is the only configuration having higher recall than precision.', 'As shown by the feature ablation experiments, one of the main reasons for the performance difference between N and B is the ability of the former to model domain information.', 'This finding can be further confirmed by inspecting the cases where B misclassifies metaphors that are correctly detected by N.', 'Among these, we can find several examples including words that belong to domains often used as a metaphor source, such as “grist” (domain: “gastronomy”) in “All is grist that comes to the mill”, or “horse” (domain: “animals”) in “You can take a horse to the water, but you can’t make him drink”.']
[['B#', 'N*', 'N \\ s*', 'B ∪ N*'], None, ['F', 'N*', 'B ∪ N*'], None, ['N \\ s*', 'N*'], ['R', 'P', 'N \\ s*'], None, None, None]
1