Columns:
premise: string (lengths 11 to 296)
hypothesis: string (lengths 11 to 296)
label: class label (0 classes)
idx: int32 (0 to 1.1k)
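The rows below (idx 900 to 999) follow this schema, and every row in this slice carries label -1 (no label). As a minimal sketch of how such a split could be iterated with the Hugging Face `datasets` library: the dataset identifier and split name below are placeholders, not this dataset's actual name, and it is assumed that row position lines up with the `idx` column.

```python
# Minimal sketch (not this dataset's actual loader): iterating over rows with
# the schema above using the Hugging Face `datasets` library.
from datasets import load_dataset

# "your-org/your-nli-dataset" and the split name are placeholders.
ds = load_dataset("your-org/your-nli-dataset", split="test")

# Assumes row position matches the `idx` column; the slice mirrors the
# rows shown below (idx 900 to 999).
for row in ds.select(range(900, 1000)):
    premise = row["premise"]        # string, length 11 to 296
    hypothesis = row["hypothesis"]  # string, length 11 to 296
    label = row["label"]            # -1 encodes "no label" in this slice
    idx = row["idx"]                # int32 example index
    print(f"{idx}\t{label}\t{premise} || {hypothesis}")
```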
Rows (idx 900 to 999):

idx 900 | label: -1 (no label)
premise: They then use a discriminative model to rerank the translation output using additional nonworld level features.
hypothesis: They then use a generative model to rerank the translation output using additional nonworld level features.

idx 901 | label: -1 (no label)
premise: They then use a generative model to rerank the translation output using additional nonworld level features.
hypothesis: They then use a discriminative model to rerank the translation output using additional nonworld level features.

idx 902 | label: -1 (no label)
premise: In contrast to standard MT tasks, we are dealing with a relatively low-resource setting where the sparseness of the target vocabulary is an issue.
hypothesis: Unlike in standard MT tasks, we are dealing with a relatively low-resource setting where the sparseness of the target vocabulary is an issue.

idx 903 | label: -1 (no label)
premise: Unlike in standard MT tasks, we are dealing with a relatively low-resource setting where the sparseness of the target vocabulary is an issue.
hypothesis: In contrast to standard MT tasks, we are dealing with a relatively low-resource setting where the sparseness of the target vocabulary is an issue.

idx 904 | label: -1 (no label)
premise: A distribution is then computed over these actions using a softmax function and particular actions are chosen accordingly during training and decoding.
hypothesis: Logits are then computed for these actions and particular actions are chosen according to a softmax over these logits during training and decoding.

idx 905 | label: -1 (no label)
premise: Logits are then computed for these actions and particular actions are chosen according to a softmax over these logits during training and decoding.
hypothesis: A distribution is then computed over these actions using a softmax function and particular actions are chosen accordingly during training and decoding.

idx 906 | label: -1 (no label)
premise: A distribution is then computed over these actions using a softmax function and particular actions are chosen accordingly during training and decoding.
hypothesis: A distribution is then computed over these actions using a maximum-entropy approach and particular actions are chosen accordingly during training and decoding.

idx 907 | label: -1 (no label)
premise: A distribution is then computed over these actions using a maximum-entropy approach and particular actions are chosen accordingly during training and decoding.
hypothesis: A distribution is then computed over these actions using a softmax function and particular actions are chosen accordingly during training and decoding.

idx 908 | label: -1 (no label)
premise: A distribution is then computed over these actions using a softmax function and particular actions are chosen accordingly during training and decoding.
hypothesis: A distribution is then computed over these actions using a softmax function and particular actions are chosen randomly during training and decoding.

idx 909 | label: -1 (no label)
premise: A distribution is then computed over these actions using a softmax function and particular actions are chosen randomly during training and decoding.
hypothesis: A distribution is then computed over these actions using a softmax function and particular actions are chosen accordingly during training and decoding.

idx 910 | label: -1 (no label)
premise: The systems thus produced are incremental: dialogues are processed word-by-word, shown previously to be essential in supporting natural, spontaneous dialogue.
hypothesis: The systems thus produced support the capability to interrupt an interlocutor mid-sentence.

idx 911 | label: -1 (no label)
premise: The systems thus produced support the capability to interrupt an interlocutor mid-sentence.
hypothesis: The systems thus produced are incremental: dialogues are processed word-by-word, shown previously to be essential in supporting natural, spontaneous dialogue.

idx 912 | label: -1 (no label)
premise: The systems thus produced are incremental: dialogues are processed word-by-word, shown previously to be essential in supporting natural, spontaneous dialogue.
hypothesis: The systems thus produced are incremental: dialogues are processed sentence-by-sentence, shown previously to be essential in supporting natural, spontaneous dialogue.

idx 913 | label: -1 (no label)
premise: The systems thus produced are incremental: dialogues are processed sentence-by-sentence, shown previously to be essential in supporting natural, spontaneous dialogue.
hypothesis: The systems thus produced are incremental: dialogues are processed word-by-word, shown previously to be essential in supporting natural, spontaneous dialogue.

idx 914 | label: -1 (no label)
premise: Indeed, it is often stated that for humans to learn how to perform adequately in a domain, one example is enough from which to learn.
hypothesis: Indeed, it is often stated that for humans to learn how to perform adequately in a domain, one-shot learning is sufficient.

idx 915 | label: -1 (no label)
premise: Indeed, it is often stated that for humans to learn how to perform adequately in a domain, one-shot learning is sufficient.
hypothesis: Indeed, it is often stated that for humans to learn how to perform adequately in a domain, one example is enough from which to learn.

idx 916 | label: -1 (no label)
premise: Indeed, it is often stated that for humans to learn how to perform adequately in a domain, one example is enough from which to learn.
hypothesis: Indeed, it is often stated that for humans to learn how to perform adequately in a domain, any number of examples is enough from which to learn.

idx 917 | label: -1 (no label)
premise: Indeed, it is often stated that for humans to learn how to perform adequately in a domain, any number of examples is enough from which to learn.
hypothesis: Indeed, it is often stated that for humans to learn how to perform adequately in a domain, one example is enough from which to learn.

idx 918 | label: -1 (no label)
premise: We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end NLG.
hypothesis: We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end natural language generation.

idx 919 | label: -1 (no label)
premise: We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end natural language generation.
hypothesis: We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end NLG.

idx 920 | label: -1 (no label)
premise: We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end NLG.
hypothesis: We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end natural language parsing.

idx 921 | label: -1 (no label)
premise: We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end natural language parsing.
hypothesis: We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end NLG.

idx 922 | label: -1 (no label)
premise: To assess the reliability of ratings, we calculated the intra-class correlation coefficient (ICC), which measures inter-observer reliability on ordinal data for more than two raters (Landis and Koch, 1977).
hypothesis: To assess the unreliability of ratings, we calculated the intra-class correlation coefficient (ICC), which measures inter-observer reliability on ordinal data for more than two raters (Landis and Koch, 1977).

idx 923 | label: -1 (no label)
premise: To assess the unreliability of ratings, we calculated the intra-class correlation coefficient (ICC), which measures inter-observer reliability on ordinal data for more than two raters (Landis and Koch, 1977).
hypothesis: To assess the reliability of ratings, we calculated the intra-class correlation coefficient (ICC), which measures inter-observer reliability on ordinal data for more than two raters (Landis and Koch, 1977).

idx 924 | label: -1 (no label)
premise: We also show that metric performance is data- and system-specific.
hypothesis: We also show that metric performance varies between datasets and systems.

idx 925 | label: -1 (no label)
premise: We also show that metric performance varies between datasets and systems.
hypothesis: We also show that metric performance is data- and system-specific.

idx 926 | label: -1 (no label)
premise: We also show that metric performance is data- and system-specific.
hypothesis: We also show that metric performance is constant between datasets and systems.

idx 927 | label: -1 (no label)
premise: We also show that metric performance is constant between datasets and systems.
hypothesis: We also show that metric performance is data- and system-specific.

idx 928 | label: -1 (no label)
premise: Our experiments indicate that neural systems are quite good at producing fluent outputs and generally score well on standard word-match metrics, but perform quite poorly at content selection and at capturing long-term structure.
hypothesis: Our experiments indicate that neural systems are quite good at surface-level language modeling, but perform quite poorly at capturing higher level semantics and structure.

idx 929 | label: -1 (no label)
premise: Our experiments indicate that neural systems are quite good at surface-level language modeling, but perform quite poorly at capturing higher level semantics and structure.
hypothesis: Our experiments indicate that neural systems are quite good at producing fluent outputs and generally score well on standard word-match metrics, but perform quite poorly at content selection and at capturing long-term structure.

idx 930 | label: -1 (no label)
premise: Our experiments indicate that neural systems are quite good at producing fluent outputs and generally score well on standard word-match metrics, but perform quite poorly at content selection and at capturing long-term structure.
hypothesis: Our experiments indicate that neural systems are quite good at capturing higher level semantics and structure but perform quite poorly at surface-level language modeling.

idx 931 | label: -1 (no label)
premise: Our experiments indicate that neural systems are quite good at capturing higher level semantics and structure but perform quite poorly at surface-level language modeling.
hypothesis: Our experiments indicate that neural systems are quite good at producing fluent outputs and generally score well on standard word-match metrics, but perform quite poorly at content selection and at capturing long-term structure.

idx 932 | label: -1 (no label)
premise: Reconstruction-based techniques can also be applied at the document or sentence-level during training.
hypothesis: Reconstruction-based techniques can operate on multiple scales during training.

idx 933 | label: -1 (no label)
premise: Reconstruction-based techniques can operate on multiple scales during training.
hypothesis: Reconstruction-based techniques can also be applied at the document or sentence-level during training.

idx 934 | label: -1 (no label)
premise: Reconstruction-based techniques can also be applied at the document or sentence-level during training.
hypothesis: Reconstruction-based techniques can also be applied at the document or sentence-level during test.

idx 935 | label: -1 (no label)
premise: Reconstruction-based techniques can also be applied at the document or sentence-level during test.
hypothesis: Reconstruction-based techniques can also be applied at the document or sentence-level during training.

idx 936 | label: -1 (no label)
premise: Reconstruction-based techniques can also be applied at the document or sentence-level during training.
hypothesis: Reconstruction-based techniques can only be applied at the sentence-level during training.

idx 937 | label: -1 (no label)
premise: Reconstruction-based techniques can only be applied at the sentence-level during training.
hypothesis: Reconstruction-based techniques can also be applied at the document or sentence-level during training.

idx 938 | label: -1 (no label)
premise: In practice, our proposed extractive evaluation will pick up on many errors in this passage.
hypothesis: In practice, our proposed extractive evaluation will pick up on few errors in this passage.

idx 939 | label: -1 (no label)
premise: In practice, our proposed extractive evaluation will pick up on few errors in this passage.
hypothesis: In practice, our proposed extractive evaluation will pick up on many errors in this passage.

idx 940 | label: -1 (no label)
premise: Similarly, the use of more agent-empowering verbs in female narratives decrease the odds of passing the Bechdel test.
hypothesis: Similarly, the use of more agent-empowering verbs in female narratives decrease the odds of two named women characters talking about something besides men.

idx 941 | label: -1 (no label)
premise: Similarly, the use of more agent-empowering verbs in female narratives decrease the odds of two named women characters talking about something besides men.
hypothesis: Similarly, the use of more agent-empowering verbs in female narratives decrease the odds of passing the Bechdel test.

idx 942 | label: -1 (no label)
premise: Similarly, the use of more agent-empowering verbs in female narratives decrease the odds of passing the Bechdel test.
hypothesis: Similarly, the use of more agent-empowering verbs in female narratives decrease the odds of men in the narrative talking to each other about women.

idx 943 | label: -1 (no label)
premise: Similarly, the use of more agent-empowering verbs in female narratives decrease the odds of men in the narrative talking to each other about women.
hypothesis: Similarly, the use of more agent-empowering verbs in female narratives decrease the odds of passing the Bechdel test.

idx 944 | label: -1 (no label)
premise: Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are in positions of power.
hypothesis: Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are more often in positions where they can forbid or permit actions and decisions.

idx 945 | label: -1 (no label)
premise: Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are more often in positions where they can forbid or permit actions and decisions.
hypothesis: Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are in positions of power.

idx 946 | label: -1 (no label)
premise: Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are in positions of power.
hypothesis: Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are more often in positions where they are blocked or allowed to do things by others.

idx 947 | label: -1 (no label)
premise: Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are more often in positions where they are blocked or allowed to do things by others.
hypothesis: Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are in positions of power.

idx 948 | label: -1 (no label)
premise: Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are in positions of power.
hypothesis: Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are in positions of low power.

idx 949 | label: -1 (no label)
premise: Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are in positions of low power.
hypothesis: Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are in positions of power.

idx 950 | label: -1 (no label)
premise: Looking at pictures online of people trying to take photos of mirrors they want to sell is my new thing...
hypothesis: Looking at pictures online of people trying to take photos of mirrors is my new thing...

idx 951 | label: -1 (no label)
premise: Looking at pictures online of people trying to take photos of mirrors is my new thing...
hypothesis: Looking at pictures online of people trying to take photos of mirrors they want to sell is my new thing...

idx 952 | label: -1 (no label)
premise: A serene wind rolled across the glade.
hypothesis: A tempestuous wind rolled across the glade.

idx 953 | label: -1 (no label)
premise: A tempestuous wind rolled across the glade.
hypothesis: A serene wind rolled across the glade.

idx 954 | label: -1 (no label)
premise: A serene wind rolled across the glade.
hypothesis: An easterly wind rolled across the glade.

idx 955 | label: -1 (no label)
premise: An easterly wind rolled across the glade.
hypothesis: A serene wind rolled across the glade.

idx 956 | label: -1 (no label)
premise: A serene wind rolled across the glade.
hypothesis: A calm wind rolled across the glade.

idx 957 | label: -1 (no label)
premise: A calm wind rolled across the glade.
hypothesis: A serene wind rolled across the glade.

idx 958 | label: -1 (no label)
premise: A serene wind rolled across the glade.
hypothesis: A wind rolled across the glade.

idx 959 | label: -1 (no label)
premise: A wind rolled across the glade.
hypothesis: A serene wind rolled across the glade.

idx 960 | label: -1 (no label)
premise: The reaction was strongly exothermic.
hypothesis: The reaction media got very hot.

idx 961 | label: -1 (no label)
premise: The reaction media got very hot.
hypothesis: The reaction was strongly exothermic.

idx 962 | label: -1 (no label)
premise: The reaction was strongly exothermic.
hypothesis: The reaction media got very cold.

idx 963 | label: -1 (no label)
premise: The reaction media got very cold.
hypothesis: The reaction was strongly exothermic.

idx 964 | label: -1 (no label)
premise: The reaction was strongly endothermic.
hypothesis: The reaction media got very hot.

idx 965 | label: -1 (no label)
premise: The reaction media got very hot.
hypothesis: The reaction was strongly endothermic.

idx 966 | label: -1 (no label)
premise: The reaction was strongly endothermic.
hypothesis: The reaction media got very cold.

idx 967 | label: -1 (no label)
premise: The reaction media got very cold.
hypothesis: The reaction was strongly endothermic.

idx 968 | label: -1 (no label)
premise: She didn't think I had already finished it, but I had.
hypothesis: I had already finished it.

idx 969 | label: -1 (no label)
premise: I had already finished it.
hypothesis: She didn't think I had already finished it, but I had.

idx 970 | label: -1 (no label)
premise: She didn't think I had already finished it, but I had.
hypothesis: I hadn't already finished it.

idx 971 | label: -1 (no label)
premise: I hadn't already finished it.
hypothesis: She didn't think I had already finished it, but I had.

idx 972 | label: -1 (no label)
premise: She thought I had already finished it, but I hadn't.
hypothesis: I had already finished it.

idx 973 | label: -1 (no label)
premise: I had already finished it.
hypothesis: She thought I had already finished it, but I hadn't.

idx 974 | label: -1 (no label)
premise: She thought I had already finished it, but I hadn't.
hypothesis: I hadn't already finished it.

idx 975 | label: -1 (no label)
premise: I hadn't already finished it.
hypothesis: She thought I had already finished it, but I hadn't.

idx 976 | label: -1 (no label)
premise: Temple said that the business was facing difficulties, but didn't make any specific claims.
hypothesis: Temple didn't make any specific claims.

idx 977 | label: -1 (no label)
premise: Temple didn't make any specific claims.
hypothesis: Temple said that the business was facing difficulties, but didn't make any specific claims.

idx 978 | label: -1 (no label)
premise: Temple said that the business was facing difficulties, but didn't make any specific claims.
hypothesis: The business didn't make any specific claims.

idx 979 | label: -1 (no label)
premise: The business didn't make any specific claims.
hypothesis: Temple said that the business was facing difficulties, but didn't make any specific claims.

idx 980 | label: -1 (no label)
premise: Temple said that the business was facing difficulties, but didn't have a chance of going into the red.
hypothesis: Temple didn't have a chance of going into the red.

idx 981 | label: -1 (no label)
premise: Temple didn't have a chance of going into the red.
hypothesis: Temple said that the business was facing difficulties, but didn't have a chance of going into the red.

idx 982 | label: -1 (no label)
premise: Temple said that the business was facing difficulties, but didn't have a chance of going into the red.
hypothesis: Temple said the business didn't have a chance of going into the red.

idx 983 | label: -1 (no label)
premise: Temple said the business didn't have a chance of going into the red.
hypothesis: Temple said that the business was facing difficulties, but didn't have a chance of going into the red.

idx 984 | label: -1 (no label)
premise: The profits of the businesses that focused on branding were still negative.
hypothesis: The businesses that focused on branding still had negative profits.

idx 985 | label: -1 (no label)
premise: The businesses that focused on branding still had negative profits.
hypothesis: The profits of the businesses that focused on branding were still negative.

idx 986 | label: -1 (no label)
premise: The profits of the business that was most successful were still negative.
hypothesis: The profits that focused on branding were still negative.

idx 987 | label: -1 (no label)
premise: The profits that focused on branding were still negative.
hypothesis: The profits of the business that was most successful were still negative.

idx 988 | label: -1 (no label)
premise: The profits of the businesses that were highest this quarter were still negative.
hypothesis: The businesses that were highest this quarter still had negative profits.

idx 989 | label: -1 (no label)
premise: The businesses that were highest this quarter still had negative profits.
hypothesis: The profits of the businesses that were highest this quarter were still negative.

idx 990 | label: -1 (no label)
premise: The profits of the businesses that were highest this quarter were still negative.
hypothesis: For the businesses, the profits that were highest were still negative.

idx 991 | label: -1 (no label)
premise: For the businesses, the profits that were highest were still negative.
hypothesis: The profits of the businesses that were highest this quarter were still negative.

idx 992 | label: -1 (no label)
premise: I baked him a cake.
hypothesis: I baked him.

idx 993 | label: -1 (no label)
premise: I baked him.
hypothesis: I baked him a cake.

idx 994 | label: -1 (no label)
premise: I baked him a cake.
hypothesis: I baked a cake for him.

idx 995 | label: -1 (no label)
premise: I baked a cake for him.
hypothesis: I baked him a cake.

idx 996 | label: -1 (no label)
premise: I gave him a note.
hypothesis: I gave a note to him.

idx 997 | label: -1 (no label)
premise: I gave a note to him.
hypothesis: I gave him a note.

idx 998 | label: -1 (no label)
premise: Jake broke the vase.
hypothesis: The vase broke.

idx 999 | label: -1 (no label)
premise: The vase broke.
hypothesis: Jake broke the vase.