Dataset schema (one row per column of the split):

| column | dtype | values |
| --- | --- | --- |
| premise | string | lengths 11–296 |
| hypothesis | string | lengths 11–296 |
| label | class label | 0 classes (this split is unlabeled; every row has label -1) |
| idx | int32 | 0–1.1k |
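Each record pairs a premise with a hypothesis under the schema above. As a minimal sketch, one record from this split (idx 800) can be represented as a plain Python dict; the field names and the 11–296 length bounds checked below come from the schema, while the dict layout itself is only an illustration, not any official loader format.

```python
# One record of the split as a plain dict (illustrative layout only).
# Field names follow the schema; values are the first row of the data (idx 800).
record = {
    "premise": (
        "If Charles' left wing, commanded by Nauendorf, united with "
        "Hotze's force, approaching from the east, Masséna knew Charles "
        "would attack and very likely push him out of Zürich."
    ),
    "hypothesis": (
        "If Charles' left wing, commanded by Nauendorf, united with "
        "Hotze's force, approaching from the east, Masséna would prepare "
        "for Charles to attack and very likely push him out of Zürich."
    ),
    "label": -1,  # -1 encodes "no label" (unlabeled test split)
    "idx": 800,
}

# Basic schema checks: string lengths fall inside the documented 11-296
# range, the label is the "no label" sentinel, and idx is non-negative.
assert 11 <= len(record["premise"]) <= 296
assert 11 <= len(record["hypothesis"]) <= 296
assert record["label"] == -1 and record["idx"] >= 0
```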
| premise | hypothesis | label | idx |
| --- | --- | --- | --- |
| If Charles' left wing, commanded by Nauendorf, united with Hotze's force, approaching from the east, Masséna knew Charles would attack and very likely push him out of Zürich. | If Charles' left wing, commanded by Nauendorf, united with Hotze's force, approaching from the east, Masséna would prepare for Charles to attack and very likely push him out of Zürich. | -1 (no label) | 800 |
| If Charles' left wing, commanded by Nauendorf, united with Hotze's force, approaching from the east, Masséna would prepare for Charles to attack and very likely push him out of Zürich. | If Charles' left wing, commanded by Nauendorf, united with Hotze's force, approaching from the east, Masséna knew Charles would attack and very likely push him out of Zürich. | -1 (no label) | 801 |
| Ferdinand of Naples refused to pay agreed-upon tribute to France, and his subjects followed this refusal with a rebellion. | Ferdinand of Naples refused to pay France the agreed-upon tribute, and his subjects followed this refusal with a rebellion. | -1 (no label) | 802 |
| Ferdinand of Naples refused to pay France the agreed-upon tribute, and his subjects followed this refusal with a rebellion. | Ferdinand of Naples refused to pay agreed-upon tribute to France, and his subjects followed this refusal with a rebellion. | -1 (no label) | 803 |
| Furthermore, the French dangerously underestimated Austrian tenacity and military skill. | Furthermore, the French dangerously underestimated Austrian military skill and tenacity. | -1 (no label) | 804 |
| Furthermore, the French dangerously underestimated Austrian military skill and tenacity. | Furthermore, the French dangerously underestimated Austrian tenacity and military skill. | -1 (no label) | 805 |
| Furthermore, the French dangerously underestimated Austrian tenacity and military skill. | Furthermore, the French dangerously underestimated Austrian military skill. | -1 (no label) | 806 |
| Furthermore, the French dangerously underestimated Austrian military skill. | Furthermore, the French dangerously underestimated Austrian tenacity and military skill. | -1 (no label) | 807 |
| There are four supraocular scales (above the eyes) in almost all specimens and five supraciliary scales (immediately above the eyes, below the supraoculars). | There are four supraocular scales (above the eyes) in most specimens and five supraciliary scales (immediately above the eyes, below the supraoculars). | -1 (no label) | 808 |
| There are four supraocular scales (above the eyes) in most specimens and five supraciliary scales (immediately above the eyes, below the supraoculars). | There are four supraocular scales (above the eyes) in almost all specimens and five supraciliary scales (immediately above the eyes, below the supraoculars). | -1 (no label) | 809 |
| There are four supraocular scales (above the eyes) in almost all specimens and five supraciliary scales (immediately above the eyes, below the supraoculars). | There are four scales above the eyes in almost all specimens and five supraciliary scales (immediately above the eyes, below the supraoculars). | -1 (no label) | 810 |
| There are four scales above the eyes in almost all specimens and five supraciliary scales (immediately above the eyes, below the supraoculars). | There are four supraocular scales (above the eyes) in almost all specimens and five supraciliary scales (immediately above the eyes, below the supraoculars). | -1 (no label) | 811 |
| All 860 officers and men on board, including Spee, went down with the ship. | Spee went down with the ship. | -1 (no label) | 812 |
| Spee went down with the ship. | All 860 officers and men on board, including Spee, went down with the ship. | -1 (no label) | 813 |
| Regional governors could not rely on the king for help in times of crisis, and the ensuing food shortages and political disputes escalated into famines and small-scale civil wars. | Regional governors could not rely on anyone for help in times of crisis, and the ensuing food shortages and political disputes escalated into famines and small-scale civil wars. | -1 (no label) | 814 |
| Regional governors could not rely on anyone for help in times of crisis, and the ensuing food shortages and political disputes escalated into famines and small-scale civil wars. | Regional governors could not rely on the king for help in times of crisis, and the ensuing food shortages and political disputes escalated into famines and small-scale civil wars. | -1 (no label) | 815 |
| The pharaohs of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | The pharaohs of the Middle Kingdom of Egypt restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | -1 (no label) | 816 |
| The pharaohs of the Middle Kingdom of Egypt restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | The pharaohs of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | -1 (no label) | 817 |
| The pharaohs of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | The pharaohs of the Middle Kingdom of China restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | -1 (no label) | 818 |
| The pharaohs of the Middle Kingdom of China restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | The pharaohs of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | -1 (no label) | 819 |
| The pharaohs of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | The pharaohs of Egypt restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | -1 (no label) | 820 |
| The pharaohs of Egypt restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | The pharaohs of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | -1 (no label) | 821 |
| The 15th Tank Corps was a tank corps of the Soviet Union's Red Army. | The 15th Tank Corps was a corps of the Soviet Union's Red Army. | -1 (no label) | 822 |
| The 15th Tank Corps was a corps of the Soviet Union's Red Army. | The 15th Tank Corps was a tank corps of the Soviet Union's Red Army. | -1 (no label) | 823 |
| I can't believe it's not butter. | It's not butter. | -1 (no label) | 824 |
| It's not butter. | I can't believe it's not butter. | -1 (no label) | 825 |
| I can't believe it's not butter. | It's butter. | -1 (no label) | 826 |
| It's butter. | I can't believe it's not butter. | -1 (no label) | 827 |
| However, these regularities are sometimes obscured by semantic and syntactic differences. | However, these regularities are always obscured by semantic and syntactic differences. | -1 (no label) | 828 |
| However, these regularities are always obscured by semantic and syntactic differences. | However, these regularities are sometimes obscured by semantic and syntactic differences. | -1 (no label) | 829 |
| However, these regularities are sometimes obscured by semantic and syntactic differences. | However, these regularities are sometimes obscured by syntactic differences. | -1 (no label) | 830 |
| However, these regularities are sometimes obscured by syntactic differences. | However, these regularities are sometimes obscured by semantic and syntactic differences. | -1 (no label) | 831 |
| In grounded communication tasks, speakers face pressures in choosing referential expressions that distinguish their targets from others in the context, leading to many kinds of pragmatic meaning enrichment. | In grounded communication tasks, speakers face pressures in choosing referential expressions that distinguish their targets from others in the context, leading to many kinds of meaning enrichment. | -1 (no label) | 832 |
| In grounded communication tasks, speakers face pressures in choosing referential expressions that distinguish their targets from others in the context, leading to many kinds of meaning enrichment. | In grounded communication tasks, speakers face pressures in choosing referential expressions that distinguish their targets from others in the context, leading to many kinds of pragmatic meaning enrichment. | -1 (no label) | 833 |
| Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the others. | Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the other colors. | -1 (no label) | 834 |
| Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the other colors. | Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the others. | -1 (no label) | 835 |
| Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the others. | Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the other utterances. | -1 (no label) | 836 |
| Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the other utterances. | Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the others. | -1 (no label) | 837 |
| While most successful approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences. | While most approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences. | -1 (no label) | 838 |
| While most approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences. | While most successful approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences. | -1 (no label) | 839 |
| Due to the structure and short length of most Wikipedia documents (median number of sentences: 9), the answer can usually be inferred from the first few sentences. | Due to the structure and short length of most Wikipedia documents (median number of sentences: 9), the answer can always be inferred from the first few sentences. | -1 (no label) | 840 |
| Due to the structure and short length of most Wikipedia documents (median number of sentences: 9), the answer can always be inferred from the first few sentences. | Due to the structure and short length of most Wikipedia documents (median number of sentences: 9), the answer can usually be inferred from the first few sentences. | -1 (no label) | 841 |
| Each captures only a single aspect of coherence, and all focus on scoring existing sentences, rather than on generating coherent discourse for tasks like abstractive summarization. | Each captures only a single aspect of coherence and focuses on scoring existing sentences, rather than on generating coherent discourse for tasks like abstractive summarization. | -1 (no label) | 842 |
| Each captures only a single aspect of coherence and focuses on scoring existing sentences, rather than on generating coherent discourse for tasks like abstractive summarization. | Each captures only a single aspect of coherence, and all focus on scoring existing sentences, rather than on generating coherent discourse for tasks like abstractive summarization. | -1 (no label) | 843 |
| In a coherent context, a machine should be able to guess the next utterance given the preceding ones. | In a coherent context, a machine can guess the next utterance given the preceding ones. | -1 (no label) | 844 |
| In a coherent context, a machine can guess the next utterance given the preceding ones. | In a coherent context, a machine should be able to guess the next utterance given the preceding ones. | -1 (no label) | 845 |
| We thus propose eliminating the influence of the language model, which yields the following coherence score. | The language model yields the following coherence score. | -1 (no label) | 846 |
| The language model yields the following coherence score. | We thus propose eliminating the influence of the language model, which yields the following coherence score. | -1 (no label) | 847 |
| We thus propose eliminating the influence of the language model, which yields the following coherence score. | Eliminating the influence of the language model yields the following coherence score. | -1 (no label) | 848 |
| Eliminating the influence of the language model yields the following coherence score. | We thus propose eliminating the influence of the language model, which yields the following coherence score. | -1 (no label) | 849 |
| The topic for the current sentence is drawn based on the topic of the preceding sentence (or word) rather than on the global document-level topic distribution in vanilla LDA. | The topic for the current sentence is drawn based on the global document-level topic distribution in vanilla LDA. | -1 (no label) | 850 |
| The topic for the current sentence is drawn based on the global document-level topic distribution in vanilla LDA. | The topic for the current sentence is drawn based on the topic of the preceding sentence (or word) rather than on the global document-level topic distribution in vanilla LDA. | -1 (no label) | 851 |
| The topic for the current sentence is drawn based on the topic of the preceding sentence (or word) rather than on the global document-level topic distribution in vanilla LDA. | The topic for the current sentence is drawn based on the topic of the preceding sentence (or word). | -1 (no label) | 852 |
| The topic for the current sentence is drawn based on the topic of the preceding sentence (or word). | The topic for the current sentence is drawn based on the topic of the preceding sentence (or word) rather than on the global document-level topic distribution in vanilla LDA. | -1 (no label) | 853 |
| We publicly share our dataset and code for future research. | We publicly share our dataset for future research. | -1 (no label) | 854 |
| We publicly share our dataset for future research. | We publicly share our dataset and code for future research. | -1 (no label) | 855 |
| We publicly share our dataset and code for future research. | We code for future research. | -1 (no label) | 856 |
| We code for future research. | We publicly share our dataset and code for future research. | -1 (no label) | 857 |
| This gives the model a sense of the implied action dynamics of the verb between the agent and the world. | This gives to the model a sense of the implied action dynamics of the verb between the agent and the world. | -1 (no label) | 858 |
| This gives to the model a sense of the implied action dynamics of the verb between the agent and the world. | This gives the model a sense of the implied action dynamics of the verb between the agent and the world. | -1 (no label) | 859 |
| This gives the model a sense of the implied action dynamics of the verb between the agent and the world. | This gives the model to a sense of the implied action dynamics of the verb between the agent and the world. | -1 (no label) | 860 |
| This gives the model to a sense of the implied action dynamics of the verb between the agent and the world. | This gives the model a sense of the implied action dynamics of the verb between the agent and the world. | -1 (no label) | 861 |
| This attribute group specifies prominent body parts involved in carrying out the action. | This attribute group specifies prominent limbs involved in carrying out the action. | -1 (no label) | 862 |
| This attribute group specifies prominent limbs involved in carrying out the action. | This attribute group specifies prominent body parts involved in carrying out the action. | -1 (no label) | 863 |
| This problem has been studied before for zero-shot object recognition, but there are several key differences. | This problem has been previously studied for zero-shot object recognition, but there are several key differences. | -1 (no label) | 864 |
| This problem has been previously studied for zero-shot object recognition, but there are several key differences. | This problem has been studied before for zero-shot object recognition, but there are several key differences. | -1 (no label) | 865 |
| This problem has been studied before for zero-shot object recognition, but there are several key differences. | This problem will be studied for zero-shot object recognition, but there are several key differences. | -1 (no label) | 866 |
| This problem will be studied for zero-shot object recognition, but there are several key differences. | This problem has been studied before for zero-shot object recognition, but there are several key differences. | -1 (no label) | 867 |
| Understanding a long document requires tracking how entities are introduced and evolve over time. | Understanding a long document requires evolving over time. | -1 (no label) | 868 |
| Understanding a long document requires evolving over time. | Understanding a long document requires tracking how entities are introduced and evolve over time. | -1 (no label) | 869 |
| Understanding a long document requires tracking how entities are introduced and evolve over time. | Understanding a long document requires tracking how entities evolve over time. | -1 (no label) | 870 |
| Understanding a long document requires tracking how entities evolve over time. | Understanding a long document requires tracking how entities are introduced and evolve over time. | -1 (no label) | 871 |
| Understanding a long document requires tracking how entities are introduced and evolve over time. | Understanding a long document requires understanding how entities are introduced. | -1 (no label) | 872 |
| Understanding a long document requires understanding how entities are introduced. | Understanding a long document requires tracking how entities are introduced and evolve over time. | -1 (no label) | 873 |
| We do not assume that these variables are observed at test time. | These variables are not observed at test time. | -1 (no label) | 874 |
| These variables are not observed at test time. | We do not assume that these variables are observed at test time. | -1 (no label) | 875 |
| To compute the perplexity numbers on the test data, our model only takes account of log probabilities on word prediction. | To compute the perplexity numbers on the test data, our model doesn't take account of anything other than the log probabilities on word prediction. | -1 (no label) | 876 |
| To compute the perplexity numbers on the test data, our model doesn't take account of anything other than the log probabilities on word prediction. | To compute the perplexity numbers on the test data, our model only takes account of log probabilities on word prediction. | -1 (no label) | 877 |
| We also experiment with the option to either use the pretrained GloVe word embeddings or randomly initialized word embeddings (then updated during training). | We experiment with the option using randomly initialized word embeddings (then updated during training). | -1 (no label) | 878 |
| We experiment with the option using randomly initialized word embeddings (then updated during training). | We also experiment with the option to either use the pretrained GloVe word embeddings or randomly initialized word embeddings (then updated during training). | -1 (no label) | 879 |
| The entity prediction task requires predicting xxxx given the preceding text either by choosing a previously mentioned entity or deciding that this is a “new entity”. | The entity prediction task requires predicting xxxx given the preceding text by choosing a previously mentioned entity. | -1 (no label) | 880 |
| The entity prediction task requires predicting xxxx given the preceding text by choosing a previously mentioned entity. | The entity prediction task requires predicting xxxx given the preceding text either by choosing a previously mentioned entity or deciding that this is a “new entity”. | -1 (no label) | 881 |
| So there is no dedicated memory block for every entity and no distinction between entity mentions and non-mention words. | So there is no dedicated high-dimensional memory block for every entity and no distinction between entity mentions and non-mention words. | -1 (no label) | 882 |
| So there is no dedicated high-dimensional memory block for every entity and no distinction between entity mentions and non-mention words. | So there is no dedicated memory block for every entity and no distinction between entity mentions and non-mention words. | -1 (no label) | 883 |
| Our approach complements these previous methods. | Our approach complements some previous methods. | -1 (no label) | 884 |
| Our approach complements some previous methods. | Our approach complements these previous methods. | -1 (no label) | 885 |
| We manually annotated 687 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query. | We manually annotated over 650 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query. | -1 (no label) | 886 |
| We manually annotated over 650 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query. | We manually annotated 687 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query. | -1 (no label) | 887 |
| We manually annotated 687 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query. | We manually annotated over 690 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query. | -1 (no label) | 888 |
| We manually annotated over 690 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query. | We manually annotated 687 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query. | -1 (no label) | 889 |
| To generate diversity, workers got a bonus if the edit distance of a paraphrase was high compared to the MG question. | To generate diversity, workers whose paraphrases had high edit distance compared to the MG question got a bonus. | -1 (no label) | 890 |
| To generate diversity, workers whose paraphrases had high edit distance compared to the MG question got a bonus. | To generate diversity, workers got a bonus if the edit distance of a paraphrase was high compared to the MG question. | -1 (no label) | 891 |
| To generate diversity, workers got a bonus if the edit distance of a paraphrase was high compared to the MG question. | To generate diversity, workers got a bonus if the edit distance of a paraphrase was above 3 operations compared to the MG question. | -1 (no label) | 892 |
| To generate diversity, workers got a bonus if the edit distance of a paraphrase was above 3 operations compared to the MG question. | To generate diversity, workers got a bonus if the edit distance of a paraphrase was high compared to the MG question. | -1 (no label) | 893 |
| To generate complex questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase. | To generate simple questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase. | -1 (no label) | 894 |
| To generate simple questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase. | To generate complex questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase. | -1 (no label) | 895 |
| To generate complex questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase. | To generate highly compositional questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase. | -1 (no label) | 896 |
| To generate highly compositional questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase. | To generate complex questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase. | -1 (no label) | 897 |
| In this paper, we explore the idea of polyglot semantic translation, or learning semantic parsing models that are trained on multiple datasets and natural languages. | In this paper, we explore the idea of learning semantic parsing models that are trained on multiple datasets and natural languages. | -1 (no label) | 898 |
| In this paper, we explore the idea of learning semantic parsing models that are trained on multiple datasets and natural languages. | In this paper, we explore the idea of polyglot semantic translation, or learning semantic parsing models that are trained on multiple datasets and natural languages. | -1 (no label) | 899 |