Columns:
  premise      string (lengths 11–296)
  hypothesis   string (lengths 11–296)
  label        string (1 class)
  idx          int32 (0–1.1k)
  __hfsplit__  string (1 class)
  __rowid__    string (length 32)
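The record layout implied by the columns above can be sketched as a small Python structure. This is only an illustration: the `NLIRecord` class and its field names are assumptions, not part of the dataset; the example values are copied from the first data row below, and the sanity checks mirror the column statistics (string lengths 11–296, 32-character row ids).

```python
# Minimal sketch of one record, assuming the six-column layout above.
# The dataclass is hypothetical; values come from the first row of the dump.
from dataclasses import dataclass


@dataclass
class NLIRecord:
    premise: str     # string, length 11-296
    hypothesis: str  # string, length 11-296
    label: str       # single class in this dump: "contradiction"
    idx: int         # int32, 0-1.1k
    hfsplit: str     # single class in this dump: "test"
    rowid: str       # 32-character hex id


row = NLIRecord(
    premise=("If Charles' left wing, commanded by Nauendorf, united with "
             "Hotze's force, approaching from the east, Masséna knew Charles "
             "would attack and very likely push him out of Zürich."),
    hypothesis=("If Charles' left wing, commanded by Nauendorf, united with "
                "Hotze's force, approaching from the east, Masséna would "
                "prepare for Charles to attack and very likely push him out "
                "of Zürich."),
    label="contradiction",
    idx=800,
    hfsplit="test",
    rowid="744e40d611ea43719c56a71a755e8b4a",
)

# Sanity checks mirroring the column statistics.
assert 11 <= len(row.premise) <= 296
assert 11 <= len(row.hypothesis) <= 296
assert len(row.rowid) == 32
print(row.label, row.idx)
```

Each six-line record below follows the same column order: premise, hypothesis, label, idx, split, row id.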
If Charles' left wing, commanded by Nauendorf, united with Hotze's force, approaching from the east, Masséna knew Charles would attack and very likely push him out of Zürich.
If Charles' left wing, commanded by Nauendorf, united with Hotze's force, approaching from the east, Masséna would prepare for Charles to attack and very likely push him out of Zürich.
contradiction
800
test
744e40d611ea43719c56a71a755e8b4a
If Charles' left wing, commanded by Nauendorf, united with Hotze's force, approaching from the east, Masséna would prepare for Charles to attack and very likely push him out of Zürich.
If Charles' left wing, commanded by Nauendorf, united with Hotze's force, approaching from the east, Masséna knew Charles would attack and very likely push him out of Zürich.
contradiction
801
test
f0ab4ea9536d443893f8148799ab4be1
Ferdinand of Naples refused to pay agreed-upon tribute to France, and his subjects followed this refusal with a rebellion.
Ferdinand of Naples refused to pay France the agreed-upon tribute, and his subjects followed this refusal with a rebellion.
contradiction
802
test
2591bd3f5e804c68b77cd4dc5630a0ac
Ferdinand of Naples refused to pay France the agreed-upon tribute, and his subjects followed this refusal with a rebellion.
Ferdinand of Naples refused to pay agreed-upon tribute to France, and his subjects followed this refusal with a rebellion.
contradiction
803
test
33c9548b0dff42de972f997aebc12f2a
Furthermore, the French dangerously underestimated Austrian tenacity and military skill.
Furthermore, the French dangerously underestimated Austrian military skill and tenacity.
contradiction
804
test
6c558388b93c4cc6acaaf50c008c2151
Furthermore, the French dangerously underestimated Austrian military skill and tenacity.
Furthermore, the French dangerously underestimated Austrian tenacity and military skill.
contradiction
805
test
b22e7b345b744e968f2dccae1a5efd6f
Furthermore, the French dangerously underestimated Austrian tenacity and military skill.
Furthermore, the French dangerously underestimated Austrian military skill.
contradiction
806
test
d02e79e7156049ec922ea235b2618e46
Furthermore, the French dangerously underestimated Austrian military skill.
Furthermore, the French dangerously underestimated Austrian tenacity and military skill.
contradiction
807
test
e121be401a5d4060894b15641dea0d9e
There are four supraocular scales (above the eyes) in almost all specimens and five supraciliary scales (immediately above the eyes, below the supraoculars).
There are four supraocular scales (above the eyes) in most specimens and five supraciliary scales (immediately above the eyes, below the supraoculars).
contradiction
808
test
51a891d53f4f4525a95c08185bfd82df
There are four supraocular scales (above the eyes) in most specimens and five supraciliary scales (immediately above the eyes, below the supraoculars).
There are four supraocular scales (above the eyes) in almost all specimens and five supraciliary scales (immediately above the eyes, below the supraoculars).
contradiction
809
test
94d652234459497d9b43d7c324a7fc12
There are four supraocular scales (above the eyes) in almost all specimens and five supraciliary scales (immediately above the eyes, below the supraoculars).
There are four scales above the eyes in almost all specimens and five supraciliary scales (immediately above the eyes, below the supraoculars).
contradiction
810
test
eccfd00871394c35961d319f7a4ec6c3
There are four scales above the eyes in almost all specimens and five supraciliary scales (immediately above the eyes, below the supraoculars).
There are four supraocular scales (above the eyes) in almost all specimens and five supraciliary scales (immediately above the eyes, below the supraoculars).
contradiction
811
test
cd2361163453481eb8646dc92c0383e0
All 860 officers and men on board, including Spee, went down with the ship.
Spee went down with the ship.
contradiction
812
test
86b9826d915d43e187d944d3aab469aa
Spee went down with the ship.
All 860 officers and men on board, including Spee, went down with the ship.
contradiction
813
test
ae85e10174dd44e394348fac6a2595b2
Regional governors could not rely on the king for help in times of crisis, and the ensuing food shortages and political disputes escalated into famines and small-scale civil wars.
Regional governors could not rely on anyone for help in times of crisis, and the ensuing food shortages and political disputes escalated into famines and small-scale civil wars.
contradiction
814
test
9a01a3f3f06046df9b9b14ed6dc30a0e
Regional governors could not rely on anyone for help in times of crisis, and the ensuing food shortages and political disputes escalated into famines and small-scale civil wars.
Regional governors could not rely on the king for help in times of crisis, and the ensuing food shortages and political disputes escalated into famines and small-scale civil wars.
contradiction
815
test
ba103bd8ff354600a604d107aaf5a207
The pharaohs of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects.
The pharaohs of the Middle Kingdom of Egypt restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects.
contradiction
816
test
441548393a0c4f1799225bbc7b2c857c
The pharaohs of the Middle Kingdom of Egypt restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects.
The pharaohs of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects.
contradiction
817
test
e5a618ca012046f28586e07e93e16b6b
The pharaohs of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects.
The pharaohs of the Middle Kingdom of China restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects.
contradiction
818
test
7511e144403e4aeaa214bc3b387d400f
The pharaohs of the Middle Kingdom of China restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects.
The pharaohs of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects.
contradiction
819
test
591ad070f23b41bfbf30f1035ae439ec
The pharaohs of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects.
The pharaohs of Egypt restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects.
contradiction
820
test
310439439ca4478fbf28fa72783ecf34
The pharaohs of Egypt restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects.
The pharaohs of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects.
contradiction
821
test
b458cc50da244c48b045e66ef0f0b83a
The 15th Tank Corps was a tank corps of the Soviet Union's Red Army.
The 15th Tank Corps was a corps of the Soviet Union's Red Army.
contradiction
822
test
d21a50d2abbd495eb4f7bbab51f9431e
The 15th Tank Corps was a corps of the Soviet Union's Red Army.
The 15th Tank Corps was a tank corps of the Soviet Union's Red Army.
contradiction
823
test
25feed46ba3846efa7586a7c01ff1ace
I can't believe it's not butter.
It's not butter.
contradiction
824
test
e7097a6f736449cc98abfba29949311d
It's not butter.
I can't believe it's not butter.
contradiction
825
test
e1a6caa6324c4165b93ed04451732415
I can't believe it's not butter.
It's butter.
contradiction
826
test
54c86d5e71a2446ba00e7f13d326df2a
It's butter.
I can't believe it's not butter.
contradiction
827
test
3db31b6bd8894bd58e0d9fe4b951e46f
However, these regularities are sometimes obscured by semantic and syntactic differences.
However, these regularities are always obscured by semantic and syntactic differences.
contradiction
828
test
87b2f5d43ca64a4f992e1d648c739336
However, these regularities are always obscured by semantic and syntactic differences.
However, these regularities are sometimes obscured by semantic and syntactic differences.
contradiction
829
test
ecec3c378a8e4367a3836236892b50ac
However, these regularities are sometimes obscured by semantic and syntactic differences.
However, these regularities are sometimes obscured by syntactic differences.
contradiction
830
test
2a0784487ede4ddfb762c6a629e5e377
However, these regularities are sometimes obscured by syntactic differences.
However, these regularities are sometimes obscured by semantic and syntactic differences.
contradiction
831
test
f14debcf40ca4b9196325a5007f30ef1
In grounded communication tasks, speakers face pressures in choosing referential expressions that distinguish their targets from others in the context, leading to many kinds of pragmatic meaning enrichment.
In grounded communication tasks, speakers face pressures in choosing referential expressions that distinguish their targets from others in the context, leading to many kinds of meaning enrichment.
contradiction
832
test
204b76340d7e44bf8eaaf3e4563bfdf0
In grounded communication tasks, speakers face pressures in choosing referential expressions that distinguish their targets from others in the context, leading to many kinds of meaning enrichment.
In grounded communication tasks, speakers face pressures in choosing referential expressions that distinguish their targets from others in the context, leading to many kinds of pragmatic meaning enrichment.
contradiction
833
test
032ff1d1a31c413eb172389e6e70b07f
Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the others.
Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the other colors.
contradiction
834
test
63ee3166ed9045cf8b0fd93ce4e08ab1
Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the other colors.
Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the others.
contradiction
835
test
26cc58adbfbc40a3ae845dfdff2144fe
Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the others.
Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the other utterances.
contradiction
836
test
c5b2f27ce8d5466ea647aa4194309925
Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the other utterances.
Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the others.
contradiction
837
test
a4ef345297c74549b27b60a8a8d76b7d
While most successful approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences.
While most approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences.
contradiction
838
test
b1a40efa2bd94d8886dcd9478c399043
While most approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences.
While most successful approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences.
contradiction
839
test
5581e91374ec49f6ae494a77c96b1a3f
Due to the structure and short length of most Wikipedia documents (median number of sentences: 9), the answer can usually be inferred from the first few sentences.
Due to the structure and short length of most Wikipedia documents (median number of sentences: 9), the answer can always be inferred from the first few sentences.
contradiction
840
test
44e0d7d35e9f487693944dbcf3968f90
Due to the structure and short length of most Wikipedia documents (median number of sentences: 9), the answer can always be inferred from the first few sentences.
Due to the structure and short length of most Wikipedia documents (median number of sentences: 9), the answer can usually be inferred from the first few sentences.
contradiction
841
test
c4e8728c318f4692848bdb94aeab639a
Each captures only a single aspect of coherence, and all focus on scoring existing sentences, rather than on generating coherent discourse for tasks like abstractive summarization.
Each captures only a single aspect of coherence and focuses on scoring existing sentences, rather than on generating coherent discourse for tasks like abstractive summarization.
contradiction
842
test
05aed5898ccb4dfca285b5d0f999e903
Each captures only a single aspect of coherence and focuses on scoring existing sentences, rather than on generating coherent discourse for tasks like abstractive summarization.
Each captures only a single aspect of coherence, and all focus on scoring existing sentences, rather than on generating coherent discourse for tasks like abstractive summarization.
contradiction
843
test
ce3014f92616498dace9db3f355f7321
In a coherent context, a machine should be able to guess the next utterance given the preceding ones.
In a coherent context, a machine can guess the next utterance given the preceding ones.
contradiction
844
test
28c8ca9e5f604cf59e22de4667bc6634
In a coherent context, a machine can guess the next utterance given the preceding ones.
In a coherent context, a machine should be able to guess the next utterance given the preceding ones.
contradiction
845
test
daee1c9c41f94844ac8d67c1eb21ca29
We thus propose eliminating the influence of the language model, which yields the following coherence score.
The language model yields the following coherence score.
contradiction
846
test
c689f1d9a2dd4a7ca597e68e3bfe2a2c
The language model yields the following coherence score.
We thus propose eliminating the influence of the language model, which yields the following coherence score.
contradiction
847
test
262a7fa5415543b5bb4f99d3f90a9e98
We thus propose eliminating the influence of the language model, which yields the following coherence score.
Eliminating the influence of the language model yields the following coherence score.
contradiction
848
test
c8f12cad7c924471b001ada63ebb18f8
Eliminating the influence of the language model yields the following coherence score.
We thus propose eliminating the influence of the language model, which yields the following coherence score.
contradiction
849
test
6f4a87203fc948f78770cafcebbf2c1b
The topic for the current sentence is drawn based on the topic of the preceding sentence (or word) rather than on the global document-level topic distribution in vanilla LDA.
The topic for the current sentence is drawn based on the global document-level topic distribution in vanilla LDA.
contradiction
850
test
51941f88a73142f29da77566bdb279eb
The topic for the current sentence is drawn based on the global document-level topic distribution in vanilla LDA.
The topic for the current sentence is drawn based on the topic of the preceding sentence (or word) rather than on the global document-level topic distribution in vanilla LDA.
contradiction
851
test
f2c4843e5ad341cbadbdbf2630256fce
The topic for the current sentence is drawn based on the topic of the preceding sentence (or word) rather than on the global document-level topic distribution in vanilla LDA.
The topic for the current sentence is drawn based on the topic of the preceding sentence (or word).
contradiction
852
test
3129d7ddc6e142deb086f7b74dea6661
The topic for the current sentence is drawn based on the topic of the preceding sentence (or word).
The topic for the current sentence is drawn based on the topic of the preceding sentence (or word) rather than on the global document-level topic distribution in vanilla LDA.
contradiction
853
test
8630d25a2d594e8aa2eae71c25a995c7
We publicly share our dataset and code for future research.
We publicly share our dataset for future research.
contradiction
854
test
cc48502dea10401f810138264d9bc3fa
We publicly share our dataset for future research.
We publicly share our dataset and code for future research.
contradiction
855
test
55eac0aacf02419f806500be026cc61a
We publicly share our dataset and code for future research.
We code for future research.
contradiction
856
test
7df8cf661ac34e5d8b90ad78a80a5bc0
We code for future research.
We publicly share our dataset and code for future research.
contradiction
857
test
3bb6c576fa8d4688b1927ab6e16f8458
This gives the model a sense of the implied action dynamics of the verb between the agent and the world.
This gives to the model a sense of the implied action dynamics of the verb between the agent and the world.
contradiction
858
test
6bac2d59d82b44b7a3136b192f0f33ae
This gives to the model a sense of the implied action dynamics of the verb between the agent and the world.
This gives the model a sense of the implied action dynamics of the verb between the agent and the world.
contradiction
859
test
24b4358565b8464c876957f6655fbf2e
This gives the model a sense of the implied action dynamics of the verb between the agent and the world.
This gives the model to a sense of the implied action dynamics of the verb between the agent and the world.
contradiction
860
test
37bd1dff9e4c4e36b583ba7ba63a1b2e
This gives the model to a sense of the implied action dynamics of the verb between the agent and the world.
This gives the model a sense of the implied action dynamics of the verb between the agent and the world.
contradiction
861
test
98d5dd21643d4bc08fe1941585bfb0fd
This attribute group specifies prominent body parts involved in carrying out the action.
This attribute group specifies prominent limbs involved in carrying out the action.
contradiction
862
test
402ed689150e4e59b86b8ae9ef5b25cc
This attribute group specifies prominent limbs involved in carrying out the action.
This attribute group specifies prominent body parts involved in carrying out the action.
contradiction
863
test
8a5c3f8383004248a31142d18497fc5d
This problem has been studied before for zero-shot object recognition, but there are several key differences.
This problem has been previously studied for zero-shot object recognition, but there are several key differences.
contradiction
864
test
ef6162de6b7d4d27b8cde347d878289b
This problem has been previously studied for zero-shot object recognition, but there are several key differences.
This problem has been studied before for zero-shot object recognition, but there are several key differences.
contradiction
865
test
72a0ebb1b97943cf97f4b09ab07a3567
This problem has been studied before for zero-shot object recognition, but there are several key differences.
This problem will be studied for zero-shot object recognition, but there are several key differences.
contradiction
866
test
6d47f17f63b343bcaff8243951304e9b
This problem will be studied for zero-shot object recognition, but there are several key differences.
This problem has been studied before for zero-shot object recognition, but there are several key differences.
contradiction
867
test
c06630ba06e5407c81dd841483af6587
Understanding a long document requires tracking how entities are introduced and evolve over time.
Understanding a long document requires evolving over time.
contradiction
868
test
5f44c66b2e8a43e1b62af7f5b234fdf3
Understanding a long document requires evolving over time.
Understanding a long document requires tracking how entities are introduced and evolve over time.
contradiction
869
test
aaab77059d374411b68ce5fbf9f58343
Understanding a long document requires tracking how entities are introduced and evolve over time.
Understanding a long document requires tracking how entities evolve over time.
contradiction
870
test
4069d206566942b4a206f4d918eab4b0
Understanding a long document requires tracking how entities evolve over time.
Understanding a long document requires tracking how entities are introduced and evolve over time.
contradiction
871
test
a3ca6805a9484522bed82a113b824a13
Understanding a long document requires tracking how entities are introduced and evolve over time.
Understanding a long document requires understanding how entities are introduced.
contradiction
872
test
163c7e760b5145ccb12a802d642f47ac
Understanding a long document requires understanding how entities are introduced.
Understanding a long document requires tracking how entities are introduced and evolve over time.
contradiction
873
test
e4a13eb66099468a90f8761d622f6f3f
We do not assume that these variables are observed at test time.
These variables are not observed at test time.
contradiction
874
test
e0d0aa0633b148519ec407c8b8f1dd6f
These variables are not observed at test time.
We do not assume that these variables are observed at test time.
contradiction
875
test
b7c870c088fb4ea3ab37b18fd2c3ce5a
To compute the perplexity numbers on the test data, our model only takes account of log probabilities on word prediction.
To compute the perplexity numbers on the test data, our model doesn't take account of anything other than the log probabilities on word prediction.
contradiction
876
test
edce9f2c5fde4d74a85365ab3df2824b
To compute the perplexity numbers on the test data, our model doesn't take account of anything other than the log probabilities on word prediction.
To compute the perplexity numbers on the test data, our model only takes account of log probabilities on word prediction.
contradiction
877
test
4e355856f0d24edcbf30c8f787bae4ca
We also experiment with the option to either use the pretrained GloVe word embeddings or randomly initialized word embeddings (then updated during training).
We experiment with the option using randomly initialized word embeddings (then updated during training).
contradiction
878
test
a832c73e39ab484cb6045dc61fd86732
We experiment with the option using randomly initialized word embeddings (then updated during training).
We also experiment with the option to either use the pretrained GloVe word embeddings or randomly initialized word embeddings (then updated during training).
contradiction
879
test
3d520cbcfd404ba7a6e2c8409660c265
The entity prediction task requires predicting xxxx given the preceding text either by choosing a previously mentioned entity or deciding that this is a “new entity”.
The entity prediction task requires predicting xxxx given the preceding text by choosing a previously mentioned entity.
contradiction
880
test
5a4db0f42ff04c82b638dfdd0442b461
The entity prediction task requires predicting xxxx given the preceding text by choosing a previously mentioned entity.
The entity prediction task requires predicting xxxx given the preceding text either by choosing a previously mentioned entity or deciding that this is a “new entity”.
contradiction
881
test
dfdd9d0d53eb41cfbfede1f992d617f4
So there is no dedicated memory block for every entity and no distinction between entity mentions and non-mention words.
So there is no dedicated high-dimensional memory block for every entity and no distinction between entity mentions and non-mention words.
contradiction
882
test
1a6a540fb4a14ae0907cb3c46d080382
So there is no dedicated high-dimensional memory block for every entity and no distinction between entity mentions and non-mention words.
So there is no dedicated memory block for every entity and no distinction between entity mentions and non-mention words.
contradiction
883
test
966d2828da234e08bb52a9ab7d1e1cc1
Our approach complements these previous methods.
Our approach complements some previous methods.
contradiction
884
test
44649b7b44834128b94e711d4b85eb55
Our approach complements some previous methods.
Our approach complements these previous methods.
contradiction
885
test
a95c85a4e1bd4ac4ba57452c4541407b
We manually annotated 687 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query.
We manually annotated over 650 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query.
contradiction
886
test
56ec4382e5b640be9f6357f9c4c18424
We manually annotated over 650 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query.
We manually annotated 687 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query.
contradiction
887
test
ba4fa52fd3a04f1e8edec38548f8f5c9
We manually annotated 687 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query.
We manually annotated over 690 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query.
contradiction
888
test
93c95d96a7864d18857f84f8256afff4
We manually annotated over 690 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query.
We manually annotated 687 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query.
contradiction
889
test
804e738548c844df83181ae9cce6a147
To generate diversity, workers got a bonus if the edit distance of a paraphrase was high compared to the MG question.
To generate diversity, workers whose paraphrases had high edit distance compared to the MG question got a bonus.
contradiction
890
test
0d2016589319463e9565e2fad883d627
To generate diversity, workers whose paraphrases had high edit distance compared to the MG question got a bonus.
To generate diversity, workers got a bonus if the edit distance of a paraphrase was high compared to the MG question.
contradiction
891
test
8ecff5a968ce4532b07f285a7cc3ec8d
To generate diversity, workers got a bonus if the edit distance of a paraphrase was high compared to the MG question.
To generate diversity, workers got a bonus if the edit distance of a paraphrase was above 3 operations compared to the MG question.
contradiction
892
test
557ec8134af24b51ae174d39f60557a5
To generate diversity, workers got a bonus if the edit distance of a paraphrase was above 3 operations compared to the MG question.
To generate diversity, workers got a bonus if the edit distance of a paraphrase was high compared to the MG question.
contradiction
893
test
972e047a1b6947b18c61f8ac77d9ab01
To generate complex questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase.
To generate simple questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase.
contradiction
894
test
b26f04667ac54c35b1ea8389ef8277d0
To generate simple questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase.
To generate complex questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase.
contradiction
895
test
3d1f470df7cd4e67956c55c086fd48c9
To generate complex questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase.
To generate highly compositional questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase.
contradiction
896
test
52a6cecfe1b44dabaa0d800748b9f43d
To generate highly compositional questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase.
To generate complex questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase.
contradiction
897
test
d17601648d514c209e16e76010bd936b
In this paper, we explore the idea of polyglot semantic translation, or learning semantic parsing models that are trained on multiple datasets and natural languages.
In this paper, we explore the idea of learning semantic parsing models that are trained on multiple datasets and natural languages.
contradiction
898
test
5904047c5a914b1287e9bccc42210eb9
In this paper, we explore the idea of learning semantic parsing models that are trained on multiple datasets and natural languages.
In this paper, we explore the idea of polyglot semantic translation, or learning semantic parsing models that are trained on multiple datasets and natural languages.
contradiction
899
test
426ae8e8b2314049a8f38b2258bb659e