Dataset metadata:
- Tasks: Question Answering (multiple-choice-qa)
- Language Creators: found
- Annotations Creators: found
- Source Datasets: original
Meaningless Paragraph
#2 opened by keremturgutlu
The paragraph provided for each example looks like random text. I manually went over some examples and can verify that there is no coherence between the sentences in a paragraph, nor any relevance to the given question. Could you please provide more information in the datasheet on how the paragraphs were obtained, beyond the source being Wikipedia? Thanks!
@mhardalov I am also curious about how these paragraphs were selected, and why there are 4 slightly different supporting paragraphs for each question.