---
task_categories:
- question-answering
- text-retrieval
- text2text-generation
task_ids:
- open-domain-qa
- document-retrieval
- abstractive-qa
language:
- pl
pretty_name: PolQA
size_categories:
- 10K<n<100K
annotations_creators:
- expert-generated
---
# Dataset Card for PolQA

## Dataset Description
- Paper: Improving Question Answering Performance through Manual Annotation: Costs, Benefits and Strategies
- Point of Contact: Piotr Rybak
### Dataset Summary
PolQA is the first Polish dataset for open-domain question answering. It consists of 7,000 questions, 87,525 manually labeled evidence passages, and a corpus of over 7 million candidate passages. The dataset can be used to train both a passage retriever and an abstractive reader.
### Supported Tasks and Leaderboards

- `open-domain-qa`: The dataset can be used to train a model for open-domain question answering. Success on this task is typically measured using the metrics defined during PolEval 2021.
- `document-retrieval`: The dataset can be used to train a model for document retrieval. Success on this task is typically measured by top-k retrieval accuracy or NDCG.
- `abstractive-qa`: The dataset can be used to train a model for abstractive question answering. Success on this task is typically measured using the metrics defined during PolEval 2021.
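For illustration, top-k retrieval accuracy (the fraction of questions for which at least one relevant passage appears among the top-k retrieved) can be computed as in the following sketch; the function name and toy data are our own, not part of the dataset tooling:

```python
def top_k_accuracy(retrieved, relevant, k):
    """Fraction of questions with at least one relevant passage among the top-k retrieved.

    retrieved: dict mapping question id -> list of passage ids, ranked by score.
    relevant:  dict mapping question id -> set of relevant passage ids.
    """
    hits = sum(
        1 for q_id, passages in retrieved.items()
        if any(p in relevant.get(q_id, set()) for p in passages[:k])
    )
    return hits / len(retrieved)

# Toy example: two questions with ranked passage ids.
retrieved = {1: ['a', 'b', 'c'], 2: ['d', 'e', 'f']}
relevant = {1: {'c'}, 2: {'x'}}
print(top_k_accuracy(retrieved, relevant, k=3))  # 0.5 (only question 1 is answered in the top 3)
```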
### Languages
The text is in Polish, as spoken by the host of the Jeden z Dziesięciu TV show (questions) and Polish Wikipedia editors (passages). The BCP-47 code for Polish is pl-PL.
## Dataset Structure

### Data Instances
The main part of the dataset consists of manually annotated question-passage pairs. Each instance contains a `question`, a passage (`passage_id`, `passage_title`, `passage_text`), and a boolean indicator `relevant` specifying whether the passage is relevant to the given question (i.e. whether it contains the answer).

For each `question` there is a list of possible `answers` formulated in natural language, in the way a Polish speaker would answer the question. This means that the answers might contain prepositions, be inflected, and contain punctuation. In some cases, the answer might have multiple correct variants, e.g. numbers written both as numerals and as words, synonyms, or abbreviations and their expansions.

Additionally, we provide a classification of each question-answer pair based on the `question_formulation`, the `question_type`, and the `entity_type`/`entity_subtype`, according to the taxonomy proposed by Maciej Ogrodniczuk and Piotr Przybyła (2021).
```python
{
  'question_id': 6,
  'passage_title': 'Mumbaj',
  'passage_text': 'Mumbaj lub Bombaj (marathi मुंबई, trb.: Mumbaj; ang. Mumbai; do 1995 Bombay) – stolica indyjskiego stanu Maharasztra, położona na wyspie Salsette, na Morzu Arabskim.',
  'passage_wiki': 'Mumbaj lub Bombaj (mr. मुंबई, trb.: "Mumbaj"; ang. Mumbai; do 1995 Bombay) – stolica indyjskiego stanu Maharasztra, położona na wyspie Salsette, na Morzu Arabskim. Wraz z miastami satelitarnymi tworzy najludniejszą po Delhi aglomerację liczącą 23 miliony mieszkańców. Dzięki naturalnemu położeniu jest to największy port morski kraju. Znajdują się tutaj także najsilniejsze giełdy Azji Południowej: National Stock Exchange of India i Bombay Stock Exchange.',
  'passage_id': '42609-0',
  'duplicate': False,
  'question': 'W którym państwie leży Bombaj?',
  'relevant': True,
  'annotated_by': 'Igor',
  'answers': "['w Indiach', 'Indie']",
  'question_formulation': 'QUESTION',
  'question_type': 'SINGLE ENTITY',
  'entity_type': 'NAMED',
  'entity_subtype': 'COUNTRY',
  'split': 'train',
  'passage_source': 'human'
}
```
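Note that the `answers` field holds a string representation of a list rather than a list. A minimal way to recover the Python list (a sketch on a toy record, not part of the dataset tooling) is `ast.literal_eval`:

```python
import ast

# Toy record mirroring the example instance above.
example = {'answers': "['w Indiach', 'Indie']"}

answers = ast.literal_eval(example['answers'])
print(answers)  # ['w Indiach', 'Indie']
```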
The second part of the dataset is a corpus of Polish Wikipedia (March 2022 snapshot) passages. The raw Wikipedia snapshot was parsed using WikiExtractor and split into passages at the ends of the paragraphs or if the passage was longer than 500 characters.
```python
{
  'id': '42609-0',
  'title': 'Mumbaj',
  'text': 'Mumbaj lub Bombaj (mr. मुंबई, trb.: "Mumbaj"; ang. Mumbai; do 1995 Bombay) – stolica indyjskiego stanu Maharasztra, położona na wyspie Salsette, na Morzu Arabskim. Wraz z miastami satelitarnymi tworzy najludniejszą po Delhi aglomerację liczącą 23 miliony mieszkańców. Dzięki naturalnemu położeniu jest to największy port morski kraju. Znajdują się tutaj także najsilniejsze giełdy Azji Południowej: National Stock Exchange of India i Bombay Stock Exchange.'
}
```
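The `id` field encodes the Wikipedia article id and the passage index, separated by a hyphen. A small helper to split it (the function name is our own):

```python
def parse_passage_id(passage_id: str):
    """Split a corpus id like '42609-0' into (article_id, passage_index)."""
    article_id, passage_index = passage_id.rsplit('-', 1)
    return article_id, int(passage_index)

print(parse_passage_id('42609-0'))  # ('42609', 0)
```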
### Data Fields

Question-passage pairs:

- `question_id`: an integer id of the question
- `passage_title`: a string containing the title of the Wikipedia article
- `passage_text`: a string containing the passage text as extracted by the human annotator
- `passage_wiki`: a string containing the passage text as it can be found in the provided Wikipedia corpus. Empty if the passage doesn't exist in the corpus.
- `passage_id`: a string containing the id of the passage from the provided Wikipedia corpus. Empty if the passage doesn't exist in the corpus.
- `duplicate`: a boolean flag indicating whether a question-passage pair is duplicated in the dataset. This occurs when the same passage was found in multiple passage sources.
- `question`: a string containing the question
- `relevant`: a boolean flag indicating whether the passage is relevant to the question (i.e. whether it contains the answer)
- `annotated_by`: a string containing the name of the annotator who verified the relevance of the pair
- `answers`: a string containing a list of possible short answers to the question
- `question_formulation`: a string describing the kind of expression used to request information. One of the following:
  - `QUESTION`, e.g. What is the name of the first letter of the Greek alphabet?
  - `COMMAND`, e.g. Expand the abbreviation 'CIA'.
  - `COMPOUND`, e.g. This French writer, born in the 19th century, is considered a pioneer of sci-fi literature. What is his name?
- `question_type`: a string indicating what type of information is sought by the question. One of the following:
  - `SINGLE ENTITY`, e.g. Who is the hero in the Tomb Raider video game series?
  - `MULTIPLE ENTITIES`, e.g. Which two seas are linked by the Corinth Canal?
  - `ENTITY CHOICE`, e.g. Is "Sombrero" a type of dance, a hat, or a dish?
  - `YES/NO`, e.g. When the term of office of the Polish Sejm is terminated, does it apply to the Senate as well?
  - `OTHER NAME`, e.g. What was the nickname of Louis I, the King of the Franks?
  - `GAP FILLING`, e.g. Finish the proverb: "If you fly with the crows...".
- `entity_type`: a string containing the type of the sought entity. One of the following: `NAMED`, `UNNAMED`, or `YES/NO`.
- `entity_subtype`: a string containing the subtype of the sought entity. Can take one of 34 different values.
- `split`: a string containing the split of the dataset. One of the following: `train`, `valid`, or `test`.
- `passage_source`: a string containing the source of the passage. One of the following:
  - `human`: the passage was proposed by a human annotator using any internal (i.e. Wikipedia search) or external (e.g. Google) search engine and any keywords or queries they considered useful
  - `hard-negatives`: the passage was proposed by a neural retriever trained on the passages found by the human annotators
  - `zero-shot`: the passage was proposed by the BM25 retriever and re-ranked using a multilingual cross-encoder
Corpus of passages:

- `id`: a string representing the Wikipedia article id and the index of the extracted passage. Matches the `passage_id` from the main part of the dataset.
- `title`: a string containing the title of the Wikipedia article. Matches the `passage_title` from the main part of the dataset.
- `text`: a string containing the passage text. Matches the `passage_wiki` from the main part of the dataset.
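Since `passage_id` in the question-passage pairs matches `id` in the corpus, the full Wikipedia text of an annotated passage can be looked up with a simple index. A sketch over toy records (the records below are illustrative, not real dataset rows):

```python
# Toy corpus records mirroring the fields described above.
corpus = [
    {'id': '42609-0', 'title': 'Mumbaj', 'text': 'Mumbaj lub Bombaj ...'},
]
pairs = [
    {'question': 'W którym państwie leży Bombaj?', 'passage_id': '42609-0', 'relevant': True},
]

# Build an id -> document index over the corpus.
index = {doc['id']: doc for doc in corpus}

for pair in pairs:
    # passage_id may be empty if the passage is absent from the corpus, so .get() can return None.
    doc = index.get(pair['passage_id'])
    if doc is not None:
        print(pair['question'], '->', doc['title'])
```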
### Data Splits

The questions are assigned to one of three splits: `train`, `validation`, and `test`. The `validation` and `test` questions are randomly sampled from the `test-B` dataset from the PolEval 2021 competition.

| | # questions | # positive passages | # negative passages |
|---|---|---|---|
| train | 5,000 | 27,131 | 34,904 |
| validation | 1,000 | 5,839 | 6,927 |
| test | 1,000 | 5,938 | 6,786 |
## Dataset Creation

### Curation Rationale

The PolQA dataset was created to support and promote research on open-domain question answering for Polish. It also serves as a benchmark for evaluating open-domain QA systems.

### Source Data

#### Initial Data Collection and Normalization
The majority of questions come from two existing resources: the 6,000 questions from the PolEval 2021 shared task on question answering and an additional 1,000 questions gathered by one of the shared task participants. Originally, the questions come from collections associated with TV shows, both officially published and gathered online by their fans, as well as from questions used in actual quiz competitions, on TV or online.
The evidence passages come from the Polish Wikipedia (March 2022 snapshot). The raw Wikipedia snapshot was parsed using WikiExtractor and split into passages at the ends of the paragraphs or if the passage was longer than 500 characters.
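The splitting rule described above (break at paragraph ends, and additionally chop any passage longer than 500 characters) can be sketched as follows; this is our reconstruction of the rule, not the exact script used to build the corpus:

```python
def split_into_passages(article_text: str, max_len: int = 500):
    """Split article text into passages at paragraph boundaries;
    paragraphs longer than max_len are further chopped into max_len chunks."""
    passages = []
    for paragraph in article_text.split('\n\n'):
        paragraph = paragraph.strip()
        while len(paragraph) > max_len:
            passages.append(paragraph[:max_len])
            paragraph = paragraph[max_len:]
        if paragraph:
            passages.append(paragraph)
    return passages

# A 1200-character paragraph yields passages of 500 + 500 + 200 characters.
print([len(p) for p in split_into_passages('a' * 1200)])  # [500, 500, 200]
```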
#### Who are the source language producers?
The questions come from various sources and their authors are unknown but are mostly analogous (or even identical) to questions asked during the Jeden z Dziesięciu TV show.
The passages were written by the editors of the Polish Wikipedia.
### Annotations

#### Annotation process
Two approaches were used to annotate the question-passage pairs. Each of them consists of two phases: the retrieval of candidate passages and the manual verification of their relevance.
In the first approach, we asked annotators to use internal (i.e. Wikipedia search) or external (e.g. Google) search engines to find up to five relevant passages using any keywords or queries they considered useful (`passage_source="human"`). Based on those passages, we trained a neural retriever to extend the number of relevant passages, as well as to retrieve hard negatives (`passage_source="hard-negatives"`).

In the second approach, the passage candidates were proposed by the BM25 retriever and re-ranked using a multilingual cross-encoder (`passage_source="zero-shot"`).
In both cases, all proposed question-passage pairs were manually verified by the annotators.
#### Who are the annotators?

The annotation team consisted of 16 annotators, all native Polish speakers, most of them with a linguistic background and previous experience as annotators.
### Personal and Sensitive Information
The dataset does not contain any personal or sensitive information.
## Considerations for Using the Data

### Social Impact of Dataset

This dataset was created to promote research on open-domain question answering for Polish and to enable the development of question answering systems.

### Discussion of Biases
The passages proposed by the `hard-negatives` and `zero-shot` methods are bound to be easier for retrievers to find, since they were proposed by such models. To mitigate this bias, we include the passages found by the human annotators in an unconstrained way (`passage_source="human"`). We hypothesize that this results in more unbiased and diverse examples. Moreover, we asked the annotators to find not one but up to five passages, preferably from different articles, to further increase passage diversity.
### Other Known Limitations

The PolQA dataset focuses on trivia questions, which might limit its usefulness in real-world applications, since neural retrievers generalize poorly to other domains.
## Additional Information

### Dataset Curators

The PolQA dataset was developed by Piotr Rybak, Piotr Przybyła, and Maciej Ogrodniczuk from the Institute of Computer Science, Polish Academy of Sciences.

### Licensing Information

[More Information Needed]

### Citation Information
```bibtex
@misc{rybak2022improving,
  title={Improving Question Answering Performance through Manual Annotation: Costs, Benefits and Strategies},
  author={Piotr Rybak and Piotr Przybyła and Maciej Ogrodniczuk},
  year={2022},
  eprint={2212.08897},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```