---
language:
- pt
task_categories:
- question-answering
- text-retrieval
task_ids:
- document-retrieval
- closed-domain-qa
- explanation-generation
dataset_info:
- config_name: corpus
  features:
  - name: id
    dtype: string
  - name: file
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: corpus
    num_bytes: 3972037
    num_examples: 2657
  download_size: 1900813
  dataset_size: 3972037
- config_name: default
  features:
  - name: query-id
    dtype: string
  - name: positive-doc-id
    dtype: string
  - name: candidates-ids
    sequence: string
  splits:
  - name: train
    num_bytes: 1047552
    num_examples: 2307
  - name: dev
    num_bytes: 22687
    num_examples: 50
  - name: test
    num_bytes: 136328
    num_examples: 300
  download_size: 259566
  dataset_size: 1206567
- config_name: queries
  features:
  - name: id
    dtype: string
  - name: file
    dtype: string
  - name: subject
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: queries
    num_bytes: 1023324
    num_examples: 2657
  download_size: 629883
  dataset_size: 1023324
configs:
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus/corpus-*
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: dev
    path: data/dev-*
  - split: test
    path: data/test-*
- config_name: queries
  data_files:
  - split: queries
    path: queries/queries-*
license: cc-by-nc-nd-4.0
---

# The MilkQA Dataset

*I am not the author of this dataset; this is a reproduction of the MilkQA dataset on HuggingFace. The original data can be downloaded from: http://nilc.icmc.usp.br/nilc/index.php/milkqa*

MilkQA is a dataset of dense questions for the task of answer selection. It contains questions and answers from the dairy farming domain that were collected by the customer service of Embrapa Dairy Cattle between 2003 and 2012. The dataset currently contains 2,657 anonymized question-answer pairs and is organized into three partitions: training, development, and test, which contain 2,307, 50, and 300 questions, respectively. Each question is associated with a pool of 50 candidate answers, of which only one is correct.

MilkQA is composed of challenging questions that differ from those typically approached in Question Answering. In our work, we call them consumer questions. These questions usually arise when people seek solutions to a problem, and they present very particular characteristics, such as their larger size and lack of objectivity.

Details about MilkQA are provided in the paper referenced below. If you use the dataset, please consider citing this paper (available at https://arxiv.org/abs/1801.03460).

## Citation

**BibTeX:**

```bibtex
@inproceedings{criscuolo2017milkqa,
  author    = {Marcelo Criscuolo and Erick Rocha Fonseca and Sandra Maria Aluísio and Ana Carolina Sperança-Criscuolo},
  title     = {{MilkQA}: a Dataset of Consumer Questions for the Task of Answer Selection},
  booktitle = {Proceedings of the 6th Brazilian Conference on Intelligent Systems (BRACIS)},
  year      = {2017},
  month     = {October},
  date      = {2-5},
  address   = {Uberlândia, Brazil},
  publisher = {IEEE},
  isbn      = {978-1-5386-2407-4},
  pages     = {354--359},
  volume    = {1},
  doi       = {10.1109/BRACIS.2017.12},
}
```

## License

MilkQA is published by the Interinstitutional Center for Computational Linguistics [NILC](http://nilc.icmc.usp.br/) at the University of São Paulo (USP) under a Creative Commons license with the Attribution, NonCommercial, and NoDerivatives clauses [CC BY-NC-ND](https://creativecommons.org/licenses/by-nc-nd/4.0/).

# Dataset Contents
1. **Data Directory**

   The *queries* and *corpus* subsets contain all question-answer pairs, linked by a shared **id**.

2. **Qrels**

   The splits {train, dev, test} correspond to the training, development, and test subsets. The **query-id** column, which refers to a question in the *queries* subset, is associated with its ground-truth answer (**positive-doc-id**) and a pool of candidate answers (**candidates-ids**). The pool of candidates includes the ground-truth answer. A minimal loading sketch is given at the end of this card.

## Dataset Description

- **Homepage:** http://nilc.icmc.usp.br/nilc/index.php/milkqa
- **Paper:** [MilkQA: a Dataset of Consumer Questions for the Task of Answer Selection](https://arxiv.org/abs/1801.03460)
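
## Loading Example

The snippet below is a minimal sketch of loading the three configurations (corpus, queries, and qrels) with the `datasets` library. The repository id used here is a placeholder, not the actual path of this dataset; replace it with the path shown on this card.

```python
from datasets import load_dataset

# Placeholder repository id (assumption): replace with the actual HuggingFace path of this dataset.
REPO_ID = "user/milkqa"

# Questions and answers share the same "id" across the queries and corpus subsets.
queries = load_dataset(REPO_ID, "queries", split="queries")
corpus = load_dataset(REPO_ID, "corpus", split="corpus")

# Qrels: each query-id is paired with its ground-truth answer and a pool of 50 candidates.
qrels = load_dataset(REPO_ID, "default")  # splits: train, dev, test

example = qrels["train"][0]
print(example["query-id"], example["positive-doc-id"], len(example["candidates-ids"]))
```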