---
language:
- en
task_categories:
- text-classification
task_ids:
- multi-class-classification
size_categories:
- 1K<n<10K
paperswithcode_id: trecqa
pretty_name: Text Retrieval Conference Question Answering
---
Dataset Card for "trec"
Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: https://cogcomp.seas.upenn.edu/Data/QA/QC/
- Repository: More Information Needed
- Paper: More Information Needed
- Point of Contact: More Information Needed
- Size of downloaded dataset files: 0.34 MB
- Size of the generated dataset: 0.39 MB
- Total amount of disk used: 0.74 MB
Dataset Summary
The Text REtrieval Conference (TREC) Question Classification dataset contains 5,500 labeled questions in the training set and another 500 in the test set. The dataset has 6 coarse labels and 47 fine-grained (level-2) labels. The average sentence length is 10 words, and the vocabulary size is 8,700.
The data were collected from four sources: 4,500 English questions published by USC (Hovy et al., 2001), about 500 manually constructed questions for a few rare classes, 894 TREC 8 and TREC 9 questions, and 500 questions from TREC 10, which serve as the test set.
Supported Tasks and Leaderboards
Languages
Dataset Structure
Data Instances
default
- Size of downloaded dataset files: 0.34 MB
- Size of the generated dataset: 0.39 MB
- Total amount of disk used: 0.74 MB
An example of 'train' looks as follows.
{
    "label-coarse": 1,
    "label-fine": 2,
    "text": "What fowl grabs the spotlight after the Chinese Year of the Monkey ?"
}
Data Fields
The data fields are the same across all splits.
default
- `label-coarse`: a classification label, with possible values including `DESC` (0), `ENTY` (1), `ABBR` (2), `HUM` (3), `NUM` (4).
- `label-fine`: a classification label, with possible values including `manner` (0), `cremat` (1), `animal` (2), `exp` (3), `ind` (4).
- `text`: a `string` feature.
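Because both label columns are stored as integers, it is often useful to map ids back to their names. A small sketch, assuming both columns are `ClassLabel` features (which expose `int2str`/`str2int`):

```python
from datasets import load_dataset

trec = load_dataset("trec", split="train")

# ClassLabel features carry the id <-> name mapping.
coarse = trec.features["label-coarse"]
fine = trec.features["label-fine"]

example = trec[0]
print(coarse.int2str(example["label-coarse"]))  # e.g. 'ENTY'
print(fine.int2str(example["label-fine"]))      # e.g. 'animal'
```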
Data Splits
| name    | train | test |
|:--------|------:|-----:|
| default |  5452 |  500 |
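The split sizes can be verified after loading; a quick sketch:

```python
from datasets import load_dataset

trec = load_dataset("trec")

# Expected per the table above: {'train': 5452, 'test': 500}.
print({name: split.num_rows for name, split in trec.items()})
```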
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
@inproceedings{li-roth-2002-learning,
title = "Learning Question Classifiers",
author = "Li, Xin and
Roth, Dan",
booktitle = "{COLING} 2002: The 19th International Conference on Computational Linguistics",
year = "2002",
url = "https://www.aclweb.org/anthology/C02-1150",
}
@inproceedings{hovy-etal-2001-toward,
title = "Toward Semantics-Based Answer Pinpointing",
author = "Hovy, Eduard and
Gerber, Laurie and
Hermjakob, Ulf and
Lin, Chin-Yew and
Ravichandran, Deepak",
booktitle = "Proceedings of the First International Conference on Human Language Technology Research",
year = "2001",
url = "https://www.aclweb.org/anthology/H01-1069",
}