---
annotations_creators:
- expert-generated
- crowdsourced
language_creators:
- expert-generated
languages:
- en-US
licenses:
- other-individual-licenses
multilinguality:
- monolingual
pretty_name: RAFT
size_categories:
- unknown
source_datasets:
- original
- extended|ade_corpus_v2
- extended|banking77
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for RAFT

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)
## Dataset Description

- **Homepage:** raft.elicit.org
- **Repository:** https://huggingface.co/datasets/ought/raft
- **Paper:** forthcoming
- **Leaderboard:** https://huggingface.co/spaces/ought/raft-leaderboard
- **Point of Contact:** alexneel@gmail.com

### Dataset Summary
The Real-world Annotated Few-shot Tasks (RAFT) dataset is an aggregation of English-language datasets found in the real world. Each dataset is associated with a binary or multiclass classification task and is intended to improve our understanding of how language models perform on tasks with concrete, real-world value. Only 50 labeled examples are provided for each dataset.
### Supported Tasks and Leaderboards
- `text-classification`: Each subtask in RAFT is a text classification task, and the provided train and test sets can be used to submit to the [RAFT Leaderboard](https://huggingface.co/spaces/ought/raft-leaderboard). To prevent overfitting and tuning on a held-out test set, the leaderboard is only evaluated once per week. A macro-F1 score is calculated for each task, and those scores are averaged to produce the overall leaderboard score.
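The scoring scheme described above can be sketched in plain Python. This is an illustration, not the leaderboard's actual evaluation code: `macro_f1` averages per-class F1 scores without weighting, and the `task_scores` values are hypothetical.

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 for each class, then average unweighted."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for label in labels:
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        denom = precision + recall
        f1s.append(2 * precision * recall / denom if denom else 0.0)
    return sum(f1s) / len(f1s)

# The overall leaderboard score is the plain average of per-task macro-F1 scores.
task_scores = {"ade_corpus_v2": 0.80, "banking_77": 0.60}  # hypothetical values
overall = sum(task_scores.values()) / len(task_scores)
```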
### Languages
The only language intentionally included in the dataset is American English (en-US). However, we have not examined every single example in the train and test sets.
## Dataset Structure

### Data Instances

| Dataset | First Example |
| --- | --- |
| Ade Corpus V2 | Sentence: No regional side effects were noted. |
| Banking 77 | Query: Is it possible for me to change my PIN number? |
| Terms Of Service | Sentence: Crowdtangle may change these terms of service, as described above, notwithstanding any provision to the contrary in any agreemen... |
| Tai Safety Research | Title: Malign generalization without internal search |
| Neurips Impact Statement Risks | Paper title: Auto-Panoptic: Cooperative Multi-Component Architecture Search for Panoptic Segmentation... |
| Overruling | Sentence: in light of both our holding today and previous rulings in johnson, dueser, and gronroos, we now explicitly overrule dupree.... |
| Systematic Review Inclusion | Title: Prototyping and transforming facial textures for perception research... |
| One Stop English | Article: For 85 years, it was just a grey blob on classroom maps of the solar system. But, on 15 July, Pluto was seen in high resolution ... |
| Tweet Eval Hate | Tweet: New to Twitter-- any men on here know what the process is to get #verified?... |
| Twitter Complaints | Tweet text: @HMRCcustomers No this is my first job |
| Semiconductor Org Types | Paper title: 3Gb/s AC-coupled chip-to-chip communication using a low-swing pulse receiver... |
### Data Fields

The `ID` field indexes data points and is used to match your submission against the true test labels, so you must include it in any submission. All other columns contain textual data; some include links or URLs.

All output fields appear under the `Label` column header. A value of 0 in this column marks an entry as unlabeled and should appear only in the unlabeled test set; the remaining values are the task's class labels. To recover their textual values for a given dataset:
```python
import datasets

# Load one RAFT subtask; this returns a DatasetDict with "train" and "test" splits.
dataset = datasets.load_dataset("ought/raft", "ade_corpus_v2")

# Get the object that holds information about the "Label" feature.
label_info = dataset["train"].features["Label"]

# Use the int2str method to access the textual labels.
print([label_info.int2str(i) for i in (0, 1, 2)])
# >>> ['Unlabeled', 'ADE-related', 'not ADE-related']
```
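Since the `ID` field must accompany each prediction, a submission can be sketched as a two-column file. The CSV layout, file name, and example predictions below are assumptions for illustration; check the leaderboard instructions for the exact expected format.

```python
import csv

# Hypothetical predictions for the unlabeled test set, keyed by the ID field.
# In practice these would come from a model run over dataset["test"].
predictions = {0: "ADE-related", 1: "not ADE-related"}

with open("ade_corpus_v2_predictions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ID", "Label"])  # ID is required to match the test labels
    for example_id, label in predictions.items():
        writer.writerow([example_id, label])
```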
### Data Splits
[More Information Needed]
## Dataset Creation

### Curation Rationale
[More Information Needed]
### Source Data

#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations

#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data

### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information

### Dataset Curators
[More Information Needed]
### Licensing Information
RAFT aggregates many other datasets, each of which is provided under its own license. Generally, those licenses permit research and commercial use.
| Dataset | License |
| --- | --- |
| Ade Corpus V2 | |
| Banking 77 | |
| Terms Of Service | |
| Tai Safety Research | |
| Neurips Impact Statement Risks | |
| Overruling | |
| Systematic Review Inclusion | |
| One Stop English | |
| Tweet Eval Hate | |
| Twitter Complaints | |
| Semiconductor Org Types | |
### Citation Information
[More Information Needed]
### Contributions
Thanks to @neel-alex, @uvafan, and @lewtun for adding this dataset.