Dataset: blimp

How to load this dataset directly with the 🤗/nlp library:

from nlp import load_dataset
dataset = load_dataset("blimp")

Description

BLiMP is a challenge set for evaluating what language models (LMs) know about major grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each containing 1000 minimal pairs isolating specific contrasts in syntax, morphology, or semantics. The data is automatically generated according to expert-crafted grammars.
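The standard BLiMP metric is the fraction of minimal pairs for which a language model assigns a higher score to the grammatical sentence than to its ungrammatical counterpart. A minimal sketch of that computation, assuming hand-made toy pairs and a toy scoring function (a real run would draw pairs from one of the 67 sub-datasets and score them with an actual LM's log-probability):

```python
def accuracy(pairs, log_prob):
    """Fraction of (good, bad) pairs where the grammatical sentence outscores
    the ungrammatical one -- the metric BLiMP reports per sub-dataset."""
    correct = sum(1 for good, bad in pairs if log_prob(good) > log_prob(bad))
    return correct / len(pairs)

def toy_log_prob(sentence):
    # Toy stand-in scorer that simply prefers shorter sentences; purely
    # illustrative, since a real evaluation uses an LM's log-probability.
    return -len(sentence)

# Hand-made illustrative pairs in the spirit of BLiMP's minimal pairs
# (not taken from the dataset itself).
pairs = [
    ("The cat has eaten.", "The cat have eaten."),
    ("Who should Derek hug?", "Who should Derek hug Richard?"),
]

print(accuracy(pairs, toy_log_prob))  # → 1.0 under the toy scorer
```

The toy scorer happens to get both pairs right only because the ungrammatical sentence is longer in each; BLiMP's actual pairs are matched in content precisely so that trivial heuristics like length do not work.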

Citation

@article{warstadt2019blimp,
  title={BLiMP: A Benchmark of Linguistic Minimal Pairs for English},
  author={Warstadt, Alex and Parrish, Alicia and Liu, Haokun and Mohananey, Anhad and Peng, Wei and Wang, Sheng-Fu and Bowman, Samuel R.},
  journal={arXiv preprint arXiv:1912.00582},
  year={2019}
}

Models trained or fine-tuned on blimp

None yet. Start fine-tuning now =)