BabyBERTa

BabyBERTa is a slightly modified, much smaller RoBERTa model trained on 5M words of American-English child-directed input. It is intended for language acquisition research on a single desktop with a single GPU; no high-performance computing infrastructure is needed.
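The checkpoint can be loaded with the Hugging Face transformers library like any other RoBERTa-style masked language model. The sketch below assumes the model is hosted on the Hub under the name phueb/BabyBERTa-1 (taken from this repository's name) and uses the generic AutoTokenizer / AutoModelForMaskedLM classes; treat it as a minimal illustration rather than the canonical loading code.

```python
# Minimal sketch: loading BabyBERTa as a RoBERTa-style masked language model.
# Assumes the checkpoint is available on the Hugging Face Hub as "phueb/BabyBERTa-1"
# (the name of this repository); adjust the name if your copy lives elsewhere.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "phueb/BabyBERTa-1"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

# Fill in the masked word of a simple child-directed-style sentence.
text = f"the dog chased the {tokenizer.mask_token} ."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Find the mask position and take the highest-scoring prediction for it.
mask_positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_id = int(logits[0, mask_positions[0]].argmax())
print(tokenizer.decode([predicted_id]))
```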

This model was trained by Philip Huebner, currently at the UIUC Language and Learning Lab.

More information can be found in the BabyBERTa GitHub repository: https://github.com/phueb/BabyBERTa.