## BabyBERTa

BabyBERTa is a slightly modified and much smaller RoBERTa model trained on 5M words of American-English child-directed input.
It is intended for language acquisition research and can be trained on a single desktop with a single GPU; no high-performance computing infrastructure is needed.
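As a quick illustration, the sketch below loads a BabyBERTa checkpoint with the Hugging Face `transformers` library and fills in a masked token. The model identifier `phueb/BabyBERTa-1` is an assumption here; check the Hugging Face Hub for the exact checkpoint name.

```python
# A minimal usage sketch, assuming a checkpoint is published on the Hugging Face Hub
# under the identifier "phueb/BabyBERTa-1" (verify the exact name on the Hub).
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "phueb/BabyBERTa-1"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Fill in the masked token of a simple child-directed-style sentence.
inputs = tokenizer("the dog chased the <mask> .", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the <mask> position and take the highest-scoring vocabulary item.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```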

This model was trained by [Philip Huebner](https://philhuebner.com), currently at the [UIUC Language and Learning Lab](http://www.learninglanguagelab.org).

More info can be found [here](https://github.com/phueb/BabyBERTa).