---
language: en
tags:
- BabyBERTa
datasets:
- CHILDES
widget:
- text: "Look here. What is that <mask> ?"
- text: "Do you like your <mask> ?"
---

## BabyBERTa

### Overview

BabyBERTa is a light-weight version of RoBERTa trained on 5M words of American-English child-directed input.
It is intended for language acquisition research and runs on a single desktop with a single GPU; no high-performance computing infrastructure is needed.

The three models provided here were randomly selected from the 10 trained and reported in the paper.

### Loading the tokenizer

BabyBERTa was trained with `add_prefix_space=True`, so it will not behave correctly if the tokenizer is loaded with its defaults.
For instance, load the tokenizer for BabyBERTa-1 as follows:

```python
from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("phueb/BabyBERTa-1",
                                                 add_prefix_space=True)
```
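
As a quick sanity check, the model can then be exercised with the standard `transformers` fill-mask pipeline. This is only a minimal sketch using one of the widget prompts above, not code from the BabyBERTa repository:

```python
from transformers import RobertaForMaskedLM, RobertaTokenizerFast, pipeline

tokenizer = RobertaTokenizerFast.from_pretrained("phueb/BabyBERTa-1",
                                                 add_prefix_space=True)
model = RobertaForMaskedLM.from_pretrained("phueb/BabyBERTa-1")

# Rank candidates for the masked position in one of the widget prompts.
fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for prediction in fill("Do you like your <mask> ?"):
    print(prediction["token_str"], round(prediction["score"], 3))
```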

### Hyper-Parameters

See the paper for details. 
All provided models were trained for 400K steps with a batch size of 16.
Importantly, BabyBERTa never predicts unmasked tokens during training: `unmask_prob` is set to zero.
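
To make this concrete, the sketch below shows what `unmask_prob = 0` means for the masking step. It is an illustration under assumed names (the `mask_tokens` helper and the 0.15 masking rate are placeholders), not the repository's training code: every token selected for prediction is replaced by `<mask>`, rather than the standard RoBERTa recipe of occasionally leaving a selected token unchanged or substituting a random token.

```python
import torch

def mask_tokens(input_ids, mask_token_id, mask_prob=0.15, unmask_prob=0.0):
    """Illustrative masking step (special tokens ignored for brevity).

    With unmask_prob=0, every token chosen for prediction is replaced
    by <mask>; none are shown to the model in their original form.
    """
    labels = input_ids.clone()
    chosen = torch.rand(input_ids.shape) < mask_prob            # positions to predict
    labels[~chosen] = -100                                      # ignored by the MLM loss
    keep_original = torch.rand(input_ids.shape) < unmask_prob   # 0.0 -> never keep
    masked = torch.where(chosen & ~keep_original,
                         torch.full_like(input_ids, mask_token_id),
                         input_ids)
    return masked, labels
```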


### Performance

BabyBERTa was developed for learning grammatical knowledge from child-directed input.
Its grammatical knowledge was evaluated using the [Zorro](https://github.com/phueb/Zorro) test suite.
The best model achieves an overall accuracy of 80.3, 
comparable to RoBERTa-base, which achieves an overall accuracy of 82.6 on the latest version of Zorro (as of October, 2021).
Both values differ slightly from those reported in the [CoNLL 2021 paper](https://aclanthology.org/2021.conll-1.49/). 
There are two reasons for this:
1. Performance of RoBERTa-base is slightly higher here because the authors previously lower-cased all words in Zorro before evaluation.
Lower-casing of proper nouns is detrimental to RoBERTa-base because RoBERTa-base has likely been trained on proper nouns that are primarily title-cased.
In contrast, because BabyBERTa is not case-sensitive, its performance is not influenced by this change.
2. The latest version of Zorro no longer contains ambiguous content words such as "Spanish", which can be both a noun and an adjective.
This resulted in a small reduction in the performance of BabyBERTa.
 
Overall Accuracy on Zorro:
 
| Model Name                             | Accuracy (holistic scoring)  | Accuracy (MLM-scoring) | 
|----------------------------------------|------------------------------|------------|
| [BabyBERTa-1][link-BabyBERTa-1]        | 80.3                         | 79.9       | 
| [BabyBERTa-2][link-BabyBERTa-2]        | 78.6                         | 78.2       | 
| [BabyBERTa-3][link-BabyBERTa-3]        | 74.5                         | 78.1       | 
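
For reference, the MLM-scoring column corresponds to a pseudo-log-likelihood comparison of the two sentences in each minimal pair. The snippet below is only a hedged sketch of that idea, not the Zorro evaluation code, and the example pair is hypothetical:

```python
import torch
from transformers import RobertaForMaskedLM, RobertaTokenizerFast

name = "phueb/BabyBERTa-1"
tokenizer = RobertaTokenizerFast.from_pretrained(name, add_prefix_space=True)
model = RobertaForMaskedLM.from_pretrained(name)
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of log-probabilities of each token, masking one position at a time."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):          # skip <s> and </s>
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# Hypothetical minimal pair; a pair counts as correct if the grammatical
# sentence receives the higher score.
good = "the dog on the boats does bark ."
bad = "the dog on the boats do bark ."
print(pseudo_log_likelihood(good) > pseudo_log_likelihood(bad))
```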



### Additional Information

This model was trained by [Philip Huebner](https://philhuebner.com), currently at the [UIUC Language and Learning Lab](http://www.learninglanguagelab.org).

More info can be found [here](https://github.com/phueb/BabyBERTa).


[link-BabyBERTa-1]: https://huggingface.co/phueb/BabyBERTa-1
[link-BabyBERTa-2]: https://huggingface.co/phueb/BabyBERTa-2
[link-BabyBERTa-3]: https://huggingface.co/phueb/BabyBERTa-3