### Overview

BabyBERTa is a light-weight version of RoBERTa trained on 5M words of American-English child-directed input.
It is intended for language acquisition research on a single desktop with a single GPU - no high-performance computing infrastructure needed.
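
As a quick start, here is a minimal usage sketch with the Hugging Face `transformers` library. The hub id `phueb/BabyBERTa-1` is an assumption for illustration; substitute the id of this repository:

```python
import torch
from transformers import RobertaForMaskedLM, RobertaTokenizerFast

MODEL_ID = "phueb/BabyBERTa-1"  # assumed hub id; substitute this repository's id

tokenizer = RobertaTokenizerFast.from_pretrained(MODEL_ID)
model = RobertaForMaskedLM.from_pretrained(MODEL_ID)
model.eval()

# BabyBERTa is not case-sensitive, so the prompt is written in lower case.
inputs = tokenizer("the baby drank the <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Report the highest-scoring token at the masked position.
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
print(tokenizer.decode(logits[0, mask_index].argmax().item()))
```

Because the model is light-weight, this runs comfortably on a single desktop GPU, or even on CPU for quick experiments.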

### Performance

The provided model is the best-performing out of 10 that were evaluated on the [Zorro](https://github.com/phueb/Zorro) test suite.
This model was trained for 400K steps and achieves an overall accuracy of 80.3,
comparable to RoBERTa-base, which achieves an overall accuracy of 82.6 on the latest version of Zorro (as of October 2021).

Both values differ slightly from those reported in the paper (Huebner et al., 2020). There are two reasons for this:

1. The performance of RoBERTa-base is slightly higher because the authors previously lower-cased all words in Zorro before evaluation.
   Lower-casing proper nouns is detrimental to RoBERTa-base, which has likely been trained on proper nouns that are primarily title-cased.
   In contrast, because BabyBERTa is not case-sensitive, its performance is not influenced by this change.
2. The latest version of Zorro no longer contains ambiguous content words such as "Spanish", which can be both a noun and an adjective.
   This resulted in a small reduction in the performance of BabyBERTa.
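
To make the evaluation concrete, below is a hedged sketch of Zorro-style minimal-pair scoring: a pair counts as correct when the model assigns a higher score to its grammatical sentence. The pseudo-log-likelihood scoring used here (mask one token at a time and sum the log-probabilities) is a common choice for masked language models, not necessarily the exact procedure behind the numbers above, and the example pair is invented for illustration:

```python
import torch
from transformers import RobertaForMaskedLM, RobertaTokenizerFast

MODEL_ID = "phueb/BabyBERTa-1"  # assumed hub id, as in the snippet above

tokenizer = RobertaTokenizerFast.from_pretrained(MODEL_ID)
model = RobertaForMaskedLM.from_pretrained(MODEL_ID)
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Score a sentence by masking each token in turn and summing log-probs."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids[0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip the <s> and </s> special tokens
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# An invented minimal pair (not taken from Zorro); inputs are lower-cased
# because BabyBERTa is not case-sensitive.
good = "the dogs on the hill are barking ."
bad = "the dogs on the hill is barking ."
print(pseudo_log_likelihood(good) > pseudo_log_likelihood(bad))  # correct if True
```

Overall Zorro accuracy is then the fraction of minimal pairs scored correctly across all grammatical paradigms.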

### Additional Information