Update README.md
---
tags:
license: cc-by-4.0
---

## bert-ascii-small

A small-size BERT Language Model pre-trained by predicting the summation of the **ASCII** code values of the characters in a masked token as a pre-training objective. For more details about the pre-training objective and the pre-training hyperparameters, please refer to [How does the pre-training objective affect what large language models learn about linguistic properties?](https://aclanthology.org/2022.acl-short.16/)
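To make the objective concrete, here is a minimal sketch (the helper name is ours, not from the paper) of how the pre-training target for a masked token can be computed: the model predicts the sum of the ASCII code values of the token's characters.

```python
# Illustrative sketch of the ASCII-sum pre-training target (assumed helper,
# not the authors' code): a masked token is mapped to the sum of the ASCII
# code values of its characters, and the model is trained to predict it.
def ascii_sum_target(token: str) -> int:
    """Return the sum of the ASCII code values of the token's characters."""
    return sum(ord(ch) for ch in token)

# For the masked token "cat": 99 ("c") + 97 ("a") + 116 ("t") = 312
print(ascii_sum_target("cat"))  # 312
```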

## License

CC BY 4.0