---
tags: 
- bert
license: cc-by-4.0
---
## bert-rand-base
A BERT base language model pre-trained with a **random** pre-training objective. For more details about the pre-training objective and the pre-training hyperparameters, please refer to [How does the pre-training objective affect what large language models learn about linguistic properties?](https://aclanthology.org/2022.acl-short.16/)
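
The model can be loaded with the `transformers` library like any other BERT checkpoint. The repository id below is an assumption (adjust it to wherever this checkpoint is actually hosted); a minimal sketch:

```python
# Minimal usage sketch. The repository id is hypothetical -- replace it
# with the actual Hub id of this checkpoint.
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "aajrami/bert-rand-base"  # hypothetical repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Run a masked-token forward pass; logits have shape
# (batch_size, sequence_length, vocab_size).
inputs = tokenizer("Paris is the capital of [MASK].", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)
```

Note that because this model was pre-trained with a random objective, its masked-token predictions are not expected to be meaningful; it is primarily intended for probing what linguistic properties are learned independently of the pre-training objective.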

## License
CC BY 4.0

## Citation
If you use this model, please cite the following paper:
```
@inproceedings{alajrami2022does,
  title={How does the pre-training objective affect what large language models learn about linguistic properties?},
  author={Alajrami, Ahmed and Aletras, Nikolaos},
  booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
  pages={131--147},
  year={2022}
}
```