---
tags:
- bert
license: cc-by-4.0
---

## bert-rand-medium

`bert-rand-medium` is a medium-sized BERT language model pre-trained with a **random** pre-training objective. For more details about the pre-training objective and the pre-training hyperparameters, please refer to [How does the pre-training objective affect what large language models learn about linguistic properties?](https://aclanthology.org/2022.acl-short.16/)
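
A minimal usage sketch for extracting contextual token representations, assuming the checkpoint is hosted on the Hugging Face Hub as `aajrami/bert-rand-medium` (the uploader's namespace) and loads with the standard `transformers` auto classes:

```python
from transformers import AutoTokenizer, AutoModel

# Assumed Hub ID: the uploader's namespace plus the model name.
model_id = "aajrami/bert-rand-medium"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Tokenize a sentence and run a forward pass through the encoder.
inputs = tokenizer(
    "The pre-training objective shapes what the model learns.",
    return_tensors="pt",
)
outputs = model(**inputs)

# Contextual token embeddings: (batch_size, sequence_length, hidden_size).
print(outputs.last_hidden_state.shape)
```
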
## License

CC BY 4.0

## Citation

If you use this model, please cite the following paper:

```
@inproceedings{alajrami2022does,
  title={How does the pre-training objective affect what large language models learn about linguistic properties?},
  author={Alajrami, Ahmed and Aletras, Nikolaos},
  booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
  pages={131--147},
  year={2022}
}
```