---
license: mit
datasets:
- omarmomen/babylm_10M
language:
- en
metrics:
- perplexity
library_name: transformers
---
# Model Card for omarmomen/structformer_s2_final_with_pos

This model is part of the experiments in "Increasing The Performance of Cognitively Inspired Data-Efficient Language Models via Implicit Structure Building" (https://aclanthology.org/2023.conll-babylm.29/), published at the BabyLM workshop at CoNLL 2023.

<strong>omarmomen/structformer_s2_final_with_pos</strong> modifies the vanilla transformer encoder to incorporate syntactic inductive bias through an unsupervised parsing mechanism.

In this variant, the parser network is placed after 4 attention blocks.

The model is pretrained on the BabyLM 10M dataset using a custom pretrained RobertaTokenizer (https://huggingface.co/omarmomen/babylm_tokenizer_32k).
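A minimal loading sketch with the transformers AutoClasses is below. Since StructFormer is a custom architecture rather than a stock transformers model, `trust_remote_code=True` is assumed to be required; the masked-LM head and the fill-mask usage shown here are also assumptions, so check the repository files if loading fails.

```python
# Hypothetical usage sketch for this model card (assumptions noted above).
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "omarmomen/structformer_s2_final_with_pos"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code=True is assumed, since the architecture is custom.
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)

# Score a masked position; tokenizer.mask_token avoids hard-coding "<mask>".
text = f"The children {tokenizer.mask_token} in the garden."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# Logits have shape (batch, sequence_length, vocab_size).
print(outputs.logits.shape)
```

The tokenizer is the custom 32k RobertaTokenizer linked above, which ships with the model repository, so no separate tokenizer download is needed.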