---
license: mit
datasets:
  - omarmomen/babylm_10M
language:
  - en
metrics:
  - perplexity
library_name: transformers
---

# Model Card for omarmomen/structformer_s1_final_with_pos

This model is part of the experiments in the paper "Increasing The Performance of Cognitively Inspired Data-Efficient Language Models via Implicit Structure Building", published at the BabyLM workshop at CoNLL 2023 (https://aclanthology.org/2023.conll-babylm.29/).

omarmomen/structformer_s1_final_with_pos modifies the vanilla transformer encoder to incorporate syntactic inductive bias through an unsupervised parsing mechanism.

This model variant places the parser network ahead of all the attention blocks.

The model is pretrained on the BabyLM 10M dataset using a custom pretrained RobertaTokenizer (https://huggingface.co/omarmomen/babylm_tokenizer_32k).
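
Because the architecture is custom, the snippet below is only a minimal usage sketch: it assumes the repository ships its own modeling code (hence `trust_remote_code=True`) and exposes a masked-LM head. The example sentence and variable names are illustrative.

```python
# Minimal usage sketch (assumptions: the checkpoint exposes a masked-LM head
# and ships custom modeling code, hence trust_remote_code=True).
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("omarmomen/babylm_tokenizer_32k")
model = AutoModelForMaskedLM.from_pretrained(
    "omarmomen/structformer_s1_final_with_pos",
    trust_remote_code=True,
)

# Fill a masked token as a quick sanity check.
inputs = tokenizer("The child <mask> the ball.", return_tensors="pt")
outputs = model(**inputs)
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
predicted_id = outputs.logits[0, mask_index].argmax(-1)
print(tokenizer.decode(predicted_id))
```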

Preprint: https://arxiv.org/abs/2310.20589
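
The perplexity metric listed in the metadata is not specified further here. For masked language models, one common recipe is pseudo-perplexity, where each token is masked in turn and scored. The sketch below illustrates that recipe, reusing `model` and `tokenizer` from the snippet above; it is an assumption about the metric, not the paper's evaluation script.

```python
# Pseudo-perplexity sketch for a masked LM (assumption: this follows the
# common mask-each-token-in-turn recipe, not necessarily the paper's setup).
import math
import torch

def pseudo_perplexity(model, tokenizer, text):
    enc = tokenizer(text, return_tensors="pt")
    input_ids = enc.input_ids[0]
    nll, n = 0.0, 0
    # Skip the special tokens at positions 0 and -1 (BOS/EOS).
    for i in range(1, input_ids.size(0) - 1):
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked.unsqueeze(0)).logits
        log_probs = logits[0, i].log_softmax(-1)
        nll -= log_probs[input_ids[i]].item()
        n += 1
    return math.exp(nll / n)

print(pseudo_perplexity(model, tokenizer, "The child kicked the ball."))
```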