
Model Card for omarmomen/structroberta_s1_final

This model is part of the experiments in the paper "Increasing The Performance of Cognitively Inspired Data-Efficient Language Models via Implicit Structure Building", published at the BabyLM workshop at CoNLL 2023 (https://aclanthology.org/2023.conll-babylm.29/).

omarmomen/structroberta_s1_final is a modification of the RoBERTa model that incorporates syntactic inductive bias through an unsupervised parsing mechanism.

This model variant places the parser network ahead of all attention blocks.

The model is pretrained on the BabyLM 10M dataset using a custom RobertaTokenizer (https://huggingface.co/omarmomen/babylm_tokenizer_32k).

Preprint: https://arxiv.org/abs/2310.20589
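
Because the repo contains custom modeling code, it is not supported by the serverless Inference API and must be run locally with `trust_remote_code=True`. Below is a minimal masked-LM usage sketch; the repo ids and the `<mask>` token come from this card, while the assumption that the custom code exposes a standard masked-LM head through `AutoModelForMaskedLM` is mine.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the custom tokenizer referenced above and the model itself.
# trust_remote_code=True is required because the repo ships custom code.
tokenizer = AutoTokenizer.from_pretrained("omarmomen/babylm_tokenizer_32k")
model = AutoModelForMaskedLM.from_pretrained(
    "omarmomen/structroberta_s1_final", trust_remote_code=True
)

text = "The children <mask> in the park."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Decode the top prediction at the masked position.
mask_idx = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_idx].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```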
