Salhan et al. (2024): Multilingual BabyLMs trained on CHILDES corpora.

CLIMB-MAO
Less is More: Pre-Training Cross-Lingual Small-Scale Language Models with Cognitively-Plausible Curriculum Learning Strategies. Available from: https://arxiv.org/abs/2410.22886.
Salhan et al. (2024) create age-ordered corpora of Child-Directed Speech for four typologically distant language families, and use them to train small-scale language models (SSLMs) with acquisition-inspired curricula cross-lingually.
The MAO-CHILDES dataset contains extracted orthographic datasets for French, German, Japanese, and Chinese, as well as several other lower-resource languages. It is part of a wider effort toward cognitively-inspired pretraining using resources from Language Acquisition research.
You can also find pretrained BabyLMs for French, German, Japanese, and Chinese, each trained with three different cognitively-inspired curriculum learning strategies, available in the branches of each language-specific BabyLM repository.
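Since the curriculum variants live in repository branches, a specific one can be selected with the `revision` argument of `from_pretrained` in the `transformers` library. The sketch below builds the repo id for a given language; the branch name is a placeholder, as the actual curriculum branch names are listed in each model repo on the Hub.

```python
def babylm_repo(language: str) -> str:
    """Build the Hub repo id for a language-specific CLIMB BabyLM."""
    return f"climb-mao/{language}-climb-roberta_pre_layer_norm-model"

repo_id = babylm_repo("french")
branch = "curriculum-branch"  # placeholder: use a real branch name from the repo

# With `transformers` installed, the branch is selected via `revision`:
# from transformers import AutoModelForMaskedLM
# model = AutoModelForMaskedLM.from_pretrained(repo_id, revision=branch)

print(repo_id)
```

The same pattern applies to the other languages by swapping the language prefix in the repo id.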
Collections: 1
Models: 22

climb-mao/dutch-childes-curricula
climb-mao/japanese-childes-curricula
climb-mao/english-childes-curricula
climb-mao/portuguese-climb-roberta_pre_layer_norm-model
climb-mao/german-climb-roberta_pre_layer_norm-model
climb-mao/spanish-climb-roberta_pre_layer_norm-model
climb-mao/chinese-climb-roberta_pre_layer_norm-model
climb-mao/french-climb-roberta_pre_layer_norm-model
climb-mao/RON-CamBabyTokenizer
climb-mao/CAT-CamBabyTokenizer