- The Reversal Curse: LLMs trained on "A is B" fail to learn "B is A"
  Paper • 2309.12288 • Published
- Are Emergent Abilities in Large Language Models just In-Context Learning?
  Paper • 2309.01809 • Published
- When Less is More: Investigating Data Pruning for Pretraining LLMs at Scale
  Paper • 2309.04564 • Published
- CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages
  Paper • 2309.09400 • Published
Moses Ananta (mosesananta)