- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time • Paper 2203.05482
- Diverse Weight Averaging for Out-of-Distribution Generalization • Paper 2205.09739
- Fusing finetuned models for better pretraining • Paper 2204.03044
- Sudden Drops in the Loss: Syntax Acquisition, Phase Transitions, and Simplicity Bias in MLMs • Paper 2309.07311
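The first paper in the list above proposes "model soups": element-wise averaging of the weights of several models fine-tuned from the same initialization. Below is a minimal sketch of the uniform-soup variant, using plain Python dicts of parameter lists as stand-ins for framework state dicts (the `uniform_soup` name and the dict representation are illustrative, not from the paper's code):

```python
def uniform_soup(checkpoints):
    """Average a list of parameter dicts with identical keys and shapes.

    Each checkpoint maps parameter name -> flat list of floats; the soup
    is the element-wise mean of every parameter across checkpoints.
    """
    n = len(checkpoints)
    return {
        key: [sum(vals) / n for vals in zip(*(ckpt[key] for ckpt in checkpoints))]
        for key in checkpoints[0]
    }

# Two toy fine-tuned checkpoints with the same parameter names.
ckpt_a = {"w": [1.0, 2.0], "b": [0.0]}
ckpt_b = {"w": [3.0, 4.0], "b": [2.0]}

soup = uniform_soup([ckpt_a, ckpt_b])
print(soup)  # {'w': [2.0, 3.0], 'b': [1.0]}
```

The same averaging costs nothing at inference time, since the result is a single model of the original size.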
Niels Horn (nilq)

AI & ML interests: natural language understanding, synthetic emotional speech, mechanistic interpretability.

Collections: 4

Models (16):
- nilq/baby-python-1L-mistral-lua-stories-slerp • Text Generation
- nilq/baby-python-mistral-1L-tiny-lua-ft • Text Generation
- nilq/baby-python-mistral-1L-tiny-TinyStories-ft • Text Generation
- nilq/baby-python-mistral-1L-tiny-base • Text Generation
- nilq/lua-stories-slerp-mistral-1L-tiny • Text Generation
- nilq/lua-stories-slerp-mistral-2L-tiny • Text Generation
- nilq/mistral-2L-tiny • Text Generation
- nilq/lua-stories-linear-mistral-1L-tiny • Text Generation
- nilq/python-mistral-1L-mini • Text Generation
- nilq/mistral-1L-tiny • Text Generation
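Several of the model names above encode the merge method used (`-slerp` for spherical linear interpolation, `-linear` for plain interpolation). A minimal sketch of slerp on two flat weight vectors, as commonly used for merging checkpoints (the function name and lerp fallback for near-parallel vectors are assumptions, not taken from these repos):

```python
import math

def slerp(v0, v1, t, eps=1e-8):
    """Spherical linear interpolation between two flat weight vectors.

    Interpolates along the arc between v0 and v1 with mixing factor t,
    falling back to ordinary linear interpolation when the vectors are
    nearly parallel and the sine denominator would vanish.
    """
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    cos_omega = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    cos_omega = max(-1.0, min(1.0, cos_omega))  # guard acos domain
    omega = math.acos(cos_omega)
    if abs(math.sin(omega)) < eps:  # nearly parallel: plain lerp
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# Halfway between orthogonal unit vectors stays on the unit sphere,
# unlike linear interpolation, which would shrink the norm.
mid = slerp([1.0, 0.0], [0.0, 1.0], 0.5)
print(mid)  # [0.7071..., 0.7071...]
```

In practice each parameter tensor is flattened, interpolated, and reshaped; linear merging is the degenerate case that skips the angular reweighting.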
Datasets (9):
- nilq/baby-python-and-tiny-stories-and-lua
- nilq/baby-python-and-lua
- nilq/baby-python-and-tiny-stories
- nilq/python-and-tiny-stories
- nilq/baby-python
- nilq/small-lua-stack
- nilq/small-python-stack
- nilq/babylm-100M
- nilq/babylm-10M