Article: The case for specialized pre-training: ultra-fast foundation models for dedicated tasks — by Pclanglais, published Aug 4
Paper: The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits (arXiv 2402.17764, published Feb 27)
Paper: PALO: A Polyglot Large Multimodal Model for 5B People (arXiv 2402.14818, published Feb 22)