EfficientLLM: Pruning-Aware Pretraining Collection. This collection contains the models from our paper "EfficientLLM: Scalable Pruning-Aware Pretraining for Architecture-Agnostic Edge Language Models".