80% 1x4 Block Sparse BERT-Base (uncased) Fine Tuned on SQuADv1.1

This model is the result of fine-tuning the Prune Once for All (Prune OFA) 80% 1x4 block sparse pre-trained BERT-Base on SQuADv1.1 with knowledge distillation. It achieves the following results on the SQuADv1.1 development set:
{"exact_match": 81.2867, "f1": 88.4735}

For further details, see our paper, Prune Once for All: Sparse Pre-Trained Language Models (https://arxiv.org/abs/2111.05754), and our open-source implementation.
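
The model can be used as a standard extractive question-answering checkpoint. Here is a minimal usage sketch with the `transformers` question-answering pipeline; the repository ID is assumed from the card title, so substitute the actual ID if it differs:

```python
# Minimal sketch: running the model with the transformers QA pipeline.
# The model ID below is an assumption based on the card title.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Intel/bert-base-uncased-squadv1.1-sparse-80-1x4-block-pruneofa",  # assumed repo ID
)

result = qa(
    question="What sparsity pattern does this model use?",
    context=(
        "Prune Once for All (Prune OFA) produces sparse pre-trained language "
        "models. This checkpoint uses an 80% 1x4 block sparsity pattern and "
        "was fine-tuned on SQuADv1.1 with knowledge distillation."
    ),
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```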
