
90% Sparse BERT-Large (uncased) Fine-Tuned on SQuADv1.1

This model is the result of fine-tuning a Prune Once for All (Prune OFA) 90% sparse pre-trained BERT-Large, combined with knowledge distillation. It yields the following results on the SQuADv1.1 development set:
{"exact_match": 83.56669820245979, "f1": 90.20829352733487}
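The two metrics above are the standard SQuAD scores: exact match (EM) checks whether the normalized predicted answer string equals the normalized gold answer, and F1 measures token-level overlap. A minimal sketch of how they are computed per example, following the usual SQuAD normalization (lowercasing, stripping punctuation and articles):

```python
import collections
import re
import string


def normalize_answer(s):
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def exact_match(prediction, ground_truth):
    """1.0 if the normalized strings match exactly, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(ground_truth))


def f1(prediction, ground_truth):
    """Token-overlap F1 between normalized prediction and gold answer."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(ground_truth).split()
    common = collections.Counter(pred_tokens) & collections.Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

The reported numbers are these per-example scores averaged over the development set (taking the maximum over the gold answers for each question).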

For further details, see our paper, Prune Once for All: Sparse Pre-Trained Language Models, and our open-source implementation.
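Fine-tuning with knowledge distillation means the sparse student is trained against a dense teacher's soft predictions in addition to the hard labels. A minimal NumPy sketch of a common distillation loss (temperature-scaled KL term plus cross-entropy); the temperature and weighting below are illustrative assumptions, not the hyperparameters used for this model:

```python
import numpy as np


def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """alpha * CE(student, labels) + (1 - alpha) * T^2 * KL(teacher || student).

    The T^2 factor keeps the gradient scale of the soft-target term
    comparable across temperatures.
    """
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature))
    kl = np.sum(p_teacher * (np.log(p_teacher) - log_p_student), axis=-1).mean()
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels]).mean()
    return alpha * ce + (1 - alpha) * temperature ** 2 * kl
```

When the student matches the teacher exactly, the KL term vanishes and only the weighted cross-entropy against the hard labels remains.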


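The checkpoint can be loaded with the standard transformers question-answering pipeline. A sketch (the model ID is from this card; the import is deferred so nothing is downloaded until the pipeline is actually built):

```python
MODEL_ID = "Intel/bert-large-uncased-squadv1.1-sparse-90-unstructured"


def load_qa_pipeline():
    # Deferred import: requires the transformers package, and network
    # access to fetch the checkpoint on first use.
    from transformers import pipeline
    return pipeline("question-answering", model=MODEL_ID, tokenizer=MODEL_ID)


# Usage (requires transformers and network access):
#   qa = load_qa_pipeline()
#   qa(question="...", context="...")
```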