---
language: en
---

# 90% Sparse BERT-Large (uncased) Fine-Tuned on SQuAD v1.1

This model is the result of fine-tuning a 90% sparse BERT-Large pre-trained with Prune Once for All (Prune OFA), combined with knowledge distillation. It achieves the following results on the SQuAD v1.1 development set:

`{"exact_match": 83.56669820245979, "f1": 90.20829352733487}`
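To illustrate what "90% sparse" means here, a minimal NumPy sketch of unstructured magnitude pruning on a toy weight matrix. This is only an illustration of the sparsity pattern, not the Prune OFA training procedure itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dense weight matrix standing in for a BERT-Large linear layer.
weights = rng.normal(size=(1024, 1024))

# Unstructured magnitude pruning: zero out the 90% of weights
# with the smallest absolute value.
threshold = np.quantile(np.abs(weights), 0.90)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

# Fraction of zero entries, i.e. the sparsity level.
sparsity = (pruned == 0).mean()
print(f"sparsity: {sparsity:.2%}")  # close to 90%
```

The pruned matrix keeps the same shape as the dense one; the zeros are what sparse inference kernels can exploit for speed and memory savings.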

For further details, see our paper, *Prune Once for All: Sparse Pre-Trained Language Models*, and our open-source implementation, available here.