---
language: en
tags:
- bert
- oBERT
datasets:
- squad
---
# bert-large-uncased-finetuned-squadv1
This model is a fine-tuned version of the `bert-large-uncased` model on the SQuADv1 task.
It was produced as part of the work on the paper [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
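The model can be loaded for extractive question answering with the Hugging Face `transformers` library. Below is a minimal sketch; the Hub repository id used here is an assumption, so substitute the actual repository id or a local checkpoint path.

```python
# Minimal sketch: extractive question answering with the fine-tuned model.
# The Hub repository id below is an assumption; replace it with the actual
# repository id or a local checkpoint path.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="neuralmagic/bert-large-uncased-finetuned-squadv1",  # assumed repo id
)

result = qa(
    question="What is oBERT?",
    context="The Optimal BERT Surgeon (oBERT) is a scalable and accurate "
            "second-order pruning method for large language models.",
)
print(result["answer"], result["score"])
```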
SQuADv1 dev-set:
- EM = 84.46
- F1 = 91.23
Code: https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT
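The EM and F1 numbers above can be computed with the standard SQuAD metric, for example via the Hugging Face `evaluate` library. The sketch below uses a single illustrative prediction/reference pair rather than the full dev set.

```python
# Sketch: computing SQuAD-style EM/F1 with the Hugging Face `evaluate` library.
# The ids and answers below are illustrative; on the real dev set, predictions
# would come from running the model over every question.
import evaluate

squad_metric = evaluate.load("squad")

predictions = [
    {"id": "56be4db0acb8001400a502ec", "prediction_text": "Denver Broncos"}
]
references = [
    {
        "id": "56be4db0acb8001400a502ec",
        "answers": {"text": ["Denver Broncos"], "answer_start": [177]},
    }
]

results = squad_metric.compute(predictions=predictions, references=references)
print(results)  # {'exact_match': 100.0, 'f1': 100.0}
```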
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
    title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
    author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
    journal={arXiv preprint arXiv:2203.07259},
    year={2022}
}
```