
# oBERT-12-upstream-pruned-unstructured-97-finetuned-mnli-v2

This model is obtained with The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models.

It corresponds to the model presented in Table 2 under oBERT - MNLI 97% (in the upcoming updated version of the paper).

- Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
- Paper: https://arxiv.org/abs/2203.07259
- Dataset: MNLI
- Sparsity: 97%
- Number of layers: 12
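
A minimal loading-and-inference sketch with the Hugging Face `transformers` API is below. The Hub id `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-mnli-v2` is assumed from the card title and may differ; the premise/hypothesis pair is illustrative only.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical Hub id, assumed from the card title.
model_id = "neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-mnli-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MNLI is sentence-pair classification: premise + hypothesis -> {entailment, neutral, contradiction}.
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Label order varies between checkpoints; read it from the config instead of hard-coding it.
print(model.config.id2label[logits.argmax(dim=-1).item()])
```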

The dev-set performance reported in the paper is averaged over four seeds, and we release the best model (marked with (*)):

| oBERT 97%      | m-acc | mm-acc |
| -------------- | ----- | ------ |
| seed=42        | 80.86 | 80.88  |
| seed=3407      | 80.83 | 81.65  |
| seed=123 (*)   | 81.18 | 81.06  |
| seed=12345     | 80.79 | 80.95  |
| mean           | 80.91 | 81.13  |
| stdev          | 0.178 | 0.351  |
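
Because the pruning is unstructured, the 97% sparsity shows up as exact zeros in the encoder's weight matrices and can be checked directly. A rough sketch, again assuming the hypothetical Hub id above and the usual convention that biases, LayerNorm, and embedding tables are excluded from the sparsity count:

```python
import torch
from transformers import AutoModelForSequenceClassification

# Hypothetical Hub id, assumed from the card title.
model = AutoModelForSequenceClassification.from_pretrained(
    "neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-mnli-v2"
)

def weight_sparsity(model: torch.nn.Module) -> float:
    """Fraction of exactly-zero entries in the 2-D weight matrices
    (biases, LayerNorm, and embedding tables are excluded)."""
    zeros, total = 0, 0
    for name, param in model.named_parameters():
        if param.dim() == 2 and "embeddings" not in name:
            zeros += (param == 0).sum().item()
            total += param.numel()
    return zeros / total

print(f"weight sparsity: {weight_sparsity(model):.4f}")  # expected near 0.97
```

Note that this rough count also includes the dense classifier head, so the printed figure may land slightly below the reported 97%.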

Code: https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT

If you find the model useful, please consider citing our work.

## Citation info

```bibtex
@article{kurtic2022optimal,
  title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
  author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
  journal={arXiv preprint arXiv:2203.07259},
  year={2022}
}
```