|
# oBERT-12-upstream-pruned-unstructured-97 |
|
|
|
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259). |
|
|
|
|
|
It corresponds to the upstream pruned model used as the starting point for sparse-transfer learning to downstream tasks, presented in the paper as `Table 2 - oBERT - {SQuADv1, MNLI, QQP} - 97%`.
|
|
|
Finetuned versions of this model for each downstream task are: |
|
|
|
- SQuADv1: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-squadv1` |
|
- MNLI: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-mnli` |
|
- QQP: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-qqp` |
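
As a minimal sketch (not part of the original card), the upstream checkpoint and its finetuned variants can be loaded with the Hugging Face `transformers` library. The model identifiers are the ones listed above; the choice of head classes (masked-LM for the upstream model, question answering for the SQuADv1 variant) is an assumption based on the downstream tasks.

```python
# Minimal sketch: loading the checkpoints with Hugging Face transformers.
# Model identifiers are taken from the list above; the head classes used here
# (masked-LM upstream, QA for the SQuADv1 variant) are assumptions.
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    AutoModelForQuestionAnswering,
)

# Upstream pruned model (masked-language-modeling head), used as the
# starting point for sparse-transfer learning.
upstream_name = "neuralmagic/oBERT-12-upstream-pruned-unstructured-97"
tokenizer = AutoTokenizer.from_pretrained(upstream_name)
upstream_model = AutoModelForMaskedLM.from_pretrained(upstream_name)

# Example finetuned variant for SQuADv1 (question answering).
squad_name = "neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-squadv1"
squad_model = AutoModelForQuestionAnswering.from_pretrained(squad_name)
```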
|
|
|
```
Pruning method: oBERT upstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: BookCorpus and English Wikipedia
Sparsity: 97%
Number of layers: 12
```
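
As an illustrative check (not from the original card), the reported 97% unstructured sparsity can be estimated by counting zero-valued entries in the model's linear-layer weights. Restricting the count to `torch.nn.Linear` weights and ignoring embeddings and biases is an assumption about how the sparsity figure is computed.

```python
# Illustrative sketch: estimate the unstructured sparsity of the pruned model.
# Counting only Linear-layer weights (no embeddings, no biases) is an assumption.
import torch
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained(
    "neuralmagic/oBERT-12-upstream-pruned-unstructured-97"
)

total, zeros = 0, 0
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        weight = module.weight.detach()
        total += weight.numel()
        zeros += int((weight == 0).sum())

print(f"Linear-weight sparsity: {zeros / total:.2%}")
```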
|
|
|
Code: _coming soon_ |
|
|
|
## BibTeX entry and citation info |
|
```bibtex
@article{kurtic2022optimal,
  title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
  author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
  journal={arXiv preprint arXiv:2203.07259},
  year={2022}
}
```