
Model Details: 90% Sparse BERT-Large (uncased) Prune Once for All

This model is a sparse pre-trained model that can be fine-tuned for a wide range of language tasks. Weight pruning forces some of the weights of the neural network to zero, which yields sparser weight matrices. Neural network computation relies on matrix multiplication, and if the matrices can be kept sparse while retaining enough of the important information, the overall computational overhead can be reduced. The term "sparse" in the title of the model refers to the ratio of zeroed weights; for more details, see Zafrir et al. (2021).
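
As a toy illustration (not code from the paper), the sparsity ratio of a weight matrix is simply the fraction of its entries that are zero; a 90% sparse model keeps only about 10% of its weights non-zero:

```python
def sparsity_ratio(weights):
    """Return the fraction of zero entries in a 2-D weight matrix (list of rows)."""
    zeros = sum(1 for row in weights for v in row if v == 0.0)
    total = sum(len(row) for row in weights)
    return zeros / total

# Toy 2x5 weight matrix with 9 of its 10 entries pruned to zero.
w = [[0.0, 0.0, 0.13, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.0, 0.0]]
print(sparsity_ratio(w))  # 0.9
```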

Visualization of Prune Once for All method from Zafrir et al. (2021): Zafrir2021_Fig1.png

| Model Detail | Description |
| --- | --- |
| Model Authors - Company | Intel |
| Date | September 30, 2021 |
| Version | 1 |
| Type | NLP - General sparse language model |
| Architecture | "The method consists of two steps, teacher preparation and student pruning. The sparse pre-trained model we trained is the model we use for transfer learning while maintaining its sparsity pattern. We call the method Prune Once for All since we show how to fine-tune the sparse pre-trained models for several language tasks while we prune the pre-trained model only once." (Zafrir et al., 2021) |
| Paper or Other Resources | Zafrir et al. (2021); GitHub Repo |
| License | Apache 2.0 |
| Questions or Comments | Community Tab and Intel Developers Discord |
| Intended Use | Description |
| --- | --- |
| Primary intended uses | This is a general sparse language model; in its current form, it is not ready for downstream prediction tasks, but it can be fine-tuned for several language tasks including (but not limited to) question-answering, genre natural language inference, and sentiment classification. |
| Primary intended users | Anyone who needs an efficient general language model for other downstream tasks. |
| Out-of-scope uses | The model should not be used to intentionally create hostile or alienating environments for people. |

How to use

Here is an example of how to import this model in Python:


import transformers

model = transformers.AutoModelForQuestionAnswering.from_pretrained('Intel/bert-large-uncased-sparse-90-unstructured-pruneofa')

For more code examples, refer to the GitHub Repo.

Metrics (Model Performance):

| Model | Model Size | SQuADv1.1 (EM/F1) | MNLI-m (Acc) | MNLI-mm (Acc) | QQP (Acc/F1) | QNLI (Acc) | SST-2 (Acc) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 80% Sparse BERT-Base uncased fine-tuned on SQuAD1.1 | - | 81.29/88.47 | - | - | - | - | - |
| 85% Sparse BERT-Base uncased | Medium | 81.10/88.42 | 82.71 | 83.67 | 91.15/88.00 | 90.34 | 91.46 |
| 90% Sparse BERT-Base uncased | Medium | 79.83/87.25 | 81.45 | 82.43 | 90.93/87.72 | 89.07 | 90.88 |
| 90% Sparse BERT-Large uncased | Large | 83.35/90.20 | 83.74 | 84.20 | 91.48/88.43 | 91.39 | 92.95 |
| 85% Sparse DistilBERT uncased | Small | 78.10/85.82 | 81.35 | 82.03 | 90.29/86.97 | 88.31 | 90.60 |
| 90% Sparse DistilBERT uncased | Small | 76.91/84.82 | 80.68 | 81.47 | 90.05/86.67 | 87.66 | 90.02 |

All results are the mean of two separate experiments with the same hyper-parameters and different seeds.

| Training and Evaluation Data | Description |
| --- | --- |
| Datasets | English Wikipedia Dataset (2500M words). |
| Motivation | To build an efficient and accurate base model for several downstream language tasks. |
| Preprocessing | "We use the English Wikipedia dataset (2500M words) for training the models on the pre-training task. We split the data into train (95%) and validation (5%) sets. Both sets are preprocessed as described in the models’ original papers (Devlin et al., 2019, Sanh et al., 2019). We process the data to use the maximum sequence length allowed by the models, however, we allow shorter sequences at a probability of 0.1." |
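
The quoted sequence-length rule can be sketched as follows. This is an assumed reading of the preprocessing description, with a hypothetical `choose_target_length` helper, not code from the paper's repository:

```python
import random

MAX_LEN = 512  # maximum sequence length allowed by BERT-style models

def choose_target_length(rng):
    """Pick the packed sequence length for one training example:
    use MAX_LEN most of the time, a shorter length with probability 0.1."""
    if rng.random() < 0.1:
        return rng.randint(2, MAX_LEN - 1)  # a shorter sequence
    return MAX_LEN

rng = random.Random(0)
lengths = [choose_target_length(rng) for _ in range(10_000)]
short = sum(1 for n in lengths if n < MAX_LEN)
print(short / len(lengths))  # close to 0.1
```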
| Ethical Considerations | Description |
| --- | --- |
| Data | The training data come from Wikipedia articles. |
| Human life | The model is not intended to inform decisions central to human life or flourishing. It is an aggregated set of labelled Wikipedia articles. |
| Mitigations | No additional risk mitigation strategies were considered during model development. |
| Risks and harms | Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al., 2021, and Bender et al., 2021). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Beyond this, the extent of the risks involved in using the model remains unknown. |
| Use cases | - |
Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. There are no additional caveats or recommendations for this model.

BibTeX entry and citation info

@article{zafrir2021prune,
  title={Prune Once for All: Sparse Pre-Trained Language Models},
  author={Zafrir, Ofir and Larey, Ariel and Boudoukh, Guy and Shen, Haihao and Wasserblat, Moshe},
  journal={arXiv preprint arXiv:2111.05754},
  year={2021}
}