---
language: en
license: apache-2.0
datasets:
- wikipedia
---

## Model Details: 85% Sparse DistilBERT-Base (uncased) Prune Once for All

This model is a sparse pre-trained model that can be fine-tuned for a wide range of language tasks. Weight pruning forces some of the weights of the neural network to zero, which yields sparser weight matrices. Since training and inference are dominated by matrix multiplication, keeping these matrices sparse while retaining the important information reduces the overall computational overhead. "Sparse" in the model name refers to the ratio of zeroed-out weights (85% here); for more details, see [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754).

Visualization of the Prune Once for All method from [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754):

![Zafrir2021_Fig1.png](https://s3.amazonaws.com/moonup/production/uploads/6297f0e30bd2f58c647abb1d/nSDP62H9NHC1FA0C429Xo.png)
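
As a concrete illustration of the unstructured weight pruning described above (a toy magnitude-pruning sketch in PyTorch, not the actual Prune Once for All training procedure), the smallest-magnitude entries of a weight matrix can be zeroed until a target sparsity is reached:

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero the smallest-magnitude entries so that `sparsity` of them become zero."""
    k = int(weight.numel() * sparsity)                  # how many entries to zero out
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)          # keep only entries above the threshold

w = torch.randn(768, 3072)                              # e.g. a feed-forward weight matrix
w_85 = magnitude_prune(w, 0.85)
print(f"sparsity: {(w_85 == 0).float().mean():.2%}")    # ~85% of the entries are now zero
```

Roughly speaking, Prune Once for All applies this kind of gradual magnitude pruning during pre-training, combined with knowledge distillation from a teacher model, so that the resulting sparse weights can be reused across downstream tasks.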

| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel |
| Date | September 30, 2021 |
| Version | 1 |
| Type | NLP - General sparse language model |
| Architecture | "The method consists of two steps, teacher preparation and student pruning. The sparse pre-trained model we trained is the model we use for transfer learning while maintaining its sparsity pattern. We call the method Prune Once for All since we show how to fine-tune the sparse pre-trained models for several language tasks while we prune the pre-trained model only once." [(Zafrir et al., 2021)](https://arxiv.org/abs/2111.05754) |
| Paper or Other Resources | [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754); [GitHub Repo](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all) |
| License | Apache 2.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/distilbert-base-uncased-sparse-85-unstructured-pruneofa/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ) |

| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | This is a general sparse language model; in its current form, it is not ready for downstream prediction tasks, but it can be fine-tuned for several language tasks including (but not limited to) question-answering, multi-genre natural language inference, and sentiment classification. A sketch of how the sparsity pattern can be preserved during fine-tuning follows this table. |
| Primary intended users | Anyone who needs an efficient general language model for other downstream tasks. |
| Out-of-scope uses | The model should not be used to intentionally create hostile or alienating environments for people. |
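
Note that the benefit of this checkpoint lies in its sparsity pattern, so downstream fine-tuning should keep the pruned weights at zero. The scripts in the [GitHub Repo](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all) take care of this; the snippet below is only a minimal, hypothetical illustration of the idea: capture a binary mask of the zero weights at load time and re-apply it after every optimizer step.

```python
import torch

def capture_masks(model: torch.nn.Module) -> dict:
    # Record which entries of each weight matrix are zero in the sparse checkpoint
    return {name: (param != 0).float()
            for name, param in model.named_parameters() if param.dim() > 1}

@torch.no_grad()
def reapply_masks(model: torch.nn.Module, masks: dict) -> None:
    # Call after each optimizer step so that pruned weights stay exactly zero
    for name, param in model.named_parameters():
        if name in masks:
            param.mul_(masks[name].to(param.device))
```

In a training loop this amounts to calling `masks = capture_masks(model)` once after loading, and `reapply_masks(model, masks)` after every `optimizer.step()`.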

### How to use

Here is an example of how to import this model in Python:

```python
import transformers

# Sparse pre-trained weights; the question-answering head is added on top and newly initialized
model = transformers.AutoModelForQuestionAnswering.from_pretrained(
    'Intel/distilbert-base-uncased-sparse-85-unstructured-pruneofa'
)
```

For more code examples, refer to the [GitHub Repo](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all).
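
As a quick sanity check (not part of the official usage), the sparsity of the downloaded weight matrices can be inspected directly; the question-answering head added by `AutoModelForQuestionAnswering` is newly initialized, so it will show close to 0% zeros:

```python
import transformers

model = transformers.AutoModelForQuestionAnswering.from_pretrained(
    'Intel/distilbert-base-uncased-sparse-85-unstructured-pruneofa'
)

# Fraction of zero-valued entries in every weight matrix (biases and LayerNorm are skipped)
for name, param in model.named_parameters():
    if param.dim() > 1:
        print(f"{name}: {(param == 0).float().mean().item():.2%} zeros")
```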

### Metrics (Model Performance)

| Model | Model Size | SQuADv1.1 (EM/F1) | MNLI-m (Acc) | MNLI-mm (Acc) | QQP (Acc/F1) | QNLI (Acc) | SST-2 (Acc) |
|-------------------------------|:----------:|:-----------------:|:------------:|:-------------:|:------------:|:----------:|:-----------:|
| [80% Sparse BERT-Base uncased fine-tuned on SQuAD1.1](https://huggingface.co/Intel/bert-base-uncased-squadv1.1-sparse-80-1x4-block-pruneofa) | - | 81.29/88.47 | - | - | - | - | - |
| [85% Sparse BERT-Base uncased](https://huggingface.co/Intel/bert-base-uncased-sparse-85-unstructured-pruneofa) | Medium | 81.10/88.42 | 82.71 | 83.67 | 91.15/88.00 | 90.34 | 91.46 |
| [90% Sparse BERT-Base uncased](https://huggingface.co/Intel/bert-base-uncased-sparse-90-unstructured-pruneofa) | Medium | 79.83/87.25 | 81.45 | 82.43 | 90.93/87.72 | 89.07 | 90.88 |
| [90% Sparse BERT-Large uncased](https://huggingface.co/Intel/bert-large-uncased-sparse-90-unstructured-pruneofa) | Large | 83.35/90.20 | 83.74 | 84.20 | 91.48/88.43 | 91.39 | 92.95 |
| [**85% Sparse DistilBERT uncased**](https://huggingface.co/Intel/distilbert-base-uncased-sparse-85-unstructured-pruneofa) | Small | 78.10/85.82 | 81.35 | 82.03 | 90.29/86.97 | 88.31 | 90.60 |
| [90% Sparse DistilBERT uncased](https://huggingface.co/Intel/distilbert-base-uncased-sparse-90-unstructured-pruneofa) | Small | 76.91/84.82 | 80.68 | 81.47 | 90.05/86.67 | 87.66 | 90.02 |

All results are the mean of two separate experiments with the same hyper-parameters and different seeds.

| Training and Evaluation Data | Description |
| ----------- | ----------- |
| Datasets | [English Wikipedia Dataset](https://huggingface.co/datasets/wikipedia) (2500M words). |
| Motivation | To build an efficient and accurate base model for several downstream language tasks. |
| Preprocessing | "We use the English Wikipedia dataset (2500M words) for training the models on the pre-training task. We split the data into train (95%) and validation (5%) sets. Both sets are preprocessed as described in the models’ original papers ([Devlin et al., 2019](https://arxiv.org/abs/1810.04805), [Sanh et al., 2019](https://arxiv.org/abs/1910.01108)). We process the data to use the maximum sequence length allowed by the models, however, we allow shorter sequences at a probability of 0.1." (See the data split sketch below.) |
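
For readers who want to reproduce a comparable pre-training corpus and split, here is a minimal sketch using the Hugging Face `datasets` library; the `20220301.en` Wikipedia dump is an assumption for illustration, not necessarily the dump used for this model:

```python
from datasets import load_dataset

# English Wikipedia (the dump date here is an assumed example)
wiki = load_dataset("wikipedia", "20220301.en", split="train")

# 95% train / 5% validation, as described in the Preprocessing row above
splits = wiki.train_test_split(test_size=0.05, seed=42)
train_set, validation_set = splits["train"], splits["test"]
print(len(train_set), len(validation_set))
```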

| Ethical Considerations | Description |
| ----------- | ----------- |
| Data | The training data come from Wikipedia articles. |
| Human life | The model is not intended to inform decisions central to human life or flourishing. It is an aggregated set of labelled Wikipedia articles. |
| Mitigations | No additional risk mitigation strategies were considered during model development. |
| Risks and harms | Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al., 2021](https://aclanthology.org/2021.acl-long.330.pdf), and [Bender et al., 2021](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Beyond this, the extent of the risks involved in using the model remains unknown. |
| Use cases | - |

| Caveats and Recommendations |
| ----------- |
| Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. There are no additional caveats or recommendations for this model. |

### BibTeX entry and citation info

```bibtex
@article{zafrir2021prune,
  title={Prune Once for All: Sparse Pre-Trained Language Models},
  author={Zafrir, Ofir and Larey, Ariel and Boudoukh, Guy and Shen, Haihao and Wasserblat, Moshe},
  journal={arXiv preprint arXiv:2111.05754},
  year={2021}
}
```