---
# mpt-7b-gsm8k-pruned40-quant

**Paper**: [https://arxiv.org/pdf/xxxxxxx.pdf](https://arxiv.org/pdf/xxxxxxx.pdf)

**Code**: https://github.com/neuralmagic/deepsparse/tree/main/research/mpt

This model was produced from an [MPT-7B base model](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pt) finetuned on the GSM8k dataset, with pruning applied using [SparseGPT](https://arxiv.org/abs/2301.00774) followed by retraining for 2 epochs with L2 distillation. It was then exported for optimized inference with [DeepSparse](https://github.com/neuralmagic/deepsparse/tree/main/research/mpt).

GSM8k zero-shot accuracy with [lm-evaluation-harness](https://github.com/neuralmagic/lm-evaluation-harness): 30.33% (FP32 baseline: 28.2%)
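To make the compression terms above concrete, here is an illustrative toy in plain Python. It is not SparseGPT (which selects weights using second-order information) and not the card's actual W8A8 scheme; it only shows numerically what "40% unstructured pruning" and symmetric INT8 weight quantization mean.

```python
# Toy illustration only: magnitude pruning + symmetric per-tensor INT8
# quantization. SparseGPT uses second-order information to choose weights;
# simple magnitude pruning stands in for it here.

def magnitude_prune(weights, sparsity=0.40):
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    n_prune = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[n_prune - 1] if n_prune else None
    pruned, zeroed = [], 0
    for w in weights:
        if zeroed < n_prune and abs(w) <= threshold:
            pruned.append(0.0)
            zeroed += 1
        else:
            pruned.append(w)
    return pruned

def quantize_int8(weights):
    """Symmetric INT8 quantization: w ≈ scale * q, with q in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.003, 0.2, -0.8, 0.06, 0.5]
pruned = magnitude_prune(weights, 0.40)      # 4 of 10 weights become exactly 0
q, scale = quantize_int8(pruned)             # 8-bit integers plus one FP scale
dequant = [scale * x for x in q]             # reconstruction used at inference

print(pruned)
print(q, scale)
```

The sparse, integer representation is what lets an engine like DeepSparse skip zeroed weights and use 8-bit arithmetic on CPU.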
### Usage
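The diff elides the body of the Usage section (original lines 18–28). As a placeholder sketch only: DeepSparse provides a `TextGeneration` pipeline, and a minimal invocation might look like the following — the `hf:` model stub and call shape are assumptions, not the card's actual snippet.

```python
# Hedged sketch, not the card's elided snippet: assumes `pip install deepsparse`
# and DeepSparse's TextGeneration pipeline; the "hf:" stub format is an assumption.
MODEL_STUB = "hf:neuralmagic/mpt-7b-gsm8k-pruned40-quant"

def load_pipeline(stub=MODEL_STUB):
    # Lazy import so the sketch can be read without deepsparse installed.
    from deepsparse import TextGeneration
    return TextGeneration(model=stub)

# Example (downloads the model on first use):
#   pipe = load_pipeline()
#   print(pipe("Natalia sold clips to 48 of her friends...").generations[0].text)
```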

All MPT model weights are available on [SparseZoo](https://sparsezoo.neuralmagic.com).

| Model | Compression |
| --- | --- |
| [neuralmagic/mpt-7b-gsm8k-pruned50-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned50-quant) | Quantization (W8A8) & 50% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned60-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned60-quant) | Quantization (W8A8) & 60% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned70-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned70-quant) | Quantization (W8A8) & 70% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned75-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned75-quant) | Quantization (W8A8) & 75% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned80-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned80-quant) | Quantization (W8A8) & 80% Pruning |