---
datasets:
- gsm8k
tags:
- deepsparse
---
# mpt-7b-gsm8k-pruned40-quant
This model was produced from an MPT-7B base model fine-tuned on the GSM8k dataset, with pruning and quantization applied using [SparseGPT](https://arxiv.org/abs/2301.00774). It was then exported for optimized inference with [DeepSparse](https://github.com/neuralmagic/deepsparse/tree/main/research/mpt).
GSM8k zero-shot accuracy with [lm-evaluation-harness](https://github.com/neuralmagic/lm-evaluation-harness): 30.33%
### Usage
```python
from deepsparse import TextGeneration

# Download the sparse-quantized model from the Hugging Face Hub and compile it for CPU inference
model = TextGeneration(model="hf:neuralmagic/mpt-7b-gsm8k-pruned40-quant")
output = model("There are twice as many boys as girls at Dr. Wertz's school. If there are 60 girls and 5 students to every teacher, how many teachers are there?", max_new_tokens=50)
print(output.generations[0].text)
```
All MPT model weights are available on [SparseZoo](https://sparsezoo.neuralmagic.com/?datasets=gsm8k&ungrouped=true), and the CPU speedup for generative inference can be reproduced by following the instructions at [DeepSparse](https://github.com/neuralmagic/deepsparse/tree/main/research/mpt).
| Model Links | Compression |
| --------------------------------------------------------------------------------------------------------- | --------------------------------- |
| [neuralmagic/mpt-7b-gsm8k-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-quant) | Quantization (W8A8) |
| [neuralmagic/mpt-7b-gsm8k-pruned40-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned40-quant) | Quantization (W8A8) & 40% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned50-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned50-quant) | Quantization (W8A8) & 50% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned60-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned60-quant) | Quantization (W8A8) & 60% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned70-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned70-quant) | Quantization (W8A8) & 70% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned80-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned80-quant) | Quantization (W8A8) & 80% Pruning |
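The DeepSparse instructions linked above are the authoritative way to reproduce the benchmarks. As a quick local sanity check, the minimal sketch below times a single generation with the same pipeline as the usage example; the prompt is an arbitrary GSM8k-style question, and the tokens-per-second estimate assumes the full `max_new_tokens` budget is actually generated.

```python
import time

from deepsparse import TextGeneration

# Compile the sparse-quantized model for CPU inference
model = TextGeneration(model="hf:neuralmagic/mpt-7b-gsm8k-pruned40-quant")

prompt = (
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether "
    "in April and May?"
)

# Time one generation; a single run is only a rough estimate, since the
# first call can include warmup overhead.
start = time.perf_counter()
output = model(prompt, max_new_tokens=50)
elapsed = time.perf_counter() - start

print(output.generations[0].text)
# Rough throughput, assuming all 50 new tokens were generated
print(f"~{50 / elapsed:.1f} tokens/sec over one run")
```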
For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).