---
base_model: teknium/OpenHermes-2.5-Mistral-7B
inference: true
model_type: mistral
quantized_by: mgoin
tags:
- nm-vllm
- sparse
---
## OpenHermes-2.5-Mistral-7B-pruned50
This repo contains model files for [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) optimized for [nm-vllm](https://github.com/neuralmagic/nm-vllm), a high-throughput serving engine for compressed LLMs.
This model was pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml).
## Inference
Install [nm-vllm](https://github.com/neuralmagic/nm-vllm) for fast inference and low memory usage:
```bash
pip install nm-vllm[sparse]
```
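As an optional sanity check that the install succeeded (nm-vllm ships the `vllm` module, which the examples below import):
```python
# Optional sanity check: nm-vllm installs under the `vllm` module name
import vllm
print(vllm.__version__)
```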
Run in a Python pipeline for local inference:
```python
from vllm import LLM, SamplingParams

# Load the 50% pruned model with nm-vllm's sparse_w16a16 kernels
model = LLM("nm-testing/OpenHermes-2.5-Mistral-7B-pruned50", sparsity="sparse_w16a16")

# Wrap the question in the model's ChatML-style prompt template
prompt = "How to make banana bread?"
formatted_prompt = f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"

sampling_params = SamplingParams(max_tokens=100)
outputs = model.generate(formatted_prompt, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
"""
Here is a simple recipe for making banana bread:
Ingredients:
- 3 ripe bananas
- 2 eggs
- 1/2 cup of sugar
- 1/2 cup of butter
- 2 cups of flour
- 1 teaspoon baking powder
- 2 teaspoons of baking soda
Instructions:
1. Preheat your oven at 350 degree Fahrenant.
"""
```
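The engine also handles batched requests: passing a list of prompts to a single `generate` call lets vLLM schedule them together. A minimal sketch (the second question is just an illustrative placeholder):
```python
from vllm import LLM, SamplingParams

model = LLM("nm-testing/OpenHermes-2.5-Mistral-7B-pruned50", sparsity="sparse_w16a16")

# Batch several prompts in one call; the engine schedules them together
questions = ["How to make banana bread?", "What is model pruning?"]
prompts = [f"<|im_start|>user\n{q}<|im_end|>\n<|im_start|>assistant" for q in questions]

sampling_params = SamplingParams(max_tokens=100)
for output in model.generate(prompts, sampling_params=sampling_params):
    print(output.outputs[0].text)
```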
## Prompt template
```
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
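If you build prompts by hand, a small helper keeps the template consistent; `format_prompt` below is just an illustrative name, not part of any library:
```python
def format_prompt(user_message: str) -> str:
    # Wrap a single user turn in the ChatML-style template shown above
    return f"<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"

print(format_prompt("How to make banana bread?"))
```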
## Sparsification
For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below.
Install [SparseML](https://github.com/neuralmagic/sparseml):
```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
```
Adjust the recipe as needed, then run this one-shot compression script to apply SparseGPT:
```python
import sparseml.transformers

original_model_name = "teknium/OpenHermes-2.5-Mistral-7B"
calibration_dataset = "open_platypus"
output_directory = "output/"

# Raw string so the regex escape (\d) in `targets` is preserved
recipe = r"""
test_stage:
  obcq_modifiers:
    SparseGPTModifier:
      sparsity: 0.5
      sequential_update: true
      mask_structure: 0:0
      targets: ['re:model.layers.\d*$']
"""

# Apply SparseGPT to the model in one shot using the calibration dataset
sparseml.transformers.oneshot(
    model=original_model_name,
    dataset=calibration_dataset,
    recipe=recipe,
    output_dir=output_directory,
)
```
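To sanity-check the result, you can measure the fraction of zeroed weights in the saved checkpoint. A rough sketch, assuming the one-shot run wrote a standard Hugging Face checkpoint to `output/`:
```python
import torch
from transformers import AutoModelForCausalLM

# Load the pruned checkpoint and count zeroed weight entries
model = AutoModelForCausalLM.from_pretrained("output/", torch_dtype=torch.float16)

total = zeros = 0
for name, param in model.named_parameters():
    if param.dim() == 2:  # the recipe prunes 2-D weight matrices in the decoder layers
        total += param.numel()
        zeros += (param == 0).sum().item()

print(f"Overall weight sparsity: {zeros / total:.2%}")
```
Because the recipe targets only the decoder layers, the embeddings and LM head remain dense, so the overall figure lands slightly below 0.5.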
## Slack
For further support, and for discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).