Llama-2-7b-pruned70-retrained-ultrachat-quant-ds
This repo contains a Llama 2 7B model pruned to 70% weight sparsity and fine-tuned for chat tasks on the UltraChat 200k dataset. It was then quantized to 8-bit weights and activations and exported for deployment with DeepSparse, a CPU inference runtime for sparse models.
Official model weights from *Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment*.
Authors: Neural Magic, Cerebras
Usage
Below are code snippets showing how to quickly get started with the model.
Sparse Transfer
By leveraging a pre-sparsified model's structure, you can efficiently fine-tune on new data, leading to reduced hyperparameter tuning, training times, and computational costs. Learn about this process here.
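As a minimal illustration of the idea (not Neural Magic's actual SparseML recipe flow, which handles this for you), the PyTorch sketch below records the zero-weight mask of a pre-sparsified model and re-applies it after each optimizer step, so fine-tuning cannot destroy the 70% sparsity pattern. The training loop is left as commented placeholders.

```python
# Illustrative sketch only; the real sparse-transfer flow uses SparseML recipes.
import torch

def capture_masks(model: torch.nn.Module) -> dict:
    """Record which weights are already zero in the pre-sparsified model."""
    return {
        name: (param != 0).float()
        for name, param in model.named_parameters()
        if param.dim() > 1  # weight matrices only; skip biases and norms
    }

def reapply_masks(model: torch.nn.Module, masks: dict) -> None:
    """Zero out the pruned weights again after an optimizer step."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])

# Typical training loop (dataloader/optimizer are placeholders):
# masks = capture_masks(model)
# for batch in dataloader:
#     loss = model(**batch).loss
#     loss.backward()
#     optimizer.step()
#     optimizer.zero_grad()
#     reapply_masks(model, masks)  # keep the 70% sparsity pattern intact
```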
Running the model
For accelerated inference with sparsity on CPUs, deploy with deepsparse.
```python
# pip install deepsparse[llm]
from deepsparse import TextGeneration

# Download and compile the sparse-quantized model from the Hugging Face Hub.
model = TextGeneration(model_path="hf:neuralmagic/Llama-2-7b-pruned70-retrained-ultrachat-quant-ds")

input_text = "Write me a poem about Machine Learning."
outputs = model(input_text, max_new_tokens=100)
print(outputs.generations[0].text)
```
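Constructing the TextGeneration pipeline downloads and compiles the model, so it pays to build it once and reuse it across requests. A small sketch (the prompts and the timing wrapper are illustrative only):

```python
import time
from deepsparse import TextGeneration

# Compile once; reuse the same pipeline for every request.
model = TextGeneration(model_path="hf:neuralmagic/Llama-2-7b-pruned70-retrained-ultrachat-quant-ds")

prompts = [
    "Write me a poem about Machine Learning.",
    "Summarize the benefits of sparse inference on CPUs.",
]

for prompt in prompts:
    start = time.perf_counter()
    outputs = model(prompt, max_new_tokens=100)
    elapsed = time.perf_counter() - start
    print(f"--- generated in {elapsed:.1f}s ---")
    print(outputs.generations[0].text)
```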
Evaluation Benchmark Results
Model evaluation metrics and results.
| Benchmark | Metric | Llama-2-7b-ultrachat | Llama-2-7b-pruned70-retrained-ultrachat-quant-ds |
|---|---|---|---|
| AlpacaEval (Llama-2-70b-chat-hf evaluator) | Win rate | 57.6% | 57.1% |
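For context on the metric: the AlpacaEval win rate is the fraction of head-to-head comparisons in which the judge model prefers the candidate's response over the reference's. A minimal sketch of the arithmetic, using made-up verdicts rather than real AlpacaEval data:

```python
# Win rate = preferred responses / total comparisons.
# The judge verdicts below are made-up placeholders, not real AlpacaEval data.
verdicts = ["win", "loss", "win", "win", "loss"]  # judge's pick per prompt
win_rate = verdicts.count("win") / len(verdicts)
print(f"Win rate: {win_rate:.1%}")  # -> Win rate: 60.0%
```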
Help
For further support, and for discussion of these models and AI in general, join Neural Magic's Slack Community.
Base model: meta-llama/Llama-2-7b-hf