---
base_model: neuralmagic/Llama-2-7b-pruned50-retrained
inference: true
model_type: llama
pipeline_tag: text-generation
datasets:
- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
tags:
- sparse
- chat
---

# Llama-2-7b-pruned50-retrained-ultrachat

This repo contains a 50% sparse Llama 2 7B fine-tuned for chat tasks using the UltraChat 200k dataset.

**Authors**: Neural Magic, Cerebras

## Usage

Below we share some code snippets to help you quickly get started running the model.
### Sparse Transfer

By leveraging a pre-sparsified model's structure, you can efficiently fine-tune on new data, leading to reduced hyperparameter tuning, training times, and computational costs. Learn about this process here.
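
To make "pre-sparsified" concrete, the sketch below (an illustrative addition using the standard `transformers` API, not part of the original card) loads the base checkpoint and measures the fraction of zero-valued weights in its linear layers; for a 50% pruned model this should land near 0.50.

```python
# pip install torch transformers accelerate
import torch
from transformers import AutoModelForCausalLM

# Load the pre-sparsified base model; CPU is fine for weight inspection.
model = AutoModelForCausalLM.from_pretrained(
    "neuralmagic/Llama-2-7b-pruned50-retrained", torch_dtype=torch.float16
)

# Count zero-valued weights across all linear layers.
zeros, total = 0, 0
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        zeros += (module.weight == 0).sum().item()
        total += module.weight.numel()

print(f"Linear-layer weight sparsity: {zeros / total:.1%}")
```

Note that ordinary fine-tuning does not preserve this sparsity on its own; sparse transfer typically relies on mask-preserving training tooling (e.g. Neural Magic's SparseML).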

### Running the model

This model is fine-tuned for chat and may be run with the `transformers` library. For accelerated inference with sparsity, deploy with nm-vllm or deepsparse (a sketch follows the snippet below).
```python
# pip install transformers accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the sparse chat model, placing weights automatically.
tokenizer = AutoTokenizer.from_pretrained("neuralmagic/Llama-2-7b-pruned50-retrained-ultrachat")
model = AutoModelForCausalLM.from_pretrained(
    "neuralmagic/Llama-2-7b-pruned50-retrained-ultrachat", device_map="auto"
)

# Tokenize the prompt and move it to the GPU.
input_text = "Write me a poem about Machine Learning."
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")

# Generate and decode a completion.
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
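
Since the model was trained on UltraChat, chat-style prompts formatted with `tokenizer.apply_chat_template` (when a chat template is defined in the repo) will generally behave better than raw text prompts.

For the accelerated path mentioned above, a minimal nm-vllm sketch is shown below. The `sparsity` argument follows nm-vllm's documented usage for weight-sparse checkpoints, but treat the exact flag and value as assumptions that may vary by version.

```python
# pip install nm-vllm
from vllm import LLM, SamplingParams

# Load the sparse checkpoint; "sparse_w16a16" selects nm-vllm's sparse
# 16-bit kernels (assumption based on nm-vllm's documented usage).
model = LLM(
    "neuralmagic/Llama-2-7b-pruned50-retrained-ultrachat",
    sparsity="sparse_w16a16",
)

# Generate one completion for a single prompt.
params = SamplingParams(max_tokens=100)
outputs = model.generate("Write me a poem about Machine Learning.", params)
print(outputs[0].outputs[0].text)
```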

## Evaluation Benchmark Results

Model evaluation metrics and results.

| Benchmark  | Metric        | Llama-2-7b | Llama-2-7b-pruned50-retrained-ultrachat |
| ---------- | ------------- | ---------- | --------------------------------------- |
| MMLU       | 5-shot, top-1 | xxxx       | xxxx                                    |
| HellaSwag  | 0-shot        | xxxx       | xxxx                                    |
| WinoGrande | partial score | xxxx       | xxxx                                    |
| ARC-c      |               | xxxx       | xxxx                                    |
| TruthfulQA | 5-shot        | xxxx       | xxxx                                    |
| HumanEval  | pass@1        | xxxx       | xxxx                                    |
| GSM8K      | maj@1         | xxxx       | xxxx                                    |
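
The metric conventions above (5-shot, pass@1, maj@1) match EleutherAI's lm-evaluation-harness. Assuming that harness is used (the card does not say), a hedged reproduction sketch for a few of these tasks could look like:

```python
# pip install lm-eval
import lm_eval

# Evaluate a subset of the benchmarks above; task names follow
# lm-evaluation-harness conventions and are assumptions here.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=neuralmagic/Llama-2-7b-pruned50-retrained-ultrachat",
    tasks=["hellaswag", "winogrande", "gsm8k"],
)
print(results["results"])
```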

## Model Training Details

Coming soon.

## Help

For further support, and discussions on these models and AI in general, join Neural Magic's Slack Community.