
Llama-2-7b-gsm8k-pruned_70

This repo contains a 70% sparse Llama 2 7B model fine-tuned for arithmetic reasoning on the GSM8k dataset.

Official model weights from the paper "Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment".

Authors: Neural Magic, Cerebras

Usage

Below are some code snippets showing how to quickly get started with running the model.

Sparse Transfer

By leveraging a pre-sparsified model's structure, you can efficiently fine-tune on new data, leading to reduced hyperparameter tuning, training times, and computational costs. Learn about this process here.
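As a rough illustration of the idea (not the official SparseML recipe), the sketch below fine-tunes the pre-sparsified checkpoint while re-applying the pruning mask so the 70% sparsity is preserved; the reapply_masks helper is an assumption for illustration.

# Illustrative only: preserve sparsity during fine-tuning by recording the
# pruning mask up front and re-zeroing pruned weights after each update.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "neuralmagic/Llama-2-7b-pruned70-retrained", torch_dtype=torch.bfloat16
)

# Record which weights survived pruning (nonzero entries in weight matrices).
masks = {
    name: (param != 0)
    for name, param in model.named_parameters()
    if param.dim() >= 2
}

def reapply_masks(model):
    # Call after optimizer.step() so pruned weights remain exactly zero.
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.masked_fill_(~masks[name], 0.0)

In practice, Neural Magic's SparseML recipes manage this mask maintenance (and distillation) as part of the sparse-transfer training loop.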

Running the model

This model may be run with the transformers library. For accelerated inference with sparsity, deploy with nm-vllm or deepsparse.

# pip install transformers accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("neuralmagic/Llama-2-7b-gsm8k-pruned_70")
model = AutoModelForCausalLM.from_pretrained("neuralmagic/Llama-2-7b-gsm8k-pruned_70", device_map="auto")

input_text = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
# tokenize the prompt directly; this checkpoint is a completion-style model
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)  # leave room for the step-by-step solution
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
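For sparsity-accelerated deployment, a minimal nm-vllm sketch follows. The sparsity="sparse_w16a16" argument follows nm-vllm's documented usage for sparse checkpoints, but flag names can change between releases, so treat it as an assumption to verify against the nm-vllm docs for your version.

# pip install nm-vllm
from vllm import LLM, SamplingParams

# sparsity="sparse_w16a16" asks nm-vllm to run with its sparse kernels
# (assumed from nm-vllm's documented usage; may vary by release).
model = LLM("neuralmagic/Llama-2-7b-gsm8k-pruned_70", sparsity="sparse_w16a16")
params = SamplingParams(max_tokens=256)
outputs = model.generate(
    "Natalia sold clips to 48 of her friends in April, and then she sold half "
    "as many clips in May. How many clips did Natalia sell altogether in April and May?",
    params,
)
print(outputs[0].outputs[0].text)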

Evaluation Benchmark Results

Model evaluation metrics and results.

Benchmark | Metric | Llama-2-7b-gsm8k | Llama-2-7b-gsm8k-pruned_70
GSM8K     | 0-shot | 35.5%            | 34.3%
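As a hedged sketch of how such a score might be reproduced, the snippet below runs zero-shot GSM8K with EleutherAI's lm-evaluation-harness; the paper's exact evaluation setup (harness version, prompts, generation settings) may differ, so exact numbers are not guaranteed to match.

# pip install lm-eval
import lm_eval

# Zero-shot GSM8K via the Hugging Face backend of lm-evaluation-harness.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=neuralmagic/Llama-2-7b-gsm8k-pruned_70,dtype=bfloat16",
    tasks=["gsm8k"],
    num_fewshot=0,
)
print(results["results"]["gsm8k"])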

Model Training Details

This model was obtained by sparse transfer of the sparse foundational model Llama-2-7b-pruned70-retrained onto the GSM8k dataset. Sparse transfer was performed with SquareHead knowledge distillation, using Llama-2-7b-gsm8k as the teacher.
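For intuition, here is a minimal sketch of SquareHead-style distillation: a per-layer MSE between student and teacher hidden states, normalized by the teacher's magnitude and averaged across layers. The exact normalization constant and layer weighting are assumptions for illustration, not the paper's precise recipe.

import torch
import torch.nn.functional as F

def squarehead_distillation_loss(student_hiddens, teacher_hiddens):
    # hidden states: per-layer tensors of shape [batch, seq_len, hidden_dim],
    # e.g. from model(..., output_hidden_states=True).hidden_states
    loss = 0.0
    for h_s, h_t in zip(student_hiddens, teacher_hiddens):
        h_t = h_t.detach()  # no gradients flow through the teacher
        # per-layer MSE, normalized by the teacher's mean squared magnitude
        # (the epsilon and normalization choice are illustrative assumptions)
        loss = loss + F.mse_loss(h_s, h_t) / (h_t.pow(2).mean() + 1e-6)
    return loss / len(student_hiddens)

This per-layer loss is typically added to the standard cross-entropy training loss during fine-tuning.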

Help

For further support and discussion of these models and AI in general, join Neural Magic's Slack Community.

