Model Card for oopere/pruned60-llama-3.2-1B

This model is a pruned version of the Llama-3.2 architecture, with a 60% parameter reduction in the MLP layers. The pruning process aims to improve computational efficiency while maintaining acceptable performance on specific tasks. The model is not intended for direct use; rather, it is meant to be fine-tuned for specific tasks, where it can achieve performance equal or superior to fine-tuning the base model for the same task.
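
The pruned checkpoint loads like any other causal language model and is meant to serve as the base for your own fine-tuning run. A minimal sketch (the repository id and FP16 dtype come from this card; the training recipe itself is not specified and is up to you):

```python
# Minimal sketch: load the pruned checkpoint as a starting point for
# task-specific fine-tuning. Repository id and FP16 dtype come from this card;
# the actual training setup (data, optimizer, PEFT, etc.) is an assumption
# left to the user.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "oopere/pruned60-llama-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

# The card recommends fine-tuning rather than using the checkpoint directly.
model.train()
```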

Model Details

  • Model Type: Pruned version of Llama-3.2-1B using structured pruning
  • Original Model: meta-llama/Llama-3.2-1B
  • Pruning Method: Structured pruning of MLP layers using importance scores based on absolute maximum weights
  • Size Reduction: 39.3% (from 1.24B to 753M parameters)
  • Architecture: Same as original LLaMA but with reduced MLP layer sizes
  • Language(s): Same as original model
  • License: Same as original model
  • Developed by: Pere Martra

These models are part of the study "Exploring GLU Expansion Ratios: Structured Pruning in Llama-3.2 Models", which explores structured pruning in GLU-based architectures using Llama-3.2 (1B and 3B variants). The pruning experiments target optimal expansion ratios that balance performance, computational efficiency, and environmental sustainability. The models were evaluated across multiple benchmarks, including BoolQ, ARC-Easy, and MUSR, and demonstrate significant efficiency gains while maintaining robust task performance.

Performance on Standard Benchmarks

| Benchmark         | Original Model | Pruned Model | Relative Change |
|-------------------|----------------|--------------|-----------------|
| ARC-Easy          | 65.19%         | 33.42%       | -48.7%          |
| BoolQ             | 64.16%         | 55.60%       | -13.3%          |
| LAMBADA-OpenAI    | 62.20%         | 10.15%       | -83.7%          |
| LAMBADA-Standard  | 53.46%         | 6.73%        | -87.4%          |
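
For reference, these scores can be approximated with a standard evaluation harness. The card does not state which harness or settings were used; the sketch below assumes EleutherAI's lm-evaluation-harness (v0.4+) and its default task configurations:

```python
# Sketch of reproducing the benchmark numbers with lm-evaluation-harness.
# Harness choice, task names, and batch size are assumptions; the card does
# not specify the exact evaluation setup.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=oopere/pruned60-llama-3.2-1B,dtype=float16",
    tasks=["arc_easy", "boolq", "lambada_openai", "lambada_standard"],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```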

Key Findings

  • Still maintains reasonable performance on binary classification tasks (BoolQ)
  • Severe degradation on reasoning tasks (ARC-Easy)
  • Critical impact on long-range comprehension (LAMBADA)
  • Extreme increase in perplexity for language modeling tasks

Limitations

  • Severe reduction in performance on complex language understanding tasks
  • Critical degradation in long-range dependency handling
  • Not suitable for language completion or generation tasks
  • Only recommended for simple classification tasks where memory constraints are critical

Implementation Details

Pruning Method

  • Technique: Structured pruning targeting MLP layers
  • Pruning Ratio: 60% of neurons removed from MLP layers
  • Selection Criteria: Importance scoring based on absolute maximum weights
  • Architecture Specifics: Maintained GLU structure during pruning
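
For illustration, here is a minimal sketch of this kind of GLU-aware structured pruning applied to the MLP blocks of a Llama model. This is not the author's exact code: the importance heuristic (absolute maximum weight per intermediate neuron across gate_proj and up_proj) and the 40% keep ratio follow the description above, while details such as dtype handling and config bookkeeping are assumptions:

```python
# Sketch of GLU-aware structured pruning of Llama MLP layers.
# Importance = absolute maximum weight per intermediate neuron; the same
# neuron indices are kept in gate_proj, up_proj, and down_proj so the
# gated (GLU) pairing is preserved. Details beyond the card are assumptions.
import torch
from transformers import AutoModelForCausalLM

PRUNE_RATIO = 0.60  # fraction of MLP neurons removed

def prune_llama_mlp(mlp, prune_ratio=PRUNE_RATIO):
    """Shrink the intermediate dimension of one LlamaMLP block in place."""
    keep = int(mlp.gate_proj.out_features * (1.0 - prune_ratio))

    # Importance score per intermediate neuron: max |w| over its rows in
    # gate_proj and up_proj.
    score = torch.maximum(
        mlp.gate_proj.weight.abs().max(dim=1).values,
        mlp.up_proj.weight.abs().max(dim=1).values,
    )
    idx = torch.topk(score, keep).indices.sort().values

    # Remove rows of gate_proj/up_proj and the matching columns of down_proj.
    for name in ("gate_proj", "up_proj"):
        old = getattr(mlp, name)
        new = torch.nn.Linear(old.in_features, keep, bias=False)  # Llama MLP has no bias
        new.weight.data = old.weight.data[idx].clone()
        setattr(mlp, name, new)

    old_down = mlp.down_proj
    new_down = torch.nn.Linear(keep, old_down.out_features, bias=False)
    new_down.weight.data = old_down.weight.data[:, idx].clone()
    mlp.down_proj = new_down
    return keep

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
for layer in model.model.layers:
    new_size = prune_llama_mlp(layer.mlp)
model.config.intermediate_size = new_size  # keep the config consistent
```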

Hardware Requirements

Memory Requirements

  • Base Model:
    • Parameters: ~2.48 GB (FP16)
    • Total Runtime Memory: ~3.1 GB
  • Pruned Model (60%):
    • Parameters: ~1.51 GB (FP16)
    • Total Runtime Memory: ~1.9 GB
  • Memory Reduction:
    • Parameter Memory: 39.3%
    • Total Runtime Memory: ~38.7%
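
The parameter-memory figures above follow directly from two bytes per FP16 parameter; a quick sketch of the arithmetic (runtime overhead such as activations and the KV cache is not modeled here):

```python
# Back-of-the-envelope check of the parameter-memory figures above:
# FP16 stores 2 bytes per parameter. Runtime overhead (activations, KV cache,
# buffers) is not modeled and depends on batch size and sequence length.
def fp16_param_gb(n_params: float) -> float:
    """Approximate parameter memory in GB at FP16 (2 bytes per parameter)."""
    return n_params * 2 / 1e9

base_gb = fp16_param_gb(1.24e9)     # ~2.48 GB
pruned_gb = fp16_param_gb(0.753e9)  # ~1.51 GB
reduction = 100 * (1 - pruned_gb / base_gb)  # ~39.3%
print(f"base: {base_gb:.2f} GB, pruned: {pruned_gb:.2f} GB, reduction: {reduction:.1f}%")
```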

Notes:

  • Memory requirements assume FP16 precision
  • Actual memory usage may vary depending on:
    • Batch size
    • Sequence length
    • Implementation details
    • Runtime environment

Minimum Requirements

  • GPU Memory: 3 GB for the base model, 2 GB for the pruned model
  • CPU Memory: 8 GB recommended for both models
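
A quick way to check whether a machine meets these rough minimums before loading the model (assumes a CUDA device; adapt for other backends):

```python
# Rough pre-flight check against the minimums above (assumes a CUDA GPU;
# adjust for other backends). The thresholds are the card's rough minimums,
# not hard requirements.
import torch

if torch.cuda.is_available():
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
    status = "meets" if total_gb >= 2 else "is below"
    print(f"GPU memory: {total_gb:.1f} GB ({status} the ~2 GB needed for the pruned model)")
else:
    print("No CUDA device found; CPU inference will need ~8 GB of RAM.")
```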

Acknowledgments
