---
library_name: transformers
license: llama3.2
metrics:
- accuracy
- perplexity
base_model:
- meta-llama/Llama-3.2-3B
---
# Model Card for oopere/pruned20-llama-1b

<!-- Provide a quick summary of what the model is/does. -->

This model is a pruned version of the Llama-3.2-3B model, with a 10% parameter reduction in the MLP layers.

The pruning process aims to enhance computational efficiency while maintaining acceptable performance on specific tasks.

This model is not intended to be used directly; rather, it is meant to be fine-tuned for specific tasks, where it can achieve equal or better performance than fine-tuning the base model for the same task.
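
The pruned checkpoint keeps the standard Llama architecture (only the MLP projections are smaller), so it loads like any other causal language model in 🤗 Transformers. The snippet below is a minimal illustrative sketch, not part of the original evaluation setup; the repository id is taken from this card's title, and `device_map="auto"` assumes `accelerate` is installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "oopere/pruned20-llama-1b"  # repository id as given in this card's title

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the dtype stored in the checkpoint
    device_map="auto",    # requires the accelerate package
)

# Quick sanity check; for real use, fine-tune the model on your target task first.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```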
## Model Details

- **Model Type:** Pruned version of Llama-3.2 using structured pruning
- **Original Model:** meta-llama/Llama-3.2-3B
- **Pruning Method:** Structured pruning of MLP layers using importance scores based on absolute maximum weights
- **Size Reduction:** 7.47% (from 3.21B to 3B parameters)
- **Architecture:** Same as the original Llama architecture, but with reduced MLP layer sizes
- **Language(s):** Same as original model
- **License:** Same as original model
- **Developed by:** [Pere Martra](https://huggingface.co/oopere)

These models are part of the study "[Exploring GLU Expansion Ratios: Structured Pruning in Llama-3.2 Models](https://doi.org/10.31219/osf.io/qgxea)". They explore structured pruning in GLU-based architectures using Llama-3.2 (1B and 3B variants). The pruning experiments target optimal expansion ratios to balance performance, computational efficiency, and environmental sustainability. The models were evaluated across multiple benchmarks, including BoolQ, ARC-Easy, and MUSR, and demonstrate significant efficiency gains while maintaining robust task performance.
### Performance on Standard Benchmarks

| Benchmark | Original Model | Pruned Model | Relative Change |
| ---- | ---- | ---- | ---- |
| ARC-Easy | 65.19% | 60.69% | -6.9% |
| BoolQ | 64.16% | 51.22% | -20.2% |
| LAMBADA-OpenAI | 62.20% | 59.64% | -4.1% |
| LAMBADA-Standard | 53.46% | 54.61% | +2.2% |
### Key Findings

- Surprisingly, an improvement is observed on the LAMBADA-Standard benchmark, with a 2.2% relative increase in accuracy.
- Notable degradation on binary classification tasks (BoolQ), with a 20.2% relative decrease in accuracy.
- Moderate degradation on reasoning tasks (ARC-Easy), with a 6.9% relative decrease in accuracy.
- Minimal impact on long-range comprehension (LAMBADA-OpenAI), with only a 4.1% relative decrease in accuracy.
### Limitations

- Reduced performance on tasks requiring complex reasoning, with moderate degradation observed on benchmarks like ARC-Easy.
- Noticeable decrease in accuracy on binary classification tasks, as seen in BoolQ.
- Mixed results on long-range dependencies: minimal degradation on LAMBADA-OpenAI, but variability across benchmarks.
- May not be suitable for applications requiring consistently high accuracy across diverse language tasks.
### Implementation Details

- **Pruning Notebook:** [Detailed implementation and methodology](https://github.com/peremartra/Large-Language-Model-Notebooks-Course/blob/main/6-PRUNING/6_3_pruning_structured_llama3.2-1b_OK.ipynb)
- **GitHub Repository:** [LLM Course](https://github.com/peremartra/Large-Language-Model-Notebooks-Course)
### Pruning Method

- **Technique:** Structured pruning targeting the MLP layers
- **Pruning Ratio:** 10% of neurons removed from the MLP layers
- **Selection Criteria:** Importance scoring based on absolute maximum weights (see the sketch below)
- **Architecture Specifics:** GLU structure maintained during pruning
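
The pruning notebook linked above contains the exact procedure; the code below is only a minimal sketch of the idea, assuming the standard `LlamaForCausalLM` module layout (`gate_proj`/`up_proj`/`down_proj`) and reading "absolute maximum weights" as the largest absolute weight feeding each GLU neuron.

```python
import torch
from transformers import AutoModelForCausalLM

def prune_glu_mlp(mlp, keep_ratio: float = 0.9):
    """Keep the most important GLU neurons of one Llama MLP block (sketch)."""
    # Importance of each intermediate neuron: its maximum absolute incoming weight.
    importance = torch.maximum(
        mlp.gate_proj.weight.abs().max(dim=1).values,
        mlp.up_proj.weight.abs().max(dim=1).values,
    )
    k = int(importance.numel() * keep_ratio)
    keep = torch.topk(importance, k).indices.sort().values

    # Slice the three GLU projections consistently so the block stays valid.
    mlp.gate_proj.weight.data = mlp.gate_proj.weight.data[keep].clone()
    mlp.up_proj.weight.data = mlp.up_proj.weight.data[keep].clone()
    mlp.down_proj.weight.data = mlp.down_proj.weight.data[:, keep].clone()
    mlp.gate_proj.out_features = mlp.up_proj.out_features = k
    mlp.down_proj.in_features = k
    return k

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B")
for layer in model.model.layers:
    new_intermediate = prune_glu_mlp(layer.mlp, keep_ratio=0.9)
model.config.intermediate_size = new_intermediate  # keep the config in sync before saving
```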
### Hardware Requirements

- Reduced memory footprint compared to the original model
- Can run on hardware with ~10% less memory than the original model (see the quick check below)
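
As a quick, back-of-the-envelope check of the footprint (illustrative only, assuming 16-bit weights and the repository id from this card's title):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("oopere/pruned20-llama-1b")

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B parameters")
print(f"~{n_params * 2 / 1e9:.1f} GB of weights at 2 bytes per parameter (fp16/bf16)")
```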
## Acknowledgments

- Thanks to [Mariusz Kurman](https://huggingface.co/mkurman) for creating [llama-pruning](https://github.com/MedITSolutionsKurman/llama-pruning), a library that extends and improves this pruning methodology.