oopere committed on
Commit
f4e5c54
1 Parent(s): 86b1572

Update README.md

README.md CHANGED
@@ -23,6 +23,9 @@ This model is not intended to be used directly, but rather to be fine-tuned for
   - **License:** Same as original model
   - **Developed by:** [Pere Martra](https://huggingface.co/oopere)
 
+ These models are part of the study "[Exploring GLU Expansion Ratios: Structured Pruning in Llama-3.2 Models](https://doi.org/10.31219/osf.io/qgxea)". They explore structured pruning in GLU-based architectures using Llama-3.2 (1B and 3B variants). The pruning experiments target optimal expansion ratios to balance performance, computational efficiency, and environmental sustainability. The models were evaluated across multiple benchmarks, including BoolQ, ARC-Easy, and MUSR, and demonstrate significant efficiency gains while maintaining robust task performance.
+
+
   ### Performance on Standard Benchmarks
   | Benchmark | Original Model | Pruned Model | Relative Change |
   | ---- | ---- | ---- | ---- |
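The added paragraph describes structured pruning of the GLU feed-forward block's intermediate (expansion) dimension. Below is a minimal NumPy sketch of the general idea: score each intermediate neuron, keep only the top fraction, and slice the gate/up/down projections accordingly. The magnitude-based importance score and the function names here are illustrative assumptions, not the study's exact criterion.

```python
import numpy as np

def silu(x):
    # SiLU activation used in Llama-style MLPs
    return x / (1.0 + np.exp(-x))

def glu_mlp(x, W_gate, W_up, W_down):
    # Llama-style GLU feed-forward: down( silu(gate(x)) * up(x) )
    return (silu(x @ W_gate.T) * (x @ W_up.T)) @ W_down.T

def prune_glu(W_gate, W_up, W_down, ratio):
    """Keep the fraction `ratio` of intermediate neurons with the largest
    combined weight magnitude (a simple importance proxy; the study's
    actual selection criterion may differ)."""
    importance = np.abs(W_gate).sum(axis=1) + np.abs(W_up).sum(axis=1)
    k = max(1, int(round(ratio * W_gate.shape[0])))
    keep = np.sort(np.argsort(importance)[-k:])
    # Rows of gate/up and matching columns of down are removed together,
    # shrinking the expansion dimension while keeping shapes consistent.
    return W_gate[keep], W_up[keep], W_down[:, keep]

rng = np.random.default_rng(0)
hidden, inter = 8, 32            # toy sizes; real models are far larger
W_gate = rng.normal(size=(inter, hidden))
W_up   = rng.normal(size=(inter, hidden))
W_down = rng.normal(size=(hidden, inter))
x = rng.normal(size=(1, hidden))

Wg, Wu, Wd = prune_glu(W_gate, W_up, W_down, ratio=0.5)
print(Wg.shape)  # (16, 8): expansion dimension halved
print(glu_mlp(x, Wg, Wu, Wd).shape)  # (1, 8): output shape unchanged
```

Because the output dimension is untouched, the pruned block drops into the network without further changes; only its intermediate width (and thus FLOPs and parameters) shrinks.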