---
license: llama3
datasets:
- JeanKaddour/minipile
language:
- en
---
Meta's Llama 3 70B pruned to 42B parameters using the methodology described in [The Unreasonable Ineffectiveness of the Deeper Layers](https://arxiv.org/abs/2403.17887). After pruning, the model was healed by training with QLoRA on ~100M tokens from [JeanKaddour/minipile](https://huggingface.co/datasets/JeanKaddour/minipile).
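A minimal sketch of what that healing step looks like with 🤗 PEFT and bitsandbytes, assuming the pruned checkpoint has been saved locally; the path, LoRA hyperparameters, and step count below are illustrative, not the settings actually used:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (
    AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
    DataCollatorForLanguageModeling, Trainer, TrainingArguments,
)

pruned_path = "./llama3-42b-pruned"  # hypothetical local path to the pruned model

tokenizer = AutoTokenizer.from_pretrained(pruned_path)
tokenizer.pad_token = tokenizer.eos_token

# Load the pruned model in 4-bit NF4 so only the LoRA adapters train in full precision.
model = AutoModelForCausalLM.from_pretrained(
    pruned_path,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
    ),
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
))

dataset = load_dataset("JeanKaddour/minipile", split="train")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
    batched=True, remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama3-42b-healed",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=1e-4,
        bf16=True,
        max_steps=3000,  # ~32k tokens/step at these settings, so ~100M tokens total
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```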
Layers to prune were selected using [PruneMe](https://github.com/arcee-ai/PruneMe).
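PruneMe automates the layer-selection metric from the paper: for every candidate block of *n* consecutive layers, measure the angular distance between the hidden states entering the block and those leaving it, then drop the block the network changes least. A rough sketch of the idea, with an illustrative block size and a toy calibration input standing in for a real dataset:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-70B"  # base model being pruned
n = 8  # illustrative block size; the actual run removed a larger block

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

def angular_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # d(a, b) = arccos(cos_sim(a, b)) / pi, computed per token position
    cos = torch.nn.functional.cosine_similarity(a, b, dim=-1)
    return torch.arccos(cos.clamp(-1.0, 1.0)) / torch.pi

# A single toy input; in practice the distance is averaged over a calibration dataset.
inputs = tokenizer(
    "The quick brown fox jumps over the lazy dog.", return_tensors="pt"
).to(model.device)
with torch.no_grad():
    # hidden_states is a tuple of (num_layers + 1) tensors:
    # the embeddings plus each layer's output
    hidden = model(**inputs, output_hidden_states=True).hidden_states

num_layers = len(hidden) - 1  # 80 for Llama 3 70B
scores = [
    (angular_distance(hidden[start], hidden[start + n]).mean().item(), start)
    for start in range(num_layers - n + 1)
]
distance, start = min(scores)
print(f"Prune layers {start}..{start + n - 1} (mean angular distance {distance:.4f})")
```

The lowest-distance block is then removed, e.g. with a mergekit passthrough merge over the layer ranges on either side of it.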
Still evaluating, so don't get too excited! It might be incredibly dumb. Check out these zero-shot MMLU numbers, though:
| Groups            | Version | Filter | n-shot | Metric |  Value | Stderr  |
|-------------------|---------|--------|-------:|--------|-------:|---------|
| mmlu              | N/A     | none   |      0 | acc    | 0.7319 | ±0.0034 |
| - humanities      | N/A     | none   |      0 | acc    | 0.6582 | ±0.0063 |
| - other           | N/A     | none   |      0 | acc    | 0.7927 | ±0.0069 |
| - social_sciences | N/A     | none   |      0 | acc    | 0.8466 | ±0.0064 |
| - stem            | N/A     | none   |      0 | acc    | 0.6702 | ±0.0079 |