A pruned version of SmolLM2-1.7B, with 1.47B total parameters.

This is an intermediate checkpoint and requires further training before use; try continuing the training yourself (a minimal sketch follows below).
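
For example, the checkpoint can be loaded in BF16 (the format the weights are stored in) and fed into a standard causal-LM training loop with 🤗 Transformers. The snippet below is only a minimal sketch: the repo ID is taken from this model page, while the WikiText placeholder dataset, hyperparameters, and output directory are illustrative assumptions you should replace with your own setup.

```python
# Minimal sketch: load the pruned checkpoint in BF16 and continue training.
# The tiny WikiText slice and the hyperparameters below are placeholders;
# substitute your own corpus and settings for a real continued-pretraining run.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "aloobun/Pruned-SmolLM2-1.4B"  # repo ID as listed on the model page

tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    # Reuse EOS as padding so the collator can batch variable-length sequences.
    tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Small public dataset used purely as a stand-in for a real training corpus.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="smollm2-pruned-continued",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=1e-4,
    num_train_epochs=1,
    bf16=True,
    logging_steps=10,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```

Since the pruned model keeps the SmolLM2 architecture, it loads like any other SmolLM2 checkpoint, so other training stacks (plain PyTorch, TRL, etc.) work just as well.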

Evaluation results, obtained with the SmolLM evaluation scripts (LightEval):

| Task | Version | Metric | Value | Stderr |
|---|---|---|---|---|
| all | | acc_norm | 0.4555 | ± 0.0114 |
| | | qem | 0.0431 | ± 0.0022 |
| custom:arc:_average:0 | | acc_norm | 0.5021 | ± 0.0120 |
| custom:arc:challenge:0 | 0 | acc_norm | 0.3686 | ± 0.0141 |
| custom:arc:easy:0 | 0 | acc_norm | 0.6355 | ± 0.0099 |
| custom:commonsense_qa:0 | 0 | acc_norm | 0.3333 | ± 0.0135 |
| custom:gsm8k:5 | 0 | qem | 0.0076 | ± 0.0024 |
| custom:hellaswag:0 | 0 | acc_norm | 0.5568 | ± 0.0050 |
| custom:mmlu_pro:0 | 0 | acc_norm | 0.1287 | ± 0.0031 |
| custom:openbook_qa:0 | 0 | acc_norm | 0.3660 | ± 0.0216 |
| custom:piqa:0 | 0 | acc_norm | 0.7187 | ± 0.0105 |
| custom:trivia_qa:0 | 0 | qem | 0.0787 | ± 0.0020 |
| custom:winogrande:0 | 0 | acc_norm | 0.5367 | ± 0.0140 |