Llama-3.2-3B-NuminaQA

Links

Introduction

This model serves as a 3B base in our minimalist R1-Zero recipe.
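A minimal usage sketch with the Hugging Face `transformers` library (assuming it is installed; the prompt below is only an illustration, and the first call downloads several GB of weights):

```python
model_id = "lkevinzc/Llama-3.2-3B-NuminaQA"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a completion from the model (downloads weights on first call)."""
    # Imports kept inside the function so the file can be inspected
    # without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

As a base model (not instruction-tuned), it is intended for further RL training rather than direct chat use.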

Training details:

Citation

```bibtex
@misc{liu2025understanding,
  title={Understanding R1-Zero-Like Training: A Critical Perspective},
  author={Zichen Liu and Changyu Chen and Wenjun Li and Penghui Qi and Tianyu Pang and Chao Du and Wee Sun Lee and Min Lin},
  year={2025},
  howpublished={\url{https://github.com/sail-sg/understand-r1-zero}},
}
```
Downloads last month: 945
Model size: 3.21B params (Safetensors)
Tensor type: F32

Model tree for lkevinzc/Llama-3.2-3B-NuminaQA

- Finetunes: 1 model
- Quantizations: 2 models

Dataset used to train lkevinzc/Llama-3.2-3B-NuminaQA