Llama-3.2-3B-NuminaQA

Links

Introduction

This model serves as the 3B base model in our minimalist R1-Zero recipe.
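A minimal usage sketch for loading the checkpoint, assuming the standard Hugging Face `transformers` API (`transformers` and `torch` installed); the prompt string is a made-up placeholder:

```python
# Minimal sketch: load lkevinzc/Llama-3.2-3B-NuminaQA with transformers
# and run greedy generation. Assumes `pip install transformers torch`.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "lkevinzc/Llama-3.2-3B-NuminaQA"


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Placeholder math question for illustration.
    print(generate("What is 12 * 7? Show your reasoning."))
```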

Training details:

Citation

@article{liu2025understanding,
  title={Understanding R1-Zero-Like Training: A Critical Perspective},
  author={Liu, Zichen and Chen, Changyu and Li, Wenjun and Qi, Penghui and Pang, Tianyu and Du, Chao and Lee, Wee Sun and Lin, Min},
  journal={arXiv preprint arXiv:2503.20783},
  year={2025}
}
Safetensors
Model size: 3.21B params
Tensor type: F32

Model tree for lkevinzc/Llama-3.2-3B-NuminaQA

Finetuned (4): this model
Finetunes: 1 model
Quantizations: 2 models

Dataset used to train lkevinzc/Llama-3.2-3B-NuminaQA