---
library_name: transformers
license: apache-2.0
datasets:
- lkevinzc/numia-1.5-qa-concatenated
base_model:
- HuggingFaceTB/FineMath-Llama-3B
---
# Llama-3.2-3B-NuminaQA
## Links
- 📜 [Paper](https://github.com/sail-sg/understand-r1-zero/blob/main/understand-r1-zero.pdf)
- 💻 [GitHub](https://github.com/sail-sg/understand-r1-zero)
- 🤗 [Oat-Zero Collection](https://huggingface.co/collections/sail/oat-zero-understanding-r1-zero-like-training-67dcdb07b9f3eb05f1501c4a)
## Introduction
This model serves as the 3B base model in our minimalist R1-Zero recipe.
Training details:
- Base model: [HuggingFaceTB/FineMath-Llama-3B](https://huggingface.co/HuggingFaceTB/FineMath-Llama-3B)
- Dataset: [lkevinzc/numia-1.5-qa-concatenated](https://huggingface.co/datasets/lkevinzc/numia-1.5-qa-concatenated)
- Epochs: 2
- Learning rate: 1e-5
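## Usage
A minimal inference sketch with 🤗 Transformers, assuming the standard causal-LM interface. The repo id below is inferred from this card's title and the Oat-Zero collection owner; the prompt is purely illustrative.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sail/Llama-3.2-3B-NuminaQA"  # assumed repo id; adjust if it differs
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "What is 15 * 7?"  # illustrative math question
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```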
## Citation
```bibtex
@article{liu2025understanding,
  title={Understanding R1-Zero-Like Training: A Critical Perspective},
author={Liu, Zichen and Chen, Changyu and Li, Wenjun and Qi, Penghui and Pang, Tianyu and Du, Chao and Lee, Wee Sun and Lin, Min},
journal={arXiv preprint arXiv:2503.20783},
year={2025}
}
```