
Model Overview

Description

Qwen-2.5-Nemotron-32B-Reward is a reward model that assigns a numerical “reward” score to evaluate the quality of LLM-generated responses. A higher reward indicates a better response within the same conversation; scores are not comparable across unrelated prompts.

This model is ready for commercial/non-commercial use.

License/Terms of Use

Use of this model is governed by the NVIDIA Open Model License.

Deployment Geography

Global

Use Case

Qwen-2.5-Nemotron-32B-Reward labels an LLM-generated response to a user query with a reward score.
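Such a score makes it straightforward to rank several candidate responses to the same prompt and keep the best one (best-of-N sampling). Below is a minimal sketch; the score(prompt, response) helper is hypothetical and stands in for the Quick Start code further down.

# Minimal best-of-N sketch. `score` is a hypothetical helper that wraps the
# Quick Start code below and returns the model's float reward for a response.
def pick_best(prompt: str, candidates: list[str], score) -> str:
    # Keep the candidate that the reward model scores highest.
    return max(candidates, key=lambda response: score(prompt, response))

# Usage (illustrative):
# best = pick_best("What is 1+1?", ["1+1=2", "1+1=3"], score)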

Release Date:

HuggingFace 06/27/2025 via https://huggingface.co/nvidia/Qwen-2.5-Nemotron-32B-Reward

References

HelpSteer3-Preference: Open Human-Annotated Preference Data across Diverse Tasks and Languages (https://arxiv.org/abs/2505.11475)
HelpSteer2-Preference: Complementing Ratings with Preferences (https://arxiv.org/abs/2410.01257)

RM-Bench Leaderboard

As of 29 May 2025, Qwen-2.5-Nemotron-32B-Reward scores slightly lower on RM-Bench and JudgeBench than Llama-3.3-Nemotron-70B-Reward.

| Model                         | Chat | Math | Code | Safety | Easy | Normal | Hard | Overall RM-Bench |
|-------------------------------|------|------|------|--------|------|--------|------|------------------|
| Qwen-2.5-Nemotron-32B-Reward  | 76.0 | 73.9 | 66.2 | 93.5   | 85.6 | 80.5   | 65.9 | 77.4             |
| Llama-3.3-Nemotron-70B-Reward | 75.4 | 84.5 | 69.3 | 90.4   | 92.1 | 85.7   | 71.1 | 79.9             |

JudgeBench Leaderboard

| Model                         | Knowledge | Reasoning | Math | Code | Overall JudgeBench |
|-------------------------------|-----------|-----------|------|------|--------------------|
| Qwen-2.5-Nemotron-32B-Reward  | 61.7      | 74.5      | 76.2 | 82.1 | 70.3               |
| Llama-3.3-Nemotron-70B-Reward | 70.8      | 76.5      | 82.1 | 66.7 | 73.7               |

Model Architecture

Architecture Type: Transformer
Network Architecture: Qwen2.5

We developed this model using Qwen-2.5-32B-Instruct as its foundation. This model contains 32 billion parameters.

Input:

Input Type(s): Text
Input Format: String
Input Parameters: One Dimensional (1D)
Other Properties Related to Input: Maximum of 128K tokens (but trained only on conversations up to 8K tokens)
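Because the model was trained only on conversations up to 8K tokens, scores on much longer inputs may be less reliable even though the context window allows 128K. Below is a minimal sketch of a length guard, assuming the tokenizer loaded in the Quick Start section; the 8192-token cutoff is our reading of "8K".

# Hypothetical guard: warn when a conversation exceeds the 8K-token training length.
TRAINED_MAX_TOKENS = 8192  # assumption: "8K" read as 8192 tokens

def check_length(tokenizer, messages) -> int:
    # apply_chat_template with tokenize=True returns a list of token ids.
    ids = tokenizer.apply_chat_template(messages, tokenize=True)
    if len(ids) > TRAINED_MAX_TOKENS:
        print(f"Warning: {len(ids)} tokens exceeds the {TRAINED_MAX_TOKENS}-token "
              "training length; the reward score may be less reliable.")
    return len(ids)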

Output:

Output Type(s): Float
Output Format: Single Float
Output Parameters: One-Dimensional (1D)
Other Properties Related to Output: The float value represents the quality of the response, with a higher value representing higher quality.

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

Runtime Engine(s):

  • [NeMo - 24.05.llama.3.1]

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Ampere
  • NVIDIA Hopper
  • NVIDIA Turing

Supported Operating System(s): Linux

Quick Start

You can use the model with the HuggingFace Transformers library on one 80GB GPU (NVIDIA Ampere or newer), two GPUs with 48GB of memory, or four or more GPUs with 16GB of memory or less each. 120GB of disk space is needed to store the model.

This code has been tested with Transformers v4.51.2 and torch 2.6.0+cu124 on 2 NVIDIA RTX A6000 48GB GPUs, but any setup that supports Qwen/Qwen2.5-32B-Instruct should support this model as well. If you run into problems, consider running pip install -U transformers.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "nvidia/Qwen-2.5-Nemotron-32B-Reward"

# Load the reward model in half precision and shard it across available GPUs.
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "What is 1+1?"
good_response = "1+1=2"
bad_response = "1+1=3"

for response in [good_response, bad_response]:
    # Format the conversation with the model's chat template and tokenize it.
    messages = [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": response},
    ]
    tokenized_message = tokenizer.apply_chat_template(
        messages, tokenize=True, add_generation_prompt=False,
        return_tensors="pt", return_dict=True,
    )
    # The reward is the single logit of the sequence-classification head.
    reward = model(
        tokenized_message["input_ids"].to(model.device),
        attention_mask=tokenized_message["attention_mask"].to(model.device),
    ).logits[0][0].item()
    print(reward)

# Example output - note that higher scores mean higher quality, and scores can be negative.

# reward for good_response = 11.4296875
# reward for bad_response = -7.53515625
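Since the model is trained on pairwise preference data, one common way to read a pair of rewards is through a Bradley-Terry style probability: the sigmoid of the score difference estimates how likely one response is to be preferred over the other. Whether this calibration holds for this particular model is an assumption; the sketch below only illustrates the arithmetic.

import math

def preference_probability(reward_a: float, reward_b: float) -> float:
    # Bradley-Terry style estimate: P(A preferred over B) = sigmoid(r_A - r_B).
    return 1.0 / (1.0 + math.exp(-(reward_a - reward_b)))

# With the example rewards above:
# preference_probability(11.4296875, -7.53515625) ≈ 1.0 (good_response is strongly preferred)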

Model Version:

v1.0

Training, Testing and Evaluation Datasets:

Training Datasets:

Dataset Name: HelpSteer3
Dataset Link: https://huggingface.co/datasets/nvidia/HelpSteer3

Data Collection Method by dataset

  • [Hybrid: Human, Synthetic]

Labeling Method by dataset

  • [Human]

Properties:

  • 38,459 prompts, each with a pair of responses as well as human preferences between the pair of responses.

Dataset Name: HelpSteer2
Dataset Link: https://huggingface.co/datasets/nvidia/HelpSteer2

Data Collection Method by dataset

  • [Hybrid: Human, Synthetic]

Labeling Method by dataset

  • [Human]

Properties:

  • 6,766 prompts, each with a pair of responses as well as human preferences between the pair of responses.
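To inspect the training data, the preference pairs can be loaded with the datasets library. A sketch follows; the "preference" config name and the field names (context, response1, response2, overall_preference) are taken from the HelpSteer3 dataset card as we recall it and should be verified against the dataset itself.

from datasets import load_dataset

# Load HelpSteer3 preference pairs (config and field names per the dataset card; verify before use).
ds = load_dataset("nvidia/HelpSteer3", "preference", split="train")

example = ds[0]
print(example["context"])             # the conversation so far
print(example["response1"])           # first candidate response
print(example["response2"])           # second candidate response
print(example["overall_preference"])  # sign indicates which response annotators preferred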

Testing Datasets:

Dataset Name: HelpSteer3
Dataset Link: https://huggingface.co/datasets/nvidia/HelpSteer3

Data Collection Method by dataset

  • [Hybrid: Human, Synthetic]

Labeling Method by dataset

  • [Human]

Properties:

  • 2,017 prompts, each with a pair of responses as well as human preferences between the pair of responses.

Dataset Name: HelpSteer2
Dataset Link: https://huggingface.co/datasets/nvidia/HelpSteer2

Data Collection Method by dataset

  • [Hybrid: Human, Synthetic]

Labeling Method by dataset

  • [Human]

Properties:

  • 352 prompts, each with a pair of responses as well as human preferences between the pair of responses.

Evaluation Datasets

Dataset Name: RM-Bench
Dataset Link: https://huggingface.co/datasets/THU-KEG/RM-Bench

Data Collection Method by dataset

  • [Hybrid: Human, Synthetic]

Labeling Method by dataset

  • [Hybrid: Human, Synthetic]

Properties:

  • 1,327 prompts, each with three pairs of responses as well as preferences within each pair.

Dataset Name: JudgeBench
Dataset Link: https://huggingface.co/datasets/ScalerLab/JudgeBench

Data Collection Method by dataset

  • [Hybrid: Human, Synthetic]

Labeling Method by dataset

  • [Hybrid: Human, Synthetic]

Properties:

  • 350 prompts, each with a pair of responses as well as preferences between the pair of responses.
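Benchmarks such as RM-Bench and JudgeBench effectively measure how often the reward model assigns the higher score to the preferred response in each pair. Below is a generic sketch of that accuracy computation, assuming a hypothetical score(prompt, response) helper like the Quick Start code and an illustrative (prompt, chosen, rejected) triple format (not the benchmarks' actual schema).

def preference_accuracy(pairs, score) -> float:
    # pairs: list of (prompt, chosen, rejected) triples; `score` returns the model's reward.
    correct = sum(score(p, chosen) > score(p, rejected) for p, chosen, rejected in pairs)
    return correct / len(pairs)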

Inference:

Engine: PyTorch
Test Hardware: H100, A100 80GB, A100 40GB

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards.

Please report security vulnerabilities or NVIDIA AI Concerns here.

Citation

If you find this model useful, please cite the following works:

@misc{wang2025helpsteer3preferenceopenhumanannotatedpreference,
      title={Help{S}teer3-{P}reference: Open Human-Annotated Preference Data across Diverse Tasks and Languages},
      author={Zhilin Wang and Jiaqi Zeng and Olivier Delalleau and Hoo-Chang Shin and Felipe Soares and Alexander Bukharin and Ellie Evans and Yi Dong and Oleksii Kuchaiev},
      year={2025},
      eprint={2505.11475},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.11475}, 
}

@misc{wang2025helpsteer2preferencecomplementingratingspreferences,
      title={HelpSteer2-Preference: Complementing Ratings with Preferences}, 
      author={Zhilin Wang and Alexander Bukharin and Olivier Delalleau and Daniel Egert and Gerald Shen and Jiaqi Zeng and Oleksii Kuchaiev and Yi Dong},
      year={2025},
      eprint={2410.01257},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2410.01257}, 
}