# Qwen-3-Nemotron-32B-Reward

## Description

Qwen-3-Nemotron-32B-Reward is a reward model that assigns a numerical “reward” score to evaluate the quality of LLM-generated responses. Given an English user–assistant conversation, it returns a single scalar reward assessing the last assistant response. A higher reward on one conversation indicates better performance within that context, but does *not* translate across unrelated prompts.

Built on Qwen3-32B, the model was trained with Bradley–Terry pairwise comparisons from HelpSteer3; to preserve Qwen-3’s native reasoning, training employed a `"nothink"` instruction that separates the reasoning trace from the reward signal.

This model is ready for commercial/non-commercial use.

## License/Terms of Use

Use of this model is governed by the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/).

### Deployment Geography

Global

## Use Case

Qwen-3-Nemotron-32B-Reward assigns a reward score to an LLM-generated response in a user–assistant dialogue.
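
Because scores are only comparable between answers to the same prompt, the natural usage pattern is best-of-n selection: score several candidate responses to one prompt and keep the highest. A minimal sketch; the `score` callable is a hypothetical stand-in for a call into this model (see the Quick Start below):

```python
from typing import Callable, Dict, List

# A conversation in the chat format the model scores: user/assistant turns.
Conversation = List[Dict[str, str]]

def pick_best_response(
    prompt: str,
    candidates: List[str],
    score: Callable[[Conversation], float],
) -> str:
    """Return the candidate whose conversation earns the highest reward.

    Rewards are only comparable across answers to the SAME prompt, so
    ranking candidates for one prompt is the intended comparison.
    """
    def as_conversation(response: str) -> Conversation:
        return [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": response},
        ]

    return max(candidates, key=lambda r: score(as_conversation(r)))
```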

## Release Date:

Hugging Face 06/27/2025 via https://huggingface.co/nvidia/Qwen-3-Nemotron-32B-Reward

## References

* [HelpSteer3-Preference](https://arxiv.org/abs/2505.11475)
* [Qwen3 Model Card](https://huggingface.co/Qwen/Qwen3-32B)

## RM-Bench Leaderboard

…

As of 29 May 2025, Qwen-3-Nemotron-32B-Reward has comparable scores on JudgeBench …

| [Llama-3.3-Nemotron-70B-Reward](https://huggingface.co/nvidia/Llama-3.3-Nemotron-70B-Reward) | 70.8 | 76.5 | 82.1 | 66.7 | 73.7 |
| [Llama-3.3-Nemotron-70B-Reward-Multilingual](https://huggingface.co/nvidia/Llama-3.3-Nemotron-70B-Reward-Multilingual) | 66.2 | 71.4 | 82.1 | 59.5 | 69.4 |

## Model Architecture

**Architecture Type:** Transformer <br>
**Network Architecture:** Qwen3 <br>

We developed this model using [Qwen-3-32B](https://huggingface.co/Qwen/Qwen3-32B) as its foundation. This model contains 32 billion parameters.

## Input:
**Input Type(s):** Text <br>
**Input Format:** String <br>
**Input Parameters:** One Dimensional (1D) <br>
**Other Properties Related to Input:** Max of 128K tokens (but trained only on conversations up to 8K tokens) <br>

## Output:
**Output Type(s):** Float <br>
**Output Format:** Single Float <br>
**Output Parameters:** One Dimensional (1D) <br>
**Other Properties Related to Output:** The float value represents the quality of the response, with a higher value representing higher quality. <br>
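
Because the usable context (128K tokens) is far longer than the conversations seen in training (8K), it can be worth flagging over-long inputs before scoring. A small illustrative guard, assuming this model's tokenizer and chat template; the warning policy itself is an assumption, not part of the card:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nvidia/Qwen-3-Nemotron-32B-Reward")

TRAINED_MAX_TOKENS = 8192  # training-time conversation length noted above

def conversation_length(messages: list) -> int:
    """Count chat-template tokens and warn past the training horizon."""
    token_ids = tokenizer.apply_chat_template(messages, tokenize=True)
    if len(token_ids) > TRAINED_MAX_TOKENS:
        print(
            f"warning: {len(token_ids)} tokens exceeds the "
            f"{TRAINED_MAX_TOKENS}-token training horizon; scores may be "
            "less reliable even though the model accepts up to 128K tokens"
        )
    return len(token_ids)
```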

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>

## Software Integration:
**Runtime Engine(s):** <br>
* [NeMo - 24.05.llama.3.1] <br>

**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Ampere <br>
* NVIDIA Hopper <br>
* NVIDIA Turing <br>

**Supported Operating System(s):** Linux <br>

## Quick Start
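
A minimal scoring sketch. The model loads as `AutoModelForSequenceClassification` and returns one scalar per conversation; the prompt and candidate responses below are illustrative placeholders, and the loading options (`torch_dtype`, `device_map`) are common-practice assumptions rather than requirements from the card:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "nvidia/Qwen-3-Nemotron-32B-Reward"
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Illustrative prompt and candidate responses.
prompt = "What is 1+1?"
good_response = "1+1=2"
bad_response = "1+1=3"

for response in [good_response, bad_response]:
    messages = [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": response},
    ]
    input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
    with torch.no_grad():
        # Single classification head: one scalar reward per conversation.
        reward = model(input_ids.to(model.device)).logits[0][0].item()
    print(f"reward: {reward}")
# The better response receives the higher reward;
# reward for bad_response = -7.9765625
```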
 
## Model Version:
v1.0

# Training, Testing and Evaluation Datasets:

## Training Datasets:

**Dataset Name:** HelpSteer3 <br>
**Dataset Link:** https://huggingface.co/datasets/nvidia/HelpSteer3

**Data Collection Method by dataset** <br>
* [Hybrid: Human, Synthetic] <br>

**Labeling Method by dataset** <br>
* [Human] <br>

**Properties:** <br>
* 38,459 prompts, each with a pair of responses and human preferences between them.
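
These preference pairs drive a Bradley–Terry pairwise objective (see [HelpSteer3-Preference](https://arxiv.org/abs/2505.11475)): the scalar rewards of the preferred and rejected responses are pushed apart. A sketch of that loss, assuming batched scalar rewards:

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(
    chosen_rewards: torch.Tensor, rejected_rewards: torch.Tensor
) -> torch.Tensor:
    """Negative log-likelihood of P(chosen > rejected) = sigmoid(r_c - r_r)."""
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Three preference pairs: the loss shrinks as chosen rewards pull ahead.
loss = bradley_terry_loss(
    torch.tensor([2.0, 0.5, 1.0]), torch.tensor([1.0, 0.7, -0.5])
)
```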

## Testing Datasets:

**Dataset Name:** HelpSteer3 <br>
**Dataset Link:** https://huggingface.co/datasets/nvidia/HelpSteer3

**Data Collection Method by dataset** <br>
* [Hybrid: Human, Synthetic] <br>

**Labeling Method by dataset** <br>
* [Human] <br>

**Properties:** <br>
* 2,017 prompts, each with a pair of responses and human preferences between them.

## Evaluation Datasets:

**Dataset Name:** RM-Bench <br>
**Dataset Link:** https://huggingface.co/datasets/THU-KEG/RM-Bench

**Data Collection Method by dataset** <br>
* [Hybrid: Human, Synthetic] <br>

**Labeling Method by dataset** <br>
* [Hybrid: Human, Synthetic] <br>

**Properties:** <br>
* 1,327 prompts, each with three pairs of responses and preferences within each pair.

**Dataset Name:** JudgeBench <br>
**Dataset Link:** https://huggingface.co/datasets/ScalerLab/JudgeBench

**Data Collection Method by dataset** <br>
* [Hybrid: Human, Synthetic] <br>

**Labeling Method by dataset** <br>
* [Hybrid: Human, Synthetic] <br>

**Properties:** <br>
* 350 prompts, each with a pair of responses and preferences between them.
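
Both evaluation sets reduce, at their core, to pairwise accuracy: the share of pairs in which the human-preferred response receives the higher reward. A sketch with illustrative reward values:

```python
from typing import List, Tuple

def pairwise_accuracy(pairs: List[Tuple[float, float]]) -> float:
    """Fraction of (chosen, rejected) reward pairs ranked correctly."""
    correct = sum(1 for chosen, rejected in pairs if chosen > rejected)
    return correct / len(pairs)

# Illustrative rewards: two of three pairs ranked correctly -> 0.667
print(round(pairwise_accuracy([(1.2, -0.3), (0.4, 0.9), (2.0, 1.5)]), 3))
```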

# Inference:
**Engine:** PyTorch <br>
**Test Hardware:** H100, A100 80GB, A100 40GB <br>

## Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

For more detailed information on ethical considerations for this model, please see the Model Card++ [Explainability](explainability.md), [Bias](bias.md), [Safety & Security](safety.md), and [Privacy](privacy.md) Subcards.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

## Citation