|
--- |
|
pipeline_tag: image-text-to-text |
|
library_name: transformers |
|
license: mit |
|
base_model: |
|
- Qwen/Qwen2.5-VL-7B-Instruct |
|
tags: |
|
- Multimodal Reward Model |
|
- Reward Model |
|
--- |
|
|
|
<div align="center"> |
|
<img src="skywork-logo.png" alt="Introduction Image" width="500" height="400"> |
|
</div> |
|
|
|
## 🔥 News
|
|
|
**May 12, 2025**: Our technical report is now available on arXiv, and we welcome citations: [Skywork-VL Reward: An Effective Reward Model for Multimodal Understanding and Reasoning](https://arxiv.org/abs/2505.07263)
|
|
|
**April 24, 2025**: We released **Skywork-VL-Reward-7B**, a state-of-the-art multimodal reward model on [VLRewardBench](https://huggingface.co/spaces/MMInstruction/VL-RewardBench), along with our technical report in the [R1V GitHub](https://github.com/SkyworkAI/Skywork-R1V/blob/main/SkyworkVL_RM.pdf) repository.
|
|
|
## Introduction |
|
The scarcity of multimodal reward models has become a major bottleneck restricting the development of multimodal reinforcement learning.

We open-source the 7B multimodal reward model Skywork-VL-Reward, injecting new momentum into the field and opening a new chapter in multimodal reinforcement learning.
|
|
|
|
|
Skywork-VL-Reward is built on the [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) architecture, with a value head added on top for reward modeling.

It achieves a state-of-the-art score of 73.1 on [VL-RewardBench](https://vl-rewardbench.github.io/) and a strong score of 90.1 on [RewardBench](https://huggingface.co/spaces/allenai/reward-bench).

In addition, the MPO training used in Skywork-R1V-2.0 further validates the model's effectiveness.

We hope this multimodal reward model will benefit the open-source community!
|
Please refer to our technical report for more details. |
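Conceptually, the value head is a small projection on top of the backbone's final hidden states that maps each token to a scalar; the reward for a full response is read off at the last token. A minimal, simplified sketch of this idea (trl's actual `ValueHead` also includes dropout; this is not the exact training code):

```python
import torch
import torch.nn as nn

class ValueHead(nn.Module):
    """Simplified sketch of a value head: one scalar value per token."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.summary = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size) -> values: (batch, seq_len)
        return self.summary(hidden_states).squeeze(-1)

# The scalar reward for a sequence is the value at its last non-padded token,
# exactly as done in the inference code below.
```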
|
|
|
## Technical Report |
|
[Skywork-VL Reward: An Effective Reward Model for Multimodal Understanding and Reasoning](https://arxiv.org/abs/2505.07263) |
|
|
|
## Evaluation |
|
<h3 align="center">VL-RewardBench</h3> |
|
<table style="margin: auto;"> |
|
<thead> |
|
<tr> |
|
<th>Model Name</th><th>Model Size</th><th>General</th><th>Hallucination</th><th>Reasoning</th><th>Overall Accuracy</th><th>Macro Average</th> |
|
</tr> |
|
</thead> |
|
<tbody> |
|
<tr><td colspan="7" align="center"><i>Proprietary Models</td></tr> |
|
<tr><td>Claude-3.5-Sonnet (2024-06-22)</td><td>-</td><td>43.4</td><td>55.0</td><td>62.3</td><td>55.3</td><td>53.6</td></tr>

<tr><td>Gemini-1.5-Flash (2024-09-24)</td><td>-</td><td>47.8</td><td>59.6</td><td>58.4</td><td>57.6</td><td>55.3</td></tr>

<tr><td>GPT-4o (2024-08-06)</td><td>-</td><td>49.1</td><td>67.6</td><td>70.5</td><td>65.8</td><td>62.4</td></tr>

<tr><td>Gemini-1.5-Pro (2024-09-24)</td><td>-</td><td>50.8</td><td>72.5</td><td>64.2</td><td>67.2</td><td>62.5</td></tr>

<tr><td>Gemini-2.0-flash-exp (2024-12)</td><td>-</td><td>50.8</td><td>72.6</td><td>70.1</td><td><strong>68.8</strong></td><td><strong>64.5</strong></td></tr>
|
<tr><td colspan="7" align="center"><i>Open-Source Models</td></tr> |
|
<tr><td>Qwen2-VL-7B-Instruct</td><td>7B</td><td>31.6</td><td>19.1</td><td>51.1</td><td>28.3</td><td>33.9</td></tr> |
|
<tr><td>MAmmoTH-VL-8B</td><td>8B</td><td>36.0</td><td>40.0</td><td>52.0</td><td>42.2</td><td>42.7</td></tr> |
|
<tr><td>Qwen2.5-VL-7B-Instruct</td><td>7B</td><td>43.4</td><td>42.0</td><td>63.0</td><td>48.0</td><td>49.5</td></tr> |
|
<tr><td>InternVL3-8B</td><td>8B</td><td>60.6</td><td>44.0</td><td>62.3</td><td>57.0</td><td>55.6</td></tr> |
|
<tr><td>IXC-2.5-Reward-7B</td><td>7B</td><td>80.3</td><td>65.3</td><td>60.4</td><td>66.3</td><td>68.6</td></tr> |
|
<tr><td>Qwen2-VL-72B-Instruct</td><td>72B</td><td>38.1</td><td>32.8</td><td>58.0</td><td>39.5</td><td>43.0</td></tr> |
|
<tr><td>Molmo-72B-0924</td><td>72B</td><td>33.9</td><td>42.3</td><td>54.9</td><td>44.1</td><td>43.7</td></tr> |
|
<tr><td>QVQ-72B-Preview</td><td>72B</td><td>41.8</td><td>46.2</td><td>51.2</td><td>46.4</td><td>46.4</td></tr> |
|
<tr><td>Qwen2.5-VL-72B-Instruct</td><td>72B</td><td>47.8</td><td>46.8</td><td>63.5</td><td>51.6</td><td>52.7</td></tr> |
|
<tr><td>InternVL3-78B</td><td>78B</td><td>67.8</td><td>52.5</td><td>64.5</td><td>63.3</td><td>61.6</td></tr> |
|
<tr><td><strong>Skywork-VL Reward (Ours)</strong></td><td>7B</td><td>66.0</td><td>80.0</td><td>61.0</td><td><strong>73.1</strong></td><td><strong>69.0</strong></td></tr>
|
</tbody> |
|
</table> |
|
|
|
--- |
|
|
|
<h3 align="center">RewardBench</h3> |
|
<table style="margin: auto;"> |
|
<thead> |
|
<tr> |
|
<th>Model Name</th><th>Chat</th><th>Chat Hard</th><th>Safety</th><th>Reasoning</th><th>Score</th> |
|
</tr> |
|
</thead> |
|
<tbody> |
|
<tr><td colspan="7" align="center"><i>Language-Only Reward Models</td></tr> |
|
<tr><td>InternLM2-7B-Reward</td><td>99.2</td><td>69.5</td><td>87.2</td><td>94.5</td><td>87.6</td></tr> |
|
<tr><td>Skywork-Reward-Llama-3.1-8B</td><td>95.8</td><td>87.3</td><td>90.8</td><td>96.2</td><td>92.5</td></tr>
|
<tr><td>Skywork-Reward-Llama-3.1-8B-v0.2</td><td>94.7</td><td>88.4</td><td>92.7</td><td>96.7</td><td>93.1</td></tr> |
|
<tr><td>QRM-Llama3.1-8B-v2</td><td>96.4</td><td>86.8</td><td>92.6</td><td>96.8</td><td><strong>93.1</strong></td></tr> |
|
<tr><td colspan="7" align="center"><i>Multi-Modal Reward Models</td></tr> |
|
<tr><td>Qwen2-VL-7B-Instruct</td><td>65.1</td><td>50.9</td><td>55.8</td><td>68.3</td><td>60.0</td></tr> |
|
<tr><td>InternVL3-8B</td><td>97.2</td><td>50.4</td><td>83.6</td><td>83.9</td><td>78.8</td></tr> |
|
<tr><td>Qwen2.5-VL-7B-Instruct</td><td>94.3</td><td>63.8</td><td>84.1</td><td>86.2</td><td>82.1</td></tr> |
|
<tr><td>IXC-2.5-Reward-7B</td><td>90.8</td><td>83.8</td><td>87.8</td><td>90.0</td><td>88.1</td></tr> |
|
<tr><td><strong>Skywork-VL Reward (Ours)</strong></td><td>90.0</td><td>87.5</td><td>91.1</td><td>91.8</td><td><strong>90.1</strong></td></tr>
|
</tbody> |
|
</table> |
|
|
|
--- |
|
|
|
|
|
## Usage |
|
### Set Up the Environment |
|
|
|
```shell |
|
conda create -n vl-reward python=3.11 |
|
conda activate vl-reward |
|
bash setup.sh |
|
``` |
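If `setup.sh` is unavailable in your environment, the inference script below needs at least the following packages (a minimal sketch based on the imports used below; exact versions are not pinned here):

```shell
pip install torch transformers trl qwen-vl-utils safetensors accelerate
```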
|
|
|
### Run the Inference Code |
|
|
|
```python |
|
import torch |
|
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration |
|
from trl import AutoModelForCausalLMWithValueHead |
|
from qwen_vl_utils import process_vision_info |
|
from transformers.utils import cached_file |
|
from safetensors import safe_open |
|
|
|
|
|
processor = AutoProcessor.from_pretrained("Skywork/Skywork-VL-Reward-7B") |
|
# The default range for the number of visual tokens per image in the model is 4-16384. |
|
# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost. |
|
# min_pixels = 256*28*28 |
|
# max_pixels = 1280*28*28 |
|
# processor = AutoProcessor.from_pretrained("Skywork/Skywork-VL-Reward-7B", min_pixels=min_pixels, max_pixels=max_pixels) |
|
|
|
model = Qwen2_5_VLForConditionalGeneration.from_pretrained( |
|
"Skywork/Skywork-VL-Reward-7B", |
|
device_map="auto", |
|
torch_dtype=torch.bfloat16, |
|
) |
|
# We recommend enabling flash_attention_2 for better acceleration and memory saving |
|
# pip install flash-attn --no-build-isolation |
|
# |
|
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained( |
|
# "Skywork/Skywork-VL-Reward-7B", |
|
# device_map="auto", |
|
# torch_dtype=torch.bfloat16, |
|
# attn_implementation="flash_attention_2", |
|
# ) |
|
|
|
# Wrap the backbone with a value head, then load the trained head weights,
# which are stored separately in value_head.safetensors in the model repo.
model = AutoModelForCausalLMWithValueHead.from_pretrained(model)
|
vhead_file = cached_file( |
|
path_or_repo_id="Skywork/Skywork-VL-Reward-7B", filename="value_head.safetensors" |
|
) |
|
with safe_open(vhead_file, framework="pt", device="cpu") as f: |
|
vhead_params = {key: f.get_tensor(key) for key in f.keys()} |
|
model.load_state_dict(vhead_params, strict=False) |
|
model.requires_grad_(False) |
|
model.eval() |
|
|
|
# Expected reward score for the demo below: 23.89
# (23.76 if flash_attention_2 is enabled)
|
demo_image = "demo.jpg" |
|
demo_question = "Hint: Please answer the question and provide the correct option letter, e.g., A, B, C, D, at the end.\nQuestion: Is Purple the highest value?\nChoices:\n(A) no\n(B) yes" |
|
demo_answer = "The answer is: B" |
|
|
|
messages = [ |
|
{ |
|
"role": "user", |
|
"content": [ |
|
{ |
|
"type": "image", |
|
"image": demo_image, |
|
}, |
|
{ |
|
"type": "text", |
|
"text": demo_question, |
|
}, |
|
], |
|
}, |
|
{ |
|
"role": "assistant", |
|
"content": demo_answer, |
|
}, |
|
] |
|
text = processor.apply_chat_template( |
|
messages, tokenize=False, add_generation_prompt=False |
|
) |
|
image_inputs, video_inputs = process_vision_info(messages) |
|
inputs = processor( |
|
text=[text], |
|
images=image_inputs, |
|
videos=video_inputs, |
|
padding=True, |
|
return_tensors="pt", |
|
) |
|
inputs = inputs.to("cuda") |
|
# The value-head model's forward pass returns (lm_logits, loss, values);
# [-1] selects the per-token values of shape (batch, seq_len).
values = model(**inputs, return_dict=True, use_cache=False)[-1]
|
# The reward is the value at the last non-padded token, located via the attention mask.
scores = values.gather(
    dim=-1, index=(inputs["attention_mask"].sum(dim=-1, keepdim=True) - 1)
)
|
score = scores[0].item() |
|
print("Reward Score is: ", score) |
|
``` |
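A typical use of a reward model is ranking candidate responses: score each one and prefer the highest. Below is a minimal sketch reusing the `model`, `processor`, and message format from the script above; `score_response` is a hypothetical helper for illustration, not part of the released API.

```python
def score_response(image_path: str, question: str, answer: str) -> float:
    """Hypothetical helper: reward for one (image, question, answer) triple."""
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_path},
                {"type": "text", "text": question},
            ],
        },
        {"role": "assistant", "content": answer},
    ]
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=False
    )
    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        padding=True,
        return_tensors="pt",
    ).to("cuda")
    with torch.no_grad():
        values = model(**inputs, return_dict=True, use_cache=False)[-1]
    # Read the value at the last non-padded token as the scalar reward.
    last_token = inputs["attention_mask"].sum(dim=-1, keepdim=True) - 1
    return values.gather(dim=-1, index=last_token)[0].item()

# Compare two candidate answers to the same question; the higher reward wins.
score_a = score_response("demo.jpg", demo_question, "The answer is: A")
score_b = score_response("demo.jpg", demo_question, "The answer is: B")
print("Preferred answer:", "A" if score_a > score_b else "B")
```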
|
|
|
## Citation |
|
If you use this work in your research, please cite: |
|
``` |
|
@misc{wang2025skyworkvlrewardeffectivereward, |
|
title={Skywork-VL Reward: An Effective Reward Model for Multimodal Understanding and Reasoning}, |
|
author={Xiaokun Wang and Peiyu Wang and Jiangbo Pei and Wei Shen and Yi Peng and Yunzhuo Hao and Weijie Qiu and Ai Jian and Tianyidan Xie and Xuchen Song and Yang Liu and Yahui Zhou}, |
|
year={2025}, |
|
eprint={2505.07263}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CV}, |
|
url={https://arxiv.org/abs/2505.07263}, |
|
} |
|
``` |