Populate dataset card for IR3D-Bench
This PR populates the dataset card for the IR3D-Bench dataset.
It adds:
- Comprehensive metadata, including `task_categories: image-to-3d`, `license: cc-by-nc-4.0`, relevant `tags` (inverse-rendering, vlm, 3d, scene-understanding, benchmark), and `library_name: datasets`.
- Links to the paper ([https://huggingface.co/papers/2506.23329](https://huggingface.co/papers/2506.23329)), the project page ([https://ir3d-bench.github.io/](https://ir3d-bench.github.io/)), and the associated code repository on GitHub ([https://github.com/Piang/IR3D-bench](https://github.com/Piang/IR3D-bench)).
- The paper abstract, motivation, and pipeline overview to provide extensive context.
- Detailed sample usage instructions for environment setup, inverse rendering, and evaluation, extracted from the project's GitHub README.
- The official BibTeX citation.
These additions improve the discoverability and completeness of the IR3D-Bench dataset on the Hugging Face Hub.
@@ -0,0 +1,115 @@
---
task_categories:
- image-to-3d
license: cc-by-nc-4.0
tags:
- inverse-rendering
- vlm
- 3d
- scene-understanding
- benchmark
library_name: datasets
---

# IR3D-Bench: Evaluating Vision-Language Model Scene Understanding as Agentic Inverse Rendering

This repository contains the dataset and evaluation protocols for [IR3D-Bench: Evaluating Vision-Language Model Scene Understanding as Agentic Inverse Rendering](https://huggingface.co/papers/2506.23329).

* **Project Page:** [https://ir3d-bench.github.io/](https://ir3d-bench.github.io/)
* **Code (GitHub):** [https://github.com/Piang/IR3D-bench](https://github.com/Piang/IR3D-bench)

## Abstract

Vision-language models (VLMs) excel at descriptive tasks, but whether they truly understand scenes from visual observations remains uncertain. We introduce IR3D-Bench, a benchmark challenging VLMs to demonstrate understanding through active creation rather than passive recognition. Grounded in the analysis-by-synthesis paradigm, IR3D-Bench tasks Vision-Language Agents (VLAs) with actively using programming and rendering tools to recreate the underlying 3D structure of an input image, achieving agentic inverse rendering through tool use. This "understanding-by-creating" approach probes the tool-using generative capacity of VLAs, moving beyond the descriptive or conversational capacity measured by traditional scene understanding benchmarks. We provide a comprehensive suite of metrics to evaluate geometric accuracy, spatial relations, appearance attributes, and overall plausibility. Initial experiments on agentic inverse rendering powered by various state-of-the-art VLMs highlight current limitations, particularly in visual precision rather than basic tool usage. IR3D-Bench, including data and evaluation protocols, is released to facilitate systematic study and development of tool-using VLAs towards genuine scene understanding by creating.

## Motivation & Useful Findings

1. Inspired by Richard Feynman's aphorism ("What I cannot create, I do not understand."), we propose a new perspective for evaluating VLMs' spatial visual understanding via a pretext task: how well they can "recreate this scene."
2. We find that the goal of scene reconstruction drives VLMs to spontaneously estimate key attributes (object ID, localization, color, material, object relations, etc.) in an inverse-rendering fashion, which is critical for understanding what they see.
3. VLMs show surprising potential for human-like reflection during this "recreation" game: when fed their own recreated scenes, they compare them with the originals and update their understanding of the scene (the key attributes they estimate). We expect this multi-round feedback loop to unlock further improvements to existing VLMs in both understanding and generation.

## Pipeline Overview

<p align="center">
  <img src="https://ir3d-bench.github.io/assets/main_pipeline.png" alt="Pipeline"/>
</p>

## Dataset Setup

This dataset contains the processed data for the IR3D-Bench benchmark. You can download the data directly from this Hugging Face dataset repository ([`Piang/IR3D-bench`](https://huggingface.co/datasets/Piang/IR3D-bench)).

To download the processed data, ensure you have Git LFS installed and then clone the repository:

```bash
git lfs install
git clone https://huggingface.co/datasets/Piang/IR3D-bench
```
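
If you prefer the Hub's Python client to Git LFS, the same files can also be fetched with `huggingface_hub` (a minimal sketch; the local target directory is an arbitrary choice, not something prescribed by the repository):

```python
# Download the dataset repository via huggingface_hub instead of git clone.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Piang/IR3D-bench",
    repo_type="dataset",       # this is a dataset repository, not a model
    local_dir="IR3D-bench",    # example destination; pick any path you like
)
print("Dataset files downloaded to:", local_dir)
```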

## Sample Usage (Inverse Rendering & Evaluation)

To run inverse rendering tasks with state-of-the-art Vision-Language Models (VLMs) and evaluate their performance, refer to the following instructions derived from the project's GitHub repository.

### Environment setup

1. **Create Environment:**
   ```shell
   conda create --name ir3d python=3.10
   conda activate ir3d
   ```
2. **Install vllm:**
   ```shell
   pip install vllm
   ```
3. **Install Blender** (on Linux):
   ```shell
   snap install blender --classic
   ```
4. **Install SAM:**
   ```shell
   pip install git+https://github.com/facebookresearch/segment-anything.git
   ```
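
After completing the steps above, a quick check can confirm that the tools are visible (a minimal sketch, assuming the `ir3d` environment from step 1 is active):

```python
# Optional sanity check for the environment set up above.
import shutil

# Blender is installed system-wide via snap, so look for it on the PATH.
blender_path = shutil.which("blender")
print("blender:", blender_path or "NOT FOUND")

# vLLM and SAM are Python packages; try importing them.
for module in ("vllm", "segment_anything"):
    try:
        __import__(module)
        print(f"{module}: importable")
    except ImportError as err:
        print(f"{module}: NOT importable ({err})")
```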

### Inverse Rendering

The task prompts for inverse rendering and for GPT-4o scoring are `prompts/vlm_estimate_params.txt` and `prompts/gpt4o_as_evaluator.txt`, respectively, within the code repository.

#### Latest Proprietary Models

Modify `model-name` as needed, e.g. "gpt-4o", "grok-3", etc.
```shell
python main_api.py \
    --image_dir /path/to/images \
    --result_dir /output/path \
    --prompt_path prompts/vlm_estimate_params.txt \
    --model_name "model-name"
```

#### Open-source Models

Modify `model-name` to one of the models defined in `main_vllm.py` to select the required model.
```shell
python main_vllm.py --model-type "model-name"
```
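
For illustration, the sketch below shows roughly what a single estimation request to a proprietary model (the `main_api.py` route above) involves: the image is base64-encoded and sent together with the prompt from `prompts/vlm_estimate_params.txt` via an OpenAI-compatible client. It is a simplified stand-in, not the repository's script; the image path and model name are placeholders.

```python
# Illustrative single-image parameter-estimation request (placeholders only;
# the actual batch pipeline is implemented in main_api.py).
import base64
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

with open("prompts/vlm_estimate_params.txt") as f:
    prompt = f.read()

with open("example_scene.png", "rb") as f:  # placeholder input image
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable chat model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)  # the estimated scene parameters
```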

### Evaluation

To calculate metrics:

```shell
bash cal_metric.sh "/output/path" "/path/to/images" "GPI_ID"
```

For more detailed setup and usage, please refer to the [IR3D-Bench GitHub repository](https://github.com/Piang/IR3D-bench).

## Citation

If you find our work helpful, please consider citing:

```bibtex
@article{liu2025ir3d,
  title={IR3D-Bench: Evaluating Vision-Language Model Scene Understanding as Agentic Inverse Rendering},
  author={Liu, Parker and Li, Chenxin and Li, Zhengxin and Wu, Yipeng and Li, Wuyang and Yang, Zhiqin and Zhang, Zhenyuan and Lin, Yunlong and Han, Sirui and Feng, Brandon Y},
  journal={arXiv preprint arXiv:2506.23329},
  year={2025}
}
```