Improve dataset card: Add paper, code, task category, and details

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +74 -3
README.md CHANGED
@@ -1,7 +1,78 @@
  ---
+ language:
+ - en
  license: mit
  size_categories:
  - 100K<n<1M
- language:
- - en
- ---
+ task_categories:
+ - other
+ library_name: datasets
+ tags:
+ - 3d
+ - spatial-reasoning
+ - segmentation
+ - vision-language
+ - scannet
+ - embodied-ai
+ ---
+
+ # SURPRISE3D: A Dataset for Spatial Understanding and Reasoning in Complex 3D Scenes
+
+ This repository contains the dataset for the paper [SURPRISE3D: A Dataset for Spatial Understanding and Reasoning in Complex 3D Scenes](https://huggingface.co/papers/2507.07781).
+
+ **Codebase**: [https://github.com/hhllzz/surprise-3d](https://github.com/hhllzz/surprise-3d)
+
+ <p align="center">
+ <img src="https://github.com/hhllzz/surprise-3d/raw/main/assets/task.png" alt="overview" width="800" />
+ </p>
+
+ We introduce **Surprise3D**, a novel dataset designed to evaluate **language-guided spatial reasoning segmentation** in complex 3D scenes. The integration of language and 3D perception is critical for embodied AI and robotic systems to perceive, understand, and interact with the physical world. Spatial reasoning, a key capability for understanding spatial relationships between objects, remains underexplored in current 3D vision-language research.
+
+ Existing datasets often mix semantic cues (e.g., object names) with spatial context, leading models to rely on superficial shortcuts rather than genuinely interpreting spatial relationships. To address this gap, Surprise3D consists of more than 200k vision-language pairs across 900+ detailed indoor scenes from ScanNet++ v2, including more than 2.8k unique object classes. The dataset contains 89k+ human-annotated spatial queries deliberately crafted without object names, thereby mitigating shortcut biases in spatial understanding.
+
+ These queries comprehensively cover various spatial reasoning skills, such as:
+ - **Relative position** (e.g., "Find the object behind the chair.")
+ - **Narrative perspective** (e.g., "Locate the object visible from the sofa.")
+ - **Parametric perspective** (e.g., "Select the object 2 meters to the left of the table.")
+ - **Absolute distance reasoning** (e.g., "Identify the object exactly 3 meters in front of you.")
+
+ Initial benchmarks demonstrate significant challenges for current state-of-the-art expert 3D visual grounding methods and 3D-LLMs, underscoring the necessity of our dataset and the accompanying 3D Spatial Reasoning Segmentation (3D-SRS) benchmark suite. Surprise3D and 3D-SRS aim to facilitate advancements in spatially aware AI, paving the way for effective embodied interaction and robotic planning.
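+
+ To get started with the files in this repository, the snippet below is a minimal sketch using the standard Hugging Face Hub tooling; it is not an official loader. The repository id shown is a placeholder, and the file probing is only meant to help you discover the actual annotation layout.
+
+ ```python
+ # Minimal sketch: download this dataset repository and inspect its files.
+ # NOTE: "your-namespace/surprise3d" is a placeholder repo id, and the JSON
+ # probing below is an assumption about the layout, not a documented schema.
+ import json
+ from pathlib import Path
+
+ from huggingface_hub import snapshot_download
+
+ local_dir = snapshot_download(
+     repo_id="your-namespace/surprise3d",  # placeholder: replace with this repo's id
+     repo_type="dataset",
+ )
+
+ # List a few downloaded JSON files to discover the annotation format.
+ for path in sorted(Path(local_dir).rglob("*.json"))[:5]:
+     with open(path) as f:
+         data = json.load(f)
+     print(path.name, type(data).__name__, len(data) if hasattr(data, "__len__") else "")
+ ```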
40
+
41
+ ---
42
+ ## 🔍 Data Analysis
43
+
44
+ <p align="center">
45
+ <img src="https://github.com/hhllzz/surprise-3d/raw/main/assets/data_analysis.png" alt="Data Analysis" width="800" />
46
+ </p>
47
+
48
+ We provide a detailed analysis of the dataset:
49
+ 1. **Augmentation for Low-Frequency Objects**: Boosting the number of questions targeting rarely occurring objects to improve model robustness.
50
+ 2. **Object Frequency (%) by Question Type (Top 15 Objects)**: Examining how frequently the top 15 objects are referenced across different question types.
51
+ 3. **Distribution of Question Types**: Visualizing the proportion of questions across various reasoning categories.
52
+
53
+ Our dataset ensures a balanced distribution of reasoning types and incorporates augmentation techniques to reduce biases caused by object frequency disparities. This analysis supports the development of models that generalize better across diverse reasoning tasks.
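+
+ For readers who want to reproduce analyses of this kind on their own splits, the sketch below shows one way to tabulate the question-type distribution and per-type object frequency. The record fields (`object_name`, `question_type`) are assumptions made for illustration, not the dataset's documented schema.
+
+ ```python
+ # Illustrative sketch of the frequency analyses described above.
+ # The field names ("object_name", "question_type") are assumptions,
+ # not the dataset's documented schema.
+ from collections import Counter, defaultdict
+
+ records = [  # toy stand-in for the real annotations
+     {"object_name": "chair", "question_type": "relative position"},
+     {"object_name": "table", "question_type": "parametric perspective"},
+     {"object_name": "chair", "question_type": "narrative perspective"},
+ ]
+
+ # Distribution of question types.
+ type_counts = Counter(r["question_type"] for r in records)
+ print(type_counts)
+
+ # Object frequency (%) by question type, top 15 objects per type.
+ per_type = defaultdict(Counter)
+ for r in records:
+     per_type[r["question_type"]][r["object_name"]] += 1
+
+ for qtype, counts in per_type.items():
+     total = sum(counts.values())
+     top15 = {obj: round(100 * n / total, 1) for obj, n in counts.most_common(15)}
+     print(qtype, top15)
+ ```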
+
+ ---
+
+ ## ⚙️ Train and Evaluation
+
+ We have modified parts of the [Reason3D](https://github.com/KuanchihHuang/Reason3D) codebase to support training and testing on our **Surprise3D** dataset. These modifications enable the preprocessing of **ScanNet++** data and the use of **Reason3D** for segmentation tasks on **Surprise3D**.
+
+ Please refer to the `Models/reason3d` directory within the [codebase repository](https://github.com/hhllzz/surprise-3d) for scripts that preprocess the **ScanNet++** data required by **Surprise3D**, as well as for training and evaluation with **Reason3D**.
+
+ These updates allow us to leverage the powerful capabilities of **Reason3D** while ensuring compatibility with the unique structure and annotations of **Surprise3D**.
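+
+ The exact evaluation protocol is implemented in the codebase above; as a rough reference point, reasoning-segmentation benchmarks are commonly scored with per-query mask IoU over scene points. The snippet below is only an illustrative sketch of that computation, not the official 3D-SRS evaluation script.
+
+ ```python
+ # Illustrative per-query mask IoU for point-cloud segmentation.
+ # This is a sketch of a common metric, not the official 3D-SRS evaluator.
+ import numpy as np
+
+ def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
+     """IoU between two boolean per-point masks of the same scene."""
+     pred = pred.astype(bool)
+     gt = gt.astype(bool)
+     union = np.logical_or(pred, gt).sum()
+     if union == 0:
+         return 1.0  # both masks empty: count as a perfect match
+     return float(np.logical_and(pred, gt).sum() / union)
+
+ # Toy example with a 6-point "scene".
+ pred = np.array([1, 1, 0, 0, 1, 0])
+ gt = np.array([1, 0, 0, 0, 1, 1])
+ print(mask_iou(pred, gt))  # 0.5
+ ```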
+
+ ---
+ ## Citation
+
+ If you find our dataset or work useful for your research, please consider citing the paper:
+
+ ```bibtex
+ @inproceedings{huang2024surprise3d,
+   title={SURPRISE3D: A Dataset for Spatial Understanding and Reasoning in Complex 3D Scenes},
+   author={Jiaxin Huang and Ziwen Li and Hanlue Zhang},
+   booktitle={The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
+   year={2024},
+   url={https://huggingface.co/papers/2507.07781}
+ }
+ ```