Commit d7b715e (verified, parent 06bfb65) by Gracjan: Update README.md

Files changed (1): README.md (+52, -51)
README.md CHANGED
@@ -54,79 +54,80 @@ pretty_name: Isle
  size_categories:
  - n<1K
  ---

  <p align="center">
  <img src="isle.png" alt="" width="200">
  </p>

  ## Dataset Details

- The LEGO Visual Tasks Dataset is designed for research in visual perspective taking (VPT), scene understanding, and spatial reasoning. It contains 144 visual tasks inspired by human VPT tests, such as those detailed in O'Grady et al. (2020) and Lukosiunaite et al. (2024). Each task involves a minifigure-object pair built from LEGO components and systematically photographed in varying spatial arrangements and orientations.
-
- This dataset can be used to evaluate models' abilities in object recognition, spatial reasoning, and perspective taking, making it a valuable resource for studies in artificial intelligence, cognitive science, and computer vision.
-
- ## Dataset Sources
-
- - **Repository:** https://github.com/GracjanGoral/ISLE
- - **Paper:** https://arxiv.org/abs/2409.12969
- ## Direct Use
-
- The dataset is suitable for:
- - Training and evaluating models on visual perspective-taking tasks.
- - Studying scene understanding, spatial reasoning, and object recognition.
- - Testing the ability to answer diagnostic questions about the dataset's scenarios.
-
- ## Out-of-Scope Use
-
- The dataset should not be used for:
- - Malicious purposes, such as creating misleading or biased AI applications.
- - Tasks unrelated to the intended goals of visual reasoning and perspective taking.
- ## Dataset Structure
-
- The dataset consists of:
-
- - **Images:** Photographs of nine unique LEGO minifigure-object pairs, systematically varied by:
-   - **Spatial Position:** Object to the minifigure's left or right, behind it, or in front of it.
-   - **Minifigure Orientation:** Facing toward or away from the object.
-   - **Camera Viewpoint:** Bird's-eye view or surface-level view.
-   - **Image Resolution:** Each image is 4000 x 3000 pixels.
-
- - **Metadata File:** A structured file containing:
-   - `image_id`: Unique identifier for each image.
-   - `question_1` to `question_7`: Questions testing scene understanding, spatial reasoning, and visual perspective taking.
-   - `gold_answer`: The correct answer for each question.
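For illustration, a minimal sketch of reading one metadata row. Only the field names (`image_id`, `question_1`…`question_7`, `gold_answer`) come from the card; the file name `metadata.csv` and the use of pandas are assumptions.

```python
import pandas as pd

# Hypothetical file name; the card only says "a structured file".
meta = pd.read_csv("metadata.csv")

row = meta.iloc[0]
print(row["image_id"])           # unique identifier of the photograph
for q in range(1, 8):
    print(row[f"question_{q}"])  # the seven diagnostic questions
print(row["gold_answer"])        # correct answer as described in the card
```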
- ## Curation Rationale
-
- This dataset was created to systematically test visual reasoning models on tasks inspired by human cognitive processes. It aims to bridge the gap between machine learning performance and human abilities in tasks requiring scene understanding, spatial reasoning, and perspective taking.
- ## Data Collection and Processing
-
- The images were collected by:
- - Designing nine unique minifigure-object pairs using LEGO pieces.
- - Systematically varying spatial positions, orientations, and camera angles for each pair.
- - Photographing each arrangement under consistent lighting and surface conditions.
-
- All images were created manually, and each has a resolution of 4000 x 3000 pixels.
-
- ## Who are the source data producers?
-
- The source data producers include the dataset creator, Gracjan Goral, who designed the LEGO minifigure-object pairs and performed the systematic photography.
  ## Annotation Process

- - Three annotators labeled the data, and the gold answers were produced by majority vote among the annotators.
- - The labeling process achieved over 99% agreement among annotators.
-
- ## Personal and Sensitive Information
-
- The dataset does not contain any personal, sensitive, or private information. All data consists of LEGO objects and metadata derived from the tasks.
  ## Citation

- **BibTeX:**
- @dataset{LEGO_Visual_Tasks,
-   author = {Gracjan Goral},
-   title = {LEGO Visual Tasks Dataset},
-   year = {2025},
-   publisher = {Hugging Face},
-   note = {https://huggingface.co/datasets/lego_visual_tasks},
-   howpublished = {Used in article: https://arxiv.org/abs/2409.12969}
- }
-
- **APA:**
- Goral, G. (2025). LEGO Visual Tasks Dataset. Available at https://huggingface.co/datasets/lego_visual_tasks. Used in article: https://arxiv.org/abs/2409.12969.
  size_categories:
  - n<1K
  ---
+
  <p align="center">
  <img src="isle.png" alt="" width="200">
  </p>

  ## Dataset Details

+ The **Isle (I spy with my little eye)** dataset helps researchers study visual perspective taking (VPT), scene understanding, and spatial reasoning.
+ Visual perspective taking is the ability to imagine the world from someone else's viewpoint. This skill is important for everyday tasks like driving safely,
+ coordinating actions with others, or knowing when it's your _turn to speak_.
+
+ This dataset includes high-quality images (over 11 Mpix) and consists of three subsets:
+
+ - **Isle-Bricks v1**
+ - **Isle-Bricks v2**
+ - **Isle-Dots**
+
+ The subsets **Isle-Bricks v1** and **Isle-Dots** come from the study *Seeing Through Their Eyes: Evaluating Visual Perspective Taking in Vision Language Models*,
+ and were created to test Vision Language Models (VLMs).
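A minimal loading sketch with the `datasets` library. The repository id `Gracjan/Isle` and the configuration name `Isle-Bricks-v2` are assumptions inferred from the subset names above; check the dataset page for the exact identifiers.

```python
from datasets import load_dataset

# Hypothetical repo and config ids inferred from the subset names;
# verify them on the dataset page before use.
bricks_v2 = load_dataset("Gracjan/Isle", "Isle-Bricks-v2", split="train")

example = bricks_v2[0]
print(example.keys())  # expect an image plus question/answer fields
```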
 
 
+ **Isle-Bricks v2** provides additional images of LEGO minifigures from two viewpoints (see Figure 1 for example images):
+
+ - **surface-level** view
+ - **bird's-eye** view
+ <p align="center">
+ <img src="example.png" alt="" width="700">
+ <figcaption align="center">
+ Figure 1. Example images from the datasets: bottom left, Isle-Bricks v1;
+ bottom right, Isle-Dots; top left, Isle-Bricks v2 (surface-level);
+ and top right, Isle-Bricks v2 (bird's-eye view).
+ </figcaption>
+ </p>
 
 
 
 
 
+ The Isle-Bricks v2 subset includes seven questions (Q1–Q7) to test visual perspective taking and related skills (a scoring sketch follows the list):
+
+ - **Q1:** _List and count all objects in the image that are not humanoid minifigures._
+ - **Q2:** _How many humanoid minifigures are in the image?_
+ - **Q3:** _Are the humanoid minifigure and the object on the same surface?_
+ - **Q4:** _In which cardinal direction (north, west, east, or south) is the object located relative to the humanoid minifigure?_
+ - **Q5:** _Which direction (north, west, east, or south) is the humanoid minifigure facing?_
+ - **Q6:** _Assuming the humanoid minifigure can see and its eyes are open, does it see the object?_
+ - **Q7:** _From the perspective of the humanoid minifigure, where is the object located relative to it (front, left, right, or back)?_
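As referenced above, a sketch of exact-match scoring over these questions. The field names follow the metadata schema described in the earlier version of this card (`image_id`, `question_1`…`question_7`, `gold_answer`); how gold answers are keyed per question, and the normalization, are assumptions.

```python
def normalize(answer: str) -> str:
    # Illustrative normalization: case-, whitespace-, and period-insensitive.
    return answer.strip().lower().rstrip(".")

def evaluate(records, model):
    """Overall exact-match accuracy of `model` across the seven questions.

    Assumes record["gold_answer"] maps question keys to gold strings,
    e.g. record["gold_answer"]["question_4"]; adapt the indexing if the
    released metadata stores answers differently.
    """
    hits = total = 0
    for record in records:
        for q in range(1, 8):
            key = f"question_{q}"
            prediction = model(record["image_id"], record[key])
            hits += normalize(prediction) == normalize(record["gold_answer"][key])
            total += 1
    return hits / total
```

Here `model` is any callable mapping an image id and a question string to an answer string.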
 
+ **Psychologists can also use this dataset to study human visual perception and understanding.**
+
+ Another related dataset is [BlenderGaze](https://huggingface.co/datasets/Gracjan/BlenderGaze), containing over **2,000** images generated using Blender.
+ ## Dataset Sources
+
+ - **Repository:** [GitHub](https://github.com/GracjanGoral/ISLE)
+ - **Paper:** [arXiv](https://arxiv.org/abs/2409.12969)
  ## Annotation Process

+ - Three annotators labeled images, and final answers were based on a majority vote.
+ - Annotators agreed on labels over 99% of the time (a minimal aggregation sketch follows).
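A minimal sketch of the aggregation described above: a majority vote over three annotators, plus one plausible reading of the agreement statistic (the fraction of unanimously labeled items). The data below is purely illustrative.

```python
from collections import Counter

def majority_label(labels):
    """Final answer: the most common of the annotators' labels."""
    return Counter(labels).most_common(1)[0][0]

def agreement_rate(items):
    """Fraction of items on which all annotators gave the same label."""
    unanimous = sum(len(set(labels)) == 1 for labels in items)
    return unanimous / len(items)

# Illustrative labels from three annotators per image.
labels = [["yes", "yes", "yes"], ["left", "left", "right"]]
print([majority_label(l) for l in labels])  # ['yes', 'left']
print(agreement_rate(labels))               # 0.5
```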
 
 
 
 
  ## Citation

+ **BibTeX:**
+ ```bibtex
+ @misc{góral2024seeingeyesevaluatingvisual,
+   title={Seeing Through Their Eyes: Evaluating Visual Perspective Taking in Vision Language Models},
+   author={Gracjan Góral and Alicja Ziarko and Michal Nauman and Maciej Wołczyk},
+   year={2024},
+   eprint={2409.12969},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL},
+   url={https://arxiv.org/abs/2409.12969},
+ }
+ ```
+
+ **APA:**
+ Góral, G., Ziarko, A., Nauman, M., & Wołczyk, M. (2024). Seeing through their eyes: Evaluating visual perspective taking in vision language models. arXiv. https://arxiv.org/abs/2409.12969