---
license: cc-by-4.0
configs:
  - config_name: Isle-Brick-V2
    data_files:
      - split: test
        path: Isle-Brick-V2/*
    features:
      - name: image
        dtype: image
      - name: Q1
        dtype: int64
      - name: Q2
        dtype: int64
      - name: Q3
        dtype: string
      - name: Q4
        sequence: string
      - name: Q5
        sequence: string
      - name: Q6
        dtype: string
      - name: Q7
        sequence: string
  - config_name: Isle-Brick-V2-no_object
    data_files:
      - split: test
        path: Isle-Brick-V2-no_object/*
    features:
      - name: image
        dtype: image
      - name: Q5
        sequence: string
  - config_name: Isle-Brick-V2-visual_hint
    data_files:
      - split: test
        path: Isle-Brick-V2-visual_hint/*
    features:
      - name: image
        dtype: image
      - name: Q5
        sequence: string
  - config_name: Isle-Brick-V2-human
    data_files:
      - split: test
        path: Isle-Brick-V2-human/*
    features:
      - name: image
        dtype: image
      - name: Q5
        sequence: string
  - config_name: Isle-Brick-V2-zoom
    data_files:
      - split: test
        path: Isle-Brick-V2-zoom/*
    features:
      - name: image
        dtype: image
      - name: Q5
        sequence: string
      - name: zoom
        dtype: string
  - config_name: Isle-Brick-V1
    data_files:
      - split: test
        path: Isle-Brick-V1/*
    features:
      - name: image
        dtype: image
      - name: prompt
        dtype: string
      - name: label
        dtype: int64
  - config_name: Isle-Dots
    data_files:
      - split: test
        path: Isle-Dots/*
    features:
      - name: image
        dtype: image
      - name: level
        dtype: int64
      - name: prompt
        dtype: string
      - name: label
        dtype: int64
task_categories:
  - visual-question-answering
tags:
  - VPT
pretty_name: Isle
size_categories:
  - n<1K
---

Dataset Details

The Isle (I spy with my little eye) dataset helps researchers study visual perspective taking (VPT), scene understanding, and spatial reasoning.

Visual perspective taking is the ability to imagine the world from someone else's viewpoint. This skill is important
for everyday tasks like driving safely, coordinating actions with others, or knowing when it's your turn to speak.

This dataset includes high-quality images (over 11 megapixels) and consists of three subsets (a loading sketch follows the list):

  • Isle-Bricks v1
  • Isle-Bricks v2
  • Isle-Dots
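
Each subset is a separate config of this dataset, loadable with the Hugging Face datasets library. A minimal loading sketch, assuming the repository id Gracjan/Isle (hypothetical; substitute this card's actual Hub path):

```python
from datasets import load_dataset

# Hypothetical repo id -- substitute this card's actual Hub path.
REPO_ID = "Gracjan/Isle"

# Config names match the YAML header; every config ships a single "test" split.
bricks_v2 = load_dataset(REPO_ID, "Isle-Brick-V2", split="test")  # image, Q1-Q7
bricks_v1 = load_dataset(REPO_ID, "Isle-Brick-V1", split="test")  # image, prompt, label
dots = load_dataset(REPO_ID, "Isle-Dots", split="test")           # image, level, prompt, label

print(bricks_v2.features)
```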

The subsets Isle-Bricks v1 and Isle-Dots come from the study Seeing Through Their Eyes: Evaluating Visual Perspective Taking in Vision Language Models, and were created to test Vision Language Models (VLMs).

Isle-Bricks v2 comes from the study Beyond Recognition: Evaluating Visual Perspective Taking in Vision Language Models and provides additional images of Lego minifigures from two viewpoints (see Figure 1):

  • surface-level view
  • bird’s eye view

Figure 1. Example images from the datasets: bottom left, Isle-Bricks v1; bottom right, Isle-Dots; top left, Isle-Bricks v2 (surface-level); and top right, Isle-Bricks v2 (bird's-eye view).

The Isle-Bricks v2 subset includes seven questions (Q1–Q7) to test visual perspective taking and related skills (see the field-inspection sketch after the list):

  • Q1: List and count all objects in the image that are not humanoid minifigures.
  • Q2: How many humanoid minifigures are in the image?
  • Q3: Are the humanoid minifigure and the object on the same surface?
  • Q4: In which cardinal direction (north, west, east, or south) is the object located relative to the humanoid minifigure?
  • Q5: Which direction (north, west, east, or south) is the humanoid minifigure facing?
  • Q6: Assuming the humanoid minifigure can see and its eyes are open, does it see the object?
  • Q7: From the perspective of the humanoid minifigure, where is the object located relative to it (front, left, right, or back)?
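
Per the YAML header, Q1 and Q2 are integers, Q3 and Q6 are strings, and Q4, Q5, and Q7 are sequences of strings (multiple accepted answers). A field-inspection sketch, again assuming the hypothetical Gracjan/Isle repository id:

```python
from datasets import load_dataset

# Hypothetical repo id -- see the loading sketch above.
sample = load_dataset("Gracjan/Isle", "Isle-Brick-V2", split="test")[0]

print("Q1 object count:       ", sample["Q1"])  # int64
print("Q2 minifigure count:   ", sample["Q2"])  # int64
print("Q3 same surface:       ", sample["Q3"])  # string
print("Q4 object direction:   ", sample["Q4"])  # list of accepted strings
print("Q5 facing direction:   ", sample["Q5"])  # list of accepted strings
print("Q6 sees the object:    ", sample["Q6"])  # string
print("Q7 egocentric location:", sample["Q7"])  # list of accepted strings
```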

Psychologists can also use this dataset to study human visual perception and understanding.

Another related dataset is BlenderGaze, containing over 2,000 images generated using Blender.

Dataset Sources

  • Isle-Bricks v2: https://arxiv.org/abs/2505.03821 (Beyond Recognition)
  • Isle-Bricks v1 and Isle-Dots: https://arxiv.org/abs/2409.12969 (Seeing Through Their Eyes)

Annotation Process

  • Three annotators labeled images, and final answers were based on the majority vote (sketched after this list).
  • Annotators agreed on labels over 99% of the time.
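
The aggregation is plain majority voting; an illustrative sketch (not the authors' code):

```python
from collections import Counter

def majority_vote(annotations):
    """Return the most frequent annotator label (ties broken arbitrarily)."""
    return Counter(annotations).most_common(1)[0][0]

# Example: three annotators answering Q6 for one image.
assert majority_vote(["yes", "yes", "no"]) == "yes"
```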

Citation

@misc{góral2025recognitionevaluatingvisualperspective,
      title={Beyond Recognition: Evaluating Visual Perspective Taking in Vision Language Models}, 
      author={Gracjan Góral and Alicja Ziarko and Piotr Miłoś and Michał Nauman and Maciej Wołczyk and Michał Kosiński},
      year={2025},
      eprint={2505.03821},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2505.03821}, 
}

@misc{góral2024seeingeyesevaluatingvisual,
      title={Seeing Through Their Eyes: Evaluating Visual Perspective Taking in Vision Language Models}, 
      author={Gracjan Góral and Alicja Ziarko and Michal Nauman and Maciej Wołczyk},
      year={2024},
      eprint={2409.12969},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.12969}, 
}

Góral, G., Ziarko, A., Miłoś, P., Nauman, M., Wołczyk, M., & Kosiński, M. (2025). Beyond recognition: Evaluating visual perspective taking in vision language models. arXiv. https://arxiv.org/abs/2505.03821

Góral, G., Ziarko, A., Nauman, M., & Wołczyk, M. (2024). Seeing through their eyes: Evaluating visual perspective taking in vision language models. arXiv. https://arxiv.org/abs/2409.12969