arXiv:2507.15028

Towards Video Thinking Test: A Holistic Benchmark for Advanced Video Reasoning and Understanding

Published on Jul 20 · Submitted by ZhangYuanhan on Jul 22
Abstract

Video-TT assesses video LLMs' correctness and robustness in interpreting real-world videos through open-ended and adversarial questions.

AI-generated summary

Human intelligence requires both correctness and robustness, with the former being foundational for the latter. In video understanding, correctness ensures the accurate interpretation of visual content, while robustness maintains consistent performance under challenging conditions. Despite advances in video large language models (video LLMs), existing benchmarks inadequately reflect the gap between these models and human intelligence in maintaining correctness and robustness in video interpretation. We introduce the Video Thinking Test (Video-TT) to assess whether video LLMs can interpret real-world videos as effectively as humans. Video-TT reflects genuine gaps in understanding complex visual narratives and evaluates robustness against natural adversarial questions. It comprises 1,000 YouTube Shorts videos, each paired with one open-ended question and four adversarial questions that probe visual and narrative complexity. Our evaluation shows a significant gap between video LLMs and human performance.



Datasets citing this paper: 1