---
license: apache-2.0
configs:
  - config_name: object_category
    data_files:
      - split: test
        path:
          - annotations/object_category.jsonl
  - config_name: object_number
    data_files:
      - split: test
        path:
          - annotations/object_number.jsonl
  - config_name: object_color
    data_files:
      - split: test
        path:
          - annotations/object_color.jsonl
  - config_name: spatial_relation
    data_files:
      - split: test
        path:
          - annotations/spatial_relation.jsonl
  - config_name: scene
    data_files:
      - split: test
        path:
          - annotations/scene.jsonl
  - config_name: camera_angle
    data_files:
      - split: test
        path:
          - annotations/camera_angle.jsonl
  - config_name: OCR
    data_files:
      - split: test
        path:
          - annotations/OCR.jsonl
  - config_name: style
    data_files:
      - split: test
        path:
          - annotations/style.jsonl
  - config_name: character_identification
    data_files:
      - split: test
        path:
          - annotations/character_identification.jsonl
  - config_name: dynamic_object_number
    data_files:
      - split: test
        path:
          - annotations/dynamic_object_number.jsonl
  - config_name: action
    data_files:
      - split: test
        path:
          - annotations/action.jsonl
  - config_name: camera_movement
    data_files:
      - split: test
        path:
          - annotations/camera_movement.jsonl
  - config_name: event
    data_files:
      - split: test
        path:
          - annotations/event.jsonl
---

# Dataset Card for CAPability

[🍎 Project Page] [πŸ“– ArXiv Paper] [πŸ§‘β€πŸ’» Github Repo] [πŸ† Leaderboard]

## Dataset Details

Visual captioning benchmarks have become outdated with the emergence of modern MLLMs, as the brief ground-truth sentences and traditional metrics fail to assess detailed captions effectively. While recent benchmarks attempt to address this by focusing on keyword extraction or object-centric evaluation, they remain limited to vague-view or object-view analyses and incomplete visual element coverage. We introduce CAPability, a comprehensive multi-view benchmark for evaluating visual captioning across 12 dimensions spanning six critical views. We curate nearly 11K human-annotated images and videos with visual element annotations to evaluate the generated captions. CAPability stably assesses both the correctness and thoroughness of captions using F1-score. By converting annotations to QA pairs, we further introduce a heuristic metric, know but cannot tell ($K\bar{T}$), indicating a significant performance gap between QA and caption capabilities. Our work provides the first holistic analysis of MLLMs' captioning abilities, identifying their strengths and weaknesses across dimensions and guiding future research toward enhancing specific capabilities.
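
As a rough illustration of how these two metrics relate (a sketch only, not the official scoring code, which lives in the GitHub repo), the F1-score combines correctness (precision over what the caption states) with thoroughness (recall over the annotated visual elements), while $K\bar{T}$ measures how often an element the model answers correctly in QA is nonetheless missing from its caption. All counts and sets below are hypothetical inputs:

```python
# Illustrative sketch only; see the GitHub repo for the exact scoring implementation.

def f1_score(correct_mentions: int, total_mentions: int, total_annotations: int) -> float:
    """Harmonic mean of correctness (precision) and thoroughness (recall)."""
    if total_mentions == 0 or total_annotations == 0:
        return 0.0
    precision = correct_mentions / total_mentions      # correctness: how much of the caption is right
    recall = correct_mentions / total_annotations      # thoroughness: how many annotated elements are covered
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

def know_but_cannot_tell(qa_correct: set, caption_correct: set) -> float:
    """Share of elements answered correctly in QA but absent from (or wrong in) the caption."""
    return len(qa_correct - caption_correct) / len(qa_correct) if qa_correct else 0.0

print(f1_score(correct_mentions=6, total_mentions=8, total_annotations=10))       # 0.666...
print(know_but_cannot_tell({"dog", "red car", "kitchen"}, {"dog"}))               # 0.666...
```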

## Uses

### Direct Use

You can directly download the data folder, unzip all zip files, and place the data in the root directory of the GitHub repo. Then follow the instructions in the GitHub repo to run inference and evaluation.
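
A minimal sketch of the download and loading steps in Python, assuming the dataset repo id is `lntzm/CAPability` and that the media archives are stored as zip files alongside the annotation files (adjust the repo id and target directory to match the layout the GitHub repo expects):

```python
from pathlib import Path
import zipfile

from datasets import load_dataset
from huggingface_hub import snapshot_download

# 1. Download the whole dataset repo (annotation JSONL files + zipped media).
local_dir = snapshot_download(
    repo_id="lntzm/CAPability",   # assumed repo id; replace if it differs
    repo_type="dataset",
    local_dir="CAPability",
)

# 2. Unzip every archive in place so the media files sit next to the annotations.
for zip_path in Path(local_dir).rglob("*.zip"):
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(zip_path.parent)

# 3. Each config listed in the metadata above maps to one annotation file,
#    e.g. the "action" config loads annotations/action.jsonl.
action = load_dataset("lntzm/CAPability", "action", split="test")
print(action[0])
```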

### Use with lmms-eval

For convenience, CAPability is also supported in lmms-eval, which you can use to run inference and evaluation.

## Copyright

CAPability is intended for academic research only; commercial use in any form is prohibited. The copyright of all images and videos belongs to their original owners. If any content in CAPability infringes your rights, please email liuzhihang@mail.ustc.edu.cn and we will remove it immediately. Without prior approval, you may not distribute, publish, copy, disseminate, or modify CAPability in whole or in part. You must strictly comply with the above restrictions.

## Citation

BibTeX:

```bibtex
@article{liu2025good,
  title={What Is a Good Caption? A Comprehensive Visual Caption Benchmark for Evaluating Both Correctness and Thoroughness},
  author={Liu, Zhihang and Xie, Chen-Wei and Wen, Bin and Yu, Feiwu and Chen, Jixuan and Zhang, Boqiang and Yang, Nianzu and Li, Pandeng and Li, Yinglu and Gao, Zuan and Zheng, Yun and Xie, Hongtao},
  journal={arXiv preprint arXiv:2502.14914},
  year={2025}
}
```