---
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
dataset_info:
  features:
    - name: file
      dtype: string
    - name: audio
      dtype: audio
    - name: attribute_label
      dtype: string
    - name: single_instruction
      dtype: string
    - name: single_answer
      dtype: string
    - name: multi_instruction
      dtype: string
    - name: multi_answer
      dtype: string
  splits:
    - name: test
      num_bytes: 49279816
      num_examples: 500
  download_size: 48973639
  dataset_size: 49279816
---

# Dataset Card for SAKURA-EmotionQA

This dataset contains the audio and the single-hop/multi-hop question-answer pairs of the emotion track of the SAKURA benchmark, introduced in the Interspeech 2025 paper "SAKURA: On the Multi-hop Reasoning of Large Audio-Language Models Based on Speech and Audio Information".

The fields of the dataset are:

- `file`: The filenames of the audio files.
- `audio`: The audio recordings.
- `attribute_label`: The attribute labels (i.e., the emotions of the speakers) of the audio files.
- `single_instruction`: The single-hop questions (instructions).
- `single_answer`: The answers to the single-hop questions.
- `multi_instruction`: The multi-hop questions (instructions).
- `multi_answer`: The answers to the multi-hop questions.
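
As a quick illustration, the sketch below shows how these fields could be accessed with the 🤗 `datasets` library. The repository ID is a placeholder, not the confirmed Hub path of this dataset; substitute the actual one.

```python
from datasets import load_dataset

# Placeholder repository ID -- replace with the actual Hub path of this dataset.
ds = load_dataset("your-org/SAKURA-EmotionQA", split="test")

example = ds[0]
print(example["file"])                # filename of the audio clip
print(example["attribute_label"])     # speaker emotion label
print(example["single_instruction"])  # single-hop question
print(example["single_answer"])       # its answer
print(example["multi_instruction"])   # multi-hop question
print(example["multi_answer"])        # its answer

# The "audio" feature decodes to a dict holding the waveform and sampling rate.
audio = example["audio"]
print(audio["sampling_rate"], len(audio["array"]))
```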

If you find this dataset helpful, please kindly consider citing our paper:

```bibtex
@article{sakura,
  title={SAKURA: On the Multi-hop Reasoning of Large Audio-Language Models Based on Speech and Audio Information},
  author={Yang, Chih-Kai and Ho, Neo and Piao, Yen-Ting and Lee, Hung-yi},
  journal={Interspeech 2025},
  year={2025}
}
```