---
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: file
    dtype: string
  - name: audio
    dtype: audio
  - name: attribute_label
    dtype: string
  - name: single_instruction
    dtype: string
  - name: single_answer
    dtype: string
  - name: multi_instruction
    dtype: string
  - name: multi_answer
    dtype: string
  splits:
  - name: test
    num_bytes: 49279816.0
    num_examples: 500
  download_size: 48973639
  dataset_size: 49279816.0
---
# Dataset Card for SAKURA-EmotionQA
This dataset contains the audio and the single-hop and multi-hop questions/answers of the emotion track of the SAKURA benchmark from the Interspeech 2025 paper, "[**SAKURA: On the Multi-hop Reasoning of Large Audio-Language Models Based on Speech and Audio Information**](https://arxiv.org/abs/2505.13237)".
The fields of the dataset are (a loading sketch follows the list):
- file: The filenames of the audio files.
- audio: The audio recordings.
- attribute_label: The attribute labels (i.e., the emotions of the speakers) of the audio files.
- single_instruction: The single-hop questions (instructions).
- single_answer: The answers to the single-hop questions.
- multi_instruction: The multi-hop questions (instructions).
- multi_answer: The answers to the multi-hop questions.
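Below is a minimal sketch of loading and inspecting one example with the `datasets` library. The repository ID is a placeholder (substitute the actual Hub path of this dataset), and the exact decoded form of the `audio` column depends on your `datasets` version.
```python
from datasets import load_dataset

# Placeholder repo ID: substitute the actual Hugging Face Hub path of this dataset.
ds = load_dataset("YOUR_ORG/SAKURA-EmotionQA", split="test")

example = ds[0]
print(example["file"])                # filename of the audio clip
print(example["attribute_label"])     # emotion label of the speaker
print(example["single_instruction"])  # single-hop question
print(example["single_answer"])       # its answer
print(example["multi_instruction"])   # multi-hop question
print(example["multi_answer"])        # its answer

# With recent versions of datasets, the audio column decodes to a dict
# holding the raw waveform and its sampling rate.
audio = example["audio"]
print(audio["sampling_rate"], len(audio["array"]))
```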
If you find this dataset helpful, please kindly consider citing our paper:
```
@article{sakura,
  title={SAKURA: On the Multi-hop Reasoning of Large Audio-Language Models Based on Speech and Audio Information},
  author={Yang, Chih-Kai and Ho, Neo and Piao, Yen-Ting and Lee, Hung-yi},
  journal={Interspeech 2025},
  year={2025}
}
```