diff --git a/README.md b/README.md
index 7be5fc7f47d5db027d120b8024982df93db95b74..2fdbeace6774e1c218fc90572b68e4ca2c37ce9e 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,134 @@
----
-license: mit
----
+# EARS-EMO-OpenACE: A Full-band Coded Emotional Speech Quality Dataset
+
+## Dataset Description
+
+This dataset contains emotional speech samples with human perceptual quality ratings and objective quality metrics. It is designed for research in audio quality assessment, emotion recognition, and codec evaluation.
+
+## Dataset Structure
+
+```
+ears_16_bit_emo_hugging_face/
+├── metadata.csv                  # Main dataset metadata
+├── dataset_summary.json          # Dataset statistics
+├── README.md                     # This file
+└── [speaker]/                    # Speaker directories
+    └── emo_[emotion]_freeform/   # Emotion directories
+        ├── reference.wav         # Reference audio
+        ├── [codec].wav           # Coded audio files
+        └── ...
+```
+
+## Metadata Schema
+
+| Column | Description | Scale/Range |
+|--------|-------------|-------------|
+| `dataset` | Dataset identifier | EARS-EMO-OpenACE |
+| `speaker` | Speaker ID | p102, p103, p104, p105, p106, p107 |
+| `emotion` | Emotional expression | Anger, Ecstasy, Fear, Neutral, Pain, Sadness |
+| `codec` | Audio codec used | EVS, LC3, LC3Plus, Opus, LowAnchor, MidAnchor |
+| `reference_file` | Path to reference audio | Relative path |
+| `distorted_file` | Path to coded audio | Relative path |
+| `reference_mushra_rating` | Human quality rating for reference | 0-100 (MUSHRA scale) |
+| `distorted_mushra_rating` | Human quality rating for coded audio | 0-100 (MUSHRA scale) |
+| `mushra_rating_difference` | Quality degradation | Reference - Distorted |
+| `visqol_score` | VISQOL objective quality score | 1-5 (higher = better) |
+| `polqa_score` | POLQA objective quality score | 1-5 (higher = better) |
+
+## Codec Information
+
+- **Reference**: EARS source file
+- **EVS**: Enhanced Voice Services codec
+- **LC3**: Low Complexity Communication Codec
+- **LC3Plus**: Enhanced version of LC3
+- **Opus**: Open-source audio codec
+- **LowAnchor**: Low-quality anchor (lp3500); human ratings only
+- **MidAnchor**: Mid-quality anchor (lp7000); human ratings only
+
+## Quality Metrics
+
+### MUSHRA Ratings
+- **Scale**: 0-100 (higher = better quality)
+- **Method**: Multiple Stimuli with Hidden Reference and Anchor
+- **Raters**: Trained human listeners
+- **Reference**: Original uncompressed audio (typically ~95-100)
+
+### VISQOL Scores
+- **Scale**: 1-5 (higher = better quality)
+- **Method**: Virtual Speech Quality Objective Listener
+- **Type**: Objective perceptual quality metric
+- **Coverage**: Available for most codecs (excluding anchors)
+
+### POLQA Scores
+- **Scale**: 1-5 (higher = better quality)
+- **Method**: Perceptual Objective Listening Quality Assessment
+- **Standard**: ITU-T P.863
+- **Coverage**: Available for most codecs (excluding anchors)
+
+## Usage Examples
+
+### Load Dataset
+```python
+import pandas as pd
+
+# Load metadata
+metadata = pd.read_csv('metadata.csv')
+
+# Filter by emotion
+anger_samples = metadata[metadata['emotion'] == 'Anger']
+
+# Filter by codec
+opus_samples = metadata[metadata['codec'] == 'Opus']
+
+# Get high-quality samples (MUSHRA > 80)
+high_quality = metadata[metadata['distorted_mushra_rating'] > 80]
+```
+
+### Audio Loading
+```python
+import librosa
+
+# Load audio file
+audio_path = metadata.iloc[0]['distorted_file']
+audio, sr = librosa.load(audio_path, sr=None)
+```
+
+
+## Correlation Analysis
+
+The following table shows Pearson correlations between the objective metrics and the human MUSHRA ratings:
+
+| Metric | Description | Correlation (r) | p-value | Sample Size | Significance |
+|--------|-------------|-----------------|---------|-------------|--------------|
+| VISQOL | Virtual Speech Quality Objective Listener | 0.7034 | 8.3513e-23 | 144 | *** |
+| POLQA | Perceptual Objective Listening Quality Assessment | 0.7939 | 2.9297e-32 | 143 | *** |
+
+**Significance levels:** *** p<0.001, ** p<0.01, * p<0.05, n.s. = not significant
+
+**Note:** Correlations were computed excluding the anchor codecs (lp3500/lp7000), which have human ratings only.
+
+
+## Citation
+
+If you use this dataset in your research, please cite the following [paper](https://arxiv.org/abs/2409.08374):
+
+```bibtex
+@inproceedings{OpenACE-Coldenhoff2025,
+  author={Coldenhoff, Jozef and Granqvist, Niclas and Cernak, Milos},
+  booktitle={ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
+  title={{OpenACE: An Open Benchmark for Evaluating Audio Coding Performance}},
+  year={2025},
+  pages={1-5},
+  keywords={Codecs;Speech coding;Audio coding;Working environment noise;Benchmark testing;Data augmentation;Data models;Vectors;Reverberation;Speech processing;audio coding;benchmarks;deep learning;speech processing},
+  doi={10.1109/ICASSP49660.2025.10889159}
+}
+```
+
+## License
+
+[MIT License](https://github.com/JozefColdenhoff/OpenACE/blob/main/LICENSE)
+
+## Contact
+
+Milos Cernak, milos.cernak at ieee dot org
+
+August 1, 2025
diff --git a/dataset_card.md b/dataset_card.md
new file mode 100644
index 0000000000000000000000000000000000000000..09e0e955c555c983e7960505664d3635accfc739
--- /dev/null
+++ b/dataset_card.md
@@ -0,0 +1,42 @@
+---
+license: mit
+task_categories:
+- audio-classification
+- audio-to-audio
+- speech-quality-assessment
+language:
+- en
+tags:
+- emotional-speech
+- audio-quality
+- perceptual-evaluation
+- codec-evaluation
+- mushra
+- visqol
+- polqa
+size_categories:
+- 1K
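The correlation table the diff adds to the README can be reproduced directly from `metadata.csv`. Below is a minimal sketch, assuming pandas and SciPy are installed, that the column names match the documented schema (`visqol_score`, `polqa_score`, `distorted_mushra_rating`), and that the anchor rows simply leave the objective-score columns empty; the helper name `metric_vs_mushra` is ours, not part of the dataset.

```python
import pandas as pd
from scipy.stats import pearsonr


def metric_vs_mushra(df, metric):
    """Pearson correlation between an objective metric column and the
    human MUSHRA ratings, skipping rows (e.g. the anchor codecs) where
    the metric is missing. Returns (r, p-value, sample size)."""
    scored = df.dropna(subset=[metric, "distorted_mushra_rating"])
    r, p = pearsonr(scored[metric], scored["distorted_mushra_rating"])
    return r, p, len(scored)


# With the real dataset on disk, this should recover values close to
# the README's correlation table:
# metadata = pd.read_csv("metadata.csv")
# for m in ("visqol_score", "polqa_score"):
#     r, p, n = metric_vs_mushra(metadata, m)
#     print(f"{m}: r={r:.4f}, p={p:.4e}, n={n}")
```

Dropping NaN rows before correlating mirrors the README's note that anchors (lp3500/lp7000) carry human ratings only, which is also why the POLQA sample size (143) differs slightly from VISQOL's (144).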