# CSS10 + LJSpeech Multilingual Dataset
A unified multilingual speech dataset combining CSS10 (10 languages) and LJSpeech (English) in a consistent LJSpeech format.
## Dataset Description
This dataset merges:
- **CSS10**: a collection of single-speaker speech datasets for 10 languages
- **LJSpeech**: a high-quality single-speaker English speech dataset (read by Linda Johnson)
All audio files are provided in a consistent format suitable for TTS training.
## Languages and Statistics

| Language | Code | Files | Hours | Speaker ID |
|---|---|---|---|---|
| English | en | 13,100 | ~24.0 | LJSpeech |
| Spanish | es | 11,016 | ~19.2 | CSS10_es |
| Russian | ru | 9,599 | ~16.9 | CSS10_ru |
| French | fr | 8,649 | ~15.2 | CSS10_fr |
| German | de | 7,427 | ~13.1 | CSS10_de |
| Japanese | ja | 6,839 | ~14.9 | CSS10_ja |
| Dutch | nl | 6,145 | ~10.8 | CSS10_nl |
| Finnish | fi | 4,755 | ~8.4 | CSS10_fi |
| Hungarian | hu | 4,514 | ~7.9 | CSS10_hu |
| Chinese | zh | 2,971 | ~6.5 | CSS10_zh |
| Greek | el | 1,844 | ~3.2 | CSS10_el |

**Total:** 76,859 utterances, ~140 hours, 11 speakers
## File Structure

```
├── README.md                    # This file
├── metadata.csv                 # Standard LJSpeech format (id|text)
├── metadata_multispeaker.csv    # With speaker info (speaker|id|text)
├── dataset_stats.json           # Dataset statistics
├── speaker_info.json            # Speaker mapping and descriptions
├── audio_durations.csv          # Audio duration information
└── wavs.zip                     # All audio files (77,296 files)
```
## Metadata Formats

### 1. Standard LJSpeech Format (`metadata.csv`)

```
id|text
de|Hanake hatte allen Körperschmuck...
en_LJ001-0001|Printing, in the only sense...
```

### 2. Multi-speaker Format (`metadata_multispeaker.csv`)

```
speaker|id|text
CSS10_de|de|Hanake hatte allen Körperschmuck...
LJSpeech|en_LJ001-0001|Printing, in the only sense...
```
## Usage

### Loading the Dataset

```python
import zipfile

import pandas as pd

# Extract the audio files
with zipfile.ZipFile("wavs.zip", "r") as zip_ref:
    zip_ref.extractall(".")

# Load the metadata; quoting=3 (csv.QUOTE_NONE) because the
# pipe-separated transcripts may contain quote characters
metadata = pd.read_csv(
    "metadata.csv", sep="|", names=["id", "text"], quoting=3
)

# Load the multi-speaker metadata
metadata_ms = pd.read_csv(
    "metadata_multispeaker.csv",
    sep="|",
    names=["speaker", "id", "text"],
    quoting=3,
)
```
### For TTS Training (Piper)

```shell
# Single-speaker training (filter the metadata to one language first)
python -m piper_train.preprocess \
  --input-dir . \
  --output-dir output \
  --dataset-format ljspeech \
  --sample-rate 22050

# Multi-speaker training
python -m piper_train.preprocess \
  --input-dir . \
  --output-dir output \
  --dataset-format multispeaker \
  --metadata-file metadata_multispeaker.csv \
  --sample-rate 22050
```
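For a single-language run there is no need to unpack all 77,296 wav files. A sketch of extracting one language's audio only (the helper name is illustrative; the filter matches on the base filename, so it works whether or not the archive nests files under a `wavs/` folder):

```python
import zipfile

def extract_language(zip_path_or_file, lang: str, dest: str = "."):
    """Extract only the files whose base name starts with `lang`_ ."""
    with zipfile.ZipFile(zip_path_or_file, "r") as zf:
        # Compare against the base filename, ignoring any folder prefix
        members = [
            n for n in zf.namelist()
            if n.rsplit("/", 1)[-1].startswith(f"{lang}_")
        ]
        zf.extractall(dest, members=members)
    return members

# Usage against the real archive:
# extract_language("wavs.zip", "de")
```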
File Naming Convention
- CSS10 files:
{language_code}_{original_id}.wav
(e.g.,ja_BASIC5000_0001.wav
) - LJSpeech files:
en_{original_id}.wav
(e.g.,en_LJ001-0001.wav
)
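This convention makes the language recoverable from the filename alone. A small helper (a sketch, not shipped with the dataset) that splits on the first underscore:

```python
from pathlib import Path

def parse_wav_name(filename: str) -> tuple[str, str]:
    """Split a wav filename into (language_code, original_id)."""
    stem = Path(filename).stem            # drop any directory and the .wav suffix
    lang, _, original_id = stem.partition("_")
    return lang, original_id

print(parse_wav_name("ja_BASIC5000_0001.wav"))  # → ('ja', 'BASIC5000_0001')
print(parse_wav_name("en_LJ001-0001.wav"))      # → ('en', 'LJ001-0001')
```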
## License
This dataset combines:
- CSS10: CC BY-SA 4.0
- LJSpeech: Public Domain
Please refer to the original datasets for detailed license information.
## Citation
If you use this dataset, please cite both original sources:
```bibtex
@misc{css10,
  author    = {Kyubyong Park and Thomas Mulc},
  title     = {CSS10: A Collection of Single Speaker Speech Datasets for 10 Languages},
  year      = {2019},
  publisher = {Interspeech},
}

@misc{ljspeech17,
  author       = {Keith Ito and Linda Johnson},
  title        = {The LJ Speech Dataset},
  howpublished = {\url{https://keithito.com/LJ-Speech-Dataset/}},
  year         = {2017}
}
```
## Acknowledgments
- CSS10 dataset creators and contributors
- Keith Ito for the LJSpeech dataset
- The css10-ljspeech dataset for providing CSS10 in LJSpeech format