pretty_name: Wikisource
language:
  - da
license: cc0-1.0
license_name: CC-0
size_categories:
  - 1-10k
task_categories:
  - text-generation
  - fill-mask
task_ids:
  - language-modeling
source_datasets:
  - danish-foundation-models/danish-gigaword
domains:
  - Encyclopedic

Dataset Card for Wikisource

The Danish subsection of Wikisource.

Dataset Description

  • Number of samples: 3.00K
  • Number of tokens (Llama 3): 6.28M
  • Average document length in tokens (min, max): 2.09K (17, 261.10K)
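
As a quick consistency check, the average follows from the totals: 6.28M tokens / 3.00K documents ≈ 2.09K tokens per document.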

Dataset Structure

An example from the dataset looks as follows.

{
  "id": "wikisource_1292",
  "text": "Dejlig er den himmel blå\nDejlig er den himmel blå, lyst det er at se derpå, hvor de gyldne stjerner [...]",
  "source": "wikisource",
  "added": "2025-08-18",
  "created": "2022-04-18, 2022-04-18",
  "token_count": 1243
}
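
Since the dataset is distributed as parquet, a record like the one above can be inspected with the datasets library. The sketch below is illustrative only; the data_files path is a hypothetical placeholder, not this repository's actual layout.

from datasets import load_dataset

# Hypothetical local path; point this at the actual parquet file(s) of the dataset.
ds = load_dataset("parquet", data_files="wikisource/*.parquet", split="train")

print(ds[0]["id"], ds[0]["source"], ds[0]["token_count"])
print(ds[0]["text"][:100])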

Data Fields

An entry in the dataset consists of the following fields:

  • id (str): A unique identifier for each document.
  • text (str): The content of the document.
  • source (str): The source of the document (see Source Data).
  • added (str): The date when the document was added to this collection.
  • created (str): The date range over which the document was originally created.
  • token_count (int): The number of tokens in the sample, computed using the Llama 3 8B tokenizer (see the sketch following this list).
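
A minimal sketch of how a token count of this kind could be recomputed, assuming the Llama 3 8B tokenizer from Hugging Face (meta-llama/Meta-Llama-3-8B, a gated repository) with default settings; the exact tokenizer settings used for this dataset are not specified here.

from transformers import AutoTokenizer

# Assumption: the Llama 3 8B tokenizer; whether special tokens (e.g. BOS) were
# counted for this dataset is not stated, so the numbers may differ slightly.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

text = "Dejlig er den himmel blå, lyst det er at se derpå."
token_count = len(tokenizer(text)["input_ids"])
print(token_count)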

Dataset Statistics

Processing

For this dataset we have pulled the latest database dump from Wikimedia and extracted the texts using the wtf_wikipedia parser.
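
As an illustration of the first step, the sketch below fetches a dump following the standard Wikimedia naming scheme; the exact file (here dawikisource-latest-pages-articles.xml.bz2) is an assumption and may not be the dump used for this dataset.

import urllib.request

# Assumption: the Danish Wikisource dump under the standard Wikimedia dump layout.
DUMP_URL = (
    "https://dumps.wikimedia.org/dawikisource/latest/"
    "dawikisource-latest-pages-articles.xml.bz2"
)

urllib.request.urlretrieve(DUMP_URL, "dawikisource-latest-pages-articles.xml.bz2")
print("Downloaded dump")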

Because the parser is written in JavaScript, you need to have Node.js installed on your machine.

To run the create.py file, you first need to run:

$ cd parser/ && npm install && cd ..
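
The extraction itself goes through the JavaScript parser. The sketch below is a hypothetical illustration (it is not the repository's create.py) of calling wtf_wikipedia from Python through Node.js, assuming the npm install above has populated parser/node_modules.

import json
import subprocess

def wikitext_to_plaintext(wikitext: str) -> str:
    """Parse a wikitext string to plain text with wtf_wikipedia via Node.js."""
    script = (
        "const wtf = require('wtf_wikipedia');"
        "let input = '';"
        "process.stdin.on('data', d => input += d);"
        "process.stdin.on('end', () => {"
        "  process.stdout.write(JSON.stringify(wtf(input).text()));"
        "});"
    )
    result = subprocess.run(
        ["node", "-e", script],
        input=wikitext.encode("utf-8"),
        capture_output=True,
        check=True,
        cwd="parser",  # assumes node_modules from the npm install above lives here
    )
    return json.loads(result.stdout)

print(wikitext_to_plaintext("'''Dejlig er den himmel blå''' er en [[salme]]."))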

We chose wtf_wikipedia because, of the parsers we tested, it was empirically the best. Alongside wtf_wikipedia we tested mwparserfromhell, mediawiki_dump, and wikiextractor; the others still produced artifacts from the parsing of the wikicode.
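
As a rough illustration of how such artifacts can be spotted, the sketch below flags common leftover wikicode markers in a parsed text; this is a simplified heuristic, not the comparison procedure that was actually used.

import re

# Common wikicode remnants a parser may fail to strip (simplified heuristic).
ARTIFACT_PATTERNS = [
    r"\{\{[^}]*\}\}",   # leftover templates, e.g. {{Infobox ...}}
    r"\[\[[^\]]*\]\]",  # leftover internal links, e.g. [[Danmark]]
    r"<ref[^>]*>",      # leftover reference tags
    r"'{2,}",           # leftover bold/italic markup
]

def find_artifacts(text: str) -> list[str]:
    """Return substrings that look like unparsed wikicode."""
    hits = []
    for pattern in ARTIFACT_PATTERNS:
        hits.extend(re.findall(pattern, text))
    return hits

print(find_artifacts("Dejlig er den himmel blå er en [[salme]] af {{w|Grundtvig}}."))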

Additional Information

Citation Information

This dataset was initially published as part of the Danish Gigaword Corpus. We recommend that you cite and reference it if you use this dataset:

Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).

@inproceedings{dagw,
 title = {{The Danish Gigaword Corpus}},
 author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
 year = 2021,
 booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
 publisher = {NEALT}
}