---
pretty_name: Wikisource
language:
- da
license: cc0-1.0
license_name: CC-0
size_categories:
- 1K<n<10K
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
source_datasets:
- danish-foundation-models/danish-gigaword
domains:
- Encyclopedic
---
# Dataset Card for Wikisource
<!-- START-SHORT DESCRIPTION -->
The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page).
<!-- END-SHORT DESCRIPTION -->
## Dataset Description
<!-- START-DESC-STATS -->
- **Number of samples**: 3.00K
- **Number of tokens (Llama 3)**: 6.28M
- **Average document length in tokens (min, max)**: 2.09K (17, 261.10K)
<!-- END-DESC-STATS -->
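The aggregate statistics above are derived from the per-document `token_count` field. A minimal sketch of the computation (the example counts below are made up, not taken from the dataset):

```python
def doc_length_stats(token_counts: list[int]) -> tuple[int, float, int, int]:
    """Return (total tokens, average length, min length, max length)."""
    total = sum(token_counts)
    return total, total / len(token_counts), min(token_counts), max(token_counts)


# Hypothetical token_count values for three documents.
counts = [17, 1243, 2090]
total, avg, shortest, longest = doc_length_stats(counts)
```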
## Dataset Structure
An example from the dataset looks as follows.
<!-- START-SAMPLE -->
```py
{
"id": "wikisource_1292",
"text": "Dejlig er den himmel blå\nDejlig er den himmel blå, lyst det er at se derpå, hvor de gyldne stjerner [...]",
"source": "wikisource",
"added": "2025-08-18",
"created": "2022-04-18, 2022-04-18",
"token_count": 1243
}
```
### Data Fields
An entry in the dataset consists of the following fields:
- `id` (`str`): A unique identifier for each document.
- `text` (`str`): The content of the document.
- `source` (`str`): The source of the document (see [Source Data](#source-data)).
- `added` (`str`): The date when the document was added to this collection.
- `created` (`str`): The date range within which the document was originally created.
- `token_count` (`int`): The number of tokens in the sample, computed using the Llama 3 tokenizer.
<!-- END-SAMPLE -->
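Note that `created` is stored as a single comma-separated string rather than two fields. A small helper (an illustrative sketch, not part of the dataset tooling) can turn it into `datetime.date` objects:

```python
from datetime import date


def parse_created(created: str) -> tuple[date, date]:
    """Split a 'YYYY-MM-DD, YYYY-MM-DD' range string into two dates."""
    start_s, end_s = (part.strip() for part in created.split(","))
    return date.fromisoformat(start_s), date.fromisoformat(end_s)


# Using the `created` value from the sample record above:
start, end = parse_created("2022-04-18, 2022-04-18")
```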
### Dataset Statistics
<!-- START-DATASET PLOTS -->
<p align="center">
<img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
</p>
<!-- END-DATASET PLOTS -->
### Processing
For this dataset we pulled the latest [database dump from Wikimedia](https://dumps.wikimedia.org/dawikisource/latest/) and extracted the texts using the [wtf_wikipedia](https://github.com/spencermountain/wtf_wikipedia/tree/dev) parser.
Because the parser is written in JavaScript, you need Node.js installed on your machine.
Before running `create.py`, install the parser's dependencies:
```bash
$ cd parser/ && npm install && cd ..
```
We chose `wtf_wikipedia` because, of the parsers we tested, it was empirically the best. We compared `mwparserfromhell`, `mediawiki_dump`, `wikiextractor`, and `wtf_wikipedia`; the others still produced artifacts left over from parsing the wikicode.
## Additional Information
### Citation Information
This dataset was initially published as part of the [Danish Gigaword Corpus](https://huggingface.co/danish-foundation-models). We recommend citing it if you use this dataset:
> Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
```bibtex
@inproceedings{dagw,
title = {{The Danish Gigaword Corpus}},
author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
year = 2021,
booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
publisher = {NEALT}
}
```