---
annotations_creators:
  - machine-generated
language_creators:
  - found
language:
  - en
license:
  - mit
multilinguality:
  - monolingual
size_categories:
  - 10K<n<100K
source_datasets:
  - original
task_categories:
  - text-classification
  - sentence-similarity
task_ids:
  - semantic-similarity-classification
pretty_name: WikiSection (en_city, en_disease)
tags:
  - text segmentation
  - document segmentation
  - topic segmentation
  - topic shift detection
  - semantic chunking
  - chunking
  - nlp
  - wikipedia
dataset_info:
  - config_name: en_city
    features:
      - name: id
        dtype: string
      - name: title
        dtype: string
      - name: ids
        sequence: string
      - name: sentences
        sequence: string
      - name: titles_mask
        sequence: uint8
      - name: labels
        sequence:
          class_label:
            names:
              '0': semantic-continuity
              '1': semantic-shift
    splits:
      - name: train
        num_bytes: 105236889
        num_examples: 13679
      - name: validation
        num_bytes: 15693016
        num_examples: 1953
      - name: test
        num_bytes: 31140798
        num_examples: 3907
    download_size: 94042594
    dataset_size: 152070703
  - config_name: en_disease
    features:
      - name: id
        dtype: string
      - name: title
        dtype: string
      - name: ids
        sequence: string
      - name: sentences
        sequence: string
      - name: titles_mask
        sequence: uint8
      - name: labels
        sequence:
          class_label:
            names:
              '0': semantic-continuity
              '1': semantic-shift
    splits:
      - name: train
        num_bytes: 22409988
        num_examples: 2513
      - name: validation
        num_bytes: 3190201
        num_examples: 359
      - name: test
        num_bytes: 6088470
        num_examples: 718
    download_size: 94042594
    dataset_size: 31688659
---

# Dataset Card for WikiSection (en_city, en_disease)

The WikiSection dataset is a collection of segmented Wikipedia articles related to cities and diseases, structured in this repository for a sentence-level document segmentation task.

## Dataset Overview

WikiSection contains two English subsets:

- `en_city`: 19.5k Wikipedia articles about cities and city-related topics.
- `en_disease`: 3.6k articles on diseases and health-related scientific information.

Each subset provides segmented articles, and the task is to classify each sentence as either "semantic-continuity" or "semantic-shift."

## Features

The dataset provides the following features:

- `id`: string - A unique identifier for each document.
- `title`: string - The title of the document.
- `ids`: list[string] - The sentence ids within the document.
- `sentences`: list[string] - The sentences within the document.
- `titles_mask`: list[uint8] - A binary mask indicating which sentences are titles.
- `labels`: list[int] - Binary labels for each sentence, where 0 represents "semantic-continuity" and 1 represents "semantic-shift" (see the sketch below).
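
To make the label semantics concrete, here is a minimal sketch of how the parallel `sentences` and `labels` lists can be regrouped into topical segments. The document below is made up purely for illustration, and the snippet assumes that a label of 1 ("semantic-shift") marks the first sentence of a new segment:

```python
# Illustrative only: a made-up two-segment document (not taken from the dataset).
# Assumption: a label of 1 ("semantic-shift") marks a sentence that starts a new segment.
example = {
    "sentences": [
        "Berlin is the capital of Germany.",
        "It has about 3.7 million inhabitants.",
        "The city's economy is dominated by the service sector.",
        "Tourism is also a major source of revenue.",
    ],
    "labels": [1, 0, 1, 0],
}

# Group consecutive sentences into segments, starting a new one at each shift.
segments = []
for sentence, label in zip(example["sentences"], example["labels"]):
    if label == 1 or not segments:
        segments.append([])
    segments[-1].append(sentence)

for i, segment in enumerate(segments):
    print(f"Segment {i}: {segment}")
```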

## Usage

The dataset can be loaded with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# en_city: with title sentences included (default)
titled_en_city = load_dataset('saeedabc/wikisection', 'en_city', trust_remote_code=True)
# en_city: without title sentences
untitled_en_city = load_dataset('saeedabc/wikisection', 'en_city', drop_titles=True, trust_remote_code=True)

# en_disease: with title sentences included (default)
titled_en_disease = load_dataset('saeedabc/wikisection', 'en_disease', trust_remote_code=True)
# en_disease: without title sentences
untitled_en_disease = load_dataset('saeedabc/wikisection', 'en_disease', drop_titles=True, trust_remote_code=True)
```
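
As a quick sanity check after loading, the sketch below (using the `en_disease` config loaded above and the field names from the Features section) prints one article's first few sentences alongside their title mask and segmentation label:

```python
from datasets import load_dataset

ds = load_dataset('saeedabc/wikisection', 'en_disease', trust_remote_code=True)

# Inspect the first training document.
doc = ds['train'][0]
print(doc['id'], '-', doc['title'])

# Show the first 10 sentences with their title mask and label.
for sent, is_title, label in list(zip(doc['sentences'], doc['titles_mask'], doc['labels']))[:10]:
    tag = 'TITLE' if is_title else 'SENT '
    print(f"[{tag}] label={label} {sent}")
```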

## Dataset Details