---
license: apache-2.0
task_categories:
  - text-generation
language:
  - en
size_categories:
  - 100K<n<1M
---

# SafeC4Sample: C4 Dataset with Harmfulness Predictions

## Overview

SafeC4Sample is a processed subset of the C4 dataset (the Colossal Clean Crawled Corpus, derived from Common Crawl) that includes harmfulness predictions from the HarmFormer model used in our paper. The dataset can be used for content moderation, safer language model training, or research into harmfulness detection in web text.

The original C4 dataset, created by Google, provides a cleaned version of Common Crawl's web data and serves as training data for many large language models. This project enhances the dataset by adding harmfulness predictions for various categories, making it suitable for safety-focused applications.

We will be releasing HarmFormer's predictions on the entire C4 dataset shortly. For more details on HarmFormer, please see our paper, Towards Safer Pretraining: Analyzing and Filtering Harmful Content in Webscale datasets for Responsible LLMs.

## Model and Inference

Inference is performed with HarmFormer (a fine-tuned allenai/longformer-base-4096), trained to detect potentially harmful content across five harm categories, each scored along three dimensions (Safe, Topical, Toxic):

- H: Hate and Violence
- IH: Ideological Harm
- SE: Sexual Harm
- IL: Illegal Activities
- SI: Self-Inflicted

Each text in the dataset is processed and assigned probabilities for every harm category using the MultiTaskModel architecture, which features the following (see the sketch after this list):

- Base Longformer model for handling long context
- Multiple classification heads (one per harm category)
- Predictions with three risk levels per category
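For illustration, here is a minimal sketch of what such a multi-head classifier could look like; the class layout and pooling choice below are assumptions made for clarity, not the released implementation:

```python
import torch
import torch.nn as nn
from transformers import LongformerModel

class MultiTaskModel(nn.Module):
    """Longformer encoder with one 3-way classification head per harm category (sketch)."""

    def __init__(self, base_model="allenai/longformer-base-4096",
                 num_categories=5, num_levels=3):
        super().__init__()
        self.encoder = LongformerModel.from_pretrained(base_model)
        hidden = self.encoder.config.hidden_size
        # One head per category (H, IH, SE, IL, SI), each scoring Safe/Topical/Toxic
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, num_levels) for _ in range(num_categories)]
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]  # first-token representation (assumed pooling)
        # Per-category probability distributions over the three risk levels
        return [torch.softmax(head(pooled), dim=-1) for head in self.heads]
```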

## Dataset Structure

Each entry in the dataset includes:

- All original C4 fields: text, url, and timestamp
- An additional prediction field containing an array of probability distributions, one per harm category, across the three dimensions (Safe, Topical, Toxic)

Example entry:

```json
{
  "url": "https://dosalmas.us/2024/01/30/free-spins-ports-a-comprehensive-guide/",
  "text": "Vending machine have constantly been the go-to games for casino lovers. With their vibrant graphics, interesting sounds...",
  "timestamp": "2019-04-25T12:57:54Z",
  "prediction": [
    [0.982, 0.015, 0.003],  // H category probabilities for 3 risk levels
    [0.991, 0.007, 0.002],  // IH category probabilities
    [0.963, 0.035, 0.002],  // SE category probabilities
    [0.976, 0.018, 0.006],  // IL category probabilities
    [0.988, 0.010, 0.002]   // SI category probabilities
  ]
}
```
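As a sketch of how the prediction array might be turned into per-category labels, the snippet below takes the argmax over the three dimensions; the category order and level names are assumed from the lists above, and the helper is hypothetical rather than part of the dataset:

```python
# Hypothetical helper: map a 5x3 prediction array to one label per harm category.
CATEGORIES = ["H", "IH", "SE", "IL", "SI"]   # order assumed from the category list above
LEVELS = ["Safe", "Topical", "Toxic"]

def label_entry(prediction):
    """Return the most likely risk level for each harm category."""
    return {
        cat: LEVELS[max(range(len(LEVELS)), key=lambda i: probs[i])]
        for cat, probs in zip(CATEGORIES, prediction)
    }

# For the example entry above, every category comes out "Safe".
```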

## Usage

You can load this dataset using the Hugging Face Datasets library:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("themendu/SafeC4Sample")

# Access an example
example = next(iter(dataset["train"]))
print(example["text"])
print(example["prediction"])

# Flag examples whose Sexual Harm (SE, index 2) Toxic probability exceeds 0.7
def is_harmful(example, category_idx=2, threshold=0.7):
    return example["prediction"][category_idx][2] > threshold

# Keep only the flagged examples; invert the predicate to drop them instead
harmful_examples = dataset.filter(is_harmful)
safe_examples = dataset.filter(lambda ex: not is_harmful(ex))
```

## Ethical Considerations

This dataset sample is provided for research purposes to help build safer AI systems. The harm predictions should be treated as probabilistic estimates, not definitive classifications. When using this dataset:

- Validate model predictions before making content moderation decisions
- Consider context and nuance in text that might be incorrectly flagged
- Be aware of potential biases in the training data of the harm detection model