
Ettin Mid-training Data

License: MIT | Paper: https://arxiv.org/abs/2507.11412 | Code: https://github.com/AnswerDotAI/ModernBERT

Phase 2 of 3: Higher-quality filtered data with context extension (250B tokens) used for mid-training of Ettin models.

This dataset contains the mid-training phase data used to train all Ettin encoder and decoder models. This phase focuses on higher-quality filtered data and context length extension to 8K tokens. The data is provided in MDS format ready for use with Composer and the ModernBERT training repository.

📊 Data Composition

| Data Source | Tokens (B) | Percentage | Description |
|---|---|---|---|
| DCLM (Dolmino) | 175.5 | 70.4% | High-quality filtered web crawl |
| Starcoder | 38.4 | 15.4% | Code repositories and files |
| Math (Dolmino) | 10.4 | 4.2% | Mathematical content (filtered) |
| PeS2o | 8.3 | 3.3% | Scientific papers |
| Reddit | 6.2 | 2.5% | Social discussion threads |
| Arxiv | 4.1 | 1.6% | Academic preprints |
| StackExchange (Dolmino) | 2.7 | 1.1% | Q&A forums (filtered) |
| Tulu Flan | 2.4 | 1.0% | Instruction-following data |
| Books | 0.8 | 0.3% | Literature and reference books |
| Wikipedia | 0.5 | 0.2% | Encyclopedia articles |
| **Total** | **249.3** | **100.0%** | Quality-focused mixture |

🔧 Key Changes from Pre-training

Data Quality Improvements

  • Filtered DCLM: Using Dolmino-filtered version instead of raw DCLM
  • Enhanced Math: Dolmino-filtered mathematical content
  • Curated StackExchange: Higher-quality Q&A content
  • Removed Noisy Sources: Dropped CC Head, CC News, and general StackExchange

Technical Improvements

  • Context Extension: Increased from 1K to 8K token sequences
  • RoPE Updates: Modified positional encoding for longer context
  • Learning Schedule: Inverse square root decay from the peak learning rate (see the sketch below)
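
As a concrete reference for the schedule above, here is a minimal sketch of inverse square root decay from a peak learning rate. The warmup length, peak value, and normalization are illustrative placeholders, not the actual Ettin training configuration (see the ModernBERT repo for that):

```python
import math

def inv_sqrt_lr(step: int, peak_lr: float = 1e-3, warmup_steps: int = 2000) -> float:
    """Inverse square root decay from peak_lr.

    peak_lr and warmup_steps are placeholder values; the real settings
    live in the training configs in the ModernBERT repository.
    """
    if step < warmup_steps:
        # Linear warmup to the peak (a common pairing; an assumption here).
        return peak_lr * step / max(1, warmup_steps)
    # Decay as 1/sqrt(step), normalized so lr == peak_lr at the end of warmup.
    return peak_lr * math.sqrt(warmup_steps / step)
```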

🚀 Usage

For pre-training instructions, see the ModernBERT repo: https://github.com/AnswerDotAI/ModernBERT

Direct Access

from streaming import StreamingDataset

# Load the streaming dataset
dataset = StreamingDataset(
    remote='https://huggingface.co/datasets/jhu-clsp/ettin-extension-data',
    local='/tmp/ettin-extension-data',
    shuffle=True
)

# Access samples (note: these will be longer sequences)
for sample in dataset:
    text = sample['text']  # Up to 8K tokens
    # Process your data...
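
To check that samples actually reach the extended context length, you can tokenize one. The snippet below is a sketch that assumes the ModernBERT tokenizer (answerdotai/ModernBERT-base) and the text field shown above:

```python
from transformers import AutoTokenizer

# Assumption: the ModernBERT tokenizer; substitute whichever tokenizer
# your training setup uses.
tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")

sample = next(iter(dataset))
n_tokens = len(tokenizer(sample["text"])["input_ids"])
print(f"Sample length: {n_tokens} tokens (extension target: 8192)")
```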

πŸ“ Structure

Each folder contains filtered, higher-quality data sources in MDS format:

  • arxiv/ - Academic papers from ArXiv
  • books/ - Literature and reference books
  • dclm_dolmino/ - Dolmino-filtered web crawl data (primary source)
  • math_dolmino/ - Filtered mathematical content
  • pes2o/ - Scientific papers
  • reddit/ - Reddit discussion threads
  • stackexchange_dolmino/ - Filtered StackExchange Q&A
  • starcoder/ - Code from GitHub repositories
  • tulu_flan/ - Instruction-following examples
  • wikipedia/ - Wikipedia articles
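
Because each source lives in its own MDS folder, you can also stream a weighted mixture of sources using streaming's multi-stream support. The sketch below takes its proportions from the composition table above; the local cache paths and epoch size are placeholders:

```python
from streaming import Stream, StreamingDataset

base = "https://huggingface.co/datasets/jhu-clsp/ettin-extension-data"

# Proportions taken from the data composition table above.
sources = {
    "dclm_dolmino": 0.704, "starcoder": 0.154, "math_dolmino": 0.042,
    "pes2o": 0.033, "reddit": 0.025, "arxiv": 0.016,
    "stackexchange_dolmino": 0.011, "tulu_flan": 0.010,
    "books": 0.003, "wikipedia": 0.002,
}

streams = [
    Stream(remote=f"{base}/{name}", local=f"/tmp/ettin/{name}", proportion=p)
    for name, p in sources.items()
]

dataset = StreamingDataset(
    streams=streams,
    shuffle=True,
    epoch_size=100_000,  # placeholder; set an epoch size when mixing by proportion
)
```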

🔗 Related Resources

  • Paper: https://arxiv.org/abs/2507.11412
  • Training code: https://github.com/AnswerDotAI/ModernBERT

Citation

```bibtex
@misc{weller2025seqvsseqopen,
      title={Seq vs Seq: An Open Suite of Paired Encoders and Decoders},
      author={Orion Weller and Kathryn Ricci and Marc Marone and Antoine Chaffin and Dawn Lawrie and Benjamin Van Durme},
      year={2025},
      eprint={2507.11412},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.11412},
}
```