# Essential Web v1.0 - 1B Token Sample
Approximately one billion tokens randomly sampled from Essential Web v1.0, with all original columns preserved.
## Dataset Info
- **Target**: 1,000,000,000 tokens
- **Actual**: ~1,099,999,800 tokens (estimated at ~600 tokens per row)
- **Source**: [EssentialAI/essential-web-v1.0](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0)
## Schema
This sample preserves ALL columns from the original dataset, including:
- `id`: Document ID
- `text`: Text content
- `metadata`: URL and source information
- `quality_signals`: RedPajama quality metrics
- `eai_taxonomy`: Essential AI taxonomy labels
- `pid`: Partition ID
- All other columns from the source dataset (see the schema-inspection snippet below)
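To confirm exactly which columns are present, you can inspect the loaded split's schema. A minimal sketch (this downloads the full sample; see the streaming example further down if you want to avoid that):

```python
from datasets import load_dataset

# Downloads the full sample, then prints the column names and feature types
dataset = load_dataset("sumuks/essential-web-v1.0-sample-1B")
print(dataset["train"].column_names)
print(dataset["train"].features)
```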
## Usage
```python
from datasets import load_dataset

dataset = load_dataset("sumuks/essential-web-v1.0-sample-1B")

# Access the data with all columns
example = dataset['train'][0]
print(example['text'][:200] + "...")

# Access quality signals
print(example['quality_signals'])

# Access taxonomy
print(example['eai_taxonomy'])
```
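For quick experiments you may prefer not to download every parquet part up front. The `datasets` library also supports streaming; a small sketch:

```python
from datasets import load_dataset

# Stream examples on the fly instead of downloading all parts first
streamed = load_dataset("sumuks/essential-web-v1.0-sample-1B", streaming=True)

for example in streamed["train"]:
    print(example["id"], example["text"][:100])
    break  # just peek at the first record
```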
## File Structure
The dataset is split across multiple parquet files in the `data/` directory:
- `data/part-00000.parquet`
- `data/part-00001.parquet`
- etc.
The Hugging Face `datasets` library automatically loads all parts as a single dataset.
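If you only need one part, for example to take a quick look at the raw parquet, you can fetch it directly. A sketch using `huggingface_hub` and `pandas` (assumes `pyarrow` is installed); the filename follows the listing above:

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Download a single parquet part from the dataset repo and read it locally
path = hf_hub_download(
    repo_id="sumuks/essential-web-v1.0-sample-1B",
    filename="data/part-00000.parquet",
    repo_type="dataset",
)

df = pd.read_parquet(path)
print(df.columns.tolist())
print(len(df), "rows")
```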
## Sampling Method
- Random sampling across snapshots
- Preserves all original columns and metadata
- Token estimation: ~600 tokens per row
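At ~600 tokens per row, the ~1,099,999,800-token estimate corresponds to roughly 1.83M rows. If you want to sanity-check the per-row figure yourself, one option is to tokenize a small streamed slice. The sketch below uses the GPT-2 tokenizer as a stand-in, since the card does not state which tokenizer the estimate is based on:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Estimate tokens per row over a small streamed sample (tokenizer choice is assumed)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
stream = load_dataset("sumuks/essential-web-v1.0-sample-1B", streaming=True)["train"]

n_rows = 0
n_tokens = 0
for example in stream:
    n_tokens += len(tokenizer.encode(example["text"]))
    n_rows += 1
    if n_rows == 1000:
        break

print(f"~{n_tokens / n_rows:.0f} tokens per row over {n_rows:,} rows")
```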