davanstrien (HF Staff) committed
Commit ce61544 · 1 Parent(s): e2896c4

Add vLLM classification script

- Add classify-dataset.py for batch text classification
- Support for BERT-style models via vLLM
- Automatic label mapping from model config
- GPU availability check
- Comprehensive README with HF Jobs examples
- Development notes in CLAUDE.md

Files changed (4)
  1. .gitignore +6 -0
  2. CLAUDE.md +55 -0
  3. README.md +110 -0
  4. classify-dataset.py +286 -0
.gitignore ADDED
@@ -0,0 +1,6 @@
+ .DS_Store
+ __pycache__/
+ *.pyc
+ .ruff_cache/
+ .venv/
+ *.log
CLAUDE.md ADDED
@@ -0,0 +1,55 @@
+ # vLLM Scripts Development Notes
+
+ ## Repository Purpose
+ This repository contains UV scripts for vLLM-based inference tasks. The focus is on GPU-accelerated inference using vLLM's optimized engine.
+
+ ## Key Patterns
+
+ ### 1. GPU Requirements
+ All scripts MUST check for GPU availability:
+ ```python
+ if not torch.cuda.is_available():
+     logger.error("CUDA is not available. This script requires a GPU.")
+     sys.exit(1)
+ ```
+
+ ### 2. vLLM Docker Image
+ Always use `vllm/vllm-openai:latest` for HF Jobs - it has all dependencies pre-installed.
+
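+ A minimal sketch of an invocation against this image, mirroring the README example (the `--help` call here is just a smoke test):
+ ```bash
+ hfjobs run --flavor l4x1 vllm/vllm-openai:latest \
+     /bin/bash -c 'uv run https://huggingface.co/datasets/uv-scripts/vllm/resolve/main/classify-dataset.py --help'
+ ```
+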
+ ### 3. Dependencies
+ Include custom PyPI indexes for vLLM and FlashInfer:
+ ```python
+ # [[tool.uv.index]]
+ # url = "https://flashinfer.ai/whl/cu126/torch2.6"
+ #
+ # [[tool.uv.index]]
+ # url = "https://wheels.vllm.ai/nightly"
+ ```
+
+ ## Current Scripts
+
+ 1. **classify-dataset.py**: BERT-style text classification
+    - Uses vLLM's classify task
+    - Supports batch processing with configurable size
+    - Automatically extracts label mappings from model config
+
+ ## Future Scripts
+
+ Potential additions:
+ - Text generation with vLLM
+ - Embedding generation using sentence transformers
+ - Multi-modal inference
+ - Structured output generation
+
+ ## Testing
+
+ Local testing requires a GPU. For scripts without local GPU access:
+ 1. Use HF Jobs with small test datasets
+ 2. Verify the script runs without syntax errors: `python -m py_compile script.py`
+ 3. Check that dependencies resolve: `uv pip compile`
+
+ ## Performance Considerations
+
+ - Default batch size: 10,000 locally, up to 100,000 on HF Jobs
+ - L4 GPUs are cost-effective for classification
+ - Monitor GPU memory usage and adjust batch sizes accordingly (see the sketch below)
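+
+ A quick sketch for checking memory headroom between batches (standard PyTorch memory stats; assumes device 0):
+ ```python
+ import torch
+
+ # Compare memory currently allocated by tensors with total device capacity
+ used_gb = torch.cuda.memory_allocated() / 1024**3
+ total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
+ print(f"GPU memory: {used_gb:.1f} / {total_gb:.1f} GB")
+ ```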
README.md ADDED
@@ -0,0 +1,110 @@
+ ---
+ viewer: false
+ tags: [uv-script, vllm, gpu, inference]
+ ---
+
+ # vLLM Inference Scripts
+
+ Ready-to-run scripts for GPU-accelerated inference using [vLLM](https://github.com/vllm-project/vllm).
+
+ ## 📋 Available Scripts
+
+ ### classify-dataset.py
+
+ Batch text classification using BERT-style models with vLLM's optimized inference engine.
+
+ **Features:**
+ - 🚀 High-throughput batch processing
+ - 🏷️ Automatic label mapping from model config
+ - 📊 Confidence scores for predictions
+ - 🤗 Direct integration with Hugging Face Hub
+
+ **Usage:**
+ ```bash
+ # Local execution (requires GPU)
+ uv run classify-dataset.py \
+     davanstrien/ModernBERT-base-is-new-arxiv-dataset \
+     username/input-dataset \
+     username/output-dataset \
+     --inference-column text \
+     --batch-size 10000
+ ```
+
+ **HF Jobs execution:**
+ ```bash
+ hfjobs run \
+     --flavor l4x1 \
+     --secret HF_TOKEN=$(python -c "from huggingface_hub import HfFolder; print(HfFolder.get_token())") \
+     vllm/vllm-openai:latest \
+     /bin/bash -c '
+     uv run https://huggingface.co/datasets/uv-scripts/vllm/resolve/main/classify-dataset.py \
+         davanstrien/ModernBERT-base-is-new-arxiv-dataset \
+         username/input-dataset \
+         username/output-dataset \
+         --inference-column text \
+         --batch-size 100000
+     ' \
+     --project vllm-classify \
+     --name my-classification-job
+ ```
+
+ ## 🎯 Requirements
+
+ All scripts in this collection require:
+ - **NVIDIA GPU** with CUDA support
+ - **Python 3.10+**
+ - **UV package manager** (script dependencies install automatically; see below for installing UV itself)
+
+ ## 🚀 Performance Tips
59
+
60
+ ### GPU Selection
61
+ - **L4 GPU** (`--flavor l4x1`): Best value for classification tasks
62
+ - **A10 GPU** (`--flavor a10`): Higher memory for larger models
63
+ - Adjust batch size based on GPU memory
64
+
65
+ ### Batch Sizes
66
+ - **Local GPUs**: Start with 10,000 and adjust based on memory
67
+ - **HF Jobs**: Can use larger batches (50,000-100,000) with cloud GPUs
68
+
69
+ ## 📚 About vLLM
70
+
71
+ vLLM is a high-throughput inference engine optimized for:
72
+ - Fast model serving with PagedAttention
73
+ - Efficient batch processing
74
+ - Support for various model architectures
75
+ - Seamless integration with Hugging Face models
76
+
77
+ ## 🔧 Technical Details
78
+
79
+ ### Dependencies
80
+ Scripts use vLLM's nightly builds and FlashInfer for optimal performance:
81
+ ```python
82
+ # [[tool.uv.index]]
83
+ # url = "https://flashinfer.ai/whl/cu126/torch2.6"
84
+ #
85
+ # [[tool.uv.index]]
86
+ # url = "https://wheels.vllm.ai/nightly"
87
+ ```
88
+
89
+ ### Docker Image
90
+ For HF Jobs, we use the official vLLM Docker image: `vllm/vllm-openai:latest`
91
+
92
+ This image includes:
93
+ - Pre-installed CUDA libraries
94
+ - vLLM and all dependencies
95
+ - UV package manager
96
+ - Optimized for GPU inference
97
+
98
+ ## 📝 Contributing
99
+
100
+ Have a vLLM script to share? We welcome contributions that:
101
+ - Solve real inference problems
102
+ - Include clear documentation
103
+ - Follow UV script best practices
104
+ - Include HF Jobs examples
105
+
106
+ ## 🔗 Resources
107
+
108
+ - [vLLM Documentation](https://docs.vllm.ai/)
109
+ - [HF Jobs Guide](https://huggingface.co/docs/hub/spaces-gpu-jobs)
110
+ - [UV Scripts Organization](https://huggingface.co/uv-scripts)
classify-dataset.py ADDED
@@ -0,0 +1,286 @@
+ # /// script
+ # requires-python = ">=3.10"
+ # dependencies = [
+ #     "datasets",
+ #     "flashinfer-python",
+ #     "httpx",
+ #     "huggingface-hub[hf_transfer]",
+ #     "setuptools",
+ #     "toolz",
+ #     "torch",
+ #     "transformers",
+ #     "vllm",
+ # ]
+ #
+ # [[tool.uv.index]]
+ # url = "https://flashinfer.ai/whl/cu126/torch2.6"
+ #
+ # [[tool.uv.index]]
+ # url = "https://wheels.vllm.ai/nightly"
+ # ///
+ """
+ Batch text classification using vLLM for efficient GPU inference.
+
+ This script loads a dataset from Hugging Face Hub, performs classification using
+ a BERT-style model via vLLM, and saves the results back to the Hub with predicted
+ labels and confidence scores.
+
+ Example usage:
+     # Local execution
+     uv run classify-dataset.py \\
+         davanstrien/ModernBERT-base-is-new-arxiv-dataset \\
+         username/input-dataset \\
+         username/output-dataset \\
+         --inference-column text \\
+         --batch-size 10000
+
+     # HF Jobs execution (see script output for full command)
+     hfjobs run --flavor l4x1 ...
+ """
+
+ import argparse
+ import logging
+ import os
+ import sys
+ from typing import Optional
+
+ import httpx
+ import torch
+ import torch.nn.functional as F
+ import vllm
+ from datasets import load_dataset
+ from huggingface_hub import hf_hub_url, login
+ from toolz import concat, keymap, partition_all
+ from tqdm.auto import tqdm
+ from vllm import LLM
+
+ logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
+ logger = logging.getLogger(__name__)
+
+
+ def check_gpu_availability():
+     """Check if CUDA is available and log GPU information."""
+     if not torch.cuda.is_available():
+         logger.error("CUDA is not available. This script requires a GPU.")
+         logger.error("Please run on a machine with NVIDIA GPU or use HF Jobs with GPU flavor.")
+         sys.exit(1)
+
+     gpu_name = torch.cuda.get_device_name(0)
+     gpu_memory = torch.cuda.get_device_properties(0).total_memory / 1024**3
+     logger.info(f"GPU detected: {gpu_name} with {gpu_memory:.1f} GB memory")
+     logger.info(f"vLLM version: {vllm.__version__}")
+
+
+ def get_model_id2label(hub_model_id: str) -> Optional[dict[int, str]]:
+     """Extract label mapping from model's config.json on Hugging Face Hub."""
+     try:
+         response = httpx.get(
+             hf_hub_url(hub_model_id, filename="config.json"),
+             follow_redirects=True
+         )
+         if response.status_code != 200:
+             logger.warning(f"Could not fetch config.json for {hub_model_id}")
+             return None
+
+         data = response.json()
+         id2label = data.get("id2label")
+
+         if id2label is None:
+             logger.info("No id2label mapping found in config.json")
+             return None
+
+         # Convert string keys to integers
+         label_map = keymap(int, id2label)
+         logger.info(f"Found label mapping: {label_map}")
+         return label_map
+
+     except Exception as e:
+         logger.warning(f"Failed to parse config.json: {e}")
+         return None
+
+
+ def get_top_label(output, label_map: Optional[dict[int, str]] = None):
+     """
+     Extract the top predicted label and confidence score from vLLM output.
+
+     Args:
+         output: vLLM ClassificationRequestOutput
+         label_map: Optional mapping from label indices to label names
+
+     Returns:
+         Tuple of (label, confidence_score)
+     """
+     logits = torch.tensor(output.outputs.probs)
+     probs = F.softmax(logits, dim=0)
+     top_idx = torch.argmax(probs).item()
+     top_prob = probs[top_idx].item()
+
+     # Use label name if mapping available, otherwise use index
+     label = label_map.get(top_idx, str(top_idx)) if label_map else str(top_idx)
+     return label, top_prob
+
+
+ def main(
+     hub_model_id: str,
+     src_dataset_hub_id: str,
+     output_dataset_hub_id: str,
+     inference_column: str = "text",
+     batch_size: int = 10_000,
+     hf_token: Optional[str] = None,
+ ):
+     """
+     Main classification pipeline.
+
+     Args:
+         hub_model_id: Hugging Face model ID for classification
+         src_dataset_hub_id: Input dataset on Hugging Face Hub
+         output_dataset_hub_id: Where to save results on Hugging Face Hub
+         inference_column: Column name containing text to classify
+         batch_size: Number of examples to process at once
+         hf_token: Hugging Face authentication token
+     """
+     # GPU check
+     check_gpu_availability()
+
+     # Authentication
+     HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
+     if HF_TOKEN:
+         login(token=HF_TOKEN)
+     else:
+         logger.error("HF_TOKEN is required. Set via --hf-token or HF_TOKEN environment variable.")
+         sys.exit(1)
+
+     # Initialize vLLM with classification task
+     logger.info(f"Loading model: {hub_model_id}")
+     llm = LLM(model=hub_model_id, task="classify")
+
+     # Get label mapping if available
+     id2label = get_model_id2label(hub_model_id)
+
+     # Load dataset
+     logger.info(f"Loading dataset: {src_dataset_hub_id}")
+     dataset = load_dataset(src_dataset_hub_id, split="train")
+     total_examples = len(dataset)
+     logger.info(f"Dataset loaded with {total_examples:,} examples")
+
+     # Extract text column
+     if inference_column not in dataset.column_names:
+         logger.error(f"Column '{inference_column}' not found. Available columns: {dataset.column_names}")
+         sys.exit(1)
+
+     prompts = dataset[inference_column]
+
+     # Process in batches
+     logger.info(f"Starting classification with batch size {batch_size:,}")
+     all_results = []
+
+     for batch in tqdm(
+         list(partition_all(batch_size, prompts)),
+         desc="Processing batches",
+         unit="batch"
+     ):
+         batch_results = llm.classify(batch)
+         all_results.append(batch_results)
+
+     # Flatten results
+     outputs = list(concat(all_results))
+
+     # Extract labels and probabilities
+     logger.info("Extracting predictions...")
+     labels_and_probs = [get_top_label(output, id2label) for output in outputs]
+
+     # Add results to dataset
+     dataset = dataset.add_column("label", [label for label, _ in labels_and_probs])
+     dataset = dataset.add_column("prob", [prob for _, prob in labels_and_probs])
+
+     # Push to hub
+     logger.info(f"Pushing results to: {output_dataset_hub_id}")
+     dataset.push_to_hub(output_dataset_hub_id, token=HF_TOKEN)
+     logger.info("✅ Classification complete!")
+
+
+ if __name__ == "__main__":
+     if len(sys.argv) > 1:
+         parser = argparse.ArgumentParser(
+             description="Classify text data using vLLM and save results to Hugging Face Hub",
+             formatter_class=argparse.RawDescriptionHelpFormatter,
+             epilog="""
+ Examples:
+   # Basic usage
+   uv run classify-dataset.py model/name input-dataset output-dataset
+
+   # With custom column and batch size
+   uv run classify-dataset.py model/name input-dataset output-dataset \\
+       --inference-column prompt \\
+       --batch-size 50000
+
+   # Using environment variable for token
+   HF_TOKEN=hf_xxx uv run classify-dataset.py model/name input-dataset output-dataset
+ """
+         )
+
+         parser.add_argument(
+             "hub_model_id",
+             help="Hugging Face model ID for classification (e.g., bert-base-uncased)"
+         )
+         parser.add_argument(
+             "src_dataset_hub_id",
+             help="Input dataset on Hugging Face Hub (e.g., username/dataset-name)"
+         )
+         parser.add_argument(
+             "output_dataset_hub_id",
+             help="Output dataset name on Hugging Face Hub"
+         )
+         parser.add_argument(
+             "--inference-column",
+             type=str,
+             default="text",
+             help="Column containing text to classify (default: text)"
+         )
+         parser.add_argument(
+             "--batch-size",
+             type=int,
+             default=10_000,
+             help="Batch size for inference (default: 10,000)"
+         )
+         parser.add_argument(
+             "--hf-token",
+             type=str,
+             help="Hugging Face token (can also use HF_TOKEN env var)"
+         )
+
+         args = parser.parse_args()
+
+         main(
+             hub_model_id=args.hub_model_id,
+             src_dataset_hub_id=args.src_dataset_hub_id,
+             output_dataset_hub_id=args.output_dataset_hub_id,
+             inference_column=args.inference_column,
+             batch_size=args.batch_size,
+             hf_token=args.hf_token,
+         )
+     else:
+         # Show HF Jobs example when run without arguments
+         print("""
+ vLLM Classification Script
+ ==========================
+
+ This script requires arguments. For usage information:
+   uv run classify-dataset.py --help
+
+ Example HF Jobs command:
+   hfjobs run \\
+     --flavor l4x1 \\
+     --secret HF_TOKEN=$(python -c "from huggingface_hub import HfFolder; print(HfFolder.get_token())") \\
+     vllm/vllm-openai:latest \\
+     /bin/bash -c '
+       uv run https://huggingface.co/datasets/uv-scripts/vllm/resolve/main/classify-dataset.py \\
+         davanstrien/ModernBERT-base-is-new-arxiv-dataset \\
+         username/input-dataset \\
+         username/output-dataset \\
+         --inference-column text \\
+         --batch-size 100000
+     ' \\
+     --project vllm-classify \\
+     --name my-classification-job
+ """)