---
viewer: false
tags: [uv-script, classification, vllm, structured-outputs, gpu-required]
---
# Dataset Classification Script
GPU-accelerated text classification for Hugging Face datasets with guaranteed valid outputs through structured generation.
## Quick Start
```bash
# Classify IMDB reviews
uv run classify-dataset.py \
--input-dataset stanfordnlp/imdb \
--column text \
--labels "positive,negative" \
--output-dataset user/imdb-classified
```
That's it! No installation, no setup - just `uv run`.
## Requirements
- **GPU required**: uses vLLM for GPU-accelerated inference (CUDA)
- Python 3.10+
- UV (will handle all dependencies automatically)
- vLLM >= 0.6.6
## Features
- **Guaranteed valid outputs** using structured generation with guided decoding
- **Zero-shot classification** without training data required
- **GPU-optimized** for maximum throughput and efficiency
- **Default model**: HuggingFaceTB/SmolLM3-3B (fast 3B model with thinking capabilities)
- **Robust text handling** with preprocessing and validation
- **Automatic progress tracking** and detailed statistics
- **Direct Hub integration** - read and write datasets seamlessly
- **Label descriptions** support for providing context to improve accuracy
- **Reasoning mode** for interpretable classifications with thinking traces
- **JSON output parsing** for reliable extraction from reasoning mode
- **Optimized batching** with vLLM's automatic batch processing
- **Multiple guided backends** - supports outlines, xgrammar, and more
## Usage
### Basic Classification
```bash
uv run classify-dataset.py \
--input-dataset <dataset-id> \
--column <text-column> \
--labels <comma-separated-labels> \
--output-dataset <output-id>
```
### Arguments
**Required:**
- `--input-dataset`: Hugging Face dataset ID (e.g., `stanfordnlp/imdb`, `user/my-dataset`)
- `--column`: Name of the text column to classify
- `--labels`: Comma-separated classification labels (e.g., `"spam,ham"`)
- `--output-dataset`: Where to save the classified dataset
**Optional:**
- `--model`: Model to use (default: **`HuggingFaceTB/SmolLM3-3B`** - a fast 3B parameter model)
- `--label-descriptions`: Provide descriptions for each label to improve classification accuracy
- `--enable-reasoning`: Enable reasoning mode with thinking traces (adds reasoning column)
- `--split`: Dataset split to process (default: `train`)
- `--max-samples`: Limit samples for testing
- `--shuffle`: Shuffle dataset before selecting samples (useful for random sampling)
- `--shuffle-seed`: Random seed for shuffling (default: 42)
- `--temperature`: Generation temperature (default: 0.1)
- `--guided-backend`: Backend for guided decoding (default: `outlines`)
- `--hf-token`: Hugging Face token (or use `HF_TOKEN` env var)
### Label Descriptions
Provide context for your labels to improve classification accuracy:
```bash
uv run classify-dataset.py \
--input-dataset user/support-tickets \
--column content \
--labels "bug,feature,question,other" \
--label-descriptions "bug:something is broken,feature:request for new functionality,question:asking for help,other:anything else" \
--output-dataset user/tickets-classified
```
The model uses these descriptions to better understand what each label represents, leading to more accurate classifications.
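The `--label-descriptions` value is a single string of comma-separated `label:description` pairs. As a minimal, illustrative sketch (not the script's internal code), it can be parsed into a mapping like this:

```python
# Illustrative only: parse a "label:description,label:description" string into a dict.
# The script's actual prompt construction may differ.
raw = ("bug:something is broken,feature:request for new functionality,"
       "question:asking for help,other:anything else")
descriptions = dict(pair.split(":", 1) for pair in raw.split(","))
print(descriptions["feature"])  # -> "request for new functionality"
```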
### Reasoning Mode
Enable thinking traces for interpretable classifications:
```bash
uv run classify-dataset.py \
--input-dataset stanfordnlp/imdb \
--column text \
--labels "positive,negative,neutral" \
--enable-reasoning \
--output-dataset user/imdb-with-reasoning
```
When `--enable-reasoning` is used:
- The model generates step-by-step reasoning using SmolLM3's thinking capabilities
- Output includes three columns: `classification`, `reasoning`, and `parsing_success` (a quick inspection sketch follows this list)
- Final answer must be in JSON format: `{"label": "chosen_label"}`
- Useful for understanding complex classification decisions
- Trade-off: Slower but more interpretable
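A quick way to inspect those columns after a run, using the `datasets` library (assumes the output dataset from the example above exists on the Hub):

```python
# Inspect reasoning-mode output (assumes the example output dataset above was pushed).
from datasets import load_dataset

ds = load_dataset("user/imdb-with-reasoning", split="train")
for row in ds.select(range(3)):
    print(row["classification"], "| parsed OK:", row["parsing_success"])
    print(row["reasoning"][:200])
```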
## Examples
### Sentiment Analysis
```bash
uv run classify-dataset.py \
--input-dataset stanfordnlp/imdb \
--column text \
--labels "positive,negative" \
--output-dataset user/imdb-sentiment
```
### Support Ticket Classification
```bash
uv run classify-dataset.py \
--input-dataset user/support-tickets \
--column content \
--labels "bug,feature_request,question,other" \
--label-descriptions "bug:code or product not working as expected,feature_request:asking for new functionality,question:seeking help or clarification,other:general comments or feedback" \
--output-dataset user/tickets-classified
```
### News Categorization
```bash
uv run classify-dataset.py \
--input-dataset ag_news \
--column text \
--labels "world,sports,business,tech" \
--output-dataset user/ag-news-categorized \
--model meta-llama/Llama-3.2-3B-Instruct
```
### Complex Classification with Reasoning
```bash
uv run classify-dataset.py \
--input-dataset user/customer-feedback \
--column text \
--labels "very_positive,positive,neutral,negative,very_negative" \
--label-descriptions "very_positive:extremely satisfied,positive:generally satisfied,neutral:mixed feelings,negative:dissatisfied,very_negative:extremely dissatisfied" \
--enable-reasoning \
--output-dataset user/feedback-analyzed
```
This combines label descriptions with reasoning mode for maximum interpretability.
### ArXiv ML Research Classification
Classify academic papers into machine learning research areas:
```bash
# Fast classification with random sampling
uv run classify-dataset.py \
--input-dataset librarian-bots/arxiv-metadata-snapshot \
--column abstract \
--labels "llm,computer_vision,reinforcement_learning,optimization,theory,other" \
--label-descriptions "llm:language models and NLP,computer_vision:image and video processing,reinforcement_learning:RL and decision making,optimization:training and efficiency,theory:theoretical ML foundations,other:other ML topics" \
--output-dataset user/arxiv-ml-classified \
--split "train[:10000]" \
--max-samples 100 \
--shuffle
# With reasoning for nuanced classification
uv run classify-dataset.py \
--input-dataset librarian-bots/arxiv-metadata-snapshot \
--column abstract \
--labels "multimodal,agents,reasoning,safety,efficiency" \
--label-descriptions "multimodal:vision-language and cross-modal models,agents:autonomous agents and tool use,reasoning:reasoning and planning systems,safety:alignment and safety research,efficiency:model optimization and deployment" \
--enable-reasoning \
--output-dataset user/arxiv-frontier-research \
--split "train[:1000]" \
--max-samples 50
```
The reasoning mode is particularly valuable for academic abstracts where papers often span multiple topics and require careful analysis to determine the primary focus.
## Running on HF Jobs
Optimized for [Hugging Face Jobs](https://huggingface.co/docs/hub/spaces-gpu-jobs) (requires Pro subscription or Team/Enterprise organization):
```bash
# Run on L4 GPU with vLLM image
hf jobs uv run \
--flavor l4x1 \
--image vllm/vllm-openai:latest \
https://huggingface.co/datasets/uv-scripts/classification/raw/main/classify-dataset.py \
--input-dataset stanfordnlp/imdb \
--column text \
--labels "positive,negative" \
--output-dataset user/imdb-classified
```
### GPU Flavors
- `t4-small`: Budget option for smaller models
- `l4x1`: Good balance for 7B models
- `a10g-small`: Fast inference for 3B models
- `a10g-large`: More memory for larger models
- `a100-large`: Maximum performance
## Advanced Usage
### Random Sampling
When working with ordered datasets, use `--shuffle` with `--max-samples` to get a representative sample:
```bash
# Get 50 random reviews instead of the first 50
uv run classify-dataset.py \
--input-dataset stanfordnlp/imdb \
--column text \
--labels "positive,negative" \
--output-dataset user/imdb-sample \
--max-samples 50 \
--shuffle \
--shuffle-seed 123 # For reproducibility
```
This is especially important for:
- Chronologically ordered datasets (news, papers, social media)
- Pre-sorted datasets (by rating, category, etc.)
- Testing on diverse samples before processing the full dataset (see the sketch below)
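Conceptually, `--shuffle`, `--shuffle-seed`, and `--max-samples` correspond roughly to the following `datasets` operations (illustration only; the script handles this internally):

```python
# Rough equivalent of --shuffle --shuffle-seed 123 --max-samples 50 (illustration only).
from datasets import load_dataset

ds = load_dataset("stanfordnlp/imdb", split="train")
sample = ds.shuffle(seed=123).select(range(50))  # 50 random reviews rather than the first 50
```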
### Using Different Models
By default, this script uses **HuggingFaceTB/SmolLM3-3B** - a fast, efficient 3B parameter model that's perfect for most classification tasks. You can easily use any other instruction-tuned model:
```bash
# Larger model for complex classification
uv run classify-dataset.py \
--input-dataset user/legal-docs \
--column text \
--labels "contract,patent,brief,memo,other" \
--output-dataset user/legal-classified \
--model Qwen/Qwen2.5-7B-Instruct
```
### Large Datasets
vLLM batches requests automatically for optimal throughput, so even very large datasets are processed efficiently without manual tuning:
```bash
uv run classify-dataset.py \
--input-dataset user/huge-dataset \
--column text \
--labels "A,B,C" \
--output-dataset user/huge-classified
```
## Performance
- **SmolLM3-3B (default)**: ~50-100 texts/second on A10
- **7B models**: ~20-50 texts/second on A10
- vLLM automatically optimizes batching for best throughput
- Performance scales with GPU memory and compute capability
## How It Works
1. **vLLM**: Provides efficient GPU batch inference with automatic batching
2. **Guided Decoding**: Uses outlines backend to guarantee valid label outputs
3. **Structured Generation**: Constrains model outputs to exact label choices
4. **UV**: Handles all dependencies automatically
The script loads your dataset, preprocesses texts, classifies each one with guaranteed valid outputs, then saves the results as a new column in the output dataset.
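As an illustration of steps 2 and 3, choice-constrained generation with vLLM (>= 0.6.6) looks roughly like the sketch below. This is not the script's exact code; the script wraps the same idea with dataset loading, prompt construction, and Hub upload:

```python
# Sketch of choice-constrained decoding with vLLM >= 0.6.6 (not the script's exact code).
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

labels = ["positive", "negative"]
llm = LLM(model="HuggingFaceTB/SmolLM3-3B")
params = SamplingParams(
    temperature=0.1,
    guided_decoding=GuidedDecodingParams(choice=labels),  # output constrained to a label
)
outputs = llm.generate(["Classify the sentiment of: 'I loved this movie.'"], params)
print(outputs[0].outputs[0].text)  # always one of: positive, negative
```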
## Troubleshooting
### CUDA Not Available
This script requires a GPU. Run it on:
- A machine with NVIDIA GPU
- HF Jobs (recommended)
- Cloud GPU instances
### Out of Memory
- Use a smaller model
- Use a larger GPU (e.g., a100-large)
### Invalid/Skipped Texts
- Texts shorter than 3 characters are skipped
- Empty or None values are marked as invalid
- Very long texts are truncated to 4000 characters (see the sketch below)
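These rules amount to simple validation and truncation. A hypothetical equivalent, for reference (the script's exact implementation may differ):

```python
# Hypothetical equivalent of the validation rules above (the script may differ in detail).
def preprocess(text, min_chars=3, max_chars=4000):
    if text is None or not str(text).strip():
        return None               # empty or None -> marked invalid
    text = str(text).strip()
    if len(text) < min_chars:
        return None               # too short -> skipped
    return text[:max_chars]       # overly long texts are truncated
```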
### Classification Quality
- With guided decoding, outputs are guaranteed to be valid labels
- For better results, use clear and distinct label names
- Use `--enable-reasoning` for complex classifications
- Use a larger model for nuanced tasks
### vLLM Version Issues
If you see `ImportError: cannot import name 'GuidedDecodingParams'`:
- Your vLLM version is too old (requires >= 0.6.6)
- The script specifies the correct version in its dependencies (illustrated below)
- UV should automatically install the correct version
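UV reads those pins from the script's inline metadata (PEP 723). Illustratively, the header looks roughly like this; see `classify-dataset.py` itself for the exact dependency list:

```python
# Illustrative inline script metadata (PEP 723); see classify-dataset.py for the real header.
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "vllm>=0.6.6",
#     "datasets",
#     "huggingface-hub",
# ]
# ///
```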
## Advanced Workflows
For complex real-world workflows that integrate UV scripts with the Python HF Jobs API, see the [ArXiv ML Trends example](examples/arxiv-workflow/). This demonstrates:
- **Multi-stage pipelines**: Data preparation → GPU classification → Analysis
- **Python API orchestration**: Using `run_uv_job()` to manage GPU jobs programmatically
- **Production patterns**: Error handling, parallel execution, and incremental updates
- **Cost optimization**: Choosing appropriate compute resources for each task
```python
# Example: Submit a classification job via Python API
from huggingface_hub import run_uv_job
job = run_uv_job(
script="https://huggingface.co/datasets/uv-scripts/classification/raw/main/classify-dataset.py",
args=["--input-dataset", "my/dataset", "--labels", "A,B,C"],
flavor="l4x1",
image="vllm/vllm-openai:latest"
)
result = job.wait()
```
## License
This script is provided as-is for use with the UV Scripts organization.