---
license: mit
task_categories:
  - text-classification
  - feature-extraction
language:
  - en
tags:
  - software-engineering
  - testing
  - performance
  - llm-serving
  - vllm
  - benchmarking
  - ml-evaluation
pretty_name: vLLM PR Test Classification
size_categories:
  - n<1K
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/*
---

# vLLM PR Test Classification Dataset

## 🎯 Overview

This dataset contains 98 vLLM project commits with their corresponding Pull Request (PR) timeline data and comprehensive test type classifications. It provides insights into testing patterns in a major LLM serving infrastructure project.

## 📊 Dataset Description

### Purpose

This dataset was created by analyzing vLLM project PR timelines to:

- Identify different types of testing and benchmarking activities
- Understand testing patterns in LLM infrastructure development
- Provide labeled data for ML models to classify test types in software PRs
- Enable research on performance optimization trends in LLM serving

### Test Categories

Each commit is classified across four test categories:

| Category | Description | Keywords | Prevalence |
|----------|-------------|----------|------------|
| LM Evaluation | Language model evaluation tests | lm_eval, gsm8k, mmlu, hellaswag, truthfulqa | 25.5% |
| Performance | Performance benchmarking tests | TTFT, throughput, latency, ITL, TPOT, tok/s | 81.6% |
| Serving | Serving functionality tests | vllm serve, API server, frontend, online serving | 53.1% |
| General Test | General testing activities | CI, pytest, unittest, buildkite, fastcheck | 96.9% |
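
The flags were assigned by keyword matching over PR timeline text (see Limitations below). A minimal sketch of that approach, reusing the keyword lists from the table above; the exact lists and matching rules used to build the dataset are assumptions, and naive substring matching (e.g., on "ci") is looser than what the original extraction likely used:

```python
# Hypothetical reconstruction of the keyword-matching labeler; the real
# keyword lists and matching rules are not published with the dataset.
KEYWORDS = {
    "has_lm_eval": ["lm_eval", "lm-eval", "gsm8k", "mmlu", "hellaswag", "truthfulqa"],
    "has_performance": ["ttft", "throughput", "latency", "itl", "tpot", "tok/s"],
    "has_serving": ["vllm serve", "api server", "frontend", "online serving"],
    "has_general_test": ["ci", "pytest", "unittest", "buildkite", "fastcheck"],
}

def classify(timeline_text: str) -> dict:
    """Return the four boolean category flags for one PR timeline."""
    text = timeline_text.lower()
    return {flag: any(kw in text for kw in kws) for flag, kws in KEYWORDS.items()}

print(classify("Measured TTFT and throughput improvements; CI is green."))
# {'has_lm_eval': False, 'has_performance': True, 'has_serving': False, 'has_general_test': True}
```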

## 📈 Dataset Statistics

### Overall Distribution

- Total commits: 98
- Multi-category commits: 76 (77.6%)
- Average test types per commit: 2.57
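
These figures follow directly from the four boolean flags. A quick sanity check, assuming the dataset has already been loaded as shown under Basic Loading below:

```python
# Recompute the headline statistics from the boolean flags.
flags = ["has_lm_eval", "has_performance", "has_serving", "has_general_test"]
counts = [sum(row[f] for f in flags) for row in dataset["train"]]

print("Total commits:", len(counts))                           # 98
print("Multi-category commits:", sum(c >= 2 for c in counts))  # 76
print("Average test types per commit:", round(sum(counts) / len(counts), 2))  # 2.57
```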

### Detailed Keyword Frequency

#### Top Performance Keywords (80 commits)

- throughput: 241 mentions
- latency: 191 mentions
- profiling: 114 mentions
- TTFT (Time To First Token): 114 mentions
- ITL (Inter-token Latency): 114 mentions
- TPOT (Time Per Output Token): 108 mentions
- optimization: 87 mentions
- tok/s (tokens per second): 66 mentions

#### Top LM Evaluation Keywords (25 commits)

- gsm8k: 62 mentions
- lm_eval: 33 mentions
- lm-eval: 9 mentions
- mmlu: 3 mentions
- humaneval: 1 mention

#### Top Serving Keywords (52 commits)

- frontend: 181 mentions
- serving: 74 mentions
- api server: 42 mentions
- vllm serve: 23 mentions
- online serving: 19 mentions
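
Counts like these can be approximated by scanning `timeline_text` directly. The exact counting rules behind the published numbers are not documented here, so this sketch may not reproduce them exactly:

```python
import re

def count_mentions(rows, keyword):
    """Total case-insensitive occurrences of `keyword` across all timelines."""
    pattern = re.compile(re.escape(keyword), re.IGNORECASE)
    return sum(len(pattern.findall(row["timeline_text"])) for row in rows)

for kw in ["throughput", "latency", "gsm8k", "frontend"]:
    print(kw, count_mentions(dataset["train"], kw))
```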

πŸ—‚οΈ Data Schema

```python
{
    'commit_hash': str,        # Git commit SHA-1 hash (40 chars)
    'pr_url': str,             # GitHub PR URL (e.g., https://github.com/vllm-project/vllm/pull/12601)
    'has_lm_eval': bool,       # True if commit contains LM evaluation tests
    'has_performance': bool,   # True if commit contains performance benchmarks
    'has_serving': bool,       # True if commit contains serving tests
    'has_general_test': bool,  # True if commit contains general tests
    'test_details': str,       # Pipe-separated test keywords (e.g., "PERF: ttft, throughput | TEST: ci, pytest")
    'timeline_text': str,      # Full PR timeline text with comments, reviews, and commit messages
    'extracted_at': str        # ISO timestamp when data was extracted
}
```
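
`test_details` is the only field with internal structure. A small helper (the function name is ours; the format is as documented above) to split it into a category-to-keywords mapping:

```python
def parse_test_details(details: str) -> dict:
    """Parse the documented 'PREFIX: kw1, kw2 | PREFIX: ...' format."""
    parsed = {}
    for part in details.split("|"):
        part = part.strip()
        if ":" not in part:
            continue  # skip empty or malformed segments
        prefix, keywords = part.split(":", 1)
        parsed[prefix.strip()] = [kw.strip() for kw in keywords.split(",")]
    return parsed

print(parse_test_details("PERF: ttft, throughput | TEST: ci, pytest"))
# {'PERF': ['ttft', 'throughput'], 'TEST': ['ci', 'pytest']}
```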

## 💻 Usage Examples

### Basic Loading

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("your-username/vllm-pr-test-classification")

# Explore the data
print(f"Total examples: {len(dataset['train'])}")
print(f"Features: {dataset['train'].features}")
print(f"First example: {dataset['train'][0]}")
```

### Filtering Examples

```python
# Get commits with performance benchmarks
perf_commits = dataset['train'].filter(lambda x: x['has_performance'])
print(f"Performance commits: {len(perf_commits)}")

# Get commits with LM evaluation
lm_eval_commits = dataset['train'].filter(lambda x: x['has_lm_eval'])
print(f"LM evaluation commits: {len(lm_eval_commits)}")

# Get commits with multiple test types
multi_test = dataset['train'].filter(
    lambda x: sum([x['has_lm_eval'], x['has_performance'],
                   x['has_serving'], x['has_general_test']]) >= 3
)
print(f"Commits with 3+ test types: {len(multi_test)}")
```

### Analysis Example

```python
import pandas as pd

# Convert to pandas for analysis
df = dataset['train'].to_pandas()

# Analyze test type combinations
test_combinations = df[['has_lm_eval', 'has_performance', 'has_serving', 'has_general_test']]
combination_counts = test_combinations.value_counts()
print("Most common test combinations:")
print(combination_counts.head())

# Count performance commits whose test_details mention specific metrics
perf_df = df[df['has_performance']]
print("\nCommits mentioning specific metrics:")
print(f"TTFT: {perf_df['test_details'].str.contains('TTFT', case=False).sum()}")
print(f"Throughput: {perf_df['test_details'].str.contains('throughput', case=False).sum()}")
```

### Text Classification Training

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import TrainingArguments, Trainer

# Train a classifier to identify test types from PR text: load the tokenizer
# and a model with a 4-way multi-label head first, so the preprocessing
# function below can use the tokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=4,
    problem_type="multi_label_classification"
)

# Prepare for multi-label classification
def preprocess_function(examples):
    # Create multi-label targets; the multi-label loss (BCEWithLogitsLoss)
    # expects float labels, not ints
    labels = []
    for i in range(len(examples['commit_hash'])):
        label = [
            float(examples['has_lm_eval'][i]),
            float(examples['has_performance'][i]),
            float(examples['has_serving'][i]),
            float(examples['has_general_test'][i])
        ]
        labels.append(label)

    # Tokenize timeline text
    tokenized = tokenizer(
        examples['timeline_text'],
        truncation=True,
        padding='max_length',
        max_length=512
    )
    tokenized['labels'] = labels
    return tokenized
```
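
The snippet above only prepares the tokenizer, model, and preprocessing. A minimal sketch of the remaining wiring; `output_dir` and all hyperparameters are illustrative placeholders, not tuned values:

```python
# Tokenize, split, and fine-tune; Trainer drops the unused string columns.
encoded = dataset["train"].map(preprocess_function, batched=True)
encoded = encoded.train_test_split(test_size=0.2, seed=42)

args = TrainingArguments(
    output_dir="vllm-pr-test-classifier",  # placeholder path
    num_train_epochs=3,
    per_device_train_batch_size=8,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["test"],
)
trainer.train()
print(trainer.evaluate())
```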

πŸ” Sample Data

### Example 1: Performance-focused commit

```json
{
  "commit_hash": "fc542144c4477ffec1d3de6fa43e54f8fb5351e8",
  "pr_url": "https://github.com/vllm-project/vllm/pull/12563",
  "has_lm_eval": false,
  "has_performance": true,
  "has_serving": false,
  "has_general_test": true,
  "test_details": "PERF: tok/s, optimization | TEST: CI",
  "timeline_text": "[Guided decoding performance optimization]..."
}
```

### Example 2: Comprehensive testing commit

```json
{
  "commit_hash": "aea94362c9bdd08ed2b346701bdc09d278e85f66",
  "pr_url": "https://github.com/vllm-project/vllm/pull/12287",
  "has_lm_eval": true,
  "has_performance": true,
  "has_serving": true,
  "has_general_test": true,
  "test_details": "LM_EVAL: lm_eval, gsm8k | PERF: TTFT, ITL | SERVING: vllm serve | TEST: test, CI",
  "timeline_text": "[Frontend][V1] Online serving performance improvements..."
}
```

πŸ› οΈ Potential Use Cases

1. **Test Type Classification**: Train models to automatically classify test types in software PRs
2. **Testing Pattern Analysis**: Study how different test types correlate in infrastructure projects
3. **Performance Optimization Research**: Analyze performance testing trends in LLM serving systems
4. **CI/CD Insights**: Understand continuous integration patterns in ML infrastructure projects
5. **Documentation Generation**: Generate test documentation from PR timelines
6. **Code Review Automation**: Build tools to automatically suggest relevant tests based on PR content

## 📚 Source

This dataset was extracted from the vLLM project GitHub repository PR timelines. vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs.

## 🔄 Updates

- v1.0.0 (2025-01): Initial release with 98 commits

## 📜 License

This dataset is released under the MIT License, consistent with the vLLM project's licensing.

## 📖 Citation

If you use this dataset in your research or applications, please cite:

```bibtex
@dataset{vllm_pr_test_classification_2025,
  title={vLLM PR Test Classification Dataset},
  author={vLLM Community Contributors},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/your-username/vllm-pr-test-classification},
  note={A dataset of 98 vLLM commits with test type classifications extracted from GitHub PR timelines}
}
```

## 🤝 Contributing

If you'd like to contribute to this dataset or report issues:

1. Open an issue on the Hugging Face dataset repository
2. Submit improvements via pull requests
3. Share your use cases and findings

## ⚠️ Limitations

- Dataset size is limited to 98 commits
- Timeline text may be truncated for very long PR discussions
- Classification is based on keyword matching, which may miss context-dependent references
- The dataset represents a snapshot from a specific period of vLLM development

πŸ™ Acknowledgments

Thanks to the vLLM project maintainers and contributors for their open-source work that made this dataset possible.