marksaroufim committed
Commit 4484246 · 1 Parent(s): d21f2c7

Add NVIDIA NVFP4 submissions data

- Add nvidia_nvfp4_submissions.parquet (~1.4 GB) with 197,594 deduplicated submissions
- Includes nvfp4_gemv (35,559), nvfp4_gemm (48,682), nvfp4_dual_gemm (113,353)
- All submissions include full code content
- Add docs.md with data processing documentation
- Add queries.sql with SQL queries for data extraction
- Add scripts/nvfp4/ with helper scripts for analysis
- Update README.md with NVIDIA data documentation

.gitignore ADDED
@@ -0,0 +1 @@
+ __pycache__/
README.md CHANGED
@@ -1,9 +1,11 @@
  ---
  configs:
- - config_name: submissions
+ - config_name: amd_submissions
    data_files: "submissions.parquet"
- - config_name: successful_submissions
+ - config_name: amd_successful_submissions
    data_files: "successful_submissions.parquet"
+ - config_name: nvidia_nvfp4_submissions
+   data_files: "nvidia_nvfp4_submissions.parquet"
  - config_name: leaderboards
    data_files: "leaderboards.parquet"
  tags:
@@ -11,11 +13,62 @@ tags:
  license: mit
  ---
 
- This is the dataset that was created from the first and second AMD $100K kernel competitions, containing roughly 110K kernels for fp8-gemm, moe, mla, all2all, gemm+reducescatter, and allgather+gemm optimized to run on MI300. Learn more at gpumode.com/v2/news
-
- To see the full list of kernel competitions we've run and are running, check out https://github.com/gpu-mode/reference-kernels, which also contains details on reference kernels and their input shapes and distributions
-
- We are planning on adding kernels optimized for NVFP4 on Blackwell next
+ # KernelBot Competition Data
+
+ This dataset contains GPU kernel submissions from the KernelBot competition platform. Submissions are optimized GPU kernels written for specific hardware targets.
+
+ ## Data Files
+
+ ### AMD MI300 Submissions
+ | File | Description |
+ |------|-------------|
+ | `submissions.parquet` | All AMD competition submissions |
+ | `successful_submissions.parquet` | AMD submissions that passed correctness tests |
+ | `deduplicated_submissions.parquet` | AMD submissions deduplicated by (user, code) |
+ | `deduplicated_successful_submissions.parquet` | Deduplicated passing AMD submissions |
+
+ **AMD Problems:** fp8-gemm, moe (mixture of experts), mla-decode, all2all, gemm+reducescatter, allgather+gemm
+
+ ### NVIDIA Blackwell NVFP4 Submissions
+ | File | Size | Description |
+ |------|------|-------------|
+ | `nvidia_nvfp4_submissions.parquet` | ~1.4 GB | NVFP4 submissions deduplicated by (user, code), with full code content |
+
+ **NVIDIA NVFP4 Problems:**
+ | Problem | Submissions | Unique Users | Passing |
+ |---------|-------------|--------------|---------|
+ | nvfp4_gemv | 35,559 | 281 | 4,860 |
+ | nvfp4_gemm | 48,682 | 160 | 26,123 |
+ | nvfp4_dual_gemm | 113,353 | 160 | 73,802 |
+
+ **Note:** Scores are execution time in seconds. **Lower is better.**
+
+ ## Helper Scripts
+
+ - `scripts/nvfp4/analyze_submissions.py` - Python functions for analyzing submissions
+ - `docs.md` - Documentation for data processing workflows
+
+ ### Quick Start
+
+ ```python
+ from analyze_submissions import load_submissions, top_contestants, author_progression
+
+ # Load the NVIDIA NVFP4 data
+ df = load_submissions()
+
+ # Get the top 20 contestants for a problem
+ leaders = top_contestants(df, problem_name='nvfp4_gemm', n=20)
+
+ # See a user's progression over time
+ progression = author_progression(df, user_name='username', problem_name='nvfp4_gemm')
+ ```
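+
+ Alternatively, a hedged sketch using the `datasets` library ("org/dataset" below is a placeholder, not this dataset's actual repo id):
+
+ ```python
+ from datasets import load_dataset
+
+ # Each config above maps to one parquet file; data-file configs load as a "train" split
+ ds = load_dataset("org/dataset", name="nvidia_nvfp4_submissions", split="train")
+ ```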
+
+ ## Learn More
+
+ - Competition platform: [gpumode.com](https://gpumode.com)
+ - Reference kernels and problem specs: [github.com/gpu-mode/reference-kernels](https://github.com/gpu-mode/reference-kernels)
+
+ ## Citation
 
  If you use this dataset in your work, please cite:
docs.md ADDED
@@ -0,0 +1,158 @@
+ # KernelBot Data Processing Skills
+
+ This document describes how to extract and process submission data from the KernelBot database.
+
+ ## Database Connection
+
+ The production database is hosted on Heroku. **NEVER run write operations (INSERT, UPDATE, DELETE) on this database.**
+
+ ```bash
+ # Get DATABASE_URL from Heroku
+ heroku config:get DATABASE_URL --app discord-cluster-manager
+ ```
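+
+ To make accidental writes impossible, the session can be opened read-only; a minimal sketch (the guard is a suggestion, not an existing project convention):
+
+ ```python
+ import os
+
+ import psycopg2
+
+ # DATABASE_URL comes from the heroku command above
+ conn = psycopg2.connect(
+     os.environ["DATABASE_URL"],
+     options="-c default_transaction_read_only=on",  # server rejects writes for this session
+ )
+ conn.set_session(readonly=True)  # the same guard, enforced at the driver level
+ ```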
+
+ ## Database Schema
+
+ The relevant tables are in the `leaderboard` schema:
+
+ | Table | Description |
+ |-------|-------------|
+ | `leaderboard.leaderboard` | Problem definitions (id, name, deadline, task, description) |
+ | `leaderboard.submission` | User submissions (id, leaderboard_id, user_id, code_id, submission_time, status) |
+ | `leaderboard.runs` | Execution results (submission_id, score, passed, mode, runner, result) |
+ | `leaderboard.user_info` | User details (id, user_name) |
+ | `leaderboard.gpu_type` | GPU types per problem (leaderboard_id, gpu_type) |
+ | `leaderboard.code_files` | Actual submission code content (old_code text, code bytea) |
+
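+ Note that `code_files` stores older submissions as text (`old_code`) and newer ones as UTF-8 bytes (`code`). The export query in `queries.sql` already normalizes this with `COALESCE`; when reading rows directly, you can do the same in Python (`decode_code` is a hypothetical helper, sketched here):
+
+ ```python
+ def decode_code(old_code, code):
+     """Return the submission source regardless of which column is populated."""
+     if old_code is not None:
+         return old_code
+     return code.decode("utf-8") if code is not None else None
+ ```
+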
+ ## Key Problem IDs
+
+ ### NVFP4 Problems
+ - **595**: nvfp4_gemv
+ - **597**: nvfp4_gemm
+ - **598**: nvfp4_dual_gemm
+ - **730**: nvfp4_group_gemm (not released yet)
+
+ ### AMD Problems
+ - **398**: amd-identity
+ - **399**: amd-fp8-mm
+ - **430**: amd-mixture-of-experts
+ - **463**: amd-mla-decode
+ - **563**: amd-all2all
+ - **564**: amd-gemm-rs
+ - **565**: amd-ag-gemm
+
+ ## Run Modes
+
+ | Mode | Description | Has Score? |
+ |------|-------------|------------|
+ | `test` | Correctness tests | No |
+ | `benchmark` | Performance benchmarks (internal) | No |
+ | `leaderboard` | Official leaderboard runs | **Yes** |
+ | `profile.0-3` | Profiling runs | No |
+
+ **Important:**
+ - Use `mode = 'leaderboard'` when joining runs to get scores.
+ - **Lower scores are better** (scores are execution time in seconds).
+
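+ In pandas terms, a small sketch (`runs_df` stands for rows pulled from `leaderboard.runs`; it is not a helper provided by this repo):
+
+ ```python
+ # Only 'leaderboard' runs carry scores; filter before ranking
+ scored = runs_df[(runs_df["mode"] == "leaderboard") & runs_df["score"].notna()]
+ fastest_first = scored.sort_values("score")  # lower (faster) is better
+ ```
+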
+ ## SQL Queries
+
+ All SQL queries are in `queries.sql`. Key queries:
+ - List all problems
+ - Check submission counts
+ - Export deduplicated submissions with code
+ - Get top N submissions
+ - Get user progression over time
+
+ ## Adding Support for a New Problem
+
+ ### Step 1: Find the Problem ID
+ Use the "LIST ALL PROBLEMS" query from `queries.sql`.
+
+ ### Step 2: Check Submission Counts
+ Use the "CHECK SUBMISSION COUNTS" query from `queries.sql`.
+
+ ### Step 3: Export Deduplicated Submissions
+ Use the "EXPORT DEDUPLICATED SUBMISSIONS WITH CODE" query from `queries.sql`.
+
+ ```python
+ import pandas as pd
+ import psycopg2
+
+ DATABASE_URL = "..."  # from heroku config:get
+ conn = psycopg2.connect(DATABASE_URL)
+
+ # Paste the "EXPORT DEDUPLICATED SUBMISSIONS WITH CODE" query from
+ # queries.sql here, adjusting the problem IDs in the IN (...) clause.
+ query = """..."""
+
+ df = pd.read_sql(query, conn)
+ df.to_parquet('new_problem_submissions.parquet', index=False)
+ ```
+
+ ### Step 4: Verify Data Quality
+ ```python
+ from analyze_submissions import load_submissions, leaderboard_summary
+
+ df = load_submissions('new_problem_submissions.parquet')
+ print(leaderboard_summary(df))
+ ```
+
+ ## Accessing Submission Code
+
+ The parquet files include the full code content for each submission:
+
+ ```python
+ from analyze_submissions import load_submissions
+
+ df = load_submissions()
+
+ # Get a specific user's best submission
+ user_subs = df[(df['user_name'] == 'gau.nernst') & (df['problem_name'] == 'nvfp4_gemv')]
+ best = user_subs.sort_values('score').head(1)
+
+ # Access the code
+ code = best['code'].values[0]
+ print(code)
+ ```
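+
+ To compare two submissions' code, a sketch using the standard library's `difflib` (the row selection is illustrative):
+
+ ```python
+ import difflib
+
+ # Diff the two fastest nvfp4_gemv submissions
+ top2 = df[df['problem_name'] == 'nvfp4_gemv'].sort_values('score').head(2)
+ a, b = top2['code'].values
+ print('\n'.join(difflib.unified_diff(a.splitlines(), b.splitlines(), lineterm='')))
+ ```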
+
+ ## Helper Functions
+
+ Use `analyze_submissions.py`:
+
+ ```python
+ from analyze_submissions import (
+     load_submissions,     # Load parquet file
+     author_progression,   # See a user's submissions over time
+     top_contestants,      # Get leaderboard rankings
+     leaderboard_summary,  # Summary stats per problem
+     user_stats,           # Stats for a specific user
+     format_score,         # Format score with units (us, ms, s)
+ )
+ ```
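+
+ For example, `format_score` renders the raw seconds in readable units:
+
+ ```python
+ from analyze_submissions import format_score
+
+ print(format_score(0.000123, unit='auto'))  # "123.00 µs"
+ print(format_score(0.25, unit='ms'))        # "250.000 ms"
+ ```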
+
+ ## Environment Setup
+
+ ```bash
+ uv venv .venv
+ source .venv/bin/activate
+ uv pip install pandas pyarrow psycopg2-binary
+ ```
+
+ ## Files
+
+ | File | Description |
+ |------|-------------|
+ | `nvidia_nvfp4_submissions.parquet` | Deduplicated NVIDIA NVFP4 submissions with code (~1.4 GB) |
+ | `queries.sql` | All SQL queries for data extraction |
+ | `scripts/nvfp4/analyze_submissions.py` | Helper functions library |
+ | `scripts/nvfp4/get_fastest_submission.py` | Print a user's fastest submission |
+ | `scripts/nvfp4/query_submissions.py` | List submission IDs or query a specific ID |
+
+ ## Review Checklist Before Pushing
+
+ 1. Verify submission counts match expectations
+ 2. Check for any anomalies in scores (negative, extremely large, etc.)
+ 3. Confirm deduplication worked correctly
+ 4. Test that the helper functions work with the new data
+ 5. Run `python scripts/nvfp4/query_submissions.py` to verify
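+
+ A quick sketch for item 2 (the 60-second threshold is an assumption, not a project rule):
+
+ ```python
+ import pandas as pd
+
+ df = pd.read_parquet("nvidia_nvfp4_submissions.parquet")
+
+ # Negative times are impossible; anything over a minute is suspicious for these kernels
+ anomalies = df[(df["score"] < 0) | (df["score"] > 60)]
+ print(anomalies[["submission_id", "problem_name", "score"]])
+ ```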
nvidia_nvfp4_submissions.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f727de8e9dadd9558d4e27d98cad1cd059ca840631cb9c636907b3a1250406d6
+ size 1500132381
queries.sql ADDED
@@ -0,0 +1,124 @@
+ -- KernelBot Database Queries
+ -- All queries are READ ONLY. Never run INSERT/UPDATE/DELETE on production.
+ -- Scores are execution time in seconds. Lower is better.
+
+ --------------------------------------------------------------------------------
+ -- LIST ALL PROBLEMS
+ --------------------------------------------------------------------------------
+ SELECT
+     l.id,
+     l.name,
+     l.deadline,
+     l.description,
+     array_agg(g.gpu_type) as gpu_types
+ FROM leaderboard.leaderboard l
+ LEFT JOIN leaderboard.gpu_type g ON l.id = g.leaderboard_id
+ GROUP BY l.id, l.name, l.deadline, l.description
+ ORDER BY l.id;
+
+ --------------------------------------------------------------------------------
+ -- PROBLEM IDS
+ --------------------------------------------------------------------------------
+ -- NVFP4: 595 (gemv), 597 (gemm), 598 (dual_gemm), 730 (group_gemm)
+ -- AMD:   398 (identity), 399 (fp8-mm), 430 (moe), 463 (mla-decode),
+ --        563 (all2all), 564 (gemm-rs), 565 (ag-gemm)
+
+ --------------------------------------------------------------------------------
+ -- CHECK SUBMISSION COUNTS FOR A PROBLEM
+ --------------------------------------------------------------------------------
+ SELECT
+     COUNT(*) as total_submissions,
+     COUNT(DISTINCT user_id) as unique_users
+ FROM leaderboard.submission
+ WHERE leaderboard_id = 595;  -- Replace with problem ID
+
+ --------------------------------------------------------------------------------
+ -- EXPORT DEDUPLICATED SUBMISSIONS WITH CODE
+ -- Deduplicates by (user_id, code_id), keeping the fastest score
+ --------------------------------------------------------------------------------
+ WITH ranked AS (
+     SELECT
+         s.id as submission_id,
+         s.leaderboard_id,
+         l.name as problem_name,
+         s.user_id,
+         u.user_name,
+         s.code_id,
+         s.file_name,
+         s.submission_time,
+         s.status,
+         r.score,
+         r.passed,
+         r.mode,
+         r.runner,
+         COALESCE(c.old_code, convert_from(c.code, 'UTF8')) as code,
+         ROW_NUMBER() OVER (
+             PARTITION BY s.leaderboard_id, s.user_id, s.code_id
+             ORDER BY r.score ASC NULLS LAST
+         ) as rn
+     FROM leaderboard.submission s
+     JOIN leaderboard.leaderboard l ON s.leaderboard_id = l.id
+     LEFT JOIN leaderboard.user_info u ON s.user_id = u.id
+     LEFT JOIN leaderboard.runs r ON s.id = r.submission_id AND r.mode = 'leaderboard'
+     LEFT JOIN leaderboard.code_files c ON s.code_id = c.id
+     WHERE s.leaderboard_id IN (595, 597, 598)  -- Replace with problem IDs
+ )
+ SELECT
+     submission_id, leaderboard_id, problem_name, user_id, user_name,
+     code_id, file_name, submission_time, status, score, passed, mode, runner, code
+ FROM ranked
+ WHERE rn = 1
+ ORDER BY problem_name, score ASC NULLS LAST;
+
+ --------------------------------------------------------------------------------
+ -- CHECK RUN MODES AND SCORES
+ --------------------------------------------------------------------------------
+ SELECT
+     r.mode,
+     COUNT(*) as cnt,
+     COUNT(r.score) as has_score,
+     MIN(r.score) as min_score,
+     MAX(r.score) as max_score
+ FROM leaderboard.runs r
+ JOIN leaderboard.submission s ON r.submission_id = s.id
+ WHERE s.leaderboard_id IN (595, 597, 598)
+ GROUP BY r.mode
+ ORDER BY cnt DESC;
+
+ --------------------------------------------------------------------------------
+ -- GET TOP N SUBMISSIONS FOR A PROBLEM
+ --------------------------------------------------------------------------------
+ SELECT
+     u.user_name,
+     r.score,
+     s.submission_time
+ FROM leaderboard.submission s
+ JOIN leaderboard.runs r ON s.id = r.submission_id AND r.mode = 'leaderboard'
+ LEFT JOIN leaderboard.user_info u ON s.user_id = u.id
+ WHERE s.leaderboard_id = 595  -- Replace with problem ID
+   AND r.passed = true
+   AND r.score IS NOT NULL
+ ORDER BY r.score ASC
+ LIMIT 20;
+
+ --------------------------------------------------------------------------------
+ -- GET USER'S SUBMISSIONS OVER TIME (progression)
+ --------------------------------------------------------------------------------
+ SELECT
+     s.submission_time,
+     r.score,
+     r.passed
+ FROM leaderboard.submission s
+ JOIN leaderboard.runs r ON s.id = r.submission_id AND r.mode = 'leaderboard'
+ JOIN leaderboard.user_info u ON s.user_id = u.id
+ WHERE u.user_name = 'gau.nernst'  -- Replace with username
+   AND s.leaderboard_id = 595  -- Replace with problem ID
+ ORDER BY s.submission_time ASC;
+
+ --------------------------------------------------------------------------------
+ -- GET CODE FOR A SPECIFIC SUBMISSION
+ --------------------------------------------------------------------------------
+ SELECT
+     COALESCE(c.old_code, convert_from(c.code, 'UTF8')) as code
+ FROM leaderboard.code_files c
+ WHERE c.id = 79741;  -- Replace with code_id
scripts/nvfp4/analyze_submissions.py ADDED
@@ -0,0 +1,168 @@
+ #!/usr/bin/env python3
+ """
+ Helper functions for analyzing KernelBot submissions.
+
+ Usage:
+     from analyze_submissions import load_submissions, author_progression, top_contestants
+ """
+
+ import pandas as pd
+ from pathlib import Path
+
+
+ def format_score(score, unit='us'):
+     """
+     Format a score with appropriate units.
+
+     Args:
+         score: Score in seconds
+         unit: 'us' for microseconds, 'ms' for milliseconds, 'auto' to pick a
+               unit by magnitude; any other value falls back to seconds
+
+     Returns:
+         Formatted string with units
+     """
+     if pd.isna(score):
+         return 'N/A'
+
+     if unit == 'auto':
+         if score < 0.001:  # Less than 1 ms, show in microseconds
+             return f"{score * 1_000_000:.2f} µs"
+         elif score < 1:  # Less than 1 s, show in milliseconds
+             return f"{score * 1_000:.3f} ms"
+         else:
+             return f"{score:.4f} s"
+     elif unit == 'us':
+         return f"{score * 1_000_000:.2f} µs"
+     elif unit == 'ms':
+         return f"{score * 1_000:.3f} ms"
+     else:
+         return f"{score:.6f} s"
+
+
+ def load_submissions(parquet_path: str = None) -> pd.DataFrame:
+     """Load deduplicated submissions from a parquet file."""
+     if parquet_path is None:
+         parquet_path = Path(__file__).parent.parent.parent / "nvidia_nvfp4_submissions.parquet"
+     return pd.read_parquet(parquet_path)
+
+
+ def author_progression(df: pd.DataFrame, user_id: str = None, user_name: str = None,
+                        problem_name: str = None) -> pd.DataFrame:
+     """
+     Get submissions from an author sorted by time to see their progression.
+
+     Args:
+         df: DataFrame of submissions
+         user_id: Filter by user ID (Discord ID)
+         user_name: Filter by username (partial match, case-insensitive)
+         problem_name: Filter by problem name
+
+     Returns:
+         DataFrame sorted by submission_time showing the author's journey
+     """
+     result = df.copy()
+
+     if user_id:
+         result = result[result['user_id'] == user_id]
+
+     if user_name:
+         result = result[result['user_name'].str.contains(user_name, case=False, na=False)]
+
+     if problem_name:
+         result = result[result['problem_name'] == problem_name]
+
+     return result.sort_values('submission_time')
+
+
+ def top_contestants(df: pd.DataFrame, problem_name: str = None, n: int = 20,
+                     passing_only: bool = True) -> pd.DataFrame:
+     """
+     Get top contestants sorted by their best score (fastest time).
+
+     Args:
+         df: DataFrame of submissions
+         problem_name: Filter by problem name (required for meaningful results)
+         n: Number of top contestants to return
+         passing_only: Only include passing submissions
+
+     Returns:
+         DataFrame with top contestants and their best scores
+     """
+     result = df.copy()
+
+     if problem_name:
+         result = result[result['problem_name'] == problem_name]
+
+     if passing_only:
+         result = result[result['passed'] == True]
+
+     # Filter out rows with NA scores
+     result = result.dropna(subset=['score'])
+
+     if result.empty:
+         return pd.DataFrame(columns=['user_name', 'user_id', 'score', 'submission_time', 'problem_name'])
+
+     # Get the best score per user
+     best_scores = result.loc[result.groupby('user_id')['score'].idxmin()]
+
+     return best_scores.sort_values('score').head(n)[
+         ['user_name', 'user_id', 'score', 'submission_time', 'problem_name']
+     ]
+
+
+ def leaderboard_summary(df: pd.DataFrame, score_unit='us') -> pd.DataFrame:
+     """
+     Get summary statistics for each problem.
+
+     Args:
+         df: DataFrame of submissions
+         score_unit: 'us' for microseconds, 'ms' for milliseconds, 's' for seconds
+
+     Returns:
+         DataFrame with submission counts, unique users, score ranges
+     """
+     summary = df.groupby('problem_name').agg({
+         'submission_id': 'count',
+         'user_id': 'nunique',
+         'score': ['min', 'median', 'max'],
+         'passed': 'sum'
+     })
+
+     summary.columns = ['submissions', 'unique_users', 'best_score', 'median_score',
+                        'worst_score', 'passing_count']
+
+     # Convert scores to the specified unit
+     if score_unit == 'us':
+         multiplier = 1_000_000
+         summary['best_score'] = (summary['best_score'] * multiplier).round(2)
+         summary['median_score'] = (summary['median_score'] * multiplier).round(2)
+         summary['worst_score'] = (summary['worst_score'] * multiplier).round(2)
+     elif score_unit == 'ms':
+         multiplier = 1_000
+         summary['best_score'] = (summary['best_score'] * multiplier).round(3)
+         summary['median_score'] = (summary['median_score'] * multiplier).round(3)
+         summary['worst_score'] = (summary['worst_score'] * multiplier).round(3)
+
+     return summary
+
+
+ def user_stats(df: pd.DataFrame, user_id: str = None, user_name: str = None) -> pd.DataFrame:
+     """
+     Get statistics for a specific user across all problems.
+     """
+     result = df.copy()
+
+     if user_id:
+         result = result[result['user_id'] == user_id]
+     elif user_name:
+         result = result[result['user_name'].str.contains(user_name, case=False, na=False)]
+
+     return result.groupby('problem_name').agg({
+         'submission_id': 'count',
+         'score': 'min',
+         'passed': 'sum'
+     }).rename(columns={
+         'submission_id': 'num_submissions',
+         'score': 'best_score',
+         'passed': 'passing_count'
+     })
scripts/nvfp4/get_fastest_submission.py ADDED
@@ -0,0 +1,20 @@
+ #!/usr/bin/env python3
+ """Print gau.nernst's fastest submission code to stdout."""
+
+ import pandas as pd
+ from pathlib import Path
+
+ df = pd.read_parquet(Path(__file__).parent.parent.parent / 'nvidia_nvfp4_submissions.parquet')
+
+ # Get the fastest submission across all problems
+ best = df[df['user_name'] == 'gau.nernst'].sort_values('score').head(1)
+
+ problem = best['problem_name'].values[0]
+ score_us = best['score'].values[0] * 1_000_000
+
+ print("User: gau.nernst")
+ print(f"Problem: {problem}")
+ print(f"Score: {score_us:.2f} µs")
+ print(f"Submission ID: {best['submission_id'].values[0]}")
+ print("\n=== CODE ===\n")
+ print(best['code'].values[0])
scripts/nvfp4/query_submissions.py ADDED
@@ -0,0 +1,57 @@
+ #!/usr/bin/env python3
+ """
+ Query submissions by user/problem or by submission ID.
+
+ Usage:
+     python query_submissions.py                    # Show all submission IDs for gau.nernst on gemv
+     python query_submissions.py --id 187476        # Show code for a specific submission ID
+     python query_submissions.py --user gau.nernst --problem nvfp4_gemm
+ """
+
+ import argparse
+ import pandas as pd
+ from pathlib import Path
+
+ df = pd.read_parquet(Path(__file__).parent.parent.parent / 'nvidia_nvfp4_submissions.parquet')
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument('--id', type=int, help='Submission ID to query')
+ parser.add_argument('--user', default='gau.nernst', help='Username to filter')
+ parser.add_argument('--problem', default='nvfp4_gemv', help='Problem name to filter')
+ args = parser.parse_args()
+
+ if args.id:
+     # Query a specific submission
+     sub = df[df['submission_id'] == args.id]
+     if sub.empty:
+         print(f"Submission {args.id} not found")
+     else:
+         row = sub.iloc[0]
+         score_us = row['score'] * 1_000_000 if pd.notna(row['score']) else 'N/A'
+         print(f"ID: {row['submission_id']}")
+         print(f"User: {row['user_name']}")
+         print(f"Problem: {row['problem_name']}")
+         print(f"Score: {score_us:.2f} µs" if isinstance(score_us, float) else f"Score: {score_us}")
+         print("\n=== CODE ===\n")
+         print(row['code'])
+ else:
+     # List all submission IDs for the user/problem
+     subs = df[(df['user_name'] == args.user) & (df['problem_name'] == args.problem)]
+     subs = subs.sort_values('score')  # NaN scores sort last
+
+     ids = subs['submission_id'].tolist()
+     scores = [(row['submission_id'], row['score'] * 1_000_000 if pd.notna(row['score']) else None)
+               for _, row in subs.iterrows()]
+
+     print(f"User: {args.user} | Problem: {args.problem} | Count: {len(ids)}")
+     print("\nSubmission IDs (sorted by score, fastest first):")
+     print(ids)
+
+     # Report the fastest/slowest submissions with valid scores
+     valid_scores = [(sid, sc) for sid, sc in scores if sc is not None]
+     if valid_scores:
+         print(f"\nFastest: {valid_scores[0][0]} ({valid_scores[0][1]:.2f} µs)")
+         print(f"Slowest: {valid_scores[-1][0]} ({valid_scores[-1][1]:.2f} µs)")
+         print(f"\nQuery a specific submission: python query_submissions.py --id {valid_scores[0][0]}")
+     else:
+         print("\nNo submissions with scores found")