Original Dataset + Tokenized Data + (Buggy + Fixed Embedding Pairs) + Difference Embeddings
Overview
This repository contains four related datasets built around buggy-fixed code pairs from RunBugRun:
Datasets Included
1. Original Dataset (train-00000-of-00001.parquet)
- Description: Legacy RunBugRun Dataset
- Format: Parquet file with buggy-fixed code pairs, bug labels, and language
- Size: 456,749 samples
- Load with: dataset = load_dataset("NicholasOgenstad/my-runbugrun-dataset-filtered", split="train")
2. Difference Embeddings (diff_embeddings_chunk_XXXX.pkl)
- Description: ModernBERT-large embeddings of the buggy-fixed pairs; each entry is the fixed embedding minus the buggy embedding, giving a 1024-dimensional difference vector.
- Format: Pickle file with XXXX arrays
- Dimensions: 456,749 × 1024 in total, split across the chunk files; most chunks hold 20,000 rows and the last one is shorter.
- Load with: see the usage example below
3. Tokens (token_embeddings.pkl)
- Description: The original dataset tokenized, as pairs of buggy and fixed code.
- Format: Pickle file XXXX
- Load with: see the usage example below
4. Buggy + Fixed Embeddings (tokenized_data.json)
- Description: Per-sample embeddings of the buggy and fixed code, stored as pairs (the inputs from which the difference embeddings are computed)
- Format: Pickle file XXXX
- Load with: see the usage example below
Usage Examples
Load the original dataset:
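This uses the repository ID and split given in the listing above:

```python
from datasets import load_dataset

# Repo ID and split taken from the dataset listing above
dataset = load_dataset("NicholasOgenstad/my-runbugrun-dataset-filtered", split="train")
print(dataset[0])  # one buggy-fixed pair with its bug label and language
```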
Load the tokens:
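A minimal sketch, assuming the file name from the listing above (token_embeddings.pkl) and that the file is fetched with huggingface_hub; the internal structure of the pickled object is not documented here, so inspect it after loading:

```python
import pickle
from huggingface_hub import hf_hub_download

# File name taken from the listing above; adjust if the repo uses a different name
path = hf_hub_download(
    repo_id="NicholasOgenstad/my-runbugrun-dataset-filtered",
    filename="token_embeddings.pkl",
    repo_type="dataset",
)
with open(path, "rb") as f:
    tokens = pickle.load(f)
print(type(tokens))  # expected: pairs of buggy and fixed token sequences
```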
Load the difference embeddings:
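A minimal sketch, assuming the chunks are numbered consecutively as diff_embeddings_chunk_0000.pkl, diff_embeddings_chunk_0001.pkl, and so on; 456,749 rows at 20,000 per chunk implies roughly 23 files, but check the repository file list for the exact count and naming:

```python
import pickle
import numpy as np
from huggingface_hub import hf_hub_download

repo_id = "NicholasOgenstad/my-runbugrun-dataset-filtered"
num_chunks = 23  # assumption: 456,749 rows / 20,000 per chunk; verify against the repo file list

chunks = []
for i in range(num_chunks):
    path = hf_hub_download(
        repo_id=repo_id,
        filename=f"diff_embeddings_chunk_{i:04d}.pkl",  # assumed zero-padded chunk numbering
        repo_type="dataset",
    )
    with open(path, "rb") as f:
        chunks.append(np.asarray(pickle.load(f)))

diff_embeddings = np.concatenate(chunks, axis=0)
print(diff_embeddings.shape)  # expected (456749, 1024)
# Per the definition above: fixed embedding ≈ buggy embedding + difference embedding
```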
Load the buggy/fixed embedding pairs:
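A minimal sketch, assuming tokenized_data.json (the file name from the listing above) is plain JSON; if it is actually pickled as the Format line suggests, open the file in binary mode and use pickle.load instead:

```python
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="NicholasOgenstad/my-runbugrun-dataset-filtered",
    filename="tokenized_data.json",  # file name taken from the listing above
    repo_type="dataset",
)
with open(path, "r") as f:
    pairs = json.load(f)
print(len(pairs))  # expected: one entry per sample in the original dataset
```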