---
dataset_info:
  features:
  - name: code
    dtype: string
  - name: package
    dtype: string
  - name: path
    dtype: string
  - name: filename
    dtype: string
  - name: parsed_code
    dtype: string
  - name: quality_prob
    dtype: float64
  - name: learning_prob
    dtype: float64
  splits:
  - name: train
    num_bytes: 40005369487
    num_examples: 1902405
  download_size: 11174800633
  dataset_size: 40005369487
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Dataset Card for "pypi_labeled"
All of the latest package versions from PyPI. The original data came from here. I pulled the latest version of each package, then extracted only `md`, `rst`, `ipynb`, and `py` files.
I then applied some cleaning:
- rendering notebooks
- removing leading comments and licenses

I then filtered out low-quality code and labeled the rest with learning-value and quality scores (the `learning_prob` and `quality_prob` columns). Subset by those columns to select higher-quality code.
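The subsetting step can be sketched as a simple row filter on the two score columns. This is a minimal illustration using toy rows that mimic the schema above; the `high_quality` helper name and the 0.9 thresholds are hypothetical, not recommended values:

```python
# Sketch: keep only rows whose quality and learning-value scores
# exceed chosen thresholds. Thresholds here are illustrative.

def high_quality(rows, q_min=0.9, l_min=0.9):
    """Return rows with quality_prob >= q_min and learning_prob >= l_min."""
    return [
        r for r in rows
        if r["quality_prob"] >= q_min and r["learning_prob"] >= l_min
    ]

# Toy rows mimicking the dataset schema.
rows = [
    {"code": "def f():\n    return 1", "quality_prob": 0.95, "learning_prob": 0.92},
    {"code": "x=1", "quality_prob": 0.40, "learning_prob": 0.10},
]
print(len(high_quality(rows)))  # -> 1
```

With the `datasets` library loaded from the Hub, the equivalent would be a call like `ds.filter(lambda r: r["quality_prob"] >= 0.9 and r["learning_prob"] >= 0.9)` on the `train` split.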