---
dataset_info:
  features:
  - name: repo
    dtype: string
  - name: file
    dtype: string
  - name: code
    dtype: string
  - name: file_length
    dtype: int64
  - name: avg_line_length
    dtype: float64
  - name: max_line_length
    dtype: int64
  - name: extension_type
    dtype: string
  splits:
  - name: train
    num_bytes: 12984199778
    num_examples: 1415924
  download_size: 4073853616
  dataset_size: 12984199778
license: bigcode-openrail-m
task_categories:
- text-generation
language:
- en
pretty_name: arxiv_python_research_code
size_categories:
- 1B<n<10B
---
# Dataset Card for "ArtifactAI/arxiv_python_research_code"

## Dataset Description

https://huggingface.co/datasets/ArtifactAI/arxiv_python_research_code


### Dataset Summary

ArtifactAI/arxiv_python_research_code contains over 4.13 GB of Python source code files drawn exclusively from repositories referenced in ArXiv papers. It serves as a curated training corpus for code LLMs.

### How to use it
```python
from datasets import load_dataset

# full dataset (4.13GB of data)
ds = load_dataset("ArtifactAI/arxiv_python_research_code", split="train")

# dataset streaming (will only download the data as needed)
ds = load_dataset("ArtifactAI/arxiv_python_research_code", streaming=True, split="train")
for sample in ds:
    print(sample["code"])
```
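When skimming the data, the streaming dataset also supports `shuffle` and `take`, so a few examples can be inspected without downloading everything. A minimal sketch using the standard `datasets` streaming API:

```python
from datasets import load_dataset

# stream the dataset and inspect a few shuffled samples without a full download
ds = load_dataset("ArtifactAI/arxiv_python_research_code", streaming=True, split="train")
for sample in ds.shuffle(seed=42, buffer_size=1000).take(3):
    print(sample["repo"], sample["file"])
```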

## Dataset Structure
### Data Instances
Each data instance corresponds to one file. The content of the file is in the `code` feature, and other features (`repo`, `file`, etc.) provide some metadata.
### Data Fields
- `repo` (string): code repository name.
- `file` (string): file path in the repository.
- `code` (string): code within the file.
- `file_length` (integer): number of characters in the file.
- `avg_line_length` (float): average line length of the file.
- `max_line_length` (integer): maximum line length of the file.
- `extension_type` (string): file extension.
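
The numeric metadata fields make it easy to pre-filter the corpus, for example to drop files with unusually long lines before training. A minimal sketch; the thresholds below are illustrative, not part of the dataset:

```python
from datasets import load_dataset

ds = load_dataset("ArtifactAI/arxiv_python_research_code", split="train")

# keep files with reasonable line lengths (thresholds are arbitrary examples)
filtered = ds.filter(
    lambda x: x["avg_line_length"] <= 100 and x["max_line_length"] <= 1000
)
print(len(filtered))
```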

### Data Splits

The dataset has no predefined splits; all data is loaded as a single `train` split by default.
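
If a held-out set is needed, one can be carved out locally with the standard `datasets` API. The 1% fraction below is an arbitrary choice, not something the dataset prescribes:

```python
from datasets import load_dataset

ds = load_dataset("ArtifactAI/arxiv_python_research_code", split="train")

# carve out a small held-out set for evaluation
splits = ds.train_test_split(test_size=0.01, seed=42)
train_ds, val_ds = splits["train"], splits["test"]
```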

## Dataset Creation

### Source Data
#### Initial Data Collection and Normalization
34,099 active GitHub repository names were extracted from [ArXiv](https://arxiv.org/) papers, covering the period from ArXiv's inception through July 21st, 2023, and totaling 773 GB of compressed GitHub repositories.

These repositories were then filtered, and the code from every file with a `.py` extension was extracted, yielding 1.4 million files.
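
The exact extraction pipeline is not published; the sketch below is a hypothetical reconstruction of how the per-file metadata fields could be computed, with all names illustrative:

```python
# Hypothetical reconstruction of the per-file metadata computation; the
# actual pipeline is not published, so this function is illustrative only.
def file_metadata(path: str, repo: str) -> dict:
    with open(path, encoding="utf-8", errors="ignore") as f:
        code = f.read()
    lines = code.splitlines() or [""]  # guard against empty files
    return {
        "repo": repo,
        "file": path,
        "code": code,
        "file_length": len(code),  # characters in the file
        "avg_line_length": sum(len(l) for l in lines) / len(lines),
        "max_line_length": max(len(l) for l in lines),
        "extension_type": path.rsplit(".", 1)[-1],  # "py" for this dataset
    }
```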

#### Who are the source language producers?

The source (code) language producers are GitHub users who created the underlying repositories.

### Personal and Sensitive Information
The released dataset may contain sensitive information such as emails, IP addresses, and API or SSH keys that were previously published in public repositories on GitHub.
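
Consumers who need to screen for such content could run a scan over the `code` field before use. The patterns below are only illustrative; real secret scanning should use dedicated tooling such as detect-secrets or gitleaks:

```python
import re

# Illustrative patterns only; a handful of regexes is not a substitute
# for a dedicated secret scanner.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PRIVATE_KEY = re.compile(r"-----BEGIN (?:RSA |OPENSSH )?PRIVATE KEY-----")

def flag_sensitive(code: str) -> bool:
    # returns True if the file appears to contain an email or a private key
    return bool(EMAIL.search(code) or PRIVATE_KEY.search(code))
```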

## Additional Information

### Dataset Curators
Matthew Kenney, Artifact AI, matt@artifactai.com

### Citation Information
```
@misc{arxiv_python_research_code,
    title={arxiv_python_research_code},
    author={Matthew Kenney},
    year={2023}
}
```