---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
languages: ["code"]
license:
- mit
multilinguality:
- multilingual
pretty_name: code-clippy-github-code
size_categories:
- unknown
source_datasets: []
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---
# Code Clippy Github Dataset
## Dataset Description
The Code Clippy dataset consists of source files from various public GitHub codebases, covering 22 programming languages across 23 file extensions and totaling about 16 TB of data uncompressed. The dataset was created from the public GitHub dataset on Google BigQuery.
### How to use it
This dataset is quite large, so please use the `streaming` parameter of the `datasets` library, as shown below:
```python
from datasets import load_dataset

ds = load_dataset("CodedotAI/code_clippy_github", streaming=True)

```
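Once loaded in streaming mode, the dataset is consumed lazily rather than downloaded up front. The sketch below shows how the first few examples might be taken with `itertools.islice`; it uses stubbed records with the documented schema in place of the live stream, so it runs offline (with the real dataset, the iterable would be `ds["train"]`):

```python
from itertools import islice

# Stub records standing in for the streamed dataset; the real stream
# (ds["train"]) yields dicts with the fields documented below.
def fake_stream():
    yield {"code_text": "print('hi')", "repo_name": "demo/repo",
           "file_path": "src/hi.py", "language": "Python",
           "license": "mit", "size": 11}
    yield {"code_text": "SELECT 1;", "repo_name": "demo/sql",
           "file_path": "q.sql", "language": "SQL",
           "license": "apache-2.0", "size": 9}

# With the real dataset this would be: islice(ds["train"], 2)
first_two = list(islice(fake_stream(), 2))
languages = [ex["language"] for ex in first_two]
print(languages)  # ['Python', 'SQL']
```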
## Data Structure

### Data Instances

```python
{
 'code_text': " a = mc^2",
 'repo_name': 'NotEinstein',
 'file_path': 'root/users/einstein.py',
 'language': 'Python',
 'license': 'isc',
 'size': 2
} 
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|code_text|string|contents of the source code file|
|repo_name|string|name of the GitHub repository|
|file_path|string|path of the file within the repository|
|language|string|programming language of the file, inferred from its extension|
|license|string|license of the GitHub repository|
|size|int|size of the source file in bytes|
### Data Splits
Only a train split is provided in this dataset.
## Languages
The dataset contains 22 programming languages spanning 23 file extensions:
```python
{
    "C": [".c"],
    "C#": [".cs"],
    "C++": [".cpp"],
    "CSS": [".css"],
    "Dart" : [".dart"],
    "GO": [".go"],
    "HTML":[".html"],
    "Java": [".java"],
    "JavaScript": [".js"],
    "Jupyter Notebooks (Python)": [".ipynb"],
    "Kotlin" : [".kt"],
    "Lisp" : [".lisp"],
    "Matlab" : [".m"],
    "PHP": [".php"],
    "Perl": [".pl"],
    "Python": [".py"],
    "R" : [".r"],
    "Ruby": [".rb"],
    "Rust": [".rs"],
    "SQL": [".sql"],
    "Shell": [".sh"],
    "Swift" : [".swift"],
    "TypeScript": [".ts"],
}
```
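Since the `language` field is inferred from the file extension, inverting this mapping gives a quick extension-to-language lookup. A minimal sketch (the `LANG_EXTENSIONS` dict below is an abbreviated, illustrative subset of the mapping above):

```python
from pathlib import Path

# Abbreviated, illustrative subset of the extension mapping above.
LANG_EXTENSIONS = {
    "Python": [".py"],
    "Jupyter Notebooks (Python)": [".ipynb"],
    "Rust": [".rs"],
    "TypeScript": [".ts"],
}

# Invert to extension -> language for O(1) lookup.
EXT_TO_LANG = {ext: lang for lang, exts in LANG_EXTENSIONS.items() for ext in exts}

def infer_language(file_path):
    """Infer the language of a file from its extension, as the dataset does."""
    return EXT_TO_LANG.get(Path(file_path).suffix.lower())

print(infer_language("root/users/einstein.py"))  # Python
```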
## Licenses
Each example is also annotated with the license of the associated repository. There are in total 15 licenses:
```python
[
    'mit',
    'apache-2.0',
    'gpl-2.0',
    'gpl-3.0',
    'bsd-3-clause',
    'bsd-2-clause',
    'unlicense',
    'agpl-3.0',
    'lgpl-3.0',
    'cc0-1.0',
    'epl-1.0',
    'lgpl-2.1',
    'mpl-2.0',
    'isc',
    'artistic-2.0'
 ]
```
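Because each example carries its repository's license, downstream users can filter the stream to licenses compatible with their use case. A minimal sketch, assuming a hypothetical allow-list of permissive licenses (the records below are stand-ins for streamed examples):

```python
# Hypothetical allow-list of permissive licenses; adjust to your own requirements.
PERMISSIVE = {"mit", "apache-2.0", "bsd-2-clause", "bsd-3-clause", "isc", "unlicense"}

# Stand-in records carrying the dataset's `license` field.
examples = [
    {"repo_name": "a", "license": "mit"},
    {"repo_name": "b", "license": "gpl-3.0"},
    {"repo_name": "c", "license": "isc"},
]

permissive_only = [ex for ex in examples if ex["license"] in PERMISSIVE]
print([ex["repo_name"] for ex in permissive_only])  # ['a', 'c']
```

With the real streamed dataset, the same predicate can be applied lazily via `ds["train"].filter(lambda ex: ex["license"] in PERMISSIVE)`.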
## Dataset Statistics
The dataset is roughly 18 TB uncompressed. We are currently processing it and applying further filtering.
## Dataset Creation
The dataset was created in two steps:
1. Files with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery using the following query:

```sql
SELECT
  f.id, f.repo_name, f.path, content.copies, content.size, content.content, lic.license
FROM
  `bigquery-public-data.github_repos.files` AS f
JOIN
  `bigquery-public-data.github_repos.contents` AS content
ON
  f.id = content.id
JOIN
  `bigquery-public-data.github_repos.licenses` AS lic
ON
  f.repo_name = lic.repo_name
WHERE
  NOT content.binary
  AND (
    (f.path LIKE '%.py') OR (f.path LIKE '%.java') OR (f.path LIKE '%.js')
    OR (f.path LIKE '%.html') OR (f.path LIKE '%.lisp') OR (f.path LIKE '%.sh')
    OR (f.path LIKE '%.r') OR (f.path LIKE '%.pl') OR (f.path LIKE '%.css')
    OR (f.path LIKE '%.sql') OR (f.path LIKE '%.c') OR (f.path LIKE '%.cpp')
    OR (f.path LIKE '%.ts') OR (f.path LIKE '%.cs') OR (f.path LIKE '%.go')
    OR (f.path LIKE '%.rs') OR (f.path LIKE '%.swift') OR (f.path LIKE '%.php')
    OR (f.path LIKE '%.dart') OR (f.path LIKE '%.kt') OR (f.path LIKE '%.m')
    OR (f.path LIKE '%.rb') OR (f.path LIKE '%.ipynb')
  )
  -- keep files between 1 KB and 1 MB
  AND (content.size BETWEEN 1024 AND 1000000)
```
2. Our CodedotAI team is currently adding further filters and cleaning this dataset.
### Personal and Sensitive Information

Since this data was collected from public repositories, personal and sensitive information may be present: developers sometimes upload secret keys, passwords, API keys, email addresses, and the like, whether accidentally or intentionally.
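Downstream users may therefore want to screen examples for obvious secrets before training. The sketch below uses a few illustrative regex patterns; these are hypothetical and far from exhaustive, and real secret scanning should use a dedicated tool:

```python
import re

# Illustrative patterns only -- use a dedicated secret-scanning tool
# for anything serious.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),  # hard-coded password
]

def looks_sensitive(code_text):
    """Return True if any illustrative secret pattern matches the code text."""
    return any(p.search(code_text) for p in SECRET_PATTERNS)

print(looks_sensitive("password = 'hunter2'"))  # True
print(looks_sensitive("x = 1 + 1"))             # False
```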

## Considerations for Using the Data

### Social Impact of Dataset

The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI contains a good discussion of the potential impact of a large language model trained on code. We highlight the parts of that discussion relevant to this dataset and to models trained on it, along with some points where our views differ from the paper's, particularly around legal implications.

1. **Over-reliance:** A language model trained on a large dataset such as this one for the task of autogenerating code may produce plausible-looking solutions that are not actually correct. Failing to properly evaluate the generated code can have negative consequences, such as the introduction of bugs or security vulnerabilities. It is therefore important that users are aware of the limitations and potential negative consequences of using a language model trained on this dataset.
2. **Economic and labor market impacts:** Large language models trained on code datasets such as this one that can generate high-quality code have the potential to automate part of the software development process, which may negatively impact software developers. However, as the paper discusses, and as the Summary Report for software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00) shows, developers do far more than just write software.
3. **Security implications:** No filtering or checking of vulnerabilities or buggy code was performed. This means that the dataset may contain code that may be malicious or contain vulnerabilities. Therefore, any model trained on this dataset may generate vulnerable, buggy, or malicious code. In safety-critical software, this could lead to software that may work improperly and could result in serious consequences depending on the software. Additionally, a model trained on this dataset may be used to generate malicious code on purpose in order to perform ransomware or other such attacks.
4. **Legal implications:** No filtering was performed on the basis of license. This means that the dataset may contain restrictively licensed code. As discussed in the paper, training on public GitHub repositories may fall under "fair use"; however, there have been few, if any, legal cases concerning such use of publicly available licensed code. Any model trained on this dataset may therefore be required to obey license terms aligned with the software it was trained on, such as GPL-3.0, which is why we purposefully placed this dataset under the GPL-3.0 license. The legal ramifications of using a language model trained on this dataset remain unclear.


## Versioning

### v1.0
- The query was executed on _February 1, 2022, 12:15:59 AM EST_

## Acknowledgements
This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/about/). We would also like to thank [Dr. Razvan Bunescu](https://webpages.charlotte.edu/rbunescu/) and [The College of Computing and Informatics at UNC Charlotte](https://cci.charlotte.edu/) for their generous contributions to this project, specifically in funding the BigQuery and Google Cloud Storage costs.