---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
languages: ["code"]
licenses:
- other-multiple
multilinguality:
- multilingual
pretty_name: code-clippy-github-code
size_categories:
- unknown
source_datasets: []
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---
# Code Clippy GitHub Dataset
## Dataset Description
The Code Clippy dataset consists of various public codebases from GitHub, covering 22 programming languages with 23 extensions and totalling about 16 TB of data when uncompressed. The dataset was created from the public GitHub dataset on Google BigQuery.
### How to use it
This dataset is quite large, so please use the `streaming` parameter of the `datasets` library, as shown below:
```python
from datasets import load_dataset

ds = load_dataset("CodedotAI/code_clippy_github", streaming=True)
```
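Because the dataset is streamed, examples should be consumed lazily rather than materialized all at once. The sketch below uses stand-in records in place of the real stream; the field names (`repo_name`, `code_text`, `license`) are assumptions for illustration only — inspect an actual record for the real schema.

```python
from itertools import islice

# Stand-in for the streamed split: in practice this would be
# load_dataset("CodedotAI/code_clippy_github", streaming=True)["train"].
# Field names here are hypothetical, not confirmed by the dataset card.
def fake_stream():
    examples = [
        {"repo_name": "octo/app", "code_text": "print('hi')", "license": "mit"},
        {"repo_name": "octo/lib", "code_text": "int main() {}", "license": "gpl-3.0"},
        {"repo_name": "octo/web", "code_text": "<html></html>", "license": "mit"},
    ]
    yield from examples

# Take only the first N examples without materializing the whole stream.
first_two = list(islice(fake_stream(), 2))
print([ex["repo_name"] for ex in first_two])  # ['octo/app', 'octo/lib']
```

The same `islice` pattern works unchanged on a real streaming `IterableDataset`, since it only relies on iteration.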
## Data Structure

### Data Instances

WIP

### Data Fields

WIP

### Data Splits

WIP
## Languages
The dataset contains 22 programming languages with 23 extensions:
```python
{
    "C": [".c"],
    "C#": [".cs"],
    "C++": [".cpp"],
    "CSS": [".css"],
    "Dart": [".dart"],
    "GO": [".go"],
    "HTML": [".html"],
    "Java": [".java"],
    "JavaScript": [".js"],
    "Jupyter Notebooks (Python)": [".ipynb"],
    "Kotlin": [".kt"],
    "Lisp": [".lisp"],
    "Matlab": [".m"],
    "PHP": [".php"],
    "Perl": [".pl"],
    "Python": [".py"],
    "R": [".r"],
    "Ruby": [".rb"],
    "Rust": [".rs"],
    "SQL": [".sql"],
    "Shell": [".sh"],
    "Swift": [".swift"],
    "TypeScript": [".ts"],
}
```
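Since examples are identified by file path rather than by language, the mapping above can be inverted to resolve a path to its language. A minimal sketch (only a few entries of the mapping are repeated here, and the helper name is our own):

```python
import os

# Abbreviated copy of the language-to-extension mapping shown above.
LANG_EXTENSIONS = {
    "Python": [".py"],
    "C++": [".cpp"],
    "Jupyter Notebooks (Python)": [".ipynb"],
    "Shell": [".sh"],
}

# Invert it so a file path can be resolved to a language.
EXT_TO_LANG = {ext: lang for lang, exts in LANG_EXTENSIONS.items() for ext in exts}

def language_of(path):
    """Return the language for a file path, or None if its extension is unknown."""
    return EXT_TO_LANG.get(os.path.splitext(path)[1].lower())

print(language_of("src/train.py"))        # Python
print(language_of("notebooks/eda.ipynb")) # Jupyter Notebooks (Python)
```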
## Licenses
Each example is also annotated with the license of the associated repository. There are in total 15 licenses:
```python
[
    'mit',
    'apache-2.0',
    'gpl-2.0',
    'gpl-3.0',
    'bsd-3-clause',
    'bsd-2-clause',
    'unlicense',
    'agpl-3.0',
    'lgpl-3.0',
    'cc0-1.0',
    'epl-1.0',
    'lgpl-2.1',
    'mpl-2.0',
    'isc',
    'artistic-2.0'
]
```
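Because every example carries its repository license, users can filter the stream to licenses they are comfortable with. A minimal sketch over stand-in records; note that the permissive/copyleft grouping is our own classification, not something the dataset card defines:

```python
from collections import Counter

# Hypothetical streamed records; only the "license" field (which the card
# confirms exists on each example) matters for this sketch.
records = [
    {"license": "mit"},
    {"license": "gpl-3.0"},
    {"license": "mit"},
    {"license": "apache-2.0"},
]

# Licenses commonly treated as permissive -- our own grouping, for illustration.
PERMISSIVE = {"mit", "apache-2.0", "bsd-3-clause", "bsd-2-clause", "isc", "unlicense"}

permissive_only = [r for r in records if r["license"] in PERMISSIVE]
counts = Counter(r["license"] for r in records)
print(len(permissive_only), counts["mit"])  # 3 2
```

On a real streaming dataset the same predicate can be passed to `IterableDataset.filter` so the filtering happens lazily.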
## Dataset Statistics
The dataset is about 18 TB uncompressed. We are currently working on processing it and applying further filtering.
## Dataset Creation
The dataset was created in two steps:
1. Files with the extensions listed above were retrieved from the GitHub dataset on BigQuery using the following query:
```sql
SELECT
  f.id, f.repo_name, f.path, content.copies, content.size, content.content, lic.license
FROM
  `bigquery-public-data.github_repos.files` AS f
JOIN
  `bigquery-public-data.github_repos.contents` AS content
ON
  f.id = content.id
JOIN
  `bigquery-public-data.github_repos.licenses` AS lic
ON
  f.repo_name = lic.repo_name
WHERE
  NOT content.binary
  AND (
    (f.path LIKE '%.py') OR (f.path LIKE '%.java') OR (f.path LIKE '%.js')
    OR (f.path LIKE '%.html') OR (f.path LIKE '%.lisp') OR (f.path LIKE '%.sh')
    OR (f.path LIKE '%.r') OR (f.path LIKE '%.pl') OR (f.path LIKE '%.css')
    OR (f.path LIKE '%.sql') OR (f.path LIKE '%.c') OR (f.path LIKE '%.cpp')
    OR (f.path LIKE '%.ts') OR (f.path LIKE '%.cs') OR (f.path LIKE '%.go')
    OR (f.path LIKE '%.rs') OR (f.path LIKE '%.swift') OR (f.path LIKE '%.php')
    OR (f.path LIKE '%.dart') OR (f.path LIKE '%.kt') OR (f.path LIKE '%.m')
    OR (f.path LIKE '%.rb') OR (f.path LIKE '%.ipynb')
  )
  -- keep files between 1 KB and 1 MB
  AND (content.size BETWEEN 1024 AND 1000000)
```
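The long OR-chain over file paths need not be written by hand; it can be generated from the extension list. This is a convenience sketch, not how the query was actually produced:

```python
# Rebuild the path-matching clause from a few extensions; the full dataset
# used all 23 extensions listed earlier in the card.
extensions = [".py", ".java", ".js", ".rb"]

like_clause = "\n  OR ".join(f"(f.path LIKE '%{ext}')" for ext in extensions)
where_clause = f"AND (\n  {like_clause}\n)"
print(where_clause)
```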

### Personal and Sensitive Information

Since this data was collected from public repositories, it may contain personal and sensitive information that developers uploaded, accidentally or deliberately: secret keys, passwords, API keys, email addresses, etc.

## Considerations for Using the Data

### Social Impact of Dataset

The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI includes a good discussion of the potential impact of large language models trained on code. The parts of that discussion relevant to this dataset, and to models that may be trained on it, are highlighted below, **along with some points where our views differ from the paper's, particularly around legal implications**.

1. **Over-reliance:** A language model trained on a large dataset such as this one for the task of autogenerating code may produce plausible-looking solutions that appear correct but are not. Failing to properly evaluate the generated code can have negative consequences such as introducing bugs or security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using a language model trained on this dataset.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one, capable of generating high-quality code, have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper and shown in the Summary Report for software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers do much more than just write software.
3. **Security implications:** No filtering or checking for vulnerable or buggy code was performed, so the dataset may contain code that is malicious or contains vulnerabilities. Any model trained on it may therefore generate vulnerable, buggy, or malicious code. In safety-critical software, this could lead to improper behavior with serious consequences depending on the application. Additionally, such a model could be used deliberately to generate malicious code for ransomware or similar attacks.
4. **Legal implications:** No filtering by license was performed, so the dataset may contain restrictively licensed code. As discussed in the paper, use of public GitHub repositories may fall under "fair use"; however, there have been few, if any, prior cases involving such use of licensed, publicly available code. Any model trained on this dataset may therefore need to obey license terms aligned with the software it was trained on, such as GPL-3.0, which is why we purposely placed this dataset under the GPL-3.0 license. The legal ramifications of using a language model trained on this dataset remain unclear.

### v1.0
- The query was executed on _February 1, 2022, 12:15:59 AM EST_

## Acknowledgements