ArtifactAI committed
Commit 9f7f431
1 Parent(s): 503f5d6

Update README.md

Files changed (1)
  1. README.md +66 -2
README.md CHANGED
@@ -27,6 +27,70 @@ dataset_info:
  download_size: 1490724325
  dataset_size: 3590067176.125193
  ---
- # Dataset Card for "arxiv_dl_research_code"
 
- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ # Dataset Card for "ArtifactAI/arxiv_deep_learning_python_research_code"
 
+ ## Dataset Description
+
+ https://huggingface.co/datasets/ArtifactAI/arxiv_deep_learning_python_research_code
+
+ ### Dataset Summary
+
+ ArtifactAI/arxiv_deep_learning_python_research_code contains over 1.49GB of source code files from repositories referenced in ArXiv papers. It serves as a curated dataset for code LLMs.
+
+ ### How to use it
+ ```python
+ from datasets import load_dataset
+
+ # full dataset (1.49GB of data)
+ ds = load_dataset("ArtifactAI/arxiv_deep_learning_python_research_code", split="train")
+
+ # dataset streaming (will only download the data as needed)
+ ds = load_dataset("ArtifactAI/arxiv_deep_learning_python_research_code", streaming=True, split="train")
+ for sample in iter(ds):
+     print(sample["code"])
+ ```
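+
+ For a quick look at a few records without iterating the whole stream, the streaming dataset can be capped with `take` (a minimal sketch; assumes a `datasets` version where `IterableDataset.take` is available):
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("ArtifactAI/arxiv_deep_learning_python_research_code", streaming=True, split="train")
+ # take() limits the stream to the first n samples
+ for sample in ds.take(3):
+     print(sample["repo"], sample["file"])
+ ```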
+
+ ## Dataset Structure
+ ### Data Instances
+ Each data instance corresponds to one file. The content of the file is in the `code` feature, and other features (`repo`, `file`, etc.) provide some metadata.
+ ### Data Fields
+ - `repo` (string): code repository name.
+ - `file` (string): file path in the repository.
+ - `code` (string): code within the file.
+ - `file_length` (integer): number of characters in the file.
+ - `avg_line_length` (float): the average line length of the file.
+ - `max_line_length` (integer): the maximum line length of the file.
+ - `extension_type` (string): file extension.
+
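+ As an illustration of these fields, the metadata can drive filtering in streaming mode (a sketch; `IterableDataset.filter` is assumed to be available in your `datasets` version, and the exact `extension_type` value format is an assumption):
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("ArtifactAI/arxiv_deep_learning_python_research_code", streaming=True, split="train")
+ # keep only Python files with reasonably short lines
+ # ("py" vs ".py" for extension_type is an assumption here)
+ python_files = ds.filter(lambda s: s["extension_type"] == "py" and s["max_line_length"] < 120)
+ for sample in python_files.take(5):
+     print(sample["repo"], sample["file"], sample["file_length"])
+ ```
+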
+ ### Data Splits
+
+ The dataset has no splits; all data is loaded as the train split by default.
+
+ ## Dataset Creation
+
+ ### Source Data
+ #### Initial Data Collection and Normalization
+ 34,099 active GitHub repository names were extracted from [ArXiv](https://arxiv.org/) papers published from the archive's inception through July 21st, 2023; the corresponding repositories total 773GB compressed.
+
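+ The extraction step is not spelled out in this card. A minimal sketch of one plausible approach (illustrative only; the regex and function name are assumptions, not the actual pipeline):
+ ```python
+ import re
+
+ # hypothetical reconstruction: pull "user/repo" slugs out of a paper's text
+ GITHUB_RE = re.compile(r"github\.com/([\w.-]+/[\w.-]+)")
+
+ def extract_repo_names(paper_text):
+     """Return the set of GitHub repository slugs mentioned in a paper."""
+     return {slug.rstrip(".,)") for slug in GITHUB_RE.findall(paper_text)}
+ ```
+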
+ These repositories were then filtered, and the code from each file that mentions ["torch", "jax", "flax", "stax", "haiku", "keras", "fastai", "xgboost", "caffe", "mxnet"] was extracted into 1.4 million files.
+
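+ A minimal sketch of this kind of keyword filter (illustrative only; the keyword list comes from the paragraph above, everything else is an assumption):
+ ```python
+ DL_KEYWORDS = ("torch", "jax", "flax", "stax", "haiku",
+                "keras", "fastai", "xgboost", "caffe", "mxnet")
+
+ def mentions_dl_framework(code):
+     # naive substring check over the file contents;
+     # the real pipeline may parse imports instead
+     text = code.lower()
+     return any(kw in text for kw in DL_KEYWORDS)
+ ```
+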
+ #### Who are the source language producers?
+
+ The source (code) language producers are the GitHub users who created each unique repository.
+
+ ### Personal and Sensitive Information
+ The released dataset may contain sensitive information such as emails, IP addresses, and API/SSH keys that have previously been published to public repositories on GitHub.
+
+ ## Additional Information
+
+ ### Dataset Curators
+ Matthew Kenney, Artifact AI, matt@artifactai.com
+
+ ### Citation Information
+ ```
+ @misc{arxiv_deep_learning_python_research_code,
+     title={arxiv_deep_learning_python_research_code},
+     author={Matthew Kenney},
+     year={2023}
+ }
+ ```