asaha-cdcp committed
Commit 34a8fd2
1 Parent(s): dfb072b

Adding more languages.

README.md ADDED
@@ -0,0 +1,73 @@
This dataset is a 10% repo-sampled dataset for the selected languages: with a repo sample rate of 10%, we take 10% of all repos for a given language but include every file inside each sampled repo.

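As a rough illustration of repo-level sampling (not the actual datagen implementation), the selection logic amounts to choosing a fraction of repos and keeping every file in each chosen repo. The `repo_name` column below is a hypothetical field name standing in for whatever repo identifier the source dataset uses.

```python
import random

def sample_by_repo(rows, sample_rate=0.10, seed=0):
    """Repo-level sampling: keep `sample_rate` of repos, but every file in a kept repo.

    `rows` is an iterable of dicts with a `repo_name` field (hypothetical column
    name; the real datagen tool may use a different identifier).
    """
    rows = list(rows)
    repos = sorted({r["repo_name"] for r in rows})
    rng = random.Random(seed)
    kept = set(rng.sample(repos, max(1, int(len(repos) * sample_rate))))
    return [r for r in rows if r["repo_name"] in kept]

# Toy usage: 10% of repos, all files per sampled repo.
files = [{"repo_name": f"org/repo{i}", "path": f"f{j}.go"} for i in range(20) for j in range(3)]
sampled = sample_by_repo(files, sample_rate=0.10)
print(len({r["repo_name"] for r in sampled}), "repos,", len(sampled), "files")
```

Sampling by repo rather than by file keeps whole projects together, so cross-file context within a sampled repo stays intact.
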
This was generated using our codecomplete/training/completions/datagen tool:

```bash
./launch.sh \
  --dataset-name bigcode/starcoderdata \
  --subset c,cpp,go,java,javascript,typescript,python,ruby,scala,sql \
  --sample-rate 0.01 \
  --hf-token <HF_TOKEN> \
  --output-dir /home/${USER}/data \
  --cache-dir /home/${USER}/hfcache \
  --output-name c-cpp-go-java-javascript-typescript-python-ruby-scala-sql-0.01 \
  --shuffle \
  --build
```

**Create the repository**

```bash
# Install git lfs to support large files
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash

sudo apt-get install git-lfs
```

```bash
# create the dataset repo
huggingface-cli repo create <your_dataset_name> --type dataset --organization codecomplete
```

e.g.
```bash
huggingface-cli repo create base_dataset --type dataset --organization codecomplete
```

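The same repo can also be created from Python with huggingface_hub; a minimal sketch, assuming a write token is already configured (for example via `huggingface-cli login` or the HF_TOKEN environment variable):

```python
from huggingface_hub import create_repo

# Equivalent to the CLI call above; creates an empty dataset repo under the
# codecomplete organization (no-op if it already exists, thanks to exist_ok).
create_repo("codecomplete/base_dataset", repo_type="dataset", exist_ok=True)
```
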
**Clone the repository**

```bash
git lfs install

git clone https://huggingface.co/datasets/<your_organization_name>/<your_dataset_name>

# e.g.
git clone https://huggingface.co/datasets/codecomplete/base_dataset
```

**Prepare your files**

Create a descriptive README.md and check the dataset.json file.

```bash
cp /somewhere/base_dataset/*.json .
# Quote the pattern so git-lfs records it instead of the shell expanding it
git lfs track "*.json"
git add .gitattributes
git add *.json

git add --all
```

**Upload your files**

```bash
git status
git commit -m "First version of the <your_dataset_name> dataset."
git push
```

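As an alternative to pushing through git, individual shards can be uploaded with the huggingface_hub API, which routes large files through LFS on the Hub side. A sketch, reusing the placeholder `/somewhere/base_dataset` path from above:

```python
from huggingface_hub import HfApi

api = HfApi()
# Upload a single language shard into the dataset repo.
api.upload_file(
    path_or_fileobj="/somewhere/base_dataset/go/data.json",
    path_in_repo="go/data.json",
    repo_id="codecomplete/base_dataset",
    repo_type="dataset",
)
```
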
**Verify dataset**

```python
from datasets import load_dataset

# Loads every split and prints the row counts per split.
dataset = load_dataset("codecomplete/<your_dataset_name>")
print(dataset.num_rows)
```

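A single language can also be pulled on its own by pointing `data_files` at one shard; a sketch assuming the `<language>/data.json` layout used in this repo:

```python
from datasets import load_dataset

# Load only the Go shard from the dataset repo.
go = load_dataset(
    "codecomplete/<your_dataset_name>",
    data_files="go/data.json",
    split="train",
)
print(go.num_rows)
```
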
go/data.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b546807c2943daecac706f73f7f6938239eaecb3cd68c878304934da4eaef8fb
size 258838286
java/data.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3b7cbc57c4c38477acd06149f708d05402806a31c282ff48e68e2f372c524a39
size 866922739
javascript/data.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:31d42c3932c93c58a603751767007cfdb2c7d9426346b40119908e63a641ae4f
size 691934946
ruby/data.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:39a2eeb4fe66c2f5ae8b73b8f5b9fb8fa240ca5f536ea66b9bbd66f70d867d4d
size 97314379
scala/data.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aac6e5b657ce6ae6f73f73f9bb1e53922f602b132526e77db3d3189d9fa017d0
size 49656466
typescript/data.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d5c1e7367608d5ea72d668ccac868f26a1ee57e5558d589e7f32fe0b8ff79cff
size 274468075
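The data.json entries above are Git LFS pointers rather than the data itself: `oid` is the SHA-256 of the real file and `size` is its byte count. After `git lfs pull`, a checked-out shard can be verified against its pointer; a small sketch:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file in 1 MiB chunks and return its hex SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# oid from the go/data.json pointer above.
expected = "b546807c2943daecac706f73f7f6938239eaecb3cd68c878304934da4eaef8fb"
print(sha256_of("go/data.json") == expected)
```
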