This dataset is a 10% repo-sampled dataset for selected languages. Sampling is done at the repository level: with a 10% sample rate we take 10% of all repos for a given language, but include every file inside each sampled repo.
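
Repo-level sampling can be sketched like this (a minimal illustration, not the actual datagen code; the `lang` and `repo_name` field names are assumptions):

```python
import random
from collections import defaultdict

def sample_by_repo(rows, rate=0.10, seed=0):
    """Keep `rate` of the repos per language, but every file of a kept repo."""
    repos_per_lang = defaultdict(set)
    for row in rows:
        repos_per_lang[row["lang"]].add(row["repo_name"])

    rng = random.Random(seed)
    kept = set()
    for lang, repos in repos_per_lang.items():
        repos = sorted(repos)
        n_keep = max(1, int(len(repos) * rate))
        kept.update((lang, repo) for repo in rng.sample(repos, n_keep))

    # A file survives iff its repo was sampled; files are never filtered individually.
    return [row for row in rows if (row["lang"], row["repo_name"]) in kept]
```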

This was generated using our codecomplete/training/completions/datagen tooling:

```bash
./launch.sh \
  --dataset-name bigcode/starcoderdata \
  --subset c,cpp,go,java,javascript,typescript,python,ruby,scala,sql \
  --sample-rate 0.01 \
  --hf-token <HF_TOKEN> \
  --output-dir /home/${USER}/data \
  --cache-dir /home/${USER}/hfcache \
  --output-name c-cpp-go-java-javascript-typescript-python-ruby-scala-sql-0.01 \
  --shuffle \
  --build
```

**Create the repository**  

```bash
# Install Git LFS to support large files
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash

sudo apt-get install git-lfs
```

```bash
# create the dataset repo
huggingface-cli repo create <your_dataset_name> --type dataset --organization codecomplete
```

e.g.
```bash
huggingface-cli repo create base_dataset --type dataset --organization codecomplete
```
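
The same can be done from Python with `huggingface_hub` (a sketch; it assumes you are already authenticated via `huggingface-cli login` or pass a token):

```python
from huggingface_hub import create_repo

# Creates https://huggingface.co/datasets/codecomplete/base_dataset
create_repo("codecomplete/base_dataset", repo_type="dataset", exist_ok=True)
```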

**Clone the repository**  
```bash
git lfs install

git clone https://huggingface.co/datasets/<your_organization_name>/<your_dataset_name>

# e.g.
git clone https://huggingface.co/datasets/codecomplete/base_dataset
```

**Prepare your files**  
Create a descriptive README.md and check the dataset.json file.

```bash
cp /somewhere/base_dataset/*.json .
git lfs track "*.json"  # quote the pattern so Git LFS records it instead of the shell expanding it
git add .gitattributes
git add *.json

git add --all
```
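
Before committing, it can help to sanity-check the JSON shards (a minimal sketch; it assumes JSON-lines files with a `content` field, so adjust to your actual schema):

```python
import glob
import json

for path in sorted(glob.glob("*.json")):
    rows = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)   # raises on a malformed line
            assert "content" in record  # adjust to your actual schema
            rows += 1
    print(f"{path}: {rows} rows OK")
```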

**Upload your files**  
```bash
git status
git commit -m "First version of the <your_dataset_name> dataset."
git push
```

**Verify dataset**  
```python
from datasets import load_dataset
dataset = load_dataset("codecomplete/<your_dataset_name>")
print(dataset.num_rows)
```
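
To spot-check actual rows rather than just counts (assuming a `train` split and a `content` column, which may differ for your dataset):

```python
from datasets import load_dataset

dataset = load_dataset("codecomplete/<your_dataset_name>")
# Preview the first few training examples.
for example in dataset["train"].select(range(3)):
    print(example["content"][:200])
```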