
This dataset is a 10% repo-sampled dataset for selected languages: with a repo sample rate of 10%, we take 10% of all repos for a given language but include all files inside each sampled repo.
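To make the sampling scheme concrete, here is a minimal sketch of repo-level sampling. The function name, the repo_name field, and the in-memory list of files are illustrative assumptions, not the actual datagen implementation:

import random

def sample_by_repo(files, sample_rate=0.10, seed=0):
    """files: iterable of dicts that each carry a 'repo_name' key (assumed schema)."""
    repos = sorted({f["repo_name"] for f in files})
    rng = random.Random(seed)
    kept = set(rng.sample(repos, k=int(len(repos) * sample_rate)))
    # Keep every file belonging to a sampled repo, drop all files from the others.
    return [f for f in files if f["repo_name"] in kept]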

This was generated using our codecomplete/training/completions/datagen:

./launch.sh \
  --dataset-name bigcode/starcoderdata \
  --subset c,cpp,go,java,javascript,typescript,python,ruby,scala,sql \
  --sample-rate 0.01 \
  --hf-token <HF_TOKEN> \
  --output-dir /home/${USER}/data \
  --cache-dir /home/${USER}/hfcache \
  --output-name c-cpp-go-java-javascript-typescript-python-ruby-scala-sql-0.01 \
  --shuffle \
  --build

Create the repository

# Install Git LFS to support large files
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash

sudo apt-get install git-lfs
# create the dataset repo
huggingface-cli repo create <your_dataset_name> --type dataset --organization codecomplete

e.g.

huggingface-cli repo create base_dataset --type dataset --organization codecomplete
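The same can also be done from Python via huggingface_hub (the repo id below mirrors the CLI example, and the token is a placeholder):

from huggingface_hub import create_repo

# Create the dataset repository under the codecomplete organization.
create_repo("codecomplete/base_dataset", repo_type="dataset", token="<HF_TOKEN>")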

Clone the repository

git lfs install

git clone https://huggingface.co/datasets/<your_organization_name>/<your_dataset_name>

e.g.
git clone https://huggingface.co/datasets/codecomplete/base_dataset
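If you only need a local copy for inspection (not for pushing changes), huggingface_hub's snapshot_download is an alternative to a full git clone; the repo id and local directory below are just the example values:

from huggingface_hub import snapshot_download

# Download a read-only copy of the dataset repo without git/git-lfs.
snapshot_download(repo_id="codecomplete/base_dataset", repo_type="dataset",
                  local_dir="base_dataset")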

Prepare your files
Create a descriptive README.md and check the dataset.json file

cp /somewhere/base_dataset/*.json .
git lfs track "*.json"
git add .gitattributes
git add *.json

git add --all
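Before committing, you can sanity-check that the copied files load the same way consumers will; this assumes the .json files are in a format the datasets json builder accepts (e.g. JSON Lines):

import glob
from datasets import load_dataset

# Try loading the local JSON files; this raises early if any file is malformed.
local = load_dataset("json", data_files=glob.glob("*.json"))
print(local)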

Upload your files

git status
git commit -m "First version of the your_dataset_name dataset."
git push
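Equivalently, you can push the prepared files from Python with huggingface_hub's upload_folder; the token, folder path, and repo id are placeholders:

from huggingface_hub import HfApi

api = HfApi(token="<HF_TOKEN>")
# Upload everything in the local clone to the dataset repo in one commit.
api.upload_folder(
    folder_path="base_dataset",
    repo_id="codecomplete/<your_dataset_name>",
    repo_type="dataset",
    commit_message="First version of the <your_dataset_name> dataset.",
)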

Verify dataset

from datasets import load_dataset
dataset = load_dataset("codecomplete/<your_dataset_name>")
print(dataset.num_rows)
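For a dataset with multiple splits, load_dataset returns a DatasetDict, so dataset.num_rows prints a mapping from split name to row count rather than a single number.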