This is a repo-sampled dataset for selected languages, built with a repository sample rate of 10%: for each language we take 10% of all repositories, but include every file inside each sampled repository (see the sampling sketch at the end of this README). It was generated with our codecomplete/training/completions/datagen tooling:

```bash
./launch.sh \
  --dataset-name bigcode/starcoderdata \
  --subset c,cpp,go,java,javascript,typescript,python,ruby,scala,sql \
  --sample-rate 0.01 \
  --hf-token \
  --output-dir /home/${USER}/data \
  --cache-dir /home/${USER}/hfcache \
  --output-name c-cpp-go-java-javascript-typescript-python-ruby-scala-sql-0.01 \
  --shuffle \
  --build
```

**Create the repository**

```bash
# Install git-lfs to support large files
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
```

```bash
# Create the dataset repo
huggingface-cli repo create <dataset_name> --type dataset --organization codecomplete
```

e.g.

```bash
huggingface-cli repo create base_dataset --type dataset --organization codecomplete
```

**Clone the repository**

```bash
git lfs install
git clone https://huggingface.co/datasets/<organization>/<dataset_name>
# e.g.
git clone https://huggingface.co/datasets/codecomplete/base_dataset
```

**Prepare your files**

Create a descriptive README.md and check the dataset JSON files:

```bash
cp /somewhere/base_dataset/*.json .
git lfs track "*.json"
git add .gitattributes
git add *.json
git add --all
```

**Upload your files**

```bash
git status
git commit -m "First version of the <dataset_name> dataset."
git push
```

**Verify the dataset**

```python
from datasets import load_dataset

dataset = load_dataset("codecomplete/<dataset_name>")
print(dataset.num_rows)
```
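
**Sampling sketch**

For reference, repository-level sampling works roughly as follows. This is a minimal sketch, not the actual datagen implementation; the repo-name field `max_stars_repo_name` is an assumption based on the StarCoder data layout, and `sample_rate` corresponds to the `--sample-rate` flag above.

```python
import random

def sample_repos(files, sample_rate=0.1, seed=0):
    """Keep `sample_rate` of all repos, but every file of each kept repo.

    Sketch only: `files` is a list of dicts, and the repo-name field
    `max_stars_repo_name` is an assumption about the column layout.
    """
    repos = sorted({f["max_stars_repo_name"] for f in files})
    rng = random.Random(seed)
    kept = set(rng.sample(repos, k=int(len(repos) * sample_rate)))
    return [f for f in files if f["max_stars_repo_name"] in kept]
```

With `sample_rate=0.1` this keeps roughly 10% of repositories per language while preserving each kept repository in full, which is what the sampling description at the top of this README refers to.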