Error loading Code Translation #2
by ayazdan - opened
I followed the instructions as follows:
import datasets
code_translation_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "code_translation")
print(code_translation_dataset)
It raises the following error:
FileNotFoundError Traceback (most recent call last)
<ipython-input-4-e5fb5f11b2e5> in <cell line: 2>()
1 import datasets
----> 2 code_translation_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "code_translation")
3 print(code_translation_dataset)
17 frames
/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_file_system.py in _raise_file_not_found(path, err)
886 elif isinstance(err, HFValidationError):
887 msg = f"{path} (invalid repository id)"
--> 888 raise FileNotFoundError(msg) from err
889
890
FileNotFoundError: datasets/NTU-NLP-sg/xCodeEval@main/code_translation/validation/C%23.jsonl
I just downloaded the entire dataset with the following command:
code_translation_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "code_translation")
Sometimes the cache folder of the huggingface datasets package gets corrupted. I would suggest deleting your huggingface datasets cache folder. You can also try setting a different cache path with the following command:
code_translation_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "code_translation", cache_dir="path/to/the/cache/")
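If you are not sure where the cache lives, here is a minimal sketch for wiping it, assuming the default location reported by datasets.config.HF_DATASETS_CACHE (usually ~/.cache/huggingface/datasets) and that you do not need to keep any other cached datasets:
import shutil
import datasets

# Default cache directory used by the datasets library; adjust if you set
# HF_DATASETS_CACHE or HF_HOME to a custom location.
cache_dir = datasets.config.HF_DATASETS_CACHE
print(f"Deleting datasets cache at {cache_dir}")
shutil.rmtree(cache_dir, ignore_errors=True)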
Please let me know if that works for you. If you are in a hurry, you can also git lfs pull the entire repo:
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/NTU-NLP-sg/xCodeEval
cd xCodeEval
git lfs pull --include "code_translation/*"
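Once the files are pulled, one way to load them directly is through the generic json loader. This is just a sketch: the glob patterns below assume the jsonl files sit under code_translation/<split>/ as in the repo (the validation split appears in your error path; the test pattern is an assumption), so adjust them to whatever you actually pulled.
import datasets

# Hypothetical local paths after `git lfs pull`; point these at the jsonl
# files that actually exist in your clone.
data_files = {
    "validation": "xCodeEval/code_translation/validation/*.jsonl",  # seen in the error path
    "test": "xCodeEval/code_translation/test/*.jsonl",              # assumed split name
}
code_translation_dataset = datasets.load_dataset("json", data_files=data_files)
print(code_translation_dataset)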
Also, if you are looking into translation data, please check this thread.
For future reference, please follow https://github.com/ntunlp/xCodeEval/issues/12#issuecomment-2356961329