Getting error 36 (File name too long)
Can you try specifying the languages you'd like to include in the dataset? For example:
from datasets import load_dataset
dataset = load_dataset('bible-nlp/biblenlp-corpus', languages=['eng'])
Specifying the languages does not help; I get the same error.
If you are working on Windows, you can enable long path support with a registry setting (the LongPathsEnabled value under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem).
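For reference, a minimal Python sketch of that registry change (assuming Windows 10 version 1607 or later, and it must be run with administrator rights):

import winreg

# Enable Win32 long path support; requires administrator privileges.
key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r'SYSTEM\CurrentControlSet\Control\FileSystem',
    0,
    winreg.KEY_SET_VALUE,
)
winreg.SetValueEx(key, 'LongPathsEnabled', 0, winreg.REG_DWORD, 1)
winreg.CloseKey(key)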
I work on Mac.
And I do not really understand why the filename needs to be made so long by
name=f'{"-".join([x for x in languages])}'
in https://huggingface.co/datasets/bible-nlp/biblenlp-corpus/blob/main/biblenlp-corpus.py
I am getting the same error when running it in Google Colab, and I haven't figured out whether there is a way to enable long file names there.
I'll probably implement a change that creates an md5 hash from the languages and uses that instead at some point. If anyone wants to do that and put in a pull request, that would work too. The key things are that it creates the same filename for the same settings, so that the dataset caching features of Hugging Face keep working, and that the filename is the same length every time. If it's still too long at that point, we'd have to see whether there are other ways to shorten the overall filename.
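For example, something along these lines (just a sketch of the idea, not the final implementation):

import hashlib

# The digest is deterministic and always 32 hex characters, regardless of
# how many languages are selected, so cached filenames stay stable and short.
for langs in (['eng'], ['eng', 'fra', 'deu', 'spa']):
    print(hashlib.md5('-'.join(langs).encode('utf-8')).hexdigest())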
Would this mean just changing the _LANGUAGES parameter to the md5 hash? That would look like the following:
import hashlib
# assumes _LANGUAGES is a string; if it's a list, it would need '-'.join(_LANGUAGES) first
_LANGUAGES = hashlib.md5(_LANGUAGES.encode('UTF-8')).hexdigest()
Or would you just change the name parameter in the BiblenlpCorpusConfig class from
name=f'{"-".join([x for x in languages])}'
to
name = hashlib.md5(f'{"-".join([x for x in languages])}'.encode('UTF-8')).hexdigest()
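For what it's worth, here is a minimal sketch of that second option, assuming BiblenlpCorpusConfig subclasses datasets.BuilderConfig and takes a languages list (the real class in biblenlp-corpus.py carries more fields than shown here):

import hashlib

from datasets import BuilderConfig

class BiblenlpCorpusConfig(BuilderConfig):
    # Hypothetical simplification of the real config class in biblenlp-corpus.py.
    def __init__(self, languages=None, **kwargs):
        languages = languages or []
        # Hash the joined language codes instead of using them verbatim:
        # the name is deterministic for the same settings (so Hugging Face
        # caching still works) and always 32 hex characters long.
        name = hashlib.md5('-'.join(languages).encode('utf-8')).hexdigest()
        super().__init__(name=name, **kwargs)
        self.languages = languages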