The 30M dataset is too big to download; could you please provide a smaller one?

#4
by jinbo1129 - opened

Hello, thanks for your great work!
The 30M dataset is too big for me to download.
Could you also provide some smaller datasets, such as 1M or 100K?

Thanks!

Hi, I don't know how to download the 'example_input_files' directory of the 30M dataset on Hugging Face. Could you help me? Thank you very much.

Thank you for your questions and interest in Genecorpus-30M.

Regarding how to download data:
You can download any data in this repository by cloning the repository (if you have Git LFS installed), by using wget with the link provided via the down arrow next to each file, or by using the down arrow to download the file directly. Please see the closed discussion #3 for more information (https://huggingface.co/datasets/ctheodoris/Genecorpus-30M/discussions/3). A programmatic option is sketched below.
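For example, the `huggingface_hub` Python library can fetch just a subdirectory of the dataset repository. This is a minimal sketch rather than an official part of the repository: the `example_input_files/*` pattern is assumed from the directory name mentioned above, and `allow_patterns` support requires a reasonably recent version of `huggingface_hub`.

```python
# Sketch: download only the example_input_files directory of Genecorpus-30M
# instead of the full corpus (assumes: pip install huggingface_hub).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="ctheodoris/Genecorpus-30M",
    repo_type="dataset",
    allow_patterns="example_input_files/*",  # assumed directory pattern
)
print(f"Files downloaded to: {local_dir}")
```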

Regarding the data size:
Genecorpus-30M is a large-scale pretraining corpus, so a large amount of data is necessary to accomplish the pretraining. You should be able to download the full dataset using the methods above. If you are interested in fine-tuning with limited task-specific data, you can always tokenize your own smaller datasets with the transcriptome tokenizer provided in the Geneformer repository (https://huggingface.co/ctheodoris/Geneformer), as sketched below.
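As a rough sketch of that workflow (the class and argument usage below follows the example documented in the Geneformer repository and may differ across versions; the directory paths, output prefix, and the `cell_type` attribute are placeholders for your own data):

```python
# Sketch: tokenizing a small custom dataset with Geneformer's transcriptome
# tokenizer (usage per the Geneformer repo; names may vary by version).
from geneformer import TranscriptomeTokenizer

# Map cell metadata attributes in the input files to output attribute names;
# "cell_type" here is a placeholder for your own metadata column.
tk = TranscriptomeTokenizer({"cell_type": "cell_type"}, nproc=4)

# Tokenize the expression files in the input directory into a tokenized dataset.
tk.tokenize_data(
    "path/to/loom_data_directory",  # placeholder input directory
    "path/to/output_directory",     # placeholder output directory
    "my_dataset",                   # placeholder output prefix
)
```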

ctheodoris changed discussion status to closed
