How to load only specific files from this huge dataset?

#1
by tmsreekanth98 - opened

I have tried everything, but the loading script downloads the entire dataset at the first step itself, which is very time-consuming.

MHoubre changed discussion status to closed
MHoubre changed discussion status to open
MIDAS Research Laboratory, IIIT-Delhi org
edited Mar 6, 2023

Hello,
If you know exactly which part of the dataset you want, you can go to "Files and versions", click on the files you want, and download them. You would then have to decompress the gz archives and load the data files using:

from datasets import load_dataset
# the loader name is lowercase "json"; the path below is a placeholder for your downloaded file
data = load_dataset("json", data_files={"train": "path/to/downloaded_file.jsonl"})

data_files is a dictionary where the key is a split of the dataset ("train", "test", etc.) and the value is the path to the file you have downloaded.

However, I do not recommend this for training an actual model; it is better suited to exploring part of the dataset or building prototypes.

You can also use git clone to download all the files once and then use the dataset by loading it locally.


Thanks for the info. Can you please explain why you do not recommend training an actual model using part of this data? I am a beginner. Thanks

MIDAS Research Laboratory, IIIT-Delhi org

Hello,

It depends on the task you want to work on and the architecture you use, but with less data, performance is likely to be significantly worse.


Thanks for the reply. This dataset is collected from scientific articles. Would you happen to know whether it covers all domains of science or whether it is domain-specific (e.g., the kp20k dataset covers only computer science)?

MIDAS Research Laboratory, IIIT-Delhi org

If you have questions regarding the content of any dataset, please refer to the article that introduced the data in question, as it provides much more detail. In the case of OAGKX, you can look at this article: https://aclanthology.org/2020.lrec-1.823/

MHoubre changed discussion status to closed
