loading of multiple shards

#5 opened by sayakpaul

@lhoestq could you please review this PR? If things seem good, I will do the following:

  • Add a dataset card.
  • Contact the dataset authors for further reviews.

It would be great if you could load the dataset locally to check that things work as expected.

You can use this Colab Notebook (https://colab.research.google.com/drive/1K3ZU8XUPRDOYD38MQS9nreQXJYitlKSW?usp=sharing) to check how to visualize the dataset.
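
For reference, here is a minimal sketch of what loading and visualizing a sample could look like locally (the repo ID and the image/depth_map feature names are assumptions here; see the notebook for the actual code):

from datasets import load_dataset
import matplotlib.pyplot as plt

# Assumed repo ID; swap in the actual dataset path.
ds = load_dataset("sayakpaul/nyu_depth_v2", split="val")

sample = ds[0]
fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(10, 4))
ax0.imshow(sample["image"])                      # RGB frame (assumed feature name)
ax0.set_title("image")
ax1.imshow(sample["depth_map"], cmap="viridis")  # depth map (assumed feature name)
ax1.set_title("depth_map")
plt.show()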

Here's how I generated the multiple sharded archives.

import tarfile

# Create one compressed archive per split from the extracted train/ and val/ folders.
with tarfile.open("train.tar.gz", "w:gz") as t:
    t.add("train")

with tarfile.open("val.tar.gz", "w:gz") as t:
    t.add("val")

^ Assuming we're in the nyudepthv2 directory (which is what you get after untarring the original TAR archive).

Then I used the tarproc utility (https://github.com/tmbdev-archive/tarproc) to create multiple shard archives:

tarsplit train.tar.gz --max-size 3e9 --maxshards 16 -o train
tarsplit val.tar.gz --maxshards 4 -o val
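
If tarproc isn't available, a rough pure-Python equivalent of the size-based splitting could look like this (just a sketch; the 3e9-byte cap and the {prefix}-NNNNNN.tar naming are assumptions mirroring the commands above):

import tarfile

def split_tar(src, out_prefix, max_bytes=3_000_000_000):
    """Re-pack the members of src into ~max_bytes uncompressed shards named {out_prefix}-NNNNNN.tar."""
    shard, shard_idx, shard_bytes = None, 0, 0
    with tarfile.open(src, "r:gz") as archive:
        for member in archive:
            # Start a new shard when the current one would exceed the size cap.
            if shard is None or shard_bytes + member.size > max_bytes:
                if shard is not None:
                    shard.close()
                shard = tarfile.open(f"{out_prefix}-{shard_idx:06d}.tar", "w")
                shard_idx, shard_bytes = shard_idx + 1, 0
            shard.addfile(member, archive.extractfile(member))
            shard_bytes += member.size
    if shard is not None:
        shard.close()

split_tar("train.tar.gz", "train")
split_tar("val.tar.gz", "val")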
sayakpaul changed pull request status to open

Awesome ! Thanks for the tip on tarproc :)

Is it ok to require the user to install h5py to load the dataset ? We may need to add it to the viewer @severo (it's an important one for loading vision datasets so I think it's fine)

Regarding the code, you can use relative paths here:

_URLS = {
    "train": [f"data/train-{i:06d}.tar" for i in range(12)],
    "val": [f"data/val-{i:06d}.tar" for i in range(2)],
}
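
For context, the loading script would typically consume these in _split_generators along these lines (a sketch only, using the standard GeneratorBasedBuilder API; the actual script may differ):

import datasets

# Sketch of the method inside the builder class of the loading script.
def _split_generators(self, dl_manager):
    # Relative paths in _URLS are resolved against the dataset repository on the Hub.
    archive_paths = dl_manager.download(_URLS)
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            gen_kwargs={"archives": [dl_manager.iter_archive(p) for p in archive_paths["train"]]},
        ),
        datasets.SplitGenerator(
            name=datasets.Split.VALIDATION,
            gen_kwargs={"archives": [dl_manager.iter_archive(p) for p in archive_paths["val"]]},
        ),
    ]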

Other than that it looks all good to me ! :D

Thanks for reviewing!

> Is it ok to require the user to install h5py to load the dataset ? We may need to add it to the viewer @severo (it's an important one for loading vision datasets so I think it's fine)

I am on the same page, i.e., no harm in having h5py as a requirement.
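
For context, each sample in the preprocessed archives is an HDF5 file, which is why h5py is needed. A rough sketch of reading one sample (the "rgb"/"depth" key names and array layouts are assumptions; double-check against the actual files):

import io

import h5py
import numpy as np

def read_sample(h5_bytes):
    # Parse one .h5 sample from bytes pulled out of a TAR shard.
    with h5py.File(io.BytesIO(h5_bytes), "r") as f:
        rgb = np.array(f["rgb"])      # assumed layout: (3, H, W) uint8
        depth = np.array(f["depth"])  # assumed layout: (H, W) float depth
    return np.transpose(rgb, (1, 2, 0)), depth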

> Regarding the code, you can use relative paths here:

Just pushed the changes. Keeps the code cleaner.

I am currently generating the info file with datasets-cli. After that's in, I guess we're good to merge?
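
For reference, the command is roughly this (the local path is a placeholder and the flag names have varied across datasets releases, so treat it as a sketch):

datasets-cli test ./nyu_depth_v2 --save_infos --all_configs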

Yup ! Just double check the generated info to make sure you have the correct number of examples in each split :)

h5py has always been installed as a dependency.

You're the best <3

> Yup ! Just double check the generated info to make sure you have the correct number of examples in each split :)

@lhoestq just added the modified README and it looks okay to me.

How do I generate the INFO json file? Like this one: https://huggingface.co/datasets/scene_parse_150/blob/main/dataset_infos.json

You're all set :)
You no longer need the JSON file; its info is redundant with the README. Feel free to merge if it's good for you.

Alrighty, captain! Merging away.

sayakpaul changed pull request status to merged
