Supplement dataset
#9 opened by narugo
We made a supplement dataset for Danbooru that syncs all the newest images daily: https://huggingface.co/datasets/deepghs/danbooru_newest
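If you just want to see how the supplement dataset is organized before pulling anything, the standard huggingface_hub client can list its files. This is only a minimal sketch, assuming the repo is a public dataset on the Hub; the actual archive layout inside it may differ from what you expect.

from huggingface_hub import HfApi

# List the files of the supplement dataset repo (a public dataset repo on the Hub).
api = HfApi()
files = api.list_repo_files('deepghs/danbooru_newest', repo_type='dataset')

# Print a few entries to get a feel for how the images are packaged.
for name in files[:20]:
    print(name)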
We are also developing new tooling to quickly download large numbers of images by their ids: https://github.com/deepghs/cheesechaser
It is still a work-in-progress project, but it is already usable. Here is an example:
from cheesechaser.datapool import DanbooruNewestDataPool

pool = DanbooruNewestDataPool()

# Download danbooru images to the directory /data/exp2
pool.batch_download_to_directory(
    resource_ids=[
        # old images, downloaded from nyanko7/danbooru2023
        *range(2010000, 2010100),
        # newest images, downloaded from deepghs/danbooru_newest
        *range(7508529, 7508629),
    ],
    dst_dir='/data/exp2',
    max_workers=12,
)