---
license: mit
task_categories:
- image-classification
- zero-shot-image-classification
- text-to-image
language:
- en
tags:
- art
- anime
- not-for-all-audiences
size_categories:
- 1M<n<10M
---

This dataset is a resized/re-encoded version of Danbooru2023 that removes non-image/truncated files and resizes all images to a smaller size.

This dataset has already been updated to latest_id = 7,832,883. Thanks to DeepGHS!

**Notice**: The content of the updates folder and deepghs/danbooru_newest-webp-4Mpixel has been merged into 2000~2999.tar. You can safely ignore all the content in the updates folder!

---

## Details

This dataset employs a few methods to reduce its size and improve efficiency.

### Size and Format

This dataset resizes every image with more than 2048x2048 pixels (roughly 4M pixels) to approximately 2048x2048 pixels using the bicubic algorithm.
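As an illustration only (not the exact script used to build the dataset), the resize rule above, combined with the 90%-quality webp encode described later in this section, can be approximated with Pillow roughly like this; `shrink_and_encode` and its paths are hypothetical names:

```python
from PIL import Image

# Total-pixel budget implied by "near 2048x2048 pixels".
TARGET_PIXELS = 2048 * 2048

def shrink_and_encode(src_path: str, dst_path: str) -> None:
    """Sketch: downscale an oversized image and re-encode it as 90%-quality webp."""
    img = Image.open(src_path)
    if img.mode not in ("RGB", "RGBA"):
        # Simplified mode handling for this sketch; webp needs RGB or RGBA.
        img = img.convert("RGB")
    w, h = img.size
    if w * h > TARGET_PIXELS:
        # Keep the aspect ratio and scale the pixel count down to the budget.
        scale = (TARGET_PIXELS / (w * h)) ** 0.5
        img = img.resize((round(w * scale), round(h * scale)), Image.BICUBIC)
    img.save(dst_path, format="WEBP", quality=90)
```

The real pipeline may differ in details such as rounding and alpha handling.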
Images whose longer edge is still larger than 16383 pixels after resizing are removed (one reason is that webp does not allow dimensions that large; another is that such an aspect ratio is too extreme).

All images are encoded/saved as 90%-quality webp with the Pillow library in Python, which is about half the size of 100%-quality lossy webp. The total size of this dataset is around 1.3~1.4 TB, which is less than 20% of the original file size.

### Webdataset

This dataset uses the webdataset library to build all the tar files, so you can also use webdataset to load them easily; this is the recommended way. The `__key__` of each sample is its post id, which you can use to query the [metadata database](https://huggingface.co/datasets/KBlueLeaf/danbooru2023-sqlite) easily.
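For example, a minimal loading sketch with webdataset (the shard path below is hypothetical; point it at any downloaded tar file):

```python
from io import BytesIO

import webdataset as wds
from PIL import Image

# Hypothetical local path to one downloaded shard of this dataset.
shard = "data/1000.tar"

for sample in wds.WebDataset(shard):
    post_id = sample["__key__"]                   # post id, usable against the metadata database
    image = Image.open(BytesIO(sample["webp"]))   # raw webp bytes -> PIL image
    print(post_id, image.size)
    break
```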