---
license: mit
task_categories:
- image-classification
- zero-shot-image-classification
- text-to-image
language:
- en
tags:
- art
- anime
- not-for-all-audiences
size_categories:
- 1M<n<10M
---

# Danbooru 2023 webp: A space-efficient version of Danbooru 2023

This dataset is a resized/re-encoded version of [danbooru2023](https://huggingface.co/datasets/nyanko7/danbooru2023), with non-image and truncated files removed and all remaining images resized to a smaller size.

This dataset has been updated to latest_id = 7,832,883.
Thanks to DeepGHS!

**Notice**: the content of the updates folder and deepghs/danbooru_newest-webp-4Mpixel has been merged into 2000~2999.tar, so you can safely ignore everything in the updates folder!

---

## Details
This dataset employs a few methods to reduce its size and improve efficiency.

### Size and Format
All images with more than 2048x2048 pixels are resized to approximately 2048x2048 pixels using the bicubic algorithm.<br>
After resizing, any image whose longer edge exceeds 16383 pixels is removed.<br>
(One reason is that webp does not allow such dimensions; another is that the aspect ratio of such images is extremely large or small.)

All images are encoded and saved as 90% quality webp using the Pillow library in Python, which is roughly half the size of 100% quality lossy webp.

The total size of this dataset is around 1.3~1.4 TB, which is less than 20% of the original file size.
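
Below is a minimal sketch of the resize-and-encode step described above, assuming Pillow; the `process_image` helper and file paths are illustrative, not the actual processing script.

```python
from PIL import Image

MAX_PIXELS = 2048 * 2048   # resize target: roughly 2048x2048 total pixels
MAX_EDGE = 16383           # webp's maximum allowed edge length

def process_image(src_path: str, dst_path: str) -> bool:
    """Resize an image to ~2048x2048 pixels and save it as 90% quality webp.

    Returns False when the image is dropped (longer edge still exceeds 16383).
    """
    img = Image.open(src_path).convert("RGB")
    w, h = img.size
    if w * h > MAX_PIXELS:
        scale = (MAX_PIXELS / (w * h)) ** 0.5
        img = img.resize((round(w * scale), round(h * scale)), Image.BICUBIC)
    if max(img.size) > MAX_EDGE:
        return False  # aspect ratio too extreme for webp, skip this image
    img.save(dst_path, format="WEBP", quality=90)
    return True
```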

### Webdataset
This dataset use webdataset library to save all the tarfile, therefore, you can also use webdataset to load them easily. This is also a recommended way.

The `__key__` of each file is its post id. You can use this id to query the [metadata database](https://huggingface.co/datasets/KBlueLeaf/danbooru2023-sqlite) easily.
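
For example, here is a minimal loading sketch with webdataset; the shard pattern below is illustrative and should be adjusted to the tar files you have actually downloaded.

```python
import webdataset as wds

# Shard pattern is illustrative; point it at the downloaded tar files.
shards = "data/danbooru2023-webp/{0000..2999}.tar"

dataset = (
    wds.WebDataset(shards)
    .decode("pil")                 # decode .webp entries into PIL images
    .to_tuple("__key__", "webp")   # the key (post id) and the image itself
)

for post_id, image in dataset:
    # post_id matches the id used in the danbooru2023-sqlite metadata database
    print(post_id, image.size)
    break
```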