---
license: cc-by-4.0
---

# BHI SISR Dataset

The BHI SISR Dataset is intended for training single image super-resolution models. It is the result of tests on my BHI filtering method, which I wrote a Hugging Face community blog post about and which can be summarized very briefly as: removing (by filtering) only the worst-quality tiles from a training set has a far bigger positive effect on training metrics than keeping only the best-quality training tiles.

It consists of 390'241 images, all 512x512px and in the webp format.

Visual example - the first 48 training tiles

The advantage of such a big dataset is that when degradations are applied in a randomized manner to create a corresponding LR set, the distribution of degradation types and strengths should be sufficient simply because of the quantity of training tiles. I will create some corresponding x4 LR datasets for this one and publish them as well.
If an on-the-fly degradation pipeline is used during training, however, such a high quantity of training tiles would generally not be needed, since longer training runs take care of the distribution.
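The LR generation itself is not fixed here beyond being randomized and x4; below is a minimal sketch of what such a degradation pipeline could look like, assuming cv2/numpy. The degradation types, probabilities and parameter ranges are illustrative assumptions, not the actual pipeline used for the published LR sets.

```python
import random

import cv2
import numpy as np


def random_x4_lr(hr_path: str, lr_path: str) -> None:
    """Create an x4 LR tile (128x128 from 512x512) with randomized degradations.

    The degradation choices and ranges below are illustrative, not the
    actual pipeline used for the published LR sets.
    """
    img = cv2.imread(hr_path, cv2.IMREAD_COLOR)

    # Random Gaussian blur before downscaling
    if random.random() < 0.5:
        k = random.choice([3, 5, 7])
        img = cv2.GaussianBlur(img, (k, k), sigmaX=random.uniform(0.2, 2.0))

    # x4 downscale with a randomly chosen interpolation kernel
    h, w = img.shape[:2]
    interp = random.choice([cv2.INTER_AREA, cv2.INTER_CUBIC, cv2.INTER_LINEAR])
    img = cv2.resize(img, (w // 4, h // 4), interpolation=interp)

    # Random additive Gaussian noise
    if random.random() < 0.5:
        noise = np.random.normal(0.0, random.uniform(1.0, 10.0), img.shape)
        img = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

    # Random JPEG compression artifacts
    if random.random() < 0.5:
        quality = random.randint(40, 95)
        _, enc = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, quality])
        img = cv2.imdecode(enc, cv2.IMREAD_COLOR)

    cv2.imwrite(lr_path, img)
```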

Size on disc:

```
du BHI_HR
131199816	BHI_HR/
```

I am releasing the full dataset here for future use. There may of course be attempts later to create distilled versions of this dataset that perform better, since I might find additional metrics or filtering methods that help reduce dataset size while achieving better training validation metric performance.

## Used Datasets

This BHI SISR Dataset consists of the following datasets:

- HQ50K
- ImageNet
- FFHQ
- LSDIR
- DF2K
- OST
- iNaturalist 2019
- COCO 2017 Train
- COCO 2017 Unlabeled
- Nomosv2
- HFA2K
- Nomos_Uni
- ModernAnimation1080_v3
- Digital_Art_v2

## Tiling

These datasets were then tiled to 512x512px for improved I/O speed during training; normalizing the image dimensions is also convenient, since processing then takes consistent resources.

In some cases this led to fewer images in the dataset, because images with dimensions < 512px were filtered out. Some examples:

- COCO 2017 unlabeled: from 123'403 images -> 8'814 tiles.
- COCO 2017 train: from 118'287 images -> 8'442 tiles.

In other cases this led to more images, because the original images were high resolution and therefore yielded multiple 512x512 tiles per image. For example, HQ50K -> 213'396 tiles.
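The tiling tool itself is not named above; the following is a minimal sketch of the described tiling logic (non-overlapping 512x512 tiles, images smaller than 512px in either dimension are skipped), assuming Pillow. The output naming here is illustrative.

```python
from pathlib import Path

from PIL import Image


def tile_image(src: Path, out_dir: Path, tile: int = 512) -> int:
    """Cut non-overlapping tile x tile crops from one image.

    Images smaller than the tile size in either dimension are skipped,
    which is how e.g. most COCO images drop out. Returns tiles written.
    The output naming here is illustrative.
    """
    img = Image.open(src)
    w, h = img.size
    if w < tile or h < tile:
        return 0  # too small -> filtered out
    out_dir.mkdir(parents=True, exist_ok=True)
    count = 0
    for top in range(0, h - tile + 1, tile):
        for left in range(0, w - tile + 1, tile):
            crop = img.crop((left, top, left + tile, top + tile))
            crop.save(out_dir / f"{src.stem}_{count}.png")
            count += 1
    return count
```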

## Conversion

Images that were in jpg format were converted to png using mogrify (ImageMagick).

## BHI Filtering

I then filtered these sets with the BHI filtering method using the following thresholds (a minimal sketch of the filtering step is shown after the list):

- Blockiness < 30
- HyperIQA >= 0.2
- IC9600 >= 0.4
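A minimal sketch of how such a threshold filter could be applied, assuming the Blockiness, HyperIQA and IC9600 scores have already been computed per tile; the CSV layout and helper below are assumptions for illustration, not the actual BHI tooling:

```python
import csv
import shutil
from pathlib import Path

# Thresholds from the BHI filtering step above
BLOCKINESS_MAX = 30
HYPERIQA_MIN = 0.2
IC9600_MIN = 0.4


def filter_tiles(score_csv: Path, src_dir: Path, dst_dir: Path) -> int:
    """Copy tiles that satisfy all three thresholds into dst_dir.

    Expects a CSV with the columns: filename, blockiness, hyperiqa, ic9600
    (this layout is an assumption for illustration). Returns tiles kept.
    """
    dst_dir.mkdir(parents=True, exist_ok=True)
    kept = 0
    with open(score_csv, newline="") as f:
        for row in csv.DictReader(f):
            if (float(row["blockiness"]) < BLOCKINESS_MAX
                    and float(row["hyperiqa"]) >= HYPERIQA_MIN
                    and float(row["ic9600"]) >= IC9600_MIN):
                shutil.copy2(src_dir / row["filename"], dst_dir / row["filename"])
                kept += 1
    return kept
```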

This led to the following per-dataset tile quantities that passed the filtering process and made it into the BHI SISR Dataset:

| Dataset | Tiles |
| --- | --- |
| DF2K | 12'639 |
| FFHQ | 35'112 |
| HQ50K | 61'647 |
| ImageNet | 4'505 |
| LSDIR | 116'141 |
| OST | 1'048 |
| COCO2017_train | 5'621 |
| COCO2017_unlabeled | 5'887 |
| Digital_Art_v2 | 1'620 |
| HFA2K | 2'280 |
| ModernAnimation1080_v3 | 4'109 |
| Nomos_Uni | 2'466 |
| Nomosv2 | 5'226 |
| inaturalist_2019 | 131'943 |

The main point here is that this dataset, even though it still consists of roughly 390k tiles, is already a strongly reduced version of the original datasets combined.

## Files

Files have been named '{dataset_name}_{index}.webp', so that if one of the used datasets turned out to be problematic concerning public access, it could still be removed from this dataset in the future.
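As an illustration of why that naming helps, removing every tile that came from one source dataset is then just a filename prefix match (a sketch, assuming the flat directory layout used here):

```python
from pathlib import Path


def remove_source_dataset(root: Path, dataset_name: str) -> int:
    """Delete all tiles that originate from one source dataset,
    relying on the '{dataset_name}_{index}.webp' naming scheme."""
    removed = 0
    for tile in root.glob(f"{dataset_name}_*.webp"):
        tile.unlink()
        removed += 1
    return removed


# Hypothetical example: remove_source_dataset(Path("BHI_HR"), "COCO2017_train")
```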

I converted to webp because of file size reduction: the dataset was originally at around 200GB, even after I used oxipng ("oxipng --strip safe --alpha *.png") for optimization. Lossless webp is simply the best widely supported option currently for lossless file size reduction. JPEG XL would be the absolute best modern option for lossless compression, but it is relatively new and not everything supports it yet (especially cv2 at the moment, which we use for training, so having the tiles in JPEG XL would be worthless right now). I will rant here about the decision of one company to disregard browser support of JPEG XL in favor of AVIF, which is far worse concerning lossless compression/file size; webp is way older and still beats it.
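The exact conversion command is not given above; a minimal sketch of a lossless png -> webp conversion with Pillow (the parameters are illustrative):

```python
from pathlib import Path

from PIL import Image


def png_to_lossless_webp(src: Path) -> Path:
    """Convert a PNG tile to lossless WebP.

    method=6 is the slowest/highest compression effort; for lossless WebP
    in Pillow, 'quality' also acts as compression effort rather than loss.
    """
    dst = src.with_suffix(".webp")
    with Image.open(src) as img:
        img.save(dst, "WEBP", lossless=True, quality=100, method=6)
    return dst
```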

TODO: add references here about webp / JPEG XL being superior concerning lossless compression.

(Note to self: tiles 'inaturalist_2019_65228.png', 'inaturalist_2019_54615.png' and 'inaturalist_2019_22816.png' were removed because of a PNG error when checking with pngcheck.)
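pngcheck was used for that check; a rough Pillow-based equivalent for spotting files that fail to parse (an assumption of how one could re-check, not the original tool):

```python
from pathlib import Path

from PIL import Image


def find_broken_images(root: Path, pattern: str = "*.png") -> list[Path]:
    """Return files that fail to parse; a rough stand-in for pngcheck."""
    broken = []
    for path in root.glob(pattern):
        try:
            with Image.open(path) as img:
                img.verify()  # checks file structure without a full decode
        except Exception:
            broken.append(path)
    return broken
```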

## Upload

I uploaded the dataset as multi-part zip archives with a max of 25GB per file, resulting in X archive files.
This should work with the LFS file size limit, and I chose zip because it is such a common format; I could of course have used another format like 7z or zpaq.

I actually once worked on an archiver called ShareArchiver. The main idea was that online shared data (like this dataset) generally gets archived once (by the uploader) but downloaded and extracted maybe a thousand times, so the resulting file size (faster download time for those thousand downloads) and extraction speed (those thousand extractions) matter far more than compression speed. In other words, the very long archiving time of that one person is traded for faster downloads and extraction for everyone. The design was to use only highly asymmetric compression algorithms, where compression can be very slow as long as decompression is fast, and then brute force during compression: each file is compressed with all of the available asymmetric algorithms, the resulting file sizes are compared, and only the smallest one is added to the .share archive. Just something from the past I wanted to mention. (One could also use the max flag to use all algorithms, including the symmetric ones like paq8o, to brute force the smallest archive file possible; compression time would then be very long, but this flag was meant more for archiving than for online sharing, for cases where storage space matters far more than either compression or decompression speed.)