The BHI SISR Dataset is meant for training single image super-resolution models and is the result of tests on my BHI filtering method, which I wrote [a huggingface community blogpost about](https://huggingface.co/blog/Phips/bhi-filtering) and which can be summarized in one sentence: removing (by filtering) only the worst-quality tiles from a training set has a far bigger positive effect on training metrics than keeping only the best-quality tiles.

It consists of 390'241 images, all of them 512x512px and in the webp format.
<figure>
<img src="https://cdn-uploads.huggingface.co/production/uploads/634e9aa407e669188d3912f9/bV0oaFKJzdsEqRme_lqU8.png" alt="48 first training tiles">
<figcaption>Visual example - the first 48 training tiles</figcaption>
</figure>
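
For a quick sanity check of a downloaded copy, here is a minimal sketch (the local `hr/` folder name is an assumption, adjust it to wherever the tiles are stored) that confirms every tile is a 512x512 webp:

```python
from pathlib import Path
from PIL import Image  # pip install Pillow

tiles = sorted(Path("hr").glob("*.webp"))  # hypothetical local folder of tiles
print(f"{len(tiles)} webp tiles found")    # should report 390241

for path in tiles:
    with Image.open(path) as img:
        # every tile in the dataset should be exactly 512x512px webp
        assert img.format == "WEBP" and img.size == (512, 512), path.name
```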
The advantage of such a big dataset is that when degradations are applied in a randomized manner to create a corresponding LR set, the distribution of degradation types and strengths should be sufficient simply because of the sheer quantity of training tiles. I will create some corresponding x4 LR datasets for this one and publish them as well.
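
To illustrate the idea (this is a sketch, not the exact pipeline the published LR sets will use, and all the blur, noise, and JPEG-quality ranges are arbitrary assumptions), a randomized x4 degradation with Pillow and NumPy could look like this:

```python
import io
import random

import numpy as np
from PIL import Image, ImageFilter  # pip install Pillow numpy

def degrade_x4(hr: Image.Image, rng: random.Random) -> Image.Image:
    """Create a randomly degraded x4 LR counterpart of an HR tile."""
    img = hr.convert("RGB")
    # Optional gaussian blur with a random radius (range is an assumption).
    if rng.random() < 0.5:
        img = img.filter(ImageFilter.GaussianBlur(radius=rng.uniform(0.2, 2.0)))
    # x4 downscale with a randomly picked resampling kernel.
    kernel = rng.choice([Image.Resampling.BICUBIC,
                         Image.Resampling.BILINEAR,
                         Image.Resampling.LANCZOS])
    img = img.resize((img.width // 4, img.height // 4), resample=kernel)
    # Optional additive gaussian noise of random strength.
    if rng.random() < 0.5:
        arr = np.asarray(img, dtype=np.float32)
        arr += np.random.default_rng(rng.getrandbits(32)).normal(
            0.0, rng.uniform(1.0, 10.0), arr.shape)
        img = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    # JPEG recompression at a random quality to simulate compression artifacts.
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=rng.randint(40, 95))
    return Image.open(io.BytesIO(buf.getvalue())).convert("RGB")

hr = Image.open("hr/000001.webp")  # hypothetical tile name
degrade_x4(hr, random.Random(0)).save("lr/000001.webp")  # needs an existing lr/ folder
```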
Though if an on-the-fly degradation pipeline is used during training, such a high quantity of training tiles would generally not be needed, since the larger number of training iterations already takes care of the degradation distribution.
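
A minimal sketch of what such an on-the-fly pipeline could look like as a PyTorch dataset, assuming the `degrade_x4` function from the snippet above and the same hypothetical `hr/` folder:

```python
import random
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset
from torchvision.transforms.functional import to_tensor  # pip install torch torchvision

class OnTheFlyPairs(Dataset):
    """Yields (lr, hr) tensor pairs, degrading each HR tile freshly on every access."""

    def __init__(self, hr_dir: str = "hr"):  # hypothetical folder of HR tiles
        self.paths = sorted(Path(hr_dir).glob("*.webp"))

    def __len__(self) -> int:
        return len(self.paths)

    def __getitem__(self, idx: int):
        hr = Image.open(self.paths[idx]).convert("RGB")
        # A fresh random degradation per access, so the degradation distribution
        # accumulates over training iterations instead of being baked into a
        # fixed pre-generated LR set.
        lr = degrade_x4(hr, random.Random())  # degrade_x4 from the sketch above
        return to_tensor(lr), to_tensor(hr)
```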