taesiri committed on
Commit
686cf52
1 Parent(s): 81a0c0e

Update README.md

Files changed (1):
  1. README.md +3 −1
README.md CHANGED
@@ -39,7 +39,9 @@ size_categories:
 **ImageNet-Hard-4K** is 4K version of the original [**ImageNet-Hard**](https://huggingface.co/datasets/taesiri/imagenet-hard) dataset, which is a new benchmark that comprises 10,980 images collected from various existing ImageNet-scale benchmarks (ImageNet, ImageNet-V2, ImageNet-Sketch, ImageNet-C, ImageNet-R, ImageNet-ReaL, ImageNet-A, and ObjectNet). This dataset poses a significant challenge to state-of-the-art vision models as merely zooming in often fails to improve their ability to classify images correctly. As a result, even the most advanced models, such as `CLIP-ViT-L/14@336px`, struggle to perform well on this dataset, achieving a mere `2.02%` accuracy.
 
 
-"We employed [GigaGAN](https://mingukkang.github.io/GigaGAN/) to upscale each image from the original ImageNet-Hard dataset to a resolution of 4K."
+## Upscaling Procedure
+
+We employed [GigaGAN](https://mingukkang.github.io/GigaGAN/) to upscale each image from the original ImageNet-Hard dataset to a resolution of 4K.
 
 
 ### Dataset Distribution
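The upscaling procedure described in the diff can be sketched as follows. GigaGAN's upscaler is not a drop-in library call, so this stdlib-only snippet uses nearest-neighbor resampling purely as a stand-in: only the target geometry (scaling each image's long side to a 4K-class resolution) reflects the described procedure, and the default of 4096 px is an assumption — the card does not state exact output dimensions.

```python
def upscale_nearest(pixels, target_long_side=4096):
    """Nearest-neighbor upscale of a 2D pixel grid.

    Stand-in for GigaGAN super-resolution: only the output geometry
    (long side brought to ~4K) matches the procedure in the card.
    """
    h, w = len(pixels), len(pixels[0])
    scale = target_long_side / max(h, w)
    new_h, new_w = round(h * scale), round(w * scale)
    # Map each output coordinate back to its nearest source pixel.
    return [
        [pixels[min(int(y / scale), h - 1)][min(int(x / scale), w - 1)]
         for x in range(new_w)]
        for y in range(new_h)
    ]

# Tiny demo grid; a real image would be an H x W array of RGB pixels.
demo = [[1, 2],
        [3, 4]]
big = upscale_nearest(demo, target_long_side=4)
print(len(big), len(big[0]))  # 4 4
```

The real pipeline would replace the resampling with a forward pass through GigaGAN's super-resolution model; the scale-factor bookkeeping is the same.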