Transformers · EDSR · super-image · image-super-resolution · Inference Endpoints
Eugene Siow committed
Commit 83d3d05
1 Parent(s): b372606

Add eval notebook.

Files changed (1)
  1. README.md +10 -1
README.md CHANGED
@@ -56,7 +56,12 @@ Low Resolution (LR) images are created by using bicubic interpolation as the res
  During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
  Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.

- The following code provides some helper functions to get the data and preprocess/augment the data.
+ We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
+ ```bash
+ pip install datasets
+ ```
+ The following code gets the data and preprocesses/augments the data.
+
  ```python
  from datasets import load_dataset

@@ -117,6 +122,10 @@ The results columns below are represented below as `PSNR/SSIM`. They are compare

  ![Comparing Bicubic upscaling against x2 upscaling on Set5 Image 2](images/Set5_2_compare.png "Comparing Bicubic upscaling against x2 upscaling on Set5 Image 2")

+ You can find a notebook to easily run evaluation on pretrained models below:
+
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
+
  ## BibTeX entry and citation info
  ```bibtex
  @InProceedings{Lim_2017_CVPR_Workshops,
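
The `python` block referenced by the added instructions is cut off by the diff context above. As a minimal sketch of how the download and five-crop augmentation step typically looks, assuming the dataset id `eugenesiow/Div2k` and the `TrainDataset`, `EvalDataset` and `augment_five_crop` helpers from the super-image project (none of which are shown in this commit):

```python
from datasets import load_dataset

# Assumed helpers from the super-image project; not shown in the truncated
# snippet above.
from super_image.data import EvalDataset, TrainDataset, augment_five_crop

# Download the DIV2K bicubic x2 data (dataset id is an assumption) and apply
# the five-crop augmentation (four corners + center) described in the README.
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x2', split='train')\
    .map(augment_five_crop, batched=True, desc='Augmenting Dataset')
train_dataset = TrainDataset(augmented_dataset)

# The validation split is wrapped as-is for evaluation during training.
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x2', split='validation'))
```

The five-crop map is what produces the five patches per image mentioned in the augmentation paragraph.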
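
The linked Colab notebook itself is not part of this diff. As a rough sketch, assuming the super-image library exposes `EdsrModel` and an `EvalMetrics` helper and that the `eugenesiow/edsr` checkpoint and `eugenesiow/Set5` dataset ids apply, evaluating PSNR/SSIM on a benchmark set might look like this:

```python
from datasets import load_dataset
from super_image import EdsrModel                       # assumed model class
from super_image.data import EvalDataset, EvalMetrics   # assumed helpers

# Load a pretrained x2 EDSR checkpoint (model id is an assumption).
model = EdsrModel.from_pretrained('eugenesiow/edsr', scale=2)

# Evaluate PSNR/SSIM on the Set5 benchmark (dataset id is an assumption).
dataset = load_dataset('eugenesiow/Set5', 'bicubic_x2', split='validation')
EvalMetrics().evaluate(model, EvalDataset(dataset))
```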