Divya-A committed e6e4be7 (0 parents)

ImageColoriser: Flask app and colorization models for Hugging Face Space
.dockerignore ADDED
@@ -0,0 +1,9 @@
+ .git
+ .gitignore
+ **/__pycache__
+ **/*.pyc
+ uploads
+ outputs
+ .env
+ *.md
+ ImageColoriser
.gitignore ADDED
@@ -0,0 +1,13 @@
+ uploads/
+ outputs/
+ imgs/
+ imgs_out/
+ /saved_eccv16.png
+ /saved_siggraph17.png
+ ImageColoriser/
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ .env
+ .venv/
+ venv/
Dockerfile ADDED
@@ -0,0 +1,23 @@
+ # Hugging Face Docker Space: https://huggingface.co/docs/hub/spaces-sdks-docker
+ FROM python:3.10-slim
+
+ RUN apt-get update && apt-get install -y --no-install-recommends \
+     libgl1 \
+     libglib2.0-0 \
+     && rm -rf /var/lib/apt/lists/*
+
+ RUN useradd -m -u 1000 user
+ USER user
+ ENV PATH="/home/user/.local/bin:$PATH"
+ WORKDIR /app
+
+ COPY --chown=user requirements.txt requirements.txt
+ RUN pip install --no-cache-dir --upgrade pip \
+     && pip install --no-cache-dir -r requirements.txt \
+     --index-url https://download.pytorch.org/whl/cpu \
+     --extra-index-url https://pypi.org/simple
+
+ COPY --chown=user . /app
+
+ EXPOSE 7860
+ CMD ["gunicorn", "--bind", "0.0.0.0:7860", "--workers", "1", "--threads", "2", "--timeout", "180", "app:app"]
LICENSE ADDED
@@ -0,0 +1,23 @@
+ Copyright (c) 2016, Richard Zhang, Phillip Isola, Alexei A. Efros
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are met:
+
+ * Redistributions of source code must retain the above copyright notice, this
+   list of conditions and the following disclaimer.
+
+ * Redistributions in binary form must reproduce the above copyright notice,
+   this list of conditions and the following disclaimer in the documentation
+   and/or other materials provided with the distribution.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+ FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+ OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
README.md ADDED
@@ -0,0 +1,73 @@
+ ---
+ title: ImageColoriser
+ emoji: 🎨
+ colorFrom: gray
+ colorTo: red
+ sdk: docker
+ pinned: false
+ license: apache-2.0
+ short_description: Colorize B&W photos with ECCV16 and SIGGRAPH17.
+ ---
+
+ <!--<h3><b>Colorful Image Colorization</b></h3>-->
+ ## <b>Colorful Image Colorization</b> [[Project Page]](http://richzhang.github.io/colorization/) <br>
+ [Richard Zhang](https://richzhang.github.io/), [Phillip Isola](http://web.mit.edu/phillipi/), [Alexei A. Efros](http://www.eecs.berkeley.edu/~efros/). In [ECCV, 2016](http://arxiv.org/pdf/1603.08511.pdf).
+
+ **+ automatic colorization functionality for Real-Time User-Guided Image Colorization with Learned Deep Priors, SIGGRAPH 2017!**
+
+ **[Sept20 Update]** Since it has been 3-4 years, I converted this repo to support minimal test-time usage in PyTorch. I also added our SIGGRAPH 2017 method (it is interactive, but can also run automatically). See the [Caffe branch](https://github.com/richzhang/colorization/tree/caffe) for the original release.
+
+ ![Teaser Image](http://richzhang.github.io/colorization/resources/images/teaser4.jpg)
+
+ **Clone the repository; install dependencies**
+
+ ```
+ git clone https://github.com/richzhang/colorization.git
+ pip install -r requirements.txt
+ ```
+
+ **Colorize!** This script colorizes an image. The results should match the images in the `imgs_out` folder.
+
+ ```
+ python demo_release.py -i imgs/ansel_adams3.jpg
+ ```
+
+ **Model loading in Python** The following loads the pretrained colorizers. See [demo_release.py](demo_release.py) for details on how to run the models. There are pre- and post-processing steps: convert to Lab space, resize to 256x256, colorize, concatenate the predicted color channels with the original full-resolution lightness channel, and convert back to RGB.
+
+ ```python
+ import colorizers
+ colorizer_eccv16 = colorizers.eccv16().eval()
+ colorizer_siggraph17 = colorizers.siggraph17().eval()
+ ```
+
+ ### Original implementation (Caffe branch)
+
+ The original implementation contained training and testing code, our network and AlexNet (for representation learning tests), as well as the representation learning tests themselves. It is in Caffe and is no longer supported. Please see the [caffe](https://github.com/richzhang/colorization/tree/caffe) branch for it.
+
+ ### Citation ###
+
+ If you find these models useful for your research, please cite with these BibTeX entries.
+
+ ```
+ @inproceedings{zhang2016colorful,
+   title={Colorful Image Colorization},
+   author={Zhang, Richard and Isola, Phillip and Efros, Alexei A},
+   booktitle={ECCV},
+   year={2016}
+ }
+
+ @article{zhang2017real,
+   title={Real-Time User-Guided Image Colorization with Learned Deep Priors},
+   author={Zhang, Richard and Zhu, Jun-Yan and Isola, Phillip and Geng, Xinyang and Lin, Angela S and Yu, Tianhe and Efros, Alexei A},
+   journal={ACM Transactions on Graphics (TOG)},
+   volume={9},
+   number={4},
+   year={2017},
+   publisher={ACM}
+ }
+ ```
+
+ ### Misc ###
+ Contact Richard Zhang at rich.zhang at eecs.berkeley.edu for any questions or comments.
+
+ Space metadata: [Hugging Face Spaces config reference](https://huggingface.co/docs/hub/spaces-config-reference)
app.py ADDED
@@ -0,0 +1,114 @@
+ from flask import Flask, render_template, request, send_file, redirect, url_for
+ import os
+ import uuid
+ from colorizers import *
+ import torch
+ from colorizers.util import load_img, preprocess_img, postprocess_tens
+ from PIL import Image
+ import numpy as np
+ import gc
+
+ # Disable CUDA to save memory on Render
+ torch.cuda.is_available = lambda: False
+
+ UPLOAD_FOLDER = 'uploads'
+ OUTPUT_FOLDER = 'outputs'
+ ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'bmp'}
+
+ app = Flask(__name__)
+ app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
+ app.config['OUTPUT_FOLDER'] = OUTPUT_FOLDER
+
+ os.makedirs(UPLOAD_FOLDER, exist_ok=True)
+ os.makedirs(OUTPUT_FOLDER, exist_ok=True)
+
+ # Load models once at startup (CPU only)
+ print("Loading colorization models...")
+ colorizer_eccv16 = eccv16(pretrained=True).eval()
+ colorizer_siggraph17 = siggraph17(pretrained=True).eval()
+ print("Models loaded successfully!")
+
+ def allowed_file(filename):
+     return '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS
+
+ @app.route('/', methods=['GET', 'POST'])
+ def index():
+     if request.method == 'POST':
+         files = request.files.getlist('files')
+
+         if not files:
+             return render_template('index.html', error='No files selected')
+
+         results = []
+         for file in files:
+             if file.filename == '':
+                 continue
+             if file and allowed_file(file.filename):
+                 filename = str(uuid.uuid4()) + os.path.splitext(file.filename)[1]
+                 filepath = os.path.join(app.config['UPLOAD_FOLDER'], filename)
+                 file.save(filepath)
+
+                 try:
+                     out_paths = colorize_and_save(filepath, filename)
+                     results.append({
+                         'orig_img': url_for('uploaded_file', filename=filename),
+                         'eccv16_img': url_for('output_file', filename=os.path.basename(out_paths['eccv16'])),
+                         'siggraph17_img': url_for('output_file', filename=os.path.basename(out_paths['siggraph17'])),
+                         'filename': os.path.splitext(filename)[0]
+                     })
+                 except Exception as e:
+                     print(f"Error processing {file.filename}: {str(e)}")
+                     continue
+             else:
+                 continue
+
+         if len(results) == 0:
+             return render_template('index.html', error='No valid files to process')
+
+         return render_template('result.html', images=results, total_count=len(results))
+
+     return render_template('index.html')
+
+ @app.route('/uploads/<filename>')
+ def uploaded_file(filename):
+     return send_file(os.path.join(app.config['UPLOAD_FOLDER'], filename))
+
+ @app.route('/outputs/<filename>')
+ def output_file(filename):
+     return send_file(os.path.join(app.config['OUTPUT_FOLDER'], filename))
+
+ def colorize_and_save(img_path, filename):
+     img = load_img(img_path)
+     (tens_l_orig, tens_l_rs) = preprocess_img(img, HW=(256,256))
+
+     # Colorize with both models (tens_l_rs is already a PyTorch tensor)
+     with torch.no_grad():
+         out_ab_eccv16 = colorizer_eccv16(tens_l_rs)
+         out_ab_siggraph17 = colorizer_siggraph17(tens_l_rs)
+
+     out_img_eccv16 = postprocess_tens(tens_l_orig, out_ab_eccv16.cpu())
+     out_img_siggraph17 = postprocess_tens(tens_l_orig, out_ab_siggraph17.cpu())
+
+     # Convert to uint8 and save with PIL
+     base_filename = os.path.splitext(filename)[0]
+
+     out_img_eccv16_uint8 = (np.clip(out_img_eccv16, 0, 1) * 255).astype(np.uint8)
+     eccv16_path = os.path.join(OUTPUT_FOLDER, f'{base_filename}_eccv16.png')
+     Image.fromarray(out_img_eccv16_uint8).save(eccv16_path)
+
+     out_img_siggraph17_uint8 = (np.clip(out_img_siggraph17, 0, 1) * 255).astype(np.uint8)
+     siggraph17_path = os.path.join(OUTPUT_FOLDER, f'{base_filename}_siggraph17.png')
+     Image.fromarray(out_img_siggraph17_uint8).save(siggraph17_path)
+
+     # Clean up memory
+     del tens_l_rs, out_ab_eccv16, out_ab_siggraph17, img, tens_l_orig
+     del out_img_eccv16, out_img_siggraph17, out_img_eccv16_uint8, out_img_siggraph17_uint8
+     gc.collect()
+
+     return {'eccv16': eccv16_path, 'siggraph17': siggraph17_path}
+
+ if __name__ == '__main__':
+     # Support both Render (PORT env var) and Hugging Face Spaces (default 7860)
+     port = int(os.getenv('PORT', os.getenv('SERVER_PORT', 7860)))
+     app.run(host='0.0.0.0', port=port, debug=False)
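The upload-handling logic in `app.py` combines an extension whitelist with UUID-based renaming so that uploaded files can neither collide nor carry unsafe names. A standalone sketch of just that piece (pure standard library; `ALLOWED_EXTENSIONS` copied from the app, `unique_name` is a hypothetical helper name for the inline expression used above):

```python
import os
import uuid

ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'bmp'}

def allowed_file(filename):
    # A file qualifies if it has an extension and that extension is whitelisted
    # (comparison is case-insensitive).
    return '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

def unique_name(filename):
    # Replace the client-supplied name with a random UUID, keeping only the
    # extension, so uploads cannot collide or smuggle path components.
    return str(uuid.uuid4()) + os.path.splitext(filename)[1]

print(allowed_file('photo.JPG'))   # True
print(allowed_file('script.exe'))  # False
print(unique_name('photo.jpg'))    # e.g. '3f2b8c1e-....jpg'
```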
colorizers/__init__.py ADDED
@@ -0,0 +1,6 @@
+
+ from .base_color import *
+ from .eccv16 import *
+ from .siggraph17 import *
+ from .util import *
+
colorizers/base_color.py ADDED
@@ -0,0 +1,24 @@
+
+ import torch
+ from torch import nn
+
+ class BaseColor(nn.Module):
+     def __init__(self):
+         super(BaseColor, self).__init__()
+
+         self.l_cent = 50.
+         self.l_norm = 100.
+         self.ab_norm = 110.
+
+     def normalize_l(self, in_l):
+         return (in_l - self.l_cent) / self.l_norm
+
+     def unnormalize_l(self, in_l):
+         return in_l * self.l_norm + self.l_cent
+
+     def normalize_ab(self, in_ab):
+         return in_ab / self.ab_norm
+
+     def unnormalize_ab(self, in_ab):
+         return in_ab * self.ab_norm
+
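The constants in `BaseColor` are affine maps that bring the Lab channels into a range the networks expect: L in [0, 100] is centered and scaled, a/b (roughly [-110, 110]) are divided by 110. A minimal pure-Python check of the same arithmetic (constants copied from the class above):

```python
L_CENT, L_NORM, AB_NORM = 50.0, 100.0, 110.0

def normalize_l(l):
    # Lab lightness L in [0, 100] -> roughly [-0.5, 0.5]
    return (l - L_CENT) / L_NORM

def unnormalize_l(l):
    # Inverse of normalize_l
    return l * L_NORM + L_CENT

def normalize_ab(ab):
    # Lab a/b roughly in [-110, 110] -> roughly [-1, 1]
    return ab / AB_NORM

# These affine maps round-trip exactly.
print(unnormalize_l(normalize_l(75.0)))  # 75.0
print(normalize_ab(110.0))               # 1.0
print(normalize_l(0.0))                  # -0.5
```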
colorizers/eccv16.py ADDED
@@ -0,0 +1,104 @@
+
+ import torch
+ import torch.nn as nn
+ import numpy as np
+
+ from .base_color import *
+
+ class ECCVGenerator(BaseColor):
+     def __init__(self, norm_layer=nn.BatchNorm2d):
+         super(ECCVGenerator, self).__init__()
+
+         model1 = [nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1, bias=True),]
+         model1 += [nn.ReLU(True),]
+         model1 += [nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1, bias=True),]
+         model1 += [nn.ReLU(True),]
+         model1 += [norm_layer(64),]
+
+         model2 = [nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1, bias=True),]
+         model2 += [nn.ReLU(True),]
+         model2 += [nn.Conv2d(128, 128, kernel_size=3, stride=2, padding=1, bias=True),]
+         model2 += [nn.ReLU(True),]
+         model2 += [norm_layer(128),]
+
+         model3 = [nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1, bias=True),]
+         model3 += [nn.ReLU(True),]
+         model3 += [nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=True),]
+         model3 += [nn.ReLU(True),]
+         model3 += [nn.Conv2d(256, 256, kernel_size=3, stride=2, padding=1, bias=True),]
+         model3 += [nn.ReLU(True),]
+         model3 += [norm_layer(256),]
+
+         model4 = [nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1, bias=True),]
+         model4 += [nn.ReLU(True),]
+         model4 += [nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=True),]
+         model4 += [nn.ReLU(True),]
+         model4 += [nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=True),]
+         model4 += [nn.ReLU(True),]
+         model4 += [norm_layer(512),]
+
+         model5 = [nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=True),]
+         model5 += [nn.ReLU(True),]
+         model5 += [nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=True),]
+         model5 += [nn.ReLU(True),]
+         model5 += [nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=True),]
+         model5 += [nn.ReLU(True),]
+         model5 += [norm_layer(512),]
+
+         model6 = [nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=True),]
+         model6 += [nn.ReLU(True),]
+         model6 += [nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=True),]
+         model6 += [nn.ReLU(True),]
+         model6 += [nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=True),]
+         model6 += [nn.ReLU(True),]
+         model6 += [norm_layer(512),]
+
+         model7 = [nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=True),]
+         model7 += [nn.ReLU(True),]
+         model7 += [nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=True),]
+         model7 += [nn.ReLU(True),]
+         model7 += [nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=True),]
+         model7 += [nn.ReLU(True),]
+         model7 += [norm_layer(512),]
+
+         model8 = [nn.ConvTranspose2d(512, 256, kernel_size=4, stride=2, padding=1, bias=True),]
+         model8 += [nn.ReLU(True),]
+         model8 += [nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=True),]
+         model8 += [nn.ReLU(True),]
+         model8 += [nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=True),]
+         model8 += [nn.ReLU(True),]
+
+         model8 += [nn.Conv2d(256, 313, kernel_size=1, stride=1, padding=0, bias=True),]
+
+         self.model1 = nn.Sequential(*model1)
+         self.model2 = nn.Sequential(*model2)
+         self.model3 = nn.Sequential(*model3)
+         self.model4 = nn.Sequential(*model4)
+         self.model5 = nn.Sequential(*model5)
+         self.model6 = nn.Sequential(*model6)
+         self.model7 = nn.Sequential(*model7)
+         self.model8 = nn.Sequential(*model8)
+
+         self.softmax = nn.Softmax(dim=1)
+         self.model_out = nn.Conv2d(313, 2, kernel_size=1, padding=0, dilation=1, stride=1, bias=False)
+         self.upsample4 = nn.Upsample(scale_factor=4, mode='bilinear')
+
+     def forward(self, input_l):
+         conv1_2 = self.model1(self.normalize_l(input_l))
+         conv2_2 = self.model2(conv1_2)
+         conv3_3 = self.model3(conv2_2)
+         conv4_3 = self.model4(conv3_3)
+         conv5_3 = self.model5(conv4_3)
+         conv6_3 = self.model6(conv5_3)
+         conv7_3 = self.model7(conv6_3)
+         conv8_3 = self.model8(conv7_3)
+         out_reg = self.model_out(self.softmax(conv8_3))
+
+         return self.unnormalize_ab(self.upsample4(out_reg))
+
+ def eccv16(pretrained=True):
+     model = ECCVGenerator()
+     if pretrained:
+         import torch.utils.model_zoo as model_zoo
+         model.load_state_dict(model_zoo.load_url(
+             'https://colorizers.s3.us-east-2.amazonaws.com/colorization_release_v2-9b330a0b.pth',
+             map_location='cpu', check_hash=True))
+     return model
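The ECCV16 head predicts 313 channels per pixel (a distribution over quantized ab color bins), turns them into probabilities with `nn.Softmax(dim=1)`, and collapses them to two ab values with the 1x1 `model_out` convolution. The softmax step alone can be sketched in pure Python (the 313-bin count comes from the layer above; the toy logits here are made up):

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating,
    # exactly what nn.Softmax does per pixel along the channel dimension.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Toy stand-in for one pixel's 313 bin logits (hypothetical values).
logits = [0.1 * i for i in range(313)]
probs = softmax(logits)

print(abs(sum(probs) - 1.0) < 1e-9)    # True: probabilities sum to 1
print(probs.index(max(probs)) == 312)  # True: the highest logit wins
```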
colorizers/siggraph17.py ADDED
@@ -0,0 +1,168 @@
+ import torch
+ import torch.nn as nn
+
+ from .base_color import *
+
+ class SIGGRAPHGenerator(BaseColor):
+     def __init__(self, norm_layer=nn.BatchNorm2d, classes=529):
+         super(SIGGRAPHGenerator, self).__init__()
+
+         # Conv1
+         model1 = [nn.Conv2d(4, 64, kernel_size=3, stride=1, padding=1, bias=True),]
+         model1 += [nn.ReLU(True),]
+         model1 += [nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=True),]
+         model1 += [nn.ReLU(True),]
+         model1 += [norm_layer(64),]
+         # add a subsampling operation
+
+         # Conv2
+         model2 = [nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1, bias=True),]
+         model2 += [nn.ReLU(True),]
+         model2 += [nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1, bias=True),]
+         model2 += [nn.ReLU(True),]
+         model2 += [norm_layer(128),]
+         # add a subsampling layer operation
+
+         # Conv3
+         model3 = [nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1, bias=True),]
+         model3 += [nn.ReLU(True),]
+         model3 += [nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=True),]
+         model3 += [nn.ReLU(True),]
+         model3 += [nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=True),]
+         model3 += [nn.ReLU(True),]
+         model3 += [norm_layer(256),]
+         # add a subsampling layer operation
+
+         # Conv4
+         model4 = [nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1, bias=True),]
+         model4 += [nn.ReLU(True),]
+         model4 += [nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=True),]
+         model4 += [nn.ReLU(True),]
+         model4 += [nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=True),]
+         model4 += [nn.ReLU(True),]
+         model4 += [norm_layer(512),]
+
+         # Conv5
+         model5 = [nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=True),]
+         model5 += [nn.ReLU(True),]
+         model5 += [nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=True),]
+         model5 += [nn.ReLU(True),]
+         model5 += [nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=True),]
+         model5 += [nn.ReLU(True),]
+         model5 += [norm_layer(512),]
+
+         # Conv6
+         model6 = [nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=True),]
+         model6 += [nn.ReLU(True),]
+         model6 += [nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=True),]
+         model6 += [nn.ReLU(True),]
+         model6 += [nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=True),]
+         model6 += [nn.ReLU(True),]
+         model6 += [norm_layer(512),]
+
+         # Conv7
+         model7 = [nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=True),]
+         model7 += [nn.ReLU(True),]
+         model7 += [nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=True),]
+         model7 += [nn.ReLU(True),]
+         model7 += [nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=True),]
+         model7 += [nn.ReLU(True),]
+         model7 += [norm_layer(512),]
+
+         # Conv8
+         model8up = [nn.ConvTranspose2d(512, 256, kernel_size=4, stride=2, padding=1, bias=True)]
+         model3short8 = [nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=True),]
+
+         model8 = [nn.ReLU(True),]
+         model8 += [nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=True),]
+         model8 += [nn.ReLU(True),]
+         model8 += [nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=True),]
+         model8 += [nn.ReLU(True),]
+         model8 += [norm_layer(256),]
+
+         # Conv9
+         model9up = [nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1, bias=True),]
+         model2short9 = [nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1, bias=True),]
+         # add the two feature maps above
+
+         model9 = [nn.ReLU(True),]
+         model9 += [nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1, bias=True),]
+         model9 += [nn.ReLU(True),]
+         model9 += [norm_layer(128),]
+
+         # Conv10
+         model10up = [nn.ConvTranspose2d(128, 128, kernel_size=4, stride=2, padding=1, bias=True),]
+         model1short10 = [nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1, bias=True),]
+         # add the two feature maps above
+
+         model10 = [nn.ReLU(True),]
+         model10 += [nn.Conv2d(128, 128, kernel_size=3, dilation=1, stride=1, padding=1, bias=True),]
+         model10 += [nn.LeakyReLU(negative_slope=.2),]
+
+         # classification output
+         model_class = [nn.Conv2d(256, classes, kernel_size=1, padding=0, dilation=1, stride=1, bias=True),]
+
+         # regression output
+         model_out = [nn.Conv2d(128, 2, kernel_size=1, padding=0, dilation=1, stride=1, bias=True),]
+         model_out += [nn.Tanh()]
+
+         self.model1 = nn.Sequential(*model1)
+         self.model2 = nn.Sequential(*model2)
+         self.model3 = nn.Sequential(*model3)
+         self.model4 = nn.Sequential(*model4)
+         self.model5 = nn.Sequential(*model5)
+         self.model6 = nn.Sequential(*model6)
+         self.model7 = nn.Sequential(*model7)
+         self.model8up = nn.Sequential(*model8up)
+         self.model8 = nn.Sequential(*model8)
+         self.model9up = nn.Sequential(*model9up)
+         self.model9 = nn.Sequential(*model9)
+         self.model10up = nn.Sequential(*model10up)
+         self.model10 = nn.Sequential(*model10)
+         self.model3short8 = nn.Sequential(*model3short8)
+         self.model2short9 = nn.Sequential(*model2short9)
+         self.model1short10 = nn.Sequential(*model1short10)
+
+         self.model_class = nn.Sequential(*model_class)
+         self.model_out = nn.Sequential(*model_out)
+
+         self.upsample4 = nn.Sequential(*[nn.Upsample(scale_factor=4, mode='bilinear'),])
+         self.softmax = nn.Sequential(*[nn.Softmax(dim=1),])
+
+     def forward(self, input_A, input_B=None, mask_B=None):
+         if input_B is None:
+             input_B = torch.cat((input_A*0, input_A*0), dim=1)
+         if mask_B is None:
+             mask_B = input_A*0
+
+         conv1_2 = self.model1(torch.cat((self.normalize_l(input_A), self.normalize_ab(input_B), mask_B), dim=1))
+         conv2_2 = self.model2(conv1_2[:,:,::2,::2])
+         conv3_3 = self.model3(conv2_2[:,:,::2,::2])
+         conv4_3 = self.model4(conv3_3[:,:,::2,::2])
+         conv5_3 = self.model5(conv4_3)
+         conv6_3 = self.model6(conv5_3)
+         conv7_3 = self.model7(conv6_3)
+
+         conv8_up = self.model8up(conv7_3) + self.model3short8(conv3_3)
+         conv8_3 = self.model8(conv8_up)
+         conv9_up = self.model9up(conv8_3) + self.model2short9(conv2_2)
+         conv9_3 = self.model9(conv9_up)
+         conv10_up = self.model10up(conv9_3) + self.model1short10(conv1_2)
+         conv10_2 = self.model10(conv10_up)
+         out_reg = self.model_out(conv10_2)
+
+         return self.unnormalize_ab(out_reg)
+
+ def siggraph17(pretrained=True):
+     model = SIGGRAPHGenerator()
+     if pretrained:
+         import torch.utils.model_zoo as model_zoo
+         model.load_state_dict(model_zoo.load_url(
+             'https://colorizers.s3.us-east-2.amazonaws.com/siggraph17-df00044c.pth',
+             map_location='cpu', check_hash=True))
+     return model
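Instead of pooling layers, `SIGGRAPHGenerator.forward` downsamples with strided slicing such as `conv1_2[:, :, ::2, ::2]`, which keeps every second row and column of each feature map. The same operation on a plain nested list, as a minimal illustration:

```python
def subsample2(feature_map):
    # Pure-Python analogue of tensor[::2, ::2] on the spatial dimensions:
    # keep every second row, then every second column of each kept row.
    return [row[::2] for row in feature_map[::2]]

fm = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 map, values 0..15
print(subsample2(fm))  # [[0, 2], [8, 10]]
```

Each application halves both spatial dimensions, which is why the decoder needs three upsampling stages (`model8up`, `model9up`, `model10up`) to get back to the input resolution.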
colorizers/util.py ADDED
@@ -0,0 +1,50 @@
+
+ from PIL import Image
+ import numpy as np
+ from skimage import color
+ import torch
+ import torch.nn.functional as F
+
+ def load_img(img_path):
+     img = Image.open(img_path)
+     # Convert RGBA to RGB if needed
+     if img.mode == 'RGBA':
+         img = img.convert('RGB')
+     out_np = np.asarray(img)
+     if out_np.ndim == 2:
+         out_np = np.tile(out_np[:, :, None], 3)
+     return out_np
+
+ def resize_img(img, HW=(256,256), resample=3):
+     return np.asarray(Image.fromarray(img).resize((HW[1], HW[0]), resample=resample))
+
+ def preprocess_img(img_rgb_orig, HW=(256,256), resample=3):
+     # Return the original-size L channel and the resized L channel as torch Tensors
+     img_rgb_rs = resize_img(img_rgb_orig, HW=HW, resample=resample)
+
+     img_lab_orig = color.rgb2lab(img_rgb_orig)
+     img_lab_rs = color.rgb2lab(img_rgb_rs)
+
+     img_l_orig = img_lab_orig[:,:,0]
+     img_l_rs = img_lab_rs[:,:,0]
+
+     tens_orig_l = torch.Tensor(img_l_orig)[None,None,:,:]
+     tens_rs_l = torch.Tensor(img_l_rs)[None,None,:,:]
+
+     return (tens_orig_l, tens_rs_l)
+
+ def postprocess_tens(tens_orig_l, out_ab, mode='bilinear'):
+     # tens_orig_l  1 x 1 x H_orig x W_orig
+     # out_ab       1 x 2 x H x W
+
+     HW_orig = tens_orig_l.shape[2:]
+     HW = out_ab.shape[2:]
+
+     # Resize the predicted ab channels back to the original resolution if needed
+     if HW_orig[0] != HW[0] or HW_orig[1] != HW[1]:
+         out_ab_orig = F.interpolate(out_ab, size=HW_orig, mode=mode)
+     else:
+         out_ab_orig = out_ab
+
+     out_lab_orig = torch.cat((tens_orig_l, out_ab_orig), dim=1)
+     return color.lab2rgb(out_lab_orig.data.cpu().numpy()[0,...].transpose((1,2,0)))
demo_release.py ADDED
@@ -0,0 +1,138 @@
+ import argparse
+ import matplotlib.pyplot as plt
+ import torch
+ from colorizers import *
+
+ # --- GUI Imports ---
+ import tkinter as tk
+ from tkinter import filedialog, messagebox
+ from PIL import Image, ImageTk
+ import os
+
+ def colorize_image(img_path, use_gpu=False, save_prefix='saved'):
+     # load colorizers
+     colorizer_eccv16 = eccv16(pretrained=True).eval()
+     colorizer_siggraph17 = siggraph17(pretrained=True).eval()
+     if use_gpu:
+         colorizer_eccv16.cuda()
+         colorizer_siggraph17.cuda()
+
+     img = load_img(img_path)
+     (tens_l_orig, tens_l_rs) = preprocess_img(img, HW=(256,256))
+     if use_gpu:
+         tens_l_rs = tens_l_rs.cuda()
+
+     img_bw = postprocess_tens(tens_l_orig, torch.cat((0*tens_l_orig, 0*tens_l_orig), dim=1))
+     out_img_eccv16 = postprocess_tens(tens_l_orig, colorizer_eccv16(tens_l_rs).cpu())
+     out_img_siggraph17 = postprocess_tens(tens_l_orig, colorizer_siggraph17(tens_l_rs).cpu())
+
+     plt.imsave(f'{save_prefix}_eccv16.png', out_img_eccv16)
+     plt.imsave(f'{save_prefix}_siggraph17.png', out_img_siggraph17)
+
+     return img, img_bw, out_img_eccv16, out_img_siggraph17, f'{save_prefix}_eccv16.png', f'{save_prefix}_siggraph17.png'
+
+ def run_cli():
+     parser = argparse.ArgumentParser()
+     parser.add_argument('-i','--img_path', type=str, default='imgs/ansel_adams3.jpg')
+     parser.add_argument('--use_gpu', action='store_true', help='whether to use GPU')
+     parser.add_argument('-o','--save_prefix', type=str, default='saved', help='will save into this file with {eccv16.png, siggraph17.png} suffixes')
+     opt = parser.parse_args()
+
+     img, img_bw, out_img_eccv16, out_img_siggraph17, out_eccv16_path, out_siggraph17_path = colorize_image(opt.img_path, opt.use_gpu, opt.save_prefix)
+
+     plt.figure(figsize=(12,8))
+     plt.subplot(2,2,1)
+     plt.imshow(img)
+     plt.title('Original')
+     plt.axis('off')
+
+     plt.subplot(2,2,2)
+     plt.imshow(img_bw)
+     plt.title('Input')
+     plt.axis('off')
+
+     plt.subplot(2,2,3)
+     plt.imshow(out_img_eccv16)
+     plt.title('Output (ECCV 16)')
+     plt.axis('off')
+
+     plt.subplot(2,2,4)
+     plt.imshow(out_img_siggraph17)
+     plt.title('Output (SIGGRAPH 17)')
+     plt.axis('off')
+     plt.show()
+
+ # --- GUI Implementation ---
+ def run_gui():
+     root = tk.Tk()
+     root.title('Image Colorization Demo')
+     root.geometry('600x400')
+
+     img_path_var = tk.StringVar()
+     save_prefix_var = tk.StringVar(value='saved')
+     use_gpu_var = tk.BooleanVar(value=False)
+
+     def select_image():
+         file_path = filedialog.askopenfilename(filetypes=[('Image Files', '*.jpg;*.jpeg;*.png;*.bmp')])
+         if file_path:
+             img_path_var.set(file_path)
+
+     def process_image():
+         img_path = img_path_var.get()
+         save_prefix = save_prefix_var.get()
+         use_gpu = use_gpu_var.get()
+         if not img_path:
+             messagebox.showerror('Error', 'Please select an image file.')
+             return
+         try:
+             img, img_bw, out_img_eccv16, out_img_siggraph17, out_eccv16_path, out_siggraph17_path = colorize_image(img_path, use_gpu, save_prefix)
+             messagebox.showinfo('Success', f'Colorized images saved as:\n{out_eccv16_path}\n{out_siggraph17_path}')
+             show_all_images(img, img_bw, out_img_eccv16, out_img_siggraph17)
+         except Exception as e:
+             messagebox.showerror('Error', str(e))
+
+     def show_all_images(img, img_bw, out_img_eccv16, out_img_siggraph17):
+         top = tk.Toplevel(root)
+         top.title('Input and Output Images')
+         # Convert numpy arrays to PIL Images if needed
+         def to_pil(im):
+             if isinstance(im, Image.Image):
+                 return im
+             import numpy as np
+             arr = (im * 255).astype('uint8') if im.max() <= 1.0 else im.astype('uint8')
+             if arr.ndim == 2:
+                 return Image.fromarray(arr, mode='L')
+             return Image.fromarray(arr)
+         pil_imgs = [to_pil(img), to_pil(img_bw), to_pil(out_img_eccv16), to_pil(out_img_siggraph17)]
+         titles = ['Original', 'Grayscale', 'ECCV16', 'SIGGRAPH17']
+         img_tks = []
+         for i, pil_img in enumerate(pil_imgs):
+             pil_img = pil_img.resize((200, 200))
+             img_tk = ImageTk.PhotoImage(pil_img)
+             img_tks.append(img_tk)
+             row, col = divmod(i, 2)
+             label = tk.Label(top, image=img_tk)
+             label.image = img_tk
+             label.grid(row=row*2, column=col, padx=10, pady=5)
+             title_label = tk.Label(top, text=titles[i])
+             title_label.grid(row=row*2+1, column=col)
+
+     tk.Label(root, text='Select Image:').pack(pady=10)
+     tk.Entry(root, textvariable=img_path_var, width=50).pack()
+     tk.Button(root, text='Browse', command=select_image).pack(pady=5)
+
+     tk.Label(root, text='Save Prefix:').pack(pady=10)
+     tk.Entry(root, textvariable=save_prefix_var, width=20).pack()
+
+     tk.Checkbutton(root, text='Use GPU', variable=use_gpu_var).pack(pady=5)
+
+     tk.Button(root, text='Colorize Image', command=process_image, bg='lightblue').pack(pady=20)
+
+     root.mainloop()
+
+ # --- Entry Point ---
+ if __name__ == '__main__':
+     import sys
+     if len(sys.argv) > 1:
+         run_cli()
+     else:
+         run_gui()
how_colorization_deep_dive.md ADDED
@@ -0,0 +1,140 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
# Deep Dive: How Image Colorization Works in This Project

This document explains, step by step, how your image moves through the code, what the colorizers do, and how everything fits together. It is written for beginners but digs deeper into the code and logic.

---

## 1. Where Does the Image Come From?
- The user picks an image using the GUI or the command line.
- The path to the image is passed to the `colorize_image` function in `demo_release.py`.

```python
def colorize_image(img_path, use_gpu=False, save_prefix='saved'):
    # ...
```

---

## 2. Loading and Preprocessing the Image
- The image is loaded as a NumPy array (numbers for each pixel).
- It is resized and converted from RGB (red, green, blue) to the LAB color space (L = lightness, a/b = color information).
- Only the L (lightness) channel is fed to the colorizer.

```python
img = load_img(img_path)  # Loads image as array
(tens_l_orig, tens_l_rs) = preprocess_img(img, HW=(256,256))
```

- `tens_l_orig`: the original lightness channel (for the final output size)
- `tens_l_rs`: the resized lightness channel (for the model)

---
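The split into an original-resolution L channel and a resized copy can be sketched in plain NumPy. This is a hedged approximation, not the project's real `preprocess_img`: the real code converts RGB to LAB with `skimage.color.rgb2lab` and resizes with proper interpolation, whereas this sketch approximates lightness with luma weights and resizes with nearest-neighbour indexing.

```python
import numpy as np

def preprocess_sketch(img, HW=(256, 256)):
    """Toy version of preprocess_img: returns (original L, resized L).

    Assumption: img is an HxWx3 uint8 RGB array. The real code derives L
    from the LAB conversion; here it is approximated by Rec. 601 luma.
    """
    img = img.astype(np.float64) / 255.0
    L = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]  # lightness proxy

    # Nearest-neighbour resize down to the model's input size
    h, w = L.shape
    rows = np.arange(HW[0]) * h // HW[0]
    cols = np.arange(HW[1]) * w // HW[1]
    L_rs = L[rows][:, cols]

    # Add (batch, channel) axes, matching the NCHW layout of the torch tensors
    return L[None, None, ...], L_rs[None, None, ...]

img = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
l_orig, l_rs = preprocess_sketch(img)
print(l_orig.shape, l_rs.shape)  # (1, 1, 480, 640) (1, 1, 256, 256)
```

The key takeaway is the pair of outputs: one lightness map at the photo's own size, and a fixed 256×256 copy that the network actually sees.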

## 3. What Are the Colorizers?
- **Colorizers** are deep neural networks trained to predict which colors belong where in a grayscale image.
- There are two: `eccv16` and `siggraph17`. Each is a different model architecture, trained on large collections of color photos.
- They are defined in `colorizers/eccv16.py` and `colorizers/siggraph17.py`.

### Why Two Colorizers?
- They use different architectures and training strategies, so their results look a bit different. You can compare both!

---

## 4. How Do the Colorizers Work?
- Each colorizer is a big stack of layers (like LEGO blocks):
  - Convolutional layers (to look at small parts of the image)
  - Activation layers (to help the model learn)
  - Upsampling layers (to make the output bigger again)
- The model takes the L channel and predicts the ab channels (the color part).

```python
colorizer_eccv16 = eccv16(pretrained=True).eval()
colorizer_siggraph17 = siggraph17(pretrained=True).eval()

# Move to GPU if requested
if use_gpu:
    colorizer_eccv16.cuda()
    colorizer_siggraph17.cuda()
```

- The models are loaded with pretrained weights (learned from real photos).

---

## 5. Running the Model
- The grayscale image (L channel) is sent through each colorizer.
- The output is the ab channels (the model's color guesses).

```python
out_img_eccv16 = postprocess_tens(tens_l_orig, colorizer_eccv16(tens_l_rs).cpu())
out_img_siggraph17 = postprocess_tens(tens_l_orig, colorizer_siggraph17(tens_l_rs).cpu())
```

- `colorizer_eccv16(tens_l_rs)` runs the model and returns the predicted color channels.
- `postprocess_tens` combines the original L with the new ab to make a color image.

---
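What `postprocess_tens` does can be sketched with NumPy. Again this is a hedged approximation: the real code upsamples the ab output with `torch.nn.functional.interpolate` and converts LAB to RGB via `skimage.color.lab2rgb`; this sketch upsamples with nearest-neighbour indexing and stops at the stacked LAB planes.

```python
import numpy as np

def postprocess_sketch(l_orig, ab_out):
    """Toy version of postprocess_tens.

    Assumptions: l_orig has shape (1, 1, H, W) and ab_out has shape
    (1, 2, h, w). We upsample ab to (H, W), glue it onto L, and return
    an HxWx3 LAB image (the real code would then run lab2rgb).
    """
    H, W = l_orig.shape[2:]
    h, w = ab_out.shape[2:]
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    ab_up = ab_out[:, :, rows][:, :, :, cols]      # (1, 2, H, W)
    lab = np.concatenate([l_orig, ab_up], axis=1)  # (1, 3, H, W): L + ab
    return lab[0].transpose(1, 2, 0)               # HxWx3 LAB image

l_orig = np.random.rand(1, 1, 480, 640)
ab_out = np.random.rand(1, 2, 256, 256)
lab_img = postprocess_sketch(l_orig, ab_out)
print(lab_img.shape)  # (480, 640, 3)
```

Note that the lightness channel in the result is the *original* full-resolution L, so no detail is lost even though the network only saw a 256×256 input.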
77
+
78
+ ## 6. Output
79
+ - The colorized images are saved and shown to you.
80
+ - You see the original, grayscale, and both colorized results.
81
+
82
+ ---
83
+
84
+ ## 7. How Data Moves (Step by Step)
85
+
86
+ ```mermaid
87
+ graph TD
88
+ A[User picks image] --> B[demo_release.py: colorize_image()]
89
+ B --> C[util.py: load_img()]
90
+ C --> D[util.py: preprocess_img()]
91
+ D --> E[colorizers: eccv16/siggraph17]
92
+ E --> F[Model predicts color (ab)]
93
+ F --> G[util.py: postprocess_tens()]
94
+ G --> H[Color image is made]
95
+ H --> I[Image is saved and shown]
96
+ ```
97
+
98
+ ---
99
+
100
+ ## 8. Code Snippet: Model Forward Pass (ECCV16 Example)
101
+
102
+ ```python
103
+ def forward(self, input_l):
104
+ conv1_2 = self.model1(self.normalize_l(input_l))
105
+ conv2_2 = self.model2(conv1_2)
106
+ # ... more layers ...
107
+ out_reg = self.model_out(self.softmax(conv8_3))
108
+ return self.unnormalize_ab(self.upsample4(out_reg))
109
+ ```
110
+ - The image goes through many layers, gets processed, and comes out as color info.
111
+
112
+ ---
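The `softmax` and `model_out` pair hints at how ECCV16 actually predicts color: the network outputs a probability distribution over a set of quantized ab bins (313 bins in the original paper), and a final 1×1 convolution collapses that distribution into concrete ab values. A toy sketch of that idea for a single pixel, using made-up bin values and logits (not the model's real quantization):

```python
import numpy as np

# Hypothetical quantized ab bins: 5 candidate (a, b) color pairs
bins_ab = np.array([[-40.0, 60.0], [0.0, 0.0], [20.0, -30.0],
                    [50.0, 10.0], [-10.0, -50.0]])

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

# The network's per-pixel scores (logits) over the bins
logits = np.array([0.1, 2.0, 0.3, 0.1, 0.2])
probs = softmax(logits)

# Expected ab value under the distribution — the role played by model_out
ab = probs @ bins_ab
print(probs.round(3), ab)
```

Predicting a distribution rather than a single color lets the model express ambiguity (a shirt could plausibly be red or blue) before committing to one answer.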

## 9. Why LAB Color Space?
- LAB separates lightness (L) from color (ab).
- The model only needs to guess the color part, not the brightness, which makes learning easier!

---

## 10. Summary Table
| Step | File | Function | What Happens |
|------|------|----------|--------------|
| 1 | demo_release.py | colorize_image | Starts the process |
| 2 | util.py | load_img | Loads the image |
| 3 | util.py | preprocess_img | Converts to LAB, extracts L |
| 4 | colorizers/eccv16.py | ECCVGenerator | Predicts color |
| 5 | util.py | postprocess_tens | Combines L + ab into a color image |
| 6 | demo_release.py | (save/show) | Shows you the result |

---

## 11. In Short
- The image is loaded and split into lightness and color.
- The model guesses the color.
- The color is combined with the lightness.
- You get a colorized image!

---

If you want to see how any part works, just ask for more details!
how_colorization_works.md ADDED
@@ -0,0 +1,86 @@
# How Colorization Works (Like You're 5!)

This guide explains how a black-and-white (B&W) photo gets its colors back using the code in your project. We'll use simple words and pictures (code snippets) to show how the data moves from start to end!

---

## 1. You Pick a B&W Picture
- You choose a photo (like a coloring book page).
- The computer loads it as numbers (pixels).

```python
img = load_img(img_path)  # Loads your image as numbers
```

---

## 2. The Computer Prepares the Picture
- The picture is made smaller (so it's easier to color).
- It gets split into 'L' (lightness) and 'ab' (color) parts.
- For a B&W photo, only 'L' is there!

```python
(tens_l_orig, tens_l_rs) = preprocess_img(img, HW=(256,256))
# tens_l_orig: original size, tens_l_rs: resized for the model
```

---

## 3. The Magic Coloring Machine (Neural Network)
- The model (like a robot artist) looks at the 'L' part and guesses what colors ('ab') should go where.
- There are two artists: ECCV16 and SIGGRAPH17. Both try to color the picture!

```python
colorizer_eccv16 = eccv16(pretrained=True).eval()
colorizer_siggraph17 = siggraph17(pretrained=True).eval()

out_img_eccv16 = postprocess_tens(tens_l_orig, colorizer_eccv16(tens_l_rs).cpu())
out_img_siggraph17 = postprocess_tens(tens_l_orig, colorizer_siggraph17(tens_l_rs).cpu())
```

---

## 4. The Model Adds Color
- The model takes the gray picture and adds its color guesses.
- It makes a new picture with both 'L' (lightness) and 'ab' (color) parts.

```python
# The model's output is combined with the original lightness
out_lab_orig = torch.cat((tens_orig_l, out_ab_orig), dim=1)
# This is turned back into a normal color image
color_img = color.lab2rgb(...)
```

---
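You can see the "gluing" step with a tiny toy example (made-up numbers, not the real model's output): the lightness and the color guesses are just stacked together into one picture.

```python
import numpy as np

# A tiny 2x2 "photo": one lightness number per pixel (0 = black, 100 = white)
lightness = np.array([[[20.0, 80.0],
                       [50.0, 95.0]]])          # shape (1, 2, 2): one L channel

# The robot artist's color guesses: two numbers (a, b) per pixel
color_guess = np.array([[[10.0, -5.0],
                         [30.0,  0.0]],
                        [[-20.0, 15.0],
                         [  5.0, 25.0]]])       # shape (2, 2, 2): a and b channels

# Glue them together: now every pixel has L, a and b — a full color recipe!
lab_picture = np.concatenate([lightness, color_guess], axis=0)
print(lab_picture.shape)  # (3, 2, 2)
```

The lightness numbers never change; only the color numbers are new. That's why the colorized photo keeps all the detail of the original.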

## 5. You See the Results!
- The computer saves the new color pictures.
- You can see the original, the B&W, and the colorized versions side by side!

```python
plt.imsave('saved_eccv16.png', out_img_eccv16)
plt.imsave('saved_siggraph17.png', out_img_siggraph17)
```

---

## The Journey in a Picture

```mermaid
graph TD
    A[You pick a B&W image] --> B[Image loaded as numbers]
    B --> C["Image resized and split into L (lightness)"]
    C --> D["Magic model guesses color (ab)"]
    D --> E[Model combines L + ab]
    E --> F[Color image is made!]
    F --> G[You see the result!]
```

---

## In Short
- The computer looks at your gray picture.
- It uses a smart robot (the model) to guess what colors should be there.
- It puts the colors back and shows you the new, colorful picture!

That's how the magic happens, step by step!
requirements.txt ADDED
@@ -0,0 +1,6 @@
flask
gunicorn
torch
numpy
scikit-image
Pillow
templates/index.html ADDED
@@ -0,0 +1,414 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Colorize — Bring Photos to Life</title>
  <link rel="preconnect" href="https://fonts.googleapis.com">
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
  <link href="https://fonts.googleapis.com/css2?family=Cormorant+Garamond:ital,wght@0,300;0,400;0,600;1,300;1,400&family=Outfit:wght@300;400;500;600&display=swap" rel="stylesheet">
  <style>
    :root {
      --black: #ffffff;
      --dark: #f8f8f8;
      --surface: #f2f2f2;
      --border: #e0e0e0;
      --muted: #888888;
      --silver: #666666;
      --light: #1a1a1a;
      --amber: #ff6b35;
      --amber-dim: #ff8c5a;
      --amber-glow: rgba(255, 107, 53, 0.1);
      --red: #e74c3c;
    }

    *, *::before, *::after {
      margin: 0;
      padding: 0;
      box-sizing: border-box;
    }

    html, body {
      height: 100%;
    }

    body {
      font-family: 'Outfit', sans-serif;
      background: linear-gradient(135deg, #ffffff 0%, #f9f5f0 100%);
      color: var(--light);
      min-height: 100vh;
      display: flex;
      align-items: center;
      justify-content: center;
      padding: 24px;
      overflow-x: hidden;
    }

    /* Grain overlay */
    body::before {
      content: '';
      position: fixed;
      inset: 0;
      background-image: url("data:image/svg+xml,%3Csvg viewBox='0 0 256 256' xmlns='http://www.w3.org/2000/svg'%3E%3Cfilter id='noise'%3E%3CfeTurbulence type='fractalNoise' baseFrequency='0.9' numOctaves='4' stitchTiles='stitch'/%3E%3C/filter%3E%3Crect width='100%25' height='100%25' filter='url(%23noise)' opacity='0.01'/%3E%3C/svg%3E");
      pointer-events: none;
      z-index: 0;
      opacity: 0.3;
    }

    /* Ambient amber glow behind card */
    body::after {
      content: '';
      position: fixed;
      width: 900px;
      height: 900px;
      background: radial-gradient(ellipse, rgba(255, 107, 53, 0.08) 0%, transparent 70%);
      top: 50%;
      left: 50%;
      transform: translate(-50%, -50%);
      pointer-events: none;
      z-index: 0;
    }

    .page-wrapper {
      position: relative;
      z-index: 1;
      width: 100%;
      max-width: 520px;
      animation: fadeUp 0.7s cubic-bezier(0.22, 1, 0.36, 1) both;
    }

    @keyframes fadeUp {
      from { opacity: 0; transform: translateY(32px); }
      to { opacity: 1; transform: translateY(0); }
    }

    /* ── Header ── */
    .header {
      text-align: center;
      margin-bottom: 44px;
    }

    .eyebrow {
      font-family: 'Outfit', sans-serif;
      font-size: 0.7rem;
      font-weight: 500;
      letter-spacing: 0.22em;
      text-transform: uppercase;
      color: var(--amber);
      margin-bottom: 14px;
      display: flex;
      align-items: center;
      justify-content: center;
      gap: 10px;
    }

    .eyebrow::before,
    .eyebrow::after {
      content: '';
      display: inline-block;
      width: 32px;
      height: 1px;
      background: var(--amber);
    }

    .title {
      font-family: 'Cormorant Garamond', serif;
      font-size: 3.6rem;
      font-weight: 300;
      line-height: 1;
      letter-spacing: -0.02em;
      color: var(--light);
      margin-bottom: 10px;
    }

    .title em {
      font-style: italic;
      color: var(--amber);
    }

    .subtitle {
      font-size: 0.875rem;
      font-weight: 300;
      color: var(--silver);
      letter-spacing: 0.02em;
    }

    /* ── Card ── */
    .card {
      background: var(--dark);
      border: 1px solid var(--border);
      border-radius: 4px;
      padding: 40px;
      position: relative;
      overflow: hidden;
      box-shadow: 0 4px 16px rgba(0, 0, 0, 0.08);
    }

    .card::before {
      content: '';
      position: absolute;
      top: 0; left: 0; right: 0;
      height: 1px;
      background: linear-gradient(90deg, transparent, var(--amber), transparent);
    }

    /* ── Error ── */
    .error {
      background: rgba(231, 76, 60, 0.08);
      border: 1px solid rgba(231, 76, 60, 0.2);
      border-left: 3px solid var(--red);
      color: #d63031;
      padding: 14px 18px;
      border-radius: 3px;
      margin-bottom: 28px;
      font-size: 0.875rem;
      animation: shake 0.4s ease;
    }

    @keyframes shake {
      0%, 100% { transform: translateX(0); }
      20% { transform: translateX(-6px); }
      60% { transform: translateX(6px); }
    }

    /* ── Upload Zone ── */
    .drop-label {
      font-family: 'Outfit', sans-serif;
      font-size: 0.7rem;
      font-weight: 500;
      letter-spacing: 0.16em;
      text-transform: uppercase;
      color: var(--silver);
      display: block;
      margin-bottom: 16px;
    }

    .drop-zone {
      position: relative;
      display: flex;
      flex-direction: column;
      align-items: center;
      justify-content: center;
      gap: 14px;
      width: 100%;
      padding: 52px 24px;
      border: 2px dashed var(--border);
      border-radius: 3px;
      background: var(--surface);
      cursor: pointer;
      transition: border-color 0.25s, background 0.25s, box-shadow 0.25s;
      margin-bottom: 10px;
    }

    .drop-zone:hover {
      border-color: var(--amber);
      background: rgba(255, 107, 53, 0.04);
      box-shadow: 0 0 40px var(--amber-glow);
    }

    .drop-zone.dragover {
      border-color: var(--amber);
      background: rgba(255, 107, 53, 0.08);
      box-shadow: 0 0 60px rgba(255, 107, 53, 0.2);
    }

    .drop-icon {
      width: 44px;
      height: 44px;
      color: var(--amber);
      transition: color 0.25s, transform 0.25s;
    }

    .drop-zone:hover .drop-icon {
      color: var(--amber);
      transform: translateY(-2px);
    }

    .drop-text-main {
      font-family: 'Cormorant Garamond', serif;
      font-size: 1.25rem;
      font-weight: 400;
      color: var(--light);
      letter-spacing: 0.01em;
    }

    .drop-text-sub {
      font-size: 0.78rem;
      color: var(--muted);
      letter-spacing: 0.04em;
    }

    #file {
      display: none;
    }

    /* ── File name pill ── */
    .file-name {
      display: none;
      align-items: center;
      gap: 8px;
      padding: 9px 14px;
      background: var(--surface);
      border: 1px solid var(--border);
      border-radius: 3px;
      font-size: 0.8rem;
      color: var(--muted);
      margin-bottom: 28px;
      animation: fadeIn 0.3s ease;
    }

    .file-name.show {
      display: flex;
    }

    .file-name svg {
      color: var(--amber);
      flex-shrink: 0;
    }

    @keyframes fadeIn {
      from { opacity: 0; transform: translateY(-4px); }
      to { opacity: 1; transform: translateY(0); }
    }

    /* ── Submit ── */
    .submit-btn {
      width: 100%;
      padding: 16px;
      font-family: 'Outfit', sans-serif;
      font-size: 0.8rem;
      font-weight: 600;
      letter-spacing: 0.14em;
      text-transform: uppercase;
      color: white;
      background: var(--amber);
      border: none;
      border-radius: 3px;
      cursor: pointer;
      transition: background 0.2s, box-shadow 0.2s, transform 0.15s;
      position: relative;
      overflow: hidden;
    }

    .submit-btn::after {
      content: '';
      position: absolute;
      inset: 0;
      background: linear-gradient(90deg, transparent 0%, rgba(255,255,255,0.2) 50%, transparent 100%);
      transform: translateX(-100%);
      transition: transform 0.5s;
    }

    .submit-btn:hover {
      background: #ff5522;
      box-shadow: 0 4px 24px rgba(255, 107, 53, 0.35);
      transform: translateY(-1px);
    }

    .submit-btn:hover::after {
      transform: translateX(100%);
    }

    .submit-btn:active {
      transform: translateY(0);
      box-shadow: none;
    }

    /* ── Footer tag ── */
    .footer-tag {
      text-align: center;
      margin-top: 28px;
      font-size: 0.72rem;
      color: var(--muted);
      letter-spacing: 0.06em;
    }

    .footer-tag span {
      color: var(--amber);
    }
  </style>
</head>
<body>

  <div class="page-wrapper">

    <div class="header">
      <div class="eyebrow">AI Photo Lab</div>
      <h1 class="title">Colo<em>rize</em></h1>
      <p class="subtitle">Restore life to black &amp; white photographs</p>
    </div>

    <div class="card">
      {% if error %}
      <div class="error">⚠ {{ error }}</div>
      {% endif %}

      <form method="post" enctype="multipart/form-data" id="uploadForm">

        <label class="drop-label">Upload Image</label>

        <label for="file" class="drop-zone" id="dropZone">
          <svg class="drop-icon" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="1.2">
            <path stroke-linecap="round" stroke-linejoin="round" d="M3 16.5v2.25A2.25 2.25 0 005.25 21h13.5A2.25 2.25 0 0021 18.75V16.5m-13.5-9L12 3m0 0l4.5 4.5M12 3v13.5" />
          </svg>
          <span class="drop-text-main">Drop your photos here</span>
          <span class="drop-text-sub">or click to browse &nbsp;·&nbsp; JPG, PNG, BMP &nbsp;·&nbsp; Multiple files</span>
        </label>

        <input type="file" id="file" name="files" accept="image/*" multiple required>

        <div class="file-name" id="fileName">
          <svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2">
            <path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
          </svg>
          <span id="fileNameText"></span>
        </div>

        <button class="submit-btn" type="submit">Colorize Now</button>

      </form>
    </div>

    <p class="footer-tag">Powered by <span>AI colorization</span> &nbsp;·&nbsp; Results in seconds</p>

  </div>

  <script>
    const fileInput = document.getElementById('file');
    const dropZone = document.getElementById('dropZone');
    const fileNameEl = document.getElementById('fileName');
    const fileNameTx = document.getElementById('fileNameText');

    fileInput.addEventListener('change', function () {
      if (this.files.length > 0) {
        const fileCount = this.files.length;
        fileNameTx.textContent = fileCount + ' file' + (fileCount > 1 ? 's' : '') + ' selected';
        fileNameEl.classList.add('show');
      }
    });

    dropZone.addEventListener('dragover', function (e) {
      e.preventDefault();
      dropZone.classList.add('dragover');
    });

    dropZone.addEventListener('dragleave', function (e) {
      e.preventDefault();
      dropZone.classList.remove('dragover');
    });

    dropZone.addEventListener('drop', function (e) {
      e.preventDefault();
      dropZone.classList.remove('dragover');
      const files = e.dataTransfer.files;
      fileInput.files = files;
      if (files.length > 0) {
        const fileCount = files.length;
        fileNameTx.textContent = fileCount + ' file' + (fileCount > 1 ? 's' : '') + ' selected';
        fileNameEl.classList.add('show');
      }
    });
  </script>

</body>
</html>
templates/result.html ADDED
@@ -0,0 +1,382 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Colorize — Results</title>
  <link rel="preconnect" href="https://fonts.googleapis.com">
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
  <link href="https://fonts.googleapis.com/css2?family=Cormorant+Garamond:ital,wght@0,300;0,400;0,600;1,300;1,400&family=Outfit:wght@300;400;500;600&display=swap" rel="stylesheet">
  <style>
    :root {
      --black: #ffffff;
      --dark: #f8f8f8;
      --surface: #f2f2f2;
      --border: #e0e0e0;
      --muted: #888888;
      --silver: #666666;
      --light: #1a1a1a;
      --amber: #ff6b35;
      --amber-dim: #ff8c5a;
      --amber-glow: rgba(255, 107, 53, 0.1);
    }

    * {
      margin: 0; padding: 0; box-sizing: border-box;
    }

    body {
      font-family: 'Outfit', sans-serif;
      background: linear-gradient(135deg, #ffffff 0%, #f9f5f0 100%);
      color: var(--light);
      min-height: 100vh;
      display: flex;
      align-items: flex-start;
      justify-content: center;
      padding: 40px 24px;
      overflow-x: hidden;
    }

    body::before {
      content: '';
      position: fixed;
      inset: 0;
      background-image: url("data:image/svg+xml,%3Csvg viewBox='0 0 256 256' xmlns='http://www.w3.org/2000/svg'%3E%3Cfilter id='noise'%3E%3CfeTurbulence type='fractalNoise' baseFrequency='0.9' numOctaves='4' stitchTiles='stitch'/%3E%3C/filter%3E%3Crect width='100%25' height='100%25' filter='url(%23noise)' opacity='0.01'/%3E%3C/svg%3E");
      pointer-events: none;
      z-index: 0;
      opacity: 0.3;
    }

    body::after {
      content: '';
      position: fixed;
      width: 900px; height: 900px;
      background: radial-gradient(ellipse, rgba(255,107,53,0.08) 0%, transparent 70%);
      top: 20%; left: 50%;
      transform: translate(-50%, -50%);
      pointer-events: none;
      z-index: 0;
    }

    .page-wrapper {
      position: relative;
      z-index: 1;
      width: 100%;
      max-width: 1200px;
      animation: fadeUp 0.7s cubic-bezier(0.22, 1, 0.36, 1) both;
    }

    @keyframes fadeUp {
      from { opacity: 0; transform: translateY(32px); }
      to { opacity: 1; transform: translateY(0); }
    }

    .nav {
      display: flex;
      align-items: center;
      justify-content: space-between;
      margin-bottom: 48px;
    }

    .nav-brand {
      font-family: 'Cormorant Garamond', serif;
      font-size: 1.5rem;
      font-weight: 300;
      color: var(--light);
      text-decoration: none;
      letter-spacing: 0.02em;
    }

    .nav-brand em {
      font-style: italic;
      color: var(--amber);
    }

    .back-btn {
      display: inline-flex;
      align-items: center;
      gap: 8px;
      font-size: 0.78rem;
      font-weight: 500;
      letter-spacing: 0.12em;
      text-transform: uppercase;
      color: var(--silver);
      text-decoration: none;
      padding: 9px 18px;
      border: 1px solid var(--border);
      border-radius: 3px;
      background: var(--surface);
      transition: all 0.2s;
    }

    .back-btn:hover {
      border-color: var(--amber);
      color: var(--amber);
      background: rgba(255, 107, 53, 0.08);
    }

    .result-header {
      margin-bottom: 36px;
    }

    .result-eyebrow {
      font-size: 0.7rem;
      font-weight: 500;
      letter-spacing: 0.22em;
      text-transform: uppercase;
      color: var(--amber);
      display: flex;
      align-items: center;
      gap: 10px;
      margin-bottom: 12px;
    }

    .result-eyebrow::before {
      content: '';
      display: inline-block;
      width: 24px; height: 1px;
      background: var(--amber);
    }

    .result-title {
      font-family: 'Cormorant Garamond', serif;
      font-size: 2.8rem;
      font-weight: 300;
      line-height: 1.1;
      color: var(--light);
    }

    .result-title em {
      font-style: italic;
      color: var(--amber);
    }

    .gallery {
      display: grid;
      grid-template-columns: repeat(auto-fill, minmax(380px, 1fr));
      gap: 20px;
      margin-bottom: 40px;
    }

    @media (max-width: 768px) {
      .gallery {
        grid-template-columns: 1fr;
      }
    }

    .gallery-item {
      background: var(--dark);
      border: 1px solid var(--border);
      border-radius: 4px;
      overflow: hidden;
      transition: all 0.3s cubic-bezier(0.22,1,0.36,1);
      box-shadow: 0 2px 8px rgba(0, 0, 0, 0.06);
    }

    .gallery-item:hover {
      border-color: var(--amber);
      box-shadow: 0 8px 32px rgba(255, 107, 53, 0.15);
      transform: translateY(-4px);
    }

    .comparison-split {
      display: flex;
      width: 100%;
      height: 240px;
      position: relative;
      overflow: hidden;
    }

    .comparison-side {
      flex: 1;
      overflow: hidden;
      position: relative;
    }

    .comparison-side img {
      width: 100%;
      height: 100%;
      object-fit: cover;
      display: block;
      transition: transform 0.3s ease;
    }

    .gallery-item:hover .comparison-side img {
      transform: scale(1.05);
    }

    .gallery-side-label {
      position: absolute;
      top: 12px;
      left: 12px;
      font-size: 0.65rem;
      font-weight: 600;
      letter-spacing: 0.16em;
      text-transform: uppercase;
      padding: 4px 10px;
      background: rgba(255, 255, 255, 0.95);
      color: var(--silver);
      border-radius: 2px;
      backdrop-filter: blur(8px);
    }

    .gallery-side-label.colorized {
      background: rgba(255, 107, 53, 0.15);
      color: var(--amber);
      border: 1px solid rgba(255, 107, 53, 0.3);
      left: auto;
      right: 12px;
    }

    .gallery-info {
      padding: 18px;
      display: flex;
      align-items: center;
      justify-content: space-between;
      gap: 12px;
      border-top: 1px solid var(--border);
    }

    .gallery-filename {
      font-size: 0.8rem;
      color: var(--muted);
      flex: 1;
      overflow: hidden;
      text-overflow: ellipsis;
      white-space: nowrap;
    }

    .download-link {
      display: inline-flex;
      align-items: center;
      gap: 6px;
      padding: 7px 14px;
      font-size: 0.75rem;
      font-weight: 600;
      letter-spacing: 0.12em;
      text-transform: uppercase;
      color: white;
      background: var(--amber);
      border-radius: 2px;
      text-decoration: none;
      transition: background 0.2s, box-shadow 0.2s;
      white-space: nowrap;
      flex-shrink: 0;
    }

    .download-link:hover {
      background: #ff5522;
      box-shadow: 0 4px 16px rgba(255, 107, 53, 0.3);
    }

    .download-link svg {
      width: 12px; height: 12px;
    }

    .summary {
      text-align: center;
      padding: 28px;
      background: var(--surface);
      border: 1px solid var(--border);
      border-radius: 4px;
      margin-bottom: 32px;
      box-shadow: 0 2px 8px rgba(0, 0, 0, 0.04);
    }

    .summary-value {
      font-family: 'Cormorant Garamond', serif;
      font-size: 3rem;
      font-weight: 400;
      color: var(--amber);
      line-height: 1;
      margin-bottom: 8px;
    }

    .summary-label {
      font-size: 0.78rem;
      letter-spacing: 0.12em;
      text-transform: uppercase;
      color: var(--muted);
    }

    .footer-tag {
      text-align: center;
      margin-top: 32px;
      font-size: 0.72rem;
      color: var(--muted);
      letter-spacing: 0.06em;
    }

    .footer-tag span {
      color: var(--amber);
    }
  </style>
</head>
<body>

  <div class="page-wrapper">

    <nav class="nav">
      <a href="/" class="nav-brand">Colo<em>rize</em></a>
      <a href="/" class="back-btn">
        <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2" style="width: 14px; height: 14px;">
          <path stroke-linecap="round" stroke-linejoin="round" d="M10.5 19.5L3 12m0 0l7.5-7.5M3 12h18" />
        </svg>
        New Photos
      </a>
    </nav>

    <div class="result-header">
      <div class="result-eyebrow">Colorization Complete</div>
      <h1 class="result-title">All photos are <em>alive</em></h1>
    </div>

    <div class="summary">
      <div class="summary-value">{{ total_count }}</div>
      <div class="summary-label">Images Colorized Successfully</div>
    </div>

    <div class="gallery">
      {% for item in images %}
      <div class="gallery-item">
        <div class="comparison-split">
          <div class="comparison-side">
            <img src="{{ item.orig_img }}" alt="Original">
            <span class="gallery-side-label">Original</span>
          </div>
          <div class="comparison-side">
            <img src="{{ item.eccv16_img }}" alt="ECCV16">
            <span class="gallery-side-label colorized">ECCV16</span>
          </div>
          <div class="comparison-side">
            <img src="{{ item.siggraph17_img }}" alt="SIGGRAPH17">
            <span class="gallery-side-label colorized">✦ SIGGRAPH17</span>
          </div>
        </div>
        <div class="gallery-info">
          <span class="gallery-filename">{{ item.filename }}</span>
          <div style="display: flex; gap: 6px;">
            <a href="{{ item.eccv16_img }}" download class="download-link">
              <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2.2">
                <path stroke-linecap="round" stroke-linejoin="round" d="M3 16.5v2.25A2.25 2.25 0 005.25 21h13.5A2.25 2.25 0 0021 18.75V16.5M16.5 12L12 16.5m0 0L7.5 12M12 16.5V3" />
              </svg>
              ECCV16
            </a>
            <a href="{{ item.siggraph17_img }}" download class="download-link">
              <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2.2">
                <path stroke-linecap="round" stroke-linejoin="round" d="M3 16.5v2.25A2.25 2.25 0 005.25 21h13.5A2.25 2.25 0 0021 18.75V16.5M16.5 12L12 16.5m0 0L7.5 12M12 16.5V3" />
              </svg>
              SG17
            </a>
          </div>
        </div>
      </div>
      {% endfor %}
    </div>

    <p class="footer-tag">Powered by <span>AI colorization</span> &nbsp;·&nbsp; Results in seconds</p>

  </div>

</body>
</html>