from chain_img_processor import ChainImgProcessor, ChainImgPlugin
import os
import threading

import gfpgan
from PIL import Image
from numpy import asarray

from roop.utilities import resolve_relative_path

modname = os.path.basename(__file__)[:-3]  # module name derived from the file name

model_gfpgan = None
THREAD_LOCK_GFPGAN = threading.Lock()


# plugin entry point
def start(core: ChainImgProcessor):
    manifest = {  # plugin settings
        "name": "GFPGAN",
        "version": "1.4",
        "default_options": {},
        "img_processor": {
            "gfpgan": GFPGAN
        }
    }
    return manifest

def start_with_options(core: ChainImgProcessor, manifest: dict):
    pass


class GFPGAN(ChainImgPlugin):

    def init_plugin(self):
        global model_gfpgan

        # load the GFPGAN model once and share it across plugin instances
        if model_gfpgan is None:
            model_path = resolve_relative_path('../models/GFPGANv1.4.pth')
            model_gfpgan = gfpgan.GFPGANer(model_path=model_path, upscale=1, device=self.device)  # type: ignore[attr-defined]

    def process(self, frame, params: dict):
        global model_gfpgan

        if model_gfpgan is None:
            return frame

        # skip frames where face detection found nothing
        if "face_detected" in params and not params["face_detected"]:
            return frame

        # work on a copy so the original frame stays untouched
        temp_frame = frame.copy()
        if "processed_faces" in params:
            frame_height, frame_width = temp_frame.shape[:2]
            for face in params["processed_faces"]:
                start_x, start_y, end_x, end_y = map(int, face['bbox'])
                # pad the bounding box by 50% on each side, clamped to the frame
                padding_x = int((end_x - start_x) * 0.5)
                padding_y = int((end_y - start_y) * 0.5)
                start_x = max(0, start_x - padding_x)
                start_y = max(0, start_y - padding_y)
                end_x = min(frame_width, end_x + padding_x)
                end_y = min(frame_height, end_y + padding_y)
                temp_face = temp_frame[start_y:end_y, start_x:end_x]
                if temp_face.size:
                    # GFPGANer.enhance is not thread-safe, so serialize access
                    with THREAD_LOCK_GFPGAN:
                        _, _, temp_face = model_gfpgan.enhance(
                            temp_face,
                            paste_back=True
                        )
                    temp_frame[start_y:end_y, start_x:end_x] = temp_face
        else:
            with THREAD_LOCK_GFPGAN:
                _, _, temp_frame = model_gfpgan.enhance(
                    temp_frame,
                    paste_back=True
                )

        if "blend_ratio" not in params:
            return temp_frame

        # blend the enhanced frame with the original at the requested ratio
        blended = Image.blend(Image.fromarray(frame), Image.fromarray(temp_frame), params["blend_ratio"])
        return asarray(blended)