crystantine committed on
Commit
08abcaf
1 Parent(s): f01e71a

added Comfy UI Impact Pack

ComfyUI-Impact-Pack/.gitignore ADDED
@@ -0,0 +1,2 @@
1
+ __pycache__
2
+ *.ini
ComfyUI-Impact-Pack/README.md ADDED
@@ -0,0 +1,205 @@
1
+ # ComfyUI-Impact-Pack
2
+
3
+ This custom node helps to conveniently enhance images through Detector, Detailer, Upscaler, Pipe, and more.
4
+
5
+ ## Custom nodes pack for ComfyUI
6
+
7
+ # Custom Nodes
8
+ * SAMLoader - Load SAM model
9
+ * MMDetDetectorProvider - Load MMDet model to provide BBOX_DETECTOR, SEGM_DETECTOR
10
+ * ONNXDetectorProvider - Load ONNX model to provide SEGM_DETECTOR
11
+ * CLIPSegDetectorProvider - CLIPSeg wrapper to provide BBOX_DETECTOR
12
+ * You need to install the [ComfyUI-CLIPSeg](https://github.com/biegert/ComfyUI-CLIPSeg) node extension.
13
+ * SEGM Detector (combined) - Detect segmentation and return mask from input image.
14
+ * BBOX Detector (combined) - Detect bbox(bounding box) and return mask from input image.
15
+ * SAMDetector (combined) - Using the technology of SAM, extract the segment at the location indicated by the input SEGS on the input image, and output it as a unified mask.
16
+ * Bitwise(SEGS & SEGS) - Perform 'bitwise and' operations between 2 SEGS.
17
+ * Bitwise(SEGS - SEGS) - Perform subtract operations between 2 SEGS.
18
+ * Bitwise(SEGS & MASK) - Perform a bitwise AND operation on SEGS and MASK.
19
+ * Bitwise(MASK & MASK) - Perform 'bitwise and' operations between 2 masks
20
+ * Bitwise(MASK - MASK) - Perform subtract operations between 2 masks
21
+ * SEGM Detector (SEGS) - Detect segmentation and return SEGS from input image.
22
+ * BBOX Detector (SEGS) - Detect bbox(bounding box) and return SEGS from input image.
23
+ * ONNX Detector (SEGS) - Using the ONNX model, identify the bbox and retrieve the SEGS from the input image
24
+ * Detailer (SEGS) - Refine the image based on SEGS.
25
+ * DetailerDebug (SEGS) - Refine the image based on SEGS. Additionally, you can monitor the cropped image and the refined version of the cropped image.
26
+ * When using 'external_seed', please disable the 'seed random generate' option in the 'Detailer...' node to prevent regeneration issues caused by a seed that does not change every time.
27
+ * MASK to SEGS - This node generates SEGS based on the mask.
28
+ * ToBinaryMask - This node separates the mask generated with alpha values between 0 and 255 into 0 and 255. The non-zero parts are always set to 255.
29
+ * EmptySEGS - This node provides an empty SEGS.
30
+ * MaskPainter - This node provides a feature to draw masks.
31
+ * FaceDetailer - This is a node that can easily detect faces and improve them.
32
+ * FaceDetailer (pipe) - This is a node that can easily detect faces and improve them. (for multipass)
33
+ * Pipe nodes
34
+ * ToDetailerPipe, FromDetailerPipe - These nodes are used to bundle multiple inputs used in the detailer, such as models and vae, ..., into a single DETAILER_PIPE or extract the elements that are bundled in the DETAILER_PIPE.
35
+ * ToBasicPipe, FromBasicPipe - These nodes are used to bundle model, clip, vae, positive conditioning, and negative conditioning into a single BASIC_PIPE, or extract each element from the BASIC_PIPE.
36
+ * EditBasicPipe, EditDetailerPipe - These nodes are used to replace some elements in BASIC_PIPE or DETAILER_PIPE.
37
+ * Latent Scale (on Pixel Space) - This node converts latent to pixel space, upscales it, and then converts it back to latent.
38
+ * If upscale_model_opt is provided, it uses the model to upscale the pixel and then downscales it using the interpolation method provided in scale_method to the target resolution.
39
+ * PixelKSampleUpscalerProvider - An upscaler is provided that converts latent to pixels using VAEDecode, performs upscaling, converts back to latent using VAEEncode, and then performs k-sampling. This upscaler can be attached to nodes such as 'Iterative Upscale' for use.
40
+ * Similar to 'Latent Scale (on Pixel Space)', if upscale_model_opt is provided, it performs pixel upscaling using the model.
41
+ * PixelTiledKSampleUpscalerProvider - It is similar to PixelKSampleUpscalerProvider, but it uses ComfyUI_TiledKSampler and Tiled VAE Decoder/Encoder to avoid GPU VRAM issues at high resolutions.
42
+ * You need to install the [ComfyUI_TiledKSampler](https://github.com/BlenderNeko/ComfyUI_TiledKSampler) node extension.
43
+
44
+ * DenoiseScheduleHookProvider - Provides a hook for IterativeUpscale that gradually changes the denoise to target_denoise as the steps progress.
45
+ * CfgScheduleHookProvider - Provides a hook for IterativeUpscale that gradually changes the cfg to target_cfg as the steps progress.
46
+ * PixelKSampleHookCombine - This is used to connect two PK_HOOKs. hook1 is executed first and then hook2 is executed.
47
+ * If you want to change cfg and denoise simultaneously, you can combine the PK_HOOKs of CfgScheduleHookProvider and DenoiseScheduleHookProvider using PixelKSampleHookCombine (see the sketch after this node list).
48
+
49
+ * Iterative Upscale (Latent) - The upscaler takes the input upscaler and splits the scale_factor into steps, then iteratively performs upscaling.
50
+ This takes latent as input and outputs latent as the result.
51
+ * Iterative Upscale (Image) - The upscaler takes the input upscaler and splits the scale_factor into steps, then iteratively performs upscaling. This takes image as input and outputs image as the result.
52
+ * Internally, this node uses 'Iterative Upscale (Latent)'.
53
+
54
+ * TwoSamplersForMask - This node can apply two samplers depending on the mask area. The base_sampler is applied to the area where the mask is 0, while the mask_sampler is applied to the area where the mask is 1.
55
+ * Note: The latent encoded through VAEEncodeForInpaint cannot be used.
56
+ * KSamplerProvider - This is a wrapper that enables KSampler to be used in TwoSamplersForMask and TwoSamplersForMaskUpscalerProvider.
57
+ * TiledKSamplerProvider - A wrapper around ComfyUI_TiledKSampler that provides a KSAMPLER.
58
+ * You need to install the [ComfyUI_TiledKSampler](https://github.com/BlenderNeko/ComfyUI_TiledKSampler) node extension.
59
+
60
+ * TwoSamplersForMaskUpscalerProvider - This is an Upscaler that extends TwoSamplersForMask to be used in Iterative Upscale.
61
+ * TwoSamplersForMaskUpscalerProviderPipe - pipe version of TwoSamplersForMaskUpscalerProvider.
62
+
63
+ * PreviewBridge - This custom node can be used with a bridge when using the MaskEditor feature of Clipspace.
64
+
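+ The schedule/combine provider nodes above presumably wrap the hook classes defined in `impact_core.py`. As a rough, non-authoritative sketch (target values are illustrative), a combined PK_HOOK could be built headlessly like this:
+ 
+ ```python
+ # Illustrative sketch only: combine a cfg schedule and a denoise schedule into
+ # one PK_HOOK using the hook classes from impact_core.py. In the UI this
+ # corresponds to feeding CfgScheduleHookProvider and DenoiseScheduleHookProvider
+ # into PixelKSampleHookCombine.
+ from impact_core import SimpleCfgScheduleHook, SimpleDenoiseScheduleHook, PixelKSampleHookCombine
+ 
+ cfg_hook = SimpleCfgScheduleHook(target_cfg=4.0)              # cfg drifts toward 4.0
+ denoise_hook = SimpleDenoiseScheduleHook(target_denoise=0.2)  # denoise drifts toward 0.2
+ combined_hook = PixelKSampleHookCombine(cfg_hook, denoise_hook)
+ 
+ # 'Iterative Upscale' calls set_steps() on each iteration, so both schedules advance together.
+ combined_hook.set_steps((1, 3))  # e.g. step 1 of 3
+ ```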
65
+ # Feature
66
+ * Interactive SAM Detector (Clipspace) - When you right-click on a node that has 'MASK' and 'IMAGE' outputs, a context menu will open. From this menu, you can either open a dialog to create a SAM Mask using 'Open in SAM Detector', or copy the content (likely mask data) using 'Copy (Clipspace)' and generate a mask using 'Impact SAM Detector' from the clipspace menu, and then paste it using 'Paste (Clipspace)'.
67
+
68
+
69
+ # Deprecated
70
+ * The following nodes have been kept only for compatibility with existing workflows, and are no longer supported. Please replace them with new nodes.
71
+ * MMDetLoader -> MMDetDetectorProvider
72
+ * SegsMaskCombine -> SEGS to MASK (combined)
73
+ * BboxDetectorForEach -> BBOX Detector (SEGS)
74
+ * SegmDetectorForEach -> SEGM Detector (SEGS)
75
+ * BboxDetectorCombined -> BBOX Detector (combined)
76
+ * SegmDetectorCombined -> SEGM Detector (combined)
77
+ * MaskPainter -> PreviewBridge
78
+
79
+ # Installation
80
+
81
+ 1. cd custom_nodes
82
+ 2. git clone https://github.com/ltdrdata/ComfyUI-Impact-Pack.git
83
+ 3. cd ComfyUI-Impact-Pack
84
+ 4. (optional) python install.py
85
+ * Impact Pack will automatically install its dependencies during its initial launch.
86
+ 5. Restart ComfyUI
87
+
88
+ * You can use this [colab notebook](https://colab.research.google.com/github/ltdrdata/ComfyUI-Impact-Pack/blob/Main/notebook/comfyui_colab_impact_pack.ipynb) to launch ComfyUI. It automatically downloads the Impact Pack into the custom_nodes directory, installs the tested dependencies, and runs ComfyUI.
89
+
90
+ # Package Dependencies (if you need to set up manually)
91
+
92
+ * pip install
93
+ * openmim
94
+ * segment-anything
95
+ * pycocotools
96
+ * onnxruntime
97
+
98
+ * mim install
99
+ * mmcv==2.0.0, mmdet==3.0.0, mmengine==0.7.2
100
+
101
+ * linux packages (ubuntu)
102
+ * libgl1-mesa-glx
103
+ * libglib2.0-0
104
+
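+ If you prefer to run the manual setup yourself, the following is a minimal sketch of the equivalent commands, mirroring the subprocess pattern used in `additional_dependencies.py` (versions as listed above; exact `mim` behavior may vary by environment):
+ 
+ ```python
+ # Minimal manual-setup sketch; install.py normally handles this automatically.
+ import subprocess
+ import sys
+ 
+ pip_packages = ["openmim", "segment-anything", "pycocotools", "onnxruntime"]
+ mim_packages = ["mmcv==2.0.0", "mmdet==3.0.0", "mmengine==0.7.2"]
+ 
+ subprocess.check_call([sys.executable, "-m", "pip", "install", *pip_packages])
+ # 'mim' becomes available once openmim is installed.
+ subprocess.check_call(["mim", "install", *mim_packages])
+ ```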
105
+ # Other Materials (auto-download on initial startup)
106
+
107
+ * ComfyUI/models/mmdets/bbox <= https://huggingface.co/dustysys/ddetailer/resolve/main/mmdet/bbox/mmdet_anime-face_yolov3.pth
108
+ * ComfyUI/models/mmdets/bbox <= https://raw.githubusercontent.com/Bing-su/dddetailer/master/config/mmdet_anime-face_yolov3.py
109
+ * ComfyUI/models/sams <= https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth
110
+
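+ These files are fetched automatically on the first launch; if you need to place them by hand (for example on an offline machine), here is a rough sketch of the equivalent downloads (destination file names are derived from the URLs, and `comfy_root` is an assumed path):
+ 
+ ```python
+ # Manual-download sketch for the files listed above.
+ import os
+ import urllib.request
+ 
+ comfy_root = "ComfyUI"  # adjust to your ComfyUI installation path
+ files = {
+     "models/mmdets/bbox/mmdet_anime-face_yolov3.pth":
+         "https://huggingface.co/dustysys/ddetailer/resolve/main/mmdet/bbox/mmdet_anime-face_yolov3.pth",
+     "models/mmdets/bbox/mmdet_anime-face_yolov3.py":
+         "https://raw.githubusercontent.com/Bing-su/dddetailer/master/config/mmdet_anime-face_yolov3.py",
+     "models/sams/sam_vit_b_01ec64.pth":
+         "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth",
+ }
+ 
+ for rel_path, url in files.items():
+     dest = os.path.join(comfy_root, rel_path)
+     os.makedirs(os.path.dirname(dest), exist_ok=True)
+     if not os.path.exists(dest):
+         urllib.request.urlretrieve(url, dest)
+ ```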
111
+ # Troubleshooting page
112
+ * [Troubleshooting Page](troubleshooting/TROUBLESHOOTING.md)
113
+
114
+
115
+ # How to use (DDetailer feature)
116
+
117
+ #### 1. Basic auto face detection and refine example.
118
+ ![simple](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/simple.png)
119
+ * A face that has been degraded by low resolution is regenerated at a higher resolution and composited back in, in order to restore its details.
120
+ * The FaceDetailer node is a combination of a Detector node for face detection and a Detailer node for image enhancement. See the [Advanced Tutorial](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/tutorial/advanced.md) for a more detailed explanation.
121
+ * Pass the MMDetLoader's bbox model and the detection model loaded by SAMLoader to FaceDetailer. Since it performs the function of KSampler for image enhancement, its options overlap with KSampler's.
122
+ * The MASK output of FaceDetailer provides a visualization of where the detected and enhanced areas are.
123
+
124
+ ![simple-orig](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/simple-original.png) ![simple-refined](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/simple-refined.png)
125
+ * You can see that the face in the image on the left gains detail, as shown in the image on the right.
126
+
127
+ #### 2. 2Pass refine (restore a severely damaged face)
128
+ ![2pass-workflow-example](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/2pass-simple.png)
129
+ * Although two FaceDetailers can be chained for a 2-pass configuration, the common inputs used by KSampler can be passed through DETAILER_PIPE, so FaceDetailerPipe makes the configuration easier.
130
+ * In the first pass, only rough outline recovery is needed, so restore at a moderate resolution with conservative options. However, increasing the dilation at this stage includes not only the face but also the surrounding area in the recovery range, which is useful when you need to reshape the region around the face as well.
131
+
132
+ ![2pass-example-original](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/2pass-original.png) ![2pass-example-middle](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/2pass-1pass.png) ![2pass-example-result](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/2pass-2pass.png)
133
+ * In the first stage, the severely damaged face is restored to some extent, and in the second stage, the details are restored.
134
+
135
+ #### 3. Face Bbox (bounding box) + Person silhouette segmentation (prevents distortion of the background)
136
+ ![combination-workflow-example](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/combination.png)
137
+ ![combination-example-original](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/combination-original.png) ![combination-example-refined](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/combination-refined.png)
138
+
139
+ * The detail-enhanced facial synthesis is precisely aligned with the contours of the face, and you can observe that the image outside of the face is not affected.
140
+
141
+ * The BBoxDetectorForEach node is used to detect faces, and the SAMDetectorCombined node is used to find the segment related to the detected face. By combining the two masks obtained in this way with the Segs & Mask node, an accurate mask that intersects based on SEGS can be generated. If this generated mask is input to the DetailerForEach node, only the target area is regenerated at high resolution and composited back into the image.
142
+
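+ As a conceptual sketch (not a workflow file), the same chain can be expressed with the node classes from this pack; it assumes `bbox_detector`, `sam_model`, and the `image` tensor are already loaded (e.g. via MMDetDetectorProvider and SAMLoader), and the final Detailer step is only indicated:
+ 
+ ```python
+ # Rough sketch of the section-3 chain using this pack's classes.
+ import impact_core as core
+ from detectors import BboxDetectorForEach, SAMDetectorCombined
+ 
+ # Detect face bounding boxes as SEGS (defaults taken from the node definitions).
+ segs = BboxDetectorForEach().doit(bbox_detector, image, threshold=0.5, dilation=10,
+                                   crop_factor=3.0, drop_size=10)[0]
+ 
+ # Extract the matching segment with SAM as a single MASK.
+ mask = SAMDetectorCombined().doit(sam_model, segs, image, detection_hint="center-1",
+                                   dilation=0, threshold=0.93, bbox_expansion=0,
+                                   mask_hint_threshold=0.7, mask_hint_use_negative="False")[0]
+ 
+ # 'Segs & Mask' intersects the detected SEGS with the SAM mask.
+ face_segs = core.segs_bitwise_and_mask(segs, mask)
+ 
+ # face_segs would then be fed to 'Detailer (SEGS)' (DetailerForEach) for regeneration.
+ ```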
143
+ #### 4. Iterative Upscale
144
+ ![upscale-workflow-example](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/upscale-workflow.png)
145
+
146
+ * The IterativeUpscale node enlarges an image/latent by scale_factor. In this process, the upscaling is carried out progressively by dividing it into steps (see the sketch at the end of this section).
147
+ * IterativeUpscale takes an Upscaler as an input, similar to a plugin, and uses it during each iteration. PixelKSampleUpscalerProvider is an Upscaler that converts the latent representation to pixel space and applies ksampling.
148
+ * The upscale_model_opt is an optional parameter that determines whether to use the upscale model's upscaling function if available. Using the upscale model can significantly reduce the number of iterative steps required. If an x2 upscaler is used, the image/latent is first upscaled by a factor of 2 and then downscaled to the target scale of each step before further processing.
149
+
150
+ * The following shows a 304x512 image and the same image scaled up to three times its original size using IterativeUpscale.
151
+
152
+ ![combination-example-original](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/upscale-original.png) ![combination-example-refined](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/upscale-3x.png)
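+ The exact per-step schedule is internal to the node; the sketch below only illustrates the idea of splitting a scale_factor into intermediate targets, assuming a simple linear split and the 304x512 example above:
+ 
+ ```python
+ # Illustration only: splitting a scale_factor into per-step size targets.
+ def iterative_targets(width, height, scale_factor, steps):
+     targets = []
+     for i in range(1, steps + 1):
+         s = 1.0 + (scale_factor - 1.0) * i / steps  # ramps from 1.0 up to scale_factor
+         targets.append((int(width * s), int(height * s)))
+     return targets
+ 
+ print(iterative_targets(304, 512, 3.0, steps=3))
+ # [(506, 853), (709, 1194), (912, 1536)]
+ ```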
153
+
154
+
155
+ #### 5. Interactive SAM Detector (Clipspace)
156
+
157
+ * When you right-click on a node that outputs 'MASK' and 'IMAGE', a menu called "Open in SAM Detector" appears, as shown in the following picture. Clicking the menu opens a dialog based on SAM's functionality, allowing you to generate a segment mask.
158
+ ![samdetector-menu](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/SAMDetector-menu.png)
159
+
160
+ * Left-clicking on a coordinate adds a positive prompt (shown in blue), indicating an area that should be included. Right-clicking adds a negative prompt (shown in red), indicating an area that should be excluded.
161
+ * You can remove the points that were added by using the "undo" button. After selecting the points, pressing the "detect" button generates the mask. Additionally, you can adjust the fidelity slider to determine the extent to which the mask belongs to the confidence region.
162
+
163
+ ![samdetector-dialog](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/SAMDetector-dialog.png)
164
+
165
+ * If you opened the dialog through "Open in SAM Detector" from the node, you can directly apply the changes by clicking the "Save to node" button. However, if you opened the dialog through the "clipspace" menu, you can save it to clipspace by clicking the "Save" button.
166
+
167
+ ![samdetector-result](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/SAMDetector-result.png)
168
+
169
+ * When you execute with the mask applied to the node, you can observe that the image and the mask are displayed separately.
170
+
171
+ # Other Tutorials
172
+ * [ComfyUI-extension-tutorials/ComfyUI-Impact-Pack](https://github.com/ltdrdata/ComfyUI-extension-tutorials/tree/Main/ComfyUI-Impact-Pack) - You can find various tutorials and workflows on this page.
173
+ * [Advanced Tutorial](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/advanced.md)
174
+ * [SAM Application](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/sam.md)
175
+ * [PreviewBridge](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/previewbridge.md)
176
+ * [Mask Pointer](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/maskpointer.md)
177
+ * [ONNX Tutorial](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/ONNX.md)
178
+ * [CLIPSeg Tutorial](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/clipseg.md)
179
+ * [Extreme Highresolution Upscale](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/extreme-upscale.md)
180
+ * [TwoSamplersForMask](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/TwoSamplers.md)
181
+ * [Advanced Iterative Upscale: PK_HOOK](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/pk_hook.md)
182
+ * [Advanced Iterative Upscale: TwoSamplersForMask Upscale Provider](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/TwoSamplersUpscale.md)
183
+
184
+ * [Interactive SAM + PreviewBridge](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/sam_with_preview_bridge.md)
185
+
186
+ # Credits
187
+
188
+ ComfyUI/[ComfyUI](https://github.com/comfyanonymous/ComfyUI) - A powerful and modular stable diffusion GUI.
189
+
190
+ dustysys/[ddetailer](https://github.com/dustysys/ddetailer) - DDetailer extension for Stable-diffusion-webUI.
191
+
192
+ Bing-su/[dddetailer](https://github.com/Bing-su/dddetailer) - The anime-face-detector used in ddetailer has been updated to be compatible with mmdet 3.0.0, and we have also applied a patch to the pycocotools dependency for Windows environment in ddetailer.
193
+
194
+ facebook/[segment-anything](https://github.com/facebookresearch/segment-anything) - Segment Anything!
195
+
196
+ hysts/[anime-face-detector](https://github.com/hysts/anime-face-detector) - Creator of `anime-face_yolov3`, which has impressive performance on a variety of art styles.
197
+
198
+ open-mmlab/[mmdetection](https://github.com/open-mmlab/mmdetection) - Object detection toolset. `dd-person_mask2former` was trained via transfer learning using their [R-50 Mask2Former instance segmentation model](https://github.com/open-mmlab/mmdetection/tree/master/configs/mask2former#instance-segmentation) as a base.
199
+
200
+ biegert/[ComfyUI-CLIPSeg](https://github.com/biegert/ComfyUI-CLIPSeg) - This is a custom node that enables the use of CLIPSeg technology, which can find segments through prompts, in ComfyUI.
201
+
202
+ BlenderNeko/[ComfyUI_TiledKSampler](https://github.com/BlenderNeko/ComfyUI_TiledKSampler) -
203
+ The tile sampler allows high-resolution sampling even with low GPU VRAM.
204
+
205
+ WASasquatch/[was-node-suite-comfyui](https://github.com/WASasquatch/was-node-suite-comfyui) - A powerful custom node extension for ComfyUI.
ComfyUI-Impact-Pack/__init__.py ADDED
@@ -0,0 +1,177 @@
1
+ import shutil
2
+ import folder_paths
3
+ import os
4
+ import sys
5
+
6
+ comfy_path = os.path.dirname(folder_paths.__file__)
7
+ impact_path = os.path.dirname(__file__)
8
+
9
+ sys.path.append(impact_path)
10
+
11
+ import impact_config
12
+ print(f"### Loading: ComfyUI-Impact-Pack ({impact_config.version})")
13
+
14
+ # ensure dependency
15
+ if impact_config.read_config()[1] < impact_config.dependency_version:
16
+ import install # to install dependencies
17
+ # Core
18
+ # recheck dependencies for colab
19
+ try:
20
+ import folder_paths
21
+ import torch
22
+ import cv2
23
+ import mmcv
24
+ import numpy as np
25
+ from mmdet.apis import (inference_detector, init_detector)
26
+ import comfy.samplers
27
+ import comfy.sd
28
+ import warnings
29
+ from PIL import Image, ImageFilter
30
+ from mmdet.evaluation import get_classes
31
+ from skimage.measure import label, regionprops
32
+ from collections import namedtuple
33
+ except:
34
+ print("### ComfyUI-Impact-Pack: Reinstall dependencies (several dependencies are missing.)")
35
+ import install
36
+
37
+ import impact_server # to load server api
38
+
39
+ def setup_js():
40
+ # remove garbage
41
+ old_js_path = os.path.join(comfy_path, "web", "extensions", "core", "impact-pack.js")
42
+ if os.path.exists(old_js_path):
43
+ os.remove(old_js_path)
44
+
45
+ # setup js
46
+ js_dest_path = os.path.join(comfy_path, "web", "extensions", "impact-pack")
47
+ if not os.path.exists(js_dest_path):
48
+ os.makedirs(js_dest_path)
49
+
50
+ js_src_path = os.path.join(impact_path, "js", "impact-pack.js")
51
+ shutil.copy(js_src_path, js_dest_path)
52
+
53
+ js_src_path = os.path.join(impact_path, "js", "impact-sam-editor.js")
54
+ shutil.copy(js_src_path, js_dest_path)
55
+
56
+ setup_js()
57
+
58
+ import legacy_nodes
59
+ from impact_pack import *
60
+ from detectors import *
61
+ from impact_pipe import *
62
+
63
+ NODE_CLASS_MAPPINGS = {
64
+ "SAMLoader": SAMLoader,
65
+ "MMDetDetectorProvider": MMDetDetectorProvider,
66
+ "CLIPSegDetectorProvider": CLIPSegDetectorProvider,
67
+ "ONNXDetectorProvider": ONNXDetectorProvider,
68
+
69
+ "BitwiseAndMaskForEach": BitwiseAndMaskForEach,
70
+ "SubtractMaskForEach": SubtractMaskForEach,
71
+
72
+ "DetailerForEach": DetailerForEach,
73
+ "DetailerForEachDebug": DetailerForEachTest,
74
+ "DetailerForEachPipe": DetailerForEachPipe,
75
+ "DetailerForEachDebugPipe": DetailerForEachTestPipe,
76
+
77
+ "SAMDetectorCombined": SAMDetectorCombined,
78
+
79
+ "FaceDetailer": FaceDetailer,
80
+ "FaceDetailerPipe": FaceDetailerPipe,
81
+
82
+ "ToDetailerPipe": ToDetailerPipe ,
83
+ "FromDetailerPipe": FromDetailerPipe,
84
+ "ToBasicPipe": ToBasicPipe,
85
+ "FromBasicPipe": FromBasicPipe,
86
+ "BasicPipeToDetailerPipe": BasicPipeToDetailerPipe,
87
+ "DetailerPipeToBasicPipe": DetailerPipeToBasicPipe,
88
+ "EditBasicPipe": EditBasicPipe,
89
+ "EditDetailerPipe": EditDetailerPipe,
90
+
91
+ "LatentPixelScale": LatentPixelScale,
92
+ "PixelKSampleUpscalerProvider": PixelKSampleUpscalerProvider,
93
+ "PixelKSampleUpscalerProviderPipe": PixelKSampleUpscalerProviderPipe,
94
+ "IterativeLatentUpscale": IterativeLatentUpscale,
95
+ "IterativeImageUpscale": IterativeImageUpscale,
96
+ "PixelTiledKSampleUpscalerProvider": PixelTiledKSampleUpscalerProvider,
97
+ "PixelTiledKSampleUpscalerProviderPipe": PixelTiledKSampleUpscalerProviderPipe,
98
+ "TwoSamplersForMaskUpscalerProvider": TwoSamplersForMaskUpscalerProvider,
99
+ "TwoSamplersForMaskUpscalerProviderPipe": TwoSamplersForMaskUpscalerProviderPipe,
100
+
101
+ "PixelKSampleHookCombine": PixelKSampleHookCombine,
102
+ "DenoiseScheduleHookProvider": DenoiseScheduleHookProvider,
103
+ "CfgScheduleHookProvider": CfgScheduleHookProvider,
104
+
105
+ "BitwiseAndMask": BitwiseAndMask,
106
+ "SubtractMask": SubtractMask,
107
+ "Segs & Mask": SegsBitwiseAndMask,
108
+ "EmptySegs": EmptySEGS,
109
+
110
+ "MaskToSEGS": MaskToSEGS,
111
+ "ToBinaryMask": ToBinaryMask,
112
+
113
+ "BboxDetectorSEGS": BboxDetectorForEach,
114
+ "SegmDetectorSEGS": SegmDetectorForEach,
115
+ "ONNXDetectorSEGS": ONNXDetectorForEach,
116
+
117
+ "BboxDetectorCombined": BboxDetectorCombined,
118
+ "SegmDetectorCombined": SegmDetectorCombined,
119
+ "SegsToCombinedMask": SegsToCombinedMask,
120
+
121
+ "KSamplerProvider": KSamplerProvider,
122
+ "TwoSamplersForMask": TwoSamplersForMask,
123
+ "TiledKSamplerProvider": TiledKSamplerProvider,
124
+
125
+ "PreviewBridge": PreviewBridge,
126
+
127
+ "MaskPainter": legacy_nodes.MaskPainter,
128
+ "MMDetLoader": legacy_nodes.MMDetLoader,
129
+ "SegsMaskCombine": legacy_nodes.SegsMaskCombine,
130
+ "BboxDetectorForEach": legacy_nodes.BboxDetectorForEach,
131
+ "SegmDetectorForEach": legacy_nodes.SegmDetectorForEach,
132
+ "BboxDetectorCombined": legacy_nodes.BboxDetectorCombined,
133
+ "SegmDetectorCombined": legacy_nodes.SegmDetectorCombined,
134
+ }
135
+
136
+ NODE_DISPLAY_NAME_MAPPINGS = {
137
+ "BboxDetectorSEGS": "BBOX Detector (SEGS)",
138
+ "SegmDetectorSEGS": "SEGM Detector (SEGS)",
139
+ "ONNXDetectorSEGS": "ONNX Detector (SEGS)",
140
+ "BboxDetectorCombined": "BBOX Detector (combined)",
141
+ "SegmDetectorCombined": "SEGM Detector (combined)",
142
+ "SegsToCombinedMask": "SEGS to MASK (combined)",
143
+ "MaskToSEGS": "MASK to SEGS",
144
+ "BitwiseAndMaskForEach": "Bitwise(SEGS & SEGS)",
145
+ "SubtractMaskForEach": "Bitwise(SEGS - SEGS)",
146
+ "Segs & Mask": "Bitwise(SEGS & MASK)",
147
+ "BitwiseAndMask": "Bitwise(MASK & MASK)",
148
+ "SubtractMask": "Bitwise(MASK - MASK)",
149
+ "DetailerForEach": "Detailer (SEGS)",
150
+ "DetailerForEachPipe": "Detailer (SEGS/pipe)",
151
+ "DetailerForEachDebug": "DetailerDebug (SEGS)",
152
+ "DetailerForEachDebugPipe": "DetailerDebug (SEGS/pipe)",
153
+ "SAMDetectorCombined": "SAMDetector (combined)",
154
+ "FaceDetailerPipe": "FaceDetailer (pipe)",
155
+
156
+ "BasicPipeToDetailerPipe": "BasicPipe -> DetailerPipe",
157
+ "DetailerPipeToBasicPipe": "DetailerPipe -> BasicPipe",
158
+ "EditBasicPipe": "Edit BasicPipe",
159
+ "EditDetailerPipe": "Edit DetailerPipe",
160
+
161
+ "LatentPixelScale": "Latent Scale (on Pixel Space)",
162
+ "IterativeLatentUpscale": "Iterative Upscale (Latent)",
163
+ "IterativeImageUpscale": "Iterative Upscale (Image)",
164
+
165
+ "TwoSamplersForMaskUpscalerProvider": "TwoSamplersForMask Upscaler Provider",
166
+ "TwoSamplersForMaskUpscalerProviderPipe": "TwoSamplersForMask Upscaler Provider (pipe)",
167
+
168
+ "MaskPainter": "MaskPainter (Deprecated)",
169
+ "MMDetLoader": "MMDetLoader (Legacy)",
170
+ "SegsMaskCombine": "SegsMaskCombine (Legacy)",
171
+ "BboxDetectorForEach": "BboxDetectorForEach (Legacy)",
172
+ "SegmDetectorForEach": "SegmDetectorForEach (Legacy)",
173
+ "BboxDetectorCombined": "BboxDetectorCombined (Legacy)",
174
+ "SegmDetectorCombined": "SegmDetectorCombined (Legacy)",
175
+ }
176
+
177
+ __all__ = ['NODE_CLASS_MAPPINGS', 'NODE_DISPLAY_NAME_MAPPINGS']
ComfyUI-Impact-Pack/additional_dependencies.py ADDED
@@ -0,0 +1,12 @@
1
+ import sys
2
+ import subprocess
3
+
4
+
5
+ def ensure_onnx_package():
6
+ try:
7
+ import onnxruntime
8
+ except Exception:
9
+ if "python_embeded" in sys.executable or "python_embedded" in sys.executable:
10
+ subprocess.check_call([sys.executable, '-s', '-m', 'pip', 'install', '--user', 'onnxruntime'])
11
+ else:
12
+ subprocess.check_call([sys.executable, '-s', '-m', 'pip', 'install', 'onnxruntime'])
ComfyUI-Impact-Pack/detectors.py ADDED
@@ -0,0 +1,112 @@
1
+ import impact_core as core
2
+ from impact_config import MAX_RESOLUTION
3
+
4
+
5
+ class SAMDetectorCombined:
6
+ @classmethod
7
+ def INPUT_TYPES(s):
8
+ return {"required": {
9
+ "sam_model": ("SAM_MODEL", ),
10
+ "segs": ("SEGS", ),
11
+ "image": ("IMAGE", ),
12
+ "detection_hint": (["center-1", "horizontal-2", "vertical-2", "rect-4", "diamond-4", "mask-area",
13
+ "mask-points", "mask-point-bbox", "none"],),
14
+ "dilation": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
15
+ "threshold": ("FLOAT", {"default": 0.93, "min": 0.0, "max": 1.0, "step": 0.01}),
16
+ "bbox_expansion": ("INT", {"default": 0, "min": 0, "max": 1000, "step": 1}),
17
+ "mask_hint_threshold": ("FLOAT", {"default": 0.7, "min": 0.0, "max": 1.0, "step": 0.01}),
18
+ "mask_hint_use_negative": (["False", "Small", "Outter"], )
19
+ }
20
+ }
21
+
22
+ RETURN_TYPES = ("MASK",)
23
+ FUNCTION = "doit"
24
+
25
+ CATEGORY = "ImpactPack/Detector"
26
+
27
+ def doit(self, sam_model, segs, image, detection_hint, dilation,
28
+ threshold, bbox_expansion, mask_hint_threshold, mask_hint_use_negative):
29
+ return (core.make_sam_mask(sam_model, segs, image, detection_hint, dilation,
30
+ threshold, bbox_expansion, mask_hint_threshold, mask_hint_use_negative), )
31
+
32
+ class BboxDetectorForEach:
33
+ @classmethod
34
+ def INPUT_TYPES(s):
35
+ return {"required": {
36
+ "bbox_detector": ("BBOX_DETECTOR", ),
37
+ "image": ("IMAGE", ),
38
+ "threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
39
+ "dilation": ("INT", {"default": 10, "min": 0, "max": 255, "step": 1}),
40
+ "crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 10, "step": 0.1}),
41
+ "drop_size": ("INT", {"min": 1, "max": MAX_RESOLUTION, "step": 1, "default": 10}),
42
+ }
43
+ }
44
+
45
+ RETURN_TYPES = ("SEGS", )
46
+ FUNCTION = "doit"
47
+
48
+ CATEGORY = "ImpactPack/Detector"
49
+
50
+ def doit(self, bbox_detector, image, threshold, dilation, crop_factor, drop_size):
51
+ segs = bbox_detector.detect(image, threshold, dilation, crop_factor, drop_size)
52
+ return (segs, )
53
+
54
+
55
+ class SegmDetectorForEach:
56
+ @classmethod
57
+ def INPUT_TYPES(s):
58
+ return {"required": {
59
+ "segm_detector": ("SEGM_DETECTOR", ),
60
+ "image": ("IMAGE", ),
61
+ "threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
62
+ "dilation": ("INT", {"default": 10, "min": 0, "max": 255, "step": 1}),
63
+ "crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 10, "step": 0.1}),
64
+ "drop_size": ("INT", {"min": 1, "max": MAX_RESOLUTION, "step": 1, "default": 10}),
65
+ }
66
+ }
67
+
68
+ RETURN_TYPES = ("SEGS", )
69
+ FUNCTION = "doit"
70
+
71
+ CATEGORY = "ImpactPack/Detector"
72
+
73
+ def doit(self, segm_detector, image, threshold, dilation, crop_factor, drop_size):
74
+ segs = segm_detector.detect(image, threshold, dilation, crop_factor, drop_size)
75
+ return (segs, )
76
+
77
+
78
+ class SegmDetectorCombined:
79
+ @classmethod
80
+ def INPUT_TYPES(s):
81
+ return {"required": {
82
+ "segm_detector": ("SEGM_DETECTOR", ),
83
+ "image": ("IMAGE", ),
84
+ "threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
85
+ "dilation": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
86
+ }
87
+ }
88
+
89
+ RETURN_TYPES = ("MASK",)
90
+ FUNCTION = "doit"
91
+
92
+ CATEGORY = "ImpactPack/Detector"
93
+
94
+ def doit(self, segm_detector, image, threshold, dilation):
95
+ mask = segm_detector.detect_combined(image, threshold, dilation)
96
+ return (mask,)
97
+
98
+
99
+ class BboxDetectorCombined(SegmDetectorCombined):
100
+ @classmethod
101
+ def INPUT_TYPES(s):
102
+ return {"required": {
103
+ "bbox_detector": ("BBOX_DETECTOR", ),
104
+ "image": ("IMAGE", ),
105
+ "threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
106
+ "dilation": ("INT", {"default": 4, "min": 0, "max": 255, "step": 1}),
107
+ }
108
+ }
109
+
110
+ def doit(self, bbox_detector, image, threshold, dilation):
111
+ mask = bbox_detector.detect_combined(image, threshold, dilation)
112
+ return (mask,)
ComfyUI-Impact-Pack/impact_config.py ADDED
@@ -0,0 +1,30 @@
1
+ import configparser
2
+ import os
3
+
4
+ version = "V2.7.5"
5
+
6
+ dependency_version = 1
7
+
8
+ my_path = os.path.dirname(__file__)
9
+ config_path = os.path.join(my_path, "impact-pack.ini")
10
+ MAX_RESOLUTION = 8192
11
+
12
+ def write_config(comfy_path):
13
+ config = configparser.ConfigParser()
14
+ config['default'] = {
15
+ 'dependency_version': dependency_version,
16
+ 'comfy_path': comfy_path
17
+ }
18
+ with open(config_path, 'w') as configfile:
19
+ config.write(configfile)
20
+
21
+
22
+ def read_config():
23
+ try:
24
+ config = configparser.ConfigParser()
25
+ config.read(config_path)
26
+ default_conf = config['default']
27
+
28
+ return default_conf['comfy_path'], int(default_conf['dependency_version'])
29
+ except Exception:
30
+ return "", 0
ComfyUI-Impact-Pack/impact_core.py ADDED
@@ -0,0 +1,1160 @@
1
+ import os
2
+ import sys
3
+ import mmcv
4
+ from mmdet.apis import (inference_detector, init_detector)
5
+ from mmdet.evaluation import get_classes
6
+ from segment_anything import SamPredictor
7
+ import torch.nn.functional as F
8
+
9
+ from impact_utils import *
10
+ from collections import namedtuple
11
+ import numpy as np
12
+ from skimage.measure import label, regionprops
13
+
14
+ main_dir = os.path.dirname(os.path.abspath(sys.argv[0]))
15
+ sys.path.append(os.path.dirname(__file__))
16
+ sys.path.append(main_dir)
17
+
18
+ import nodes
19
+ import comfy_extras.nodes_upscale_model as model_upscale
20
+
21
+ SEG = namedtuple("SEG", ['cropped_image', 'cropped_mask', 'confidence', 'crop_region', 'bbox', 'label'],
22
+ defaults=[None])
23
+
24
+
25
+ class NO_BBOX_DETECTOR:
26
+ pass
27
+
28
+
29
+ class NO_SEGM_DETECTOR:
30
+ pass
31
+
32
+
33
+ def load_mmdet(model_path):
34
+ model_config = os.path.splitext(model_path)[0] + ".py"
35
+ model = init_detector(model_config, model_path, device="cpu")
36
+ return model
37
+
38
+
39
+ def create_segmasks(results):
40
+ bboxs = results[1]
41
+ segms = results[2]
42
+ confidence = results[3]
43
+
44
+ results = []
45
+ for i in range(len(segms)):
46
+ item = (bboxs[i], segms[i].astype(np.float32), confidence[i])
47
+ results.append(item)
48
+ return results
49
+
50
+
51
+ def inference_segm_old(model, image, conf_threshold):
52
+ image = image.numpy()[0] * 255
53
+ mmdet_results = inference_detector(model, image)
54
+
55
+ bbox_results, segm_results = mmdet_results
56
+ label = "A"
57
+
58
+ classes = get_classes("coco")
59
+ labels = [
60
+ np.full(bbox.shape[0], i, dtype=np.int32)
61
+ for i, bbox in enumerate(bbox_results)
62
+ ]
63
+ n, m = bbox_results[0].shape
64
+ if n == 0:
65
+ return [[], [], []]
66
+ labels = np.concatenate(labels)
67
+ bboxes = np.vstack(bbox_results)
68
+ segms = mmcv.concat_list(segm_results)
69
+ filter_idxs = np.where(bboxes[:, -1] > conf_threshold)[0]
70
+ results = [[], [], []]
71
+ for i in filter_idxs:
72
+ results[0].append(label + "-" + classes[labels[i]])
73
+ results[1].append(bboxes[i])
74
+ results[2].append(segms[i])
75
+
76
+ return results
77
+
78
+
79
+ def inference_segm(image, modelname, conf_thres, lab="A"):
80
+ image = image.numpy()[0] * 255
81
+ mmdet_results = inference_detector(modelname, image).pred_instances
82
+ bboxes = mmdet_results.bboxes.numpy()
83
+ segms = mmdet_results.masks.numpy()
84
+ scores = mmdet_results.scores.numpy()
85
+
86
+ classes = get_classes("coco")
87
+
88
+ n, m = bboxes.shape
89
+ if n == 0:
90
+ return [[], [], [], []]
91
+ labels = mmdet_results.labels
92
+ filter_inds = np.where(mmdet_results.scores > conf_thres)[0]
93
+ results = [[], [], [], []]
94
+ for i in filter_inds:
95
+ results[0].append(lab + "-" + classes[labels[i]])
96
+ results[1].append(bboxes[i])
97
+ results[2].append(segms[i])
98
+ results[3].append(scores[i])
99
+
100
+ return results
101
+
102
+
103
+ def inference_bbox(modelname, image, conf_threshold):
104
+ image = image.numpy()[0] * 255
105
+ label = "A"
106
+ output = inference_detector(modelname, image).pred_instances
107
+ cv2_image = np.array(image)
108
+ cv2_image = cv2_image[:, :, ::-1].copy()
109
+ cv2_gray = cv2.cvtColor(cv2_image, cv2.COLOR_BGR2GRAY)
110
+
111
+ segms = []
112
+ for x0, y0, x1, y1 in output.bboxes:
113
+ cv2_mask = np.zeros(cv2_gray.shape, np.uint8)
114
+ cv2.rectangle(cv2_mask, (int(x0), int(y0)), (int(x1), int(y1)), 255, -1)
115
+ cv2_mask_bool = cv2_mask.astype(bool)
116
+ segms.append(cv2_mask_bool)
117
+
118
+ n, m = output.bboxes.shape
119
+ if n == 0:
120
+ return [[], [], [], []]
121
+
122
+ bboxes = output.bboxes.numpy()
123
+ scores = output.scores.numpy()
124
+ filter_idxs = np.where(scores > conf_threshold)[0]
125
+ results = [[], [], [], []]
126
+ for i in filter_idxs:
127
+ results[0].append(label)
128
+ results[1].append(bboxes[i])
129
+ results[2].append(segms[i])
130
+ results[3].append(scores[i])
131
+
132
+ return results
133
+
134
+
135
+ def gen_detection_hints_from_mask_area(x, y, mask, threshold, use_negative):
136
+ points = []
137
+ plabs = []
138
+
139
+ # minimum sampling step >= 3
140
+ y_step = max(3, int(mask.shape[0]/20))
141
+ x_step = max(3, int(mask.shape[1]/20))
142
+
143
+ for i in range(0, len(mask), y_step):
144
+ for j in range(0, len(mask[i]), x_step):
145
+ if mask[i][j] > threshold:
146
+ points.append((x+j, y+i))
147
+ plabs.append(1)
148
+ elif use_negative and mask[i][j] == 0:
149
+ points.append((x+j, y+i))
150
+ plabs.append(0)
151
+
152
+ return points, plabs
153
+
154
+
155
+ def gen_negative_hints(w, h, x1, y1, x2, y2):
156
+ npoints = []
157
+ nplabs = []
158
+
159
+ # minimum sampling step >= 3
160
+ y_step = max(3, int(w/20))
161
+ x_step = max(3, int(h/20))
162
+
163
+ for i in range(10, h-10, y_step):
164
+ for j in range(10, w-10, x_step):
165
+ if not (x1-10 <= j and j <= x2+10 and y1-10 <= i and i <= y2+10):
166
+ npoints.append((j, i))
167
+ nplabs.append(0)
168
+
169
+ return npoints, nplabs
170
+
171
+
172
+ def enhance_detail(image, model, vae, guide_size, guide_size_for, bbox, seed, steps, cfg, sampler_name, scheduler,
173
+ positive, negative, denoise, noise_mask, force_inpaint):
174
+ h = image.shape[1]
175
+ w = image.shape[2]
176
+
177
+ bbox_h = bbox[3] - bbox[1]
178
+ bbox_w = bbox[2] - bbox[0]
179
+
180
+ # Skip processing if the detected bbox is already larger than the guide_size
181
+ if bbox_h >= guide_size and bbox_w >= guide_size:
182
+ print(f"Detailer: segment skip")
183
+ return None
184
+
185
+ if guide_size_for == "bbox":
186
+ # Scale up based on the smaller dimension between width and height.
187
+ upscale = guide_size / min(bbox_w, bbox_h)
188
+ else:
189
+ # for cropped_size
190
+ upscale = guide_size / min(w, h)
191
+
192
+ new_w = int(w * upscale)
193
+ new_h = int(h * upscale)
194
+
195
+ if not force_inpaint:
196
+ if upscale <= 1.0:
197
+ print(f"Detailer: segment skip [determined upscale factor={upscale}]")
198
+ return None
199
+
200
+ if new_w == 0 or new_h == 0:
201
+ print(f"Detailer: segment skip [zero size={new_w, new_h}]")
202
+ return None
203
+ else:
204
+ if upscale <= 1.0 or new_w == 0 or new_h == 0:
205
+ print(f"Detailer: force inpaint")
206
+ upscale = 1.0
207
+ new_w = w
208
+ new_h = h
209
+
210
+ print(f"Detailer: segment upscale for ({bbox_w, bbox_h}) | crop region {w, h} x {upscale} -> {new_w, new_h}")
211
+
212
+ # upscale
213
+ upscaled_image = scale_tensor(new_w, new_h, torch.from_numpy(image))
214
+
215
+ # ksampler
216
+ latent_image = to_latent_image(upscaled_image, vae)
217
+
218
+ if noise_mask is not None:
219
+ # upscale the mask tensor by a factor of 2 using bilinear interpolation
220
+ noise_mask = torch.from_numpy(noise_mask)
221
+ upscaled_mask = torch.nn.functional.interpolate(noise_mask.unsqueeze(0).unsqueeze(0), size=(new_h, new_w),
222
+ mode='bilinear', align_corners=False)
223
+
224
+ # remove the extra dimensions added by unsqueeze
225
+ upscaled_mask = upscaled_mask.squeeze().squeeze()
226
+ latent_image['noise_mask'] = upscaled_mask
227
+
228
+ sampler = nodes.KSampler()
229
+ refined_latent = sampler.sample(model, seed, steps, cfg, sampler_name, scheduler,
230
+ positive, negative, latent_image, denoise)
231
+ refined_latent = refined_latent[0]
232
+
233
+ # non-latent downscale - latent downscale cause bad quality
234
+ refined_image = vae.decode(refined_latent['samples'])
235
+
236
+ # downscale
237
+ refined_image = scale_tensor_and_to_pil(w, h, refined_image)
238
+
239
+ # don't convert to latent - latent break image
240
+ # preserving pil is much better
241
+ return refined_image
242
+
243
+
244
+ def composite_to(dest_latent, crop_region, src_latent):
245
+ x1 = crop_region[0]
246
+ y1 = crop_region[1]
247
+
248
+ # composite to original latent
249
+ lc = nodes.LatentComposite()
250
+
251
+ # There is no composite yet that takes the current mask into account... this needs to be handled.
252
+
253
+ orig_image = lc.composite(dest_latent, src_latent, x1, y1)
254
+
255
+ return orig_image[0]
256
+
257
+
258
+ def sam_predict(predictor, points, plabs, bbox, threshold):
259
+ point_coords = None if not points else np.array(points)
260
+ point_labels = None if not plabs else np.array(plabs)
261
+
262
+ box = np.array([bbox]) if bbox is not None else None
263
+
264
+ cur_masks, scores, _ = predictor.predict(point_coords=point_coords, point_labels=point_labels, box=box)
265
+
266
+ total_masks = []
267
+
268
+ selected = False
269
+ max_score = 0
270
+ for idx in range(len(scores)):
271
+ if scores[idx] > max_score:
272
+ max_score = scores[idx]
273
+ max_mask = cur_masks[idx]
274
+
275
+ if scores[idx] >= threshold:
276
+ selected = True
277
+ total_masks.append(cur_masks[idx])
278
+ else:
279
+ pass
280
+
281
+ if not selected:
282
+ total_masks.append(max_mask)
283
+
284
+ return total_masks
285
+
286
+
287
+ def make_sam_mask(sam_model, segs, image, detection_hint, dilation,
288
+ threshold, bbox_expansion, mask_hint_threshold, mask_hint_use_negative):
289
+ predictor = SamPredictor(sam_model)
290
+ image = np.clip(255. * image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8)
291
+
292
+ predictor.set_image(image, "RGB")
293
+
294
+ total_masks = []
295
+
296
+ use_small_negative = mask_hint_use_negative == "Small"
297
+
298
+ # seg_shape = segs[0]
299
+ segs = segs[1]
300
+ if detection_hint == "mask-points":
301
+ points = []
302
+ plabs = []
303
+
304
+ for i in range(len(segs)):
305
+ bbox = segs[i].bbox
306
+ center = center_of_bbox(segs[i].bbox)
307
+ points.append(center)
308
+
309
+ # small point is background, big point is foreground
310
+ if use_small_negative and bbox[2] - bbox[0] < 10:
311
+ plabs.append(0)
312
+ else:
313
+ plabs.append(1)
314
+
315
+ detected_masks = sam_predict(predictor, points, plabs, None, threshold)
316
+ total_masks += detected_masks
317
+
318
+ else:
319
+ for i in range(len(segs)):
320
+ bbox = segs[i].bbox
321
+ center = center_of_bbox(bbox)
322
+
323
+ x1 = max(bbox[0] - bbox_expansion, 0)
324
+ y1 = max(bbox[1] - bbox_expansion, 0)
325
+ x2 = min(bbox[2] + bbox_expansion, image.shape[1])
326
+ y2 = min(bbox[3] + bbox_expansion, image.shape[0])
327
+
328
+ dilated_bbox = [x1, y1, x2, y2]
329
+
330
+ points = []
331
+ plabs = []
332
+ if detection_hint == "center-1":
333
+ points.append(center)
334
+ plabs = [1] # 1 = foreground point, 0 = background point
335
+
336
+ elif detection_hint == "horizontal-2":
337
+ gap = (x2 - x1) / 3
338
+ points.append((x1 + gap, center[1]))
339
+ points.append((x1 + gap * 2, center[1]))
340
+ plabs = [1, 1]
341
+
342
+ elif detection_hint == "vertical-2":
343
+ gap = (y2 - y1) / 3
344
+ points.append((center[0], y1 + gap))
345
+ points.append((center[0], y1 + gap * 2))
346
+ plabs = [1, 1]
347
+
348
+ elif detection_hint == "rect-4":
349
+ x_gap = (x2 - x1) / 3
350
+ y_gap = (y2 - y1) / 3
351
+ points.append((x1 + x_gap, center[1]))
352
+ points.append((x1 + x_gap * 2, center[1]))
353
+ points.append((center[0], y1 + y_gap))
354
+ points.append((center[0], y1 + y_gap * 2))
355
+ plabs = [1, 1, 1, 1]
356
+
357
+ elif detection_hint == "diamond-4":
358
+ x_gap = (x2 - x1) / 3
359
+ y_gap = (y2 - y1) / 3
360
+ points.append((x1 + x_gap, y1 + y_gap))
361
+ points.append((x1 + x_gap * 2, y1 + y_gap))
362
+ points.append((x1 + x_gap, y1 + y_gap * 2))
363
+ points.append((x1 + x_gap * 2, y1 + y_gap * 2))
364
+ plabs = [1, 1, 1, 1]
365
+
366
+ elif detection_hint == "mask-point-bbox":
367
+ center = center_of_bbox(segs[i].bbox)
368
+ points.append(center)
369
+ plabs = [1]
370
+
371
+ elif detection_hint == "mask-area":
372
+ points, plabs = gen_detection_hints_from_mask_area(segs[i].crop_region[0], segs[i].crop_region[1],
373
+ segs[i].cropped_mask,
374
+ mask_hint_threshold, use_small_negative)
375
+
376
+ if mask_hint_use_negative == "Outter":
377
+ npoints, nplabs = gen_negative_hints(image.shape[0], image.shape[1],
378
+ segs[i].crop_region[0], segs[i].crop_region[1],
379
+ segs[i].crop_region[2], segs[i].crop_region[3])
380
+
381
+ points += npoints
382
+ plabs += nplabs
383
+
384
+ detected_masks = sam_predict(predictor, points, plabs, dilated_bbox, threshold)
385
+ total_masks += detected_masks
386
+
387
+ # merge every collected masks
388
+ mask = combine_masks2(total_masks)
389
+
390
+ if mask is not None:
391
+ mask = mask.float()
392
+ mask = dilate_mask(mask.cpu().numpy(), dilation)
393
+ mask = torch.from_numpy(mask)
394
+ else:
395
+ mask = torch.zeros((8, 8), dtype=torch.float32, device="cpu") # empty mask
396
+
397
+ return mask
398
+
399
+
400
+ def segs_bitwise_and_mask(segs, mask):
401
+ if mask is None:
402
+ print("[SegsBitwiseAndMask] Cannot operate: MASK is empty.")
403
+ return ([], )
404
+
405
+ items = []
406
+
407
+ mask = (mask.cpu().numpy() * 255).astype(np.uint8)
408
+
409
+ for seg in segs[1]:
410
+ cropped_mask = (seg.cropped_mask * 255).astype(np.uint8)
411
+ crop_region = seg.crop_region
412
+
413
+ cropped_mask2 = mask[crop_region[1]:crop_region[3], crop_region[0]:crop_region[2]]
414
+
415
+ new_mask = np.bitwise_and(cropped_mask.astype(np.uint8), cropped_mask2)
416
+ new_mask = new_mask.astype(np.float32) / 255.0
417
+
418
+ item = SEG(seg.cropped_image, new_mask, seg.confidence, seg.crop_region, seg.bbox, seg.label)
419
+ items.append(item)
420
+
421
+ return segs[0], items
422
+
423
+
424
+ class BBoxDetector:
425
+ bbox_model = None
426
+
427
+ def __init__(self, bbox_model):
428
+ self.bbox_model = bbox_model
429
+
430
+ def detect(self, image, threshold, dilation, crop_factor, drop_size=1):
431
+ drop_size = max(drop_size, 1)
432
+ mmdet_results = inference_bbox(self.bbox_model, image, threshold)
433
+ segmasks = create_segmasks(mmdet_results)
434
+
435
+ if dilation > 0:
436
+ segmasks = dilate_masks(segmasks, dilation)
437
+
438
+ items = []
439
+ h = image.shape[1]
440
+ w = image.shape[2]
441
+
442
+ for x in segmasks:
443
+ item_bbox = x[0]
444
+ item_mask = x[1]
445
+
446
+ y1, x1, y2, x2 = item_bbox
447
+
448
+ if x2 - x1 > drop_size and y2 - y1 > drop_size: # minimum dimension must be (2,2) to avoid squeeze issue
449
+ crop_region = make_crop_region(w, h, item_bbox, crop_factor)
450
+ cropped_image = crop_image(image, crop_region)
451
+ cropped_mask = crop_ndarray2(item_mask, crop_region)
452
+ confidence = x[2]
453
+ # bbox_size = (item_bbox[2]-item_bbox[0],item_bbox[3]-item_bbox[1]) # (w,h)
454
+
455
+ item = SEG(cropped_image, cropped_mask, confidence, crop_region, item_bbox)
456
+
457
+ items.append(item)
458
+
459
+ shape = image.shape[1], image.shape[2]
460
+ return shape, items
461
+
462
+ def detect_combined(self, image, threshold, dilation):
463
+ mmdet_results = inference_bbox(self.bbox_model, image, threshold)
464
+ segmasks = create_segmasks(mmdet_results)
465
+ if dilation > 0:
466
+ segmasks = dilate_masks(segmasks, dilation)
467
+
468
+ return combine_masks(segmasks)
469
+
470
+ def setAux(self, x):
471
+ pass
472
+
473
+
474
+ class ONNXDetector(BBoxDetector):
475
+ onnx_model = None
476
+
477
+ def __init__(self, onnx_model):
478
+ self.onnx_model = onnx_model
479
+
480
+ def detect(self, image, threshold, dilation, crop_factor, drop_size=1):
481
+ drop_size = max(drop_size, 1)
482
+ try:
483
+ import onnx
484
+
485
+ h = image.shape[1]
486
+ w = image.shape[2]
487
+
488
+ labels, scores, boxes = onnx.onnx_inference(image, self.onnx_model)
489
+
490
+ # collect feasible item
491
+ result = []
492
+
493
+ for i in range(len(labels)):
494
+ if scores[i] > threshold:
495
+ item_bbox = boxes[i]
496
+ x1, y1, x2, y2 = item_bbox
497
+
498
+ if x2 - x1 > drop_size and y2 - y1 > drop_size: # minimum dimension must be (2,2) to avoid squeeze issue
499
+ crop_region = make_crop_region(w, h, item_bbox, crop_factor)
500
+ crop_x1, crop_y1, crop_x2, crop_y2, = crop_region
501
+
502
+ # prepare cropped mask
503
+ cropped_mask = np.zeros((crop_y2-crop_y1,crop_x2-crop_x1))
504
+ inner_mask = np.ones((y2-y1, x2-x1))
505
+ cropped_mask[y1-crop_y1:y2-crop_y1, x1-crop_x1:x2-crop_x1] = inner_mask
506
+
507
+ # make items
508
+ item = SEG(None, cropped_mask, scores[i], crop_region, item_bbox)
509
+ result.append(item)
510
+
511
+ shape = h, w
512
+ return shape, result
513
+ except Exception as e:
514
+ print(f"ONNXDetector: unable to execute.\n{e}")
515
+ pass
516
+
517
+ def detect_combined(self, image, threshold, dilation):
518
+ return segs_to_combined_mask(self.detect(image, threshold, dilation, 1))
519
+
520
+ def setAux(self, x):
521
+ pass
522
+
523
+
524
+ class SegmDetector(BBoxDetector):
525
+ segm_model = None
526
+
527
+ def __init__(self, segm_model):
528
+ self.segm_model = segm_model
529
+
530
+ def detect(self, image, threshold, dilation, crop_factor, drop_size=1):
531
+ drop_size = max(drop_size, 1)
532
+ mmdet_results = inference_segm(image, self.segm_model, threshold)
533
+ segmasks = create_segmasks(mmdet_results)
534
+
535
+ if dilation > 0:
536
+ segmasks = dilate_masks(segmasks, dilation)
537
+
538
+ items = []
539
+ h = image.shape[1]
540
+ w = image.shape[2]
541
+ for x in segmasks:
542
+ item_bbox = x[0]
543
+ item_mask = x[1]
544
+
545
+ y1, x1, y2, x2 = item_bbox
546
+
547
+ if x2 - x1 > drop_size and y2 - y1 > drop_size: # minimum dimension must be (2,2) to avoid squeeze issue
548
+ crop_region = make_crop_region(w, h, item_bbox, crop_factor)
549
+ cropped_image = crop_image(image, crop_region)
550
+ cropped_mask = crop_ndarray2(item_mask, crop_region)
551
+ confidence = x[2]
552
+
553
+ item = SEG(cropped_image, cropped_mask, confidence, crop_region, item_bbox)
554
+ items.append(item)
555
+
556
+ return image.shape, items
557
+
558
+ def detect_combined(self, image, threshold, dilation):
559
+ mmdet_results = inference_segm(image, self.segm_model, threshold)
560
+ segmasks = create_segmasks(mmdet_results)
561
+ if dilation > 0:
562
+ segmasks = dilate_masks(segmasks, dilation)
563
+
564
+ return combine_masks(segmasks)
565
+
566
+ def setAux(self, x):
567
+ pass
568
+
569
+
570
+ def mask_to_segs(mask, combined, crop_factor, bbox_fill, drop_size=1):
571
+ drop_size = max(drop_size, 1)
572
+ if mask is None:
573
+ print("[mask_to_segs] Cannot operate: MASK is empty.")
574
+ return ([], )
575
+
576
+ mask = mask.cpu().numpy()
577
+
578
+ result = []
579
+ if combined == "True":
580
+ # Find the indices of the non-zero elements
581
+ indices = np.nonzero(mask)
582
+
583
+ if len(indices[0]) > 0 and len(indices[1]) > 0:
584
+ # Determine the bounding box of the non-zero elements
585
+ bbox = np.min(indices[1]), np.min(indices[0]), np.max(indices[1]), np.max(indices[0])
586
+ crop_region = make_crop_region(mask.shape[1], mask.shape[0], bbox, crop_factor)
587
+ x1, y1, x2, y2 = crop_region
588
+
589
+ if x2 - x1 > 0 and y2 - y1 > 0:
590
+ cropped_mask = mask[y1:y2, x1:x2]
591
+ item = SEG(None, cropped_mask, 1.0, crop_region, bbox, 'A')
592
+ result.append(item)
593
+
594
+ else:
595
+ # label the connected components
596
+ labelled_mask = label(mask)
597
+
598
+ # get the region properties for each connected component
599
+ regions = regionprops(labelled_mask)
600
+
601
+ # iterate over the regions and print their bounding boxes
602
+ for region in regions:
603
+ y1, x1, y2, x2 = region.bbox
604
+ bbox = x1, y1, x2, y2
605
+ crop_region = make_crop_region(mask.shape[1], mask.shape[0], bbox, crop_factor)
606
+
607
+ if x2 - x1 > drop_size and y2 - y1 > drop_size: # minimum dimension must be (2,2) to avoid squeeze issue
608
+ cropped_mask = np.array(mask[crop_region[1]:crop_region[3], crop_region[0]:crop_region[2]])
609
+
610
+ if bbox_fill:
611
+ cropped_mask.fill(1.0)
612
+
613
+ item = SEG(None, cropped_mask, 1.0, crop_region, bbox, 'A')
614
+
615
+ result.append(item)
616
+
617
+ if not result:
618
+ print(f"[mask_to_segs] Empty mask.")
619
+
620
+ print(f"# of Detected SEGS: {len(result)}")
621
+ # for r in result:
622
+ # print(f"\tbbox={r.bbox}, crop={r.crop_region}, label={r.label}")
623
+
624
+ return mask.shape, result
625
+
626
+
627
+ def segs_to_combined_mask(segs):
628
+ shape = segs[0]
629
+ h = shape[0]
630
+ w = shape[1]
631
+
632
+ mask = np.zeros((h, w), dtype=np.uint8)
633
+
634
+ for seg in segs[1]:
635
+ cropped_mask = seg.cropped_mask
636
+ crop_region = seg.crop_region
637
+ mask[crop_region[1]:crop_region[3], crop_region[0]:crop_region[2]] |= (cropped_mask * 255).astype(np.uint8)
638
+
639
+ return torch.from_numpy(mask.astype(np.float32) / 255.0)
640
+
641
+
642
+ def vae_decode(vae, samples, use_tile, hook):
643
+ if use_tile:
644
+ pixels = nodes.VAEDecodeTiled().decode(vae, samples)[0]
645
+ else:
646
+ pixels = nodes.VAEDecode().decode(vae, samples)[0]
647
+
648
+ if hook is not None:
649
+ hook.post_decode(pixels)
650
+
651
+ return pixels
652
+
653
+
654
+ def vae_encode(vae, pixels, use_tile, hook):
655
+ if use_tile:
656
+ samples = nodes.VAEEncodeTiled().encode(vae, pixels[0])[0]
657
+ else:
658
+ samples = nodes.VAEEncode().encode(vae, pixels[0])[0]
659
+
660
+ if hook is not None:
661
+ hook.post_encode(samples)
662
+
663
+ return samples
664
+
665
+
666
+ class KSamplerWrapper:
667
+ params = None
668
+
669
+ def __init__(self, model, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise):
670
+ self.params = model, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise
671
+
672
+ def sample(self, latent_image, hook):
673
+ model, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise = self.params
674
+
675
+ if hook is not None:
676
+ model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise = \
677
+ hook.pre_ksample(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise)
678
+
679
+ return nodes.common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)[0]
680
+
681
+
682
+ class PixelKSampleHook:
683
+ cur_step = 0
684
+ total_step = 0
685
+
686
+ def __init__(self):
687
+ pass
688
+
689
+ def set_steps(self, info):
690
+ self.cur_step, self.total_step = info
691
+
692
+ def post_decode(self, pixels):
693
+ return pixels
694
+
695
+ def post_upscale(self, pixels):
696
+ return pixels
697
+
698
+ def post_encode(self, samples):
699
+ return samples
700
+
701
+ def pre_ksample(self, model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise):
702
+ return model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise
703
+
704
+
705
+ class PixelKSampleHookCombine(PixelKSampleHook):
706
+ hook1 = None
707
+ hook2 = None
708
+
709
+ def __init__(self, hook1, hook2):
710
+ super().__init__()
711
+ self.hook1 = hook1
712
+ self.hook2 = hook2
713
+
714
+ def set_steps(self, info):
715
+ self.hook1.set_steps(info)
716
+ self.hook2.set_steps(info)
717
+
718
+ def post_decode(self, pixels):
719
+ return self.hook2.post_decode(self.hook1.post_decode(pixels))
720
+
721
+ def post_upscale(self, pixels):
722
+ return self.hook2.post_upscale(self.hook1.post_upscale(pixels))
723
+
724
+ def post_encode(self, samples):
725
+ return self.hook2.post_encode(self.hook1.post_encode(samples))
726
+
727
+ def pre_ksample(self, model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent,
728
+ denoise):
729
+ model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise = \
730
+ self.hook1.pre_ksample(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise)
731
+
732
+ return self.hook2.pre_ksample(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise)
733
+
734
+
735
+ class SimpleCfgScheduleHook(PixelKSampleHook):
736
+ target_cfg = 0
737
+
738
+ def __init__(self, target_cfg):
739
+ super().__init__()
740
+ self.target_cfg = target_cfg
741
+
742
+ def pre_ksample(self, model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise):
743
+ progress = self.cur_step/self.total_step
744
+ gap = self.target_cfg - cfg
745
+ current_cfg = cfg + gap*progress
746
+ return model, seed, steps, current_cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise
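The CFG ramp in `SimpleCfgScheduleHook.pre_ksample` is a plain linear interpolation; a quick stand-alone check (no ComfyUI needed, and in practice `cur_step`/`total_step` are supplied via `set_steps`):

```python
cfg, target_cfg, total_step = 8.0, 3.0, 4
for cur_step in range(total_step):
    progress = cur_step / total_step
    print(cur_step, cfg + (target_cfg - cfg) * progress)
# 0 8.0
# 1 6.75
# 2 5.5
# 3 4.25
```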
747
+
748
+
749
+ class SimpleDenoiseScheduleHook(PixelKSampleHook):
750
+ target_denoise = 0
751
+
752
+ def __init__(self, target_denoise):
753
+ super().__init__()
754
+ self.target_denoise = target_denoise
755
+
756
+ def pre_ksample(self, model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise):
757
+ progress = self.cur_step / self.total_step
758
+ gap = self.target_denoise - denoise
759
+ current_denoise = denoise + gap * progress
760
+ return model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, current_denoise
761
+
762
+
763
+ def latent_upscale_on_pixel_space_shape(samples, scale_method, w, h, vae, use_tile=False, save_temp_prefix=None, hook=None):
764
+ pixels = vae_decode(vae, samples, use_tile, hook)
765
+
766
+ if save_temp_prefix is not None:
767
+ nodes.PreviewImage().save_images(pixels, filename_prefix=save_temp_prefix)
768
+
769
+ pixels = nodes.ImageScale().upscale(pixels, scale_method, int(w), int(h), False)
770
+
771
+ if hook is not None:
772
+ pixels = hook.post_upscale(pixels)
773
+
774
+ return vae_encode(vae, pixels, use_tile, hook)
775
+
776
+
777
+ def latent_upscale_on_pixel_space(samples, scale_method, scale_factor, vae, use_tile=False, save_temp_prefix=None, hook=None):
778
+ pixels = vae_decode(vae, samples, use_tile, hook)
779
+
780
+ if save_temp_prefix is not None:
781
+ nodes.PreviewImage().save_images(pixels, filename_prefix=save_temp_prefix)
782
+
783
+ w = pixels.shape[2] * scale_factor
784
+ h = pixels.shape[1] * scale_factor
785
+ pixels = nodes.ImageScale().upscale(pixels, scale_method, int(w), int(h), False)
786
+
787
+ if hook is not None:
788
+ pixels = hook.post_upscale(pixels)
789
+
790
+ return vae_encode(vae, pixels, use_tile, hook)
791
+
792
+
793
+ def latent_upscale_on_pixel_space_with_model_shape(samples, scale_method, upscale_model, new_w, new_h, vae, use_tile=False, save_temp_prefix=None, hook=None):
794
+ pixels = vae_decode(vae, samples, use_tile, hook)
795
+
796
+ if save_temp_prefix is not None:
797
+ nodes.PreviewImage().save_images(pixels, filename_prefix=save_temp_prefix)
798
+
799
+ w = pixels.shape[2]
800
+
801
+ # upscale by model upscaler
802
+ current_w = w
803
+ while current_w < new_w:
804
+ pixels = model_upscale.ImageUpscaleWithModel().upscale(upscale_model, pixels)[0]
805
+ current_w = pixels.shape[2]
806
+
807
+ # downscale to target scale
808
+ pixels = nodes.ImageScale().upscale(pixels, scale_method, int(new_w), int(new_h), False)
809
+
810
+ if hook is not None:
811
+ pixels = hook.post_upscale(pixels)
812
+
813
+ return vae_encode(vae, pixels, use_tile, hook)
814
+
815
+
816
+ def latent_upscale_on_pixel_space_with_model(samples, scale_method, upscale_model, scale_factor, vae, use_tile=False, save_temp_prefix=None, hook=None):
817
+ pixels = vae_decode(vae, samples, use_tile, hook)
818
+
819
+ if save_temp_prefix is not None:
820
+ nodes.PreviewImage().save_images(pixels, filename_prefix=save_temp_prefix)
821
+
822
+ w = pixels.shape[2]
823
+ h = pixels.shape[1]
824
+
825
+ new_w = w * scale_factor
826
+ new_h = h * scale_factor
827
+
828
+ # upscale by model upscaler
829
+ current_w = w
830
+ while current_w < new_w:
831
+ pixels = model_upscale.ImageUpscaleWithModel().upscale(upscale_model, pixels)[0]
832
+ current_w = pixels.shape[2]
833
+
834
+ # downscale to target scale
835
+ pixels = nodes.ImageScale().upscale(pixels, scale_method, int(new_w), int(new_h), False)
836
+
837
+ if hook is not None:
838
+ pixels = hook.post_upscale(pixels)
839
+
840
+ return vae_encode(vae, pixels, use_tile, hook)
841
+
842
+
843
+ class TwoSamplersForMaskUpscaler:
844
+ params = None
845
+ upscale_model = None
846
+ hook_base = None
847
+ hook_mask = None
848
+ hook_full = None
849
+ use_tiled_vae = False
850
+ is_tiled = False
851
+
852
+ def __init__(self, scale_method, sample_schedule, use_tiled_vae, base_sampler, mask_sampler, mask, vae,
853
+ full_sampler_opt=None, upscale_model_opt=None, hook_base_opt=None, hook_mask_opt=None, hook_full_opt=None):
854
+ mask = mask.reshape((-1, 1, mask.shape[-2], mask.shape[-1]))
855
+
856
+ self.params = scale_method, sample_schedule, use_tiled_vae, base_sampler, mask_sampler, mask, vae
857
+ self.upscale_model = upscale_model_opt
858
+ self.full_sampler = full_sampler_opt
859
+ self.hook_base = hook_base_opt
860
+ self.hook_mask = hook_mask_opt
861
+ self.hook_full = hook_full_opt
862
+ self.use_tiled_vae = use_tiled_vae
863
+
864
+ def upscale(self, step_info, samples, upscale_factor, save_temp_prefix=None):
865
+ scale_method, sample_schedule, use_tiled_vae, base_sampler, mask_sampler, mask, vae = self.params
866
+
867
+ self.prepare_hook(step_info)
868
+
869
+ # upscale latent
870
+ if self.upscale_model is None:
871
+ upscaled_latent = latent_upscale_on_pixel_space(samples, scale_method, upscale_factor, vae,
872
+ use_tile=self.use_tiled_vae,
873
+ save_temp_prefix=save_temp_prefix, hook=self.hook_base)
874
+ else:
875
+ upscaled_latent = latent_upscale_on_pixel_space_with_model(samples, scale_method, self.upscale_model, upscale_factor, vae,
876
+ save_temp_prefix=save_temp_prefix, hook=self.hook_mask)
877
+
878
+ return self.do_samples(step_info, base_sampler, mask_sampler, sample_schedule, mask, upscaled_latent)
879
+
880
+ def prepare_hook(self, step_info):
881
+ if self.hook_base is not None:
882
+ self.hook_base.set_steps(step_info)
883
+ if self.hook_mask is not None:
884
+ self.hook_mask.set_steps(step_info)
885
+ if self.hook_full is not None:
886
+ self.hook_full.set_steps(step_info)
887
+
888
+ def upscale_shape(self, step_info, samples, w, h, save_temp_prefix=None):
889
+ scale_method, sample_schedule, use_tiled_vae, base_sampler, mask_sampler, mask, vae = self.params
890
+
891
+ self.prepare_hook(step_info)
892
+
893
+ # upscale latent
894
+ if self.upscale_model is None:
895
+ upscaled_latent = latent_upscale_on_pixel_space_shape(samples, scale_method, w, h, vae,
896
+ use_tile=self.use_tiled_vae,
897
+ save_temp_prefix=save_temp_prefix, hook=self.hook_base)
898
+ else:
899
+ upscaled_latent = latent_upscale_on_pixel_space_with_model_shape(samples, scale_method, self.upscale_model, w, h, vae,
900
+ save_temp_prefix=save_temp_prefix, hook=self.hook_mask)
901
+
902
+ return self.do_samples(step_info, base_sampler, mask_sampler, sample_schedule, mask, upscaled_latent)
903
+
904
+ def is_full_sample_time(self, step_info, sample_schedule):
905
+ cur_step, total_step = step_info
906
+
907
+ # make steps 1-based (start from 1 instead of 0)
908
+ cur_step += 1
909
+ total_step += 1
910
+
911
+ if sample_schedule == "none":
912
+ return False
913
+
914
+ elif sample_schedule == "interleave1":
915
+ return cur_step % 2 == 0
916
+
917
+ elif sample_schedule == "interleave2":
918
+ return cur_step % 3 == 0
919
+
920
+ elif sample_schedule == "interleave3":
921
+ return cur_step % 4 == 0
922
+
923
+ elif sample_schedule == "last1":
924
+ return cur_step == total_step
925
+
926
+ elif sample_schedule == "last2":
927
+ return cur_step >= total_step-1
928
+
929
+ elif sample_schedule == "interleave1+last1":
930
+ return cur_step % 2 == 0 or cur_step >= total_step-1
931
+
932
+ elif sample_schedule == "interleave2+last1":
933
+ return cur_step % 3 == 0 or cur_step >= total_step-1
934
+
935
+ elif sample_schedule == "interleave3+last1":
936
+ return cur_step % 4 == 0 or cur_step >= total_step-1
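The schedule string decides which iterations of the iterative upscale get a full (unmasked) sampling pass; a small stand-alone restatement of the unambiguous cases, enumerating steps 1..total as the method does after its +1 shift (plain Python, illustration only):

```python
def full_steps(schedule, total):
    picked = []
    for cur in range(1, total + 1):          # mirrors the 1-based shift above
        if schedule == "interleave1":
            hit = cur % 2 == 0
        elif schedule == "interleave2":
            hit = cur % 3 == 0
        elif schedule == "interleave3":
            hit = cur % 4 == 0
        elif schedule == "last1":
            hit = cur == total
        else:
            hit = False
        if hit:
            picked.append(cur)
    return picked

print(full_steps("interleave1", 6))  # [2, 4, 6]
print(full_steps("interleave2", 6))  # [3, 6]
print(full_steps("last1", 6))        # [6]
```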
937
+
938
+ def do_samples(self, step_info, base_sampler, mask_sampler, sample_schedule, mask, upscaled_latent):
939
+ if self.is_full_sample_time(step_info, sample_schedule):
940
+ print(f"step_info={step_info} / full time")
941
+
942
+ upscaled_latent = base_sampler.sample(upscaled_latent, self.hook_base)
943
+ sampler = self.full_sampler if self.full_sampler is not None else base_sampler
944
+ return sampler.sample(upscaled_latent, self.hook_full)
945
+
946
+ else:
947
+ print(f"step_info={step_info} / non-full time")
948
+ # upscale mask
949
+ upscaled_mask = F.interpolate(mask, size=(upscaled_latent['samples'].shape[2], upscaled_latent['samples'].shape[3]),
950
+ mode='bilinear', align_corners=True)
951
+ upscaled_mask = upscaled_mask[:, :, :upscaled_latent['samples'].shape[2], :upscaled_latent['samples'].shape[3]]
952
+
953
+ # base sampler
954
+ upscaled_inv_mask = torch.where(upscaled_mask != 1.0, torch.tensor(1.0), torch.tensor(0.0))
955
+ upscaled_latent['noise_mask'] = upscaled_inv_mask
956
+ upscaled_latent = base_sampler.sample(upscaled_latent, self.hook_base)
957
+
958
+ # mask sampler
959
+ upscaled_latent['noise_mask'] = upscaled_mask
960
+ upscaled_latent = mask_sampler.sample(upscaled_latent, self.hook_mask)
961
+
962
+ # remove mask
963
+ del upscaled_latent['noise_mask']
964
+ return upscaled_latent
965
+
966
+
967
+ class PixelKSampleUpscaler:
968
+ params = None
969
+ upscale_model = None
970
+ hook = None
971
+ use_tiled_vae = False
972
+ is_tiled = False
973
+
974
+ def __init__(self, scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise,
975
+ use_tiled_vae, upscale_model_opt=None, hook_opt=None):
976
+ self.params = scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise
977
+ self.upscale_model = upscale_model_opt
978
+ self.hook = hook_opt
979
+ self.use_tiled_vae = use_tiled_vae
980
+
981
+ def upscale(self, step_info, samples, upscale_factor, save_temp_prefix=None):
982
+ scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise = self.params
983
+
984
+ if self.hook is not None:
985
+ self.hook.set_steps(step_info)
986
+
987
+ if self.upscale_model is None:
988
+ upscaled_latent = latent_upscale_on_pixel_space(samples, scale_method, upscale_factor, vae,
989
+ use_tile=self.use_tiled_vae,
990
+ save_temp_prefix=save_temp_prefix, hook=self.hook)
991
+ else:
992
+ upscaled_latent = latent_upscale_on_pixel_space_with_model(samples, scale_method, self.upscale_model, upscale_factor, vae,
993
+ save_temp_prefix=save_temp_prefix, hook=self.hook)
994
+
995
+ if self.hook is not None:
996
+ model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise = \
997
+ self.hook.pre_ksample(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise)
998
+
999
+ refined_latent = nodes.KSampler().sample(model, seed, steps, cfg, sampler_name, scheduler,
1000
+ positive, negative, upscaled_latent, denoise)
1001
+ return refined_latent[0]
1002
+
1003
+ def upscale_shape(self, step_info, samples, w, h, save_temp_prefix=None):
1004
+ scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise = self.params
1005
+
1006
+ if self.hook is not None:
1007
+ self.hook.set_steps(step_info)
1008
+
1009
+ if self.upscale_model is None:
1010
+ upscaled_latent = latent_upscale_on_pixel_space_shape(samples, scale_method, w, h, vae,
1011
+ use_tile=self.use_tiled_vae,
1012
+ save_temp_prefix=save_temp_prefix, hook=self.hook)
1013
+ else:
1014
+ upscaled_latent = latent_upscale_on_pixel_space_with_model_shape(samples, scale_method, self.upscale_model, w, h, vae,
1015
+ use_tile=self.use_tiled_vae,
1016
+ save_temp_prefix=save_temp_prefix, hook=self.hook)
1017
+
1018
+ if self.hook is not None:
1019
+ model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise = \
1020
+ self.hook.pre_ksample(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise)
1021
+
1022
+ refined_latent = nodes.KSampler().sample(model, seed, steps, cfg, sampler_name, scheduler,
1023
+ positive, negative, upscaled_latent, denoise)
1024
+ return refined_latent[0]
1025
+
1026
+
1027
+ # REQUIREMENTS: BlenderNeko/ComfyUI_TiledKSampler
1028
+ try:
1029
+ class TiledKSamplerWrapper:
1030
+ params = None
1031
+
1032
+ def __init__(self, model, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise,
1033
+ tile_width, tile_height, tiling_strategy):
1034
+ self.params = model, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise, tile_width, tile_height, tiling_strategy
1035
+
1036
+ def sample(self, latent_image, hook):
1037
+ from custom_nodes.ComfyUI_TiledKSampler.nodes import TiledKSamplerAdvanced
1038
+
1039
+ model, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise, tile_width, tile_height, tiling_strategy = self.params
1040
+
1041
+ steps = int(steps/denoise)
1042
+ start_at_step = int(steps*(1.0 - denoise))
1043
+ end_at_step = steps
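+ # e.g. steps=20, denoise=0.5 -> emulate denoise by running a 40-step schedule and starting at step 20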
1044
+
1045
+ if hook is not None:
1046
+ model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise = \
1047
+ hook.pre_ksample(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise)
1048
+
1049
+ return TiledKSamplerAdvanced().sample(model, "enable", seed, tile_width, tile_height, tiling_strategy, steps, cfg, sampler_name, scheduler,
1050
+ positive, negative, latent_image, start_at_step, end_at_step, "disable")[0]
1051
+
1052
+ class PixelTiledKSampleUpscaler:
1053
+ params = None
1054
+ upscale_model = None
1055
+ tile_params = None
1056
+ hook = None
1057
+ is_tiled = True
1058
+
1059
+ def __init__(self, scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise,
1060
+ tile_width, tile_height, tiling_strategy,
1061
+ upscale_model_opt=None, hook_opt=None):
1062
+ self.params = scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise
1063
+ self.tile_params = tile_width, tile_height, tiling_strategy
1064
+ self.upscale_model = upscale_model_opt
1065
+ self.hook = hook_opt
1066
+
1067
+ def emulate_non_advanced(self, latent):
1068
+ from custom_nodes.ComfyUI_TiledKSampler.nodes import TiledKSamplerAdvanced
1069
+
1070
+ scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise = self.params
1071
+ tile_width, tile_height, tiling_strategy = self.tile_params
1072
+
1073
+ steps = int(steps/denoise)
1074
+ start_at_step = int(steps*(1.0 - denoise))
1075
+ end_at_step = steps
1076
+
1077
+ #print(f"steps={steps}, start_at_step={start_at_step}, end_at_step={end_at_step}")
1078
+ return TiledKSamplerAdvanced().sample(model, "enable", seed, tile_width, tile_height, tiling_strategy, steps, cfg, sampler_name, scheduler,
1079
+ positive, negative, latent, start_at_step, end_at_step, "disable")[0]
1080
+
1081
+ def upscale(self, step_info, samples, upscale_factor, save_temp_prefix=None):
1082
+ scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise = self.params
1083
+
1084
+ if self.hook is not None:
1085
+ self.hook.set_steps(step_info)
1086
+
1087
+ if self.upscale_model is None:
1088
+ upscaled_latent = latent_upscale_on_pixel_space(samples, scale_method, upscale_factor, vae, True,
1089
+ save_temp_prefix=save_temp_prefix, hook=self.hook)
1090
+ else:
1091
+ upscaled_latent = latent_upscale_on_pixel_space_with_model(samples, scale_method, self.upscale_model, upscale_factor, vae, True,
1092
+ save_temp_prefix=save_temp_prefix, hook=self.hook)
1093
+
1094
+ refined_latent = self.emulate_non_advanced(upscaled_latent)
1095
+
1096
+ return refined_latent
1097
+
1098
+ def upscale_shape(self, step_info, samples, w, h, save_temp_prefix=None):
1099
+ scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise = self.params
1100
+
1101
+ if self.hook is not None:
1102
+ self.hook.set_steps(step_info)
1103
+
1104
+ if self.upscale_model is None:
1105
+ upscaled_latent = latent_upscale_on_pixel_space_shape(samples, scale_method, w, h, vae, True,
1106
+ save_temp_prefix=save_temp_prefix, hook=self.hook)
1107
+ else:
1108
+ upscaled_latent = latent_upscale_on_pixel_space_with_model_shape(samples, scale_method, self.upscale_model, w, h, vae, True,
1109
+ save_temp_prefix=save_temp_prefix, hook=self.hook)
1110
+
1111
+ refined_latent = self.emulate_non_advanced(upscaled_latent)
1112
+
1113
+ return refined_latent
1114
+ except:
1115
+ pass
1116
+
1117
+
1118
+ # REQUIREMENTS: biegert/ComfyUI-CLIPSeg
1119
+ try:
1120
+ class BBoxDetectorBasedOnCLIPSeg(BBoxDetector):
1121
+ prompt = None
1122
+ blur = None
1123
+ threshold = None
1124
+ dilation_factor = None
1125
+ aux = None
1126
+
1127
+ def __init__(self, prompt, blur, threshold, dilation_factor):
1128
+ self.prompt = prompt
1129
+ self.blur = blur
1130
+ self.threshold = threshold
1131
+ self.dilation_factor = dilation_factor
1132
+
1133
+ def detect(self, image, bbox_threshold, bbox_dilation, bbox_crop_factor, drop_size=1):
1134
+ mask = self.detect_combined(image, bbox_threshold, bbox_dilation)
1135
+ segs = mask_to_segs(mask, False, bbox_crop_factor, True, drop_size)
1136
+ return segs
1137
+
1138
+ def detect_combined(self, image, bbox_threshold, bbox_dilation):
1139
+ from custom_nodes.clipseg import CLIPSeg
1140
+
1141
+ if self.threshold is None:
1142
+ threshold = bbox_threshold
1143
+ else:
1144
+ threshold = self.threshold
1145
+
1146
+ if self.dilation_factor is None:
1147
+ dilation_factor = bbox_dilation
1148
+ else:
1149
+ dilation_factor = self.dilation_factor
1150
+
1151
+ prompt = self.aux if self.prompt == '' and self.aux is not None else self.prompt
1152
+
1153
+ mask, _, _ = CLIPSeg().segment_image(image, prompt, self.blur, threshold, dilation_factor)
1154
+ mask = to_binary_mask(mask)
1155
+ return mask
1156
+
1157
+ def setAux(self, x):
1158
+ self.aux = x
1159
+ except:
1160
+ pass
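A hedged usage sketch of how the helpers above compose: two schedule hooks combined and handed to the pixel-space upscaler. It assumes a running ComfyUI environment where this module is importable as `impact_core` (as `impact_pack.py` below does), and that `model`, `vae`, `positive`, `negative` already exist (e.g. unpacked from a BASIC_PIPE); it is an illustration, not part of the committed files:

```python
import impact_core as core

# hypothetical inputs: model, vae, positive, negative come from an already-loaded checkpoint / BASIC_PIPE
cfg_hook = core.SimpleCfgScheduleHook(target_cfg=3.0)             # ramp CFG toward 3.0
denoise_hook = core.SimpleDenoiseScheduleHook(target_denoise=0.2) # ramp denoise toward 0.2
hook = core.PixelKSampleHookCombine(cfg_hook, denoise_hook)       # apply both each iteration

upscaler = core.PixelKSampleUpscaler(
    "bilinear", model, vae, seed=0, steps=20, cfg=8.0,
    sampler_name="euler", scheduler="normal",
    positive=positive, negative=negative, denoise=0.4,
    use_tiled_vae=False, hook_opt=hook)

# IterativeLatentUpscale (defined below) calls upscaler.upscale_shape(step_info, ...) each
# iteration, which in turn drives hook.set_steps / post_upscale / pre_ksample.
```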
ComfyUI-Impact-Pack/impact_pack.py ADDED
@@ -0,0 +1,1321 @@
1
+ import os
2
+ import folder_paths
3
+ import comfy.samplers
4
+ import comfy.sd
5
+ import warnings
6
+ from segment_anything import sam_model_registry
7
+
8
+ from impact_utils import *
9
+ import impact_core as core
10
+ from impact_core import SEG, NO_BBOX_DETECTOR, NO_SEGM_DETECTOR
11
+ from impact_config import MAX_RESOLUTION
12
+
13
+ warnings.filterwarnings('ignore', category=UserWarning, message='TypedStorage is deprecated')
14
+
15
+ model_path = folder_paths.models_dir
16
+
17
+
18
+ # Nodes
19
+ # folder_paths.supported_pt_extensions
20
+ folder_paths.folder_names_and_paths["mmdets_bbox"] = ([os.path.join(model_path, "mmdets", "bbox")], folder_paths.supported_pt_extensions)
21
+ folder_paths.folder_names_and_paths["mmdets_segm"] = ([os.path.join(model_path, "mmdets", "segm")], folder_paths.supported_pt_extensions)
22
+ folder_paths.folder_names_and_paths["mmdets"] = ([os.path.join(model_path, "mmdets")], folder_paths.supported_pt_extensions)
23
+ folder_paths.folder_names_and_paths["sams"] = ([os.path.join(model_path, "sams")], folder_paths.supported_pt_extensions)
24
+ folder_paths.folder_names_and_paths["onnx"] = ([os.path.join(model_path, "onnx")], {'.onnx'})
25
+
26
+
27
+ class ONNXDetectorProvider:
28
+ @classmethod
29
+ def INPUT_TYPES(s):
30
+ return {"required": {"model_name": (folder_paths.get_filename_list("onnx"), )}}
31
+
32
+ RETURN_TYPES = ("ONNX_DETECTOR", )
33
+ FUNCTION = "load_onnx"
34
+
35
+ CATEGORY = "ImpactPack"
36
+
37
+ def load_onnx(self, model_name):
38
+ model = folder_paths.get_full_path("onnx", model_name)
39
+ return (core.ONNXDetector(model), )
40
+
41
+
42
+ class MMDetDetectorProvider:
43
+ @classmethod
44
+ def INPUT_TYPES(s):
45
+ bboxs = ["bbox/"+x for x in folder_paths.get_filename_list("mmdets_bbox")]
46
+ segms = ["segm/"+x for x in folder_paths.get_filename_list("mmdets_segm")]
47
+ return {"required": {"model_name": (bboxs + segms, )}}
48
+ RETURN_TYPES = ("BBOX_DETECTOR", "SEGM_DETECTOR")
49
+ FUNCTION = "load_mmdet"
50
+
51
+ CATEGORY = "ImpactPack"
52
+
53
+ def load_mmdet(self, model_name):
54
+ mmdet_path = folder_paths.get_full_path("mmdets", model_name)
55
+ model = core.load_mmdet(mmdet_path)
56
+
57
+ if model_name.startswith("bbox"):
58
+ return core.BBoxDetector(model), NO_SEGM_DETECTOR()
59
+ else:
60
+ return NO_BBOX_DETECTOR(), model
61
+
62
+
63
+ class CLIPSegDetectorProvider:
64
+ @classmethod
65
+ def INPUT_TYPES(s):
66
+ return {"required": {
67
+ "text": ("STRING", {"multiline": False}),
68
+ "blur": ("FLOAT", {"min": 0, "max": 15, "step": 0.1, "default": 7}),
69
+ "threshold": ("FLOAT", {"min": 0, "max": 1, "step": 0.05, "default": 0.4}),
70
+ "dilation_factor": ("INT", {"min": 0, "max": 10, "step": 1, "default": 4}),
71
+ }
72
+ }
73
+
74
+ RETURN_TYPES = ("BBOX_DETECTOR", )
75
+ FUNCTION = "doit"
76
+
77
+ CATEGORY = "ImpactPack/Util"
78
+
79
+ def doit(self, text, blur, threshold, dilation_factor):
80
+ try:
81
+ import custom_nodes.clipseg
82
+ return (core.BBoxDetectorBasedOnCLIPSeg(text, blur, threshold, dilation_factor), )
83
+ except Exception as e:
84
+ print("[ERROR] CLIPSegToBboxDetector: CLIPSeg custom node isn't installed. You must install biegert/ComfyUI-CLIPSeg extension to use this node.")
85
+ print(f"\t{e}")
86
+ pass
87
+
88
+
89
+ class SAMLoader:
90
+ @classmethod
91
+ def INPUT_TYPES(s):
92
+ return {"required": {"model_name": (folder_paths.get_filename_list("sams"), )}}
93
+
94
+ RETURN_TYPES = ("SAM_MODEL", )
95
+ FUNCTION = "load_model"
96
+
97
+ CATEGORY = "ImpactPack"
98
+
99
+ def load_model(self, model_name):
100
+ modelname = folder_paths.get_full_path("sams", model_name)
101
+
102
+ if 'vit_h' in model_name:
103
+ model_kind = 'vit_h'
104
+ elif 'vit_l' in model_name:
105
+ model_kind = 'vit_l'
106
+ else:
107
+ model_kind = 'vit_b'
108
+
109
+ sam = sam_model_registry[model_kind](checkpoint=modelname)
110
+ print(f"Loads SAM model: {modelname}")
111
+ return (sam, )
112
+
113
+
114
+ class ONNXDetectorForEach:
115
+ @classmethod
116
+ def INPUT_TYPES(s):
117
+ return {"required": {
118
+ "onnx_detector": ("ONNX_DETECTOR",),
119
+ "image": ("IMAGE",),
120
+ "threshold": ("FLOAT", {"default": 0.8, "min": 0.0, "max": 1.0, "step": 0.01}),
121
+ "dilation": ("INT", {"default": 10, "min": 0, "max": 255, "step": 1}),
122
+ "crop_factor": ("FLOAT", {"default": 1.0, "min": 0.5, "max": 10, "step": 0.1}),
123
+ "drop_size": ("INT", {"min": 1, "max": MAX_RESOLUTION, "step": 1, "default": 10}),
124
+ }
125
+ }
126
+
127
+ RETURN_TYPES = ("SEGS", )
128
+ FUNCTION = "doit"
129
+
130
+ CATEGORY = "ImpactPack/Detector"
131
+
132
+ OUTPUT_NODE = True
133
+
134
+ def doit(self, onnx_detector, image, threshold, dilation, crop_factor, drop_size):
135
+ segs = onnx_detector.detect(image, threshold, dilation, crop_factor, drop_size)
136
+ return (segs, )
137
+
138
+
139
+ class DetailerForEach:
140
+ @classmethod
141
+ def INPUT_TYPES(s):
142
+ return {"required": {
143
+ "image": ("IMAGE", ),
144
+ "segs": ("SEGS", ),
145
+ "model": ("MODEL",),
146
+ "vae": ("VAE",),
147
+ "guide_size": ("FLOAT", {"default": 256, "min": 64, "max": nodes.MAX_RESOLUTION, "step": 8}),
148
+ "guide_size_for": (["bbox", "crop_region"],),
149
+ "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
150
+ "steps": ("INT", {"default": 20, "min": 1, "max": 10000}),
151
+ "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}),
152
+ "sampler_name": (comfy.samplers.KSampler.SAMPLERS,),
153
+ "scheduler": (comfy.samplers.KSampler.SCHEDULERS,),
154
+ "positive": ("CONDITIONING",),
155
+ "negative": ("CONDITIONING",),
156
+ "denoise": ("FLOAT", {"default": 0.5, "min": 0.0001, "max": 1.0, "step": 0.01}),
157
+ "feather": ("INT", {"default": 5, "min": 0, "max": 100, "step": 1}),
158
+ "noise_mask": (["enabled", "disabled"], ),
159
+ "force_inpaint": (["disabled", "enabled"], ),
160
+ },
161
+ }
162
+
163
+ RETURN_TYPES = ("IMAGE", )
164
+ FUNCTION = "doit"
165
+
166
+ CATEGORY = "ImpactPack/Detailer"
167
+
168
+ @staticmethod
169
+ def do_detail(image, segs, model, vae, guide_size, guide_size_for, seed, steps, cfg, sampler_name, scheduler,
170
+ positive, negative, denoise, feather, noise_mask, force_inpaint):
171
+
172
+ image_pil = tensor2pil(image).convert('RGBA')
173
+
174
+ for seg in segs[1]:
175
+ cropped_image = seg.cropped_image if seg.cropped_image is not None \
176
+ else crop_ndarray4(image.numpy(), seg.crop_region)
177
+
178
+ mask_pil = feather_mask(seg.cropped_mask, feather)
179
+
180
+ if noise_mask == "enabled":
181
+ cropped_mask = seg.cropped_mask
182
+ else:
183
+ cropped_mask = None
184
+
185
+ enhanced_pil = core.enhance_detail(cropped_image, model, vae, guide_size, guide_size_for, seg.bbox,
186
+ seed, steps, cfg, sampler_name, scheduler,
187
+ positive, negative, denoise, cropped_mask, force_inpaint == "enabled")
188
+
189
+ if not (enhanced_pil is None):
190
+ # don't latent composite-> converting to latent caused poor quality
191
+ # use image paste
192
+ image_pil.paste(enhanced_pil, (seg.crop_region[0], seg.crop_region[1]), mask_pil)
193
+
194
+ image_tensor = pil2tensor(image_pil.convert('RGB'))
195
+
196
+ if len(segs[1]) > 0:
197
+ enhanced_tensor = pil2tensor(enhanced_pil) if enhanced_pil is not None else None
198
+ return image_tensor, torch.from_numpy(cropped_image), enhanced_tensor,
199
+ else:
200
+ return image_tensor, None, None,
201
+
202
+ def doit(self, image, segs, model, vae, guide_size, guide_size_for, seed, steps, cfg, sampler_name, scheduler,
203
+ positive, negative, denoise, feather, noise_mask, force_inpaint):
204
+
205
+ enhanced_img, cropped, cropped_enhanced = \
206
+ DetailerForEach.do_detail(image, segs, model, vae, guide_size, guide_size_for, seed, steps, cfg,
207
+ sampler_name, scheduler, positive, negative, denoise, feather, noise_mask,
208
+ force_inpaint)
209
+
210
+ return (enhanced_img, )
211
+
212
+
213
+ class DetailerForEachPipe:
214
+ @classmethod
215
+ def INPUT_TYPES(s):
216
+ return {"required": {
217
+ "image": ("IMAGE", ),
218
+ "segs": ("SEGS", ),
219
+ "guide_size": ("FLOAT", {"default": 256, "min": 64, "max": nodes.MAX_RESOLUTION, "step": 8}),
220
+ "guide_size_for": (["bbox", "crop_region"],),
221
+ "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
222
+ "steps": ("INT", {"default": 20, "min": 1, "max": 10000}),
223
+ "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}),
224
+ "sampler_name": (comfy.samplers.KSampler.SAMPLERS,),
225
+ "scheduler": (comfy.samplers.KSampler.SCHEDULERS,),
226
+ "denoise": ("FLOAT", {"default": 0.5, "min": 0.0001, "max": 1.0, "step": 0.01}),
227
+ "feather": ("INT", {"default": 5, "min": 0, "max": 100, "step": 1}),
228
+ "noise_mask": (["enabled", "disabled"], ),
229
+ "force_inpaint": (["disabled", "enabled"], ),
230
+ "basic_pipe": ("BASIC_PIPE", )
231
+ },
232
+ }
233
+
234
+ RETURN_TYPES = ("IMAGE", )
235
+ FUNCTION = "doit"
236
+
237
+ CATEGORY = "ImpactPack/Detailer"
238
+
239
+ def doit(self, image, segs, guide_size, guide_size_for, seed, steps, cfg, sampler_name, scheduler,
240
+ denoise, feather, noise_mask, force_inpaint, basic_pipe):
241
+
242
+ model, _, vae, positive, negative = basic_pipe
243
+ enhanced_img, cropped, cropped_enhanced = \
244
+ DetailerForEach.do_detail(image, segs, model, vae, guide_size, guide_size_for, seed, steps, cfg,
245
+ sampler_name, scheduler, positive, negative, denoise, feather, noise_mask,
246
+ force_inpaint)
247
+
248
+ return (enhanced_img, )
249
+
250
+
251
+ class KSamplerProvider:
252
+ @classmethod
253
+ def INPUT_TYPES(s):
254
+ return {"required": {
255
+ "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
256
+ "steps": ("INT", {"default": 20, "min": 1, "max": 10000}),
257
+ "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}),
258
+ "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ),
259
+ "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ),
260
+ "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
261
+ "basic_pipe": ("BASIC_PIPE", )
262
+ },
263
+ }
264
+
265
+ RETURN_TYPES = ("KSAMPLER",)
266
+ FUNCTION = "doit"
267
+
268
+ CATEGORY = "ImpactPack/Sampler"
269
+
270
+ def doit(self, seed, steps, cfg, sampler_name, scheduler, denoise, basic_pipe):
271
+ model, _, _, positive, negative = basic_pipe
272
+ sampler = core.KSamplerWrapper(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise)
273
+ return (sampler, )
274
+
275
+
276
+ class TwoSamplersForMask:
277
+ @classmethod
278
+ def INPUT_TYPES(s):
279
+ return {"required": {
280
+ "latent_image": ("LATENT", ),
281
+ "base_sampler": ("KSAMPLER", ),
282
+ "mask_sampler": ("KSAMPLER", ),
283
+ "mask": ("MASK", )
284
+ },
285
+ }
286
+
287
+ RETURN_TYPES = ("LATENT", )
288
+ FUNCTION = "doit"
289
+
290
+ CATEGORY = "ImpactPack/Sampler"
291
+
292
+ def doit(self, latent_image, base_sampler, mask_sampler, mask):
293
+ inv_mask = torch.where(mask != 1.0, torch.tensor(1.0), torch.tensor(0.0))
294
+
295
+ latent_image['noise_mask'] = inv_mask
296
+ new_latent_image = base_sampler.sample(latent_image)[0]
297
+
298
+ new_latent_image['noise_mask'] = mask
299
+ new_latent_image = mask_sampler.sample(new_latent_image)[0]
300
+
301
+ del new_latent_image['noise_mask']
302
+
303
+ return (new_latent_image, )
304
+
305
+
306
+ class FaceDetailer:
307
+ @classmethod
308
+ def INPUT_TYPES(s):
309
+ return {"required": {
310
+ "image": ("IMAGE", ),
311
+ "model": ("MODEL",),
312
+ "vae": ("VAE",),
313
+ "guide_size": ("FLOAT", {"default": 256, "min": 64, "max": nodes.MAX_RESOLUTION, "step": 8}),
314
+ "guide_size_for": (["bbox", "crop_region"],),
315
+ "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
316
+ "steps": ("INT", {"default": 20, "min": 1, "max": 10000}),
317
+ "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}),
318
+ "sampler_name": (comfy.samplers.KSampler.SAMPLERS,),
319
+ "scheduler": (comfy.samplers.KSampler.SCHEDULERS,),
320
+ "positive": ("CONDITIONING",),
321
+ "negative": ("CONDITIONING",),
322
+ "denoise": ("FLOAT", {"default": 0.5, "min": 0.0001, "max": 1.0, "step": 0.01}),
323
+ "feather": ("INT", {"default": 5, "min": 0, "max": 100, "step": 1}),
324
+ "noise_mask": (["enabled", "disabled"], ),
325
+ "force_inpaint": (["disabled", "enabled"], ),
326
+
327
+ "bbox_threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
328
+ "bbox_dilation": ("INT", {"default": 10, "min": 0, "max": 255, "step": 1}),
329
+ "bbox_crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 10, "step": 0.1}),
330
+
331
+ "sam_detection_hint": (["center-1", "horizontal-2", "vertical-2", "rect-4", "diamond-4", "mask-area", "mask-points", "mask-point-bbox", "none"],),
332
+ "sam_dilation": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
333
+ "sam_threshold": ("FLOAT", {"default": 0.93, "min": 0.0, "max": 1.0, "step": 0.01}),
334
+ "sam_bbox_expansion": ("INT", {"default": 0, "min": 0, "max": 1000, "step": 1}),
335
+ "sam_mask_hint_threshold": ("FLOAT", {"default": 0.7, "min": 0.0, "max": 1.0, "step": 0.01}),
336
+ "sam_mask_hint_use_negative": (["False", "Small", "Outter"],),
337
+
338
+ "drop_size": ("INT", {"min": 1, "max": MAX_RESOLUTION, "step": 1, "default": 10}),
339
+
340
+ "bbox_detector": ("BBOX_DETECTOR", ),
341
+ },
342
+ "optional": {
343
+ "sam_model_opt": ("SAM_MODEL", ),
344
+ }}
345
+
346
+ RETURN_TYPES = ("IMAGE", "IMAGE", "MASK", "DETAILER_PIPE", )
347
+ RETURN_NAMES = ("image", "cropped_refined", "mask", "detailer_pipe")
348
+ FUNCTION = "doit"
349
+
350
+ CATEGORY = "ImpactPack/Simple"
351
+
352
+ @staticmethod
353
+ def enhance_face(image, model, vae, guide_size, guide_size_for, seed, steps, cfg, sampler_name, scheduler,
354
+ positive, negative, denoise, feather, noise_mask, force_inpaint,
355
+ bbox_threshold, bbox_dilation, bbox_crop_factor,
356
+ sam_detection_hint, sam_dilation, sam_threshold, sam_bbox_expansion, sam_mask_hint_threshold,
357
+ sam_mask_hint_use_negative, drop_size,
358
+ bbox_detector, sam_model_opt=None):
359
+ # make default prompt as 'face' if empty prompt for CLIPSeg
360
+ bbox_detector.setAux('face')
361
+ segs = bbox_detector.detect(image, bbox_threshold, bbox_dilation, bbox_crop_factor, drop_size)
362
+ bbox_detector.setAux(None)
363
+
364
+ # bbox + sam combination
365
+ if sam_model_opt is not None:
366
+ sam_mask = core.make_sam_mask(sam_model_opt, segs, image, sam_detection_hint, sam_dilation,
367
+ sam_threshold, sam_bbox_expansion, sam_mask_hint_threshold,
368
+ sam_mask_hint_use_negative, )
369
+ segs = core.segs_bitwise_and_mask(segs, sam_mask)
370
+
371
+ enhanced_img, _, cropped_enhanced = \
372
+ DetailerForEach.do_detail(image, segs, model, vae, guide_size, guide_size_for, seed, steps, cfg,
373
+ sampler_name, scheduler, positive, negative, denoise, feather, noise_mask,
374
+ force_inpaint)
375
+
376
+ # Mask Generator
377
+ mask = core.segs_to_combined_mask(segs)
378
+
379
+ return enhanced_img, cropped_enhanced, mask
380
+
381
+ def doit(self, image, model, vae, guide_size, guide_size_for, seed, steps, cfg, sampler_name, scheduler,
382
+ positive, negative, denoise, feather, noise_mask, force_inpaint,
383
+ bbox_threshold, bbox_dilation, bbox_crop_factor,
384
+ sam_detection_hint, sam_dilation, sam_threshold, sam_bbox_expansion, sam_mask_hint_threshold,
385
+ sam_mask_hint_use_negative, drop_size, bbox_detector, sam_model_opt=None):
386
+
387
+ enhanced_img, cropped_enhanced, mask = FaceDetailer.enhance_face(
388
+ image, model, vae, guide_size, guide_size_for, seed, steps, cfg, sampler_name, scheduler,
389
+ positive, negative, denoise, feather, noise_mask, force_inpaint,
390
+ bbox_threshold, bbox_dilation, bbox_crop_factor,
391
+ sam_detection_hint, sam_dilation, sam_threshold, sam_bbox_expansion, sam_mask_hint_threshold,
392
+ sam_mask_hint_use_negative, drop_size, bbox_detector, sam_model_opt)
393
+
394
+ pipe = (model, vae, positive, negative, bbox_detector, sam_model_opt)
395
+ return enhanced_img, cropped_enhanced, mask, pipe
396
+
397
+
398
+ class LatentPixelScale:
399
+ upscale_methods = ["nearest-exact", "bilinear", "area"]
400
+
401
+ @classmethod
402
+ def INPUT_TYPES(s):
403
+ return {"required": {
404
+ "samples": ("LATENT", ),
405
+ "scale_method": (s.upscale_methods,),
406
+ "scale_factor": ("FLOAT", {"default": 1.5, "min": 0.1, "max": 10000, "step": 0.1}),
407
+ "vae": ("VAE", ),
408
+ },
409
+ "optional": {
410
+ "upscale_model_opt": ("UPSCALE_MODEL", ),
411
+ }
412
+ }
413
+
414
+ RETURN_TYPES = ("LATENT",)
415
+ FUNCTION = "doit"
416
+
417
+ CATEGORY = "ImpactPack/Upscale"
418
+
419
+ def doit(self, samples, scale_method, scale_factor, vae, upscale_model_opt=None):
420
+ if upscale_model_opt is None:
421
+ latent = core.latent_upscale_on_pixel_space(samples, scale_method, scale_factor, vae)
422
+ else:
423
+ latent = core.latent_upscale_on_pixel_space_with_model(samples, scale_method, upscale_model_opt, scale_factor, vae)
424
+ return (latent,)
425
+
426
+
427
+ class CfgScheduleHookProvider:
428
+ schedules = ["simple"]
429
+
430
+ @classmethod
431
+ def INPUT_TYPES(s):
432
+ return {"required": {
433
+ "schedule_for_iteration": (s.schedules,),
434
+ "target_cfg": ("FLOAT", {"default": 3.0, "min": 0.0, "max": 100.0}),
435
+ },
436
+ }
437
+
438
+ RETURN_TYPES = ("PK_HOOK",)
439
+ FUNCTION = "doit"
440
+
441
+ CATEGORY = "ImpactPack/Upscale"
442
+
443
+ def doit(self, schedule_for_iteration, target_cfg):
444
+ hook = None
445
+ if schedule_for_iteration == "simple":
446
+ hook = core.SimpleCfgScheduleHook(target_cfg)
447
+
448
+ return (hook, )
449
+
450
+
451
+ class DenoiseScheduleHookProvider:
452
+ schedules = ["simple"]
453
+
454
+ @classmethod
455
+ def INPUT_TYPES(s):
456
+ return {"required": {
457
+ "schedule_for_iteration": (s.schedules,),
458
+ "target_denoise": ("FLOAT", {"default": 0.2, "min": 0.0, "max": 100.0}),
459
+ },
460
+ }
461
+
462
+ RETURN_TYPES = ("PK_HOOK",)
463
+ FUNCTION = "doit"
464
+
465
+ CATEGORY = "ImpactPack/Upscale"
466
+
467
+ def doit(self, schedule_for_iteration, target_denoise):
468
+ hook = None
469
+ if schedule_for_iteration == "simple":
470
+ hook = core.SimpleDenoiseScheduleHook(target_denoise)
471
+
472
+ return (hook, )
473
+
474
+
475
+ class PixelKSampleHookCombine:
476
+ @classmethod
477
+ def INPUT_TYPES(s):
478
+ return {"required": {
479
+ "hook1": ("PK_HOOK",),
480
+ "hook2": ("PK_HOOK",),
481
+ },
482
+ }
483
+
484
+ RETURN_TYPES = ("PK_HOOK",)
485
+ FUNCTION = "doit"
486
+
487
+ CATEGORY = "ImpactPack/Upscale"
488
+
489
+ def doit(self, hook1, hook2):
490
+ hook = core.PixelKSampleHookCombine(hook1, hook2)
491
+ return (hook, )
492
+
493
+
494
+ class TiledKSamplerProvider:
495
+ @classmethod
496
+ def INPUT_TYPES(s):
497
+ return {"required": {
498
+ "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
499
+ "steps": ("INT", {"default": 20, "min": 1, "max": 10000}),
500
+ "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}),
501
+ "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ),
502
+ "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ),
503
+ "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
504
+ "tile_width": ("INT", {"default": 512, "min": 256, "max": MAX_RESOLUTION, "step": 64}),
505
+ "tile_height": ("INT", {"default": 512, "min": 256, "max": MAX_RESOLUTION, "step": 64}),
506
+ "tiling_strategy": (["random", "padded", 'simple'], ),
507
+ "basic_pipe": ("BASIC_PIPE", )
508
+ }}
509
+
510
+ RETURN_TYPES = ("KSAMPLER",)
511
+ FUNCTION = "doit"
512
+
513
+ CATEGORY = "ImpactPack/Sampler"
514
+
515
+ def doit(self, seed, steps, cfg, sampler_name, scheduler, denoise,
516
+ tile_width, tile_height, tiling_strategy, basic_pipe):
517
+ model, _, _, positive, negative = basic_pipe
518
+ sampler = core.TiledKSamplerWrapper(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise,
519
+ tile_width, tile_height, tiling_strategy)
520
+ return (sampler, )
521
+
522
+
523
+ class PixelTiledKSampleUpscalerProvider:
524
+ upscale_methods = ["nearest-exact", "bilinear", "area"]
525
+
526
+ @classmethod
527
+ def INPUT_TYPES(s):
528
+ return {"required": {
529
+ "scale_method": (s.upscale_methods,),
530
+ "model": ("MODEL",),
531
+ "vae": ("VAE",),
532
+ "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
533
+ "steps": ("INT", {"default": 20, "min": 1, "max": 10000}),
534
+ "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}),
535
+ "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ),
536
+ "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ),
537
+ "positive": ("CONDITIONING", ),
538
+ "negative": ("CONDITIONING", ),
539
+ "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
540
+ "tile_width": ("INT", {"default": 512, "min": 256, "max": MAX_RESOLUTION, "step": 64}),
541
+ "tile_height": ("INT", {"default": 512, "min": 256, "max": MAX_RESOLUTION, "step": 64}),
542
+ "tiling_strategy": (["random", "padded", 'simple'], ),
543
+ },
544
+ "optional": {
545
+ "upscale_model_opt": ("UPSCALE_MODEL", ),
546
+ "pk_hook_opt": ("PK_HOOK", ),
547
+ }
548
+ }
549
+
550
+ RETURN_TYPES = ("UPSCALER",)
551
+ FUNCTION = "doit"
552
+
553
+ CATEGORY = "ImpactPack/Upscale"
554
+
555
+ def doit(self, scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise, tile_width, tile_height, tiling_strategy, upscale_model_opt=None, pk_hook_opt=None):
556
+ try:
557
+ import custom_nodes.ComfyUI_TiledKSampler.nodes
558
+ upscaler = core.PixelTiledKSampleUpscaler(scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise, tile_width, tile_height, tiling_strategy, upscale_model_opt, pk_hook_opt)
559
+ return (upscaler, )
560
+ except Exception as e:
561
+ print("[ERROR] PixelTiledKSampleUpscalerProvider: ComfyUI_TiledKSampler custom node isn't installed. You must install BlenderNeko/ComfyUI_TiledKSampler extension to use this node.")
562
+ print(f"\t{e}")
563
+ pass
564
+
565
+
566
+ class PixelTiledKSampleUpscalerProviderPipe:
567
+ upscale_methods = ["nearest-exact", "bilinear", "area"]
568
+
569
+ @classmethod
570
+ def INPUT_TYPES(s):
571
+ return {"required": {
572
+ "scale_method": (s.upscale_methods,),
573
+ "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
574
+ "steps": ("INT", {"default": 20, "min": 1, "max": 10000}),
575
+ "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}),
576
+ "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ),
577
+ "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ),
578
+ "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
579
+ "tile_width": ("INT", {"default": 512, "min": 256, "max": MAX_RESOLUTION, "step": 64}),
580
+ "tile_height": ("INT", {"default": 512, "min": 256, "max": MAX_RESOLUTION, "step": 64}),
581
+ "tiling_strategy": (["random", "padded", 'simple'], ),
582
+ "basic_pipe": ("BASIC_PIPE",)
583
+ },
584
+ "optional": {
585
+ "upscale_model_opt": ("UPSCALE_MODEL", ),
586
+ "pk_hook_opt": ("PK_HOOK", ),
587
+ }
588
+ }
589
+
590
+ RETURN_TYPES = ("UPSCALER",)
591
+ FUNCTION = "doit"
592
+
593
+ CATEGORY = "ImpactPack/Upscale"
594
+
595
+ def doit(self, scale_method, seed, steps, cfg, sampler_name, scheduler, denoise, tile_width, tile_height, tiling_strategy, basic_pipe, upscale_model_opt=None, pk_hook_opt=None):
596
+ try:
597
+ import custom_nodes.ComfyUI_TiledKSampler.nodes
598
+ model, _, vae, positive, negative = basic_pipe
599
+ upscaler = core.PixelTiledKSampleUpscaler(scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise, tile_width, tile_height, tiling_strategy, upscale_model_opt, pk_hook_opt)
600
+ return (upscaler, )
601
+ except Exception as e:
602
+ print("[ERROR] PixelTiledKSampleUpscalerProviderPipe: ComfyUI_TiledKSampler custom node isn't installed. You must install BlenderNeko/ComfyUI_TiledKSampler extension to use this node.")
603
+ print(f"\t{e}")
604
+ pass
605
+
606
+
607
+ class PixelKSampleUpscalerProvider:
608
+ upscale_methods = ["nearest-exact", "bilinear", "area"]
609
+
610
+ @classmethod
611
+ def INPUT_TYPES(s):
612
+ return {"required": {
613
+ "scale_method": (s.upscale_methods,),
614
+ "model": ("MODEL",),
615
+ "vae": ("VAE",),
616
+ "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
617
+ "steps": ("INT", {"default": 20, "min": 1, "max": 10000}),
618
+ "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}),
619
+ "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ),
620
+ "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ),
621
+ "positive": ("CONDITIONING", ),
622
+ "negative": ("CONDITIONING", ),
623
+ "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
624
+ "use_tiled_vae": (["disabled", "enabled"],),
625
+ },
626
+ "optional": {
627
+ "upscale_model_opt": ("UPSCALE_MODEL", ),
628
+ "pk_hook_opt": ("PK_HOOK", ),
629
+ }
630
+ }
631
+
632
+ RETURN_TYPES = ("UPSCALER",)
633
+ FUNCTION = "doit"
634
+
635
+ CATEGORY = "ImpactPack/Upscale"
636
+
637
+ def doit(self, scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise,
638
+ use_tiled_vae, upscale_model_opt=None, pk_hook_opt=None):
639
+ upscaler = core.PixelKSampleUpscaler(scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler,
640
+ positive, negative, denoise, use_tiled_vae == "enabled", upscale_model_opt, pk_hook_opt)
641
+ return (upscaler, )
642
+
643
+
644
+ class PixelKSampleUpscalerProviderPipe(PixelKSampleUpscalerProvider):
645
+ upscale_methods = ["nearest-exact", "bilinear", "area"]
646
+
647
+ @classmethod
648
+ def INPUT_TYPES(s):
649
+ return {"required": {
650
+ "scale_method": (s.upscale_methods,),
651
+ "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
652
+ "steps": ("INT", {"default": 20, "min": 1, "max": 10000}),
653
+ "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}),
654
+ "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ),
655
+ "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ),
656
+ "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
657
+ "use_tiled_vae": (["disabled", "enabled"],),
658
+ "basic_pipe": ("BASIC_PIPE",)
659
+ },
660
+ "optional": {
661
+ "upscale_model_opt": ("UPSCALE_MODEL", ),
662
+ "pk_hook_opt": ("PK_HOOK", ),
663
+ }
664
+ }
665
+
666
+ RETURN_TYPES = ("UPSCALER",)
667
+ FUNCTION = "doit_pipe"
668
+
669
+ CATEGORY = "ImpactPack/Upscale"
670
+
671
+ def doit_pipe(self, scale_method, seed, steps, cfg, sampler_name, scheduler, denoise,
672
+ use_tiled_vae, basic_pipe, upscale_model_opt=None, pk_hook_opt=None):
673
+ model, _, vae, positive, negative = basic_pipe
674
+ upscaler = core.PixelKSampleUpscaler(scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler,
675
+ positive, negative, denoise, use_tiled_vae == "enabled", upscale_model_opt, pk_hook_opt)
676
+ return (upscaler, )
677
+
678
+
679
+ class TwoSamplersForMaskUpscalerProvider:
680
+ upscale_methods = ["nearest-exact", "bilinear", "area"]
681
+
682
+ @classmethod
683
+ def INPUT_TYPES(s):
684
+ return {"required": {
685
+ "scale_method": (s.upscale_methods,),
686
+ "full_sample_schedule": (
687
+ ["none", "interleave1", "interleave2", "interleave3",
688
+ "last1", "last2",
689
+ "interleave1+last1", "interleave2+last1", "interleave3+last1",
690
+ ],),
691
+ "use_tiled_vae": (["disabled", "enabled"],),
692
+ "base_sampler": ("KSAMPLER", ),
693
+ "mask_sampler": ("KSAMPLER", ),
694
+ "mask": ("MASK", ),
695
+ "vae": ("VAE",),
696
+ },
697
+ "optional": {
698
+ "full_sampler_opt": ("KSAMPLER",),
699
+ "upscale_model_opt": ("UPSCALE_MODEL", ),
700
+ "pk_hook_base_opt": ("PK_HOOK", ),
701
+ "pk_hook_mask_opt": ("PK_HOOK", ),
702
+ "pk_hook_full_opt": ("PK_HOOK", ),
703
+ }
704
+ }
705
+
706
+ RETURN_TYPES = ("UPSCALER", )
707
+ FUNCTION = "doit"
708
+
709
+ CATEGORY = "ImpactPack/Upscale"
710
+
711
+ def doit(self, scale_method, full_sample_schedule, use_tiled_vae, base_sampler, mask_sampler, mask, vae,
712
+ full_sampler_opt=None, upscale_model_opt=None,
713
+ pk_hook_base_opt=None, pk_hook_mask_opt=None, pk_hook_full_opt=None):
714
+ upscaler = core.TwoSamplersForMaskUpscaler(scale_method, full_sample_schedule, use_tiled_vae == "enabled",
715
+ base_sampler, mask_sampler, mask, vae, full_sampler_opt, upscale_model_opt,
716
+ pk_hook_base_opt, pk_hook_mask_opt, pk_hook_full_opt)
717
+ return (upscaler, )
718
+
719
+
720
+ class TwoSamplersForMaskUpscalerProviderPipe:
721
+ upscale_methods = ["nearest-exact", "bilinear", "area"]
722
+
723
+ @classmethod
724
+ def INPUT_TYPES(s):
725
+ return {"required": {
726
+ "scale_method": (s.upscale_methods,),
727
+ "full_sample_schedule": (
728
+ ["none", "interleave1", "interleave2", "interleave3",
729
+ "last1", "last2",
730
+ "interleave1+last1", "interleave2+last1", "interleave3+last1",
731
+ ],),
732
+ "use_tiled_vae": (["disabled", "enabled"],),
733
+ "base_sampler": ("KSAMPLER", ),
734
+ "mask_sampler": ("KSAMPLER", ),
735
+ "mask": ("MASK", ),
736
+ "basic_pipe": ("BASIC_PIPE",),
737
+ },
738
+ "optional": {
739
+ "full_sampler_opt": ("KSAMPLER",),
740
+ "upscale_model_opt": ("UPSCALE_MODEL", ),
741
+ "pk_hook_base_opt": ("PK_HOOK", ),
742
+ "pk_hook_mask_opt": ("PK_HOOK", ),
743
+ "pk_hook_full_opt": ("PK_HOOK", ),
744
+ }
745
+ }
746
+
747
+ RETURN_TYPES = ("UPSCALER", )
748
+ FUNCTION = "doit"
749
+
750
+ CATEGORY = "ImpactPack/Upscale"
751
+
752
+ def doit(self, scale_method, full_sample_schedule, use_tiled_vae, base_sampler, mask_sampler, mask, basic_pipe,
753
+ full_sampler_opt=None, upscale_model_opt=None,
754
+ pk_hook_base_opt=None, pk_hook_mask_opt=None, pk_hook_full_opt=None):
755
+ _, _, vae, _, _ = basic_pipe
756
+ upscaler = core.TwoSamplersForMaskUpscaler(scale_method, full_sample_schedule, use_tiled_vae == "enabled",
757
+ base_sampler, mask_sampler, mask, vae, full_sampler_opt, upscale_model_opt,
758
+ pk_hook_base_opt, pk_hook_mask_opt, pk_hook_full_opt)
759
+ return (upscaler, )
760
+
761
+
762
+ class IterativeLatentUpscale:
763
+ @classmethod
764
+ def INPUT_TYPES(s):
765
+ return {"required": {
766
+ "samples": ("LATENT", ),
767
+ "upscale_factor": ("FLOAT", {"default": 1.5, "min": 1, "max": 10000, "step": 0.1}),
768
+ "steps": ("INT", {"default": 3, "min": 1, "max": 10000, "step": 1}),
769
+ "temp_prefix": ("STRING", {"default": ""}),
770
+ "upscaler": ("UPSCALER",)
771
+ }}
772
+
773
+ RETURN_TYPES = ("LATENT",)
774
+ RETURN_NAMES = ("latent",)
775
+ FUNCTION = "doit"
776
+
777
+ CATEGORY = "ImpactPack/Upscale"
778
+
779
+ def doit(self, samples, upscale_factor, steps, temp_prefix, upscaler):
780
+ w = samples['samples'].shape[3]*8 # image width
781
+ h = samples['samples'].shape[2]*8 # image height
782
+
783
+ if temp_prefix == "":
784
+ temp_prefix = None
785
+
786
+ upscale_factor_unit = max(0, (upscale_factor-1.0)/steps)
787
+ current_latent = samples
788
+ scale = 1
789
+
790
+ for i in range(steps-1):
791
+ scale += upscale_factor_unit
792
+ new_w = w*scale
793
+ new_h = h*scale
794
+ print(f"IterativeLatentUpscale[{i+1}/{steps}]: {new_w:.1f}x{new_h:.1f} (scale:{scale:.2f}) ")
795
+ step_info = i, steps
796
+ current_latent = upscaler.upscale_shape(step_info, current_latent, new_w, new_h, temp_prefix)
797
+
798
+ if scale < upscale_factor:
799
+ new_w = w*upscale_factor
800
+ new_h = h*upscale_factor
801
+ print(f"IterativeLatentUpscale[Final]: {new_w:.1f}x{new_h:.1f} (scale:{upscale_factor:.2f}) ")
802
+ step_info = steps, steps
803
+ current_latent = upscaler.upscale_shape(step_info, current_latent, new_w, new_h, temp_prefix)
804
+
805
+ return (current_latent, )
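A stand-alone check of the per-iteration geometry computed by `doit` above (plain Python, no ComfyUI needed), for upscale_factor=1.5 and steps=3 on a 512x512 starting image:

```python
w = h = 512
upscale_factor, steps = 1.5, 3
unit = max(0, (upscale_factor - 1.0) / steps)   # ~0.167 added to the scale per iteration
scale = 1.0
for i in range(steps - 1):
    scale += unit
    print(f"step {i + 1}: {w * scale:.1f}x{h * scale:.1f} (scale {scale:.2f})")
if scale < upscale_factor:
    print(f"final:  {w * upscale_factor:.1f}x{h * upscale_factor:.1f} (scale {upscale_factor:.2f})")
# step 1: 597.3x597.3 (scale 1.17)
# step 2: 682.7x682.7 (scale 1.33)
# final:  768.0x768.0 (scale 1.50)
```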
806
+
807
+
808
+ class IterativeImageUpscale:
809
+ @classmethod
810
+ def INPUT_TYPES(s):
811
+ return {"required": {
812
+ "pixels": ("IMAGE", ),
813
+ "upscale_factor": ("FLOAT", {"default": 1.5, "min": 1, "max": 10000, "step": 0.1}),
814
+ "steps": ("INT", {"default": 3, "min": 1, "max": 10000, "step": 1}),
815
+ "temp_prefix": ("STRING", {"default": ""}),
816
+ "upscaler": ("UPSCALER",),
817
+ "vae": ("VAE",),
818
+ }}
819
+
820
+ RETURN_TYPES = ("IMAGE",)
821
+ RETURN_NAMES = ("image",)
822
+ FUNCTION = "doit"
823
+
824
+ CATEGORY = "ImpactPack/Upscale"
825
+
826
+ def doit(self, pixels, upscale_factor, steps, temp_prefix, upscaler, vae):
827
+ if temp_prefix == "":
828
+ temp_prefix = None
829
+
830
+ if upscaler.is_tiled:
831
+ latent = nodes.VAEEncodeTiled().encode(vae, pixels)[0]
832
+ else:
833
+ latent = nodes.VAEEncode().encode(vae, pixels)[0]
834
+
835
+ refined_latent = IterativeLatentUpscale().doit(latent, upscale_factor, steps, temp_prefix, upscaler)
836
+
837
+ if upscaler.is_tiled:
838
+ pixels = nodes.VAEDecodeTiled().decode(vae, refined_latent[0])[0]
839
+ else:
840
+ pixels = nodes.VAEDecode().decode(vae, refined_latent[0])[0]
841
+
842
+ return (pixels, )
843
+
844
+
845
+ class FaceDetailerPipe:
846
+ @classmethod
847
+ def INPUT_TYPES(s):
848
+ return {"required": {
849
+ "image": ("IMAGE", ),
850
+ "detailer_pipe": ("DETAILER_PIPE",),
851
+ "guide_size": ("FLOAT", {"default": 256, "min": 64, "max": nodes.MAX_RESOLUTION, "step": 8}),
852
+ "guide_size_for": (["bbox", "crop_region"],),
853
+ "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
854
+ "steps": ("INT", {"default": 20, "min": 1, "max": 10000}),
855
+ "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}),
856
+ "sampler_name": (comfy.samplers.KSampler.SAMPLERS,),
857
+ "scheduler": (comfy.samplers.KSampler.SCHEDULERS,),
858
+ "denoise": ("FLOAT", {"default": 0.5, "min": 0.0001, "max": 1.0, "step": 0.01}),
859
+ "feather": ("INT", {"default": 5, "min": 0, "max": 100, "step": 1}),
860
+ "noise_mask": (["enabled", "disabled"], ),
861
+ "force_inpaint": (["disabled", "enabled"], ),
862
+
863
+ "bbox_threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
864
+ "bbox_dilation": ("INT", {"default": 10, "min": 0, "max": 255, "step": 1}),
865
+ "bbox_crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 10, "step": 0.1}),
866
+
867
+ "sam_detection_hint": (["center-1", "horizontal-2", "vertical-2", "rect-4", "diamond-4", "mask-area", "mask-points", "mask-point-bbox", "none"],),
868
+ "sam_dilation": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
869
+ "sam_threshold": ("FLOAT", {"default": 0.93, "min": 0.0, "max": 1.0, "step": 0.01}),
870
+ "sam_bbox_expansion": ("INT", {"default": 0, "min": 0, "max": 1000, "step": 1}),
871
+ "sam_mask_hint_threshold": ("FLOAT", {"default": 0.7, "min": 0.0, "max": 1.0, "step": 0.01}),
872
+ "sam_mask_hint_use_negative": (["False", "Small", "Outter"],),
873
+
874
+ "drop_size": ("INT", {"min": 1, "max": MAX_RESOLUTION, "step": 1, "default": 10}),
875
+ },
876
+ }
877
+
878
+ RETURN_TYPES = ("IMAGE", "IMAGE", "MASK", "DETAILER_PIPE", )
879
+ RETURN_NAMES = ("image", "cropped_refined", "mask", "detailer_pipe")
880
+ FUNCTION = "doit"
881
+
882
+ CATEGORY = "ImpactPack/Simple"
883
+
884
+ def doit(self, image, detailer_pipe, guide_size, guide_size_for, seed, steps, cfg, sampler_name, scheduler,
885
+ denoise, feather, noise_mask, force_inpaint, bbox_threshold, bbox_dilation, bbox_crop_factor,
886
+ sam_detection_hint, sam_dilation, sam_threshold, sam_bbox_expansion,
887
+ sam_mask_hint_threshold, sam_mask_hint_use_negative, drop_size):
888
+
889
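+ # unpack the DETAILER_PIPE bundle and delegate the actual work to FaceDetailer.enhance_face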
+ model, vae, positive, negative, bbox_detector, sam_model_opt = detailer_pipe
890
+
891
+ enhanced_img, cropped_enhanced, mask = FaceDetailer.enhance_face(
892
+ image, model, vae, guide_size, guide_size_for, seed, steps, cfg, sampler_name, scheduler,
893
+ positive, negative, denoise, feather, noise_mask, force_inpaint,
894
+ bbox_threshold, bbox_dilation, bbox_crop_factor,
895
+ sam_detection_hint, sam_dilation, sam_threshold, sam_bbox_expansion, sam_mask_hint_threshold,
896
+ sam_mask_hint_use_negative, drop_size, bbox_detector, sam_model_opt)
897
+
898
+ return enhanced_img, cropped_enhanced, mask, detailer_pipe
899
+
900
+
901
+ class DetailerForEachTest(DetailerForEach):
902
+ RETURN_TYPES = ("IMAGE", "IMAGE", "IMAGE", )
903
+ RETURN_NAMES = ("image", "cropped", "cropped_refined")
904
+ FUNCTION = "doit"
905
+
906
+ CATEGORY = "ImpactPack/Detailer"
907
+
908
+ def doit(self, image, segs, model, vae, guide_size, guide_size_for, seed, steps, cfg, sampler_name, scheduler,
909
+ positive, negative, denoise, feather, noise_mask, force_inpaint):
910
+
911
+ enhanced_img, cropped, cropped_enhanced = \
912
+ DetailerForEach.do_detail(image, segs, model, vae, guide_size, guide_size_for, seed, steps, cfg,
913
+ sampler_name, scheduler, positive, negative, denoise, feather, noise_mask,
914
+ force_inpaint)
915
+
916
+ # set fallback image
917
+ if cropped is None:
918
+ cropped = enhanced_img
919
+
920
+ if cropped_enhanced is None:
921
+ cropped_enhanced = enhanced_img
922
+
923
+ return enhanced_img, cropped, cropped_enhanced,
924
+
925
+
926
+ class DetailerForEachTestPipe(DetailerForEachPipe):
927
+ RETURN_TYPES = ("IMAGE", "IMAGE", "IMAGE", )
928
+ RETURN_NAMES = ("image", "cropped", "cropped_refined")
929
+ FUNCTION = "doit"
930
+
931
+ CATEGORY = "ImpactPack/Detailer"
932
+
933
+ def doit(self, image, segs, guide_size, guide_size_for, seed, steps, cfg, sampler_name, scheduler,
934
+ denoise, feather, noise_mask, force_inpaint, basic_pipe):
935
+
936
+ model, _, vae, positive, negative = basic_pipe
937
+ enhanced_img, cropped, cropped_enhanced = \
938
+ DetailerForEach.do_detail(image, segs, model, vae, guide_size, guide_size_for, seed, steps, cfg,
939
+ sampler_name, scheduler, positive, negative, denoise, feather, noise_mask,
940
+ force_inpaint)
941
+
942
+ # set fallback image
943
+ if cropped is None:
944
+ cropped = enhanced_img
945
+
946
+ if cropped_enhanced is None:
947
+ cropped_enhanced = enhanced_img
948
+
949
+ return enhanced_img, cropped, cropped_enhanced,
950
+
951
+
952
+ class EmptySEGS:
953
+ @classmethod
954
+ def INPUT_TYPES(s):
955
+ return {"required": {},}
956
+
957
+ RETURN_TYPES = ("SEGS",)
958
+ FUNCTION = "doit"
959
+
960
+ CATEGORY = "ImpactPack/Util"
961
+
962
+ def doit(self):
963
+ shape = 0, 0
964
+ return ((shape, []),)
965
+
966
+
967
+ class SegsToCombinedMask:
968
+ @classmethod
969
+ def INPUT_TYPES(s):
970
+ return {"required": {
971
+ "segs": ("SEGS", ),
972
+ }
973
+ }
974
+
975
+ RETURN_TYPES = ("MASK",)
976
+ FUNCTION = "doit"
977
+
978
+ CATEGORY = "ImpactPack/Operation"
979
+
980
+ def doit(self, segs):
981
+ return (core.segs_to_combined_mask(segs), )
982
+
983
+
984
+ class SegsBitwiseAndMask:
985
+ @classmethod
986
+ def INPUT_TYPES(s):
987
+ return {"required": {
988
+ "segs": ("SEGS",),
989
+ "mask": ("MASK",),
990
+ }
991
+ }
992
+
993
+ RETURN_TYPES = ("SEGS",)
994
+ FUNCTION = "doit"
995
+
996
+ CATEGORY = "ImpactPack/Operation"
997
+
998
+ def doit(self, segs, mask):
999
+ return (core.segs_bitwise_and_mask(segs, mask), )
1000
+
1001
+
1002
+ class BitwiseAndMaskForEach:
1003
+ @classmethod
1004
+ def INPUT_TYPES(s):
1005
+ return {"required":
1006
+ {
1007
+ "base_segs": ("SEGS",),
1008
+ "mask_segs": ("SEGS",),
1009
+ }
1010
+ }
1011
+
1012
+ RETURN_TYPES = ("SEGS",)
1013
+ FUNCTION = "doit"
1014
+
1015
+ CATEGORY = "ImpactPack/Operation"
1016
+
1017
+ def doit(self, base_segs, mask_segs):
1018
+
1019
+ result = []
1020
+
1021
+ for bseg in base_segs[1]:
1022
+ cropped_mask1 = bseg.cropped_mask.copy()
1023
+ crop_region1 = bseg.crop_region
1024
+
1025
+ for mseg in mask_segs[1]:
1026
+ cropped_mask2 = mseg.cropped_mask
1027
+ crop_region2 = mseg.crop_region
1028
+
1029
+ # compute the intersection of the two crop regions
1030
+ intersect_region = (max(crop_region1[0], crop_region2[0]),
1031
+ max(crop_region1[1], crop_region2[1]),
1032
+ min(crop_region1[2], crop_region2[2]),
1033
+ min(crop_region1[3], crop_region2[3]))
1034
+
1035
+ overlapped = False
1036
+
1037
+ # set all pixels in cropped_mask1 to 0 except for those that overlap with cropped_mask2
1038
+ for i in range(intersect_region[0], intersect_region[2]):
1039
+ for j in range(intersect_region[1], intersect_region[3]):
1040
+ if cropped_mask1[j - crop_region1[1], i - crop_region1[0]] == 1 and \
1041
+ cropped_mask2[j - crop_region2[1], i - crop_region2[0]] == 1:
1042
+ # pixel overlaps with both masks, keep it as 1
1043
+ overlapped = True
1044
+ pass
1045
+ else:
1046
+ # pixel does not overlap with both masks, set it to 0
1047
+ cropped_mask1[j - crop_region1[1], i - crop_region1[0]] = 0
1048
+
1049
+ if overlapped:
1050
+ item = SEG(bseg.cropped_image, cropped_mask1, bseg.confidence, bseg.crop_region, bseg.bbox, bseg.label)
1051
+ result.append(item)
1052
+
1053
+ return ((base_segs[0], result),)
1054
+
1055
+
1056
+ class SubtractMaskForEach:
1057
+ @classmethod
1058
+ def INPUT_TYPES(s):
1059
+ return {"required": {
1060
+ "base_segs": ("SEGS",),
1061
+ "mask_segs": ("SEGS",),
1062
+ }
1063
+ }
1064
+
1065
+ RETURN_TYPES = ("SEGS",)
1066
+ FUNCTION = "doit"
1067
+
1068
+ CATEGORY = "ImpactPack/Operation"
1069
+
1070
+ def doit(self, base_segs, mask_segs):
1071
+
1072
+ result = []
1073
+
1074
+ for bseg in base_segs[1]:
1075
+ cropped_mask1 = bseg.cropped_mask.copy()
1076
+ crop_region1 = bseg.crop_region
1077
+
1078
+ for mseg in mask_segs[1]:
1079
+ cropped_mask2 = mseg.cropped_mask
1080
+ crop_region2 = mseg.crop_region
1081
+
1082
+ # compute the intersection of the two crop regions
1083
+ intersect_region = (max(crop_region1[0], crop_region2[0]),
1084
+ max(crop_region1[1], crop_region2[1]),
1085
+ min(crop_region1[2], crop_region2[2]),
1086
+ min(crop_region1[3], crop_region2[3]))
1087
+
1088
+ changed = False
1089
+
1090
+ # subtract operation
1091
+ for i in range(intersect_region[0], intersect_region[2]):
1092
+ for j in range(intersect_region[1], intersect_region[3]):
1093
+ if cropped_mask1[j - crop_region1[1], i - crop_region1[0]] == 1 and \
1094
+ cropped_mask2[j - crop_region2[1], i - crop_region2[0]] == 1:
1095
+ # pixel overlaps with both masks, set it as 0
1096
+ changed = True
1097
+ cropped_mask1[j - crop_region1[1], i - crop_region1[0]] = 0
1098
+ else:
1099
+ # pixel does not overlap with both masks, don't care
1100
+ pass
1101
+
1102
+ if changed:
1103
+ item = SEG(bseg.cropped_image, cropped_mask1, bseg.confidence, bseg.crop_region, bseg.bbox, bseg.label)
1104
+ result.append(item)
1105
+ else:
1106
+ result.append(bseg)  # keep the unchanged SEG when nothing was subtracted
1107
+
1108
+ return ((base_segs[0], result),)
1109
+
1110
+
1111
+ class MaskToSEGS:
1112
+ @classmethod
1113
+ def INPUT_TYPES(s):
1114
+ return {"required": {
1115
+ "mask": ("MASK",),
1116
+ "combined": (["False", "True"], ),
1117
+ "crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 10, "step": 0.1}),
1118
+ "bbox_fill": (["disabled", "enabled"], ),
1119
+ "drop_size": ("INT", {"min": 1, "max": MAX_RESOLUTION, "step": 1, "default": 10}),
1120
+ }
1121
+ }
1122
+
1123
+ RETURN_TYPES = ("SEGS",)
1124
+ FUNCTION = "doit"
1125
+
1126
+ CATEGORY = "ImpactPack/Operation"
1127
+
1128
+ def doit(self, mask, combined, crop_factor, bbox_fill, drop_size):
1129
+ result = core.mask_to_segs(mask, combined, crop_factor, bbox_fill == "enabled", drop_size)
1130
+ return (result, )
1131
+
1132
+
1133
+ class ToBinaryMask:
1134
+ @classmethod
1135
+ def INPUT_TYPES(s):
1136
+ return {"required": {
1137
+ "mask": ("MASK",),
1138
+ }
1139
+ }
1140
+
1141
+ RETURN_TYPES = ("MASK",)
1142
+ FUNCTION = "doit"
1143
+
1144
+ CATEGORY = "ImpactPack/Operation"
1145
+
1146
+ def doit(self, mask,):
1147
+ mask = to_binary_mask(mask)
1148
+ return (mask,)
1149
+
1150
+
1151
+ class BitwiseAndMask:
1152
+ @classmethod
1153
+ def INPUT_TYPES(s):
1154
+ return {"required": {
1155
+ "mask1": ("MASK",),
1156
+ "mask2": ("MASK",),
1157
+ }
1158
+ }
1159
+
1160
+ RETURN_TYPES = ("MASK",)
1161
+ FUNCTION = "doit"
1162
+
1163
+ CATEGORY = "ImpactPack/Operation"
1164
+
1165
+ def doit(self, mask1, mask2):
1166
+ mask = bitwise_and_masks(mask1, mask2)
1167
+ return (mask,)
1168
+
1169
+
1170
+ class SubtractMask:
1171
+ @classmethod
1172
+ def INPUT_TYPES(s):
1173
+ return {"required": {
1174
+ "mask1": ("MASK", ),
1175
+ "mask2": ("MASK", ),
1176
+ }
1177
+ }
1178
+
1179
+ RETURN_TYPES = ("MASK",)
1180
+ FUNCTION = "doit"
1181
+
1182
+ CATEGORY = "ImpactPack/Operation"
1183
+
1184
+ def doit(self, mask1, mask2):
1185
+ mask = subtract_masks(mask1, mask2)
1186
+ return (mask,)
1187
+
1188
+
1189
+ import nodes
1190
+
1191
+ class PreviewBridge(nodes.PreviewImage):
1192
+ @classmethod
1193
+ def INPUT_TYPES(s):
1194
+ return {"required": {"images": ("IMAGE",), },
1195
+ "hidden": {"prompt": "PROMPT", "extra_pnginfo": "EXTRA_PNGINFO", },
1196
+ "optional": {"image": (["#placeholder"], )},
1197
+ }
1198
+
1199
+ RETURN_TYPES = ("IMAGE", "MASK", )
1200
+
1201
+ FUNCTION = "doit"
1202
+
1203
+ CATEGORY = "ImpactPack/Util"
1204
+
1205
+ def doit(self, images, image, filename_prefix="ComfyUI", prompt=None, extra_pnginfo=None):
1206
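+ # fresh input image: save it as a preview and reload it so it can be forwarded; otherwise reuse the forwarded file and only load the newly drawn mask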
+ if image == "#placeholder" or image['image_hash'] != id(images):
1207
+ # new input image
1208
+ res = self.save_images(images, filename_prefix, prompt, extra_pnginfo)
1209
+
1210
+ item = res['ui']['images'][0]
1211
+
1212
+ if not item['filename'].endswith(']'):
1213
+ filepath = f"{item['filename']} [{item['type']}]"
1214
+ else:
1215
+ filepath = item['filename']
1216
+
1217
+ image, mask = nodes.LoadImage().load_image(filepath)
1218
+
1219
+ res['ui']['aux'] = [id(images), res['ui']['images']]
1220
+ res['result'] = (image, mask, )
1221
+
1222
+ return res
1223
+
1224
+ else:
1225
+ # new mask
1226
+ forward = {'filename': image['forward_filename'],
1227
+ 'subfolder': image['forward_subfolder'],
1228
+ 'type': image['forward_type'], }
1229
+
1230
+ res = {'ui': {'images': [forward]}}
1231
+
1232
+ imgpath = ""
1233
+ if 'subfolder' in image and image['subfolder'] != "":
1234
+ imgpath = image['subfolder'] + "/"
1235
+
1236
+ imgpath += f"{image['filename']}"
1237
+
1238
+ if 'type' in image and image['type'] != "":
1239
+ imgpath += f" [{image['type']}]"
1240
+
1241
+ res['ui']['aux'] = [id(images), [forward]]
1242
+ res['result'] = nodes.LoadImage().load_image(imgpath)
1243
+
1244
+ return res
1245
+
1246
+
1247
+ class DetailerForEach:
1248
+ @classmethod
1249
+ def INPUT_TYPES(s):
1250
+ return {"required": {
1251
+ "image": ("IMAGE",),
1252
+ "segs": ("SEGS",),
1253
+ "model": ("MODEL",),
1254
+ "vae": ("VAE",),
1255
+ "guide_size": ("FLOAT", {"default": 256, "min": 64, "max": nodes.MAX_RESOLUTION, "step": 8}),
1256
+ "guide_size_for": (["bbox", "crop_region"],),
1257
+ "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
1258
+ "steps": ("INT", {"default": 20, "min": 1, "max": 10000}),
1259
+ "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}),
1260
+ "sampler_name": (comfy.samplers.KSampler.SAMPLERS,),
1261
+ "scheduler": (comfy.samplers.KSampler.SCHEDULERS,),
1262
+ "positive": ("CONDITIONING",),
1263
+ "negative": ("CONDITIONING",),
1264
+ "denoise": ("FLOAT", {"default": 0.5, "min": 0.0001, "max": 1.0, "step": 0.01}),
1265
+ "feather": ("INT", {"default": 5, "min": 0, "max": 100, "step": 1}),
1266
+ "noise_mask": (["enabled", "disabled"],),
1267
+ "force_inpaint": (["disabled", "enabled"],),
1268
+ },
1269
+ }
1270
+
1271
+ RETURN_TYPES = ("IMAGE",)
1272
+ FUNCTION = "doit"
1273
+
1274
+ CATEGORY = "ImpactPack/Detailer"
1275
+
1276
+ @staticmethod
1277
+ def do_detail(image, segs, model, vae, guide_size, guide_size_for, seed, steps, cfg, sampler_name, scheduler,
1278
+ positive, negative, denoise, feather, noise_mask, force_inpaint):
1279
+
1280
+ image_pil = tensor2pil(image).convert('RGBA')
1281
+
1282
+ # shape = segs[0]
1283
+ segs = segs[1]
1284
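+ # for each SEG: crop its region, run a detail sampling pass on the crop, and paste the refined crop back using a feathered mask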
+ for seg in segs:
1285
+ cropped_image = seg.cropped_image if seg.cropped_image is not None \
1286
+ else crop_ndarray4(image.numpy(), seg.crop_region)
1287
+
1288
+ mask_pil = feather_mask(seg.cropped_mask, feather)
1289
+
1290
+ if noise_mask == "enabled":
1291
+ cropped_mask = seg.cropped_mask
1292
+ else:
1293
+ cropped_mask = None
1294
+
1295
+ enhanced_pil = core.enhance_detail(cropped_image, model, vae, guide_size, guide_size_for, seg.bbox,
1296
+ seed, steps, cfg, sampler_name, scheduler,
1297
+ positive, negative, denoise, cropped_mask, force_inpaint == "enabled")
1298
+
1299
+ if not (enhanced_pil is None):
1300
+ # don't composite in latent space -> converting to latent caused poor quality
1301
+ # use image paste
1302
+ image_pil.paste(enhanced_pil, (seg.crop_region[0], seg.crop_region[1]), mask_pil)
1303
+
1304
+ image_tensor = pil2tensor(image_pil.convert('RGB'))
1305
+
1306
+ if len(segs) > 0:
1307
+ enhanced_tensor = pil2tensor(enhanced_pil) if enhanced_pil is not None else None
1308
+ return image_tensor, torch.from_numpy(cropped_image), enhanced_tensor,
1309
+ else:
1310
+ return image_tensor, None, None,
1311
+
1312
+ def doit(self, image, segs, model, vae, guide_size, guide_size_for, seed, steps, cfg, sampler_name, scheduler,
1313
+ positive, negative, denoise, feather, noise_mask, force_inpaint):
1314
+
1315
+ enhanced_img, cropped, cropped_enhanced = \
1316
+ DetailerForEach.do_detail(image, segs, model, vae, guide_size, guide_size_for, seed, steps, cfg,
1317
+ sampler_name, scheduler, positive, negative, denoise, feather, noise_mask,
1318
+ force_inpaint)
1319
+
1320
+ return (enhanced_img,)
1321
+
ComfyUI-Impact-Pack/impact_pipe.py ADDED
@@ -0,0 +1,205 @@
1
+ class ToDetailerPipe:
2
+ @classmethod
3
+ def INPUT_TYPES(s):
4
+ return {"required": {
5
+ "model": ("MODEL",),
6
+ "vae": ("VAE",),
7
+ "positive": ("CONDITIONING",),
8
+ "negative": ("CONDITIONING",),
9
+ "bbox_detector": ("BBOX_DETECTOR", ),
10
+ },
11
+ "optional": {
12
+ "sam_model_opt": ("SAM_MODEL", ),
13
+ }}
14
+
15
+ RETURN_TYPES = ("DETAILER_PIPE", )
16
+ RETURN_NAMES = ("detailer_pipe", )
17
+ FUNCTION = "doit"
18
+
19
+ CATEGORY = "ImpactPack/Pipe"
20
+
21
+ def doit(self, model, vae, positive, negative, bbox_detector, sam_model_opt=None):
22
+ pipe = (model, vae, positive, negative, bbox_detector, sam_model_opt)
23
+ return (pipe, )
24
+
25
+
26
+ class FromDetailerPipe:
27
+ @classmethod
28
+ def INPUT_TYPES(s):
29
+ return {"required": {"detailer_pipe": ("DETAILER_PIPE",), }, }
30
+
31
+ RETURN_TYPES = ("MODEL", "VAE", "CONDITIONING", "CONDITIONING", "BBOX_DETECTOR", "SAM_MODEL")
32
+ RETURN_NAMES = ("model", "vae", "positive", "negative", "bbox_detector", "sam_model_opt")
33
+ FUNCTION = "doit"
34
+
35
+ CATEGORY = "ImpactPack/Pipe"
36
+
37
+ def doit(self, detailer_pipe):
38
+ model, vae, positive, negative, bbox_detector, sam_model_opt = detailer_pipe
39
+ return model, vae, positive, negative, bbox_detector, sam_model_opt
40
+
41
+
42
+ class ToBasicPipe:
43
+ @classmethod
44
+ def INPUT_TYPES(s):
45
+ return {"required": {
46
+ "model": ("MODEL",),
47
+ "clip": ("CLIP",),
48
+ "vae": ("VAE",),
49
+ "positive": ("CONDITIONING",),
50
+ "negative": ("CONDITIONING",),
51
+ },
52
+ }
53
+
54
+ RETURN_TYPES = ("BASIC_PIPE", )
55
+ RETURN_NAMES = ("basic_pipe", )
56
+ FUNCTION = "doit"
57
+
58
+ CATEGORY = "ImpactPack/Pipe"
59
+
60
+ def doit(self, model, clip, vae, positive, negative):
61
+ pipe = (model, clip, vae, positive, negative)
62
+ return (pipe, )
63
+
64
+
65
+ class FromBasicPipe:
66
+ @classmethod
67
+ def INPUT_TYPES(s):
68
+ return {"required": {"basic_pipe": ("BASIC_PIPE",), }, }
69
+
70
+ RETURN_TYPES = ("MODEL", "CLIP", "VAE", "CONDITIONING", "CONDITIONING")
71
+ RETURN_NAMES = ("model", "clip", "vae", "positive", "negative")
72
+ FUNCTION = "doit"
73
+
74
+ CATEGORY = "ImpactPack/Pipe"
75
+
76
+ def doit(self, basic_pipe):
77
+ model, clip, vae, positive, negative = basic_pipe
78
+ return model, clip, vae, positive, negative
79
+
80
+
81
+ class BasicPipeToDetailerPipe:
82
+ @classmethod
83
+ def INPUT_TYPES(s):
84
+ return {"required": {"basic_pipe": ("BASIC_PIPE",),
85
+ "bbox_detector": ("BBOX_DETECTOR", ), },
86
+ "optional": {"sam_model_opt": ("SAM_MODEL", ), },
87
+ }
88
+
89
+ RETURN_TYPES = ("DETAILER_PIPE", )
90
+ RETURN_NAMES = ("detailer_pipe", )
91
+ FUNCTION = "doit"
92
+
93
+ CATEGORY = "ImpactPack/Pipe"
94
+
95
+ def doit(self, basic_pipe, bbox_detector, sam_model_opt=None):
96
+ model, _, vae, positive, negative = basic_pipe
97
+ pipe = model, vae, positive, negative, bbox_detector, sam_model_opt
98
+ return (pipe, )
99
+
100
+
101
+ class DetailerPipeToBasicPipe:
102
+ @classmethod
103
+ def INPUT_TYPES(s):
104
+ return {"required": {"detailer_pipe": ("DETAILER_PIPE",),
105
+ "clip": ("CLIP",), }, }
106
+
107
+ RETURN_TYPES = ("BASIC_PIPE", )
108
+ RETURN_NAMES = ("basic_pipe", )
109
+ FUNCTION = "doit"
110
+
111
+ CATEGORY = "ImpactPack/Pipe"
112
+
113
+ def doit(self, detailer_pipe, clip):
114
+ model, vae, positive, negative, _, _ = detailer_pipe
115
+ pipe = model, clip, vae, positive, negative
116
+ return (pipe, )
117
+
118
+
119
+ class EditBasicPipe:
120
+ @classmethod
121
+ def INPUT_TYPES(s):
122
+ return {
123
+ "required": {"basic_pipe": ("BASIC_PIPE",), },
124
+ "optional": {
125
+ "model": ("MODEL",),
126
+ "clip": ("CLIP",),
127
+ "vae": ("VAE",),
128
+ "positive": ("CONDITIONING",),
129
+ "negative": ("CONDITIONING",),
130
+ },
131
+ }
132
+
133
+ RETURN_TYPES = ("BASIC_PIPE", )
134
+ RETURN_NAMES = ("basic_pipe", )
135
+ FUNCTION = "doit"
136
+
137
+ CATEGORY = "ImpactPack/Pipe"
138
+
139
+ def doit(self, basic_pipe, model=None, clip=None, vae=None, positive=None, negative=None):
140
+ res_model, res_clip, res_vae, res_positive, res_negative = basic_pipe
141
+
142
+ if model is not None:
143
+ res_model = model
144
+
145
+ if clip is not None:
146
+ res_clip = clip
147
+
148
+ if vae is not None:
149
+ res_vae = vae
150
+
151
+ if positive is not None:
152
+ res_positive = positive
153
+
154
+ if negative is not None:
155
+ res_negative = negative
156
+
157
+ pipe = res_model, res_clip, res_vae, res_positive, res_negative
158
+
159
+ return (pipe, )
160
+
161
+
162
+ class EditDetailerPipe:
163
+ @classmethod
164
+ def INPUT_TYPES(s):
165
+ return {
166
+ "required": {"detailer_pipe": ("DETAILER_PIPE",), },
167
+ "optional": {
168
+ "model": ("MODEL",),
169
+ "vae": ("VAE",),
170
+ "positive": ("CONDITIONING",),
171
+ "negative": ("CONDITIONING",),
172
+ "bbox_detector": ("BBOX_DETECTOR",),
173
+ "sam_model": ("SAM_MODEL",), },
174
+ }
175
+
176
+ RETURN_TYPES = ("DETAILER_PIPE",)
177
+ RETURN_NAMES = ("detailer_pipe",)
178
+ FUNCTION = "doit"
179
+
180
+ CATEGORY = "ImpactPack/Pipe"
181
+
182
+ def doit(self, detailer_pipe, model=None, vae=None, positive=None, negative=None, bbox_detector=None, sam_model=None):
183
+ res_model, res_vae, res_positive, res_negative, res_bbox_detector, res_sam_model = detailer_pipe
184
+
185
+ if model is not None:
186
+ res_model = model
187
+
188
+ if vae is not None:
189
+ res_vae = vae
190
+
191
+ if positive is not None:
192
+ res_positive = positive
193
+
194
+ if negative is not None:
195
+ res_negative = negative
196
+
197
+ if bbox_detector is not None:
198
+ res_bbox_detector = bbox_detector
199
+
200
+ if sam_model is not None:
201
+ res_sam_model = sam_model
202
+
203
+ pipe = res_model, res_vae, res_positive, res_negative, res_bbox_detector, res_sam_model
204
+
205
+ return (pipe, )
ComfyUI-Impact-Pack/impact_server.py ADDED
@@ -0,0 +1,148 @@
1
+ import os
2
+ import threading
3
+
4
+ from aiohttp import web
5
+ import server
6
+ import folder_paths
7
+
8
+ import impact_core as core
9
+ import impact_pack
10
+ from segment_anything import SamPredictor, sam_model_registry
11
+ import numpy as np
12
+ import nodes
13
+ from PIL import Image
14
+ import io
15
+
16
+ @server.PromptServer.instance.routes.post("/upload/temp")
17
+ async def upload_image(request):
18
+ upload_dir = folder_paths.get_temp_directory()
19
+
20
+ if not os.path.exists(upload_dir):
21
+ os.makedirs(upload_dir)
22
+
23
+ post = await request.post()
24
+ image = post.get("image")
25
+
26
+ if image and image.file:
27
+ filename = image.filename
28
+ if not filename:
29
+ return web.Response(status=400)
30
+
31
+ split = os.path.splitext(filename)
32
+ i = 1
33
+ while os.path.exists(os.path.join(upload_dir, filename)):
34
+ filename = f"{split[0]} ({i}){split[1]}"
35
+ i += 1
36
+
37
+ filepath = os.path.join(upload_dir, filename)
38
+
39
+ with open(filepath, "wb") as f:
40
+ f.write(image.file.read())
41
+
42
+ return web.json_response({"name": filename})
43
+ else:
44
+ return web.Response(status=400)
45
+
46
+
47
+ sam_predictor = None
48
+ default_sam_model_name = os.path.join(impact_pack.model_path, "sams", "sam_vit_b_01ec64.pth")
49
+
50
+ sam_lock = threading.Condition()
51
+
52
+ last_prepare_data = None
53
+
54
+ @server.PromptServer.instance.routes.post("/sam/prepare")
55
+ async def load_sam_model(request):
56
+ global sam_predictor
57
+ global last_prepare_data
58
+ data = await request.json()
59
+
60
+ with sam_lock:
61
+ if last_prepare_data is not None and last_prepare_data == data:
62
+ # already loaded: skip -- prevent redundant loading
63
+ return web.Response(status=200)
64
+
65
+ last_prepare_data = data
66
+
67
+ model_name = os.path.join(impact_pack.model_path, "sams", data['sam_model_name'])
68
+
69
+ print(f"ComfyUI-Impact-Pack: Loading SAM model '{model_name}'")
70
+
71
+ filename, image_dir = folder_paths.annotated_filepath(data["filename"])
72
+
73
+ if image_dir is None:
74
+ typ = data['type'] if data['type'] != '' else 'output'
75
+ image_dir = folder_paths.get_directory_by_type(typ)
76
+
77
+ if image_dir is None:
78
+ return web.Response(status=400)
79
+
80
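+ # infer the SAM backbone from the checkpoint filename, defaulting to vit_b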
+ if 'vit_h' in model_name:
81
+ model_kind = 'vit_h'
82
+ elif 'vit_l' in model_name:
83
+ model_kind = 'vit_l'
84
+ else:
85
+ model_kind = 'vit_b'
86
+
87
+ sam_model = sam_model_registry[model_kind](checkpoint=model_name)
88
+ sam_predictor = SamPredictor(sam_model)
89
+
90
+ image_path = os.path.join(image_dir, filename)
91
+ image = nodes.LoadImage().load_image(image_path)[0]
92
+ image = np.clip(255. * image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8)
93
+
94
+ sam_predictor.set_image(image, "RGB")
95
+
96
+
97
+ @server.PromptServer.instance.routes.post("/sam/release")
98
+ async def release_sam(request):
99
+ global sam_predictor
100
+
101
+ with sam_lock:
102
+ sam_predictor = None
103
+
104
+ print(f"ComfyUI-Impact-Pack: unloading SAM model")
105
+
106
+
107
+ @server.PromptServer.instance.routes.post("/sam/detect")
108
+ async def sam_detect(request):
109
+ global sam_predictor
110
+ with sam_lock:
111
+ if sam_predictor is not None:
112
+ data = await request.json()
113
+
114
+ positive_points = data['positive_points']
115
+ negative_points = data['negative_points']
116
+ threshold = data['threshold']
117
+
118
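+ # build SAM point prompts: label 1 marks positive clicks, label 0 marks negative clicks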
+ points = []
119
+ plabs = []
120
+
121
+ for p in positive_points:
122
+ points.append(p)
123
+ plabs.append(1)
124
+
125
+ for p in negative_points:
126
+ points.append(p)
127
+ plabs.append(0)
128
+
129
+ detected_masks = core.sam_predict(sam_predictor, points, plabs, None, threshold)
130
+ mask = core.combine_masks2(detected_masks)
131
+
132
+ if mask is None:
133
+ return web.Response(status=400)
134
+
135
+ image = mask.reshape((-1, 1, mask.shape[-2], mask.shape[-1])).movedim(1, -1).expand(-1, -1, -1, 3)
136
+ i = 255. * image.cpu().numpy()
137
+
138
+ img = Image.fromarray(np.clip(i[0], 0, 255).astype(np.uint8))
139
+
140
+ img_buffer = io.BytesIO()
141
+ img.save(img_buffer, format='png')
142
+
143
+ headers = {'Content-Type': 'image/png'}
144
+
145
+ return web.Response(body=img_buffer.getvalue(), headers=headers)
146
+
147
+ else:
148
+ return web.Response(status=400)
ComfyUI-Impact-Pack/impact_utils.py ADDED
@@ -0,0 +1,193 @@
1
+ import torch
2
+ import cv2
3
+ import numpy as np
4
+ from PIL import Image, ImageFilter
5
+
6
+ LANCZOS = (Image.Resampling.LANCZOS if hasattr(Image, 'Resampling') else Image.LANCZOS)
7
+
8
+ def pil2tensor(image):
9
+ return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0)
10
+
11
+
12
+ def tensor2pil(image):
13
+ return Image.fromarray(np.clip(255. * image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8))
14
+
15
+
16
+ def center_of_bbox(bbox):
17
+ w, h = bbox[2] - bbox[0], bbox[3] - bbox[1]
18
+ return bbox[0] + w/2, bbox[1] + h/2
19
+
20
+
21
+ def combine_masks(masks):
22
+ if len(masks) == 0:
23
+ return None
24
+ else:
25
+ initial_cv2_mask = np.array(masks[0][1])
26
+ combined_cv2_mask = initial_cv2_mask
27
+
28
+ for i in range(1, len(masks)):
29
+ cv2_mask = np.array(masks[i][1])
30
+ combined_cv2_mask = cv2.bitwise_or(combined_cv2_mask, cv2_mask)
31
+
32
+ mask = torch.from_numpy(combined_cv2_mask)
33
+ return mask
34
+
35
+
36
+ def combine_masks2(masks):
37
+ if len(masks) == 0:
38
+ return None
39
+ else:
40
+ initial_cv2_mask = np.array(masks[0]).astype(np.uint8)
41
+ combined_cv2_mask = initial_cv2_mask
42
+
43
+ for i in range(1, len(masks)):
44
+ cv2_mask = np.array(masks[i]).astype(np.uint8)
45
+ combined_cv2_mask = cv2.bitwise_or(combined_cv2_mask, cv2_mask)
46
+
47
+ mask = torch.from_numpy(combined_cv2_mask)
48
+ return mask
49
+
50
+
51
+ def bitwise_and_masks(mask1, mask2):
52
+ mask1 = mask1.cpu()
53
+ mask2 = mask2.cpu()
54
+ cv2_mask1 = np.array(mask1)
55
+ cv2_mask2 = np.array(mask2)
56
+ cv2_mask = cv2.bitwise_and(cv2_mask1, cv2_mask2)
57
+ mask = torch.from_numpy(cv2_mask)
58
+ return mask
59
+
60
+
61
+ def to_binary_mask(mask):
62
+ mask = mask.clone().cpu()
63
+ mask[mask != 0] = 1.
64
+ return mask
65
+
66
+
67
+ def dilate_mask(mask, dilation_factor, iter=1):
68
+ if dilation_factor == 0:
69
+ return mask
70
+
71
+ kernel = np.ones((dilation_factor,dilation_factor), np.uint8)
72
+ return cv2.dilate(mask, kernel, iter)
73
+
74
+
75
+ def dilate_masks(segmasks, dilation_factor, iter=1):
76
+ if dilation_factor == 0:
77
+ return segmasks
78
+
79
+ dilated_masks = []
80
+ kernel = np.ones((dilation_factor,dilation_factor), np.uint8)
81
+ for i in range(len(segmasks)):
82
+ cv2_mask = segmasks[i][1]
83
+ dilated_mask = cv2.dilate(cv2_mask, kernel, iterations=iter)
84
+ item = (segmasks[i][0], dilated_mask, segmasks[i][2])
85
+ dilated_masks.append(item)
86
+ return dilated_masks
87
+
88
+
89
+ def feather_mask(mask, thickness):
90
+ pil_mask = Image.fromarray(np.uint8(mask * 255))
91
+
92
+ # Create a feathered mask by applying a Gaussian blur to the mask
93
+ blurred_mask = pil_mask.filter(ImageFilter.GaussianBlur(thickness))
94
+ feathered_mask = Image.new("L", pil_mask.size, 0)
95
+ feathered_mask.paste(blurred_mask, (0, 0), blurred_mask)
96
+ return feathered_mask
97
+
98
+
99
+ def subtract_masks(mask1, mask2):
100
+ mask1 = mask1.cpu()
101
+ mask2 = mask2.cpu()
102
+ cv2_mask1 = np.array(mask1) * 255
103
+ cv2_mask2 = np.array(mask2) * 255
104
+ cv2_mask = cv2.subtract(cv2_mask1, cv2_mask2)
105
+ mask = torch.from_numpy(cv2_mask) / 255.0
106
+ return mask
107
+
108
+
109
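+ # clamp a 1-D window of the given size so it stays within [0, limit]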
+ def normalize_region(limit, startp, size):
110
+ if startp < 0:
111
+ new_endp = min(limit, size)
112
+ new_startp = 0
113
+ elif startp + size > limit:
114
+ new_startp = max(0, limit - size)
115
+ new_endp = limit
116
+ else:
117
+ new_startp = startp
118
+ new_endp = min(limit, startp+size)
119
+
120
+ return int(new_startp), int(new_endp)
121
+
122
+
123
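+ # expand the bbox by crop_factor around its center, then clamp the region to the image bounds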
+ def make_crop_region(w, h, bbox, crop_factor):
124
+ x1 = bbox[0]
125
+ y1 = bbox[1]
126
+ x2 = bbox[2]
127
+ y2 = bbox[3]
128
+
129
+ bbox_w = x2 - x1
130
+ bbox_h = y2 - y1
131
+
132
+ crop_w = bbox_w * crop_factor
133
+ crop_h = bbox_h * crop_factor
134
+
135
+ kernel_x = x1 + bbox_w / 2
136
+ kernel_y = y1 + bbox_h / 2
137
+
138
+ new_x1 = int(kernel_x - crop_w / 2)
139
+ new_y1 = int(kernel_y - crop_h / 2)
140
+
141
+ # make sure position in (w,h)
142
+ new_x1, new_x2 = normalize_region(w, new_x1, crop_w)
143
+ new_y1, new_y2 = normalize_region(h, new_y1, crop_h)
144
+
145
+ return [new_x1, new_y1, new_x2, new_y2]
146
+
147
+
148
+ def crop_ndarray4(npimg, crop_region):
149
+ x1 = crop_region[0]
150
+ y1 = crop_region[1]
151
+ x2 = crop_region[2]
152
+ y2 = crop_region[3]
153
+
154
+ cropped = npimg[:, y1:y2, x1:x2, :]
155
+
156
+ return cropped
157
+
158
+
159
+ def crop_ndarray2(npimg, crop_region):
160
+ x1 = crop_region[0]
161
+ y1 = crop_region[1]
162
+ x2 = crop_region[2]
163
+ y2 = crop_region[3]
164
+
165
+ cropped = npimg[y1:y2, x1:x2]
166
+
167
+ return cropped
168
+
169
+
170
+ def crop_image(image, crop_region):
171
+ return crop_ndarray4(np.array(image), crop_region)
172
+
173
+
174
+ def to_latent_image(pixels, vae):
175
+ x = pixels.shape[1]
176
+ y = pixels.shape[2]
177
+ if pixels.shape[1] != x or pixels.shape[2] != y:
178
+ pixels = pixels[:, :x, :y, :]
179
+ t = vae.encode(pixels[:, :, :, :3])
180
+ return {"samples": t}
181
+
182
+
183
+ def scale_tensor(w, h, image):
184
+ image = tensor2pil(image)
185
+ scaled_image = image.resize((w, h), resample=LANCZOS)
186
+ return pil2tensor(scaled_image)
187
+
188
+
189
+ def scale_tensor_and_to_pil(w,h, image):
190
+ image = tensor2pil(image)
191
+ return image.resize((w, h), resample=LANCZOS)
192
+
193
+
ComfyUI-Impact-Pack/install.py ADDED
@@ -0,0 +1,124 @@
1
+ import os
2
+ import sys
3
+ import subprocess
4
+
5
+
6
+ comfy_path = '../..'
7
+
8
+ sys.path.insert(0, os.path.join(os.path.dirname(os.path.realpath(__file__)), "comfy"))
9
+ sys.path.append('.') # for portable version
10
+ sys.path.append(comfy_path)
11
+
12
+
13
+ import platform
14
+ import folder_paths
15
+ from torchvision.datasets.utils import download_url
16
+ import impact_config
17
+
18
+
19
+ print("### ComfyUI-Impact-Pack: Check dependencies")
20
+
21
+ if "python_embeded" in sys.executable or "python_embedded" in sys.executable:
22
+ pip_install = [sys.executable, '-s', '-m', 'pip', 'install', '--user']
23
+ mim_install = [sys.executable, '-s', '-m', 'mim', 'install', '--user']
24
+ else:
25
+ pip_install = [sys.executable, '-s', '-m', 'pip', 'install']
26
+ mim_install = [sys.executable, '-s', '-m', 'mim', 'install']
27
+
28
+
29
+ def remove_olds():
30
+ comfy_path = os.path.dirname(folder_paths.__file__)
31
+ custom_nodes_path = os.path.join(comfy_path, "custom_nodes")
32
+ old_ini_path = os.path.join(custom_nodes_path, "impact-pack.ini")
33
+ old_py_path = os.path.join(custom_nodes_path, "comfyui-impact-pack.py")
34
+
35
+ if os.path.exists(old_ini_path):
36
+ print(f"Delete legacy file: {old_ini_path}")
37
+ os.remove(old_ini_path)
38
+
39
+ if os.path.exists(old_py_path):
40
+ print(f"Delete legacy file: {old_py_path}")
41
+ os.remove(old_py_path)
42
+
43
+
44
+ def ensure_pip_packages():
45
+ try:
46
+ import cv2
47
+ except Exception:
48
+ try:
49
+ subprocess.check_call(pip_install + ['opencv-python'])
50
+ except:
51
+ print(f"ComfyUI-Impact-Pack: failed to install 'opencv-python'. Please install it manually.")
52
+
53
+ try:
54
+ import segment_anything
55
+ from skimage.measure import label, regionprops
56
+ except Exception:
57
+ my_path = os.path.dirname(__file__)
58
+ requirements_path = os.path.join(my_path, "requirements.txt")
59
+ subprocess.check_call(pip_install + ['-r', requirements_path])
60
+
61
+ try:
62
+ import pycocotools
63
+ except Exception:
64
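+ # non-Windows (or non-x64) systems build pycocotools from source; Windows x64 uses a prebuilt wheel matched to the Python version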
+ if platform.system() not in ["Windows"] or platform.machine() not in ["AMD64", "x86_64"]:
65
+ print(f"Your system is {platform.system()}; !! You need to install 'libpython3-dev' for this step. !!")
66
+
67
+ subprocess.check_call(pip_install + ['pycocotools'])
68
+ else:
69
+ pycocotools = {
70
+ (3, 8): "https://github.com/Bing-su/dddetailer/releases/download/pycocotools/pycocotools-2.0.6-cp38-cp38-win_amd64.whl",
71
+ (3, 9): "https://github.com/Bing-su/dddetailer/releases/download/pycocotools/pycocotools-2.0.6-cp39-cp39-win_amd64.whl",
72
+ (3, 10): "https://github.com/Bing-su/dddetailer/releases/download/pycocotools/pycocotools-2.0.6-cp310-cp310-win_amd64.whl",
73
+ (3, 11): "https://github.com/Bing-su/dddetailer/releases/download/pycocotools/pycocotools-2.0.6-cp311-cp311-win_amd64.whl",
74
+ }
75
+
76
+ version = sys.version_info[:2]
77
+ url = pycocotools[version]
78
+ subprocess.check_call(pip_install + [url])
79
+
80
+
81
+ def ensure_mmdet_package():
82
+ try:
83
+ import mmcv
84
+ import mmdet
85
+ from mmdet.evaluation import get_classes
86
+ except Exception:
87
+ subprocess.check_call(pip_install + ['-U', 'openmim'])
88
+ subprocess.check_call(mim_install + ['mmcv==2.0.0'])
89
+ subprocess.check_call(mim_install + ['mmdet==3.0.0'])
90
+ subprocess.check_call(mim_install + ['mmengine==0.7.3'])
91
+
92
+
93
+ def install():
94
+ remove_olds()
95
+ ensure_pip_packages()
96
+ ensure_mmdet_package()
97
+
98
+ # Download model
99
+ print("### ComfyUI-Impact-Pack: Check basic models")
100
+
101
+ model_path = folder_paths.models_dir
102
+
103
+ bbox_path = os.path.join(model_path, "mmdets", "bbox")
104
+ #segm_path = os.path.join(model_path, "mmdets", "segm") -- deprecated
105
+ sam_path = os.path.join(model_path, "sams")
106
+ onnx_path = os.path.join(model_path, "onnx")
107
+
108
+ if not os.path.exists(os.path.join(bbox_path, "mmdet_anime-face_yolov3.pth")):
109
+ download_url("https://huggingface.co/dustysys/ddetailer/resolve/main/mmdet/bbox/mmdet_anime-face_yolov3.pth", bbox_path)
110
+
111
+ if not os.path.exists(os.path.join(bbox_path, "mmdet_anime-face_yolov3.py")):
112
+ download_url("https://raw.githubusercontent.com/Bing-su/dddetailer/master/config/mmdet_anime-face_yolov3.py", bbox_path)
113
+
114
+ if not os.path.exists(os.path.join(sam_path, "sam_vit_b_01ec64.pth")):
115
+ download_url("https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth", sam_path)
116
+
117
+ if not os.path.exists(onnx_path):
118
+ print(f"### ComfyUI-Impact-Pack: onnx model directory created ({onnx_path})")
119
+ os.mkdir(onnx_path)
120
+
121
+ impact_config.write_config(comfy_path)
122
+
123
+
124
+ install()
ComfyUI-Impact-Pack/js/impact-pack.js ADDED
@@ -0,0 +1,356 @@
1
+ import { app } from "/scripts/app.js";
2
+ import { ComfyDialog, $el } from "/scripts/ui.js";
3
+ import { api } from "/scripts/api.js";
4
+
5
+ // Helper function to convert a data URL to a Blob object
6
+ function dataURLToBlob(dataURL) {
7
+ const parts = dataURL.split(';base64,');
8
+ const contentType = parts[0].split(':')[1];
9
+ const byteString = atob(parts[1]);
10
+ const arrayBuffer = new ArrayBuffer(byteString.length);
11
+ const uint8Array = new Uint8Array(arrayBuffer);
12
+ for (let i = 0; i < byteString.length; i++) {
13
+ uint8Array[i] = byteString.charCodeAt(i);
14
+ }
15
+ return new Blob([arrayBuffer], { type: contentType });
16
+ }
17
+
18
+ async function invalidateImage(node, formData) {
19
+ const filepath = node.images[0];
20
+
21
+ await fetch('/upload/temp', {
22
+ method: 'POST',
23
+ body: formData
24
+ }).then(response => {
25
+ }).catch(error => {
26
+ console.error('Error:', error);
27
+ });
28
+
29
+ const img = new Image();
30
+ img.onload = () => {
31
+ node.imgs = [img];
32
+ app.graph.setDirtyCanvas(true);
33
+ };
34
+
35
+ img.src = `view?filename=${filepath.filename}&type=${filepath.type}`;
36
+ }
37
+
38
+ class ImpactInpaintDialog extends ComfyDialog {
39
+ constructor() {
40
+ super();
41
+ this.element = $el("div.comfy-modal", { parent: document.body },
42
+ [
43
+ $el("div.comfy-modal-content",
44
+ [
45
+ ...this.createButtons()]),
46
+ ]);
47
+ }
48
+
49
+ createButtons() {
50
+ return [
51
+ $el("button", {
52
+ type: "button",
53
+ textContent: "Save",
54
+ onclick: () => {
55
+ const backupCtx = this.backupCanvas.getContext('2d', {transparent: true});
56
+ backupCtx.clearRect(0,0,this.backupCanvas.width,this.backupCanvas.height);
57
+ backupCtx.drawImage(this.maskCanvas,
58
+ 0, 0, this.maskCanvas.width, this.maskCanvas.height,
59
+ 0, 0, this.backupCanvas.width, this.backupCanvas.height);
60
+
61
+ // paste mask data into alpha channel
62
+ const backupData = backupCtx.getImageData(0, 0, this.backupCanvas.width, this.backupCanvas.height);
63
+
64
+ for (let i = 0; i < backupData.data.length; i += 4) {
65
+ if(backupData.data[i+3] == 255)
66
+ backupData.data[i+3] = 0;
67
+ else
68
+ backupData.data[i+3] = 255;
69
+
70
+ backupData.data[i] = 0;
71
+ backupData.data[i+1] = 0;
72
+ backupData.data[i+2] = 0;
73
+ }
74
+
75
+ backupCtx.globalCompositeOperation = 'source-over';
76
+ backupCtx.putImageData(backupData, 0, 0);
77
+
78
+ const dataURL = this.backupCanvas.toDataURL();
79
+ const blob = dataURLToBlob(dataURL);
80
+
81
+ const formData = new FormData();
82
+ const filename = "impact-mask-" + performance.now() + ".png";
83
+
84
+ const item =
85
+ {
86
+ "filename": filename,
87
+ "subfolder": "",
88
+ "type": "temp",
89
+ };
90
+
91
+ this.node.images[0] = item;
92
+ this.node.widgets[1].value = item;
93
+
94
+ formData.append('image', blob, filename);
95
+ invalidateImage(this.node, formData);
96
+ this.close();
97
+ }
98
+ }),
99
+ $el("button", {
100
+ type: "button",
101
+ textContent: "Cancel",
102
+ onclick: () => this.close(),
103
+ }),
104
+ $el("button", {
105
+ type: "button",
106
+ textContent: "Clear",
107
+ onclick: () => {
108
+ this.maskCtx.clearRect(0, 0, this.maskCanvas.width, this.maskCanvas.height);
109
+ },
110
+ }),
111
+ ];
112
+ }
113
+
114
+ show() {
115
+ const imgCanvas = document.createElement('canvas');
116
+ const maskCanvas = document.createElement('canvas');
117
+ const backupCanvas = document.createElement('canvas');
118
+ imgCanvas.id = "imageCanvas";
119
+ maskCanvas.id = "maskCanvas";
120
+ backupCanvas.id = "backupCanvas";
121
+
122
+ this.element.appendChild(imgCanvas);
123
+ this.element.appendChild(maskCanvas);
124
+
125
+ this.node.widgets[1].value = null;
126
+
127
+ this.element.style.display = "block";
128
+ imgCanvas.style.position = "relative";
129
+ imgCanvas.style.top = "200";
130
+ imgCanvas.style.left = "0";
131
+
132
+ maskCanvas.style.position = "absolute";
133
+
134
+ const imgCtx = imgCanvas.getContext('2d');
135
+ const maskCtx = maskCanvas.getContext('2d');
136
+ const backupCtx = backupCanvas.getContext('2d');
137
+
138
+ this.maskCanvas = maskCanvas;
139
+ this.maskCtx = maskCtx;
140
+ this.backupCanvas = backupCanvas;
141
+
142
+ window.addEventListener("resize", () => {
143
+ // repositioning
144
+ imgCanvas.width = window.innerWidth - 250;
145
+ imgCanvas.height = window.innerHeight - 300;
146
+
147
+ // redraw image
148
+ let drawWidth = image.width;
149
+ let drawHeight = image.height;
150
+ if (image.width > imgCanvas.width) {
151
+ drawWidth = imgCanvas.width;
152
+ drawHeight = (drawWidth / image.width) * image.height;
153
+ }
154
+ if (drawHeight > imgCanvas.height) {
155
+ drawHeight = imgCanvas.height;
156
+ drawWidth = (drawHeight / image.height) * image.width;
157
+ }
158
+
159
+ imgCtx.drawImage(image, 0, 0, drawWidth, drawHeight);
160
+
161
+ // update mask
162
+ backupCtx.drawImage(maskCanvas, 0, 0, maskCanvas.width, maskCanvas.height, 0, 0, backupCanvas.width, backupCanvas.height);
163
+
164
+ maskCanvas.width = drawWidth;
165
+ maskCanvas.height = drawHeight;
166
+ maskCanvas.style.top = imgCanvas.offsetTop + "px";
167
+ maskCanvas.style.left = imgCanvas.offsetLeft + "px";
168
+
169
+ maskCtx.drawImage(backupCanvas, 0, 0, backupCanvas.width, backupCanvas.height, 0, 0, maskCanvas.width, maskCanvas.height);
170
+ });
171
+
172
+
173
+ // image load
174
+ const image = new Image();
175
+ image.onload = function() {
176
+ backupCanvas.width = image.width;
177
+ backupCanvas.height = image.height;
178
+ window.dispatchEvent(new Event('resize'));
179
+ };
180
+
181
+ const filepath = this.node.images[0];
182
+ image.src = this.node.imgs[0].src;
183
+ this.image = image;
184
+
185
+
186
+ // event handler for user drawing ------
187
+ let brush_size = 10;
188
+
189
+ function mouse_down(event) {
190
+ if (event.buttons === 1) {
191
+ const maskRect = maskCanvas.getBoundingClientRect();
192
+ const x = event.offsetX || event.targetTouches[0].clientX - maskRect.left;
193
+ const y = event.offsetY || event.targetTouches[0].clientY - maskRect.top;
194
+
195
+ maskCtx.beginPath();
196
+ maskCtx.fillStyle = "rgb(0,0,0)";
197
+ maskCtx.globalCompositeOperation = "source-over";
198
+ maskCtx.arc(x, y, brush_size, 0, Math.PI * 2, false);
199
+ maskCtx.fill();
200
+ }
201
+ }
202
+
203
+ function mouse_move(event) {
204
+ if (event.buttons === 1) {
205
+ event.preventDefault();
206
+ const maskRect = maskCanvas.getBoundingClientRect();
207
+ const x = event.offsetX || event.targetTouches[0].clientX - maskRect.left;
208
+ const y = event.offsetY || event.targetTouches[0].clientY - maskRect.top;
209
+
210
+ maskCtx.beginPath();
211
+ maskCtx.fillStyle = "rgb(0,0,0)";
212
+ maskCtx.globalCompositeOperation = "source-over";
213
+ maskCtx.arc(x, y, brush_size, 0, Math.PI * 2, false);
214
+ maskCtx.fill();
215
+ }
216
+ else if(event.buttons === 2) {
217
+ event.preventDefault();
218
+ const maskRect = maskCanvas.getBoundingClientRect();
219
+ const x = event.offsetX || event.targetTouches[0].clientX - maskRect.left;
220
+ const y = event.offsetY || event.targetTouches[0].clientY - maskRect.top;
221
+
222
+ maskCtx.beginPath();
223
+ maskCtx.globalCompositeOperation = "destination-out";
224
+ maskCtx.arc(x, y, brush_size, 0, Math.PI * 2, false);
225
+ maskCtx.fill();
226
+ }
227
+ }
228
+
229
+ function touch_move(event) {
230
+ event.preventDefault();
231
+ const maskRect = maskCanvas.getBoundingClientRect();
232
+ const x = event.offsetX || event.targetTouches[0].clientX - maskRect.left;
233
+ const y = event.offsetY || event.targetTouches[0].clientY - maskRect.top;
234
+
235
+ maskCtx.beginPath();
236
+ maskCtx.fillStyle = "rgb(0,0,0)";
237
+ maskCtx.globalCompositeOperation = "source-over";
238
+ maskCtx.arc(x, y, brush_size, 0, Math.PI * 2, false);
239
+ maskCtx.fill();
240
+ }
241
+
242
+ function handleWheelEvent(event) {
243
+
244
+ if(event.deltaY < 0)
245
+ brush_size = Math.min(brush_size+2, 100);
246
+ else
247
+ brush_size = Math.max(brush_size-2, 1);
248
+ }
249
+
250
+ maskCanvas.addEventListener("contextmenu", (event) => {
251
+ event.preventDefault();
252
+ });
253
+ maskCanvas.addEventListener('wheel', handleWheelEvent);
254
+ maskCanvas.addEventListener('mousedown', mouse_down);
255
+ maskCanvas.addEventListener('mousemove', mouse_move);
256
+ maskCanvas.addEventListener('touchmove', touch_move);
257
+ }
258
+ }
259
+
260
+ const input_tracking = {};
261
+ const input_dirty = {};
262
+ const output_tracking = {};
263
+
264
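+ // watch the aux output emitted by PreviewBridge: if the upstream image hash changes between executions, mark the node's cached input as dirty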
+ function executeHandler(event) {
265
+ if(event.detail.output.aux){
266
+ const id = event.detail.node;
267
+ if(input_tracking.hasOwnProperty(id)) {
268
+ if(input_tracking.hasOwnProperty(id) && input_tracking[id][0] != event.detail.output.aux[0]) {
269
+ input_dirty[id] = true;
270
+ }
271
+ else{
272
+
273
+ }
274
+ }
275
+
276
+ input_tracking[id] = event.detail.output.aux;
277
+ }
278
+ }
279
+
280
+ var eventRegistered = false;
281
+
282
+ app.registerExtension({
283
+ name: "Comfy.Impack",
284
+ loadedGraphNode(node, app) {
285
+ if (node.comfyClass == "PreviewBridge") {
286
+ if (!eventRegistered) {
287
+ api.addEventListener("executed", executeHandler);
288
+ eventRegistered = true;
289
+ }
290
+
291
+ input_dirty[node.id + ""] = false;
292
+ }
293
+ },
294
+ nodeCreated(node, app) {
295
+ if(node.comfyClass == "MaskPainter") {
296
+ node.addWidget("button", "Edit mask", null, () => {
297
+ this.dlg = new ImpactInpaintDialog(app);
298
+ this.dlg.node = node;
299
+
300
+ if('images' in node) {
301
+ this.dlg.show();
302
+ }
303
+ });
304
+
305
+ node.addWidget("hidden", "mask_image", null, null);
306
+ }
307
+ else if (node.comfyClass == "PreviewBridge") {
308
+ Object.defineProperty(node, "images", {
309
+ set: function(value) {
310
+ node._images = value;
311
+ },
312
+ get: function() {
313
+ const id = node.id+"";
314
+ if(node.widgets[0].value != '#placeholder') {
315
+ var need_invalidate = false;
316
+
317
+ if(input_dirty.hasOwnProperty(id) && input_dirty[id]) {
318
+ node.widgets[0].value = {...input_tracking[id][1]};
319
+ input_dirty[id] = false;
320
+ need_invalidate = true
321
+ }
322
+
323
+ node.widgets[0].value['image_hash'] = app.nodeOutputs[id]['aux'][0];
324
+ node.widgets[0].value['forward_filename'] = app.nodeOutputs[id]['aux'][1][0]['filename'];
325
+ node.widgets[0].value['forward_subfolder'] = app.nodeOutputs[id]['aux'][1][0]['subfolder'];
326
+ node.widgets[0].value['forward_type'] = app.nodeOutputs[id]['aux'][1][0]['type'];
327
+ app.nodeOutputs[id].images = [node.widgets[0].value];
328
+
329
+ if(need_invalidate) {
330
+ Promise.all(
331
+ app.nodeOutputs[id].images.map((src) => {
332
+ return new Promise((r) => {
333
+ const img = new Image();
334
+ img.onload = () => r(img);
335
+ img.onerror = () => r(null);
336
+ img.src = "/view?" + new URLSearchParams(src[0]).toString();
337
+ console.log(`new img => ${img.src}`);
338
+ });
339
+ })
340
+ ).then((imgs) => {
341
+ this.imgs = imgs.filter(Boolean);
342
+ this.setSizeForImage?.();
343
+ app.graph.setDirtyCanvas(true);
344
+ });
345
+ }
346
+
347
+ return app.nodeOutputs[id].images;
348
+ }
349
+ else {
350
+ return node._images;
351
+ }
352
+ }
353
+ });
354
+ }
355
+ }
356
+ });
ComfyUI-Impact-Pack/js/impact-sam-editor.js ADDED
@@ -0,0 +1,626 @@
1
+ import { app } from "/scripts/app.js";
2
+ import { ComfyDialog, $el } from "/scripts/ui.js";
3
+ import { ComfyApp } from "/scripts/app.js";
4
+ import { ClipspaceDialog } from "/extensions/core/clipspace.js";
5
+
6
+ function addMenuHandler(nodeType, cb) {
7
+ const getOpts = nodeType.prototype.getExtraMenuOptions;
8
+ nodeType.prototype.getExtraMenuOptions = function () {
9
+ const r = getOpts.apply(this, arguments);
10
+ cb.apply(this, arguments);
11
+ return r;
12
+ };
13
+ }
14
+
15
+ // Helper function to convert a data URL to a Blob object
16
+ function dataURLToBlob(dataURL) {
17
+ const parts = dataURL.split(';base64,');
18
+ const contentType = parts[0].split(':')[1];
19
+ const byteString = atob(parts[1]);
20
+ const arrayBuffer = new ArrayBuffer(byteString.length);
21
+ const uint8Array = new Uint8Array(arrayBuffer);
22
+ for (let i = 0; i < byteString.length; i++) {
23
+ uint8Array[i] = byteString.charCodeAt(i);
24
+ }
25
+ return new Blob([arrayBuffer], { type: contentType });
26
+ }
27
+
28
+ function loadedImageToBlob(image) {
29
+ const canvas = document.createElement('canvas');
30
+
31
+ canvas.width = image.width;
32
+ canvas.height = image.height;
33
+
34
+ const ctx = canvas.getContext('2d');
35
+
36
+ ctx.drawImage(image, 0, 0);
37
+
38
+ const dataURL = canvas.toDataURL('image/png', 1);
39
+ const blob = dataURLToBlob(dataURL);
40
+
41
+ return blob;
42
+ }
43
+
44
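+ // upload the edited mask, point the clipspace image at the saved file, and refresh the clipspace preview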
+ async function uploadMask(filepath, formData) {
45
+ await fetch('/upload/mask', {
46
+ method: 'POST',
47
+ body: formData
48
+ }).then(response => {}).catch(error => {
49
+ console.error('Error:', error);
50
+ });
51
+
52
+ ComfyApp.clipspace.imgs[ComfyApp.clipspace['selectedIndex']] = new Image();
53
+ ComfyApp.clipspace.imgs[ComfyApp.clipspace['selectedIndex']].src = `view?filename=${filepath.filename}&type=${filepath.type}`;
54
+
55
+ if(ComfyApp.clipspace.images)
56
+ ComfyApp.clipspace.images[ComfyApp.clipspace['selectedIndex']] = filepath;
57
+
58
+ ClipspaceDialog.invalidatePreview();
59
+ }
60
+
61
+ class ImpactSamEditorDialog extends ComfyDialog {
62
+ static instance = null;
63
+
64
+ static getInstance() {
65
+ if(!ImpactSamEditorDialog.instance) {
66
+ ImpactSamEditorDialog.instance = new ImpactSamEditorDialog();
67
+ }
68
+
69
+ return ImpactSamEditorDialog.instance;
70
+ }
71
+
72
+ constructor() {
73
+ super();
74
+ this.element = $el("div.comfy-modal", { parent: document.body },
75
+ [ $el("div.comfy-modal-content",
76
+ [...this.createButtons()]),
77
+ ]);
78
+ }
79
+
80
+ createButtons() {
81
+ return [];
82
+ }
83
+
84
+ createButton(name, callback) {
85
+ var button = document.createElement("button");
86
+ button.innerText = name;
87
+ button.addEventListener("click", callback);
88
+ return button;
89
+ }
90
+
91
+ createLeftButton(name, callback) {
92
+ var button = this.createButton(name, callback);
93
+ button.style.cssFloat = "left";
94
+ button.style.marginRight = "4px";
95
+ return button;
96
+ }
97
+
98
+ createRightButton(name, callback) {
99
+ var button = this.createButton(name, callback);
100
+ button.style.cssFloat = "right";
101
+ button.style.marginLeft = "4px";
102
+ return button;
103
+ }
104
+
105
+ createLeftSlider(self, name, callback) {
106
+ const divElement = document.createElement('div');
107
+ divElement.id = "sam-confidence-slider";
108
+ divElement.style.cssFloat = "left";
109
+ divElement.style.fontFamily = "sans-serif";
110
+ divElement.style.marginRight = "4px";
111
+ divElement.style.color = "var(--input-text)";
112
+ divElement.style.backgroundColor = "var(--comfy-input-bg)";
113
+ divElement.style.borderRadius = "8px";
114
+ divElement.style.borderColor = "var(--border-color)";
115
+ divElement.style.borderStyle = "solid";
116
+ divElement.style.fontSize = "15px";
117
+ divElement.style.height = "21px";
118
+ divElement.style.padding = "1px 6px";
119
+ divElement.style.display = "flex";
120
+ divElement.style.position = "relative";
121
+ divElement.style.top = "2px";
122
+ self.confidence_slider_input = document.createElement('input');
123
+ self.confidence_slider_input.setAttribute('type', 'range');
124
+ self.confidence_slider_input.setAttribute('min', '0');
125
+ self.confidence_slider_input.setAttribute('max', '100');
126
+ self.confidence_slider_input.setAttribute('value', '70');
127
+ const labelElement = document.createElement("label");
128
+ labelElement.textContent = name;
129
+
130
+ divElement.appendChild(labelElement);
131
+ divElement.appendChild(self.confidence_slider_input);
132
+
133
+ self.confidence_slider_input.addEventListener("change", callback);
134
+
135
+ return divElement;
136
+ }
137
+
138
+ async detect_and_invalidate_mask_canvas(self) {
139
+ const mask_img = await self.detect(self);
140
+
141
+ const canvas = self.maskCtx.canvas;
142
+ const ctx = self.maskCtx;
143
+
144
+ ctx.clearRect(0, 0, canvas.width, canvas.height);
145
+
146
+ await new Promise((resolve, reject) => {
147
+ self.mask_image = new Image();
148
+ self.mask_image.onload = function() {
149
+ ctx.drawImage(self.mask_image, 0, 0, canvas.width, canvas.height);
150
+ resolve();
151
+ };
152
+ self.mask_image.onerror = reject;
153
+ self.mask_image.src = mask_img.src;
154
+ });
155
+ }
156
+
157
+ setlayout(imgCanvas, maskCanvas, pointsCanvas) {
158
+ const self = this;
159
+
160
+ // If it is specified as relative, using it only as a hidden placeholder for padding is recommended
161
+ // to prevent anomalies where it exceeds a certain size and goes outside of the window.
162
+ var placeholder = document.createElement("div");
163
+ placeholder.style.position = "relative";
164
+ placeholder.style.height = "50px";
165
+
166
+ var bottom_panel = document.createElement("div");
167
+ bottom_panel.style.position = "absolute";
168
+ bottom_panel.style.bottom = "0px";
169
+ bottom_panel.style.left = "20px";
170
+ bottom_panel.style.right = "20px";
171
+ bottom_panel.style.height = "50px";
172
+
173
+ var brush = document.createElement("div");
174
+ brush.id = "sam-brush";
175
+ brush.style.backgroundColor = "blue";
176
+ brush.style.outline = "2px solid pink";
177
+ brush.style.borderRadius = "50%";
178
+ brush.style.MozBorderRadius = "50%";
179
+ brush.style.WebkitBorderRadius = "50%";
180
+ brush.style.position = "absolute";
181
+ brush.style.zIndex = 100;
182
+ brush.style.pointerEvents = "none";
183
+ this.brush = brush;
184
+ this.element.appendChild(imgCanvas);
185
+ this.element.appendChild(maskCanvas);
186
+ this.element.appendChild(pointsCanvas);
187
+ this.element.appendChild(placeholder); // must have a lower z-index than bottom_panel so it doesn't cover the buttons
188
+ this.element.appendChild(bottom_panel);
189
+ document.body.appendChild(brush);
190
+ this.brush_size = 5;
191
+
192
+ var confidence_slider = this.createLeftSlider(self, "Confidence", (event) => {
193
+ self.confidence = event.target.value;
194
+ });
195
+
196
+ var clearButton = this.createLeftButton("Clear", () => {
197
+ self.maskCtx.clearRect(0, 0, self.maskCanvas.width, self.maskCanvas.height);
198
+ self.pointsCtx.clearRect(0, 0, self.pointsCanvas.width, self.pointsCanvas.height);
199
+
200
+ self.prompt_points = [];
201
+
202
+ self.invalidatePointsCanvas(self);
203
+ });
204
+
205
+ var detectButton = this.createLeftButton("Detect", () => self.detect_and_invalidate_mask_canvas(self));
206
+
207
+ var cancelButton = this.createRightButton("Cancel", () => {
208
+ document.removeEventListener("mouseup", ImpactSamEditorDialog.handleMouseUp);
209
+ document.removeEventListener("keydown", ImpactSamEditorDialog.handleKeyDown);
210
+ self.close();
211
+ });
212
+
213
+ self.saveButton = this.createRightButton("Save", () => {
214
+ document.removeEventListener("mouseup", ImpactSamEditorDialog.handleMouseUp);
215
+ document.removeEventListener("keydown", ImpactSamEditorDialog.handleKeyDown);
216
+ self.save(self);
217
+ });
218
+
219
+ var undoButton = this.createLeftButton("Undo", () => {
220
+ if(self.prompt_points.length > 0) {
221
+ self.prompt_points.pop();
222
+ self.pointsCtx.clearRect(0, 0, self.pointsCanvas.width, self.pointsCanvas.height);
223
+ self.invalidatePointsCanvas(self);
224
+ }
225
+ });
226
+
227
+ bottom_panel.appendChild(clearButton);
228
+ bottom_panel.appendChild(detectButton);
229
+ bottom_panel.appendChild(self.saveButton);
230
+ bottom_panel.appendChild(cancelButton);
231
+ bottom_panel.appendChild(confidence_slider);
232
+ bottom_panel.appendChild(undoButton);
233
+
234
+ imgCanvas.style.position = "relative";
235
+ imgCanvas.style.top = "200";
236
+ imgCanvas.style.left = "0";
237
+
238
+ maskCanvas.style.position = "absolute";
239
+ maskCanvas.style.opacity = 0.5;
240
+ pointsCanvas.style.position = "absolute";
241
+ }
242
+
243
+ show() {
244
+ this.mask_image = null;
245
+ this.prompt_points = [];
246
+
247
+ this.message_box = $el("p", ["Please wait a moment while the SAM model and the image are being loaded."]);
248
+ this.element.appendChild(this.message_box);
249
+
250
+ if(this.imgCtx) {
251
+ this.imgCtx.clearRect(0, 0, this.imgCanvas.width, this.imgCanvas.height);
252
+ }
253
+
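+ // the image currently selected in clipspace is the editing target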
254
+ const target_image_path = ComfyApp.clipspace.imgs[ComfyApp.clipspace['selectedIndex']].src;
255
+ this.load_sam(target_image_path);
256
+
257
+ if(!this.is_layout_created) {
258
+ // layout
259
+ const imgCanvas = document.createElement('canvas');
260
+ const maskCanvas = document.createElement('canvas');
261
+ const pointsCanvas = document.createElement('canvas');
262
+
263
+ imgCanvas.id = "imageCanvas";
264
+ maskCanvas.id = "maskCanvas";
265
+ pointsCanvas.id = "pointsCanvas";
266
+
267
+ this.setlayout(imgCanvas, maskCanvas, pointsCanvas);
268
+
269
+ // prepare content
270
+ this.imgCanvas = imgCanvas;
271
+ this.maskCanvas = maskCanvas;
272
+ this.pointsCanvas = pointsCanvas;
273
+ this.maskCtx = maskCanvas.getContext('2d');
274
+ this.pointsCtx = pointsCanvas.getContext('2d');
275
+
276
+ this.is_layout_created = true;
277
+
278
+ // substitute for an onClose hook, since close() only hides the dialog rather than actually closing it
279
+ const self = this;
280
+ const observer = new MutationObserver(function(mutations) {
281
+ mutations.forEach(function(mutation) {
282
+ if (mutation.type === 'attributes' && mutation.attributeName === 'style') {
283
+ if(self.last_display_style && self.last_display_style != 'none' && self.element.style.display == 'none') {
284
+ ComfyApp.onClipspaceEditorClosed();
285
+ }
286
+
287
+ self.last_display_style = self.element.style.display;
288
+ }
289
+ });
290
+ });
291
+
292
+ const config = { attributes: true };
293
+ observer.observe(this.element, config);
294
+ }
295
+
296
+ this.setImages(target_image_path, this.imgCanvas, this.pointsCanvas);
297
+
298
+ if(ComfyApp.clipspace_return_node) {
299
+ this.saveButton.innerText = "Save to node";
300
+ }
301
+ else {
302
+ this.saveButton.innerText = "Save";
303
+ }
304
+ this.saveButton.disabled = true;
305
+
306
+ this.element.style.display = "block";
307
+ this.element.style.zIndex = 8888; // NOTE: alert dialog must be high priority.
308
+ }
309
+
310
+ updateBrushPreview(self, event) {
311
+ event.preventDefault();
312
+
313
+ const centerX = event.pageX;
314
+ const centerY = event.pageY;
315
+
316
+ const brush = self.brush;
317
+
318
+ brush.style.width = self.brush_size * 2 + "px";
319
+ brush.style.height = self.brush_size * 2 + "px";
320
+ brush.style.left = (centerX - self.brush_size) + "px";
321
+ brush.style.top = (centerY - self.brush_size) + "px";
322
+ }
323
+
324
+ setImages(target_image_path, imgCanvas, pointsCanvas) {
325
+ const imgCtx = imgCanvas.getContext('2d');
326
+ const maskCtx = this.maskCtx;
327
+ const maskCanvas = this.maskCanvas;
328
+
329
+ const self = this;
330
+
331
+ // image load
332
+ const orig_image = new Image();
333
+ window.addEventListener("resize", () => {
334
+ // resize the image canvas to fit the window
335
+ imgCanvas.width = window.innerWidth - 250;
336
+ imgCanvas.height = window.innerHeight - 200;
337
+
338
+ // redraw image
339
+ let drawWidth = orig_image.width;
340
+ let drawHeight = orig_image.height;
341
+
342
+ if (orig_image.width > imgCanvas.width) {
343
+ drawWidth = imgCanvas.width;
344
+ drawHeight = (drawWidth / orig_image.width) * orig_image.height;
345
+ }
346
+
347
+ if (drawHeight > imgCanvas.height) {
348
+ drawHeight = imgCanvas.height;
349
+ drawWidth = (drawHeight / orig_image.height) * orig_image.width;
350
+ }
351
+
352
+ imgCtx.drawImage(orig_image, 0, 0, drawWidth, drawHeight);
353
+
354
+ // resize and reposition the points and mask canvases to stay aligned with the drawn image
355
+ pointsCanvas.width = drawWidth;
356
+ pointsCanvas.height = drawHeight;
357
+ pointsCanvas.style.top = imgCanvas.offsetTop + "px";
358
+ pointsCanvas.style.left = imgCanvas.offsetLeft + "px";
359
+
360
+ maskCanvas.width = drawWidth;
361
+ maskCanvas.height = drawHeight;
362
+ maskCanvas.style.top = imgCanvas.offsetTop + "px";
363
+ maskCanvas.style.left = imgCanvas.offsetLeft + "px";
364
+
365
+ self.invalidateMaskCanvas(self);
366
+ self.invalidatePointsCanvas(self);
367
+ });
368
+
369
+ // original image load
370
+ orig_image.onload = () => self.onLoaded(self);
371
+ const rgb_url = new URL(target_image_path);
372
+ rgb_url.searchParams.delete('channel');
373
+ rgb_url.searchParams.set('channel', 'rgb');
374
+ orig_image.src = rgb_url;
375
+ self.image = orig_image;
376
+ }
377
+
378
+ onLoaded(self) {
379
+ if(self.message_box) {
380
+ self.element.removeChild(self.message_box);
381
+ self.message_box = null;
382
+ }
383
+
384
+ window.dispatchEvent(new Event('resize'));
385
+
386
+ self.setEventHandler(self.pointsCanvas);
387
+ self.saveButton.disabled = false;
388
+ }
389
+
390
+ setEventHandler(targetCanvas) {
391
+ targetCanvas.addEventListener("contextmenu", (event) => {
392
+ event.preventDefault();
393
+ });
394
+
395
+ const self = this;
396
+ targetCanvas.addEventListener('pointermove', (event) => this.updateBrushPreview(self,event));
397
+ targetCanvas.addEventListener('pointerdown', (event) => this.handlePointerDown(self,event));
398
+ targetCanvas.addEventListener('pointerover', (event) => { this.brush.style.display = "block"; });
399
+ targetCanvas.addEventListener('pointerleave', (event) => { this.brush.style.display = "none"; });
400
+ document.addEventListener('keydown', ImpactSamEditorDialog.handleKeyDown);
401
+ }
402
+
403
+ static handleKeyDown(event) {
404
+ const self = ImpactSamEditorDialog.instance;
405
+ if (event.key === '=') { // positive
406
+ self.brush.style.backgroundColor = "blue";
407
+ self.brush.style.outline = "2px solid pink";
408
+ self.is_positive_mode = true;
409
+ } else if (event.key === '-') { // negative
410
+ self.brush.style.backgroundColor = "red";
411
+ self.brush.style.outline = "2px solid skyblue";
412
+ self.is_positive_mode = false;
413
+ }
414
+ }
415
+
416
+ is_positive_mode = true;
417
+ prompt_points = [];
418
+ confidence = 70;
419
+
420
+ invalidatePointsCanvas(self) {
421
+ const ctx = self.pointsCtx;
422
+
423
+ for (const i in self.prompt_points) {
424
+ const [is_positive, x, y] = self.prompt_points[i];
425
+
426
+ const scaledX = x * ctx.canvas.width / self.image.width;
427
+ const scaledY = y * ctx.canvas.height / self.image.height;
428
+
429
+ if(is_positive)
430
+ ctx.fillStyle = "blue";
431
+ else
432
+ ctx.fillStyle = "red";
433
+ ctx.beginPath();
434
+ ctx.arc(scaledX, scaledY, 3, 0, 2 * Math.PI);
435
+ ctx.fill();
436
+ }
437
+ }
438
+
439
+ invalidateMaskCanvas(self) {
440
+ if(self.mask_image) {
441
+ self.maskCtx.clearRect(0, 0, self.maskCanvas.width, self.maskCanvas.height);
442
+ self.maskCtx.drawImage(self.mask_image, 0, 0, self.maskCanvas.width, self.maskCanvas.height);
443
+ }
444
+ }
445
+
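+ // tell the /sam/prepare endpoint which SAM checkpoint and which source image to load before detection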
446
+ async load_sam(url) {
447
+ const parsedUrl = new URL(url);
448
+ const searchParams = new URLSearchParams(parsedUrl.search);
449
+
450
+ const filename = searchParams.get("filename") || "";
451
+ const fileType = searchParams.get("type") || "";
452
+ const subfolder = searchParams.get("subfolder") || "";
453
+
454
+ const data = {
455
+ sam_model_name: "sam_vit_b_01ec64.pth",
456
+ filename: filename,
457
+ type: fileType,
458
+ subfolder: subfolder
459
+ };
460
+
461
+ fetch('/sam/prepare', {
462
+ method: 'POST',
463
+ headers: { 'Content-Type': 'application/json' },
464
+ body: JSON.stringify(data)
465
+ });
466
+ }
467
+
468
+ async detect(self) {
469
+ const positive_points = [];
470
+ const negative_points = [];
471
+
472
+ for(const i in self.prompt_points) {
473
+ const [is_positive, x, y] = self.prompt_points[i];
474
+ const point = [x,y];
475
+ if(is_positive)
476
+ positive_points.push(point);
477
+ else
478
+ negative_points.push(point);
479
+ }
480
+
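+ // the confidence slider reports 0-100, while the detect endpoint expects a 0.0-1.0 threshold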
481
+ const data = {
482
+ positive_points: positive_points,
483
+ negative_points: negative_points,
484
+ threshold: self.confidence/100
485
+ };
486
+
487
+ const response = await fetch('/sam/detect', {
488
+ method: 'POST',
489
+ headers: { 'Content-Type': 'application/json' },
490
+ body: JSON.stringify(data)
491
+ });
492
+
493
+ const blob = await response.blob();
494
+ const url = URL.createObjectURL(blob);
495
+
496
+ return new Promise((resolve, reject) => {
497
+ const image = new Image();
498
+ image.onload = () => resolve(image);
499
+ image.onerror = reject;
500
+ image.src = url;
501
+ });
502
+ }
503
+
504
+ handlePointerDown(self, event) {
505
+ if ([0, 2, 5].includes(event.button)) {
506
+ event.preventDefault();
507
+ const x = event.offsetX || event.targetTouches[0].clientX - maskRect.left;
508
+ const y = event.offsetY || event.targetTouches[0].clientY - maskRect.top;
509
+
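+ // scale the click position from canvas coordinates back to original-image coordinates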
510
+ const originalX = x * self.image.width / self.pointsCanvas.width;
511
+ const originalY = y * self.image.height / self.pointsCanvas.height;
512
+
513
+ var point = null;
514
+ if (event.button == 0) {
515
+ // positive
516
+ point = [true, originalX, originalY];
517
+ } else {
518
+ // negative
519
+ point = [false, originalX, originalY];
520
+ }
521
+
522
+ self.prompt_points.push(point);
523
+
524
+ self.invalidatePointsCanvas(self);
525
+ }
526
+ }
527
+
528
+ async save(self) {
529
+ if(!self.mask_image) {
530
+ this.close();
531
+ return;
532
+ }
533
+
534
+ const save_canvas = document.createElement('canvas');
535
+
536
+ const save_ctx = save_canvas.getContext('2d', {willReadFrequently:true});
537
+ save_canvas.width = self.mask_image.width;
538
+ save_canvas.height = self.mask_image.height;
539
+
540
+ save_ctx.drawImage(self.mask_image, 0, 0, save_canvas.width, save_canvas.height);
541
+
542
+ const save_data = save_ctx.getImageData(0, 0, save_canvas.width, save_canvas.height);
543
+
544
+ // refine mask image: detected (non-black) pixels become transparent, everything else opaque black, so the mask is carried in the alpha channel
545
+ for (let i = 0; i < save_data.data.length; i += 4) {
546
+ if(save_data.data[i]) {
547
+ save_data.data[i+3] = 0;
548
+ }
549
+ else {
550
+ save_data.data[i+3] = 255;
551
+ }
552
+
553
+ save_data.data[i] = 0;
554
+ save_data.data[i+1] = 0;
555
+ save_data.data[i+2] = 0;
556
+ }
557
+
558
+ save_ctx.globalCompositeOperation = 'source-over';
559
+ save_ctx.putImageData(save_data, 0, 0);
560
+
561
+ const formData = new FormData();
562
+ const filename = "clipspace-mask-" + performance.now() + ".png";
563
+
564
+ const item =
565
+ {
566
+ "filename": filename,
567
+ "subfolder": "",
568
+ "type": "temp",
569
+ };
570
+
571
+ if(ComfyApp.clipspace.images)
572
+ ComfyApp.clipspace.images[0] = item;
573
+
574
+ if(ComfyApp.clipspace.widgets) {
575
+ const index = ComfyApp.clipspace.widgets.findIndex(obj => obj.name === 'image');
576
+
577
+ if(index >= 0)
578
+ ComfyApp.clipspace.widgets[index].value = item;
579
+ }
580
+
581
+ const dataURL = save_canvas.toDataURL();
582
+ const blob = dataURLToBlob(dataURL);
583
+
584
+ const original_blob = loadedImageToBlob(this.image);
585
+
586
+ formData.append('image', blob, filename);
587
+ formData.append('original_image', original_blob);
588
+ formData.append('type', "temp");
589
+
590
+ await uploadMask(item, formData);
591
+ ComfyApp.onClipspaceEditorSave();
592
+ this.close();
593
+ }
594
+ }
595
+
596
+ app.registerExtension({
597
+ name: "Comfy.Impact.SAMEditor",
598
+ init(app) {
599
+ const callback =
600
+ function () {
601
+ let dlg = ImpactSamEditorDialog.getInstance();
602
+ dlg.show();
603
+ };
604
+
605
+ const context_predicate = () => ComfyApp.clipspace && ComfyApp.clipspace.imgs && ComfyApp.clipspace.imgs.length > 0
606
+ ClipspaceDialog.registerButton("Impact SAM Detector", context_predicate, callback);
607
+ },
608
+
609
+ async beforeRegisterNodeDef(nodeType, nodeData, app) {
610
+ if (nodeData.output.includes("MASK") && nodeData.output.includes("IMAGE")) {
611
+ addMenuHandler(nodeType, function (_, options) {
612
+ options.unshift({
613
+ content: "Open in SAM Detector",
614
+ callback: () => {
615
+ ComfyApp.copyToClipspace(this);
616
+ ComfyApp.clipspace_return_node = this;
617
+
618
+ let dlg = ImpactSamEditorDialog.getInstance();
619
+ dlg.show();
620
+ },
621
+ });
622
+ });
623
+ }
624
+ }
625
+ });
626
+
ComfyUI-Impact-Pack/legacy.py ADDED
File without changes
ComfyUI-Impact-Pack/legacy_nodes.py ADDED
@@ -0,0 +1,258 @@
1
+ import folder_paths
2
+ import impact_core as core
3
+ from impact_utils import *
4
+ from impact_core import SEG
5
+ import nodes
6
+ import os
7
+
8
+ class NO_BBOX_MODEL:
9
+ pass
10
+
11
+
12
+ class NO_SEGM_MODEL:
13
+ pass
14
+
15
+
16
+ class MMDetLoader:
17
+ @classmethod
18
+ def INPUT_TYPES(s):
19
+ bboxs = ["bbox/"+x for x in folder_paths.get_filename_list("mmdets_bbox")]
20
+ segms = ["segm/"+x for x in folder_paths.get_filename_list("mmdets_segm")]
21
+ return {"required": {"model_name": (bboxs + segms, )}}
22
+ RETURN_TYPES = ("BBOX_MODEL", "SEGM_MODEL")
23
+ FUNCTION = "load_mmdet"
24
+
25
+ CATEGORY = "ImpactPack/Legacy"
26
+
27
+ def load_mmdet(self, model_name):
28
+ mmdet_path = folder_paths.get_full_path("mmdets", model_name)
29
+ model = core.load_mmdet(mmdet_path)
30
+
31
+ if model_name.startswith("bbox"):
32
+ return model, NO_SEGM_MODEL()
33
+ else:
34
+ return NO_BBOX_MODEL(), model
35
+
36
+
37
+ class BboxDetectorForEach:
38
+ @classmethod
39
+ def INPUT_TYPES(s):
40
+ return {"required": {
41
+ "bbox_model": ("BBOX_MODEL", ),
42
+ "image": ("IMAGE", ),
43
+ "threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
44
+ "dilation": ("INT", {"default": 10, "min": 0, "max": 255, "step": 1}),
45
+ "crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 10, "step": 0.1}),
46
+ }
47
+ }
48
+
49
+ RETURN_TYPES = ("SEGS", )
50
+ FUNCTION = "doit"
51
+
52
+ CATEGORY = "ImpactPack/Legacy"
53
+
54
+ @staticmethod
55
+ def detect(bbox_model, image, threshold, dilation, crop_factor, drop_size=1):
56
+ mmdet_results = core.inference_bbox(bbox_model, image, threshold)
57
+ segmasks = core.create_segmasks(mmdet_results)
58
+
59
+ if dilation > 0:
60
+ segmasks = dilate_masks(segmasks, dilation)
61
+
62
+ items = []
63
+ h = image.shape[1]
64
+ w = image.shape[2]
65
+ for x in segmasks:
66
+ item_bbox = x[0]
67
+ item_mask = x[1]
68
+
69
+ y1, x1, y2, x2 = item_bbox
70
+
71
+ if x2 - x1 > drop_size and y2 - y1 > drop_size:
72
+ crop_region = make_crop_region(w, h, item_bbox, crop_factor)
73
+ cropped_image = crop_image(image, crop_region)
74
+ cropped_mask = crop_ndarray2(item_mask, crop_region)
75
+ confidence = x[2]
76
+ # bbox_size = (item_bbox[2]-item_bbox[0],item_bbox[3]-item_bbox[1]) # (w,h)
77
+
78
+ item = SEG(cropped_image, cropped_mask, confidence, crop_region, item_bbox)
79
+ items.append(item)
80
+
81
+ shape = h, w
82
+ return shape, items
83
+
84
+ def doit(self, bbox_model, image, threshold, dilation, crop_factor):
85
+ return (BboxDetectorForEach.detect(bbox_model, image, threshold, dilation, crop_factor), )
86
+
87
+
88
+ class SegmDetectorCombined:
89
+ @classmethod
90
+ def INPUT_TYPES(s):
91
+ return {"required": {
92
+ "segm_model": ("SEGM_MODEL", ),
93
+ "image": ("IMAGE", ),
94
+ "threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
95
+ "dilation": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
96
+ }
97
+ }
98
+
99
+ RETURN_TYPES = ("MASK",)
100
+ FUNCTION = "doit"
101
+
102
+ CATEGORY = "ImpactPack/Legacy"
103
+
104
+ def doit(self, segm_model, image, threshold, dilation):
105
+ mmdet_results = core.inference_segm(image, segm_model, threshold)
106
+ segmasks = core.create_segmasks(mmdet_results)
107
+ if dilation > 0:
108
+ segmasks = dilate_masks(segmasks, dilation)
109
+
110
+ mask = combine_masks(segmasks)
111
+ return (mask,)
112
+
113
+
114
+ class BboxDetectorCombined(SegmDetectorCombined):
115
+ @classmethod
116
+ def INPUT_TYPES(s):
117
+ return {"required": {
118
+ "bbox_model": ("BBOX_MODEL", ),
119
+ "image": ("IMAGE", ),
120
+ "threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
121
+ "dilation": ("INT", {"default": 4, "min": 0, "max": 255, "step": 1}),
122
+ }
123
+ }
124
+
125
+ def doit(self, bbox_model, image, threshold, dilation):
126
+ mmdet_results = core.inference_bbox(bbox_model, image, threshold)
127
+ segmasks = core.create_segmasks(mmdet_results)
128
+ if dilation > 0:
129
+ segmasks = dilate_masks(segmasks, dilation)
130
+
131
+ mask = combine_masks(segmasks)
132
+ return (mask,)
133
+
134
+
135
+ class SegmDetectorForEach:
136
+ @classmethod
137
+ def INPUT_TYPES(s):
138
+ return {"required": {
139
+ "segm_model": ("SEGM_MODEL", ),
140
+ "image": ("IMAGE", ),
141
+ "threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
142
+ "dilation": ("INT", {"default": 10, "min": 0, "max": 255, "step": 1}),
143
+ "crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 10, "step": 0.1}),
144
+ }
145
+ }
146
+
147
+ RETURN_TYPES = ("SEGS", )
148
+ FUNCTION = "doit"
149
+
150
+ CATEGORY = "ImpactPack/Legacy"
151
+
152
+ def doit(self, segm_model, image, threshold, dilation, crop_factor):
153
+ mmdet_results = core.inference_segm(image, segm_model, threshold)
154
+ segmasks = core.create_segmasks(mmdet_results)
155
+
156
+ if dilation > 0:
157
+ segmasks = dilate_masks(segmasks, dilation)
158
+
159
+ items = []
160
+ h = image.shape[1]
161
+ w = image.shape[2]
162
+ for x in segmasks:
163
+ item_bbox = x[0]
164
+ item_mask = x[1]
165
+
166
+ crop_region = make_crop_region(w, h, item_bbox, crop_factor)
167
+ cropped_image = crop_image(image, crop_region)
168
+ cropped_mask = crop_ndarray2(item_mask, crop_region)
169
+ confidence = x[2]
170
+
171
+ item = SEG(cropped_image, cropped_mask, confidence, crop_region, item_bbox)
172
+ items.append(item)
173
+
174
+ shape = h,w
175
+ return ((shape, items), )
176
+
177
+
178
+ class SegsMaskCombine:
179
+ @classmethod
180
+ def INPUT_TYPES(s):
181
+ return {"required": {
182
+ "segs": ("SEGS", ),
183
+ "image": ("IMAGE", ),
184
+ }
185
+ }
186
+
187
+ RETURN_TYPES = ("MASK",)
188
+ FUNCTION = "doit"
189
+
190
+ CATEGORY = "ImpactPack/Legacy"
191
+
192
+ @staticmethod
193
+ def combine(segs, image):
194
+ h = image.shape[1]
195
+ w = image.shape[2]
196
+
197
+ mask = np.zeros((h, w), dtype=np.uint8)
198
+
199
+ for seg in segs[1]:
200
+ cropped_mask = seg.cropped_mask
201
+ crop_region = seg.crop_region
202
+ mask[crop_region[1]:crop_region[3], crop_region[0]:crop_region[2]] |= (cropped_mask * 255).astype(np.uint8)
203
+
204
+ return torch.from_numpy(mask.astype(np.float32) / 255.0)
205
+
206
+ def doit(self, segs, image):
207
+ return (SegsMaskCombine.combine(segs, image), )
208
+
209
+
210
+ class MaskPainter(nodes.PreviewImage):
211
+ @classmethod
212
+ def INPUT_TYPES(s):
213
+ return {"required": {"images": ("IMAGE",), },
214
+ "hidden": {
215
+ "prompt": "PROMPT",
216
+ "extra_pnginfo": "EXTRA_PNGINFO",
217
+ },
218
+ "optional": {"mask_image": ("IMAGE_PATH",), },
219
+ }
220
+
221
+ RETURN_TYPES = ("MASK",)
222
+
223
+ FUNCTION = "save_painted_images"
224
+
225
+ CATEGORY = "ImpactPack/Legacy"
226
+
227
+ def load_mask(self, imagepath):
228
+ if imagepath['type'] == "temp":
229
+ input_dir = folder_paths.get_temp_directory()
230
+ else:
231
+ input_dir = folder_paths.get_input_directory()
232
+
233
+ image_path = os.path.join(input_dir, imagepath['filename'])
234
+
235
+ if os.path.exists(image_path):
236
+ i = Image.open(image_path)
237
+
238
+ if 'A' in i.getbands():
239
+ mask = np.array(i.getchannel('A')).astype(np.float32) / 255.0
240
+ mask = 1. - torch.from_numpy(mask)
241
+ else:
242
+ mask = torch.zeros((8, 8), dtype=torch.float32, device="cpu")
243
+ else:
244
+ mask = torch.zeros((8, 8), dtype=torch.float32, device="cpu")
245
+
246
+ return (mask,)
247
+
248
+ def save_painted_images(self, images, filename_prefix="impact-mask",
249
+ prompt=None, extra_pnginfo=None, mask_image=None):
250
+ res = self.save_images(images, filename_prefix, prompt, extra_pnginfo)
251
+
252
+ if mask_image is not None:
253
+ res['result'] = self.load_mask(mask_image)
254
+ else:
255
+ mask = torch.zeros((8, 8), dtype=torch.float32, device="cpu")
256
+ res['result'] = (mask,)
257
+
258
+ return res
ComfyUI-Impact-Pack/notebook/comfyui_colab_impact_pack.ipynb ADDED
@@ -0,0 +1,172 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "attachments": {},
5
+ "cell_type": "markdown",
6
+ "metadata": {
7
+ "id": "aaaaaaaaaa"
8
+ },
9
+ "source": [
10
+ "Git clone the repo and install the requirements. (ignore the pip errors about protobuf)"
11
+ ]
12
+ },
13
+ {
14
+ "cell_type": "code",
15
+ "execution_count": null,
16
+ "metadata": {
17
+ "id": "bbbbbbbbbb"
18
+ },
19
+ "outputs": [],
20
+ "source": [
21
+ "#@title Environment Setup\n",
22
+ "\n",
23
+ "from pathlib import Path\n",
24
+ "\n",
25
+ "OPTIONS = {}\n",
26
+ "\n",
27
+ "WORKSPACE = 'ComfyUI'\n",
28
+ "USE_GOOGLE_DRIVE = True #@param {type:\"boolean\"}\n",
29
+ "UPDATE_COMFY_UI = True #@param {type:\"boolean\"}\n",
30
+ "\n",
31
+ "OPTIONS['USE_GOOGLE_DRIVE'] = USE_GOOGLE_DRIVE\n",
32
+ "OPTIONS['UPDATE_COMFY_UI'] = UPDATE_COMFY_UI\n",
33
+ "\n",
34
+ "if OPTIONS['USE_GOOGLE_DRIVE']:\n",
35
+ " !echo \"Mounting Google Drive...\"\n",
36
+ " %cd /\n",
37
+ " \n",
38
+ " from google.colab import drive\n",
39
+ " drive.mount('/content/drive')\n",
40
+ "\n",
41
+ " WORKSPACE = \"/content/drive/MyDrive/ComfyUI\"\n",
42
+ " \n",
43
+ " %cd /content/drive/MyDrive\n",
44
+ "\n",
45
+ "![ ! -d $WORKSPACE ] && echo \"-= Initial setup ComfyUI (Original)=-\" && git clone https://github.com/comfyanonymous/ComfyUI\n",
46
+ "%cd $WORKSPACE\n",
47
+ "\n",
48
+ "if OPTIONS['UPDATE_COMFY_UI']:\n",
49
+ " !echo \"-= Updating ComfyUI =-\"\n",
50
+ " !git pull\n",
51
+ " !rm \"/content/drive/MyDrive/ComfyUI/custom_nodes/comfyui-impact-pack.py\"\n",
52
+ "\n",
53
+ "%cd custom_nodes\n",
54
+ "!git clone https://github.com/ltdrdata/ComfyUI-Impact-Pack\n",
55
+ "%cd $WORKSPACE\n",
56
+ "\n",
57
+ "!echo -= Install dependencies =-\n",
58
+ "!pip -q install xformers -r requirements.txt\n"
59
+ ]
60
+ },
61
+ {
62
+ "attachments": {},
63
+ "cell_type": "markdown",
64
+ "metadata": {
65
+ "id": "kkkkkkkkkkkkkk"
66
+ },
67
+ "source": [
68
+ "### Run ComfyUI with localtunnel (Recommended Way)\n",
69
+ "\n",
70
+ "\n"
71
+ ]
72
+ },
73
+ {
74
+ "cell_type": "code",
75
+ "execution_count": null,
76
+ "metadata": {
77
+ "colab": {
78
+ "base_uri": "https://localhost:8080/"
79
+ },
80
+ "id": "jjjjjjjjjjjjj",
81
+ "outputId": "83be9411-d939-4813-e6c1-80e75bf8e80d"
82
+ },
83
+ "outputs": [],
84
+ "source": [
85
+ "!npm install -g localtunnel\n",
86
+ "\n",
87
+ "import subprocess\n",
88
+ "import threading\n",
89
+ "import time\n",
90
+ "import socket\n",
91
+ "def iframe_thread(port):\n",
92
+ " while True:\n",
93
+ " time.sleep(0.5)\n",
94
+ " sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n",
95
+ " result = sock.connect_ex(('127.0.0.1', port))\n",
96
+ " if result == 0:\n",
97
+ " break\n",
98
+ " sock.close()\n",
99
+ " print(\"\\nComfyUI finished loading, trying to launch localtunnel (if it gets stuck here localtunnel is having issues)\")\n",
100
+ " p = subprocess.Popen([\"lt\", \"--port\", \"{}\".format(port)], stdout=subprocess.PIPE)\n",
101
+ " for line in p.stdout:\n",
102
+ " print(line.decode(), end='')\n",
103
+ "\n",
104
+ "\n",
105
+ "threading.Thread(target=iframe_thread, daemon=True, args=(8188,)).start()\n",
106
+ "\n",
107
+ "!python main.py --dont-print-server"
108
+ ]
109
+ },
110
+ {
111
+ "attachments": {},
112
+ "cell_type": "markdown",
113
+ "metadata": {
114
+ "id": "gggggggggg"
115
+ },
116
+ "source": [
117
+ "### Run ComfyUI with colab iframe (use only in case the previous way with localtunnel doesn't work)\n",
118
+ "\n",
119
+ "You should see the ui appear in an iframe. If you get a 403 error, it's your firefox settings or an extension that's messing things up.\n",
120
+ "\n",
121
+ "If you want to open it in another window use the link.\n",
122
+ "\n",
123
+ "Note that some UI features like live image previews won't work because the colab iframe blocks websockets."
124
+ ]
125
+ },
126
+ {
127
+ "cell_type": "code",
128
+ "execution_count": null,
129
+ "metadata": {
130
+ "id": "hhhhhhhhhh"
131
+ },
132
+ "outputs": [],
133
+ "source": [
134
+ "import threading\n",
135
+ "import time\n",
136
+ "import socket\n",
137
+ "def iframe_thread(port):\n",
138
+ " while True:\n",
139
+ " time.sleep(0.5)\n",
140
+ " sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n",
141
+ " result = sock.connect_ex(('127.0.0.1', port))\n",
142
+ " if result == 0:\n",
143
+ " break\n",
144
+ " sock.close()\n",
145
+ " from google.colab import output\n",
146
+ " output.serve_kernel_port_as_iframe(port, height=1024)\n",
147
+ " print(\"to open it in a window you can open this link here:\")\n",
148
+ " output.serve_kernel_port_as_window(port)\n",
149
+ "\n",
150
+ "threading.Thread(target=iframe_thread, daemon=True, args=(8188,)).start()\n",
151
+ "\n",
152
+ "!python main.py --dont-print-server"
153
+ ]
154
+ }
155
+ ],
156
+ "metadata": {
157
+ "accelerator": "GPU",
158
+ "colab": {
159
+ "provenance": []
160
+ },
161
+ "gpuClass": "standard",
162
+ "kernelspec": {
163
+ "display_name": "Python 3",
164
+ "name": "python3"
165
+ },
166
+ "language_info": {
167
+ "name": "python"
168
+ }
169
+ },
170
+ "nbformat": 4,
171
+ "nbformat_minor": 0
172
+ }
ComfyUI-Impact-Pack/onnx.py ADDED
@@ -0,0 +1,38 @@
1
+ import additional_dependencies
2
+ from impact_utils import *
3
+
4
+ additional_dependencies.ensure_onnx_package()
5
+
6
+ try:
7
+ import onnxruntime
8
+
9
+ def onnx_inference(image, onnx_model):
10
+ # prepare image
11
+ pil = tensor2pil(image)
12
+ image = np.ascontiguousarray(pil)
13
+ image = image[:, :, ::-1] # to BGR image
14
+ image = image.astype(np.float32)
15
+ image -= [103.939, 116.779, 123.68] # 'caffe' mode image preprocessing
16
+
17
+ # do detection
18
+ onnx_model = onnxruntime.InferenceSession(onnx_model)
19
+ outputs = onnx_model.run(
20
+ [s_i.name for s_i in onnx_model.get_outputs()],
21
+ {onnx_model.get_inputs()[0].name: np.expand_dims(image, axis=0)},
22
+ )
23
+
24
+ labels = [op for op in outputs if op.dtype == "int32"][0]
25
+ scores = [op for op in outputs if isinstance(op[0][0], np.float32)][0]
26
+ boxes = [op for op in outputs if isinstance(op[0][0], np.ndarray)][0]
27
+
28
+ # filter-out useless item
29
+ idx = np.where(labels[0] == -1)[0][0]
30
+
31
+ labels = labels[0][:idx]
32
+ scores = scores[0][:idx]
33
+ boxes = boxes[0][:idx].astype(np.uint32)
34
+
35
+ return labels, scores, boxes
36
+ except Exception as e:
37
+ print("[ERROR] ComfyUI-Impact-Pack: 'onnxruntime' package doesn't support 'python 3.11', yet.")
38
+ print(f"\t{e}")
ComfyUI-Impact-Pack/requirements.txt ADDED
@@ -0,0 +1,3 @@
1
+ openmim
2
+ segment-anything
3
+ scikit-image
ComfyUI-Impact-Pack/troubleshooting/TROUBLESHOOTING.md ADDED
@@ -0,0 +1,8 @@
1
+ ## Distortion on Detailer
2
+
3
+ * Please note that this issue may be caused by a bug in xformers 0.0.18. If you encounter this problem, try adjusting the guide_size parameter.
4
+
5
+ ![example](black1.png)
6
+
7
+ ![example](black2.png)
8
+ * guide_size changed from 256 -> 192
ComfyUI-Impact-Pack/troubleshooting/black1.png ADDED

Git LFS Details

  • SHA256: 6e32fe1606d35a26ddf08d2a3ff24c8fcd62831b9ed11eeaa76468e27a2b5f0f
  • Pointer size: 131 Bytes
  • Size of remote file: 753 kB
ComfyUI-Impact-Pack/troubleshooting/black2.png ADDED

Git LFS Details

  • SHA256: 829d72c3cc1034f72bbd0945e1a2aed69e1f38060126159c0b911c4c102e2fcc
  • Pointer size: 131 Bytes
  • Size of remote file: 710 kB