Commit 31f2f07 by wolverinn
Parent(s): 9c8d2ad
add lightning

This view is limited to 50 files because it contains too many changes.
- .lightning +1 -0
- .lightningignore +32 -0
- README.md +53 -1
- app.py +129 -0
- handler.py +1 -0
- play.py +19 -0
- predict.py +1 -147
- repositories/BLIP/BLIP.gif +0 -3
- repositories/CodeFormer/assets/CodeFormer_logo.png +0 -3
- repositories/CodeFormer/assets/color_enhancement_result1.png +0 -3
- repositories/CodeFormer/assets/color_enhancement_result2.png +0 -3
- repositories/CodeFormer/assets/inpainting_result1.png +0 -3
- repositories/CodeFormer/assets/inpainting_result2.png +0 -3
- repositories/CodeFormer/assets/network.jpg +0 -3
- repositories/CodeFormer/assets/restoration_result1.png +0 -3
- repositories/CodeFormer/assets/restoration_result2.png +0 -3
- repositories/CodeFormer/assets/restoration_result3.png +0 -3
- repositories/CodeFormer/assets/restoration_result4.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/0143.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/0240.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/0342.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/0345.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/0368.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/0412.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/0444.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/0478.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/0500.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/0599.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/0717.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/0720.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/0729.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/0763.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/0770.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/0777.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/0885.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/0934.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/Solvay_conference_1927_0018.png +0 -3
- repositories/CodeFormer/inputs/cropped_faces/Solvay_conference_1927_2_16.png +0 -3
- repositories/CodeFormer/inputs/whole_imgs/00.jpg +0 -3
- repositories/CodeFormer/inputs/whole_imgs/01.jpg +0 -3
- repositories/CodeFormer/inputs/whole_imgs/02.png +0 -3
- repositories/CodeFormer/inputs/whole_imgs/03.jpg +0 -3
- repositories/CodeFormer/inputs/whole_imgs/04.jpg +0 -3
- repositories/CodeFormer/inputs/whole_imgs/05.jpg +0 -3
- repositories/CodeFormer/inputs/whole_imgs/06.png +0 -3
- repositories/taming-transformers/assets/birddrawnbyachild.png +0 -3
- repositories/taming-transformers/assets/coco_scene_images_training.svg +0 -2574
- repositories/taming-transformers/assets/drin.jpg +0 -3
- repositories/taming-transformers/assets/faceshq.jpg +0 -3
- repositories/taming-transformers/assets/first_stage_mushrooms.png +0 -3
.lightning
ADDED
@@ -0,0 +1 @@
+name: famous-carson-8575
.lightningignore
ADDED
@@ -0,0 +1,32 @@
+__pycache__
+/ESRGAN/*
+/SwinIR/*
+/venv
+/tmp
+/GFPGANv1.3.pth
+/gfpgan/weights/*.pth
+/ui-config.json
+/outputs
+/log
+/webui.settings.bat
+/embeddings
+/styles.csv
+/params.txt
+/styles.csv.bak
+/interrogate
+/user.css
+/.idea
+notification.mp3
+/SwinIR
+/textual_inversion
+.vscode
+/extensions
+/test/stdout.txt
+/test/stderr.txt
+/cache.json
+.git
+*/chilloutmix_NiPrunedFp32Fix.safetensors
+*/vae-ft-mse-840000-ema-pruned.ckpt
+*/stLouisLuxuriousWheels_v1.safetensors
+*/taiwanDollLikeness_v10.safetensors
+*/koreanDollLikeness_v10.safetensors
README.md
CHANGED
@@ -1,7 +1,59 @@
 # Chill Watcher
 consider deploy on:
--
+- huggingface inference point
 - replicate api
+- lightning.ai
+
+# platform comparison
+> all support autoscaling
+
+|platform|prediction speed|charges|deploy handiness|
+|-|-|-|-|
+|huggingface|fast: 20s|high: $0.6/hr (without autoscaling)|easy: git push|
+|replicate|fast if used frequently: 30s, slow if it needs initialization: 5min|low: $0.02 per generation|difficult: build image and upload|
+|lightning.ai|fast with the app running: 20s, slow if idle: XXs|low: free $30 per month, XX per run|easy: one command|
+
+# platform deploy options
+## huggingface
+> [docs](https://huggingface.co/docs/inference-endpoints/guides/custom_handler)
+
+- requirements: use pip packages in `requirements.txt`
+- `init()` and `predict()` function: use `handler.py`, implement the `EndpointHandler` class
+- more: modify `handler.py` for requests and inference, and explore more highly customized features
+- deploy: git (lfs) push to the huggingface repository (the whole directory, including models, weights, etc.), then use inference endpoints to deploy. Click and deploy automatically; very simple.
+- call api: use the url provided by inference endpoints once the endpoint is ready (built, initialized and in a "running" state); make a post request to the url using the request schema defined in `handler.py`
+
+## replicate
+> [docs](https://replicate.com/docs/guides/push-a-model)
+
+- requirements: specify all requirements (pip packages, system packages, python version, cuda, etc.) in `cog.yaml`
+- `init()` and `predict()` function: use `predict.py`, implement the `Predictor` class
+- more: modify `predict.py`
+- deploy:
+    1. get a linux GPU machine with 60GB of disk space;
+    2. install [cog](https://replicate.com/docs/guides/push-a-model) and [docker](https://docs.docker.com/engine/install/ubuntu/#set-up-the-repository)
+    3. `git pull` the current repository from huggingface, including the large model files
+    4. once `predict.py` and `cog.yaml` are correctly coded, run `cog login` and `cog push`; cog will build a docker image locally and push it to replicate. Since the image can take 30GB or so of disk space, this costs a lot of network bandwidth.
+- call api: if everything runs successfully and the docker image is pushed to replicate, you will see a web-ui and an API example directly in your replicate repository
+
+## lightning.ai
+> docs: [code](https://lightning.ai/docs/app/stable/levels/basic/real_lightning_component_implementations.html), [deploy](https://lightning.ai/docs/app/stable/workflows/run_app_on_cloud/)
+
+- requirements:
+    - pip packages are listed in `requirements.txt`; note that some requirements differ from those for huggingface, and you need to modify some lines in `requirements.txt` according to the comments in it
+    - other pip packages, system packages, and download commands for big model weight files can be listed in a custom build config; check out `class CustomBuildConfig(BuildConfig)` in `app.py`. In a custom build config you can use many linux commands, such as `wget` and `sudo apt-get update`. The custom build config is executed in the `__init__()` of the `PythonServer` class
+- `init()` and `predict()` function: use `app.py`, implement the `PythonServer` class. Note:
+    - some packages haven't been installed yet when the file is first imported (they may be installed when `__init__()` is called), so some import statements should live inside the functions, not at the top of the file, or you may get import errors
+    - you can't save your own values to `PythonServer`'s `self` unless they are predefined in its variables, so don't assign any self-defined variables to `self`
+    - if you use the custom build config, you have to implement `PythonServer`'s `__init__()` yourself, so don't forget to use the correct function signature
+- more: ...
+- deploy:
+    - `pip install lightning`
+    - prepare the directory on your local computer (no GPU needed)
+    - list big files in the `.lightningignore` file to avoid uploading them and to save deploy time
+    - run `lightning run app app.py --cloud` in the local terminal; it will upload the files in the directory to the lightning cloud and start deploying there
+    - check error logs in the web-ui, using `all logs`
+- call api: only once the app has started successfully can you see a valid url in the `settings` page of the web-ui. Open that url, and you can see the api
+
 ### some stackoverflow:
 install docker:
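The huggingface "call api" step above can be sketched as a small stdlib-only client. The endpoint URL and token below are placeholders, and the `{"prompt": ...}` request body is an assumption mirroring what `handler.py`'s `EndpointHandler.__call__` reads from `data`; treat this as a sketch, not the project's actual client:

```python
# Hypothetical client for a huggingface inference endpoint (placeholders below).
import json
import urllib.request

ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"  # placeholder
HF_TOKEN = "hf_..."  # placeholder token

def build_request(prompt: str) -> bytes:
    # The {"prompt": ...} key matches what EndpointHandler.__call__ checks for.
    return json.dumps({"prompt": prompt}).encode("utf-8")

def call_endpoint(prompt: str) -> dict:
    req = urllib.request.Request(
        ENDPOINT_URL,
        data=build_request(prompt),
        headers={
            "Authorization": f"Bearer {HF_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    # Generation is slow, so allow a generous timeout.
    with urllib.request.urlopen(req, timeout=300) as resp:
        return json.loads(resp.read())
```

The response is a dict whose `img_data` field (per `handler.py`) holds the base64-encoded image.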
app.py
ADDED
@@ -0,0 +1,129 @@
+# inference handler for lightning ai
+
+import re
+import os
+import logging
+# import json
+from pydantic import BaseModel
+from typing import Any, Dict, Optional, TYPE_CHECKING
+from dataclasses import dataclass
+logging.getLogger("xformers").addFilter(lambda record: 'A matching Triton is not available' not in record.getMessage())
+
+import lightning as L
+from lightning.app.components.serve import PythonServer, Text
+from lightning.app import BuildConfig
+
+
+class _DefaultInputData(BaseModel):
+    prompt: str
+
+class _DefaultOutputData(BaseModel):
+    prediction: str
+    parameters: str
+
+
+@dataclass
+class CustomBuildConfig(BuildConfig):
+    def build_commands(self):
+        dir_path = os.path.dirname(os.path.abspath(__file__))
+        model_path = os.path.join(dir_path, "models/Stable-diffusion")
+        model_url = "https://huggingface.co/Hardy01/chill_watcher/resolve/main/models/Stable-diffusion/chilloutmix_NiPrunedFp32Fix.safetensors"
+        download_cmd = "wget -P {} {}".format(str(model_path), model_url)
+        vae_url = "https://huggingface.co/Hardy01/chill_watcher/resolve/main/models/VAE/vae-ft-mse-840000-ema-pruned.ckpt"
+        vae_path = os.path.join(dir_path, "models/VAE")
+        down2 = "wget -P {} {}".format(str(vae_path), vae_url)
+        lora_url1 = "https://huggingface.co/Hardy01/chill_watcher/resolve/main/models/Lora/koreanDollLikeness_v10.safetensors"
+        lora_url2 = "https://huggingface.co/Hardy01/chill_watcher/resolve/main/models/Lora/taiwanDollLikeness_v10.safetensors"
+        lora_path = os.path.join(dir_path, "models/Lora")
+        down3 = "wget -P {} {}".format(str(lora_path), lora_url1)
+        down4 = "wget -P {} {}".format(str(lora_path), lora_url2)
+        # https://stackoverflow.com/questions/55313610/importerror-libgl-so-1-cannot-open-shared-object-file-no-such-file-or-directo
+        cmd1 = "pip3 install torch==1.13.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117"
+        cmd2 = "pip3 install torchvision==0.14.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117"
+        cmd_31 = "sudo apt-get update"
+        cmd3 = "sudo apt-get install libgl1-mesa-glx"
+        cmd4 = "sudo apt-get install libglib2.0-0"
+        return [download_cmd, down2, down3, down4, cmd1, cmd2, cmd_31, cmd3, cmd4]
+
+
+class PyTorchServer(PythonServer):
+    def __init__(
+        self,
+        input_type: type = _DefaultInputData,
+        output_type: type = _DefaultOutputData,
+        **kwargs: Any,
+    ):
+        super().__init__(input_type=input_type, output_type=output_type, **kwargs)
+        # Use the custom build config
+        self.cloud_build_config = CustomBuildConfig()
+
+    def setup(self):
+        # need to install dependencies first to import packages
+        import torch
+        # Truncate version number of nightly/local build of PyTorch to not cause exceptions with CodeFormer or Safetensors
+        if ".dev" in torch.__version__ or "+git" in torch.__version__:
+            torch.__long_version__ = torch.__version__
+            torch.__version__ = re.search(r'[\d.]+[\d]', torch.__version__).group(0)
+
+        from handler import initialize
+        initialize()
+
+    def predict(self, request):
+        from modules.api.api import encode_pil_to_base64
+        from modules import shared
+        from modules.processing import StableDiffusionProcessingTxt2Img, process_images
+        args = {
+            # todo: don't output png
+            "outpath_samples": "C:\\Users\\wolvz\\Desktop",
+            "prompt": "lora:koreanDollLikeness_v15:0.66, best quality, ultra high res, (photorealistic:1.4), 1girl, beige sweater, black choker, smile, laughing, bare shoulders, solo focus, ((full body), (brown hair:1), looking at viewer",
+            "negative_prompt": "paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans, (ugly:1.331), (duplicate:1.331), (morbid:1.21), (mutilated:1.21), (tranny:1.331), mutated hands, (poorly drawn hands:1.331), blurry, 3hands,4fingers,3arms, bad anatomy, missing fingers, extra digit, fewer digits, cropped, jpeg artifacts,poorly drawn face,mutation,deformed",
+            "sampler_name": "DPM++ SDE Karras",
+            "steps": 20,  # 25
+            "cfg_scale": 8,
+            "width": 512,
+            "height": 768,
+            "seed": -1,
+        }
+        print("&&&&&&&&&&&&&&&&&&&&&&&&", request)
+        if request.prompt:
+            prompt = request.prompt
+            print("get prompt from request: ", prompt)
+            args["prompt"] = prompt
+        p = StableDiffusionProcessingTxt2Img(sd_model=shared.sd_model, **args)
+        processed = process_images(p)
+        single_image_b64 = encode_pil_to_base64(processed.images[0]).decode('utf-8')
+        return {
+            "prediction": single_image_b64,
+            "parameters": processed.images[0].info.get('parameters', ""),
+        }
+
+
+component = PyTorchServer(
+    cloud_compute=L.CloudCompute('gpu', disk_size=20, idle_timeout=30)
+)
+app = L.LightningApp(component)
+
+# class Flow(L.LightningFlow):
+#     # 1. Define the state
+#     def __init__(self):
+#         self.cloud_build_config = CustomBuildConfig()
+#         super().__init__()
+#         self.component = PyTorchServer(
+#             input_type=Text, output_type=Text, cloud_compute=L.CloudCompute('gpu', disk_size=20, idle_timeout=30)
+#         )
+
+#     # 2. Optional, but used to validate names
+#     def run(self):
+#         self.component.run()
+
+#     # 3. Method executed when a request is received.
+#     def handle_post(self, prompt: str):
+#         return f'The name {name} was registered'
+
+#     # 4. Defines this Component's Restful API. You can have several routes.
+#     def configure_api(self):
+#         return [Post(route="/name", method=self.handle_post)]
+
+# app = L.LightningApp(Flow())
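The `build_commands` pattern in `CustomBuildConfig` is just string templating: each weight file becomes a `wget -P <dir> <url>` command that lightning runs during the cloud build. A minimal stdlib sketch of that pattern (the `wget_commands` helper is hypothetical, not part of the repository; the example URL below is illustrative):

```python
import os

def wget_commands(base_dir: str, downloads: dict) -> list:
    """Map {subdir: [urls]} to a flat list of 'wget -P <dir> <url>' commands,
    the same shape CustomBuildConfig.build_commands returns."""
    cmds = []
    for subdir, urls in downloads.items():
        target = os.path.join(base_dir, subdir)
        for url in urls:
            cmds.append(f"wget -P {target} {url}")
    return cmds
```

A build config would return such a list from `build_commands()`, optionally followed by the `pip3 install` and `apt-get` commands shown above.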
handler.py
CHANGED
@@ -1,3 +1,4 @@
+# inference handler for huggingface
 import os
 import sys
 import time
play.py
ADDED
@@ -0,0 +1,19 @@
+import requests
+import random
+import time
+import base64
+import hashlib
+import json
+
+def lightning():
+    start = int(time.time())
+    url = "https://wsoqr-01gwy9mc1gzh3b4ce9b708vp31.litng-ai-03.litng.ai/predict"
+    form = {
+        "prompt": "extremely detailed CG unity 8k wallpaper, masterpiece, best quality, ultra-detailed, best illustration, best shadow, photorealistic:1.4, 1 gorgeous girls,oversize pink_hoodie,under eiffel tower,grey_hair:1.1, collarbone,puffy breasts:1.5,full body shot,shiny eyes,enjoyable expression,evil smile,slim legs,narrow waist,detailed face, looking at viewer,looking back,gorgeous skin,short curly hair,kneeling,puffy ass up,climbing,lying",
+    }
+    resp = requests.post(url, json=form)
+    print(resp.status_code, '\n', resp.content)
+    print("time cost(ms): ", int(time.time())*1e3 - start*1e3)
+
+
+lightning()
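`play.py` only prints the raw response; the `prediction` field it contains (named after `_DefaultOutputData` in `app.py`) is a base64-encoded PNG that can be written back to disk. A small sketch with a hypothetical helper (`save_prediction` is not part of the repository):

```python
import base64

def save_prediction(prediction_b64: str, path: str) -> int:
    """Decode the base64 'prediction' field and write it to `path`.
    Returns the number of bytes written."""
    raw = base64.b64decode(prediction_b64)
    with open(path, "wb") as f:
        f.write(raw)
    return len(raw)
```

Usage: `save_prediction(resp.json()["prediction"], "out.png")` after the post in `lightning()`.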
predict.py
CHANGED
@@ -5,15 +5,9 @@ from cog import BasePredictor, Input, Path
 
 import os
 import sys
-import time
-import importlib
 import signal
 import re
 from typing import Dict, List, Any
-# from fastapi import FastAPI
-# from fastapi.middleware.cors import CORSMiddleware
-# from fastapi.middleware.gzip import GZipMiddleware
-from packaging import version
 
 import logging
 logging.getLogger("xformers").addFilter(lambda record: 'A matching Triton is not available' not in record.getMessage())
@@ -29,6 +23,7 @@ if ".dev" in torch.__version__ or "+git" in torch.__version__:
     torch.__version__ = re.search(r'[\d.]+[\d]', torch.__version__).group(0)
 
 from modules import shared, devices, ui_tempdir
+from modules.api.api import encode_pil_to_base64
 import modules.codeformer_model as codeformer
 import modules.face_restoration
 import modules.gfpgan_model as gfpgan
@@ -51,38 +46,19 @@ from modules.shared import cmd_opts, opts
 import modules.hypernetworks.hypernetwork
 
 from modules.processing import StableDiffusionProcessingTxt2Img, StableDiffusionProcessingImg2Img, process_images
-import base64
-import io
-from fastapi import HTTPException
-from io import BytesIO
-import piexif
-import piexif.helper
-from PIL import PngImagePlugin,Image
 
 
 def initialize():
-    # check_versions()
-
-    # extensions.list_extensions()
-    # localization.list_localizations(cmd_opts.localizations_dir)
-
-    # if cmd_opts.ui_debug_mode:
-    #     shared.sd_upscalers = upscaler.UpscalerLanczos().scalers
-    #     modules.scripts.load_scripts()
-    #     return
-
     modelloader.cleanup_models()
     modules.sd_models.setup_model()
     codeformer.setup_model(cmd_opts.codeformer_models_path)
     gfpgan.setup_model(cmd_opts.gfpgan_models_path)
 
     modelloader.list_builtin_upscalers()
-    # modules.scripts.load_scripts()
     modelloader.load_upscalers()
 
     modules.sd_vae.refresh_vae_list()
 
-    # modules.textual_inversion.textual_inversion.list_textual_inversion_templates()
 
     try:
         modules.sd_models.load_model()
@@ -93,35 +69,11 @@ def initialize():
         exit(1)
 
     shared.opts.data["sd_model_checkpoint"] = shared.sd_model.sd_checkpoint_info.title
-
     shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()))
     shared.opts.onchange("sd_vae", wrap_queued_call(lambda: modules.sd_vae.reload_vae_weights()), call=False)
     shared.opts.onchange("sd_vae_as_default", wrap_queued_call(lambda: modules.sd_vae.reload_vae_weights()), call=False)
    shared.opts.onchange("temp_dir", ui_tempdir.on_tmpdir_changed)
 
-    # shared.reload_hypernetworks()
-
-    # ui_extra_networks.intialize()
-    # ui_extra_networks.register_page(ui_extra_networks_textual_inversion.ExtraNetworksPageTextualInversion())
-    # ui_extra_networks.register_page(ui_extra_networks_hypernets.ExtraNetworksPageHypernetworks())
-    # ui_extra_networks.register_page(ui_extra_networks_checkpoints.ExtraNetworksPageCheckpoints())
-
-    # extra_networks.initialize()
-    # extra_networks.register_extra_network(extra_networks_hypernet.ExtraNetworkHypernet())
-
-    # if cmd_opts.tls_keyfile is not None and cmd_opts.tls_keyfile is not None:
-
-    #     try:
-    #         if not os.path.exists(cmd_opts.tls_keyfile):
-    #             print("Invalid path to TLS keyfile given")
-    #         if not os.path.exists(cmd_opts.tls_certfile):
-    #             print(f"Invalid path to TLS certfile: '{cmd_opts.tls_certfile}'")
-    #     except TypeError:
-    #         cmd_opts.tls_keyfile = cmd_opts.tls_certfile = None
-    #         print("TLS setup invalid, running webui without TLS")
-    #     else:
-    #         print("Running with TLS")
-
     # make the program just exit at ctrl+c without waiting for anything
     def sigint_handler(sig, frame):
         print(f'Interrupted with signal {sig} in {frame}')
@@ -129,104 +81,6 @@ def initialize():
 
     signal.signal(signal.SIGINT, sigint_handler)
 
-
-class EndpointHandler():
-    def __init__(self, path=""):
-        # Preload all the elements you are going to need at inference.
-        # pseudo:
-        # self.model= load_model(path)
-        initialize()
-        self.shared = shared
-
-    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
-        """
-        data args:
-            inputs (:obj: `str` | `PIL.Image` | `np.array`)
-            kwargs
-        Return:
-            A :obj:`list` | `dict`: will be serialized and returned
-        """
-        args = {
-            # todo: don't output png
-            "outpath_samples": "C:\\Users\\wolvz\\Desktop",
-            "prompt": "lora:koreanDollLikeness_v15:0.66, best quality, ultra high res, (photorealistic:1.4), 1girl, beige sweater, black choker, smile, laughing, bare shoulders, solo focus, ((full body), (brown hair:1), looking at viewer",
-            "negative_prompt": "paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans, (ugly:1.331), (duplicate:1.331), (morbid:1.21), (mutilated:1.21), (tranny:1.331), mutated hands, (poorly drawn hands:1.331), blurry, 3hands,4fingers,3arms, bad anatomy, missing fingers, extra digit, fewer digits, cropped, jpeg artifacts,poorly drawn face,mutation,deformed",
-            "sampler_name": "DPM++ SDE Karras",
-            "steps": 20,  # 25
-            "cfg_scale": 8,
-            "width": 512,
-            "height": 768,
-            "seed": -1,
-        }
-        if "prompt" in data.keys():
-            print("get prompt from request: ", data["prompt"])
-            args["prompt"] = data["prompt"]
-        p = StableDiffusionProcessingTxt2Img(sd_model=self.shared.sd_model, **args)
-        processed = process_images(p)
-        single_image_b64 = encode_pil_to_base64(processed.images[0]).decode('utf-8')
-        return {
-            "img_data": single_image_b64,
-            "parameters": processed.images[0].info.get('parameters', ""),
-        }
-
-
-def manual_hack():
-    initialize()
-    args = {
-        # todo: don't output res
-        "outpath_samples": "C:\\Users\\wolvz\\Desktop",
-        "prompt": "lora:koreanDollLikeness_v15:0.66, best quality, ultra high res, (photorealistic:1.4), 1girl, beige sweater, black choker, smile, laughing, bare shoulders, solo focus, ((full body), (brown hair:1), looking at viewer",
-        "negative_prompt": "paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans",
-        "sampler_name": "DPM++ SDE Karras",
-        "steps": 20,  # 25
-        "cfg_scale": 8,
-        "width": 512,
-        "height": 768,
-        "seed": -1,
-    }
-    p = StableDiffusionProcessingTxt2Img(sd_model=shared.sd_model, **args)
-    processed = process_images(p)
-
-
-def decode_base64_to_image(encoding):
-    if encoding.startswith("data:image/"):
-        encoding = encoding.split(";")[1].split(",")[1]
-    try:
-        image = Image.open(BytesIO(base64.b64decode(encoding)))
-        return image
-    except Exception as err:
-        raise HTTPException(status_code=500, detail="Invalid encoded image")
-
-def encode_pil_to_base64(image):
-    with io.BytesIO() as output_bytes:
-
-        if opts.samples_format.lower() == 'png':
-            use_metadata = False
-            metadata = PngImagePlugin.PngInfo()
-            for key, value in image.info.items():
-                if isinstance(key, str) and isinstance(value, str):
-                    metadata.add_text(key, value)
-                    use_metadata = True
-            image.save(output_bytes, format="PNG", pnginfo=(metadata if use_metadata else None), quality=opts.jpeg_quality)
-
-        elif opts.samples_format.lower() in ("jpg", "jpeg", "webp"):
-            parameters = image.info.get('parameters', None)
-            exif_bytes = piexif.dump({
-                "Exif": { piexif.ExifIFD.UserComment: piexif.helper.UserComment.dump(parameters or "", encoding="unicode") }
-            })
-            if opts.samples_format.lower() in ("jpg", "jpeg"):
-                image.save(output_bytes, format="JPEG", exif = exif_bytes, quality=opts.jpeg_quality)
-            else:
-                image.save(output_bytes, format="WEBP", exif = exif_bytes, quality=opts.jpeg_quality)
-
-        else:
-            raise HTTPException(status_code=500, detail="Invalid image format")
-
-        bytes_data = output_bytes.getvalue()
-
-        return base64.b64encode(bytes_data)
-
-
 class Predictor(BasePredictor):
     def setup(self):
         """Load the model into memory to make running multiple predictions efficient"""
repositories/BLIP/BLIP.gif DELETED (Git LFS)
repositories/CodeFormer/assets/CodeFormer_logo.png DELETED (Git LFS)
repositories/CodeFormer/assets/color_enhancement_result1.png DELETED (Git LFS)
repositories/CodeFormer/assets/color_enhancement_result2.png DELETED (Git LFS)
repositories/CodeFormer/assets/inpainting_result1.png DELETED (Git LFS)
repositories/CodeFormer/assets/inpainting_result2.png DELETED (Git LFS)
repositories/CodeFormer/assets/network.jpg DELETED (Git LFS)
repositories/CodeFormer/assets/restoration_result1.png DELETED (Git LFS)
repositories/CodeFormer/assets/restoration_result2.png DELETED (Git LFS)
repositories/CodeFormer/assets/restoration_result3.png DELETED (Git LFS)
repositories/CodeFormer/assets/restoration_result4.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/0143.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/0240.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/0342.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/0345.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/0368.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/0412.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/0444.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/0478.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/0500.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/0599.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/0717.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/0720.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/0729.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/0763.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/0770.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/0777.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/0885.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/0934.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/Solvay_conference_1927_0018.png DELETED (Git LFS)
repositories/CodeFormer/inputs/cropped_faces/Solvay_conference_1927_2_16.png DELETED (Git LFS)
repositories/CodeFormer/inputs/whole_imgs/00.jpg DELETED (Git LFS)
repositories/CodeFormer/inputs/whole_imgs/01.jpg DELETED (Git LFS)
repositories/CodeFormer/inputs/whole_imgs/02.png DELETED (Git LFS)
repositories/CodeFormer/inputs/whole_imgs/03.jpg DELETED (Git LFS)
repositories/CodeFormer/inputs/whole_imgs/04.jpg DELETED (Git LFS)
repositories/CodeFormer/inputs/whole_imgs/05.jpg DELETED (Git LFS)
repositories/CodeFormer/inputs/whole_imgs/06.png DELETED (Git LFS)
repositories/taming-transformers/assets/birddrawnbyachild.png DELETED (Git LFS)
repositories/taming-transformers/assets/coco_scene_images_training.svg DELETED
repositories/taming-transformers/assets/drin.jpg DELETED (Git LFS)
repositories/taming-transformers/assets/faceshq.jpg DELETED (Git LFS)
repositories/taming-transformers/assets/first_stage_mushrooms.png DELETED (Git LFS)