wolverinn committed on
Commit 31f2f07
1 Parent(s): 9c8d2ad

add lightning

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. .lightning +1 -0
  2. .lightningignore +32 -0
  3. README.md +53 -1
  4. app.py +129 -0
  5. handler.py +1 -0
  6. play.py +19 -0
  7. predict.py +1 -147
  8. repositories/BLIP/BLIP.gif +0 -3
  9. repositories/CodeFormer/assets/CodeFormer_logo.png +0 -3
  10. repositories/CodeFormer/assets/color_enhancement_result1.png +0 -3
  11. repositories/CodeFormer/assets/color_enhancement_result2.png +0 -3
  12. repositories/CodeFormer/assets/inpainting_result1.png +0 -3
  13. repositories/CodeFormer/assets/inpainting_result2.png +0 -3
  14. repositories/CodeFormer/assets/network.jpg +0 -3
  15. repositories/CodeFormer/assets/restoration_result1.png +0 -3
  16. repositories/CodeFormer/assets/restoration_result2.png +0 -3
  17. repositories/CodeFormer/assets/restoration_result3.png +0 -3
  18. repositories/CodeFormer/assets/restoration_result4.png +0 -3
  19. repositories/CodeFormer/inputs/cropped_faces/0143.png +0 -3
  20. repositories/CodeFormer/inputs/cropped_faces/0240.png +0 -3
  21. repositories/CodeFormer/inputs/cropped_faces/0342.png +0 -3
  22. repositories/CodeFormer/inputs/cropped_faces/0345.png +0 -3
  23. repositories/CodeFormer/inputs/cropped_faces/0368.png +0 -3
  24. repositories/CodeFormer/inputs/cropped_faces/0412.png +0 -3
  25. repositories/CodeFormer/inputs/cropped_faces/0444.png +0 -3
  26. repositories/CodeFormer/inputs/cropped_faces/0478.png +0 -3
  27. repositories/CodeFormer/inputs/cropped_faces/0500.png +0 -3
  28. repositories/CodeFormer/inputs/cropped_faces/0599.png +0 -3
  29. repositories/CodeFormer/inputs/cropped_faces/0717.png +0 -3
  30. repositories/CodeFormer/inputs/cropped_faces/0720.png +0 -3
  31. repositories/CodeFormer/inputs/cropped_faces/0729.png +0 -3
  32. repositories/CodeFormer/inputs/cropped_faces/0763.png +0 -3
  33. repositories/CodeFormer/inputs/cropped_faces/0770.png +0 -3
  34. repositories/CodeFormer/inputs/cropped_faces/0777.png +0 -3
  35. repositories/CodeFormer/inputs/cropped_faces/0885.png +0 -3
  36. repositories/CodeFormer/inputs/cropped_faces/0934.png +0 -3
  37. repositories/CodeFormer/inputs/cropped_faces/Solvay_conference_1927_0018.png +0 -3
  38. repositories/CodeFormer/inputs/cropped_faces/Solvay_conference_1927_2_16.png +0 -3
  39. repositories/CodeFormer/inputs/whole_imgs/00.jpg +0 -3
  40. repositories/CodeFormer/inputs/whole_imgs/01.jpg +0 -3
  41. repositories/CodeFormer/inputs/whole_imgs/02.png +0 -3
  42. repositories/CodeFormer/inputs/whole_imgs/03.jpg +0 -3
  43. repositories/CodeFormer/inputs/whole_imgs/04.jpg +0 -3
  44. repositories/CodeFormer/inputs/whole_imgs/05.jpg +0 -3
  45. repositories/CodeFormer/inputs/whole_imgs/06.png +0 -3
  46. repositories/taming-transformers/assets/birddrawnbyachild.png +0 -3
  47. repositories/taming-transformers/assets/coco_scene_images_training.svg +0 -2574
  48. repositories/taming-transformers/assets/drin.jpg +0 -3
  49. repositories/taming-transformers/assets/faceshq.jpg +0 -3
  50. repositories/taming-transformers/assets/first_stage_mushrooms.png +0 -3
.lightning ADDED
@@ -0,0 +1 @@
+ name: famous-carson-8575
.lightningignore ADDED
@@ -0,0 +1,32 @@
+ __pycache__
+ /ESRGAN/*
+ /SwinIR/*
+ /venv
+ /tmp
+ /GFPGANv1.3.pth
+ /gfpgan/weights/*.pth
+ /ui-config.json
+ /outputs
+ /log
+ /webui.settings.bat
+ /embeddings
+ /styles.csv
+ /params.txt
+ /styles.csv.bak
+ /interrogate
+ /user.css
+ /.idea
+ notification.mp3
+ /SwinIR
+ /textual_inversion
+ .vscode
+ /extensions
+ /test/stdout.txt
+ /test/stderr.txt
+ /cache.json
+ .git
+ */chilloutmix_NiPrunedFp32Fix.safetensors
+ */vae-ft-mse-840000-ema-pruned.ckpt
+ */stLouisLuxuriousWheels_v1.safetensors
+ */taiwanDollLikeness_v10.safetensors
+ */koreanDollLikeness_v10.safetensors
README.md CHANGED
@@ -1,7 +1,59 @@
  # Chill Watcher
  consider deploy on:
- - hugging-face inference point
+ - huggingface inference point
  - replicate api
+ - lightning.ai
+ 
+ # platform comparison
+ > all support autoscaling
+ 
+ |platform|prediction speed|charges|deploy handiness|
+ |-|-|-|-|
+ |huggingface|fast: 20s|high: $0.6/hr (without autoscaling)|easy: git push|
+ |replicate|fast if used frequently: 30s; slow if it needs initialization: 5min|low: $0.02 per generation|difficult: build image and upload|
+ |lightning.ai|fast with the app running: 20s; slow if idle: XXs|low: free $30 per month, XX per run|easy: one command|
+ 
+ # platform deploy options
+ ## huggingface
+ > [docs](https://huggingface.co/docs/inference-endpoints/guides/custom_handler)
+ 
+ - requirements: use pip packages listed in `requirements.txt`
+ - `init()` and `predict()` function: use `handler.py`, implement the `EndpointHandler` class
+ - more: modify `handler.py` to customize request handling and inference, and explore more highly-customized features
+ - deploy: git (lfs) push to a huggingface repository (the whole directory, including models, weights, etc.), and use inference endpoints to deploy. Click and deploy automatically; very simple.
+ - call api: use the url provided by inference endpoints after the endpoint is ready (built, initialized and in a "running" state); make a POST request to that url using the request schema defined in `handler.py`
+ 
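The `EndpointHandler` contract described above can be sketched as follows. This is a minimal illustration, not this repo's actual `handler.py`: model loading and image generation are stubbed out (the real handler calls `initialize()` and `process_images`), so only the request-schema handling is shown; the default argument names mirror the ones used elsewhere in this commit.

```python
# Minimal sketch of a huggingface inference-endpoints handler.
# Model setup is stubbed; only the request-schema merging is shown.
from typing import Any, Dict


class EndpointHandler:
    def __init__(self, path: str = ""):
        # In the real handler: initialize() loads models/weights from `path`.
        self.default_args = {
            "sampler_name": "DPM++ SDE Karras",
            "steps": 20,
            "cfg_scale": 8,
            "width": 512,
            "height": 768,
            "seed": -1,
        }

    def __call__(self, data: Dict[str, Any]) -> Dict[str, Any]:
        # Merge the request prompt into the default generation arguments.
        args = dict(self.default_args)
        if "prompt" in data:
            args["prompt"] = data["prompt"]
        # In the real handler: run the pipeline and base64-encode the image.
        return args
```

The inference endpoint then POSTs each request body to this handler and serializes whatever it returns.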
+ ## replicate
+ > [docs](https://replicate.com/docs/guides/push-a-model)
+ 
+ - requirements: specify all requirements (pip packages, system packages, python version, cuda, etc.) in `cog.yaml`
+ - `init()` and `predict()` function: use `predict.py`, implement the `Predictor` class
+ - more: modify `predict.py`
+ - deploy:
+     1. get a linux GPU machine with 60GB of disk space;
+     2. install [cog](https://replicate.com/docs/guides/push-a-model) and [docker](https://docs.docker.com/engine/install/ubuntu/#set-up-the-repository)
+     3. `git pull` the current repository from huggingface, including the large model files
+     4. once `predict.py` and `cog.yaml` are correctly written, run `cog login` and `cog push`; cog will build a docker image locally and push it to replicate. Since the image can take up 30GB or so of disk space, this costs a lot of network bandwidth.
+ - call api: if everything runs successfully and the docker image is pushed to replicate, you will see a web-ui and an API example directly in your replicate repository
+ 
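The `cog.yaml` layout referenced above looks roughly like this. This is an illustrative sketch, not this repo's actual file: the versions and package names are assumptions, and the real file must list every pip and system dependency the model needs.

```yaml
# Illustrative cog.yaml sketch (versions are assumptions)
build:
  gpu: true
  python_version: "3.10"
  python_packages:
    - "torch==1.13.1"
  system_packages:
    - "libgl1-mesa-glx"
    - "libglib2.0-0"
predict: "predict.py:Predictor"
```

The `predict` key is what tells cog which file and class implement `setup()` and `predict()`.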
+ ## lightning.ai
+ > docs: [code](https://lightning.ai/docs/app/stable/levels/basic/real_lightning_component_implementations.html), [deploy](https://lightning.ai/docs/app/stable/workflows/run_app_on_cloud/)
+ 
+ - requirements:
+     - pip packages are listed in `requirements.txt`; note that some requirements differ from the huggingface ones, so modify the relevant lines of `requirements.txt` according to its comments
+     - other pip packages, system packages and download commands for big model weight files can be listed using a custom build config; check out `class CustomBuildConfig(BuildConfig)` in `app.py`. In a custom build config you can use many linux commands, such as `wget` and `sudo apt-get update`. The custom build config is executed in the `__init__()` of the `PythonServer` class
+ - `init()` and `predict()` function: use `app.py`, implement the `PythonServer` class. Note:
+     - some packages are not yet installed when the file is first imported (they may be installed when `__init__()` is called), so those imports should live inside functions rather than at the top of the file, or you may get import errors
+     - you can't save your own values on `self` in `PythonServer` unless they are predefined attributes, so don't assign self-defined variables to `self`
+     - if you use a custom build config, you have to implement `PythonServer`'s `__init__()` yourself, so don't forget to use the correct function signature
+ - more: ...
+ - deploy:
+     - `pip install lightning`
+     - prepare the directory on your local computer (no GPU needed)
+     - list big files in the `.lightningignore` file to avoid uploading them and to cut deploy time
+     - run `lightning run app app.py --cloud` in the local terminal; it will upload the files in the directory to the lightning cloud and start deploying there
+     - check error logs in the web-ui, using `all logs`
+ - call api: only if the app starts successfully will you see a valid url on the `settings` page of the web-ui; open that url to see the api
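The `__init__()` signature pitfall mentioned above can be sketched like this. `PythonServer` and `CustomBuildConfig` are stubbed stand-ins here (the real classes come from `lightning.app`, as in `app.py`), so only the pattern is shown: keep the base class's `input_type`/`output_type` signature, forward it via `super().__init__()`, then attach `cloud_build_config`.

```python
# Sketch of the PythonServer __init__ pattern; stand-in base classes
# are used so the shape is runnable without lightning installed.
from dataclasses import dataclass
from typing import Any


class PythonServer:  # stand-in for lightning.app.components.serve.PythonServer
    def __init__(self, input_type: type, output_type: type, **kwargs: Any):
        self.input_type = input_type
        self.output_type = output_type


@dataclass
class CustomBuildConfig:  # stand-in for a lightning.app.BuildConfig subclass
    def build_commands(self):
        # the real version returns wget / apt-get / pip commands
        return ["sudo apt-get update"]


class MyServer(PythonServer):
    def __init__(self, input_type: type = dict, output_type: type = dict, **kwargs: Any):
        # keep the base-class signature, then attach the build config
        super().__init__(input_type=input_type, output_type=output_type, **kwargs)
        self.cloud_build_config = CustomBuildConfig()
```

Deviating from the base signature (or forgetting `super().__init__()`) is what typically breaks the cloud build step.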
  
  ### some stackoverflow:
  install docker:
app.py ADDED
@@ -0,0 +1,129 @@
+ # inference handler for lightning ai
+ 
+ import re
+ import os
+ import logging
+ # import json
+ from pydantic import BaseModel
+ from typing import Any, Dict, Optional, TYPE_CHECKING
+ from dataclasses import dataclass
+ logging.getLogger("xformers").addFilter(lambda record: 'A matching Triton is not available' not in record.getMessage())
+ 
+ import lightning as L
+ from lightning.app.components.serve import PythonServer, Text
+ from lightning.app import BuildConfig
+ 
+ 
+ class _DefaultInputData(BaseModel):
+     prompt: str
+ 
+ class _DefaultOutputData(BaseModel):
+     prediction: str
+     parameters: str
+ 
+ 
+ @dataclass
+ class CustomBuildConfig(BuildConfig):
+     def build_commands(self):
+         dir_path = os.path.dirname(os.path.abspath(__file__))
+         model_path = os.path.join(dir_path, "models/Stable-diffusion")
+         model_url = "https://huggingface.co/Hardy01/chill_watcher/resolve/main/models/Stable-diffusion/chilloutmix_NiPrunedFp32Fix.safetensors"
+         download_cmd = "wget -P {} {}".format(str(model_path), model_url)
+         vae_url = "https://huggingface.co/Hardy01/chill_watcher/resolve/main/models/VAE/vae-ft-mse-840000-ema-pruned.ckpt"
+         vae_path = os.path.join(dir_path, "models/VAE")
+         down2 = "wget -P {} {}".format(str(vae_path), vae_url)
+         lora_url1 = "https://huggingface.co/Hardy01/chill_watcher/resolve/main/models/Lora/koreanDollLikeness_v10.safetensors"
+         lora_url2 = "https://huggingface.co/Hardy01/chill_watcher/resolve/main/models/Lora/taiwanDollLikeness_v10.safetensors"
+         lora_path = os.path.join(dir_path, "models/Lora")
+         down3 = "wget -P {} {}".format(str(lora_path), lora_url1)
+         down4 = "wget -P {} {}".format(str(lora_path), lora_url2)
+         # https://stackoverflow.com/questions/55313610/importerror-libgl-so-1-cannot-open-shared-object-file-no-such-file-or-directo
+         cmd1 = "pip3 install torch==1.13.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117"
+         cmd2 = "pip3 install torchvision==0.14.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117"
+         cmd_31 = "sudo apt-get update"
+         cmd3 = "sudo apt-get install libgl1-mesa-glx"
+         cmd4 = "sudo apt-get install libglib2.0-0"
+         return [download_cmd, down2, down3, down4, cmd1, cmd2, cmd_31, cmd3, cmd4]
+ 
+ 
+ class PyTorchServer(PythonServer):
+     def __init__(
+         self,
+         input_type: type = _DefaultInputData,
+         output_type: type = _DefaultOutputData,
+         **kwargs: Any,
+     ):
+         super().__init__(input_type=input_type, output_type=output_type, **kwargs)
+         # Use the custom build config
+         self.cloud_build_config = CustomBuildConfig()
+ 
+     def setup(self):
+         # need to install dependencies first before importing these packages
+         import torch
+         # Truncate version number of nightly/local build of PyTorch to not cause exceptions with CodeFormer or Safetensors
+         if ".dev" in torch.__version__ or "+git" in torch.__version__:
+             torch.__long_version__ = torch.__version__
+             torch.__version__ = re.search(r'[\d.]+[\d]', torch.__version__).group(0)
+ 
+         from handler import initialize
+         initialize()
+ 
+     def predict(self, request):
+         from modules.api.api import encode_pil_to_base64
+         from modules import shared
+         from modules.processing import StableDiffusionProcessingTxt2Img, process_images
+         args = {
+             # todo: don't output png
+             "outpath_samples": "C:\\Users\\wolvz\\Desktop",
+             "prompt": "lora:koreanDollLikeness_v15:0.66, best quality, ultra high res, (photorealistic:1.4), 1girl, beige sweater, black choker, smile, laughing, bare shoulders, solo focus, ((full body), (brown hair:1), looking at viewer",
+             "negative_prompt": "paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans, (ugly:1.331), (duplicate:1.331), (morbid:1.21), (mutilated:1.21), (tranny:1.331), mutated hands, (poorly drawn hands:1.331), blurry, 3hands,4fingers,3arms, bad anatomy, missing fingers, extra digit, fewer digits, cropped, jpeg artifacts,poorly drawn face,mutation,deformed",
+             "sampler_name": "DPM++ SDE Karras",
+             "steps": 20,  # 25
+             "cfg_scale": 8,
+             "width": 512,
+             "height": 768,
+             "seed": -1,
+         }
+         print("&&&&&&&&&&&&&&&&&&&&&&&&", request)
+         if request.prompt:
+             prompt = request.prompt
+             print("get prompt from request: ", prompt)
+             args["prompt"] = prompt
+         p = StableDiffusionProcessingTxt2Img(sd_model=shared.sd_model, **args)
+         processed = process_images(p)
+         single_image_b64 = encode_pil_to_base64(processed.images[0]).decode('utf-8')
+         return {
+             "prediction": single_image_b64,
+             "parameters": processed.images[0].info.get('parameters', ""),
+         }
+ 
+ 
+ component = PyTorchServer(
+     cloud_compute=L.CloudCompute('gpu', disk_size=20, idle_timeout=30)
+ )
+ app = L.LightningApp(component)
+ 
+ # class Flow(L.LightningFlow):
+ #     # 1. Define the state
+ #     def __init__(self):
+ #         self.cloud_build_config = CustomBuildConfig()
+ #         super().__init__()
+ #         self.component = PyTorchServer(
+ #             input_type=Text, output_type=Text, cloud_compute=L.CloudCompute('gpu', disk_size=20, idle_timeout=30)
+ #         )
+ 
+ #     # 2. Optional, but used to validate names
+ #     def run(self):
+ #         self.component.run()
+ 
+ #     # 3. Method executed when a request is received.
+ #     def handle_post(self, prompt: str):
+ #         return f'The prompt {prompt} was registered'
+ 
+ #     # 4. Defines this Component's Restful API. You can have several routes.
+ #     def configure_api(self):
+ #         return [Post(route="/name", method=self.handle_post)]
+ 
+ 
+ # app = L.LightningApp(Flow())
handler.py CHANGED
@@ -1,3 +1,4 @@
+ # inference handler for huggingface
  import os
  import sys
  import time
play.py ADDED
@@ -0,0 +1,19 @@
+ import requests
+ import random
+ import time
+ import base64
+ import hashlib
+ import json
+ 
+ def lightning():
+     start = time.time()
+     url = "https://wsoqr-01gwy9mc1gzh3b4ce9b708vp31.litng-ai-03.litng.ai/predict"
+     form = {
+ "prompt": "extremely detailed CG unity 8k wallpaper, masterpiece, best quality, ultra-detailed, best illustration, best shadow, photorealistic:1.4, 1 gorgeous girls,oversize pink_hoodie,under eiffel tower,grey_hair:1.1, collarbone,puffy breasts:1.5,full body shot,shiny eyes,enjoyable expression,evil smile,slim legs,narrow waist,detailed face, looking at viewer,looking back,gorgeous skin,short curly hair,kneeling,puffy ass up,climbing,lying,rosy pussy,nsfw,insert left_hand into pussy",
+     }
+     resp = requests.post(url, json=form)
+     print(resp.status_code, '\n', resp.content)
+     print("time cost(ms): ", int((time.time() - start) * 1e3))
+ 
+ 
+ lightning()
predict.py CHANGED
@@ -5,15 +5,9 @@ from cog import BasePredictor, Input, Path
  
  import os
  import sys
- import time
- import importlib
  import signal
  import re
  from typing import Dict, List, Any
- # from fastapi import FastAPI
- # from fastapi.middleware.cors import CORSMiddleware
- # from fastapi.middleware.gzip import GZipMiddleware
- from packaging import version
  
  import logging
  logging.getLogger("xformers").addFilter(lambda record: 'A matching Triton is not available' not in record.getMessage())
@@ -29,6 +23,7 @@ if ".dev" in torch.__version__ or "+git" in torch.__version__:
      torch.__version__ = re.search(r'[\d.]+[\d]', torch.__version__).group(0)
  
  from modules import shared, devices, ui_tempdir
+ from modules.api.api import encode_pil_to_base64
  import modules.codeformer_model as codeformer
  import modules.face_restoration
  import modules.gfpgan_model as gfpgan
@@ -51,38 +46,19 @@ from modules.shared import cmd_opts, opts
  import modules.hypernetworks.hypernetwork
  
  from modules.processing import StableDiffusionProcessingTxt2Img, StableDiffusionProcessingImg2Img, process_images
- import base64
- import io
- from fastapi import HTTPException
- from io import BytesIO
- import piexif
- import piexif.helper
- from PIL import PngImagePlugin, Image
  
  
  def initialize():
-     # check_versions()
- 
-     # extensions.list_extensions()
-     # localization.list_localizations(cmd_opts.localizations_dir)
- 
-     # if cmd_opts.ui_debug_mode:
-     #     shared.sd_upscalers = upscaler.UpscalerLanczos().scalers
-     #     modules.scripts.load_scripts()
-     #     return
- 
      modelloader.cleanup_models()
      modules.sd_models.setup_model()
      codeformer.setup_model(cmd_opts.codeformer_models_path)
      gfpgan.setup_model(cmd_opts.gfpgan_models_path)
  
      modelloader.list_builtin_upscalers()
-     # modules.scripts.load_scripts()
      modelloader.load_upscalers()
  
      modules.sd_vae.refresh_vae_list()
  
-     # modules.textual_inversion.textual_inversion.list_textual_inversion_templates()
  
      try:
          modules.sd_models.load_model()
@@ -93,35 +69,11 @@ def initialize():
          exit(1)
  
      shared.opts.data["sd_model_checkpoint"] = shared.sd_model.sd_checkpoint_info.title
- 
      shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()))
      shared.opts.onchange("sd_vae", wrap_queued_call(lambda: modules.sd_vae.reload_vae_weights()), call=False)
      shared.opts.onchange("sd_vae_as_default", wrap_queued_call(lambda: modules.sd_vae.reload_vae_weights()), call=False)
      shared.opts.onchange("temp_dir", ui_tempdir.on_tmpdir_changed)
  
-     # shared.reload_hypernetworks()
- 
-     # ui_extra_networks.intialize()
-     # ui_extra_networks.register_page(ui_extra_networks_textual_inversion.ExtraNetworksPageTextualInversion())
-     # ui_extra_networks.register_page(ui_extra_networks_hypernets.ExtraNetworksPageHypernetworks())
-     # ui_extra_networks.register_page(ui_extra_networks_checkpoints.ExtraNetworksPageCheckpoints())
- 
-     # extra_networks.initialize()
-     # extra_networks.register_extra_network(extra_networks_hypernet.ExtraNetworkHypernet())
- 
-     # if cmd_opts.tls_keyfile is not None and cmd_opts.tls_keyfile is not None:
- 
-     #     try:
-     #         if not os.path.exists(cmd_opts.tls_keyfile):
-     #             print("Invalid path to TLS keyfile given")
-     #         if not os.path.exists(cmd_opts.tls_certfile):
-     #             print(f"Invalid path to TLS certfile: '{cmd_opts.tls_certfile}'")
-     #     except TypeError:
-     #         cmd_opts.tls_keyfile = cmd_opts.tls_certfile = None
-     #         print("TLS setup invalid, running webui without TLS")
-     #     else:
-     #         print("Running with TLS")
- 
      # make the program just exit at ctrl+c without waiting for anything
      def sigint_handler(sig, frame):
          print(f'Interrupted with signal {sig} in {frame}')
@@ -129,104 +81,6 @@ def initialize():
  
      signal.signal(signal.SIGINT, sigint_handler)
  
- 
- class EndpointHandler():
-     def __init__(self, path=""):
-         # Preload all the elements you are going to need at inference.
-         # pseudo:
-         # self.model = load_model(path)
-         initialize()
-         self.shared = shared
- 
-     def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
-         """
-         data args:
-             inputs (:obj: `str` | `PIL.Image` | `np.array`)
-             kwargs
-         Return:
-             A :obj:`list` | `dict`: will be serialized and returned
-         """
-         args = {
-             # todo: don't output png
-             "outpath_samples": "C:\\Users\\wolvz\\Desktop",
-             "prompt": "lora:koreanDollLikeness_v15:0.66, best quality, ultra high res, (photorealistic:1.4), 1girl, beige sweater, black choker, smile, laughing, bare shoulders, solo focus, ((full body), (brown hair:1), looking at viewer",
-             "negative_prompt": "paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans, (ugly:1.331), (duplicate:1.331), (morbid:1.21), (mutilated:1.21), (tranny:1.331), mutated hands, (poorly drawn hands:1.331), blurry, 3hands,4fingers,3arms, bad anatomy, missing fingers, extra digit, fewer digits, cropped, jpeg artifacts,poorly drawn face,mutation,deformed",
-             "sampler_name": "DPM++ SDE Karras",
-             "steps": 20,  # 25
-             "cfg_scale": 8,
-             "width": 512,
-             "height": 768,
-             "seed": -1,
-         }
-         if "prompt" in data.keys():
-             print("get prompt from request: ", data["prompt"])
-             args["prompt"] = data["prompt"]
-         p = StableDiffusionProcessingTxt2Img(sd_model=self.shared.sd_model, **args)
-         processed = process_images(p)
-         single_image_b64 = encode_pil_to_base64(processed.images[0]).decode('utf-8')
-         return {
-             "img_data": single_image_b64,
-             "parameters": processed.images[0].info.get('parameters', ""),
-         }
- 
- 
- def manual_hack():
-     initialize()
-     args = {
-         # todo: don't output res
-         "outpath_samples": "C:\\Users\\wolvz\\Desktop",
-         "prompt": "lora:koreanDollLikeness_v15:0.66, best quality, ultra high res, (photorealistic:1.4), 1girl, beige sweater, black choker, smile, laughing, bare shoulders, solo focus, ((full body), (brown hair:1), looking at viewer",
-         "negative_prompt": "paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans",
-         "sampler_name": "DPM++ SDE Karras",
-         "steps": 20,  # 25
-         "cfg_scale": 8,
-         "width": 512,
-         "height": 768,
-         "seed": -1,
-     }
-     p = StableDiffusionProcessingTxt2Img(sd_model=shared.sd_model, **args)
-     processed = process_images(p)
- 
- 
- def decode_base64_to_image(encoding):
-     if encoding.startswith("data:image/"):
-         encoding = encoding.split(";")[1].split(",")[1]
-     try:
-         image = Image.open(BytesIO(base64.b64decode(encoding)))
-         return image
-     except Exception as err:
-         raise HTTPException(status_code=500, detail="Invalid encoded image")
- 
- 
- def encode_pil_to_base64(image):
-     with io.BytesIO() as output_bytes:
- 
-         if opts.samples_format.lower() == 'png':
-             use_metadata = False
-             metadata = PngImagePlugin.PngInfo()
-             for key, value in image.info.items():
-                 if isinstance(key, str) and isinstance(value, str):
-                     metadata.add_text(key, value)
-                     use_metadata = True
-             image.save(output_bytes, format="PNG", pnginfo=(metadata if use_metadata else None), quality=opts.jpeg_quality)
- 
-         elif opts.samples_format.lower() in ("jpg", "jpeg", "webp"):
-             parameters = image.info.get('parameters', None)
-             exif_bytes = piexif.dump({
-                 "Exif": {piexif.ExifIFD.UserComment: piexif.helper.UserComment.dump(parameters or "", encoding="unicode")}
-             })
-             if opts.samples_format.lower() in ("jpg", "jpeg"):
-                 image.save(output_bytes, format="JPEG", exif=exif_bytes, quality=opts.jpeg_quality)
-             else:
-                 image.save(output_bytes, format="WEBP", exif=exif_bytes, quality=opts.jpeg_quality)
- 
-         else:
-             raise HTTPException(status_code=500, detail="Invalid image format")
- 
-         bytes_data = output_bytes.getvalue()
- 
-     return base64.b64encode(bytes_data)
- 
- 
  class Predictor(BasePredictor):
      def setup(self):
          """Load the model into memory to make running multiple predictions efficient"""
repositories/BLIP/BLIP.gif DELETED

Git LFS Details

  • SHA256: 7757a1a1133807158ec4e696a8187f289e64c30a86aa470d8e0a93948a02be22
  • Pointer size: 132 Bytes
  • Size of remote file: 6.71 MB
repositories/CodeFormer/assets/CodeFormer_logo.png DELETED

Git LFS Details

  • SHA256: 9f47e6d67d4aabffe5f3794d9e46c301b953f59c3328dd7dcafd94ccb615d29c
  • Pointer size: 129 Bytes
  • Size of remote file: 5.26 kB
repositories/CodeFormer/assets/color_enhancement_result1.png DELETED

Git LFS Details

  • SHA256: 0e5fd836661f974b8f691d2a779d15b0f1419ffc6be1e57ce864b9c3562754a2
  • Pointer size: 131 Bytes
  • Size of remote file: 685 kB
repositories/CodeFormer/assets/color_enhancement_result2.png DELETED

Git LFS Details

  • SHA256: 7b7d0166964c1752083ffc3a8bf247b70e8509cda66ca75c61f70f6800a268c8
  • Pointer size: 131 Bytes
  • Size of remote file: 515 kB
repositories/CodeFormer/assets/inpainting_result1.png DELETED

Git LFS Details

  • SHA256: e3fd793921f1916e442b36664b21c1fa7cbac7d8206990b44c682eca891d0618
  • Pointer size: 131 Bytes
  • Size of remote file: 687 kB
repositories/CodeFormer/assets/inpainting_result2.png DELETED

Git LFS Details

  • SHA256: 7354e20981cfa84fd4b63ddb9b312ceedabdedc47da3e0dd2967096af6440534
  • Pointer size: 131 Bytes
  • Size of remote file: 767 kB
repositories/CodeFormer/assets/network.jpg DELETED

Git LFS Details

  • SHA256: 81903e45d27c7078a04dec0c5666be8c54c2c4313a65715240dc5c8639b22d19
  • Pointer size: 131 Bytes
  • Size of remote file: 238 kB
repositories/CodeFormer/assets/restoration_result1.png DELETED

Git LFS Details

  • SHA256: 4ee9b266878328066d33af365f2434633988d69ccabb34a98997d19944e90e4a
  • Pointer size: 131 Bytes
  • Size of remote file: 831 kB
repositories/CodeFormer/assets/restoration_result2.png DELETED

Git LFS Details

  • SHA256: 716b3ba5fc642e0bef4648093ac346fd90878bcf6fbeb6d1f685ce682557b3c7
  • Pointer size: 131 Bytes
  • Size of remote file: 824 kB
repositories/CodeFormer/assets/restoration_result3.png DELETED

Git LFS Details

  • SHA256: 05ff467e78b5412870d825fb17eec8ed4230d8e2bb546c554bf93eb8a06a6a27
  • Pointer size: 131 Bytes
  • Size of remote file: 752 kB
repositories/CodeFormer/assets/restoration_result4.png DELETED

Git LFS Details

  • SHA256: 4fd54cfbb531e4be7a1476fd4d238aea20c05522e7c098a892b47191cc4ce4cb
  • Pointer size: 131 Bytes
  • Size of remote file: 697 kB
repositories/CodeFormer/inputs/cropped_faces/0143.png DELETED

Git LFS Details

  • SHA256: 3991bebe9cdd6132601419a8bcb9e28bb0dc99490f2a77af2f83bac36b114f69
  • Pointer size: 131 Bytes
  • Size of remote file: 158 kB
repositories/CodeFormer/inputs/cropped_faces/0240.png DELETED

Git LFS Details

  • SHA256: 355c64a2f67c0cc40f79563ae889c1cfff7cca5d6d94f3dbc6008c8ca30b9dd6
  • Pointer size: 131 Bytes
  • Size of remote file: 191 kB
repositories/CodeFormer/inputs/cropped_faces/0342.png DELETED

Git LFS Details

  • SHA256: 2afe9ef2014079cb13f46ac9a22a59d83127038d8d419541e0d504dbab8b6815
  • Pointer size: 131 Bytes
  • Size of remote file: 184 kB
repositories/CodeFormer/inputs/cropped_faces/0345.png DELETED

Git LFS Details

  • SHA256: 5bd7399008225b079cb0d2496cab97064ee1107ce2d199c71977f142ee6cbeae
  • Pointer size: 131 Bytes
  • Size of remote file: 189 kB
repositories/CodeFormer/inputs/cropped_faces/0368.png DELETED

Git LFS Details

  • SHA256: 32e0264f49dae964d3ff31809b5e31ae5cd9f552f917f8ff2929125527f57414
  • Pointer size: 131 Bytes
  • Size of remote file: 192 kB
repositories/CodeFormer/inputs/cropped_faces/0412.png DELETED

Git LFS Details

  • SHA256: 66c340192cc18d46ab1e329f8fab58e3d22e14d473585c6e3f552aa5bcf1d223
  • Pointer size: 131 Bytes
  • Size of remote file: 214 kB
repositories/CodeFormer/inputs/cropped_faces/0444.png DELETED

Git LFS Details

  • SHA256: 9f5aedbcf55fc9e4977210cf88ac81faff5e68147f6c49cadab42174e36d6d1d
  • Pointer size: 131 Bytes
  • Size of remote file: 176 kB
repositories/CodeFormer/inputs/cropped_faces/0478.png DELETED

Git LFS Details

  • SHA256: bd903ae794396fbfc536a99a21565c423262ff88bea9425a76d0943962a6b21f
  • Pointer size: 131 Bytes
  • Size of remote file: 180 kB
repositories/CodeFormer/inputs/cropped_faces/0500.png DELETED

Git LFS Details

  • SHA256: 5f236bfea4f63d839373d2f996977ff01e72cbe67aee891caad388ac95f0f110
  • Pointer size: 131 Bytes
  • Size of remote file: 232 kB
repositories/CodeFormer/inputs/cropped_faces/0599.png DELETED

Git LFS Details

  • SHA256: 82b546c3134e2d309ce513f6d3057d9c26c308558cf0636e0a02fb81f73278d9
  • Pointer size: 131 Bytes
  • Size of remote file: 184 kB
repositories/CodeFormer/inputs/cropped_faces/0717.png DELETED

Git LFS Details

  • SHA256: 70dc83bd07ee2391e5f4a2facf2ef981e21fe2cb1c71c662af5a6105703262c8
  • Pointer size: 131 Bytes
  • Size of remote file: 191 kB
repositories/CodeFormer/inputs/cropped_faces/0720.png DELETED

Git LFS Details

  • SHA256: 49be8e39e50a09c8a225ef7bd779c1c85e80a2a3f6288831a389e5edea047b45
  • Pointer size: 131 Bytes
  • Size of remote file: 158 kB
repositories/CodeFormer/inputs/cropped_faces/0729.png DELETED

Git LFS Details

  • SHA256: 7c4637d11916672025d47073247ed61cbfe6537e42552cf8cb7525af0d6ab0a8
  • Pointer size: 131 Bytes
  • Size of remote file: 173 kB
repositories/CodeFormer/inputs/cropped_faces/0763.png DELETED

Git LFS Details

  • SHA256: 8af7b3ec327e1a4b197fe9aca44e16a656e60332870a215c603548d446d632db
  • Pointer size: 131 Bytes
  • Size of remote file: 139 kB
repositories/CodeFormer/inputs/cropped_faces/0770.png DELETED

Git LFS Details

  • SHA256: f6864816779953edae0e5485d740929db0300b22ae40f8c142a4a59678a925a2
  • Pointer size: 131 Bytes
  • Size of remote file: 172 kB
repositories/CodeFormer/inputs/cropped_faces/0777.png DELETED

Git LFS Details

  • SHA256: 2a3da4596a08b95dc5731cb851d1204916b6d55d9085f3ddf1cb854c1c2f6a4b
  • Pointer size: 131 Bytes
  • Size of remote file: 163 kB
repositories/CodeFormer/inputs/cropped_faces/0885.png DELETED

Git LFS Details

  • SHA256: b4cf4648abd8cd9bcd3245d8f543bc4f106a8849fe7d328f156ae369a4ee9a90
  • Pointer size: 131 Bytes
  • Size of remote file: 184 kB
repositories/CodeFormer/inputs/cropped_faces/0934.png DELETED

Git LFS Details

  • SHA256: 918dc17bf1f1abf94763f3b95af2f6aaff4f8e821422494030dd7c788fa4d072
  • Pointer size: 131 Bytes
  • Size of remote file: 172 kB
repositories/CodeFormer/inputs/cropped_faces/Solvay_conference_1927_0018.png DELETED

Git LFS Details

  • SHA256: 71778c17080db35bc2d2296f4ef621387dc62fb835d818f77c9496948b3410bc
  • Pointer size: 131 Bytes
  • Size of remote file: 327 kB
repositories/CodeFormer/inputs/cropped_faces/Solvay_conference_1927_2_16.png DELETED

Git LFS Details

  • SHA256: b2b03b12601c841ca1b341cb9931cce7128916a1ccc2701c531147c572137df6
  • Pointer size: 131 Bytes
  • Size of remote file: 313 kB
repositories/CodeFormer/inputs/whole_imgs/00.jpg DELETED

Git LFS Details

  • SHA256: 7e980159f82a43aa0aad0a4a5ffce474c76647d7284918c444d613157fe5d88e
  • Pointer size: 130 Bytes
  • Size of remote file: 26.8 kB
repositories/CodeFormer/inputs/whole_imgs/01.jpg DELETED

Git LFS Details

  • SHA256: 886eaea9980675ae924b0266d80e3708285ca45cded81e858f0c3aa0714f31fb
  • Pointer size: 129 Bytes
  • Size of remote file: 9.33 kB
repositories/CodeFormer/inputs/whole_imgs/02.png DELETED

Git LFS Details

  • SHA256: 9c7d80aa1e999ffa9437375f00cab5e439284d00797958e175516f777a8e7129
  • Pointer size: 131 Bytes
  • Size of remote file: 937 kB
repositories/CodeFormer/inputs/whole_imgs/03.jpg DELETED

Git LFS Details

  • SHA256: 757986993aabee5a0d3bde3d5cd6a0ef3026605a0189ac28ed689256c0b833b9
  • Pointer size: 130 Bytes
  • Size of remote file: 15.5 kB
repositories/CodeFormer/inputs/whole_imgs/04.jpg DELETED

Git LFS Details

  • SHA256: 13fc7dcaece8d2f26f6895d6ff86a42c643d8c7621d75366a8b4d7467e23e8e2
  • Pointer size: 129 Bytes
  • Size of remote file: 6.77 kB
repositories/CodeFormer/inputs/whole_imgs/05.jpg DELETED

Git LFS Details

  • SHA256: eaf35fa88120ff32086a6ec7e3da8b29ddd40c43bee685a9860add2ade6d8fb3
  • Pointer size: 129 Bytes
  • Size of remote file: 7.86 kB
repositories/CodeFormer/inputs/whole_imgs/06.png DELETED

Git LFS Details

  • SHA256: 2828f119e044674f89da002ec3f4453e971e9f712a62d63c4f151d1f91e341ff
  • Pointer size: 131 Bytes
  • Size of remote file: 684 kB
repositories/taming-transformers/assets/birddrawnbyachild.png DELETED

Git LFS Details

  • SHA256: 165778bb85e86f8aaaed38eee4d33f62ab1ef237d890229cfa2e0685f5064127
  • Pointer size: 132 Bytes
  • Size of remote file: 1.61 MB
repositories/taming-transformers/assets/coco_scene_images_training.svg DELETED
repositories/taming-transformers/assets/drin.jpg DELETED

Git LFS Details

  • SHA256: 83652380049c45af8c1b75216ded141b3d064cca8154eb2875337b4d5182152b
  • Pointer size: 131 Bytes
  • Size of remote file: 286 kB
repositories/taming-transformers/assets/faceshq.jpg DELETED

Git LFS Details

  • SHA256: 6f20c66b935086464db0bad4b5dd90fadb3fb1d20373cb02c415ec4a9cfb989c
  • Pointer size: 131 Bytes
  • Size of remote file: 307 kB
repositories/taming-transformers/assets/first_stage_mushrooms.png DELETED

Git LFS Details

  • SHA256: 425218621d5e01ea30c9e51fa0969ad36c22063a405dc6f6ccb6dd8db64000a0
  • Pointer size: 132 Bytes
  • Size of remote file: 1.35 MB