import os
import uuid
from io import BytesIO

import cv2
import gradio as gr
import numpy as np
import PIL
import requests
import torch
from matplotlib import pyplot as plt
from PIL import Image
from torch import autocast
from torchvision import transforms

from clipseg.models.clipseg import CLIPDensePredT
from inpainting import StableDiffusionInpaintingPipeline

# Fall back to the locally cached Hugging Face token if API_TOKEN is unset.
auth_token = os.environ.get("API_TOKEN") or True


def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")


device = "cuda" if torch.cuda.is_available() else "cpu"

pipe = StableDiffusionInpaintingPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    use_auth_token=auth_token,
).to(device)

# CLIPSeg model: turns a text prompt into a segmentation mask.
model = CLIPDensePredT(version="ViT-B/16", reduce_dim=64)
model.load_state_dict(
    torch.load("./clipseg/weights/rd64-uni.pth", map_location=device),
    strict=False,
)
model.eval()

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    transforms.Resize((512, 512)),
])


def predict(radio, dict, word_mask, prompt=""):
    if radio == "draw a mask above":
        # Use the mask the user drew in the image editor.
        with autocast("cuda"):
            init_image = dict["image"].convert("RGB").resize((512, 512))
            mask = dict["mask"].convert("RGB").resize((512, 512))
    else:
        # Derive the mask from the text prompt with CLIPSeg.
        img = transform(dict["image"]).unsqueeze(0)
        word_masks = [word_mask]
        with torch.no_grad():
            preds = model(img.repeat(len(word_masks), 1, 1, 1), word_masks)[0]
        init_image = dict["image"].convert("RGB").resize((512, 512))
        # Round-trip the prediction through a temporary PNG, then binarize it
        # into a black-and-white mask.
        filename = f"{uuid.uuid4()}.png"
        plt.imsave(filename, torch.sigmoid(preds[0][0]).cpu().numpy())
        img2 = cv2.imread(filename)
        gray_image = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
        (thresh, bw_image) = cv2.threshold(gray_image, 100, 255, cv2.THRESH_BINARY)
        mask = Image.fromarray(np.uint8(bw_image)).convert("RGB")
        os.remove(filename)
    with autocast("cuda"):
        images = pipe(prompt=prompt, init_image=init_image, mask_image=mask, strength=0.8)["sample"]
    return images[0]


# examples = [[dict(image="init_image.png", mask="mask_image.png"), "A panda sitting on a bench"]]

css = '''
.container {max-width: 1150px;margin: auto;padding-top: 1.5rem}
#image_upload{min-height:400px}
#image_upload [data-testid="image"], #image_upload [data-testid="image"] > div{min-height: 400px}
#mask_radio .gr-form{background:transparent; border: none}
#word_mask{margin-top: .75em !important}
#word_mask textarea:disabled{opacity: 0.3}
.footer {margin-bottom: 45px;margin-top: 35px;text-align: center;border-bottom: 1px solid #e5e5e5}
.footer>p {font-size: .8rem; display: inline-block; padding: 0 10px;transform: translateY(10px);background: white}
.dark .footer {border-color: #303030}
.dark .footer>p {background: #0b0f19}
.acknowledgments h4{margin: 1.25em 0 .25em 0;font-weight: bold;font-size: 115%}
#image_upload .touch-none{display: flex}
'''
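# --- Hypothetical local smoke test (not part of the app; the file names are
# --- assumptions). Uncomment to exercise predict() end to end without the UI:
#
# if __name__ == "__main__":
#     test_image = Image.open("init_image.png").convert("RGB")
#     result = predict(
#         "type what to mask below",             # take the CLIPSeg text-mask branch
#         {"image": test_image, "mask": None},   # "mask" is unused in this branch
#         word_mask="a cat",                     # region to replace
#         prompt="a dog sitting on a bench",     # what to paint there instead
#     )
#     result.save("inpainted.png")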
def swap_word_mask(radio_option):
    # Only enable the text box when the user chooses text-based masking.
    if radio_option == "type what to mask below":
        return gr.update(interactive=True, placeholder="A cat")
    else:
        return gr.update(interactive=False, placeholder="Disabled")


image_blocks = gr.Blocks(css=css)
with image_blocks as demo:
    gr.HTML(
        """
        Inpaint Stable Diffusion by either drawing a mask or typing what to replace