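# Gradio demo: text-to-image generation with ruDALLE checkpoints
# (Surrealist_XL, Malevich, Emojich) through the rudalle-aspect-ratio wrapper.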
import gradio as gr
from PIL import Image
import torch
import numpy as np
from os.path import exists as path_exists
from git.repo.base import Repo
from einops import rearrange
import torchvision.transforms as transforms
from torchvision.utils import make_grid

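# Clone the rudalle-aspect-ratio helper repo on first run so its package can be imported locally.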
if not path_exists("rudalle-aspect-ratio"):
    Repo.clone_from("https://github.com/shonenkov-AI/rudalle-aspect-ratio", "rudalle-aspect-ratio")

import sys
sys.path.append('./rudalle-aspect-ratio')
from rudalle_aspect_ratio import RuDalleAspectRatio, get_rudalle_model
from rudalle import get_vae, get_tokenizer
from rudalle.pipelines import show
#model_path_e = hf_hub_download(repo_id="multimodalart/compvis-latent-diffusion-text2img-large", filename="txt2img-f8-large.ckpt")

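# Load the three ruDALLE checkpoints once at startup (fp16 on GPU), plus the shared VAE and tokenizer.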
device = 'cuda'
dalle_surreal = get_rudalle_model('Surrealist_XL', fp16=True, device=device)
dalle_real = get_rudalle_model('Malevich', fp16=True, device=device)
dalle_emoji = get_rudalle_model('Emojich', fp16=True, device=device)
vae, tokenizer = get_vae().to(device), get_tokenizer()
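
# Helpers for tiling several generated images into one grid image; only used by
# the grid code that is currently commented out inside run().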
def np_gallery(array, ncols=3):
    nindex, height, width, intensity = array.shape
    nrows = nindex//ncols
    assert nindex == nrows*ncols
    # want result.shape = (height*nrows, width*ncols, intensity)
    result = (array.reshape(nrows, ncols, height, width, intensity)
              .swapaxes(1,2)
              .reshape(height*nrows, width*ncols, intensity))
    return result

def image_to_np(image):
    return np.asarray(image)

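# Map the UI selections to a checkpoint and target aspect ratio, then sample a single image.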
def run(prompt, aspect_ratio, model):
    if model == 'Surrealism':
        dalle = dalle_surreal
    elif model == 'Realism':
        dalle = dalle_real
    elif model == 'Emojis':
        dalle = dalle_emoji
    if aspect_ratio == 'Square':
        aspect_ratio_value = 1
        top_k = 512
    elif aspect_ratio == 'Horizontal':
        aspect_ratio_value = 32/9
        top_k = 1024
    elif aspect_ratio == 'Vertical':
        aspect_ratio_value = 9/32
        top_k = 512
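    # Wrap the chosen checkpoint so generation can decode at the requested aspect ratio.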
    rudalle_ar = RuDalleAspectRatio(
        dalle=dalle, vae=vae, tokenizer=tokenizer,
        aspect_ratio=aspect_ratio_value, bs=1, device=device
    )
    
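    # Positional arguments appear to follow the rudalle-aspect-ratio API (text, top_k, top_p, images_num),
    # as in the upstream repo's examples; this ordering is assumed, not verified against a pinned version.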
    _, result_pil_images = rudalle_ar.generate_images(prompt, top_k, 0.975, 1)
    #np_images = map(image_to_np,result_pil_images)
    #np_grid = np_gallery(np.array(list(np_images)),2)
    #result_grid = Image.fromarray(np_grid)

    return result_pil_images[0]

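# Build the Gradio UI (uses the legacy gr.inputs/gr.outputs API).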
image = gr.outputs.Image(type="pil", label="Your result")
iface = gr.Interface(fn=run, inputs=[
    gr.inputs.Textbox(label="Prompt (if not in Russian, it will be automatically translated to Russian)",default="chalk pastel drawing of a dog wearing a funny hat"),
    #gr.inputs.Slider(label="Steps - more steps can increase quality but will take longer to generate",default=45,maximum=50,minimum=1,step=1),
    gr.inputs.Radio(label="Aspect Ratio", choices=["Square", "Horizontal", "Vertical"],default="Horizontal"),
    gr.inputs.Dropdown(label="Model", choices=["Surrealism","Realism", "Emojis"], default="Surrealism")
    #gr.inputs.Radio(label="Height", choices=[32,64,128,256,512],default=256),
    #gr.inputs.Slider(label="Images - How many images you wish to generate", default=2, step=1, minimum=1, maximum=4),
    #gr.inputs.Slider(label="Diversity scale - How different from one another you wish the images to be",default=5.0, minimum=1.0, maximum=15.0),
    #gr.inputs.Slider(label="ETA - between 0 and 1. Lower values can provide better quality, higher values can be more diverse",default=0.0,minimum=0.0, maximum=1.0,step=0.1),
    ], 
    outputs=image,
    #css=css,
    title="Generate images from text with ruDALLE",
    description="<div>Type a prompt and press submit to generate images based on it. <a href='https://github.com/ai-forever/ru-dalle' target='_blank'>ruDALLE</a> is an open-source text-to-image model; this arbitrary aspect ratio implementation was created by <a href='https://github.com/shonenkov-AI' target='_blank'>Alex Shonenkov</a>.<br>This UI for the model was assembled by <a style='color: rgb(245, 158, 11);font-weight:bold' href='https://twitter.com/multimodalart' target='_blank'>@multimodalart</a></div>",
    article="<h4 style='font-size: 110%;margin-top:.5em'>Biases acknowledgment</h4><div>Despite how impressive turning text into images is, be aware that this model may output content that reinforces or exacerbates societal biases. As the <a href='https://arxiv.org/abs/2112.10752' target='_blank'>Latent Diffusion paper</a> puts it: <i>\"Deep learning modules tend to reproduce or exacerbate biases that are already present in the data\"</i>. The models are meant to be used for research purposes, such as this demo.</div><h4 style='font-size: 110%;margin-top:1em'>Who owns the images produced by this demo?</h4><div>Definitely not me! Probably you do. I say probably because the copyright discussion around AI-generated art is ongoing. So <a href='https://www.theverge.com/2022/2/21/22944335/us-copyright-office-reject-ai-generated-art-recent-entrance-to-paradise' target='_blank'>it may be the case that everything produced here falls automatically into the public domain</a>. Either way, it is either yours or in the public domain.</div>")
iface.launch(enable_queue=True)