<div>
sd-webui-lua links:
<a href="http://github.com/yownas/sd-webui-lua/">Github</a> |
<a href="https://github.com/yownas/sd-webui-lua/wiki">Wiki</a> |
<a href="https://github.com/scoder/lupa">Lupa</a>
</div>
<div>
<div>Functions</div>
<div>UI</div>
<table>
<tr>
<td>ui.out(string)</td>
<td>Write string to the Output box.</td>
</tr>
<tr>
<td>ui.clear()</td>
<td>Clear Output box.</td>
</tr>
<tr>
<td>ui.console(string)</td>
<td>Print to console. </td>
</tr>
<tr>
<td>ui.gallery.add(image)</td>
<td>Add image to Gallery.</td>
</tr>
<tr>
<td>ui.gallery.addc(image, string)</td>
<td>Add image with caption to Gallery.</td>
</tr>
<tr>
<td>ui.gallery.clear()</td>
<td>Clear the gallery.</td>
</tr>
<tr>
<td>ui.gallery.del(index)</td>
<td>Delete image from gallery. (Starts at 1 since this is Lua.)</td>
</tr>
<tr>
<td>ui.gallery.getgif(duration)</td>
<td>Get a GIF from the images in the gallery, showing each image for "duration" ms.</td>
</tr>
<tr>
<td>ui.gallery.saveall()</td>
<td>Save all images in the gallery.</td>
</tr>
<tr>
<td>ui.image.save(image, name)</td>
<td>Save image.</td>
</tr>
<tr>
<td>ui.status(text)</td>
<td>Update the status text under the buttons during a run.</td>
</tr>
<tr>
<td>ui.log.info(text)</td>
<td>Write info log to console.</td>
</tr>
<tr>
<td>ui.log.warning(text)</td>
<td>Write warning log to console.</td>
</tr>
<tr>
<td>ui.log.error(text)</td>
<td>Write error log to console.</td>
</tr>
</table>
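<div>
A minimal sketch of the UI functions in use. The prompt string is only an illustration, and it assumes sd.process() (documented in the SD table below) returns the generated image:
</div>
<pre>
-- Clear earlier output and report progress
ui.clear()
ui.status("generating...")
ui.log.info("lua run started")

local image = sd.process("a red apple on a table")  -- placeholder prompt
ui.gallery.addc(image, "red apple")  -- add to the Gallery with a caption
ui.out("done")                       -- goes to the Output box
ui.console("done")                   -- goes to the console instead
</pre>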
<div>SD</div>
<table>
<tr>
<td>sd.pipeline(p)</td>
<td>Deconstructed pipeline from the webui. Generates an image from a Processing object.</td>
</tr>
<tr>
<td>sd.process(string)</td>
<td>Webui pipeline; generates an image from a prompt string or Processing object.</td>
</tr>
<tr>
<td>sd.getp()</td>
<td>Returns a default processing object (see below).</td>
</tr>
<tr>
<td>sd.cond(string)</td>
<td>Run prompt string through CLIP.</td>
</tr>
<tr>
<td>sd.negcond(string)</td>
<td>Run negative prompt string through CLIP. (These are unfortunately slightly different at the moment.)</td>
</tr>
<tr>
<td>sd.sample(p, cond, negcond)</td>
<td>Turn noise into something that can be turned into an image. Takes a Processing object, a cond and a negcond value. Cond and negcond can also be nil, a string, or a tensor from sd.textencode().</td>
</tr>
<tr>
<td>sd.vae(latent)</td>
<td>Variational auto-encoder.</td>
</tr>
<tr>
<td>sd.toimage(latent)</td>
<td>Last step to get an image after the VAE.</td>
</tr>
<tr>
<td>sd.textencode(string)</td>
<td>Get a tensor from CLIP's text encoder.</td>
</tr>
<tr>
<td>sd.clip2negcond(textencode)</td>
<td>Convert a tensor to a negative conditioning usable by functions from the webui.</td>
</tr>
<tr>
<td>sd.negcond2cond(negcond)</td>
<td>Convert a negative conditioning to a conditioning usable by functions from the webui. The regular prompt and the negative prompt are treated slightly differently internally, which is why this is needed.</td>
</tr>
<tr>
<td>sd.getsamplers()</td>
<td>Get list of samplers.</td>
</tr>
<tr>
<td>sd.restorefaces(image)</td>
<td>Postprocess an image to restore faces.</td>
</tr>
<tr>
<td>sd.interrogate.clip(image)</td>
<td>Get a prompt from an image.</td>
</tr>
<tr>
<td>sd.interrogate.blip(image)</td>
<td>(Same as sd.interrogate.clip().) Get a prompt from an image.</td>
</tr>
<tr>
<td>sd.interrogate.deepbooru(image)</td>
<td>Get a prompt from an image, using DeepBooru.</td>
</tr>
</table>
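<div>
A sketch of the deconstructed pipeline, chaining the functions above. The prompts are placeholders, and it assumes each step returns a value the next step accepts, as the table describes:
</div>
<pre>
local p = sd.getp()                          -- default Processing object (see below)
local cond = sd.cond("a forest at dawn")     -- prompt through CLIP
local negcond = sd.negcond("blurry, low quality")

local latent = sd.sample(p, cond, negcond)   -- noise -> latent
local image = sd.toimage(sd.vae(latent))     -- decode the latent, then convert to an image

image = sd.restorefaces(image)               -- optional face restoration
ui.gallery.add(image)
ui.gallery.saveall()
</pre>
<div>
sd.process(p) does the same in one call; the deconstructed steps only matter when you want to manipulate the conditionings or latents in between.
</div>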
<div>Torch</div>
<table>
<tr>
<td>torch.clamp(v1, min, max)</td>
<td>Clamp vector v1 between min and max.</td>
</tr>
<tr>
<td>torch.lerp(v1, v2, weight)</td>
<td>Linear interpolation of v1 and v2, by weight. v1 + weight * (v2 - v1).</td>
</tr>
<tr>
<td>torch.abs(v1)</td>
<td>Absolute value of v1.</td>
</tr>
<tr>
<td>torch.add(v1, v2)</td>
<td>Add v2 (vector or float) to v1.</td>
</tr>
<tr>
<td>torch.sub(v1, v2)</td>
<td>Subtract v2 (vector or float) from v1.</td>
</tr>
<tr>
<td>torch.mul(v1, v2)</td>
<td>Multiply v1 by v2 (vector or float).</td>
</tr>
<tr>
<td>torch.div(v1, v2)</td>
<td>Divide v1 by v2 (vector or float).</td>
</tr>
<tr>
<td>torch.size(v1)</td>
<td>Return the size of vector v1.</td>
</tr>
<tr>
<td>torch.new_zeros(size)</td>
<td>Take a Lua table, size, and create a zero-filled tensor.</td>
</tr>
<tr>
<td>torch.max(v)</td>
<td>Return the max value in v.</td>
</tr>
<tr>
<td>torch.min(v)</td>
<td>Return the min value in v.</td>
</tr>
<tr>
<td>torch.f2t(float)</td>
<td>Return a tensor from a float.</td>
</tr>
<tr>
<td>torch.t2f(tensor)</td>
<td>Return a float from a tensor.</td>
</tr>
<tr>
<td>torch.cat({table, with, tensors, ...}, dim)</td>
<td>Concatenate tensors along dimension dim. For example, text encodings can be concatenated in dimension 1.</td>
</tr>
</table>
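<div>
The torch helpers are mainly useful for manipulating the tensors that sd.textencode() returns. A sketch, assuming the two encodings have matching shapes (the prompts are placeholders):
</div>
<pre>
local a = sd.textencode("a cat")
local b = sd.textencode("a dog")
local mix = torch.lerp(a, b, 0.5)     -- halfway between the two encodings
local joined = torch.cat({a, b}, 1)   -- concatenate text encodings in dim 1

-- sd.sample() also accepts tensors for cond/negcond (see the SD table)
local p = sd.getp()
local image = sd.toimage(sd.vae(sd.sample(p, mix, sd.negcond(""))))
ui.gallery.addc(image, "cat/dog blend")
</pre>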
</div>
<hr>
<div>
Default Processing object:<br>
<pre>
p = StableDiffusionProcessingTxt2Img(
sd_model=shared.sd_model,
outpath_samples=shared.opts.outdir_samples or shared.opts.outdir_txt2img_samples,
outpath_grids=shared.opts.outdir_grids or shared.opts.outdir_txt2img_grids,
prompt='',
styles=[],
negative_prompt='',
seed=-1,
subseed=-1,
subseed_strength=0,
seed_resize_from_h=0,
seed_resize_from_w=0,
seed_enable_extras=True,
sampler_name='Euler a',
batch_size=1,
n_iter=1,
steps=20,
cfg_scale=7,
width=512,
height=512,
restore_faces=False,
tiling=False,
enable_hr=False,
denoising_strength=0,
hr_scale=0,
hr_upscaler=None,
hr_second_pass_steps=0,
hr_resize_x=0,
hr_resize_y=0,
override_settings=[],
)
</pre>
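<div>
A sketch of overriding some of these defaults from Lua before generating. This assumes the Processing object's attributes are settable through Lupa's Python-object bridge; the values are arbitrary examples:
</div>
<pre>
local p = sd.getp()
p.prompt = "a lighthouse in a storm"   -- placeholder prompt
p.negative_prompt = "blurry"
p.steps = 30
p.cfg_scale = 7.5
p.seed = 1234

local image = sd.process(p)            -- run the full webui pipeline
ui.gallery.add(image)
</pre>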
</div>