# Unlimited Replicant Model Card

![eyecatch.jpg](eyecatch.jpg)

Title: Replicant from unlimited sky.

# Introduction

Unlimited Replicant is a latent diffusion model made for AI art.

# Legal and ethical information

We created this model legally. However, we think that this model has ethical problems. Therefore, the model cannot be used commercially, except for news reporting.

# Usage

I recommend using the model with the Web UI. You can download the model [here](unlimited_replicant.safetensors).

## Model Details

- **Developed by:** Robin Rombach, Patrick Esser, Alfred Increment
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [CreativeML Open RAIL++-M-NC License](MODEL-LICENSE), [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)).
- **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/).
- **Cite as:**

        @InProceedings{Rombach_2022_CVPR,
            author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
            title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
            booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
            month     = {June},
            year      = {2022},
            pages     = {10684-10695}
        }

## Examples

- Web UI
- Diffusers

## Web UI

**Run with the --no-half option. I recommend installing [xformers](https://github.com/facebookresearch/xformers).**

Download the model [here](unlimited_replicant.safetensors).
Then, install [Web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) by AUTOMATIC1111. A minimal setup sketch is shown below.
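The commands below are only a rough sketch of one way to set this up with a fresh checkout of AUTOMATIC1111's Web UI: the checkpoint path is a placeholder, and the `--xformers` flag is needed only if you want the xformers attention backend.

```bash
# Get the Web UI (sketch; adjust paths for your environment).
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui

# Put the downloaded checkpoint where Web UI looks for models.
cp /path/to/unlimited_replicant.safetensors models/Stable-diffusion/

# Launch with --no-half as required; --xformers enables memory-efficient attention.
./webui.sh --no-half --xformers
```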
## Diffusers

Use [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Unlimited Replicant in a simple and efficient manner.

```bash
pip install --upgrade git+https://github.com/huggingface/diffusers.git transformers accelerate scipy
```

Running the pipeline (if you don't swap the scheduler it will run with the default DDIM; in this example we are swapping it to EulerAncestralDiscreteScheduler):

```python
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler
import torch

model_id = "alfredplpl/unlimited-1-0"

# Swap the default scheduler for Euler Ancestral.
scheduler = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "masterpiece, anime, close up, white short hair, red eyes, 1girl, solo, red roses"
negative_prompt = "lowres, kanji, monochrome, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, jpeg, humpbacked, long body, long neck, ((jpeg artifacts)), ((censored)), ((bad aesthetic))"

images = pipe(prompt, negative_prompt=negative_prompt, num_inference_steps=30, height=1024, width=768).images
images[0].save("girl.png")
```

**Notes**:
- Despite not being a dependency, we highly recommend that you install [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance).
- If you have low GPU RAM available, make sure to add `pipe.enable_attention_slicing()` after sending the pipeline to `cuda` for less VRAM usage (at the cost of speed), as in the sketch at the end of this card.

*This model card was written by Alfred Increment and is based on the [Stable Diffusion v2](https://huggingface.co/stabilityai/stable-diffusion-2/raw/main/README.md) model card.*
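For reference, here is a minimal low-VRAM sketch of the notes above. It assumes the same `alfredplpl/unlimited-1-0` repository id as the example; skip the xformers line if xformers is not installed.

```python
# Low-VRAM variant of the pipeline above (sketch, not an official recipe).
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "alfredplpl/unlimited-1-0", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Trade speed for VRAM by computing attention in slices.
pipe.enable_attention_slicing()

# Optional: memory-efficient attention, only if xformers is installed.
pipe.enable_xformers_memory_efficient_attention()

image = pipe("masterpiece, anime, 1girl, solo", num_inference_steps=30).images[0]
image.save("girl_low_vram.png")
```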