---
base_model:
  - lehduong/OneDiffusion
---

# OneDiffusion

The inference code is provided for the text-to-image workflow. The modified code is not a requirement; it is for demo purposes only and has lighter dependencies than the original repo. Inference speed is currently 7 s/it with the flash attention module removed.

The VRAM requirement is similar to that of SDXL and SD3.5 Medium models.

If you need a prompt describing another image, you can use the Molmo spaces.

## Installation

```
pip install accelerate diffusers einops sentencepiece transformers
```
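
The inference example below targets a CUDA device in bfloat16. Before downloading the weights, it can help to confirm that such a device is visible; this quick check uses only standard PyTorch calls and assumes nothing beyond `torch` itself.

```python
import torch

# The demo script runs the pipeline on CUDA in bfloat16.
# Verify both are available before fetching the checkpoint.
print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    print('Device:', torch.cuda.get_device_name(0))
    print('bf16 supported:', torch.cuda.is_bf16_supported())
```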

## Inference

```python
# Minimal text-to-image demo using the modified pipeline shipped with this repository.
from onediffusion.pipeline.onediffusion import OneDiffusionPipeline
import torch

if __name__ == '__main__':
    prompt = 'A bipedal black cat wearing a huge oversized witch hat, a wizards robe, casting a spell, in an enchanted forest. The scene is filled with fireflies and moss on surrounding rocks and trees'
    negative_prompt = 'monochrome, greyscale, low-res, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation'
    # Load the bf16 checkpoint from this repository and move it to the GPU.
    pipe = OneDiffusionPipeline.from_pretrained('twodgirl/onediffusion-bf16').to(device='cuda',
                                                                                 dtype=torch.bfloat16)
    # Uncomment to trade some speed for lower VRAM usage.
    # pipe.enable_model_cpu_offload()
    # The [[text2image]] prefix selects the text-to-image task.
    image = pipe(prompt='[[text2image]] {}'.format(prompt),
                 negative_prompt=negative_prompt,
                 num_inference_steps=30,
                 guidance_scale=4,
                 height=1024,
                 width=1024).images[0]
    image.save('cat.png')
```
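
To check whether a run stays within the SDXL-class VRAM budget mentioned above, peak GPU memory can be measured with standard PyTorch calls. This is a minimal sketch that reuses the `pipe`, `prompt`, and `negative_prompt` objects from the script above; nothing else is assumed.

```python
import torch

# Measure peak GPU memory for one generation, e.g. to compare the default
# setup against pipe.enable_model_cpu_offload().
torch.cuda.reset_peak_memory_stats()
image = pipe(prompt='[[text2image]] {}'.format(prompt),
             negative_prompt=negative_prompt,
             num_inference_steps=30,
             guidance_scale=4,
             height=1024,
             width=1024).images[0]
print('Peak VRAM: {:.1f} GiB'.format(torch.cuda.max_memory_allocated() / 2**30))
```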