---
tags:
  - text-to-image
  - stable-diffusion
  - kviai
  - midjourney
  - lora
  - dalle-3
  - dalle
  - deepvision
  - diffusers
widget:
  - text: reimagine the ZX Spectrum Game MANIC MINER as a 3D modern style game
    output:
      url: >-
        https://www.instantaiprompt.com/wp-content/uploads/2023/12/manic-miner.jpg
  - text: >-
      cute Harry Potter, pixar animated movie style, dramatic lighting, standing
      outside Hogwarts.
    output:
      url: >-
        https://www.instantaiprompt.com/wp-content/uploads/2023/12/harry-potter-ai-hp.jpg
  - text: >-
      close up of a Quokka, national geographic style photography, stunning
      image, golden hour
    output:
      url: https://www.instantaiprompt.com/wp-content/uploads/2023/12/ai-quokka.jpg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: <lora:Dall-e_3_0.3-v2-000003>
license: openrail
language:
  - en
  - fr
  - ru
pipeline_tag: text-to-image
library_name: diffusers
---

# DALL-E 3 XL

## Model description

This is a test model that generates images in a DALL-E 3-like style.

Estimated generation time is ~60 seconds on a GPU.

By KVI Kontent

## Usage

You can try out the model using the Hugging Face Inference API; here is how:

```python
import io

import requests
from PIL import Image

API_URL = "https://api-inference.huggingface.co/models/Kvikontent/kviimager2.0"
headers = {"Authorization": "Bearer huggingface_api_token"}  # replace with your own token

def query(payload):
    # The Inference API returns the generated image as raw bytes.
    response = requests.post(API_URL, headers=headers, json=payload)
    response.raise_for_status()
    return response.content

image_bytes = query({
    "inputs": "Astronaut riding a horse",
})

image = Image.open(io.BytesIO(image_bytes))
image.save("generated_image.jpg")
```
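
If you prefer not to handle raw bytes, the `huggingface_hub` client wraps the same Inference API and returns a PIL image directly. A minimal sketch (the token string is a placeholder for your own API token):

```python
from huggingface_hub import InferenceClient

# "huggingface_api_token" is a placeholder; pass your own token.
client = InferenceClient(token="huggingface_api_token")

# text_to_image returns a PIL.Image.Image, so no manual byte handling is needed.
image = client.text_to_image(
    "Astronaut riding a horse",
    model="Kvikontent/kviimager2.0",
)
image.save("generated_image.jpg")
```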

Alternatively, use the Diffusers library (requires PyTorch and Transformers as well):

```python
from diffusers import DiffusionPipeline

# Load the base model and apply this repository's LoRA weights.
pipeline = DiffusionPipeline.from_pretrained("stablediffusionapi/juggernaut-xl-v5")
pipeline.load_lora_weights("Kvikontent/kviimager2.0")

prompt = "Astronaut riding a horse"

# The pipeline returns PIL images directly, not raw bytes.
image = pipeline(prompt).images[0]
image.save("generated_image.jpg")
```
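
If a CUDA GPU is available, the ~60 second estimate above is easier to hit by moving the pipeline to the GPU and using half precision; a fixed seed also makes results reproducible. A sketch under those assumptions (adjust device and dtype to your hardware):

```python
import torch
from diffusers import DiffusionPipeline

# Half precision saves VRAM on most GPUs; drop torch_dtype for CPU-only runs.
pipeline = DiffusionPipeline.from_pretrained(
    "stablediffusionapi/juggernaut-xl-v5",
    torch_dtype=torch.float16,
).to("cuda")
pipeline.load_lora_weights("Kvikontent/kviimager2.0")

# A fixed generator seed reproduces the same image for the same prompt.
generator = torch.Generator("cuda").manual_seed(42)
image = pipeline("Astronaut riding a horse", generator=generator).images[0]
image.save("generated_image.jpg")
```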

## Credits

- Author: Vasiliy Katsyka
- Company: KVIAI
- License: OpenRAIL

## Official demo

You can try the official demo on Hugging Face Spaces.