sdxl-tng-interior LoRA by fofr

An SDXL fine-tune of Star Trek: The Next Generation interiors


Inference with Replicate API

Grab your Replicate API token from your Replicate account settings

pip install replicate
export REPLICATE_API_TOKEN=r8_*************************************
import replicate

output = replicate.run(
    "sdxl-tng-interior@sha256:45f1d0cf3445f54d4b19a2a03e53b15abd7237ea72e2fb4824b193ffa429e31f",
    input={"prompt": "A photo in the style of TOK, interior, house - sustainable, minimalist, organic, light-filled, dynamic, efficient, autonomous, connected, harmonious, innovative, detailed, 8k, high resolution, sharp focus"}
)
print(output)
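
replicate.run returns the model output, which for this model is typically a list of image URLs. Here is a minimal sketch for saving the results locally (the list-of-URLs shape and the output file names are assumptions; newer client versions may return file-like objects instead):

import urllib.request

# Assumes `output` is a list of image URL strings; adapt if your client
# version returns file-like objects instead.
for i, url in enumerate(output):
    urllib.request.urlretrieve(str(url), f"output_{i}.png")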

You can also run inference via the API with Node.js or curl, or locally with Cog and Docker; see the Replicate API page for this model.
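
If you would rather call the HTTP API directly (which is what the Node.js and curl examples boil down to), here is a hedged Python sketch using requests, assuming the standard https://api.replicate.com/v1/predictions endpoint, bearer-token authentication, and the version hash shown above; the prompt is shortened for brevity:

import os
import time
import requests

headers = {
    "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
    "Content-Type": "application/json",
}

# Create a prediction for this model version, then poll until it finishes.
resp = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers=headers,
    json={
        "version": "45f1d0cf3445f54d4b19a2a03e53b15abd7237ea72e2fb4824b193ffa429e31f",
        "input": {"prompt": "A photo in the style of TOK, interior"},
    },
)
prediction = resp.json()

while prediction["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(2)
    prediction = requests.get(prediction["urls"]["get"], headers=headers).json()

print(prediction["output"])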

Inference with 🧨 diffusers

Replicate SDXL LoRAs are trained with Pivotal Tuning, which combines training a concept via Dreambooth LoRA with training a new token via Textual Inversion. Since diffusers doesn't yet support Textual Inversion for SDXL, we use the TokenEmbeddingsHandler class from cog-sdxl.

The trigger tokens for your prompt will be <s0><s1>.

pip install diffusers transformers accelerate safetensors huggingface_hub
git clone https://github.com/replicate/cog-sdxl cog_sdxl
import torch
from huggingface_hub import hf_hub_download
from diffusers import DiffusionPipeline
from cog_sdxl.dataset_and_utils import TokenEmbeddingsHandler

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

pipe.load_lora_weights("fofr/sdxl-tng-interior", weight_name="lora.safetensors")

text_encoders = [pipe.text_encoder, pipe.text_encoder_2]
tokenizers = [pipe.tokenizer, pipe.tokenizer_2]

embedding_path = hf_hub_download(repo_id="fofr/sdxl-tng-interior", filename="embeddings.pti", repo_type="model")
embhandler = TokenEmbeddingsHandler(text_encoders, tokenizers)
embhandler.load_embeddings(embedding_path)
prompt="A photo in the style of <s0><s1>, interior, house - sustainable, minimalist, organic, light-filled, dynamic, efficient, autonomous, connected, harmonious, innovative, detailed, 8k, high resolution, sharp focus"
images = pipe(
    prompt,
    cross_attention_kwargs={"scale": 0.8},
).images
# your output image
images[0]
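
The pipeline returns PIL images, so you can save the output directly. A short follow-up sketch (file names, scale values, and the fixed seed are illustrative) that writes the image to disk and compares a few LoRA strengths via cross_attention_kwargs:

images[0].save("tng_interior.png")

# Compare a few LoRA strengths with a fixed seed for reproducibility.
for scale in (0.6, 0.8, 1.0):
    image = pipe(
        prompt,
        cross_attention_kwargs={"scale": scale},
        generator=torch.Generator("cuda").manual_seed(42),
    ).images[0]
    image.save(f"tng_interior_scale_{scale}.png")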