LoRA Finetuning - Aff4n20/wuerstchen-ancient-coins
This pipeline was finetuned from warp-ai/wuerstchen-prior on the Aff4n20/ancient-coin-dataset dataset. Below are some example images generated with the finetuned pipeline using the following prompt: 'inscription, IMP AVG DIVI F; bare head of Augustus left; in front, palm; behind, winged caduceus'.
Pipeline usage
You can use the pipeline like so:
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "warp-ai/wuerstchen", torch_dtype=torch.float16
)
# load the LoRA weights into the prior pipeline:
pipeline.prior_pipe.load_lora_weights("Aff4n20/wuerstchen-ancient-coins", torch_dtype=torch.float16)

prompt = "inscription, IMP AVG DIVI F; bare head of Augustus left; in front palm; behind, winged caduceus"
image = pipeline(prompt=prompt).images[0]
image.save("my_image.png")
Training info
These are the key hyperparameters used during training:
- LoRA rank: 4
- Epochs: 19
- Learning rate: 0.0001
- Batch size: 1
- Gradient accumulation steps: 4
- Image resolution: 512
- Mixed-precision: fp16
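For reference, a run with the hyperparameters above could be launched with the diffusers Würstchen LoRA example script roughly as follows. This is a sketch: the script name and flag names are assumptions based on the diffusers text-to-image LoRA examples, not taken from this card, so check them against the version of diffusers you have installed.

```shell
# Hypothetical launch command mirroring the hyperparameters listed above
accelerate launch train_text_to_image_lora_prior.py \
  --pretrained_prior_model_name_or_path="warp-ai/wuerstchen-prior" \
  --dataset_name="Aff4n20/ancient-coin-dataset" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --num_train_epochs=19 \
  --learning_rate=1e-4 \
  --rank=4 \
  --mixed_precision="fp16" \
  --output_dir="wuerstchen-ancient-coins"
```

With a per-device batch size of 1 and 4 gradient accumulation steps, the effective batch size per optimizer step is 4.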
More information on all the CLI arguments and the training environment is available on the associated wandb run page.
Model tree for Aff4n20/wuerstchen-ancient-coins
Base model
warp-ai/wuerstchen-prior