
Text-to-image finetuning - ekshat/Stable_Diffussion_Anime_Style

This pipeline was fine-tuned from ekshat/stable-diffusion-anime-style on the lambdalabs/naruto-blip-captions dataset. Below are example images generated with the fine-tuned pipeline from the prompt 'A person with blue eyes.':

(Validation image grid: sample outputs for the prompt above.)

Pipeline usage

You can use the pipeline like so:

from diffusers import DiffusionPipeline
import torch

# Load the fine-tuned pipeline in half precision and move it to the GPU.
pipeline = DiffusionPipeline.from_pretrained("ekshat/Stable_Diffussion_Anime_Style", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")

prompt = "A person with blue eyes."
image = pipeline(prompt).images[0]
image.save("my_image.png")

Training info

These are the key hyperparameters used during training:

  • Epochs: 17
  • Learning rate: 2e-06
  • Batch size: 2
  • Gradient accumulation steps: 1
  • Image resolution: 512
  • Mixed-precision: fp16
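
As a quick sanity check on the hyperparameters above, the effective batch size per optimizer update is the per-device batch size times the gradient accumulation steps. A minimal sketch (values copied from the list above; single-GPU training is assumed):

```python
# Hyperparameters taken from the training info above.
batch_size = 2
gradient_accumulation_steps = 1

# Samples contributing to each optimizer update (assuming one GPU).
effective_batch_size = batch_size * gradient_accumulation_steps
print(effective_batch_size)  # -> 2
```

With accumulation set to 1, each optimizer step sees exactly one batch of 2 images at 512x512 resolution.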
