---
license: apache-2.0
---
### mineral-colour on Stable Diffusion via Dreambooth

#### token: mineral-colour

Here are the images used for training this concept:

![1](./1.png) ![2](./2.png) ![3](./3.png) ![4](./4.png) ![5](./5.png)

#### Inference

````python
import torch
from diffusers import StableDiffusionPipeline
from PIL import Image


def image_grid(imgs, rows, cols):
    # Arrange a list of PIL images into a rows x cols grid.
    assert len(imgs) == rows * cols
    w, h = imgs[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h))
    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid


# Load the fine-tuned pipeline in half precision and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "Dushwe/mineral-colour", torch_dtype=torch.float16
).to("cuda")

prompt = "A little girl in China-chic hanfu walks in the forest, mineral-colour"
images = pipe(
    prompt,
    num_images_per_prompt=1,
    num_inference_steps=50,
    guidance_scale=7.5,
).images

grid = image_grid(images, 1, 1)
grid
````

![grid1](./min-1.png)

#### Generated samples

Chinese palace, 4k resolution, mineral-colour

![grid2](./min-2.png)

beginning of autumn, autumn, forests, scenery, background, landscape, woodland, trees, mineral-colour

![grid3](./min-3.png)

You can also run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to include the concept token `mineral-colour` in your prompts!
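
As a minimal sketch (not part of the original card), the sample prompts above can also be rendered in a single batch and tiled with the same `image_grid` helper. The model id `Dushwe/mineral-colour` and the prompts come from this card; the batch layout, fp16 precision, and output filename are assumptions.

````python
# Sketch: batch-generate the sample prompts and save them as one grid image.
# Assumes a CUDA GPU with fp16 support and the checkpoint referenced above.
import torch
from diffusers import StableDiffusionPipeline
from PIL import Image


def image_grid(imgs, rows, cols):
    # Same grid helper as in the inference snippet above.
    assert len(imgs) == rows * cols
    w, h = imgs[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h))
    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid


pipe = StableDiffusionPipeline.from_pretrained(
    "Dushwe/mineral-colour", torch_dtype=torch.float16
).to("cuda")

# The concept token "mineral-colour" must appear in every prompt.
prompts = [
    "Chinese palace, 4k resolution, mineral-colour",
    "beginning of autumn, forests, scenery, landscape, woodland, trees, mineral-colour",
]

# Passing a list of prompts produces one image per prompt in a single batch.
images = pipe(prompts, num_inference_steps=50, guidance_scale=7.5).images
image_grid(images, 1, len(images)).save("samples.png")  # hypothetical output path
````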