---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
instance_prompt: chanel25
widget:
- text: >-
chanel25 The image shows a fashion runway setting with a woman walking down
the catwalk. She is wearing a structured, burgundy button-up coat that
extends to about knee length. Attached to the lower portion of this coat—or
forming a skirt beneath it—are layers of delicate, feather-like
embellishments in soft gray and pale blue tones, giving the garment a
dynamic, textured look. Her hair is pulled back neatly, and she appears
poised and focused as she walks. In the background, there is a large, airy
venue with a high ceiling and white architectural elements, and an audience
is seated on either side of the runway. The overall impression is of an
elegant, high-fashion moment.
output:
url: images/example_t3h8rl5m9.png
---
# Chanel25
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `chanel25` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base model, then attach this LoRA.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('veravira/chanel25', weight_name='lora.safetensors')

# Include the trigger word `chanel25` in your prompt.
image = pipeline('chanel25 your prompt').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, see the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
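As one example of the weighting mentioned above, diffusers lets you fuse a loaded LoRA into the base weights at a chosen strength via `fuse_lora`. A minimal sketch, assuming the same model and LoRA as above; the scale of `0.8` is purely illustrative, not a recommended value for this adapter:

```python
from diffusers import AutoPipelineForText2Image
import torch

# Load the base model and attach the LoRA, as in the example above.
pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('veravira/chanel25', weight_name='lora.safetensors')

# Fuse the LoRA into the base weights at a reduced strength.
# 0.8 is an illustrative scale; tune it to taste.
pipeline.fuse_lora(lora_scale=0.8)

image = pipeline('chanel25 your prompt').images[0]
image.save('output.png')
```

Fusing bakes the adapter into the model weights, which can speed up inference; call `pipeline.unfuse_lora()` to restore the original base weights.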