---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
instance_prompt: MNALSA
widget:
- text: >-
    a photo of MNALSA woman with a coffee in a parisian cafe, 50mm, sharp,
    award winning portrait photography
  output:
    url: https://replicate.delivery/yhqm/OQPHQ9kRw9ZpIBMub19XY8NxKznMDA50eSEjpx52QP3qoVrJA/out-0.webp
- text: a photo of MNALSA woman with pink hair at a rave
  output:
    url: https://replicate.delivery/yhqm/apYK6kZFfZUYRyoJ11NzhHY2YXbrjCHajYIiN9EznGR4qVrJA/out-0.webp
---
# Flux Mona Lisa

Run on Replicate: https://replicate.com/fofr/flux-mona-lisa

Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words

You should use `MNALSA` to trigger the image generation.
## Use it with the 🧨 diffusers library
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base model and apply the Mona Lisa LoRA
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('fofr/flux-mona-lisa', weight_name='lora.safetensors')

# Include the trigger word MNALSA in your prompt
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the documentation on loading LoRAs in diffusers.
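
As a minimal sketch of the weighting and fusing mentioned above, the snippet below uses the standard diffusers adapter APIs (`set_adapters`, `fuse_lora`). The adapter name `mona-lisa` and the 0.8 strength are illustrative choices, not part of this model card; check the diffusers documentation for the exact behaviour in your installed version.

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
).to('cuda')

# Give the LoRA an explicit adapter name so it can be weighted or combined later
pipeline.load_lora_weights('fofr/flux-mona-lisa', weight_name='lora.safetensors', adapter_name='mona-lisa')

# Option 1: keep the LoRA as an adapter and tune its strength (1.0 = full effect)
pipeline.set_adapters(['mona-lisa'], adapter_weights=[0.8])

# Option 2: fuse the LoRA into the base weights at a chosen scale for faster inference
# pipeline.fuse_lora(lora_scale=0.8)

image = pipeline('a photo of MNALSA woman with a coffee in a parisian cafe').images[0]
```

Fusing bakes the scaled LoRA into the base weights; `pipeline.unfuse_lora()` restores the original weights if you want to switch adapters afterwards.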