---
license: other
license_name: flux1dev
tags:
- text-to-image
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Paper Cutout Style
widget:
- text: The cookie monster, Paper Cutout Style
  output:
    url: d979c6e346504090ae53b1a5ef5d4018_8b54c3ba6d284637b140f40718118b78.png
- text: Gal Gadot as wonderwoman, Paper Cutout Style
  output:
    url: d13591878d5043f3989dd6eb1c25b710_233c18effb4b491cb467ca31c97e90b5.png
- text: The Joker, Paper Cutout Style
  output:
    url: 4e5fd35736f24061a08bc57bb4c92ca4_e416314bb419473ca3da3f0971ec26ef.png
- text: >-
    A green Cthulhu is rising from the blue sea in a great lightning storm,
    based on a story by H.P. Lovecraft, Paper Cutout Style
  output:
    url: 08a19840b6214b76b0607b2f9d5a7e28_63159b9d98124c008efb1d36446a615c.png
- text: Kermit the frog, Paper Cutout Style
  output:
    url: 1f14bd65af7242149b0ab202e9b7a88c_ffb3fb9207f34d2dbc577bae2a2f38b2.png
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---

## Model description

## Trigger words

You should use `1980s anime screengrab, VHS quality,` or `syntheticanime` to trigger the image generation.
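Because every prompt must include a trigger phrase for the LoRA to activate, a small helper that appends it can prevent accidental omissions. This is a minimal sketch: `with_trigger` is a hypothetical convenience function, not part of diffusers, and it defaults to the `Paper Cutout Style` phrase used in this card's example prompts.

```py
# Hypothetical helper (not part of diffusers): append a LoRA trigger
# phrase to a prompt, unless the prompt already contains it.
DEFAULT_TRIGGER = "Paper Cutout Style"  # the phrase used in this card's examples

def with_trigger(prompt: str, trigger: str = DEFAULT_TRIGGER) -> str:
    """Return the prompt with the trigger phrase appended exactly once."""
    if trigger.lower() in prompt.lower():
        return prompt
    return f"{prompt.rstrip(' ,')}, {trigger}"

print(with_trigger("Kermit the frog"))
# Kermit the frog, Paper Cutout Style
print(with_trigger("A retro city street", "syntheticanime"))
# A retro city street, syntheticanime
```

The returned string can be passed directly as the prompt argument to the pipeline call shown below.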
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights(
    "dataautogpt3/FLUX-SyntheticAnime",
    weight_name="Flux_1_Dev_LoRA_syntheticanime.safetensors",
)
image = pipeline("Gal Gadot as wonderwoman, Paper Cutout Style").images[0]
```

For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).

## License

Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).