---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
widget:
- text: 'screenprint tshirt design, a happy cat holding a sign that says "I LOVE REPLICATE", ATMGRN illustration style, green'
  output:
    url: "images/1.webp"
- text: "a woman, ATMGRN illustration style"
  output:
    url: "images/2.webp"
- text: "incredibly intricate and detailed illustrated abstract art iphone wallpaper, ATMGRN illustration style, green"
  output:
    url: "images/3.webp"
- text: "a fall landscape, ATMGRN illustration style"
  output:
    url: "images/4.webp"
- text: "A punk rock frog in a studded leather jacket shouting into a microphone while standing on a stump, ATMGRN illustration style, blue tint"
  output:
    url: "images/5.webp"
- text: "a penguin that is a car, ATMGRN illustration style, blue tint"
  output:
    url: "images/6.webp"
instance_prompt: ATMGRN
---

# Flux Autumn Green

Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `ATMGRN` to trigger the image generation.

## Training details

This model was trained on Replicate, here: https://replicate.com/ostris/flux-dev-lora-trainer/train

The training set consists of 14 images generated on Midjourney using `--sref 2795713976`. You can find the entire training set, including the auto-generated captions and training images, in the `./training_set` directory.

Below are the training parameters I used, which seem to work fairly well for illustration/cartoony Flux LoRAs.

NOTE: This is 3200 training steps in total. The `steps` parameter is `800` because I used a `batch_size` of `4`.
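As a quick sanity check on that step accounting, a minimal Python sketch (numbers copied from the parameters below, nothing model-specific):

```python
# The trainer counts optimizer steps; with a batch size of 4,
# each step processes 4 images, so the effective per-image
# training step count is steps * batch_size.
steps = 800
batch_size = 4

effective_steps = steps * batch_size
print(effective_steps)  # 3200
```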
```json
{
  "steps": 800,
  "lora_rank": 24,
  "batch_size": 4,
  "autocaption": true,
  "input_images": "training_set/2024-09-23-autumn-green.zip",
  "trigger_word": "ATMGRN",
  "learning_rate": 0.0003,
  "autocaption_suffix": "ATMGRN style"
}
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev',
    torch_dtype=torch.float16,
).to('cuda')
pipeline.load_lora_weights('jakedahn/flux-autumn-green', weight_name='lora.safetensors')
image = pipeline('cat with a hat, ATMGRN illustration style').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
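If you're curious what "fusing" a LoRA actually does, here is a toy NumPy sketch of the underlying arithmetic: the low-rank update `B @ A` (rank 24 here, matching the `lora_rank` used in training) is scaled and added into the base weight matrix. The shapes and scale are illustrative only, not the actual Flux layer sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
rank = 24            # matches lora_rank from the training parameters
d_out, d_in = 64, 48  # toy dimensions for the sketch

W = rng.standard_normal((d_out, d_in))         # frozen base weight
A = rng.standard_normal((rank, d_in)) * 0.01   # LoRA down-projection
B = rng.standard_normal((d_out, rank)) * 0.01  # LoRA up-projection
scale = 0.8                                    # the "LoRA weight"

# Fusing adds the scaled low-rank update into the base weight,
# producing a matrix of the same shape with the LoRA baked in.
W_fused = W + scale * (B @ A)
```

In diffusers, `pipeline.fuse_lora()` performs roughly this addition across all adapted layers, so the LoRA no longer needs to be applied separately at inference time.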