
Model Card for di.FFUSION.ai Text Encoder - SD 2.1 LyCORIS

di.FFUSION.ai-tXe-FXAA, trained on 121,361 images.

Enhance your model's quality and sharpness using your own pre-trained UNet.

The text encoder (without the UNet) is wrapped in LyCORIS. Optimizer: torch.optim.adamw.AdamW(weight_decay=0.01, betas=(0.9, 0.99))
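The optimizer configuration above maps directly onto plain PyTorch; a minimal sketch, with the parameter below standing in for the text encoder's weights and the learning rate taken from the ss_text_encoder_lr value reported under Training Details:

```python
import torch

# Stand-in parameter: in real training this would be the text encoder's weights.
params = [torch.nn.Parameter(torch.zeros(4, 4))]

# AdamW exactly as reported in the card: weight_decay=0.01, betas=(0.9, 0.99).
optimizer = torch.optim.AdamW(params, lr=1e-07, weight_decay=0.01, betas=(0.9, 0.99))
```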

Network dimension/rank: 768, alpha: 768. Module: lycoris.kohya with network args {'conv_dim': '256', 'conv_alpha': '256', 'algo': 'loha'}

The file is large because of the LyCORIS conv dimension/alpha of 256.
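The loha algorithm factorizes each weight update as a Hadamard (element-wise) product of two low-rank products, storing four factor matrices per layer, which is why a conv rank as high as 256 inflates the file size. A minimal numpy sketch of the LoHa parameterization (the shapes here are illustrative, not taken from the actual model):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 768, 768, 256  # rank mirrors the card's conv_dim of 256

# Two independent low-rank factor pairs.
B1, A1 = rng.standard_normal((d_out, rank)), rng.standard_normal((rank, d_in))
B2, A2 = rng.standard_normal((d_out, rank)), rng.standard_normal((rank, d_in))

# LoHa update: element-wise product of two rank-`rank` matrices.
delta_W = (B1 @ A1) * (B2 @ A2)

# LoHa keeps four factor matrices per layer -- twice as many as plain LoRA --
# so large ranks grow the checkpoint quickly.
loha_params = B1.size + A1.size + B2.size + A2.size
```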

This is a heavily experimental version that we used to test training even with sloppy captions (quick WD-tagger tags and poor CLIP interrogations), yet the results were satisfying.

Note: this is not the text encoder used in the official FFUSION AI model.

Table of Contents

Model Details

Model Description


  • Developed by: FFusion.ai
  • Shared by [Optional]: idle stoev
  • Model type: Language model
  • Language(s) (NLP): en
  • License: creativeml-openrail-m
  • Parent Model: More information needed
  • Resources for more information: More information needed

Uses

Direct Use

Pair this LyCORIS-wrapped text encoder with your own pre-trained SD 2.1 UNet (the UNet is not included) to enhance the quality and sharpness of your model's outputs.

Downstream Use [Optional]

Out-of-Scope Use

Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

Recommendations

Training Details

Training Data

Trained on 121,361 images.

  • ss_caption_tag_dropout_rate: 0.0
  • ss_multires_noise_discount: 0.3
  • ss_mixed_precision: bf16
  • ss_text_encoder_lr: 1e-07
  • ss_keep_tokens: 3
  • ss_network_args: {"conv_dim": "256", "conv_alpha": "256", "algo": "loha"}
  • ss_caption_dropout_rate: 0.02
  • ss_flip_aug: False
  • ss_learning_rate: 2e-07
  • ss_sd_model_name: stabilityai/stable-diffusion-2-1-base
  • ss_max_grad_norm: 1.0
  • ss_num_epochs: 2
  • ss_gradient_checkpointing: False
  • ss_face_crop_aug_range: None
  • ss_epoch: 2
  • ss_num_train_images: 121361
  • ss_color_aug: False
  • ss_gradient_accumulation_steps: 1
  • ss_total_batch_size: 100
  • ss_prior_loss_weight: 1.0
  • ss_training_comment: None
  • ss_network_dim: 768
  • ss_output_name: FusionaMEGA1tX
  • ss_max_bucket_reso: 1024
  • ss_network_alpha: 768.0
  • ss_steps: 2444
  • ss_shuffle_caption: True
  • ss_training_finished_at: 1684158038.0763328
  • ss_min_bucket_reso: 256
  • ss_noise_offset: 0.09
  • ss_enable_bucket: True
  • ss_batch_size_per_device: 20
  • ss_max_train_steps: 2444
  • ss_network_module: lycoris.kohya
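These ss_* metadata keys correspond to kohya-ss sd-scripts options. A hypothetical reconstruction of the training invocation — the actual command line is not recorded in this card, and the flag names below are an assumption based on sd-scripts conventions:

```shell
# Hypothetical reconstruction (assumption); dataset paths omitted.
python train_network.py \
  --pretrained_model_name_or_path="stabilityai/stable-diffusion-2-1-base" \
  --v2 \
  --network_module="lycoris.kohya" \
  --network_dim=768 --network_alpha=768 \
  --network_args "conv_dim=256" "conv_alpha=256" "algo=loha" \
  --learning_rate=2e-07 --text_encoder_lr=1e-07 --unet_lr=2e-07 \
  --lr_scheduler="linear" --lr_warmup_steps=303 \
  --max_train_steps=2444 --train_batch_size=20 \
  --mixed_precision="bf16" --noise_offset=0.09 \
  --enable_bucket --min_bucket_reso=256 --max_bucket_reso=1024 \
  --shuffle_caption --keep_tokens=3 \
  --caption_dropout_rate=0.02 --max_token_length=225 \
  --output_name="FusionaMEGA1tX"
```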

Training Procedure

Preprocessing

Aspect-ratio bucketing statistics (55 buckets, mean aspect-ratio error ≈ 0.0169):

{"buckets": {"0": {"resolution": [192, 256], "count": 1}, "1": {"resolution": [192, 320], "count": 1}, "2": {"resolution": [256, 384], "count": 1}, "3": {"resolution": [256, 512], "count": 1}, "4": {"resolution": [384, 576], "count": 2}, "5": {"resolution": [384, 640], "count": 2}, "6": {"resolution": [384, 704], "count": 1}, "7": {"resolution": [384, 1088], "count": 15}, "8": {"resolution": [448, 448], "count": 5}, "9": {"resolution": [448, 576], "count": 1}, "10": {"resolution": [448, 640], "count": 1}, "11": {"resolution": [448, 768], "count": 1}, "12": {"resolution": [448, 832], "count": 1}, "13": {"resolution": [448, 1088], "count": 25}, "14": {"resolution": [448, 1216], "count": 1}, "15": {"resolution": [512, 640], "count": 2}, "16": {"resolution": [512, 768], "count": 10}, "17": {"resolution": [512, 832], "count": 3}, "18": {"resolution": [512, 896], "count": 1525}, "19": {"resolution": [512, 960], "count": 2}, "20": {"resolution": [512, 1024], "count": 665}, "21": {"resolution": [512, 1088], "count": 8}, "22": {"resolution": [576, 576], "count": 5}, "23": {"resolution": [576, 768], "count": 1}, "24": {"resolution": [576, 832], "count": 667}, "25": {"resolution": [576, 896], "count": 9601}, "26": {"resolution": [576, 960], "count": 872}, "27": {"resolution": [576, 1024], "count": 17}, "28": {"resolution": [640, 640], "count": 3}, "29": {"resolution": [640, 768], "count": 7}, "30": {"resolution": [640, 832], "count": 608}, "31": {"resolution": [640, 896], "count": 90}, "32": {"resolution": [704, 640], "count": 1}, "33": {"resolution": [704, 704], "count": 11}, "34": {"resolution": [704, 768], "count": 1}, "35": {"resolution": [704, 832], "count": 1}, "36": {"resolution": [768, 640], "count": 225}, "37": {"resolution": [768, 704], "count": 6}, "38": {"resolution": [768, 768], "count": 74442}, "39": {"resolution": [832, 576], "count": 23784}, "40": {"resolution": [832, 640], "count": 554}, "41": {"resolution": [896, 512], "count": 1235}, "42": {"resolution": [896, 576], "count": 50}, "43": {"resolution": [896, 640], "count": 88}, "44": {"resolution": [960, 512], "count": 165}, "45": {"resolution": [960, 576], "count": 5246}, "46": {"resolution": [1024, 448], "count": 5}, "47": {"resolution": [1024, 512], "count": 1187}, "48": {"resolution": [1024, 576], "count": 40}, "49": {"resolution": [1088, 384], "count": 70}, "50": {"resolution": [1088, 448], "count": 36}, "51": {"resolution": [1088, 512], "count": 3}, "52": {"resolution": [1216, 448], "count": 36}, "53": {"resolution": [1344, 320], "count": 29}, "54": {"resolution": [1536, 384], "count": 1}}, "mean_img_ar_error": 0.01693107810697896}
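As a sanity check, the per-bucket counts in the bucket statistics above sum exactly to the reported 121,361 training images:

```python
# Per-bucket image counts, copied in order from buckets 0-54 above.
counts = [1, 1, 1, 1, 2, 2, 1, 15, 5, 1, 1, 1, 1, 25, 1, 2, 10, 3, 1525, 2,
          665, 8, 5, 1, 667, 9601, 872, 17, 3, 7, 608, 90, 1, 11, 1, 1, 225,
          6, 74442, 23784, 554, 1235, 50, 88, 165, 5246, 5, 1187, 40, 70, 36,
          3, 36, 29, 1]

total = sum(counts)  # matches ss_num_train_images: 121361
```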

Speeds, Sizes, Times

  • ss_resolution: (768, 768)
  • ss_v2: True
  • ss_cache_latents: False
  • ss_unet_lr: 2e-07
  • ss_num_reg_images: 0
  • ss_max_token_length: 225
  • ss_lr_scheduler: linear
  • ss_reg_dataset_dirs: {}
  • ss_lr_warmup_steps: 303
  • ss_num_batches_per_epoch: 1222
  • ss_lowram: False
  • ss_multires_noise_iterations: None
  • ss_optimizer: torch.optim.adamw.AdamW(weight_decay=0.01, betas=(0.9, 0.99))
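The reported scheduler (linear, 303 warmup steps over 2,444 total steps, i.e. 1,222 batches per epoch × 2 epochs) can be sketched as a simple multiplier on the base learning rate; a minimal stand-alone version, noting that the exact shape used by the training scripts may differ slightly:

```python
WARMUP_STEPS = 303   # ss_lr_warmup_steps
TOTAL_STEPS = 2444   # ss_max_train_steps = 1222 batches/epoch * 2 epochs

def lr_factor(step: int) -> float:
    """Linear warmup to 1.0, then linear decay back to 0.0."""
    if step < WARMUP_STEPS:
        return step / WARMUP_STEPS
    return max(0.0, (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS))
```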

Evaluation

Testing Data, Factors & Metrics

Testing Data

More information needed

Factors

More information needed

Metrics

More information needed

Results

More information needed

Model Examination

More information needed

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: 8xA100
  • Hours used: 64
  • Cloud Provider: CoreWeave
  • Compute Region: US Main
  • Carbon Emitted: 6.72
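The figures above imply roughly the following energy budget. This is a back-of-the-envelope sketch: the ~400 W average per-GPU draw is an assumption, not a value from the card:

```python
GPUS = 8            # Hardware Type: 8xA100
HOURS = 64          # Hours used
GPU_POWER_KW = 0.4  # assumed average draw per A100 (assumption, not measured)

energy_kwh = GPUS * HOURS * GPU_POWER_KW  # total GPU energy in kWh

# Emissions then scale with the grid's carbon intensity (kg CO2eq per kWh),
# which is what the ML Impact calculator estimates per cloud region.
```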

Technical Specifications [optional]

Model Architecture and Objective

Enhance your model's quality and sharpness using your own pre-trained UNet.

Compute Infrastructure

More information needed

Hardware

8xA100

Software

Fully trained with the Kohya S trainer and the LyCORIS library by Shih-Ying Yeh (Kohaku-BlueLeaf); see https://arxiv.org/abs/2108.06098

Citation

BibTeX:

@misc{LyCORIS,
  author = "Shih-Ying Yeh (Kohaku-BlueLeaf), Yu-Guan Hsieh, Zhidong Gao",
  title = "LyCORIS - Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion",
  howpublished = "\url{https://github.com/KohakuBlueleaf/LyCORIS}",
  month = "March",
  year = "2023"
}

APA:

More information needed

Glossary [optional]

More information needed

More Information [optional]

More information needed

Model Card Authors [optional]

idle stoev

Model Card Contact

di@ffusion.ai

How to Get Started with the Model

Use the code below to get started with the model.


More information needed