---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
- not-for-all-eyes
---
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).<br>
- To generate images, provide the model to an app such as Mochi Diffusion ([GitHub](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv)).<br>
- The `split_einsum` version is compatible with all compute unit options, including the Neural Engine.<br>
- The `original` version is only compatible with the CPU & GPU option (see the loading sketch after these notes).<br>
- Custom resolution versions are tagged accordingly.<br>
- `vae` tagged files have a vae embedded into the model.<br>
- Descriptions are posted as-is from the original model source; not all features and/or results may be available in Core ML format.<br>
- This model was converted with `vae-encoder` for image-to-image (i2i).
- Models that are 32 bit will have "fp32" in the filename.
# Note: Some models do not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
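This repo only hosts the converted weights; apps such as Mochi Diffusion handle loading for you. As a hedged illustration, here is a minimal Swift sketch of loading one of the variants directly with the `StableDiffusion` package from [apple/ml-stable-diffusion](https://github.com/apple/ml-stable-diffusion). The resource path is a placeholder, and initializer labels may vary slightly between package versions.

```swift
import Foundation
import CoreML
import StableDiffusion

// Placeholder path: point this at the folder that holds the compiled
// .mlmodelc resources for the variant you downloaded from this repo.
let resourceURL = URL(fileURLWithPath: "/path/to/dreamful_split-einsum")

let mlConfig = MLModelConfiguration()
// A `split_einsum` variant can run on any compute unit, including the Neural Engine.
// For an `original` variant, use .cpuAndGPU instead.
mlConfig.computeUnits = .all

// reduceMemory trades speed for lower peak memory (mainly useful on iOS/iPadOS).
let pipeline = try StableDiffusionPipeline(
    resourcesAt: resourceURL,
    configuration: mlConfig,
    disableSafety: false,
    reduceMemory: false
)
try pipeline.loadResources()
```
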
# Dreamful:
Source(s): [CivitAI](https://civitai.com/models/17754/dreamful)

This model mix aims to create the most realistic and natural images possible. It is still in testing, so please leave feedback.

Guide:
For the parameters, I recommend the following settings (a usage sketch follows the list):
- Sampler: DPM++ SDE Karras or Euler a
- Steps: 20 for portraits, 30 for full body
- CFG Scale: 7-10
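The Swift package from apple/ml-stable-diffusion does not ship these exact samplers, so the hedged sketch below (continuing from the loading example above; the prompt, negative prompt, and seed are made-up placeholders) approximates the recommendation with the built-in DPM-Solver multistep scheduler. Results will therefore differ somewhat from the CivitAI samples.

```swift
// Hypothetical prompt, negative prompt, and seed; only the step count and
// guidance scale follow the recommendations above.
var genConfig = StableDiffusionPipeline.Configuration(prompt: "portrait photo, natural lighting, detailed face")
genConfig.negativePrompt = "lowres, bad anatomy, blurry"
genConfig.stepCount = 20              // 20 for portraits, 30 for full body
genConfig.guidanceScale = 7.5         // CFG scale in the suggested 7-10 range
genConfig.seed = 42
// Closest built-in scheduler to the recommended samplers.
genConfig.schedulerType = .dpmSolverMultistepScheduler

let images = try pipeline.generateImages(configuration: genConfig) { _ in true }
```
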
Models in the mix:
- guofeng3_v32Light
- pastelMixStylizedAnime_pastelMixPrunedFP16
- goodAsianGirlFace_goodAsianGirlFaceV12
- Basil_mix_fixed
Existing VAEs:
- vae-ft-mse-840000-ema-pruned
- emaPrunedVAE_emaPruned
LoRA is not added yet.