Olive-optimized DirectML ONNX model for https://huggingface.co/mhdang/dpo-sdxl-text2image-v1, created with the Olive toolset: https://github.com/microsoft/Olive
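
Below is a minimal sketch of how an Olive-exported DirectML ONNX SDXL model like this one might be loaded with Optimum's ONNX Runtime pipeline. It assumes the `optimum[onnxruntime]` and `onnxruntime-directml` packages are installed, and the `model_id` placeholder stands in for this repository's id or a local download path.

```python
# Minimal sketch: running a DirectML ONNX SDXL export with Optimum + ONNX Runtime.
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

model_id = "path/to/this-repo"  # placeholder: replace with this repo's id or a local folder

# DmlExecutionProvider routes inference through DirectML on Windows GPUs.
pipe = ORTStableDiffusionXLPipeline.from_pretrained(
    model_id,
    provider="DmlExecutionProvider",
)

image = pipe(prompt="a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```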

Direct Preference Optimization (DPO) for text-to-image diffusion models is a method for aligning diffusion models with human preferences by optimizing directly on human comparison data. https://arxiv.org/abs/2311.12908

This model is used by Fusion Quill, a Windows app that runs Stable Diffusion models locally. https://FusionQuill.AI
