---
license: cc-by-nc-4.0
---

# InstaFlow-0.9B fine-tuned from 2-Rectified Flow

InstaFlow-0.9B is a **one-step** text-to-image generative model fine-tuned from [2-Rectified Flow](https://huggingface.co/XCLiu/2_rectified_flow_from_sd_1_5). It is trained with text-conditioned reflow and distillation as described in [our paper](https://arxiv.org/abs/2309.06380).

Rectified Flow has interesting theoretical properties. You may check [this ICLR paper](https://arxiv.org/abs/2209.03003) and [this arXiv paper](https://arxiv.org/abs/2209.14577).

## 512-Resolution Images Generated from InstaFlow-0.9B

![image/png](https://cdn-uploads.huggingface.co/production/uploads/646b0bbdec9a61e871799339/Q3RF8B3tMVL2Zx7QDwYjL.png)

# Usage

Please refer to the [official GitHub repo](https://github.com/gnobitab/InstaFlow). An unofficial one-step sampling sketch is included at the end of this card.

## Training

Training pipeline:

1. **Distill (Stage 1):** Starting from the [2-Rectified Flow](https://huggingface.co/XCLiu/2_rectified_flow_from_sd_1_5) checkpoint, we fix the time t=0 for the neural network and fine-tune it with the distillation objective, using a batch size of 1024 for 21,500 iterations. The guidance scale of the teacher model, 2-Rectified Flow, is set to 1.5, and the similarity loss is L2 loss. (54.4 A100 GPU days)
2. **Distill (Stage 2):** We switch the similarity loss to LPIPS loss and continue training with the distillation objective, using a batch size of 1024 for another 18,000 iterations. (53.6 A100 GPU days)

The final model is **InstaFlow-0.9B**.

**Total Training Cost:** It takes 199.2 A100 GPU days in total (data generation + reflow + distillation) to obtain InstaFlow-0.9B.

## Evaluation Results

Metrics of InstaFlow-0.9B, measured on MS COCO 2017 with 5,000 images and a 1-step Euler solver:

*FID-5k = 23.4, CLIP score = 0.304*

Measured on MS COCO 2014 with 30,000 images and a 1-step Euler solver:

*FID-30k = 13.1*

## Citation

```
@article{liu2023insta,
  title={InstaFlow: One Step is Enough for High-Quality Diffusion-Based Text-to-Image Generation},
  author={Liu, Xingchao and Zhang, Xiwen and Ma, Jianzhu and Peng, Jian and Liu, Qiang},
  journal={arXiv preprint arXiv:2309.06380},
  year={2023}
}
```
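
## One-Step Sampling Sketch (unofficial)

The official pipeline lives in the [GitHub repo](https://github.com/gnobitab/InstaFlow); the snippet below is only a minimal sketch of one-step Euler sampling from a rectified flow. It assumes the checkpoint exposes standard Stable Diffusion 1.5 components loadable with `diffusers`, that the UNet outputs the rectified-flow velocity, and that the network is queried at the fixed time t=0 mentioned in the training section. The repo id and these conventions are assumptions, not guarantees.

```python
# Unofficial sketch: one Euler step x_1 = x_0 + v(x_0, t=0) of a rectified flow.
# Repo id, timestep convention, and update sign below are assumptions; see the
# official InstaFlow repository for the actual pipeline.
import torch
from diffusers import StableDiffusionPipeline

device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained(
    "XCLiu/instaflow_0_9B_from_sd_1_5",  # hypothetical repo id
    torch_dtype=torch.float16,
    safety_checker=None,
).to(device)

prompt = "a photograph of a corgi running on a beach"

with torch.no_grad():
    # Encode the prompt with the CLIP tokenizer / text encoder.
    tokens = pipe.tokenizer(
        prompt,
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    )
    text_emb = pipe.text_encoder(tokens.input_ids.to(device))[0]

    # Initial latent noise x_0 (64x64 latents for 512x512 images).
    x0 = torch.randn(1, 4, 64, 64, device=device, dtype=torch.float16)

    # Velocity prediction at the fixed network time t=0 (per the training setup above).
    t = torch.zeros(1, device=device)
    v = pipe.unet(x0, t, encoder_hidden_states=text_emb).sample

    # One-step Euler update along the flow, then decode with the VAE.
    x1 = x0 + v
    image = pipe.vae.decode(x1 / pipe.vae.config.scaling_factor).sample
    image = (image / 2 + 0.5).clamp(0, 1)
```

For the exact timestep scaling, guidance handling, and latent conventions actually used by InstaFlow-0.9B, follow the pipeline code in the official repository rather than this sketch.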