---
license: apache-2.0
language:
  - en
tags:
  - Diffusion Transformer
  - Image Editing
  - Image To Image
  - Scepter
  - ACE
---

ACE: All-round Creator and Editor Following Instructions via Diffusion Transformer

Tongyi Lab, Alibaba Group

ACE is a unified foundational model framework that supports a wide range of visual generation tasks. By defining a Condition Unit (CU) that unifies multi-modal inputs across different tasks and extending it to a long-context CU, we introduce historical contextual information into visual generation tasks, paving the way for ChatGPT-like dialog systems in visual generation.

πŸ“’ News

πŸš€ Installation

Clone the repository and install the necessary packages with pip:

git clone https://github.com/ali-vilab/ACE.git
cd ACE
pip install -r requirements.txt

πŸ”₯ ACE Models

| Model | Status |
|---|---|
| ACE-0.6B-512px | Demo link · ModelScope link · HuggingFace link |
| ACE-0.6B-1024px | Demo link · ModelScope link · HuggingFace link |
| ACE-12B-FLUX-dev | Coming Soon |

πŸ–Ό Model Performance Visualization

The current parameter scale of ACE is 0.6B, which imposes certain limitations on image generation quality. FLUX.1-Dev, on the other hand, has a clear advantage in text-to-image generation quality. By using SDEdit, we can leverage the generative capability of FLUX to further enhance the images produced by ACE. Based on these considerations, we designed the ACE-Refiner pipeline, shown in the diagram below.

(Figure: ACE-Refiner pipeline)

As shown in the figure below, when the refinement strength σ is high, the refined image suffers a loss of fidelity to the original ACE output; conversely, a low σ does not significantly improve image quality. Users can therefore trade off fidelity against image quality according to their needs by setting "REFINER_SCALE" in the configuration file config/inference_config/models/ace_0.6b_1024_refiner.yaml. We recommend using the advanced options in the web UI demo to verify the effect.

(Figure: ACE-Refiner results under different refinement strengths)
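
For command-line use, the sketch below adjusts REFINER_SCALE with sed and then runs inference with the refiner configuration. This is only a sketch: it assumes the field appears on its own line as "REFINER_SCALE: <value>" in the YAML, that the refiner config can be passed to tools/run_inference.py like the other inference configs, and that 0.4 plus the instruction and input image are illustrative values.

# Lower the refinement strength for higher fidelity to the ACE output
# (0.4 is an illustrative value; check the field's placement in the YAML before editing it this way).
sed -i 's/REFINER_SCALE:.*/REFINER_SCALE: 0.4/' config/inference_config/models/ace_0.6b_1024_refiner.yaml
# Run inference with the refiner configuration (instruction and image are illustrative).
PYTHONPATH=. python tools/run_inference.py \
  --cfg config/inference_config/models/ace_0.6b_1024_refiner.yaml \
  --instruction "make the boy cry, his eyes filled with tears" \
  --input_image examples/input_images/example0.webp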

We compared the generation and editing performance of different models on several tasks, as shown in the comparison samples below.

(Figure: comparison samples)

πŸ”₯ Training

We offer a demonstration training YAML that enables the end-to-end training of ACE using a toy dataset. For a comprehensive overview of the hyperparameter configurations, please consult config/ace_0.6b_512_train.yaml.

Prepare datasets

The dataset class located in modules/data/dataset/dataset.py is designed to facilitate end-to-end training on an open-source toy dataset. Download the dataset zip file from ModelScope, and then extract its contents into the cache/datasets/ directory.
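
As a minimal sketch of the expected layout (the zip file name below is a placeholder for whatever you downloaded from ModelScope):

# Create the expected directory and unpack the toy dataset into it.
# <toy_dataset>.zip is a placeholder for the file downloaded from ModelScope.
mkdir -p cache/datasets/
unzip <toy_dataset>.zip -d cache/datasets/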

Should you wish to prepare your own datasets, we recommend consulting modules/data/dataset/dataset.py for detailed guidance on the required data format.

Prepare initial weights

The ACE checkpoints have been uploaded to both the ModelScope and Hugging Face platforms (see the model table above).

In the provided training YAML configuration, we designate the ModelScope URL as the default checkpoint URL. To switch to Hugging Face, simply modify the PRETRAINED_MODEL value in the YAML file, replacing the prefix "ms://iic" with "hf://scepter-studio".
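
For example, a one-line substitution over the training YAML performs the swap. This is a sketch that assumes the prefix appears verbatim in the file; it is shown for the 512px config, and the same applies to the 1024px one:

# Replace the ModelScope prefix with the Hugging Face prefix in PRETRAINED_MODEL.
sed -i 's#ms://iic#hf://scepter-studio#g' config/ace_0.6b_512_train.yaml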

Start training

You can easily start the training procedure by executing one of the following commands:

# ACE-0.6B-512px
PYTHONPATH=. python tools/run_train.py --cfg config/ace_0.6b_512_train.yaml
# ACE-0.6B-1024px
PYTHONPATH=. python tools/run_train.py --cfg config/ace_0.6b_1024_train.yaml

πŸš€ Inference

We provide a simple inference demo that allows users to generate or edit images from text instructions.

PYTHONPATH=. python tools/run_inference.py --cfg config/inference_config/models/ace_0.6b_512.yaml --instruction "make the boy cry, his eyes filled with tears" --seed 199999 --input_image examples/input_images/example0.webp

We recommend running the examples for a quick test. The following command runs the example inference, and the results are saved in examples/output_images/.

PYTHONPATH=. python tools/run_inference.py --cfg config/inference_config/models/ace_0.6b_512.yaml
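
To apply a single instruction to several inputs from the command line, a plain shell loop over the flags shown above is enough. The loop below is only a sketch: the instruction, seed, and glob pattern are illustrative, and outputs go wherever the inference YAML directs them (examples/output_images/ for the bundled examples).

# Apply one editing instruction to every example input image (illustrative values).
for img in examples/input_images/*.webp; do
  PYTHONPATH=. python tools/run_inference.py \
    --cfg config/inference_config/models/ace_0.6b_512.yaml \
    --instruction "make the boy cry, his eyes filled with tears" \
    --seed 199999 \
    --input_image "$img"
done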

πŸ“ Citation

@article{han2024ace,
  title={ACE: All-round Creator and Editor Following Instructions via Diffusion Transformer},
  author={Han, Zhen and Jiang, Zeyinzi and Pan, Yulin and Zhang, Jingfeng and Mao, Chaojie and Xie, Chenwei and Liu, Yu and Zhou, Jingren},
  journal={arXiv preprint arXiv:2410.00086},
  year={2024}
}