Download the pretrained checkpoints:
First, make sure you have installed the Hugging Face Hub CLI and the ModelScope CLI:
pip install -U "huggingface_hub[cli]"
pip install modelscope
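To confirm that both packages are available in your environment, you can check their versions from Python (a minimal sanity check; it only assumes the two packages above are installed):

```python
# Quick sanity check that both download tools are importable.
import huggingface_hub
import modelscope

print("huggingface_hub:", huggingface_hub.__version__)
print("modelscope:", modelscope.__version__)
```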
Download the pretrained DiT and VAE checkpoints:
hf download tencent/HunyuanImage-2.1 --local-dir ./ckpts
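If you prefer to stay in Python rather than the shell, the same checkpoints can be fetched with huggingface_hub's snapshot_download; the sketch below is equivalent to the command above:

```python
from huggingface_hub import snapshot_download

# Download the DiT and VAE checkpoints into ./ckpts, mirroring
# `hf download tencent/HunyuanImage-2.1 --local-dir ./ckpts`.
snapshot_download(repo_id="tencent/HunyuanImage-2.1", local_dir="./ckpts")
```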
Downloading Text Encoders
HunyuanImage uses an MLLM and a byT5 as text encoders.
MLLM
HunyuanImage can be integrated with different MLLMs, including HunyuanMLLM and other open-source MLLMs.
At this stage, we have not yet released the latest HunyuanMLLM, so we recommend that community users use an open-source alternative such as Qwen2.5-VL-7B-Instruct, provided by the Qwen team, which can be downloaded with the following command:
hf download Qwen/Qwen2.5-VL-7B-Instruct --local-dir ./ckpts/text_encoder/llm
ByT5 encoder
We use Glyph-SDXL-v2 as our byT5 encoder, which can be downloaded with the following commands:
hf download google/byt5-small --local-dir ./ckpts/text_encoder/byt5-small
modelscope download --model AI-ModelScope/Glyph-SDXL-v2 --local_dir ./ckpts/text_encoder/Glyph-SDXL-v2
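These two downloads can also be scripted from Python. The sketch below assumes a recent modelscope release whose snapshot_download accepts a local_dir argument; older versions only expose cache_dir:

```python
from huggingface_hub import snapshot_download as hf_snapshot_download
from modelscope import snapshot_download as ms_snapshot_download

# byT5 base model from the Hugging Face Hub.
hf_snapshot_download(repo_id="google/byt5-small",
                     local_dir="./ckpts/text_encoder/byt5-small")

# Glyph-SDXL-v2 weights from ModelScope (local_dir support assumed).
ms_snapshot_download("AI-ModelScope/Glyph-SDXL-v2",
                     local_dir="./ckpts/text_encoder/Glyph-SDXL-v2")
```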
You can also download the checkpoints manually from here and place them in the text_encoder folder as follows:
ckpts
├── text_encoder
│   ├── Glyph-SDXL-v2
│   │   ├── assets
│   │   │   ├── color_idx.json
│   │   │   ├── multilingual_10-lang_idx.json
│   │   │   └── ...
│   │   ├── checkpoints
│   │   │   ├── byt5_model.pt
│   │   │   └── ...
│   │   └── ...
│   └── ...
└── ...
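After downloading, a short check like the following can confirm that the key files landed where the layout above expects them (the paths are taken directly from that tree):

```python
import os

# Key files from the expected layout above.
expected = [
    "ckpts/text_encoder/Glyph-SDXL-v2/assets/color_idx.json",
    "ckpts/text_encoder/Glyph-SDXL-v2/assets/multilingual_10-lang_idx.json",
    "ckpts/text_encoder/Glyph-SDXL-v2/checkpoints/byt5_model.pt",
]

for path in expected:
    status = "OK" if os.path.exists(path) else "MISSING"
    print(f"{status:8s}{path}")
```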
💡 Tips for using hf / huggingface-cli (network issues)
1. Using HF-Mirror
If you encounter slow download speeds in China, you can use a mirror to speed up the download:
HF_ENDPOINT=https://hf-mirror.com hf download tencent/HunyuanImage-2.1 --local-dir ./ckpts
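The mirror also works with the Python API; this sketch assumes HF_ENDPOINT is read when huggingface_hub is imported, so it sets the variable first:

```python
import os

# Set the mirror endpoint before importing huggingface_hub.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

from huggingface_hub import snapshot_download

snapshot_download(repo_id="tencent/HunyuanImage-2.1", local_dir="./ckpts")
```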
2. Resume Download
The hf / huggingface-cli tool supports resuming downloads. If the download is interrupted, you can simply rerun the download command to resume the process.
Note: If an error like No such file or directory: 'ckpts/.huggingface/.gitignore.lock' occurs during the download process, you can ignore it and rerun the download command.
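On a flaky connection you can get the same effect as rerunning the command by hand with a small retry loop around snapshot_download (a sketch; the retry count and delay are arbitrary):

```python
import time
from huggingface_hub import snapshot_download

# Partially downloaded files are resumed on the next attempt, so retrying is safe.
for attempt in range(5):
    try:
        snapshot_download(repo_id="tencent/HunyuanImage-2.1", local_dir="./ckpts")
        break
    except Exception as err:  # e.g. transient network or lock-file errors
        print(f"Attempt {attempt + 1} failed: {err}; retrying in 10 s")
        time.sleep(10)
```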