# Deep Model Assembling

This repository contains the official code for [Deep Model Assembling](https://arxiv.org/abs/2212.04129).
> **Title**: [**Deep Model Assembling**](https://arxiv.org/abs/2212.04129)
> **Authors**: [Zanlin Ni](https://scholar.google.com/citations?user=Yibz_asAAAAJ&hl=en&oi=ao), [Yulin Wang](https://scholar.google.com/citations?hl=en&user=gBP38gcAAAAJ), Jiangwei Yu, [Haojun Jiang](https://scholar.google.com/citations?hl=en&user=ULmStp8AAAAJ), [Yue Cao](https://scholar.google.com/citations?hl=en&user=iRUO1ckAAAAJ), [Gao Huang](https://scholar.google.com/citations?user=-P9LwcgAAAAJ&hl=en&oi=ao) (Corresponding Author)
> **Institute**: Tsinghua University and Beijing Academy of Artificial Intelligence (BAAI)
> **Publication**: *arXiv preprint ([arXiv 2212.04129](https://arxiv.org/abs/2212.04129))*
> **Contact**: nzl22 at mails dot tsinghua dot edu dot cn

## News

- `Dec 10, 2022`: Released code for training ViT-B, ViT-L and ViT-H on ImageNet-1K.

## Overview

In this paper, we present a divide-and-conquer strategy for training large models. Our algorithm, Model Assembling, divides a large model into smaller modules, optimizes them independently, and then assembles them together. Though conceptually simple, our method significantly outperforms end-to-end (E2E) training in terms of both training efficiency and final accuracy. For example, on ViT-H, Model Assembling outperforms E2E training by **2.7%** while reducing the training cost by **43%**.
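To make the idea concrete, here is a minimal, self-contained PyTorch sketch of the divide-train-assemble pipeline. It is illustrative only: the module boundaries, model dimensions, and the placeholder loss are our assumptions, and the paper additionally supervises each module through a lightweight shared "meta model", which is omitted here.

```python
# Conceptual sketch of model assembling (illustrative, not this repo's code):
# split a deep network into K modules, train each independently, then
# assemble the trained modules into the full model.
import torch
import torch.nn as nn

depth, K = 12, 4  # 12 transformer blocks, split into 4 modules of 3 blocks
blocks = [nn.TransformerEncoderLayer(d_model=192, nhead=3, batch_first=True)
          for _ in range(depth)]
modules = [nn.Sequential(*blocks[i * (depth // K):(i + 1) * (depth // K)])
           for i in range(K)]

# Phase 1: optimize each module independently (placeholder objective here;
# the paper trains each module inside a shared meta model instead).
for module in modules:
    opt = torch.optim.AdamW(module.parameters(), lr=1e-3)
    x = torch.randn(8, 197, 192)      # dummy token sequence (B, N, D)
    opt.zero_grad()
    loss = module(x).pow(2).mean()    # placeholder loss
    loss.backward()
    opt.step()

# Phase 2: assemble the trained modules and fine-tune the full model end to end.
assembled = nn.Sequential(*modules)
print(assembled(torch.randn(8, 197, 192)).shape)  # torch.Size([8, 197, 192])
```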
## Data Preparation

- The ImageNet dataset should be organized as follows:

```
data
├── train
│   ├── folder 1 (class 1)
│   ├── folder 2 (class 2)
│   ├── ...
├── val
│   ├── folder 1 (class 1)
│   ├── folder 2 (class 2)
│   ├── ...
```
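With this layout, each class folder corresponds to one label, so the splits can be loaded directly with torchvision's `ImageFolder`. A minimal sketch (the transforms below are generic placeholders, not the augmentation pipeline used by our training scripts):

```python
# Minimal loading sketch for the directory layout above; the transforms are
# generic placeholders rather than the augmentation used in this repo.
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
val_set = datasets.ImageFolder("data/val", transform=transform)
print(len(train_set.classes))  # one class per sub-folder, e.g. 1000 for ImageNet-1K
```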
## Training on ImageNet-1K

- You can add `--use_amp 1` to train with PyTorch's Automatic Mixed Precision (AMP).
- Auto-resuming is enabled by default, i.e., the training script automatically resumes from the latest checkpoint in `output_dir` (a sketch of this behavior follows the launch commands below).
- The effective batch size = `NGPUS` * `batch_size` * `update_freq`. All of our experiments use an effective batch size of 2048 (e.g., 8 GPUs × a per-GPU `batch_size` of 64 × an `update_freq` of 4). To avoid OOM issues, adjust these arguments while keeping their product at 2048.
- We provide single-node training scripts for simplicity. For multi-node training, replace the launch command in the scripts with its torchrun equivalent:
```bash
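# single-node launch used in the provided training scripts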
python -m torch.distributed.launch --nproc_per_node=${NGPUS} --master_port=23346 --use_env main.py ...

# for multi-node training, replace the command above with:
torchrun \
    --nnodes=$NODES \
    --nproc_per_node=$NGPUS \
    --rdzv_backend=c10d \
    --rdzv_endpoint=$MASTER_ADDR:60900 \
    main.py ...
```
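For reference, the auto-resume behavior mentioned above can be pictured with the following sketch. It is an assumption about how such resuming typically works, not this repo's actual logic; the checkpoint filename pattern and state-dict keys are hypothetical.

```python
# Hypothetical sketch of auto-resuming (see the training code for the real
# logic): load the newest checkpoint found in output_dir, if one exists.
import glob
import os
import torch

def auto_resume(output_dir, model, optimizer):
    ckpts = glob.glob(os.path.join(output_dir, "*.pth"))  # assumed naming
    if not ckpts:
        return 0                                  # no checkpoint: start fresh
    latest = max(ckpts, key=os.path.getmtime)     # most recently written ckpt
    state = torch.load(latest, map_location="cpu")
    model.load_state_dict(state["model"])         # assumed state-dict keys
    optimizer.load_state_dict(state["optimizer"])
    return state["epoch"] + 1                     # resume at the next epoch
```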
## Results

The figures for the results below can be found in our [paper](https://arxiv.org/abs/2212.04129).

### Results on CIFAR-100

### Training Efficiency

- Comparing different training budgets
- Detailed convergence curves of ViT-Huge

### Data Efficiency
## Citation

If you find our work helpful, please **star🌟** this repo and **cite📑** our paper. Thanks for your support!

```
@article{Ni2022Assemb,
  title={Deep Model Assembling},
  author={Ni, Zanlin and Wang, Yulin and Yu, Jiangwei and Jiang, Haojun and Cao, Yue and Huang, Gao},
  journal={arXiv preprint arXiv:2212.04129},
  year={2022}
}
```

## Acknowledgements

Our implementation is mainly based on [deit](https://github.com/facebookresearch/deit). We thank them for their clean codebase.

## Contact

If you have any questions or concerns, please send mail to [nzl22@mails.tsinghua.edu.cn](mailto:nzl22@mails.tsinghua.edu.cn).