MobileNet-V4 (now in timm)

Community Article Published June 12, 2024

History

Five years ago now, the MobileNet-V3 (https://arxiv.org/abs/1905.02244) and EfficientNet (https://arxiv.org/abs/1905.11946) vision models were introduced by Google researchers. At a very early stage in timm's development, I set out to reproduce these model architectures and port the originally released Tensorflow model weights into PyTorch.

Both of these model architectures were based on the Inverted Residual block (also called Inverted Bottleneck) that was introduced in the earlier MobileNet-V2 model. The IR block consists of a 1x1 pointwise (PW) expansion convolution, followed by a depthwise (DW) convolution (3x3 or 5x5), and finally a 1x1 pointwise linear (PWL, no activation) convolution in the residual path. Unlike most other residual blocks at the time, the wide part sits in the middle of the block at the depthwise conv, instead of at the start/end, and the output of the block has no activation (linear output), hence 'Inverted'.
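The PW expand / DW / PWL project pattern described above can be sketched in a few lines of PyTorch. This is a minimal illustration of the block structure, not timm's actual implementation (which adds squeeze-excite, configurable activations, drop-path, etc.):

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Minimal sketch of the MobileNet-V2 style Inverted Residual block."""
    def __init__(self, in_chs, out_chs, dw_kernel=3, exp_ratio=4.0, stride=1):
        super().__init__()
        mid_chs = int(in_chs * exp_ratio)
        self.use_residual = stride == 1 and in_chs == out_chs
        # 1x1 pointwise expansion (narrow -> wide)
        self.pw_exp = nn.Sequential(
            nn.Conv2d(in_chs, mid_chs, 1, bias=False),
            nn.BatchNorm2d(mid_chs), nn.ReLU6(inplace=True))
        # kxk depthwise conv at the wide middle of the block
        self.dw = nn.Sequential(
            nn.Conv2d(mid_chs, mid_chs, dw_kernel, stride,
                      dw_kernel // 2, groups=mid_chs, bias=False),
            nn.BatchNorm2d(mid_chs), nn.ReLU6(inplace=True))
        # 1x1 pointwise linear projection (wide -> narrow, no activation)
        self.pwl = nn.Sequential(
            nn.Conv2d(mid_chs, out_chs, 1, bias=False),
            nn.BatchNorm2d(out_chs))

    def forward(self, x):
        shortcut = x
        x = self.pwl(self.dw(self.pw_exp(x)))
        return x + shortcut if self.use_residual else x
```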

To this day, timm remains the most comprehensive collection of known EfficientNet and MobileNet-V3 architectures. It covers all of the officially released Tensorflow weights from various model papers (EfficientNet, EfficientNet-EdgeTPU, EfficientNet-V2, MobileNet-V2, MobileNet-V3), training techniques (RandAug/AutoAug, AdvProp, Noisy Student), and numerous other closely related architectures and weights such as MNasNet, FBNet v1/v2/v3, LCNet, TinyNet, MixNet. There are also many weights trained in timm, pure PyTorch, with PyTorch friendly convolution padding (TF weight ports use a 'SAME' padding emulation) that aren't in other collections.

New Models

Now, it's finally time for MobileNet-V4 (https://arxiv.org/abs/2404.10518). Reading through the paper it's apparent the goal was to come up with a new set of NAS searched models that are runtime optimal on today's mobile/edge hardware, from small DSP/CPU devices to modest edge accelerators (e.g. EdgeTPU) in current mobile phones.

This goal was achieved by introducing two new block types to the previous mix:

  • Universal Inverted Bottleneck (UIB)
  • Multi Query Attention (MQA)

Universal Inverted Bottleneck

A superset of the original Inverted Residual / Inverted Bottleneck block, the UIB adds flexibility to the search space: two extra depthwise convolution positions at the start and end of the block, with the middle depthwise convolution now optional. The extra final convolution isn't used (yet), but the new blocks in use now include:

  • 'ExtraDW' with a 3x3 or 5x5 DW convolution at the start of the block, in front of the existing 1x1 PW + kxk DW + 1x1 PWL pattern
  • 'FFN' with no DW convs enabled, just the 1x1 PW expansion + 1x1 PWL convs
  • 'ConvNeXt' with a 3x3 or 5x5 DW convolution at the start of the block and no middle DW convolution, ending in the 1x1 PW + 1x1 PWL (FFN) pair
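The variants above are all the same block with different DW convs switched on or off. A minimal sketch of that idea (illustrative only; the class name, the `conv_bn` helper, and the stride handling are my simplifications, not timm's actual API):

```python
import torch
import torch.nn as nn

def conv_bn(in_chs, out_chs, k=1, stride=1, groups=1, act=True):
    """Hypothetical helper: conv + BN, optionally followed by ReLU."""
    layers = [nn.Conv2d(in_chs, out_chs, k, stride, k // 2,
                        groups=groups, bias=False),
              nn.BatchNorm2d(out_chs)]
    if act:
        layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)

class UniversalInvertedBottleneck(nn.Module):
    """Sketch of the UIB: optional DW conv before the 1x1 expansion and an
    optional middle DW conv. Kernel size 0 disables that position."""
    def __init__(self, in_chs, out_chs, exp_ratio=4.0, stride=1,
                 start_dw_kernel=0, mid_dw_kernel=3):
        super().__init__()
        mid_chs = int(in_chs * exp_ratio)
        self.use_residual = stride == 1 and in_chs == out_chs
        # Optional start DW conv ('ExtraDW' / 'ConvNeXt' configs); takes the
        # stride when there is no middle DW conv to take it.
        self.start_dw = (conv_bn(in_chs, in_chs, start_dw_kernel,
                                 stride if mid_dw_kernel == 0 else 1,
                                 groups=in_chs, act=False)
                         if start_dw_kernel else nn.Identity())
        self.pw_exp = conv_bn(in_chs, mid_chs, 1)
        # Optional middle DW conv; disabling it gives 'FFN' / 'ConvNeXt'
        self.mid_dw = (conv_bn(mid_chs, mid_chs, mid_dw_kernel, stride,
                               groups=mid_chs)
                       if mid_dw_kernel else nn.Identity())
        self.pwl = conv_bn(mid_chs, out_chs, 1, act=False)

    def forward(self, x):
        shortcut = x
        x = self.pwl(self.mid_dw(self.pw_exp(self.start_dw(x))))
        return x + shortcut if self.use_residual else x
```

With `start_dw_kernel=3, mid_dw_kernel=3` this is 'ExtraDW'; with both set to 0 it collapses to an 'FFN'; with only the start DW enabled it matches the 'ConvNeXt' pattern.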

(Figure: UIB block configurations, from the MobileNet-V4 paper: https://arxiv.org/abs/2404.10518)

Mobile MQA

The 'Hybrid' variants of MobileNet-V4 also add attention, via a mobile-optimized Multi Query Attention module. The key and value are shared across heads (effectively a single head each), while the query has 4 or 8 heads. There is optional 2D spatial downsampling for the key-value and/or query.
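The core idea, multi-head queries attending to a single shared key/value head computed on a (possibly downsampled) 2D feature map, can be sketched as below. This is my simplified illustration, not timm's actual module (which differs in projection layout, normalization, and the downsampling ops):

```python
import torch
import torch.nn as nn

class MobileMQA(nn.Module):
    """Sketch of mobile multi-query attention over a 2D feature map:
    num_heads query heads share a single key and a single value head,
    with optional stride-2 DW-conv downsampling of the key/value."""
    def __init__(self, dim, num_heads=4, key_dim=32, kv_stride=1):
        super().__init__()
        self.num_heads = num_heads
        self.key_dim = key_dim
        self.scale = key_dim ** -0.5
        self.q = nn.Conv2d(dim, num_heads * key_dim, 1, bias=False)
        # Optional spatial downsampling before the shared K/V projections
        self.kv_down = (nn.Conv2d(dim, dim, 3, kv_stride, 1,
                                  groups=dim, bias=False)
                        if kv_stride > 1 else nn.Identity())
        # Single shared key/value head: the 'multi-query' part
        self.k = nn.Conv2d(dim, key_dim, 1, bias=False)
        self.v = nn.Conv2d(dim, key_dim, 1, bias=False)
        self.proj = nn.Conv2d(num_heads * key_dim, dim, 1, bias=False)

    def forward(self, x):
        B, C, H, W = x.shape
        # B, heads, H*W, key_dim
        q = self.q(x).reshape(B, self.num_heads, self.key_dim,
                              H * W).transpose(-1, -2)
        kv = self.kv_down(x)
        k = self.k(kv).flatten(2)                      # B, key_dim, L
        v = self.v(kv).flatten(2).transpose(-1, -2)    # B, L, key_dim
        # Shared K/V broadcast against all query heads
        attn = (q @ k.unsqueeze(1)) * self.scale       # B, heads, H*W, L
        out = attn.softmax(dim=-1) @ v.unsqueeze(1)    # B, heads, H*W, key_dim
        out = out.transpose(-1, -2).reshape(B, -1, H, W)
        return self.proj(out)
```

Sharing a single K/V head shrinks the K/V projections and their memory traffic, which is the main win on bandwidth-limited mobile accelerators.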

PyTorch Implementation

I've recently implemented these models in timm in a bid to keep timm the best place to go for efficient image encoders. It builds on top of the previous MobileNet-V3 implementation. I have trained a number of initial weights and am working on covering all of the models mentioned in the paper: https://huggingface.co/collections/timm/mobilenetv4-pretrained-weights-6669c22cda4db4244def9637

And in case you look at that PR and wonder, WTH is EfficientNet-X / EfficientNet-H? They are little-known variants of those models w/ Space2Depth, tweaked for TPU or GPU use. They're in there too, but not the focus.

There is an official Tensorflow implementation of these models in the Tensorflow Model Garden (https://github.com/tensorflow/models), but no sign of any weights yet.