The model architectures included come from a wide variety of sources. Sources, including papers, original implementations ("reference code") that I rewrote or adapted, and PyTorch implementations that I leveraged directly ("code"), are listed below.
Most included models have pretrained weights. The weights are either: 1) from their original sources, 2) ported by myself from their original framework (e.g. Tensorflow models), or 3) trained from scratch using the included training script.
The validation results for the pretrained weights are here
A more exciting view (with pretty pictures) of the models within timm can be found at paperswithcode.
Big Transfer (BiT): General Visual Representation Learning
- https://arxiv.org/abs/1912.11370
CSPNet: A New Backbone that can Enhance Learning Capability of CNN
- https://arxiv.org/abs/1911.11929
Densely Connected Convolutional Networks
- https://arxiv.org/abs/1608.06993
Dual Path Networks
- https://arxiv.org/abs/1707.01629
Neural Architecture Design for GPU-Efficient Networks
- https://arxiv.org/abs/2006.14090
Deep High-Resolution Representation Learning for Visual Recognition
- https://arxiv.org/abs/1908.07919
Rethinking the Inception Architecture for Computer Vision
- https://arxiv.org/abs/1512.00567
Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
- https://arxiv.org/abs/1602.07261
Learning Transferable Architectures for Scalable Image Recognition
- https://arxiv.org/abs/1707.07012
Progressive Neural Architecture Search
- https://arxiv.org/abs/1712.00559
Searching for MobileNetV3
- https://arxiv.org/abs/1905.02244
Designing Network Design Spaces
- https://arxiv.org/abs/2003.13678
Making VGG-style ConvNets Great Again
- https://arxiv.org/abs/2101.03697

Implementation: resnet.py
ResNet (V1B)
Deep Residual Learning for Image Recognition
- https://arxiv.org/abs/1512.03385
ResNeXt
Aggregated Residual Transformations for Deep Neural Networks
- https://arxiv.org/abs/1611.05431
'Bag of Tricks' / Gluon C, D, E, S ResNet variants
Bag of Tricks for Image Classification with CNNs
- https://arxiv.org/abs/1812.01187
Instagram pretrained / ImageNet tuned ResNeXt101
Exploring the Limits of Weakly Supervised Pretraining
- https://arxiv.org/abs/1805.00932
Semi-supervised (SSL) / Semi-weakly Supervised (SWSL) ResNet and ResNeXts
Billion-scale semi-supervised learning for image classification
- https://arxiv.org/abs/1905.00546
Squeeze-and-Excitation Networks
Squeeze-and-Excitation Networks
- https://arxiv.org/abs/1709.01507
NOTE: senet.py is being deprecated
ECAResNet (ECA-Net)
ECA-Net: Efficient Channel Attention for Deep CNN
- https://arxiv.org/abs/1910.03151v4
Res2Net: A New Multi-scale Backbone Architecture
- https://arxiv.org/abs/1904.01169
ResNeSt: Split-Attention Networks
- https://arxiv.org/abs/2004.08955
ReXNet: Diminishing Representational Bottleneck on CNN
- https://arxiv.org/abs/2007.00992
Selective-Kernel Networks
- https://arxiv.org/abs/1903.06586
XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera
- https://arxiv.org/abs/1907.00837

Implementation: senet.py
NOTE: I am deprecating this version of the networks; the new ones are part of resnet.py
Paper: Squeeze-and-Excitation Networks
- https://arxiv.org/abs/1709.01507
TResNet: High Performance GPU-Dedicated Architecture
- https://arxiv.org/abs/2003.13630
Very Deep Convolutional Networks For Large-Scale Image Recognition
- https://arxiv.org/pdf/1409.1556.pdf
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
- https://arxiv.org/abs/2010.11929
CenterMask: Real-Time Anchor-Free Instance Segmentation
- https://arxiv.org/abs/1911.06667
Xception: Deep Learning with Depthwise Separable Convolutions
- https://arxiv.org/abs/1610.02357
Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
- https://arxiv.org/abs/1802.02611