kadirnar committed on
Commit
9d72ee4
1 Parent(s): c784d72

Update README.md

Files changed (1):
  1. README.md +115 -0
README.md CHANGED

---
license: apache-2.0
tags:
- object-detection
- computer-vision
- timm
- object-classification
language:
- en
library_name: timm
---

### Model Description
ClassifyHub builds on the timm (PyTorch Image Models) library. The architecture families below are available through timm, each listed with its reference paper; a minimal loading example follows the list.

* Aggregating Nested Transformers - https://arxiv.org/abs/2105.12723
* BEiT - https://arxiv.org/abs/2106.08254
* Big Transfer ResNetV2 (BiT) - https://arxiv.org/abs/1912.11370
* Bottleneck Transformers - https://arxiv.org/abs/2101.11605
* CaiT (Class-Attention in Image Transformers) - https://arxiv.org/abs/2103.17239
* CoaT (Co-Scale Conv-Attentional Image Transformers) - https://arxiv.org/abs/2104.06399
* CoAtNet (Convolution and Attention) - https://arxiv.org/abs/2106.04803
* ConvNeXt - https://arxiv.org/abs/2201.03545
* ConvNeXt-V2 - http://arxiv.org/abs/2301.00808
* ConViT (Soft Convolutional Inductive Biases Vision Transformers) - https://arxiv.org/abs/2103.10697
* CspNet (Cross-Stage Partial Networks) - https://arxiv.org/abs/1911.11929
* DeiT - https://arxiv.org/abs/2012.12877
* DeiT-III - https://arxiv.org/pdf/2204.07118.pdf
* DenseNet - https://arxiv.org/abs/1608.06993
* DLA - https://arxiv.org/abs/1707.06484
* DPN (Dual-Path Network) - https://arxiv.org/abs/1707.01629
* EdgeNeXt - https://arxiv.org/abs/2206.10589
* EfficientFormer - https://arxiv.org/abs/2206.01191
* EfficientNet (MBConvNet Family)
  * EfficientNet NoisyStudent (B0-B7, L2) - https://arxiv.org/abs/1911.04252
  * EfficientNet AdvProp (B0-B8) - https://arxiv.org/abs/1911.09665
  * EfficientNet (B0-B7) - https://arxiv.org/abs/1905.11946
  * EfficientNet-EdgeTPU (S, M, L) - https://ai.googleblog.com/2019/08/efficientnet-edgetpu-creating.html
  * EfficientNet V2 - https://arxiv.org/abs/2104.00298
  * FBNet-C - https://arxiv.org/abs/1812.03443
  * MixNet - https://arxiv.org/abs/1907.09595
  * MNASNet B1, A1 (Squeeze-Excite), and Small - https://arxiv.org/abs/1807.11626
  * MobileNet-V2 - https://arxiv.org/abs/1801.04381
  * Single-Path NAS - https://arxiv.org/abs/1904.02877
  * TinyNet - https://arxiv.org/abs/2010.14819
* EVA - https://arxiv.org/abs/2211.07636
* FlexiViT - https://arxiv.org/abs/2212.08013
* GCViT (Global Context Vision Transformer) - https://arxiv.org/abs/2206.09959
* GhostNet - https://arxiv.org/abs/1911.11907
* gMLP - https://arxiv.org/abs/2105.08050
* GPU-Efficient Networks - https://arxiv.org/abs/2006.14090
* Halo Nets - https://arxiv.org/abs/2103.12731
* HRNet - https://arxiv.org/abs/1908.07919
* Inception-V3 - https://arxiv.org/abs/1512.00567
* Inception-ResNet-V2 and Inception-V4 - https://arxiv.org/abs/1602.07261
* Lambda Networks - https://arxiv.org/abs/2102.08602
* LeViT (Vision Transformer in ConvNet's Clothing) - https://arxiv.org/abs/2104.01136
* MaxViT (Multi-Axis Vision Transformer) - https://arxiv.org/abs/2204.01697
* MLP-Mixer - https://arxiv.org/abs/2105.01601
* MobileNet-V3 (MBConvNet w/ Efficient Head) - https://arxiv.org/abs/1905.02244
  * FBNet-V3 - https://arxiv.org/abs/2006.02049
  * HardCoRe-NAS - https://arxiv.org/abs/2102.11646
  * LCNet - https://arxiv.org/abs/2109.15099
* MobileViT - https://arxiv.org/abs/2110.02178
* MobileViT-V2 - https://arxiv.org/abs/2206.02680
* MViT-V2 (Improved Multiscale Vision Transformer) - https://arxiv.org/abs/2112.01526
* NASNet-A - https://arxiv.org/abs/1707.07012
* NesT - https://arxiv.org/abs/2105.12723
* NFNet-F - https://arxiv.org/abs/2102.06171
* NF-RegNet / NF-ResNet - https://arxiv.org/abs/2101.08692
* PNasNet - https://arxiv.org/abs/1712.00559
* PoolFormer (MetaFormer) - https://arxiv.org/abs/2111.11418
* Pooling-based Vision Transformer (PiT) - https://arxiv.org/abs/2103.16302
* PVT-V2 (Improved Pyramid Vision Transformer) - https://arxiv.org/abs/2106.13797
* RegNet - https://arxiv.org/abs/2003.13678
* RegNetZ - https://arxiv.org/abs/2103.06877
* RepVGG - https://arxiv.org/abs/2101.03697
* ResMLP - https://arxiv.org/abs/2105.03404
* ResNet/ResNeXt
  * ResNet (v1b/v1.5) - https://arxiv.org/abs/1512.03385
  * ResNeXt - https://arxiv.org/abs/1611.05431
  * 'Bag of Tricks' / Gluon C, D, E, S variations - https://arxiv.org/abs/1812.01187
  * Weakly-supervised (WSL) Instagram pretrained / ImageNet tuned ResNeXt101 - https://arxiv.org/abs/1805.00932
  * Semi-supervised (SSL) / Semi-weakly Supervised (SWSL) ResNet/ResNeXts - https://arxiv.org/abs/1905.00546
  * ECA-Net (ECAResNet) - https://arxiv.org/abs/1910.03151v4
  * Squeeze-and-Excitation Networks (SEResNet) - https://arxiv.org/abs/1709.01507
  * ResNet-RS - https://arxiv.org/abs/2103.07579
* Res2Net - https://arxiv.org/abs/1904.01169
* ResNeSt - https://arxiv.org/abs/2004.08955
* ReXNet - https://arxiv.org/abs/2007.00992
* SelecSLS - https://arxiv.org/abs/1907.00837
* Selective Kernel Networks - https://arxiv.org/abs/1903.06586
* Sequencer2D - https://arxiv.org/abs/2205.01972
* Swin S3 (AutoFormerV2) - https://arxiv.org/abs/2111.14725
* Swin Transformer - https://arxiv.org/abs/2103.14030
* Swin Transformer V2 - https://arxiv.org/abs/2111.09883
* Transformer-iN-Transformer (TNT) - https://arxiv.org/abs/2103.00112
* TResNet - https://arxiv.org/abs/2003.13630
* Twins (Spatial Attention in Vision Transformers) - https://arxiv.org/pdf/2104.13840.pdf
* Visformer - https://arxiv.org/abs/2104.12533
* Vision Transformer - https://arxiv.org/abs/2010.11929
* VOLO (Vision Outlooker) - https://arxiv.org/abs/2106.13112
* VovNet V2 and V1 - https://arxiv.org/abs/1911.06667
* Xception - https://arxiv.org/abs/1610.02357
* Xception (Modified Aligned, Gluon) - https://arxiv.org/abs/1802.02611
* Xception (Modified Aligned, TF) - https://arxiv.org/abs/1802.02611
* XCiT (Cross-Covariance Image Transformers) - https://arxiv.org/abs/2106.09681
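
Every family above corresponds to one or more model names in timm itself. As a reference point (independent of classifyhub), any of these architectures can be instantiated directly through timm's `create_model` API; `resnet18` below is just an illustrative pick.

```python
import torch
import timm

# Any architecture family listed above maps to one or more timm model names;
# timm.list_models("*resnet*") style queries show the exact names available.
model = timm.create_model("resnet18", pretrained=True)
model.eval()

# Dummy forward pass: timm classification models expect NCHW float tensors.
dummy = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(dummy)
print(logits.shape)  # torch.Size([1, 1000]) for an ImageNet-1k head
```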

### Installation
```
pip install classifyhub
```

### ClassifyHub (timm) Usage
```python
from classifyhub import Predictor

# Load a timm backbone by name and run classification on a local image.
model = Predictor("resnet18")
model.predict("data/plane.jpg")
```
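
The classifyhub internals are not documented on this card. As a rough illustration of what a predictor like the one above typically does, here is a minimal sketch built directly on timm and PIL; the class name `SimplePredictor` and its methods are hypothetical stand-ins, not the actual classifyhub API.

```python
from PIL import Image
import torch
import timm
from timm.data import resolve_data_config, create_transform


class SimplePredictor:
    """Hypothetical stand-in for the Predictor above: a timm backbone plus its default preprocessing."""

    def __init__(self, model_name: str):
        self.model = timm.create_model(model_name, pretrained=True).eval()
        # Recover the resize/crop/normalization settings the checkpoint was trained with.
        config = resolve_data_config({}, model=self.model)
        self.transform = create_transform(**config)

    def predict(self, image_path: str, top_k: int = 5):
        image = Image.open(image_path).convert("RGB")
        batch = self.transform(image).unsqueeze(0)  # add batch dimension
        with torch.no_grad():
            probs = self.model(batch).softmax(dim=-1)
        top_probs, top_idx = probs.topk(top_k)
        # Return (class index, probability) pairs for the top-k predictions.
        return list(zip(top_idx[0].tolist(), top_probs[0].tolist()))


# Usage mirrors the ClassifyHub snippet above.
predictor = SimplePredictor("resnet18")
print(predictor.predict("data/plane.jpg"))
```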