# RepVGG-A0 model

Pretrained on [ImageNette](https://github.com/fastai/imagenette). The RepVGG architecture was introduced in [this paper](https://arxiv.org/pdf/2101.03697.pdf).


## Model description

The authors' core idea is to decouple the training-time architecture (with shortcut connections) from the inference-time one (a plain feed-forward network). Thanks to the design of its residual block, the training architecture can be reparameterized into a simple sequence of convolutions and non-linear activations.
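This reparameterization can be sketched as follows. The snippet below is a minimal illustrative example, not Holocron's actual code: it fuses bias-free 3x3, 1x1, and identity branches into a single 3x3 convolution by exploiting the linearity of convolution (batch normalization is omitted for simplicity):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of structural reparameterization (not Holocron's code):
# fuse the three bias-free branches of a RepVGG-style block into one 3x3 conv.
channels = 4
conv3 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
conv1 = nn.Conv2d(channels, channels, 1, bias=False)

# Pad the 1x1 kernel to 3x3, and express the identity branch as a 3x3 kernel
w1_padded = F.pad(conv1.weight, [1, 1, 1, 1])
w_id = torch.zeros_like(conv3.weight)
for c in range(channels):
    w_id[c, c, 1, 1] = 1.0

# A single 3x3 conv whose kernel is the sum of the three branch kernels
fused = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
with torch.no_grad():
    fused.weight.copy_(conv3.weight + w1_padded + w_id)

x = torch.randn(1, channels, 8, 8)
with torch.inference_mode():
    y_train = conv3(x) + conv1(x) + x  # training-time: three parallel branches
    y_infer = fused(x)                 # inference-time: one plain convolution
print(torch.allclose(y_train, y_infer, atol=1e-5))  # True
```

Since convolution is linear in its kernel, summing the (suitably padded) branch kernels yields a single convolution with identical outputs, which is what makes the inference-time network a plain highway of convolutions.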

## Installation

### Prerequisites

Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.

### Latest stable release

You can install the latest stable release of the package from [PyPI](https://pypi.org/project/pylocron/) as follows:

```shell
pip install pylocron
```

or using [conda](https://anaconda.org/frgfm/pylocron):

```shell
conda install -c frgfm pylocron
```

### Developer mode

Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:

```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```

## Usage instructions

```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub

model = model_from_hf_hub("frgfm/repvgg_a0").eval()

img = Image.open(path_to_an_image).convert("RGB")

# Preprocessing
config = model.default_cfg
transform = Compose([
    Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
    PILToTensor(),
    ConvertImageDtype(torch.float32),
    Normalize(config['mean'], config['std'])
])

input_tensor = transform(img).unsqueeze(0)

# Inference
with torch.inference_mode():
    output = model(input_tensor)
    probs = output.squeeze(0).softmax(dim=0)
```
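To turn such a probability vector into readable predictions, one can rank its top-5 entries. The sketch below is self-contained: a random 10-class vector (ImageNette has 10 classes) stands in for the real `probs` tensor, and the class indices would be mapped to label names in practice:

```python
import torch

# Standalone sketch: rank the 5 most likely classes from a probability vector.
# A random tensor stands in here for actual model output.
torch.manual_seed(0)
probs = torch.rand(10).softmax(dim=0)  # stand-in for `probs` from the snippet above

top_probs, top_idxs = probs.topk(5)
for p, i in zip(top_probs.tolist(), top_idxs.tolist()):
    print(f"class {i}: {p:.3f}")
```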

## Citation

Original paper

```bibtex
@article{DBLP:journals/corr/abs-2101-03697,
  author     = {Xiaohan Ding and
                Xiangyu Zhang and
                Ningning Ma and
                Jungong Han and
                Guiguang Ding and
                Jian Sun},
  title      = {RepVGG: Making VGG-style ConvNets Great Again},
  journal    = {CoRR},
  volume     = {abs/2101.03697},
  year       = {2021},
  url        = {https://arxiv.org/abs/2101.03697},
  eprinttype = {arXiv},
  eprint     = {2101.03697},
  timestamp  = {Tue, 09 Feb 2021 15:29:34 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2101-03697.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

Source of this implementation

```bibtex
@software{Fernandez_Holocron_2020,
  author = {Fernandez, François-Guillaume},
  month  = {5},
  title  = {{Holocron}},
  url    = {https://github.com/frgfm/Holocron},
  year   = {2020}
}
```