frgfm committed
Commit ab1c8cc
1 Parent(s): 02a290c

docs: Updated README

Files changed (1)
1. README.md +117 -117
README.md CHANGED

---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- frgfm/imagenette
---

# RepVGG-A0 model

This model is pretrained on [ImageNette](https://github.com/fastai/imagenette). The RepVGG architecture was introduced in [this paper](https://arxiv.org/pdf/2101.03697.pdf).

## Model description

The authors' core idea is to decouple the training-time architecture (which uses shortcut connections) from the inference-time one (a plain, VGG-style stack). Thanks to the design of its building block, the training-time network can be re-parameterized into a simple sequence of convolutions and non-linear activations.
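To make the re-parameterization concrete, here is a minimal, self-contained sketch of the conv-batchnorm fusion it builds on. The `fuse_conv_bn` helper is written for illustration only; it is not Holocron's implementation, which also folds the 1x1 and identity branches into the same 3x3 kernel:

```python
import torch
from torch import nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm2d (in eval mode) into the preceding Conv2d."""
    fused = nn.Conv2d(
        conv.in_channels, conv.out_channels, conv.kernel_size,
        stride=conv.stride, padding=conv.padding, dilation=conv.dilation,
        groups=conv.groups, bias=True,
    )
    # Per-channel scale applied by the batch norm at inference time
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    conv_bias = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.data = bn.bias.data + (conv_bias - bn.running_mean) * scale
    return fused

# The fused convolution matches the conv + BN pair at inference time
conv, bn = nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8).eval()
x = torch.rand(1, 3, 32, 32)
assert torch.allclose(fuse_conv_bn(conv, bn)(x), bn(conv(x)), atol=1e-5)
```

Applying this kind of fusion to every branch of the block and summing the resulting kernels is what turns the multi-branch training graph into the plain inference-time stack described above.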

## Installation

### Prerequisites

Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
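If you want to verify your interpreter before installing, a quick check using only the standard library (nothing Holocron-specific) is:

```python
import sys

# The prerequisite above: Python 3.6 or higher
assert sys.version_info >= (3, 6), "Python 3.6+ is required, found " + sys.version.split()[0]
```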

### Latest stable release

You can install the latest stable release of the package from [PyPI](https://pypi.org/project/pylocron/) as follows:

```shell
pip install pylocron
```

or using [conda](https://anaconda.org/frgfm/pylocron):

```shell
conda install -c frgfm pylocron
```

### Developer mode

Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:

```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
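Once the editable install completes, a quick import check confirms Python picks the package up from your clone. This is a minimal sketch; it only assumes the package imports as `holocron`, as the usage example below does:

```python
# Smoke test for the source install
import holocron
from holocron.models import model_from_hf_hub  # entry point used in the next section

print(holocron.__file__)  # should point inside your local Holocron clone
```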

## Usage instructions

```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode

from holocron.models import model_from_hf_hub

model = model_from_hf_hub("frgfm/repvgg_a0").eval()

# Load the image to classify (replace `path_to_an_image` with your own file path)
img = Image.open(path_to_an_image).convert("RGB")

# Preprocessing
config = model.default_cfg
transform = Compose([
    Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
    PILToTensor(),
    ConvertImageDtype(torch.float32),
    Normalize(config['mean'], config['std'])
])

input_tensor = transform(img).unsqueeze(0)

# Inference
with torch.inference_mode():
    output = model(input_tensor)
    probs = output.squeeze(0).softmax(dim=0)
```
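To turn the probability vector into readable predictions, a short follow-up in plain PyTorch (it makes no further assumptions about Holocron's API):

```python
# Top-5 class indices and their probabilities
top_probs, top_idxs = probs.topk(5)
for idx, p in zip(top_idxs.tolist(), top_probs.tolist()):
    print(f"class {idx}: {p:.2%}")
```

Since the card is also tagged `onnx`, here is a hedged sketch of exporting the loaded model with the standard `torch.onnx.export` API (the output file name and axis names are arbitrary choices made here):

```python
# Export with a dummy input matching the expected (C, H, W) shape from the config
dummy_input = torch.rand(1, *config['input_shape'])
torch.onnx.export(
    model, dummy_input, "repvgg_a0.onnx",
    input_names=["input"], output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},
)
```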

## Citation

Original paper

```bibtex
@article{DBLP:journals/corr/abs-2101-03697,
  author     = {Xiaohan Ding and
                Xiangyu Zhang and
                Ningning Ma and
                Jungong Han and
                Guiguang Ding and
                Jian Sun},
  title      = {RepVGG: Making VGG-style ConvNets Great Again},
  journal    = {CoRR},
  volume     = {abs/2101.03697},
  year       = {2021},
  url        = {https://arxiv.org/abs/2101.03697},
  eprinttype = {arXiv},
  eprint     = {2101.03697},
  timestamp  = {Tue, 09 Feb 2021 15:29:34 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2101-03697.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

Source of this implementation

```bibtex
@software{Fernandez_Holocron_2020,
  author = {Fernandez, François-Guillaume},
  month  = {5},
  title  = {{Holocron}},
  url    = {https://github.com/frgfm/Holocron},
  year   = {2020}
}
```