frgfm committed
Commit 8b9f2c2
1 Parent(s): fc4564b

docs: Updated README

Files changed (1):
  1. README.md +116 -116

README.md CHANGED

---
license: apache-2.0
tags:
- image-classification
- pytorch
datasets:
- frgfm/imagenette
---


# CSP-Darknet-53 Mish model

Pretrained on [ImageNette](https://github.com/fastai/imagenette). The CSP-Darknet-53 Mish architecture was introduced in [this paper](https://arxiv.org/pdf/1911.11929.pdf).


## Model description

The core idea of the authors is to modify the convolutional stages of the architecture by adding cross stage partial (CSP) blocks and to replace the activations with Mish.

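To make those two ingredients concrete, here is a minimal PyTorch sketch of the Mish activation (`x * tanh(softplus(x))`) and of a simplified cross stage partial block. It is only an illustration of the idea: the block composition, layer names and stage layout are simplified assumptions and do not reproduce Holocron's actual `cspdarknet53_mish` implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Mish(nn.Module):
    """Mish activation: x * tanh(softplus(x))."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.tanh(F.softplus(x))


class SimplifiedCSPBlock(nn.Module):
    """Toy cross stage partial block: split channels, process one half, re-merge.

    Didactic simplification only, not Holocron's actual block.
    """
    def __init__(self, channels: int) -> None:
        super().__init__()
        half = channels // 2
        self.conv = nn.Sequential(
            nn.Conv2d(half, half, 3, padding=1, bias=False),
            nn.BatchNorm2d(half),
            Mish(),
        )
        self.transition = nn.Conv2d(channels, channels, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Cross stage partial connection: one half bypasses the conv path entirely
        part1, part2 = x.chunk(2, dim=1)
        return self.transition(torch.cat([part1, self.conv(part2)], dim=1))


# Quick shape check
block = SimplifiedCSPBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```
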
## Installation

### Prerequisites

Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.

### Latest stable release

You can install the latest stable release of the package from [pypi](https://pypi.org/project/pylocron/) as follows:

```shell
pip install pylocron
```

or using [conda](https://anaconda.org/frgfm/pylocron):

```shell
conda install -c frgfm pylocron
```

### Developer mode

Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:

```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```

## Usage instructions

```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub

model = model_from_hf_hub("frgfm/cspdarknet53_mish").eval()

img = Image.open(path_to_an_image).convert("RGB")

# Preprocessing
config = model.default_cfg
transform = Compose([
    Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
    PILToTensor(),
    ConvertImageDtype(torch.float32),
    Normalize(config['mean'], config['std'])
])

input_tensor = transform(img).unsqueeze(0)

# Inference
with torch.inference_mode():
    output = model(input_tensor)
    probs = output.squeeze(0).softmax(dim=0)
```

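Continuing from the snippet above, the probabilities can be turned into readable predictions. Note that the `'classes'` entry used below is an assumption about what `model.default_cfg` exposes; adapt the key (or fall back to raw indices) to match the actual configuration.

```python
# Top-5 class indices and their probabilities
top_probs, top_idxs = probs.topk(5)

# Map indices to human-readable labels if the config exposes them
# (the 'classes' key is an assumption, adapt it to your config)
class_names = config.get('classes') if isinstance(config, dict) else None
for p, i in zip(top_probs.tolist(), top_idxs.tolist()):
    label = class_names[i] if class_names else str(i)
    print(f"{label}: {p:.2%}")
```
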
## Citation

Original paper

```bibtex
@article{DBLP:journals/corr/abs-1911-11929,
  author     = {Chien{-}Yao Wang and
                Hong{-}Yuan Mark Liao and
                I{-}Hau Yeh and
                Yueh{-}Hua Wu and
                Ping{-}Yang Chen and
                Jun{-}Wei Hsieh},
  title      = {CSPNet: {A} New Backbone that can Enhance Learning Capability of {CNN}},
  journal    = {CoRR},
  volume     = {abs/1911.11929},
  year       = {2019},
  url        = {http://arxiv.org/abs/1911.11929},
  eprinttype = {arXiv},
  eprint     = {1911.11929},
  timestamp  = {Tue, 03 Dec 2019 20:41:07 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1911-11929.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

Source of this implementation

```bibtex
@software{Fernandez_Holocron_2020,
  author = {Fernandez, François-Guillaume},
  month  = {5},
  title  = {{Holocron}},
  url    = {https://github.com/frgfm/Holocron},
  year   = {2020}
}
```