trained models
README.md
CHANGED
@@ -1,5 +1,26 @@
---
license: mit
---

# [Visual Product Recognition Challenge](https://www.aicrowd.com/challenges/visual-product-recognition-challenge-2023)

The trained models for the competition. The training code for the models can be found in [HCA97/Product-Recognition](https://github.com/HCA97/Product-Recognition).

# How to use it?

You need to install the `open_clip` library (the PyPI package is named `open_clip_torch`):

```bash
pip install open_clip_torch
```

Example of loading the model:

```py
import open_clip
import torch as th

# Build the ViT-H-14 visual tower (no pretrained weights), then load ours
model = open_clip.create_model_and_transforms('ViT-H-14', None)[0].visual
model.load_state_dict(th.load('path to model'))
model.half()
model.eval()
```
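For the retrieval task, embeddings from the loaded encoder are typically compared with cosine similarity. A minimal sketch of that step, using plain `torch` with made-up random tensors standing in for real `model(images)` outputs (the 1024-dim width of the ViT-H-14 visual tower is an assumption here):

```python
import torch as th
import torch.nn.functional as F

# Hypothetical stand-ins for embeddings produced by model(images)
query = th.randn(4, 1024)     # 4 query product images
gallery = th.randn(10, 1024)  # 10 gallery product images

# L2-normalize so the dot product equals cosine similarity
query = F.normalize(query, dim=-1)
gallery = F.normalize(gallery, dim=-1)

scores = query @ gallery.T  # (4, 10) cosine similarities
ranking = scores.argsort(dim=-1, descending=True)  # gallery indices, best first
```

`ranking[:, 0]` then gives the best gallery match for each query image.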
|
model1.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:13d24332c79fb809ab40a9af19e64e018b428a918f5539324a4815c4685a5ff4
size 1264309243
model2.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:729770f8e62c2742a5d1be6e4e1078cd62de89229f78d1563dc8f0268a8d585c
size 1264312001
model3.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:11617b481eb495124b62a700ffcfe743739ef74045295712396e7cda134f76da
size 1264297689