isalia99 committed
Commit 57d22ab
1 Parent(s): f7cf318

readme and preprocessor config

Files changed (2)
  1. README.md +74 -0
  2. preprocessor_config.json +18 -0
README.md CHANGED
---
license: apache-2.0
tags:
- object-detection
- vision
datasets:
- sku110k
widget:
- src: >-
    https://github.com/Isalia20/DETR-finetune/blob/main/IMG_3507.jpg?raw=true
  example_title: StoreExample (Not from SKU110K Dataset)
---

# DETR (End-to-End Object Detection) model with ResNet-101-DC5 backbone, trained on the SKU110K dataset with 400 num_queries

DEtection TRansformer (DETR) model trained end-to-end on the SKU110K object detection dataset (8k annotated images). The main differences from the original model are that it uses **400** num_queries and is fine-tuned on the SKU110K dataset.

### How to use

Here is how to use this model:

```python
from transformers import DetrImageProcessor, DetrForObjectDetection
import torch
from PIL import Image, ImageOps
import requests

url = "https://github.com/Isalia20/DETR-finetune/blob/main/IMG_3507.jpg?raw=true"
image = Image.open(requests.get(url, stream=True).raw)
image = ImageOps.exif_transpose(image)  # respect the EXIF orientation flag

# you can specify the revision tag if you don't want the timm dependency
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-101-dc5")
model = DetrForObjectDetection.from_pretrained("isalia99/detr-resnet-101-dc5-sku110k")
model = model.eval()
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# convert outputs (bounding boxes and class logits) to COCO API format,
# keeping only detections with score > 0.8
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.8)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
```
This should output:
```
Detected LABEL_1 with confidence 0.983 at location [665.49, 480.05, 708.15, 650.11]
Detected LABEL_1 with confidence 0.938 at location [204.99, 1405.9, 239.9, 1546.5]
...
Detected LABEL_1 with confidence 0.998 at location [772.85, 169.49, 829.67, 372.18]
Detected LABEL_1 with confidence 0.999 at location [828.28, 1475.16, 874.37, 1593.43]
```
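The `post_process_object_detection` call rescales DETR's raw, normalized (center_x, center_y, width, height) boxes to absolute (x0, y0, x1, y1) pixel coordinates. A minimal plain-Python sketch of that conversion, using a hypothetical helper `cxcywh_to_xyxy` (not part of transformers):

```python
def cxcywh_to_xyxy(box, img_w, img_h):
    # box: normalized (center_x, center_y, width, height), all in [0, 1]
    cx, cy, w, h = box
    x0 = (cx - w / 2) * img_w   # left edge in pixels
    y0 = (cy - h / 2) * img_h   # top edge in pixels
    x1 = (cx + w / 2) * img_w   # right edge in pixels
    y1 = (cy + h / 2) * img_h   # bottom edge in pixels
    return [round(v, 2) for v in (x0, y0, x1, y1)]

# a centered box covering 20% x 40% of a 1000x500 image
print(cxcywh_to_xyxy([0.5, 0.5, 0.2, 0.4], 1000, 500))  # [400.0, 150.0, 600.0, 350.0]
```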

Currently, both the image processor and the model support PyTorch only.

## Training data

The DETR model was trained on the [SKU110K dataset](https://github.com/eg4000/SKU110K_CVPR19), which consists of **8,219/588/2,936** annotated images for training/validation/testing, respectively.

## Training procedure
### Training

The model was trained on a single RTX 4060 Ti GPU: first for 140 epochs fine-tuning the decoder only with a batch size of 8, then for 70 epochs fine-tuning the whole network with a batch size of 3 and gradient accumulation over 3 steps.
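With a batch size of 3 and gradients accumulated over 3 steps, each optimizer update effectively sees 9 samples. A framework-free sketch of the accumulation schedule (the real optimizer/zero_grad calls are only indicated in comments):

```python
batch_size = 3
accumulation_steps = 3

updates = 0
effective_batch = 0
accumulated = 0
for step in range(1, 13):              # 12 micro-batches
    accumulated += batch_size          # backward() adds this micro-batch's gradients
    if step % accumulation_steps == 0:
        effective_batch = accumulated  # optimizer.step() sees gradients from 9 samples
        updates += 1
        accumulated = 0                # optimizer.zero_grad()

print(updates, effective_batch)  # 4 updates, effective batch size 9
```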

## Evaluation results

This model achieves an mAP of **59.8** on the SKU110K validation set. The result was computed with the torchmetrics `MeanAveragePrecision` class.
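mAP is built on intersection-over-union (IoU) matching between predicted and ground-truth boxes. As an illustration of the underlying metric, here is a minimal IoU function for (x0, y0, x1, y1) boxes (a plain-Python sketch, not the torchmetrics implementation):

```python
def box_iou(a, b):
    # a, b: boxes as [x0, y0, x1, y1]
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])  # intersection top-left
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])  # intersection bottom-right
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(box_iou([0, 0, 2, 2], [1, 1, 3, 3]))  # 1/7 ≈ 0.1428...
```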

## Training code

The training code is released in the [DETR-finetune repository](https://github.com/Isalia20/DETR-finetune/tree/main). It is not yet fully finalized or tested, but the main logic is there.
preprocessor_config.json ADDED
```json
{
  "do_normalize": true,
  "do_resize": true,
  "feature_extractor_type": "DetrFeatureExtractor",
  "format": "coco_detection",
  "image_mean": [
    0.485,
    0.456,
    0.406
  ],
  "image_std": [
    0.229,
    0.224,
    0.225
  ],
  "max_size": 1333,
  "size": 800
}
```
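The `size` and `max_size` entries imply aspect-ratio-preserving resizing: the shorter side is scaled toward 800 pixels, capped so the longer side does not exceed 1333. A simplified sketch of that target-size computation (hypothetical helper; the exact rounding inside `DetrImageProcessor` may differ):

```python
def detr_target_size(height, width, size=800, max_size=1333):
    # Scale the shorter side to `size`, preserving aspect ratio...
    scale = size / min(height, width)
    # ...unless that would push the longer side past `max_size`,
    # in which case scale the longer side to `max_size` instead.
    if round(scale * max(height, width)) > max_size:
        scale = max_size / max(height, width)
    return round(height * scale), round(width * scale)

print(detr_target_size(480, 640))    # shorter side -> 800
print(detr_target_size(1000, 3000))  # longer side capped at 1333
```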