---
license: apache-2.0
tags:
- object-detection
- vision
datasets:
- coco
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
  example_title: Savanna
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
  example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
  example_title: Airport
---

# DETR (End-to-End Object Detection) model with ResNet-50 backbone, trained on the SKU110K dataset with 400 `num_queries`

DEtection TRansformer (DETR) model trained end-to-end on the SKU110K object detection dataset (~8k annotated training images). DETR was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).
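
The 400 `num_queries` in the title refers to the number of object queries in the DETR decoder, which caps how many objects the model can detect per image (the base checkpoint uses 100; densely packed SKU110K shelf scenes need more). As a sketch of where that parameter lives (this is not the author's training code), it can be set through `DetrConfig`:

```python
from transformers import DetrConfig

# num_queries is the number of object queries in the DETR decoder and an
# upper bound on the number of detections per image; the default is 100.
config = DetrConfig(num_queries=400)
print(config.num_queries)  # 400
```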

### How to use

Here is how to use this model. Note that the snippet below loads the base `facebook/detr-resnet-50` checkpoint; to run the fine-tuned weights, substitute this repository's model id:

```python
from transformers import DetrImageProcessor, DetrForObjectDetection
import torch
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# you can specify the revision tag if you don't want the timm dependency
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50", revision="no_timm")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50", revision="no_timm")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# convert outputs (bounding boxes and class logits) to COCO API
# let's only keep detections with score > 0.9
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.9)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
```

This should output:
```
Detected remote with confidence 0.998 at location [40.16, 70.81, 175.55, 117.98]
Detected remote with confidence 0.996 at location [333.24, 72.55, 368.33, 187.66]
Detected couch with confidence 0.995 at location [-0.02, 1.15, 639.73, 473.76]
Detected cat with confidence 0.999 at location [13.24, 52.05, 314.02, 470.93]
Detected cat with confidence 0.999 at location [345.4, 23.85, 640.37, 368.72]
```

Currently, both the feature extractor and model support PyTorch.
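
The boxes returned above are in `(xmin, ymin, xmax, ymax)` pixel coordinates, so they can be drawn directly with Pillow. A minimal sketch (the box values and blank canvas below are stand-ins; in practice draw `results["boxes"]` onto the downloaded image):

```python
from PIL import Image, ImageDraw

# Stand-in canvas and boxes; in practice, use the input image and the boxes
# returned by post_process_object_detection.
image = Image.new("RGB", (640, 480), "white")
boxes = [[40.16, 70.81, 175.55, 117.98], [13.24, 52.05, 314.02, 470.93]]

draw = ImageDraw.Draw(image)
for box in boxes:
    draw.rectangle(box, outline="red", width=3)
image.save("detections.png")
```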
63
+
64
+ ## Training data
65
+
66
+ The DETR model was trained on [SKU110K Dataset](https://github.com/eg4000/SKU110K_CVPR19), a dataset consisting of 8219/588/2936 annotated images for training/validation/test respectively.

## Training procedure
### Training

The model was trained in two stages on a single RTX 4060 Ti GPU: 140 epochs fine-tuning the decoder only with a batch size of 8, followed by 70 epochs fine-tuning the whole network with a batch size of 3 and gradient accumulation over 3 steps.
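
Accumulating gradients over 3 steps with a per-step batch size of 3 gives an effective batch size of 9. A generic sketch of that pattern (a toy linear model stands in for DETR here; this is not the author's actual training loop):

```python
import torch
from torch import nn

# Toy stand-in for DETR and its detection loss.
model = nn.Linear(4, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

accum_steps = 3  # accumulate gradients over 3 mini-batches
batches = [(torch.randn(3, 4), torch.randn(3, 2)) for _ in range(6)]  # batch size 3

optimizer.zero_grad()
for step, (x, y) in enumerate(batches, start=1):
    loss = nn.functional.mse_loss(model(x), y)
    # dividing by accum_steps makes the accumulated gradient the average
    # over the effective batch of 3 * 3 = 9 samples
    (loss / accum_steps).backward()
    if step % accum_steps == 0:
        optimizer.step()  # one parameter update per 3 mini-batches
        optimizer.zero_grad()
```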

## Evaluation results

This model achieves an AP (average precision) of **59.0** on the SKU110K validation set. The result was computed with the torchmetrics `MeanAveragePrecision` class.