srinivasgs committed
Commit 199695c
1 Parent(s): 3686e65

updated the "How to use" section so that the code actually does what the live demo does


added some code that runs the raw outputs (which are not very useful/interpretable on their own) through the image processor and prints a message showing the confidence score and a human-readable label for each detection.
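for reference, here is the full updated snippet assembled as one piece. two caveats: the added lines call `torch.tensor`, so `import torch` is also needed (this diff does not add it, so it appears below as a fix), and the image URL isn't visible in the hunks, so the standard COCO cats test image that matches the output shown here is assumed.

```python
import torch
import requests
from PIL import Image
from transformers import YolosFeatureExtractor, YolosForObjectDetection, AutoImageProcessor

# assumption: the usual COCO test image from the model card (two cats on a couch,
# which matches the detections printed below)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = YolosFeatureExtractor.from_pretrained('hustvl/yolos-tiny')
model = YolosForObjectDetection.from_pretrained('hustvl/yolos-tiny')
image_processor = AutoImageProcessor.from_pretrained("hustvl/yolos-tiny")

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)

# model predicts bounding boxes and corresponding COCO classes
logits = outputs.logits
bboxes = outputs.pred_boxes

# print results: rescale the normalized boxes to pixel coordinates of the
# original image and keep only detections above the confidence threshold
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
```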

the output of this code snippet is now:

```
Detected remote with confidence 0.994 at location [46.96, 72.61, 181.02, 119.73]
Detected remote with confidence 0.975 at location [340.66, 79.19, 372.59, 192.65]
Detected cat with confidence 0.984 at location [12.27, 54.25, 319.42, 470.99]
Detected remote with confidence 0.922 at location [41.66, 71.96, 178.7, 120.33]
Detected cat with confidence 0.914 at location [342.34, 21.48, 638.64, 372.46]
```

which I think is useful and tells you whether the model is working

Files changed (1)
1. README.md (+13 -1)
README.md CHANGED
````diff
@@ -35,7 +35,7 @@ You can use the raw model for object detection. See the [model hub](https://hugg
 Here is how to use this model:
 
 ```python
-from transformers import YolosFeatureExtractor, YolosForObjectDetection
+from transformers import YolosFeatureExtractor, YolosForObjectDetection, AutoImageProcessor
 from PIL import Image
 import requests
 
@@ -44,6 +44,7 @@ image = Image.open(requests.get(url, stream=True).raw)
 
 feature_extractor = YolosFeatureExtractor.from_pretrained('hustvl/yolos-tiny')
 model = YolosForObjectDetection.from_pretrained('hustvl/yolos-tiny')
+image_processor = AutoImageProcessor.from_pretrained("hustvl/yolos-tiny")
 
 inputs = feature_extractor(images=image, return_tensors="pt")
 outputs = model(**inputs)
@@ -51,6 +52,17 @@ outputs = model(**inputs)
 # model predicts bounding boxes and corresponding COCO classes
 logits = outputs.logits
 bboxes = outputs.pred_boxes
+
+
+# print results
+target_sizes = torch.tensor([image.size[::-1]])
+results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
+for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
+    box = [round(i, 2) for i in box.tolist()]
+    print(
+        f"Detected {model.config.id2label[label.item()]} with confidence "
+        f"{round(score.item(), 3)} at location {box}"
+    )
 ```
 
 Currently, both the feature extractor and model support PyTorch.
````
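on why the raw `logits` and `pred_boxes` aren't directly interpretable: for DETR-style detectors like YOLOS, the post-processing step roughly softmaxes the logits (the last class means "no object"), converts the normalized (center_x, center_y, width, height) boxes to absolute (x0, y0, x1, y1) pixel coordinates, and filters by the confidence threshold. a minimal sketch of that logic, for intuition only (this mirrors, but is not, the library's implementation):

```python
import torch

# Rough sketch of what post_process_object_detection does for DETR-style
# models like YOLOS (illustrative only; not the library's exact code).
def sketch_post_process(outputs, threshold, target_sizes):
    # class scores: softmax over logits; the last class is "no object", so drop it
    probs = outputs.logits.softmax(-1)[..., :-1]
    scores, labels = probs.max(-1)
    # boxes come out normalized as (center_x, center_y, width, height);
    # convert to corner format (x0, y0, x1, y1) ...
    cx, cy, w, h = outputs.pred_boxes.unbind(-1)
    boxes = torch.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], dim=-1)
    # ... and scale from [0, 1] to pixel coordinates of the original image
    img_h, img_w = target_sizes.unbind(1)
    scale = torch.stack([img_w, img_h, img_w, img_h], dim=1).unsqueeze(1)
    boxes = boxes * scale
    # keep only confident detections
    keep = scores > threshold
    return [
        {"scores": s[k], "labels": l[k], "boxes": b[k]}
        for s, l, b, k in zip(scores, labels, boxes, keep)
    ]
```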