Updated the "How to use" section so that the code actually does what the live demo does

#4
by srinivasgs - opened

Added some code that runs the raw model outputs (which are not very useful or interpretable on their own) through the image processor's post-processing and prints human-readable labels with confidence scores.

The output of this code snippet is now:

Detected remote with confidence 0.994 at location [46.96, 72.61, 181.02, 119.73]
Detected remote with confidence 0.975 at location [340.66, 79.19, 372.59, 192.65]
Detected cat with confidence 0.984 at location [12.27, 54.25, 319.42, 470.99]
Detected remote with confidence 0.922 at location [41.66, 71.96, 178.7, 120.33]
Detected cat with confidence 0.914 at location [342.34, 21.48, 638.64, 372.46]

which I think is useful and shows whether the model is working.
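For reference, a minimal sketch of that post-processing step. The thread doesn't name the checkpoint or the test image, so the `hustvl/yolos-small` checkpoint and the standard COCO cats image are assumptions here:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

# Assumed checkpoint and test image; the thread doesn't specify them
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("hustvl/yolos-small")
model = AutoModelForObjectDetection.from_pretrained("hustvl/yolos-small")

inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# Convert raw logits/boxes to per-detection scores, labels, and
# (xmin, ymin, xmax, ymax) boxes in the original image's coordinates
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(
    outputs, threshold=0.9, target_sizes=target_sizes
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} "
        f"with confidence {round(score.item(), 3)} at location {box}"
    )
```

This prints lines in the format quoted above.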

Thanks! Could you also remove the feature extractor, as that class is now deprecated?

Yes, you can replace YolosFeatureExtractor in the code snippet with YolosImageProcessor or AutoImageProcessor.
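The swap is a drop-in change; a quick sketch (the checkpoint name is an assumption, since the thread doesn't specify one):

```python
# Deprecated class:
# from transformers import YolosFeatureExtractor
# feature_extractor = YolosFeatureExtractor.from_pretrained("hustvl/yolos-small")

# Replacement: AutoImageProcessor resolves to the YOLOS image processor
# for YOLOS checkpoints, and exposes the same preprocessing plus
# post_process_object_detection for the raw outputs
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("hustvl/yolos-small")
```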

Thanks a lot!

nielsr changed pull request status to merged
