https://pyimagesearch.com/2020/06/01/opencv-social-distancing-detector/
If you would like to learn more about implementing social distancing detectors with computer vision, check out some of the following resources:

Automatic social distance measurement
Social distancing in the workplace
Rohit Kumar Srivastava’s social distancing implementation
Venkatagiri Ramesh’s social distancing project
Mohan Morkel’s social distancing application (which I think may be based on Venkatagiri Ramesh’s)

If you have implemented your own OpenCV social distancing project and I have not linked to it, kindly accept my apologies — there are simply too many implementations for me to keep track of at this point.
Summary

In this tutorial, you learned how to implement a social distancing detector using OpenCV, computer vision, and deep learning. Our implementation worked by:

Using the YOLO object detector to detect people in a video stream
Determining the centroids for each detected person
Computing the pairwise distances between all centroids
Checking to see if any pairwise distances were < N pixels apart, and if so, indicating that the pair of people violated social distancing rules

Furthermore, by using an NVIDIA CUDA-capable GPU, along with OpenCV’s dnn module compiled with NVIDIA GPU support, our method was able to run in real time, making it usable as a proof-of-concept social distancing detector. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below!
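As a closing aside, here is a minimal sketch of the pairwise-distance check summarized above. It assumes you already have a list of (x, y) person centroids from a detector such as YOLO and a hypothetical MIN_DISTANCE threshold; it illustrates the idea only and is not the tutorial's full implementation.

# Minimal sketch of the distance-violation check (illustrative only).
# Assumes `centroids` is a list of (x, y) person centroids produced by a
# detector such as YOLO; MIN_DISTANCE is a hypothetical pixel threshold.
from scipy.spatial import distance as dist
import numpy as np

MIN_DISTANCE = 50  # hypothetical threshold, in pixels

def find_violations(centroids, min_distance=MIN_DISTANCE):
    # compute the pairwise Euclidean distances between all centroids
    D = dist.cdist(np.array(centroids), np.array(centroids), metric="euclidean")
    violations = set()

    # examine the upper triangle of the matrix so each pair is checked once
    for i in range(0, D.shape[0]):
        for j in range(i + 1, D.shape[1]):
            # flag both people if they are closer than the threshold
            if D[i, j] < min_distance:
                violations.add(i)
                violations.add(j)

    return violations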
https://pyimagesearch.com/2020/06/22/turning-any-cnn-image-classifier-into-an-object-detector-with-keras-tensorflow-and-opencv/
In this tutorial, you will learn how to take any pre-trained deep learning image classifier and turn it into an object detector using Keras, TensorFlow, and OpenCV. Today, we’re starting a four-part series on deep learning and object detection:

Part 1: Turning any deep learning image classifier into an object detector with Keras and TensorFlow (today’s post)
Part 2: OpenCV Selective Search for Object Detection
Part 3: Region proposal for object detection with OpenCV, Keras, and TensorFlow
Part 4: R-CNN object detection with Keras and TensorFlow

The goal of this series of posts is to obtain a deeper understanding of how deep learning-based object detectors work, and more specifically:

How traditional computer vision object detection algorithms can be combined with deep learning
What the motivations behind end-to-end trainable object detectors are, and the challenges associated with them
And most importantly, how the seminal Faster R-CNN architecture came to be (we’ll be building a variant of the R-CNN architecture throughout this series)

Today, we’ll be starting with the fundamentals of object detection, including how to take a pre-trained image classifier and utilize image pyramids, sliding windows, and non-maxima suppression to build a basic object detector (think HOG + Linear SVM-inspired). Over the coming weeks, we’ll learn how to build an end-to-end trainable network from scratch. But for today, let’s start with the basics. To learn how to take any Convolutional Neural Network image classifier and turn it into an object detector with Keras and TensorFlow, just keep reading.

Turning any CNN image classifier into an object detector with Keras, TensorFlow, and OpenCV

In the first part of this tutorial, we’ll discuss the key differences between image classification and object detection tasks. I’ll then show you how you can take any Convolutional Neural Network trained for image classification and turn it into an object detector, all in ~200 lines of code. From there, we’ll implement the code necessary to take an image classifier and turn it into an object detector using Keras, TensorFlow, and OpenCV. Finally, we’ll review the results of our work, noting some of the problems and limitations of our implementation, including how we can improve this method.
Image classification vs. object detection

Figure 1: Left: Image classification. Right: Object detection.

In this blog post, we will learn how to turn any deep learning image classifier CNN into an object detector with Keras, TensorFlow, and OpenCV. When performing image classification, given an input image, we present it to our neural network, and we obtain a single class label and a probability associated with that class label prediction (Figure 1, left). This class label is meant to characterize the contents of the entire image, or at least the most dominant, visible contents of the image. We can thus think of image classification as:

One image in
One class label out

Object detection, on the other hand, not only tells us what is in the image (i.e., the class label) but also where in the image the object is via bounding box (x, y)-coordinates (Figure 1, right). Therefore, object detection algorithms allow us to:

Input one image
Obtain multiple bounding boxes and class labels as output

At its very core, any object detection algorithm (regardless of whether it uses traditional computer vision or state-of-the-art deep learning) follows the same pattern:

1. Input: An image that we wish to apply object detection to
2. Output: Three values, including:
2a. A list of bounding boxes, or the (x, y)-coordinates for each object in an image
2b. The class label associated with each of the bounding boxes
2c. The probability/confidence score associated with each bounding box and class label

Today, you’ll see an example of this pattern in action.

How can we turn any deep learning image classifier into an object detector?

At this point, you’re likely wondering: Hey Adrian, if I have a Convolutional Neural Network trained for image classification, how in the world am I going to use it for object detection? Based on your explanation above, it seems like image classification and object detection are fundamentally different, requiring two different types of network architectures. And essentially, that is correct — object detection does require a specialized network architecture. Anyone who has read papers on Faster R-CNN, Single Shot Detectors (SSDs), YOLO, RetinaNet, etc. knows that object detection networks are more complex, more involved, and take multiple orders of magnitude more effort to implement compared to traditional image classification. That said, there is a hack we can leverage to turn our CNN image classifier into an object detector — and the secret sauce lies in traditional computer vision algorithms. Back before deep learning-based object detectors, the state-of-the-art was to use HOG + Linear SVM to detect objects in an image. We’ll be borrowing elements from HOG + Linear SVM to convert any deep neural network image classifier into an object detector.
The first key ingredient from HOG + Linear SVM is to use image pyramids. An “image pyramid” is a multi-scale representation of an image:

Figure 2: Image pyramids allow us to produce images at different scales. When turning an image classifier into an object detector, it is important to classify windows at multiple scales. We will learn how to write an image pyramid Python generator and put it to work in our Keras, TensorFlow, and OpenCV script.

Utilizing an image pyramid allows us to find objects at different scales (i.e., sizes) of an image (Figure 2). At the bottom of the pyramid, we have the original image at its original size (in terms of width and height). And at each subsequent layer, the image is resized (subsampled) and optionally smoothed (usually via Gaussian blurring). The image is progressively subsampled until some stopping criterion is met, which is normally when a minimum size has been reached and no further subsampling needs to take place. The second key ingredient we need is sliding windows:

Figure 3: We will classify regions of our multi-scale image representations. These regions are generated by means of sliding windows.
The combination of image pyramids and sliding windows allows us to turn any image classifier into an object detector using Keras, TensorFlow, and OpenCV. As the name suggests, a sliding window is a fixed-size rectangle that slides from left-to-right and top-to-bottom within an image (as Figure 3 demonstrates, our sliding window could be used to detect the face in the input image). At each stop of the window we would:

Extract the ROI
Pass it through our image classifier (e.g., Linear SVM, CNN, etc.)
Obtain the output predictions

Combined with image pyramids, sliding windows allow us to localize objects at different locations and multiple scales of the input image. The final key ingredient we need is non-maxima suppression. When performing object detection, our object detector will typically produce multiple, overlapping bounding boxes surrounding an object in an image.

Figure 4: One key ingredient to turning a CNN image classifier into an object detector with Keras, TensorFlow, and OpenCV is applying a process known as non-maxima suppression (NMS). We will use NMS to suppress weak, overlapping bounding boxes in favor of higher-confidence predictions.

This behavior is totally normal — it simply implies that as the sliding window approaches an object, our classifier component is returning larger and larger probabilities of a positive detection. Of course, multiple bounding boxes pose a problem — there’s only one object there, and we somehow need to collapse/remove the extraneous bounding boxes.
The solution to the problem is to apply non-maxima suppression (NMS), which collapses weak, overlapping bounding boxes in favor of the more confident ones:

Figure 5: After non-maxima suppression (NMS) has been applied, we’re left with a single detection for each object in the image. TensorFlow, Keras, and OpenCV allow us to turn a CNN image classifier into an object detector.

On the left, we have multiple detections, while on the right, we have the output of non-maxima suppression, which collapses the multiple bounding boxes into a single detection.

Combining traditional computer vision with deep learning to build an object detector

Figure 6: The steps to turn a deep learning classifier into an object detector using Python and libraries such as TensorFlow, Keras, and OpenCV.

In order to take any Convolutional Neural Network trained for image classification and instead utilize it for object detection, we’re going to utilize the three key ingredients of traditional computer vision:

Image pyramids: Localize objects at different scales/sizes.
Sliding windows: Detect exactly where in the image a given object is.
Non-maxima suppression: Collapse weak, overlapping bounding boxes.

The general flow of our algorithm will be:

Step #1: Input an image
Step #2: Construct an image pyramid
Step #3: For each scale of the image pyramid, run a sliding window
Step #3a: For each stop of the sliding window, extract the ROI
Step #3b: Take the ROI and pass it through our CNN originally trained for image classification
Step #3c: Examine the probability of the top class label of the CNN, and if it meets a minimum confidence, record (1) the class label and (2) the location of the sliding window
Step #4: Apply class-wise non-maxima suppression to the bounding boxes
Step #5: Return results to the calling function

That may seem like a complicated process, but as you’ll see in the remainder of this post, we can implement the entire object detection procedure in < 200 lines of code!

Configuring your development environment

To configure your system for this tutorial, I first recommend following either of these tutorials:

How to install TensorFlow 2.0 on Ubuntu
How to install TensorFlow 2.0 on macOS

Either tutorial will help you configure your system with all the necessary software for this blog post in a convenient Python virtual environment. Please note that PyImageSearch does not recommend or support Windows for CV/DL projects.
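Before moving on to the project structure, here is a minimal, self-contained sketch of IoU-based non-maxima suppression. The tutorial itself relies on the non_max_suppression helper from imutils later on; this version only illustrates the idea behind Step #4 and is not the implementation we will use.

# Minimal IoU-based NMS sketch (illustrative only; the tutorial uses
# imutils.object_detection.non_max_suppression instead).
import numpy as np

def nms(boxes, scores, iou_thresh=0.3):
    # boxes: (N, 4) array of (startX, startY, endX, endY); scores: (N,)
    boxes = boxes.astype("float")
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = scores.argsort()[::-1]  # highest confidence first
    keep = []

    while len(order) > 0:
        i = order[0]
        keep.append(i)

        # intersection of the winning box with every remaining box
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        w = np.maximum(0, xx2 - xx1 + 1)
        h = np.maximum(0, yy2 - yy1 + 1)
        iou = (w * h) / (areas[i] + areas[order[1:]] - w * h)

        # keep only boxes that overlap the winner less than the threshold
        order = order[1:][iou < iou_thresh]

    return keep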
Project structure

Once you extract the .zip from the “Downloads” section of this blog post, your directory will be organized as follows:

.
├── images
│   ├── hummingbird.jpg
│   ├── lawn_mower.jpg
│   └── stingray.jpg
├── pyimagesearch
│   ├── __init__.py
│   └── detection_helpers.py
└── detect_with_classifier.py

2 directories, 6 files

Today’s pyimagesearch module contains a Python file — detection_helpers.py — consisting of two helper functions:

image_pyramid: Assists in generating copies of our image at different scales so that we can find objects of different sizes
sliding_window: Helps us find where in the image an object is by sliding our classification window from left-to-right (column-wise) and top-to-bottom (row-wise)

Using the helper functions, our detect_with_classifier.py Python driver script accomplishes object detection by means of a classifier (using a sliding window and image pyramid approach). The classifier we’re using is a pre-trained ResNet50 CNN trained on the ImageNet dataset. The ImageNet dataset consists of 1,000 classes of objects. Three images are provided in images/ for testing purposes. You should also test this script with images of your own — given that our classifier-based object detector can recognize 1,000 types of classes, most everyday objects and animals can be recognized. Have fun with it!

Implementing our image pyramid and sliding window utility functions

In order to turn our CNN image classifier into an object detector, we must first implement helper utilities to construct sliding windows and image pyramids. Let’s implement these helper functions now — open up the detection_helpers.py file in the pyimagesearch module, and insert the following code:

# import the necessary packages
import imutils

def sliding_window(image, step, ws):
    # slide a window across the image
    for y in range(0, image.shape[0] - ws[1], step):
        for x in range(0, image.shape[1] - ws[0], step):
            # yield the current window
            yield (x, y, image[y:y + ws[1], x:x + ws[0]])

We begin by importing my package of convenience functions, imutils. From there, we dive right in by defining our sliding_window generator function.
This function expects three parameters:

image: The input image that we are going to loop over and generate windows from. This input image may come from the output of our image pyramid.
step: Our step size, which indicates how many pixels we are going to “skip” in both the (x, y) directions. Normally, we would not want to loop over each and every pixel of the image (i.e., step=1), as this would be computationally prohibitive if we were applying an image classifier at each window. Instead, the step size is determined on a per-dataset basis and is tuned to give optimal performance based on your dataset of images. In practice, it’s common to use a step of 4 to 8 pixels. Remember, the smaller your step size is, the more windows you’ll need to examine.
ws: The window size defines the width and height (in pixels) of the window we are going to extract from our image. If you scroll back to Figure 3, the window size is equivalent to the dimensions of the green box that is sliding across the image.

The actual “sliding” of our window takes place on Lines 6-9 according to the following:

Line 6 is our loop over our rows via determining a range of y-values.
Line 7 is our loop over our columns (a range of x-values).
Line 9 ultimately yields the window of our image (i.e., ROI) according to the (x, y)-values, window size (ws), and step size.

The yield keyword is used in place of the return keyword because our sliding_window function is implemented as a Python generator. For more information on our sliding windows implementation, please refer to my previous Sliding Windows for Object Detection with Python and OpenCV article. Now that we’ve successfully defined our sliding window routine, let’s implement our image_pyramid generator used to construct a multi-scale representation of an input image:

def image_pyramid(image, scale=1.5, minSize=(224, 224)):
    # yield the original image
    yield image

    # keep looping over the image pyramid
    while True:
        # compute the dimensions of the next image in the pyramid
        w = int(image.shape[1] / scale)
        image = imutils.resize(image, width=w)

        # if the resized image does not meet the supplied minimum
        # size, then stop constructing the pyramid
        if image.shape[0] < minSize[1] or image.shape[1] < minSize[0]:
            break

        # yield the next image in the pyramid
        yield image

Our image_pyramid function accepts three parameters as well:

image: The input image for which we wish to generate multi-scale representations.
scale: Our scale factor controls how much the image is resized at each layer. Smaller scale values yield more layers in the pyramid, and larger scale values yield fewer layers.
minSize: Controls the minimum size of an output image (layer of our pyramid). This is important because we could effectively construct progressively smaller scaled representations of our input image infinitely. Without a minSize parameter, our while loop would continue forever (which is not what we want).
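For intuition, here is a tiny (hypothetical) calculation of the layer widths the generator above would produce for a 600-pixel-wide input with scale=1.5 and minSize=(224, 224), assuming the image is wide enough that only the width check matters:

# Hypothetical layer-width calculation for scale=1.5 and a 600px-wide input.
# Prints 600, 400, 266; the next width (177) falls below the 224 minimum,
# so the pyramid would stop there.
w = 600
while w >= 224:
    print(w)
    w = int(w / 1.5)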
Now that we know the parameters that must be inputted to the function, let’s dive into the internals of our image pyramid generator function. Referring to Figure 2, notice that the largest representation of our image is the input image itself. Line 13 of our generator simply yields the original, unaltered image the first time our generator is asked to produce a layer of our pyramid. Subsequent generated images are controlled by the infinite while True loop beginning on Line 16. Inside the loop, we first compute the dimensions of the next image in the pyramid according to our scale and the original image dimensions (Line 18). In this case, we simply divide the width of the input image by the scale to determine the new width (w). From there, we go ahead and resize the image down to that width while maintaining aspect ratio (Line 19). As you can see, we are using the aspect-aware resizing helper built into my imutils package. While we are effectively done (we’ve resized our image, and now we can yield it), we need to implement an exit condition so that our generator knows to stop. As we learned when we defined our parameters to the image_pyramid function, the exit condition is determined by the minSize parameter.
Therefore, the conditional on Lines 23 and 24 determines whether our resized image is too small (height or width) and exits the loop accordingly. Assuming our scaled output image passes our minSize threshold, Line 27 yields it to the caller. For more details, please refer to my Image Pyramids with Python and OpenCV article, which also includes an alternative scikit-image image pyramid implementation that may be useful to you.

Using Keras and TensorFlow to turn a pre-trained image classifier into an object detector

With our sliding_window and image_pyramid functions implemented, let’s now use them to take a deep neural network trained for image classification and turn it into an object detector. Open up a new file, name it detect_with_classifier.py, and let’s begin coding:

# import the necessary packages
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet import preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.applications import imagenet_utils
from imutils.object_detection import non_max_suppression
from pyimagesearch.detection_helpers import sliding_window
from pyimagesearch.detection_helpers import image_pyramid
import numpy as np
import argparse
import imutils
import time
import cv2

This script begins with a selection of imports including:

ResNet50: The popular ResNet Convolutional Neural Network (CNN) classifier by He et al. introduced in their 2015 paper, Deep Residual Learning for Image Recognition. We will load this CNN with pre-trained ImageNet weights.
non_max_suppression: An implementation of NMS in my imutils package.
sliding_window: Our sliding window generator function as described in the previous section.
image_pyramid: The image pyramid generator that we defined previously.
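As a quick aside, here is a hypothetical standalone snippet (not part of the driver script) showing how the two helper generators compose; the image filename is an assumption for illustration only:

# Hypothetical standalone usage of the two helper generators.
# Assumes an image file named example.jpg exists alongside the script.
import cv2
from pyimagesearch.detection_helpers import image_pyramid, sliding_window

image = cv2.imread("example.jpg")
count = 0

# walk every pyramid layer and every sliding window position
for layer in image_pyramid(image, scale=1.5, minSize=(224, 224)):
    for (x, y, roi) in sliding_window(layer, step=16, ws=(224, 224)):
        count += 1  # in the real script, each ROI would be classified

print("total windows generated: {}".format(count))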
Now that our imports are taken care of, let’s parse command line arguments:

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to the input image")
ap.add_argument("-s", "--size", type=str, default="(200, 150)",
    help="ROI size (in pixels)")
ap.add_argument("-c", "--min-conf", type=float, default=0.9,
    help="minimum probability to filter weak detections")
ap.add_argument("-v", "--visualize", type=int, default=-1,
    help="whether or not to show extra visualizations for debugging")
args = vars(ap.parse_args())

The following arguments must be supplied to this Python script at runtime from your terminal:

--image: The path to the input image for classification-based object detection.
--size: A tuple describing the size of the sliding window. This tuple must be surrounded by quotes for our argument parser to grab it directly from the command line.
--min-conf: The minimum probability threshold to filter weak detections.
--visualize: A switch to determine whether to show additional visualizations for debugging.

We now have a handful of constants to define for our object detection procedure:

# initialize variables used for the object detection procedure
WIDTH = 600
PYR_SCALE = 1.5
WIN_STEP = 16
ROI_SIZE = eval(args["size"])
INPUT_SIZE = (224, 224)

Our classifier-based object detection methodology constants include:

WIDTH: Given that the testing images in images/ (refer to the “Project Structure” section) are all slightly different in size, we set a constant width here for later resizing purposes. By ensuring our images have a consistent starting width, we know that the image will fit on our screen.
PYR_SCALE: Our image pyramid scale factor. This value controls how much the image is resized at each layer. Smaller scale values yield more layers in the pyramid, and larger scales yield fewer layers. The fewer layers you have, the faster the overall object detection system will operate, potentially at the expense of accuracy.
WIN_STEP: Our sliding window step size, which indicates how many pixels we are going to “skip” in both the (x, y) directions. Remember, the smaller your step size is, the more windows you’ll need to examine, which leads to a slower overall object detection execution time. In practice, I would recommend trying values of 4 and 8 to start with (depending on the dimensions of your input and your minSize).
ROI_SIZE: Controls the aspect ratio of the objects we want to detect; if a mistake is made setting the aspect ratio, it will be nearly impossible to detect objects. Additionally, this value is related to the image pyramid minSize value — giving our image pyramid generator a means of exiting. As you can see, this value comes directly from our --size command line argument.
INPUT_SIZE: The classification CNN dimensions. Note that the tuple defined here on Line 32 heavily depends on the CNN you are using (in our case, it is ResNet50).
If this notion doesn’t resonate with you, I suggest you read this tutorial and, more specifically, the section entitled “Can I make the input dimensions [of a CNN] anything I want?” Understanding what each of the above constants controls is crucial to your understanding of how to turn an image classifier into an object detector with Keras, TensorFlow, and OpenCV. Be sure to mentally distinguish each of these before moving on. Let’s load our ResNet classification CNN and input image:

# load our network weights from disk
print("[INFO] loading network...")
model = ResNet50(weights="imagenet", include_top=True)

# load the input image from disk, resize it such that it has the
# supplied width, and then grab its dimensions
orig = cv2.imread(args["image"])
orig = imutils.resize(orig, width=WIDTH)
(H, W) = orig.shape[:2]

Line 36 loads ResNet pre-trained on ImageNet. If you choose to use a different pre-trained classifier, you can substitute one here for your particular project. To learn how to train your own classifier, I suggest you read Deep Learning for Computer Vision with Python. We also load our input --image. Once it is loaded, we resize it (while maintaining aspect ratio according to our constant WIDTH) and grab the resulting image dimensions. From here, we’re ready to initialize our image pyramid generator object:

# initialize the image pyramid
pyramid = image_pyramid(orig, scale=PYR_SCALE, minSize=ROI_SIZE)

# initialize two lists, one to hold the ROIs generated from the image
# pyramid and sliding window, and another list used to store the
# (x, y)-coordinates of where the ROI was in the original image
rois = []
locs = []

# time how long it takes to loop over the image pyramid layers and
# sliding window locations
start = time.time()

On Line 45, we supply the necessary parameters to our image_pyramid generator function. Given that pyramid is a generator object at this point, we can loop over the values it produces.
Before we do just that, Lines 50 and 51 initialize two lists:

rois: Holds the regions of interest (ROIs) generated from pyramid + sliding window output
locs: Stores the (x, y)-coordinates of where the ROI was in the original image

And we also set a start timestamp so we can later determine how long our classification-based object detection method (given our parameters) took on the input image (Line 55). Let’s loop over each image our pyramid produces:

# loop over the image pyramid
for image in pyramid:
    # determine the scale factor between the *original* image
    # dimensions and the *current* layer of the pyramid
    scale = W / float(image.shape[1])

    # for each layer of the image pyramid, loop over the sliding
    # window locations
    for (x, y, roiOrig) in sliding_window(image, WIN_STEP, ROI_SIZE):
        # scale the (x, y)-coordinates of the ROI with respect to the
        # *original* image dimensions
        x = int(x * scale)
        y = int(y * scale)
        w = int(ROI_SIZE[0] * scale)
        h = int(ROI_SIZE[1] * scale)

        # take the ROI and preprocess it so we can later classify
        # the region using Keras/TensorFlow
        roi = cv2.resize(roiOrig, INPUT_SIZE)
        roi = img_to_array(roi)
        roi = preprocess_input(roi)

        # update our list of ROIs and associated coordinates
        rois.append(roi)
        locs.append((x, y, x + w, y + h))

Looping over the layers of our image pyramid begins on Line 58. Our first step in the loop is to compute the scale factor between the original image dimensions (W) and current layer dimensions (image.shape[1]) of our pyramid (Line 61). We need this value to later upscale our object bounding boxes. Now we’ll cascade into our sliding window loop from this particular layer in our image pyramid. Our sliding_window generator allows us to look side-to-side and up-and-down in our image. For each ROI that it generates, we’ll soon apply image classification. Line 65 defines our loop over our sliding windows. Inside, we:

Scale coordinates (Lines 68-71).
Grab the ROI and preprocess it (Lines 75-77).
Preprocessing includes resizing to the CNN’s required INPUT_SIZE, converting the image to array format, and applying Keras’ preprocess_input convenience function, which converts from RGB to BGR channel ordering and zero-centers the color channels according to the ImageNet dataset (the batch dimension is added later, when we convert the rois list to a NumPy array).

Update the list of rois and associated locs coordinates (Lines 80 and 81).

We also handle optional visualization:

        # check to see if we are visualizing each of the sliding
        # windows in the image pyramid
        if args["visualize"] > 0:
            # clone the original image and then draw a bounding box
            # surrounding the current region
            clone = orig.copy()
            cv2.rectangle(clone, (x, y), (x + w, y + h),
                (0, 255, 0), 2)

            # show the visualization and current ROI
            cv2.imshow("Visualization", clone)
            cv2.imshow("ROI", roiOrig)
            cv2.waitKey(0)

Here, we visualize both the original image with a green box indicating where we are “looking” and the resized ROI, which is ready for classification (Lines 85-95). As you can see, we’ll only visualize when the --visualize flag is set via the command line. Next, we’ll (1) check our benchmark on the pyramid + sliding window process, (2) classify all of our rois in batch, and (3) decode the predictions:

# show how long it took to loop over the image pyramid layers and
# sliding window locations
end = time.time()
print("[INFO] looping over pyramid/windows took {:.5f} seconds".format(
    end - start))

# convert the ROIs to a NumPy array
rois = np.array(rois, dtype="float32")

# classify each of the proposal ROIs using ResNet and then show how
# long the classifications took
print("[INFO] classifying ROIs...")
start = time.time()
preds = model.predict(rois)
end = time.time()
print("[INFO] classifying ROIs took {:.5f} seconds".format(
    end - start))

# decode the predictions and initialize a dictionary which maps class
# labels (keys) to any ROIs associated with that label (values)
preds = imagenet_utils.decode_predictions(preds, top=1)
labels = {}

First, we end our pyramid + sliding window timer and show how long the process took (Lines 99-101). Then, we take the ROIs and pass them (in batch) through our pre-trained image classifier (i.e., ResNet) via predict (Lines 104-118). As you can see, we print out a benchmark for the inference process here too. Finally, Line 117 decodes the predictions, grabbing only the top prediction for each ROI. We’ll need a means to map class labels (keys) to ROI locations associated with that label (values); the labels dictionary (Line 118) serves that purpose.
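For intuition, once the filtering loop below has run, the populated dictionary might look something like this (the class name, coordinates, and probabilities are hypothetical):

# Hypothetical contents of the labels dictionary after filtering:
labels = {
    "stingray": [
        ((112, 96, 412, 246), 0.9823),  # ((startX, startY, endX, endY), prob)
        ((128, 90, 428, 240), 0.9641),
    ]
}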
Let’s go ahead and populate our labels dictionary now:

# loop over the predictions
for (i, p) in enumerate(preds):
    # grab the prediction information for the current ROI
    (imagenetID, label, prob) = p[0]

    # filter out weak detections by ensuring the predicted probability
    # is greater than the minimum probability
    if prob >= args["min_conf"]:
        # grab the bounding box associated with the prediction and
        # convert the coordinates
        box = locs[i]

        # grab the list of predictions for the label and add the
        # bounding box and probability to the list
        L = labels.get(label, [])
        L.append((box, prob))
        labels[label] = L

Looping over predictions beginning on Line 121, we first grab the prediction information including the ImageNet ID, class label, and probability (Line 123). From there, we check to see if the minimum confidence has been met (Line 127). Assuming so, we update the labels dictionary (Lines 130-136) with the bounding box and prob score tuple (value) associated with each class label (key). As a recap, so far, we have:

Generated scaled images with our image pyramid
Generated ROIs using a sliding window approach for each layer (scaled image) of our image pyramid
Performed classification on each ROI and placed the results in our labels dictionary

We’re not quite done yet with turning our image classifier into an object detector with Keras, TensorFlow, and OpenCV. Now, we need to visualize the results. This is the time where you would implement logic to do something useful with the results (labels), whereas in our case, we’re simply going to annotate the objects. We will also have to handle our overlapping detections by means of non-maxima suppression (NMS). Let’s go ahead and loop over all keys in our labels dictionary:

# loop over the labels for each of detected objects in the image
for label in labels.keys():
    # clone the original image so that we can draw on it
    print("[INFO] showing results for '{}'".format(label))
    clone = orig.copy()

    # loop over all bounding boxes for the current label
    for (box, prob) in labels[label]:
        # draw the bounding box on the image
        (startX, startY, endX, endY) = box
        cv2.rectangle(clone, (startX, startY), (endX, endY),
            (0, 255, 0), 2)

    # show the results *before* applying non-maxima suppression, then
    # clone the image again so we can display the results *after*
    # applying non-maxima suppression
    cv2.imshow("Before", clone)
    clone = orig.copy()

Our loop over the labels for each of the detected objects begins on Line 139. We make a copy of the original input image so that we can annotate it (Line 142). We then annotate all bounding boxes for the current label (Lines 145-149).
So that we can visualize the before/after of applying NMS, Line 154 displays the “before” image, and then we proceed to make another copy (Line 155). Now, let’s apply NMS and display our “after” NMS visualization:

    # extract the bounding boxes and associated prediction
    # probabilities, then apply non-maxima suppression
    boxes = np.array([p[0] for p in labels[label]])
    proba = np.array([p[1] for p in labels[label]])
    boxes = non_max_suppression(boxes, proba)

    # loop over all bounding boxes that were kept after applying
    # non-maxima suppression
    for (startX, startY, endX, endY) in boxes:
        # draw the bounding box and label on the image
        cv2.rectangle(clone, (startX, startY), (endX, endY),
            (0, 255, 0), 2)
        y = startY - 10 if startY - 10 > 10 else startY + 10
        cv2.putText(clone, label, (startX, y),
            cv2.FONT_HERSHEY_SIMPLEX, 0.45, (0, 255, 0), 2)

    # show the output after applying non-maxima suppression
    cv2.imshow("After", clone)
    cv2.waitKey(0)

To apply NMS, we first extract the bounding boxes and associated prediction probabilities (proba) via Lines 159 and 160. We then pass those results into my imutils implementation of NMS (Line 161). For more details on non-maxima suppression, be sure to refer to my blog post. After NMS has been applied, Lines 165-171 annotate bounding box rectangles and labels on the “after” image. Lines 174 and 175 display the results until a key is pressed, at which point all GUI windows close, and the script exits. Great job! In the next section, we’ll analyze the results of our method for using an image classifier for object detection purposes.

Image classifier to object detector results using Keras and TensorFlow

At this point, we are ready to see the results of our hard work. Make sure you use the “Downloads” section of this tutorial to download the source code and example images from this blog post.
From there, open up a terminal, and execute the following command:

$ python detect_with_classifier.py --image images/stingray.jpg --size "(300, 150)"
[INFO] loading network...
[INFO] looping over pyramid/windows took 0.19142 seconds
[INFO] classifying ROIs...
[INFO] classifying ROIs took 9.67027 seconds
[INFO] showing results for 'stingray'

Figure 7: Top: Classifier-based object detection. Bottom: Classifier-based object detection followed by non-maxima suppression. In this tutorial, we used TensorFlow, Keras, and OpenCV to turn a CNN image classifier into an object detector.

Here, you can see that I have inputted an example image containing a “stingray,” which CNNs trained on ImageNet will be able to recognize (since ImageNet contains a “stingray” class). Figure 7 (top) shows the original output from our object detection procedure. Notice how there are multiple, overlapping bounding boxes surrounding the stingray. Applying non-maxima suppression (Figure 7, bottom) collapses the bounding boxes into a single detection. Let’s try another image, this one of a hummingbird (again, which networks trained on ImageNet will be able to recognize):

$ python detect_with_classifier.py --image images/hummingbird.jpg --size "(250, 250)"
[INFO] loading network...
[INFO] looping over pyramid/windows took 0.07845 seconds
[INFO] classifying ROIs...
[INFO] classifying ROIs took 4.07912 seconds
[INFO] showing results for 'hummingbird'

Figure 8: Turning a deep learning convolutional neural network image classifier into an object detector with Python, Keras, and OpenCV.

Figure 8 (top) shows the original output of our detection procedure, while the bottom shows the output after applying non-maxima suppression. Again, our “image classifier turned object detector” procedure performed well here.
But let’s now try an example image where our object detection algorithm doesn’t perform optimally:

$ python detect_with_classifier.py --image images/lawn_mower.jpg --size "(200, 200)"
[INFO] loading network...
[INFO] looping over pyramid/windows took 0.13851 seconds
[INFO] classifying ROIs...
[INFO] classifying ROIs took 7.00178 seconds
[INFO] showing results for 'lawn_mower'
[INFO] showing results for 'half_track'

Figure 9: Turning a deep learning convolutional neural network image classifier into an object detector with Python, Keras, and OpenCV. The bottom shows the result after NMS has been applied.

At first glance, it appears this method worked perfectly — we were able to localize the “lawn mower” in the input image. But there was actually a second detection for a “half-track” (a military vehicle that has regular wheels on the front and tank-like tracks on the back):

Figure 10: What do we do when we have a false-positive detection using our CNN image classifier-based object detector?

Clearly, there is not a half-track in this image, so how do we improve the results of our object detection procedure? The answer is to increase our --min-conf to remove false-positive predictions:

$ python detect_with_classifier.py --image images/lawn_mower.jpg --size "(200, 200)" --min-conf 0.95
[INFO] loading network...
[INFO] looping over pyramid/windows took 0.13618 seconds
[INFO] classifying ROIs...
[INFO] classifying ROIs took 6.99953 seconds
[INFO] showing results for 'lawn_mower'

Figure 11: By increasing the confidence threshold in our classifier-based object detector (made with TensorFlow, Keras, and OpenCV), we’ve eliminated the false-positive “half-track” detection.

By increasing the minimum confidence to 95%, we have filtered out the less confident “half-track” prediction, leaving only the (correct) “lawn mower” object detection. While our procedure for turning a pre-trained image classifier into an object detector isn’t perfect, it can still be used for certain situations, specifically when images are captured in controlled environments. In the rest of this series, we’ll be learning how to improve upon our object detection results and build a more robust deep learning-based object detector.

Problems, limitations, and next steps

If you carefully inspect the results of our object detection procedure, you’ll notice a few key takeaways:

The actual object detector is slow.
Constructing all the image pyramid and sliding window locations takes ~1/10th of a second, and that doesn’t even include the time it takes for the network to make predictions on all the ROIs (4-9 seconds on a 3 GHz CPU)!

Bounding box locations aren’t necessarily accurate. The largest issue with this object detection algorithm is that the accuracy of our detections is dependent on our selection of image pyramid scale, sliding window step, and ROI size. If any one of these values is off, then our detector is going to perform suboptimally.

The network is not end-to-end trainable. The reason deep learning-based object detectors such as Faster R-CNN, SSDs, YOLO, etc. perform so well is that they are end-to-end trainable, meaning that any error in bounding box predictions can be made more accurate through backpropagation and updating the weights of the network — since we’re using a pre-trained image classifier with fixed weights, we cannot backpropagate error terms through the network.

Throughout this four-part series, we’ll be examining how to resolve these issues and build an object detector similar to the R-CNN family of networks.
Summary

In this tutorial, you learned how to take any pre-trained deep learning image classifier and turn it into an object detector using Keras, TensorFlow, and OpenCV. To accomplish this task, we combined deep learning with traditional computer vision algorithms:

In order to detect objects at different scales (i.e., sizes), we utilized image pyramids, which take our input image and repeatedly downsample it.
To detect objects at different locations, we used sliding windows, which slide a fixed-size window from left-to-right and top-to-bottom across the input image — at each stop of the window, we extract the ROI and pass it through our image classifier.
It’s natural for object detection algorithms to produce multiple, overlapping bounding boxes for objects in an image; in order to “collapse” these overlapping bounding boxes into a single detection, we applied non-maxima suppression.

The end results of our hacked-together object detection routine were fairly reasonable, but there were two primary problems:

The network is not end-to-end trainable. We’re not actually “learning” to detect objects; we’re instead just taking ROIs and classifying them using a CNN trained for image classification.
The object detection results are incredibly slow.
On my Intel Xeon W 3 GHz processor, applying object detection to a single image took ~4-9.5 seconds, depending on the input image resolution. Such an object detector could not be applied in real time. In order to fix both of these problems, next week we’ll start exploring the algorithms necessary to build an object detector from the R-CNN, Fast R-CNN, and Faster R-CNN family. This will be a great series of tutorials, so you won’t want to miss them! To download the source code to this post (and be notified when the next tutorial in this series publishes), simply enter your email address in the form below!
https://pyimagesearch.com/2020/06/29/opencv-selective-search-for-object-detection/
Today, you will learn how to use OpenCV Selective Search for object detection. Today’s tutorial is Part 2 in our 4-part series on deep learning and object detection:

Part 1: Turning any deep learning image classifier into an object detector with Keras and TensorFlow
Part 2: OpenCV Selective Search for Object Detection (today’s tutorial)
Part 3: Region proposal for object detection with OpenCV, Keras, and TensorFlow (next week’s tutorial)
Part 4: R-CNN object detection with Keras and TensorFlow (publishing in two weeks)

Selective Search, first introduced by Uijlings et al. in their 2012 paper, Selective Search for Object Recognition, is a critical piece of computer vision, deep learning, and object detection research. In their work, Uijlings et al. demonstrated:

How images can be over-segmented to automatically identify locations in an image that could contain an object
That Selective Search is far more computationally efficient than exhaustively computing image pyramids and sliding windows (and without loss of accuracy)
And that Selective Search can be swapped in for any object detection framework that utilizes image pyramids and sliding windows

Automatic region proposal algorithms such as Selective Search paved the way for Girshick et al.’s seminal R-CNN paper, which gave rise to highly accurate deep learning-based object detectors. Furthermore, research with Selective Search and object detection has allowed researchers to create state-of-the-art Region Proposal Network (RPN) components that are even more accurate and more efficient than Selective Search (see Girshick et al.’s follow-up 2015 paper on Faster R-CNNs). But before we can get into RPNs, we first need to understand how Selective Search works, including how we can leverage Selective Search for object detection with OpenCV. To learn how to use OpenCV’s Selective Search for object detection, just keep reading.
OpenCV Selective Search for Object Detection

In the first part of this tutorial, we’ll discuss the concept of region proposals via Selective Search and how they can efficiently replace the traditional method of using image pyramids and sliding windows to detect objects in an image. From there, we’ll review the Selective Search algorithm in detail, including how it over-segments an image via:

Color similarity
Texture similarity
Size similarity
Shape similarity
A final meta-similarity, which is a linear combination of the above similarity measures

I’ll then show you how to implement Selective Search using OpenCV.

Region proposals versus sliding windows and image pyramids

In last week’s tutorial, you learned how to turn any image classifier into an object detector by applying image pyramids and sliding windows. As a refresher, image pyramids create a multi-scale representation of an input image, allowing us to detect objects at multiple scales/sizes:

Figure 1: Selective Search is a more advanced form of object detection compared to sliding windows and image pyramids, which search every ROI of an image by means of an image pyramid and sliding window.

Sliding windows operate on each layer of the image pyramid, sliding from left-to-right and top-to-bottom, thereby allowing us to localize where in an image a given object is. There are a number of problems with the image pyramid and sliding window approach, but the two primary ones are:

It’s painfully slow. Even with an optimized-for-loops approach and multiprocessing, looping over each image pyramid layer and inspecting every location in the image via sliding windows is computationally expensive.
They are sensitive to their parameter choices. Different values of your image pyramid scale and sliding window size can lead to dramatically different results in terms of positive detection rate, false-positive detections, and missing detections altogether.

Given these reasons, computer vision researchers have looked into creating automatic region proposal generators that replace sliding windows and image pyramids.
The general idea is that a region proposal algorithm should inspect the image and attempt to find regions of an image that likely contain an object (think of region proposal as a cousin to saliency detection). The region proposal algorithm should:

Be faster and more efficient than sliding windows and image pyramids
Accurately detect the regions of an image that could contain an object
Pass these “candidate proposals” to a downstream classifier to actually label the regions, thus completing the object detection framework

The question is, what types of region proposal algorithms can we use for object detection?

What is Selective Search and how can Selective Search be used for object detection?

The Selective Search algorithm implemented in OpenCV was first introduced by Uijlings et al. in their 2012 paper, Selective Search for Object Recognition. Selective Search works by over-segmenting an image using a superpixel algorithm (instead of SLIC, Uijlings et al. use the Felzenszwalb method from Felzenszwalb and Huttenlocher’s 2004 paper, Efficient graph-based image segmentation). An example of running the Felzenszwalb superpixel algorithm can be seen below:

Figure 2: OpenCV’s Selective Search uses the Felzenszwalb superpixel method to find regions of an image that could contain an object. Selective Search is not end-to-end object detection. (image source)

From there, Selective Search seeks to merge together the superpixels to find regions of an image that could contain an object.
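If you want to see what this kind of over-segmentation looks like on your own images, here is an illustrative sketch using scikit-image’s Felzenszwalb implementation (an assumption on my part — the tutorial itself uses OpenCV’s built-in Selective Search, and the parameter values below are just reasonable starting points):

# Illustrative only: visualize Felzenszwalb over-segmentation with
# scikit-image (assumes scikit-image and matplotlib are installed).
from skimage.segmentation import felzenszwalb, mark_boundaries
from skimage import io
import matplotlib.pyplot as plt

image = io.imread("dog.jpg")
segments = felzenszwalb(image, scale=100, sigma=0.5, min_size=50)
print("number of superpixels: {}".format(segments.max() + 1))

# overlay the superpixel boundaries on the original image
plt.imshow(mark_boundaries(image, segments))
plt.axis("off")
plt.show()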
Selective Search merges superpixels in a hierarchical fashion based on five key similarity measures:

Color similarity: Computing a 25-bin histogram for each channel of an image, concatenating them together, and obtaining a final descriptor that is 25×3=75-d. Color similarity of any two regions is measured by the histogram intersection distance.
Texture similarity: For texture, Selective Search extracts Gaussian derivatives at 8 orientations per channel (assuming a 3-channel image). These orientations are used to compute a 10-bin histogram per channel, generating a final texture descriptor that is 8×10×3=240-d. To compute texture similarity between any two regions, histogram intersection is once again used.
Size similarity: The size similarity metric that Selective Search uses prefers that smaller regions be grouped earlier rather than later. Anyone who has used Hierarchical Agglomerative Clustering (HAC) algorithms before knows that HACs are prone to clusters reaching a critical mass and then combining everything that they touch. By enforcing smaller regions to merge earlier, we can help prevent a large number of clusters from swallowing up all smaller regions.
Shape similarity/compatibility: The idea behind shape similarity in Selective Search is that regions should be compatible with each other. Two regions are considered “compatible” if they “fit” into each other (thereby filling gaps in our region proposal generation). Furthermore, shapes that do not touch should not be merged.
A final meta-similarity measure: A final meta-similarity acts as a linear combination of the color similarity, texture similarity, size similarity, and shape similarity/compatibility.
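To make the color-similarity term concrete, here is a simplified sketch of a 75-d color descriptor compared via histogram intersection. It mirrors the idea described above but is not OpenCV’s internal Selective Search code; the bin count and normalization are the only assumptions it makes:

# Simplified sketch of the color-similarity term (illustrative only).
import numpy as np

def color_descriptor(region, bins=25):
    # region: HxWx3 uint8 array; build and L1-normalize a 75-d descriptor
    hist = [np.histogram(region[:, :, c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    hist = np.concatenate(hist).astype("float")
    return hist / (hist.sum() + 1e-7)

def color_similarity(region_a, region_b):
    # histogram intersection: sum of element-wise minimums
    return np.minimum(color_descriptor(region_a),
                      color_descriptor(region_b)).sum()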
The results of Selective Search applying these hierarchical similarity measures can be seen in the following figure:

Figure 3: OpenCV’s Selective Search applies hierarchical similarity measures to join regions and eventually form the final set of proposals for where objects could be present. (image source)

On the bottom layer of the pyramid, we can see the original over-segmentation/superpixel generation from the Felzenszwalb method. In the middle layer, we can see regions being joined together, eventually forming the final set of proposals (top). If you’re interested in learning more about the underlying theory of Selective Search, I would suggest referring to the following resources:

Efficient Graph-Based Image Segmentation (Felzenszwalb and Huttenlocher, 2004)
Selective Search for Object Recognition (Uijlings et al., 2012)
Selective Search for Object Detection (C++/Python) (Chandel/Mallick, 2017)

Selective Search generates regions, not class labels

A common misconception I see with Selective Search is that readers mistakenly think that Selective Search replaces entire object detection frameworks such as HOG + Linear SVM, R-CNN, etc. In fact, a couple of weeks ago, PyImageSearch reader Hayden emailed in with that exact question: Hi Adrian, I am using Selective Search to detect objects with OpenCV. However, Selective Search is just returning bounding boxes — I can’t seem to figure out how to get labels associated with these bounding boxes. So, here’s the deal:

Selective Search does generate regions of an image that could contain an object. However, Selective Search does not have any knowledge of what is in that region (think of it as a cousin to saliency detection).
Selective Search is meant to replace the computationally expensive, highly inefficient method of exhaustively using image pyramids and sliding windows to examine locations of an image for a potential object. By using Selective Search, we can more efficiently examine regions of an image that likely contain an object and then pass those regions on to an SVM, CNN, etc.
for final classification. If you are using Selective Search, just keep in mind that the Selective Search algorithm will not give you class label predictions — it is assumed that your downstream classifier will do that for you (the topic of next week’s blog post). But in the meantime, let’s learn how we can use OpenCV Selective Search in our own projects. Project structure Be sure to grab the .zip for this tutorial from the “Downloads” section. Once you’ve extracted the files, you may use the tree command to see what’s inside: $ tree . ├── dog.jpg └── selective_search.py 0 directories, 2 files Our project is quite simple, consisting of a Python script (selective_search.py) and a testing image (dog.jpg). In the next section, we’ll learn how to implement our Selective Search script with Python and OpenCV. Implementing Selective Search with OpenCV and Python We are now ready to implement Selective Search with OpenCV! Open up a new file, name it selective_search.py, and insert the following code: # import the necessary packages import argparse import random import time import cv2 # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-i", "--image", required=True, help="path to the input image") ap.add_argument("-m", "--method", type=str, default="fast", choices=["fast", "quality"], help="selective search method") args = vars(ap.parse_args()) We begin our dive into Selective Search with a few imports, the main one being OpenCV (cv2).
The other imports are built-in to Python. Our script handles two command line arguments: --image: The path to your input image (we’ll be testing with dog.jpg today). --method: The Selective Search algorithm to use. You have two choices — either "fast" or "quality". In most cases, the fast method will be sufficient, so it is set as the default method. We’re now ready to load our input image and initialize our Selective Search algorithm: # load the input image image = cv2.imread(args["image"]) # initialize OpenCV's selective search implementation and set the # input image ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation() ss.setBaseImage(image) # check to see if we are using the *fast* but *less accurate* version # of selective search if args["method"] == "fast": print("[INFO] using *fast* selective search") ss.switchToSelectiveSearchFast() # otherwise we are using the *slower* but *more accurate* version else: print("[INFO] using *quality* selective search") ss.switchToSelectiveSearchQuality() Line 17 loads our --image from disk. From there, we initialize Selective Search and set our input image (Lines 21 and 22). Initialization of Selective Search requires another step — choosing and setting the internal mode of operation. Lines 26-33 use the command line argument --method value to determine whether we should use either: The "fast" method: switchToSelectiveSearchFast The "quality" method: switchToSelectiveSearchQuality Generally, the faster method will be suitable; however, depending on your application, you might want to sacrifice speed to achieve better quality results (more on that later). Let’s go ahead and perform Selective Search with our image: # run selective search on the input image start = time.time() rects = ss.process() end = time.time() # show how long selective search took to run along with the total # number of returned region proposals print("[INFO] selective search took {:.4f} seconds".format(end - start)) print("[INFO] {} total region proposals".format(len(rects))) To run Selective Search, we simply call the process method on our ss object (Line 37).
We’ve set timestamps around this call, so we can get a feel for how fast the algorithm is; Line 42 reports the Selective Search benchmark to our terminal. Subsequently, Line 43 tells us the number of region proposals the Selective Search operation found. Now, what fun would finding our region proposals be if we weren’t going to visualize the result? Zero fun. To wrap up, let’s draw the output on our image: # loop over the region proposals in chunks (so we can better # visualize them) for i in range(0, len(rects), 100): # clone the original image so we can draw on it output = image.copy() # loop over the current subset of region proposals for (x, y, w, h) in rects[i:i + 100]: # draw the region proposal bounding box on the image color = [random.randint(0, 255) for j in range(0, 3)] cv2.rectangle(output, (x, y), (x + w, y + h), color, 2) # show the output image cv2.imshow("Output", output) key = cv2.waitKey(0) & 0xFF # if the `q` key was pressed, break from the loop if key == ord("q"): break To annotate our output, we simply: Loop over region proposals in chunks of 100 (Selective Search will generate a few hundred to a few thousand proposals; we “chunk” them so we can better visualize them) via the nested for loops established on Line 47 and Line 52 Extract the bounding box coordinates surrounding each of our region proposals generated by Selective Search, and draw a colored rectangle for each (Lines 52-55) Show the result on our screen (Line 59) Allow the user to cycle through results (by pressing any key) until either all results are exhausted or the q (quit) key is pressed In the next section, we’ll analyze results of both methods (fast and quality). OpenCV Selective Search results We are now ready to apply Selective Search with OpenCV to our own images. Start by using the “Downloads” section of this blog post to download the source code and example images. From there, open up a terminal, and execute the following command: $ python selective_search.py --image dog.jpg [INFO] using *fast* selective search [INFO] selective search took 1.0828 seconds [INFO] 1219 total region proposals Figure 4: The results of OpenCV’s “fast mode” of Selective Search, a component of object detection. Here, you can see that OpenCV’s Selective Search “fast mode” took ~1 second to run and generated 1,219 bounding boxes — the visualization in Figure 4 shows us looping over each of the regions generated by Selective Search and visualizing them to our screen. If you’re confused by this visualization, consider the end goal of Selective Search: to replace traditional computer vision object detection techniques such as sliding windows and image pyramids with a more efficient region proposal generation method.
Thus, Selective Search will not tell you what is in the ROI, but it tells you that the ROI is “interesting enough” to be passed on to a downstream classifier (ex., SVM, CNN, etc.) for final classification. Let’s apply Selective Search to the same image, but this time, use the --method quality mode: $ python selective_search.py --image dog.jpg --method quality [INFO] using *quality* selective search [INFO] selective search took 3.7614 seconds [INFO] 4712 total region proposals Figure 5: OpenCV’s Selective Search “quality mode” sacrifices speed to produce more accurate region proposal results. The “quality” Selective Search method generated 286% more region proposals but also took 247% longer to run. Whether or not you should use the “fast” or “quality” mode is dependent on your application. In most cases, the “fast” Selective Search is sufficient, but you may choose to use the “quality” mode: When performing inference and wanting to ensure you generate more quality regions to your downstream classifier (of course, this means that real-time detection is not a concern) When using Selective Search to generate training data, thereby ensuring you generate more positive and negative regions for your classifier to learn from (a short timing sketch for benchmarking both modes on your own images appears just before the summary below). Where can I learn more about OpenCV’s Selective Search for object detection? In next week’s tutorial, you’ll learn how to: Use Selective Search to generate object detection proposal regions Take a pre-trained CNN and classify each of the regions (discarding any low confidence/background regions) Apply non-maxima suppression to return our final object detections And in two weeks, we’ll use Selective Search to generate training data and then fine-tune a CNN to perform object detection via region proposal. This has been a great series of tutorials so far, and you don’t want to miss the next two!
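As referenced above, if you would like to quantify the fast-versus-quality trade-off on your own images, the following minimal timing sketch does exactly that. It assumes opencv-contrib-python is installed (for the ximgproc module) and that an image named dog.jpg sits in your working directory; adjust the path as needed, as this is an illustration rather than part of the tutorial's official code.

# time both Selective Search modes on the same image
import time
import cv2

image = cv2.imread("dog.jpg")
for mode in ("fast", "quality"):
    ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
    ss.setBaseImage(image)
    if mode == "fast":
        ss.switchToSelectiveSearchFast()
    else:
        ss.switchToSelectiveSearchQuality()
    start = time.time()
    rects = ss.process()
    end = time.time()
    print("[INFO] {} mode: {:.4f}s, {} proposals".format(mode, end - start, len(rects)))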
Summary In this tutorial, you learned how to perform Selective Search to generate object detection proposal regions with OpenCV. Selective Search works by over-segmenting an image and then combining regions based on five key components: Color similarity Texture similarity Size similarity Shape similarity And a final similarity measure, which is a linear combination of the above four similarity measures It’s important to note that Selective Search itself does not perform object detection. Instead, Selective Search returns proposal regions that could contain an object. The idea here is that we replace our computationally expensive, highly inefficient sliding windows and image pyramids with a less expensive, more efficient Selective Search. Next week, I’ll show you how to take the proposal regions generated by Selective Search and then run an image classifier on top of them, allowing you to create an ad hoc deep learning-based object detector!
Stay tuned for next week’s tutorial. To download the source code to this post (and be notified when the next tutorial in this series publishes), simply enter your email address in the form below!
https://pyimagesearch.com/2020/07/06/region-proposal-object-detection-with-opencv-keras-and-tensorflow/
In this tutorial, you will learn how to utilize region proposals for object detection using OpenCV, Keras, and TensorFlow. Today’s tutorial is part 3 in our 4-part series on deep learning and object detection: Part 1: Turning any deep learning image classifier into an object detector with Keras and TensorFlow Part 2: OpenCV Selective Search for Object Detection Part 3: Region proposal for object detection with OpenCV, Keras, and TensorFlow (today’s tutorial) Part 4: R-CNN object detection with Keras and TensorFlow In last week’s tutorial, we learned how to utilize Selective Search to replace the traditional computer vision approach of using image pyramids and sliding windows for object detection. But the question still remains: How do we take the region proposals (i.e., regions of an image that could contain an object of interest) and then actually classify them to obtain our final object detections? We’ll be covering that exact question in this tutorial. To learn how to perform object detection with region proposals using OpenCV, Keras, and TensorFlow, just keep reading. Region proposal object detection with OpenCV, Keras, and TensorFlow In the first part of this tutorial, we’ll discuss the concept of region proposals and how they can be used in deep learning-based object detection pipelines. We’ll then implement region proposal object detection using OpenCV, Keras, and TensorFlow. We’ll wrap up this tutorial by reviewing our region proposal object detection results. What are region proposals, and how can they be used for object detection?
Figure 1: OpenCV’s Selective Search applies hierarchical similarity measures to join regions and eventually form the final set of region proposals for where objects could be present. (image source) We discussed the concept of region proposals and the Selective Search algorithm in last week’s tutorial on OpenCV Selective Search for Object Detection — I suggest you give that tutorial a read before you continue here today, but the gist is that traditional computer vision object detection algorithms relied on image pyramids and sliding windows to locate objects in images at varying scales and locations: There are a few problems with the image pyramid and sliding window method, but the primary issues are that: Sliding windows/image pyramids are painfully slow They are sensitive to hyperparameter choices (namely pyramid scale size, ROI size, and window step size) They are computationally inefficient Region proposal algorithms seek to replace the traditional image pyramid and sliding window approach. These algorithms: Accept an input image Over-segment it by applying a superpixel clustering algorithm Merge segments of the superpixels based on five components (color similarity, texture similarity, size similarity, shape similarity/compatibility, and a final meta-similarity that linearly combines the aforementioned scores) The end results are proposals that indicate where in the image there could be an object: Figure 2: In this tutorial, we will learn how to use Selective Search region proposals to perform object detection with OpenCV, Keras, and TensorFlow. Notice how I’ve italicized “could” in the sentence above the image — keep in mind that region proposal algorithms have no idea if a given region does in fact contain an object. Instead, region proposal methods simply tell us: Hey, this looks like an interesting region of the input image. Let’s apply our more computationally expensive classifier to determine what’s actually in this region. Region proposal algorithms tend to be far more efficient than the traditional object detection techniques of image pyramids and sliding windows because: Fewer individual ROIs are examined It is faster than exhaustively examining every scale/location of the input image The amount of accuracy lost is minimal, if any In the rest of this tutorial, you’ll learn how to implement region proposal object detection. Configuring your development environment To configure your system for this tutorial, I recommend following either of these tutorials: How to install TensorFlow 2.0 on Ubuntu How to install TensorFlow 2.0 on macOS Either tutorial will help you configure your system with all the necessary software for this blog post in a convenient Python virtual environment. Please note that PyImageSearch does not recommend or support Windows for CV/DL projects. Project structure Be sure to grab today’s files from the “Downloads” section so you can follow along with today’s tutorial: $ tree .
├── beagle.png └── region_proposal_detection.py 0 directories, 2 files As you can see, our project layout is very straightforward today, consisting of a single Python script, aptly named region_proposal_detection.py for today’s region proposal object detection example. I’ve also included a picture of Jemma, my family’s beagle. We’ll use this photo for testing our OpenCV, Keras, and TensorFlow region proposal object detection system. Implementing region proposal object detection with OpenCV, Keras, and TensorFlow Let’s get started implementing our region proposal object detector. Open a new file, name it region_proposal_detection.py, and insert the following code: # import the necessary packages from tensorflow.keras.applications import ResNet50 from tensorflow.keras.applications.resnet50 import preprocess_input from tensorflow.keras.applications import imagenet_utils from tensorflow.keras.preprocessing.image import img_to_array from imutils.object_detection import non_max_suppression import numpy as np import argparse import cv2 We begin our script with a handful of imports. In particular, we’ll be using the pre-trained ResNet50 classifier, my imutils implementation of non_max_suppression (NMS), and OpenCV. Be sure to follow the links in the “Configuring your development environment” section to ensure that all of the required packages are installed in a Python virtual environment. Last week, we learned about Selective Search to find region proposals where an object might exist. We’ll now take last week’s code snippet and wrap it in a convenience function named selective_search: def selective_search(image, method="fast"): # initialize OpenCV's selective search implementation and set the # input image ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation() ss.setBaseImage(image) # check to see if we are using the *fast* but *less accurate* version # of selective search if method == "fast": ss.switchToSelectiveSearchFast() # otherwise we are using the *slower* but *more accurate* version else: ss.switchToSelectiveSearchQuality() # run selective search on the input image rects = ss.process() # return the region proposal bounding boxes return rects Our selective_search function accepts an input image and algorithmic method (either "fast" or "quality"). From there, we initialize Selective Search with our input image (Lines 14 and 15).
We then explicitly set our mode using the value contained in method (Lines 19-24), which should either be "fast" or "quality". Generally, the faster method will be suitable; however, depending on your application, you might want to sacrifice speed to achieve better quality results. Finally, we execute Selective Search and return the region proposals (rects) via Lines 27-30. When we call the selective_search function and pass an image to it, we’ll get a list of bounding boxes that represent where an object could exist. Later, we will have code which accepts the bounding boxes, extracts the corresponding ROI from the input image, passes the ROI into a classifier, and applies NMS. The result of these steps will be a deep learning object detector based on independent Selective Search and classification. We are not building an end-to-end deep learning object detector with Selective Search embedded. Keep this distinction in mind as you follow the rest of this tutorial. Let’s define the inputs to our Python script: # construct the argument parser and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-i", "--image", required=True, help="path to the input image") ap.add_argument("-m", "--method", type=str, default="fast", choices=["fast", "quality"], help="selective search method") ap.add_argument("-c", "--conf", type=float, default=0.9, help="minimum probability to consider a classification/detection") ap.add_argument("-f", "--filter", type=str, default=None, help="comma separated list of ImageNet labels to filter on") args = vars(ap.parse_args()) Our script accepts four command line arguments: --image: The path to our input photo we’d like to perform object detection on --method: The Selective Search mode — either "fast" or "quality" --conf: Minimum probability threshold to consider a classification/detection --filter: ImageNet classes separated by commas that we wish to consider Now that our command line args are defined, let’s hone in on the --filter argument: # grab the label filters command line argument labelFilters = args["filter"] # if the label filter is not empty, break it into a list if labelFilters is not None: labelFilters = labelFilters.lower().split(",") Line 46 sets our class labelFilters directly from the --filter command line argument.
From there, Lines 49 and 50 overwrite labelFilters with the comma-delimited class names split into a single Python list. Next, we’ll load our pre-trained ResNet image classifier: # load ResNet from disk (with weights pre-trained on ImageNet) print("[INFO] loading ResNet...") model = ResNet50(weights="imagenet") # load the input image from disk and grab its dimensions image = cv2.imread(args["image"]) (H, W) = image.shape[:2] Here, we initialize ResNet pre-trained on ImageNet (Line 54). We also load our input --image and extract its dimensions (Lines 57 and 58). At this point, we’re ready to apply Selective Search to our input photo: # run selective search on the input image print("[INFO] performing selective search with '{}' method...".format( args["method"])) rects = selective_search(image, method=args["method"]) print("[INFO] {} regions found by selective search".format(len(rects))) # initialize the list of region proposals that we'll be classifying # along with their associated bounding boxes proposals = [] boxes = [] Taking advantage of our selective_search convenience function, Line 63 executes Selective Search on our --image using the desired --method. The result is our list of object region proposals stored in rects. In the next code block, we’re going to populate two lists using our region proposals: proposals: Initialized on Line 68, this list will hold sufficiently large pre-processed ROIs from our input --image, which we will feed into our ResNet classifier. boxes: Initialized on Line 69, this list of bounding box coordinates corresponds to our proposals and is similar to rects with an important distinction: Only sufficiently large regions are included. We need the proposals ROIs to send through our image classifier, and we need the boxes coordinates so that we know where in the input --image each ROI actually came from.
Now that we have an understanding of what we need to do, let’s get to it: # loop over the region proposal bounding box coordinates generated by # running selective search for (x, y, w, h) in rects: # if the width or height of the region is less than 10% of the # image width or height, ignore it (i.e., filter out small # objects that are likely false-positives) if w / float(W) < 0.1 or h / float(H) < 0.1: continue # extract the region from the input image, convert it from BGR to # RGB channel ordering, and then resize it to 224x224 (the input # dimensions required by our pre-trained CNN) roi = image[y:y + h, x:x + w] roi = cv2.cvtColor(roi, cv2.COLOR_BGR2RGB) roi = cv2.resize(roi, (224, 224)) # further preprocess by the ROI roi = img_to_array(roi) roi = preprocess_input(roi) # update our proposals and bounding boxes lists proposals.append(roi) boxes.append((x, y, w, h)) Looping over proposals from Selective Search (rects) beginning on Line 73, we proceed to: Filter out small boxes that likely don’t contain an object (i.e., noise) via Lines 77 and 78 Extract our region proposal roi (Line 83) and preprocess it (Lines 84-89) Update our proposal and boxes lists (Lines 92 and 93) We’re now ready to classify each pre-processed region proposal ROI: # convert the proposals list into NumPy array and show its dimensions proposals = np.array(proposals) print("[INFO] proposal shape: {}".format(proposals.shape)) # classify each of the proposal ROIs using ResNet and then decode the # predictions print("[INFO] classifying proposals...") preds = model.predict(proposals) preds = imagenet_utils.decode_predictions(preds, top=1) # initialize a dictionary which maps class labels (keys) to any # bounding box associated with that label (values) labels = {} We have one final pre-processing step to handle before inference — converting the proposals list into a NumPy array. Line 96 handles this step.
We make predictions on our proposals by performing deep learning classification inference (Line 102 and 103). Given each classification, we’ll filter the results based on our labelFilters and --conf (confidence threshold). The labels dictionary (initialized on Line 107) will hold each of our class labels (keys) and lists of bounding boxes + probabilities (values). Let’s filter and organize the results now: # loop over the predictions for (i, p) in enumerate(preds): # grab the prediction information for the current region proposal (imagenetID, label, prob) = p[0] # only if the label filters are not empty *and* the label does not # exist in the list, then ignore it if labelFilters is not None and label not in labelFilters: continue # filter out weak detections by ensuring the predicted probability # is greater than the minimum probability if prob >= args["conf"]: # grab the bounding box associated with the prediction and # convert the coordinates (x, y, w, h) = boxes[i] box = (x, y, x + w, y + h) # grab the list of predictions for the label and add the # bounding box + probability to the list L = labels.get(label, []) L.append((box, prob)) labels[label] = L Looping over predictions beginning on Line 110, we: Extract the prediction information including the class label and probability (Line 112) Ensure the particular prediction’s class label is in the label filter, dropping results we don’t wish to consider (Lines 116 and 117) Filter out weak confidence inference results (Line 121) Grab the bounding box associated with the prediction and then convert and store (x, y)-coordinates (Lines 124 and 125) Update the labels dictionary so that it is organized with each ImageNet class label (key) associated with a list of tuples (value) consisting of a detection’s bounding box and prob (Lines 129-131) Now that our results are collated in the labels dictionary, we will produce two visualizations of our results: Before applying non-maxima suppression (NMS) After applying NMS By applying NMS, weak overlapping bounding boxes will be suppressed, thereby resulting in a single object detection. In order to demonstrate the power of NMS, first let’s generate our Before NMS result: # loop over the labels for each of detected objects in the image for label in labels.keys(): # clone the original image so that we can draw on it print("[INFO] showing results for '{}'".format(label)) clone = image.copy() # loop over all bounding boxes for the current label for (box, prob) in labels[label]: # draw the bounding box on the image (startX, startY, endX, endY) = box cv2.rectangle(clone, (startX, startY), (endX, endY), (0, 255, 0), 2) # show the results *before* applying non-maxima suppression, then # clone the image again so we can display the results *after* # applying non-maxima suppression cv2.imshow("Before", clone) clone = image.copy() Looping over unique keys in our labels dictionary, we annotate our output image with bounding boxes for that particular label (Lines 140-144) and display the Before NMS result (Line 149). Given that our visualization will likely be very cluttered with many bounding boxes, I chose not to annotate class labels. 
Now, let’s apply NMS and display the After NMS result: # extract the bounding boxes and associated prediction # probabilities, then apply non-maxima suppression boxes = np.array([p[0] for p in labels[label]]) proba = np.array([p[1] for p in labels[label]]) boxes = non_max_suppression(boxes, proba) # loop over all bounding boxes that were kept after applying # non-maxima suppression for (startX, startY, endX, endY) in boxes: # draw the bounding box and label on the image cv2.rectangle(clone, (startX, startY), (endX, endY), (0, 255, 0), 2) y = startY - 10 if startY - 10 > 10 else startY + 10 cv2.putText(clone, label, (startX, y), cv2.FONT_HERSHEY_SIMPLEX, 0.45, (0, 255, 0), 2) # show the output after apply non-maxima suppression cv2.imshow("After", clone) cv2.waitKey(0) Lines 154-156 apply non-maxima suppression using my imutils method. From there, we annotate each remaining bounding box and class label (Lines 160-166) and display the After NMS result (Line 169). Both the Before NMS and After NMS visualizations will remain on your screen until a key is pressed (Line 170). Region proposal object detection results using OpenCV, Keras, and TensorFlow We are now ready to perform region proposal object detection!
Make sure you use the “Downloads” section of this tutorial to download the source code and example images. From there, open up a terminal, and execute the following command: $ python region_proposal_detection.py --image beagle.png [INFO] loading ResNet... [INFO] performing selective search with 'fast' method... [INFO] 922 regions found by selective search [INFO] proposal shape: (534, 224, 224, 3) [INFO] classifying proposals... [INFO] showing results for 'beagle' [INFO] showing results for 'clog' [INFO] showing results for 'quill' [INFO] showing results for 'paper_towel' Figure 3: Left: Object detections for the “beagle” class as a result of region proposal object detection with OpenCV, Keras, and TensorFlow. Right: After applying non-maxima suppression to eliminate overlapping bounding boxes. Initially, our results look quite good. If you take a look at Figure 3, you’ll see that on the left we have the object detections for the “beagle” class (a type of dog) and on the right we have the output after applying non-maxima suppression. As you can see from the output, Jemma, my family’s beagle, was correctly detected! However, as the rest of our results show, our model is also reporting that we detected a “clog” (a type of wooden shoe): Figure 4: One of the regions proposed by Selective Search is later predicted incorrectly to have a “clog” shoe in it using OpenCV, Keras, and TensorFlow. As well as a “quill” (a writing pen made from a feather): Figure 5: Another region proposed by Selective Search is then classified incorrectly to have a “quill” pen in it. And finally, a “paper towel”: Figure 6: Our Selective Search + ResNet classifier-based object detection method (created with OpenCV, TensorFlow, and Keras) has incorrectly predicted that a “paper towel” is present in this photo. Looking at the ROIs for each of these classes, one can imagine how our CNN may have been confused when making those classifications.
But how do we actually remove the incorrect object detections? The solution here is that we can filter through only the detections we care about. For example, if I were building a “beagle detector” application, I would supply the --filter beagle command line argument: $ python region_proposal_detection.py --image beagle.png --filter beagle [INFO] loading ResNet... [INFO] performing selective search with 'fast' method... [INFO] 922 regions found by selective search [INFO] proposal shape: (534, 224, 224, 3) [INFO] classifying proposals... [INFO] showing results for 'beagle' Figure 7: While Selective Search has proposed many regions that might contain an object, after classification of the ROIs, we’ve filtered for only the “beagle” class so that all other classes are ignored. And in that case, only the “beagle” class is found (the rest are discarded). Problems and limitations As our results section demonstrated, our region proposal object detector “only kinda-sorta worked” — while we obtained the correct object detection, we also got a lot of noise. In next week’s tutorial, I’ll show you how we can use Selective Search and region proposals to build a complete R-CNN object detector pipeline that is far more accurate than the method we’ve covered here today.
Summary In this tutorial, you learned how to perform region proposal object detection with OpenCV, Keras, and TensorFlow. Using region proposals for object detection is a 4-step process: Step #1: Use Selective Search (a region proposal algorithm) to generate candidate regions of an input image that could contain an object of interest. Step #2: Take these regions and pass them through a pre-trained CNN to classify the candidate areas (again, that could contain an object). Step #3: Apply non-maxima suppression (NMS) to suppress weak, overlapping bounding boxes. Step #4: Return the final bounding boxes to the calling function. We implemented the above pipeline using OpenCV, Keras, and TensorFlow — all in ~150 lines of code! (A condensed sketch recapping these four steps appears at the end of this summary.) However, you’ll note that we used a network that was pre-trained on the ImageNet dataset. That raises the questions: What if we wanted to train a network on our own custom dataset? How can we train a network using Selective Search? And how will that change our inference code used for object detection? I’ll be answering those questions in next week’s tutorial.
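As promised, here is a condensed sketch that recaps the four steps above in one place. Treat it as a hedged illustration rather than the tutorial's official script: it assumes the selective_search helper defined earlier in this post, reuses the same Keras/imutils imports, hard-codes the 224×224 input size and 10% size filter, skips the label filtering and visualization code, and uses a hypothetical function name (detect).

# condensed recap of the 4-step region proposal object detection pipeline;
# assumes the selective_search() helper defined earlier in this tutorial
import numpy as np
import cv2
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.applications import imagenet_utils
from tensorflow.keras.preprocessing.image import img_to_array
from imutils.object_detection import non_max_suppression

def detect(imagePath, minConf=0.9):
    model = ResNet50(weights="imagenet")
    image = cv2.imread(imagePath)
    (H, W) = image.shape[:2]

    # Step #1: generate candidate regions with Selective Search
    rects = selective_search(image)

    # Step #2: pre-process and classify each sufficiently large candidate region
    proposals, boxes = [], []
    for (x, y, w, h) in rects:
        if w / float(W) < 0.1 or h / float(H) < 0.1:
            continue
        roi = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
        roi = preprocess_input(img_to_array(cv2.resize(roi, (224, 224))))
        proposals.append(roi)
        boxes.append((x, y, x + w, y + h))
    preds = imagenet_utils.decode_predictions(
        model.predict(np.array(proposals)), top=1)

    # Step #3: keep confident detections and apply non-maxima suppression
    detections = {}
    for (i, p) in enumerate(preds):
        (_, label, prob) = p[0]
        if prob >= minConf:
            detections.setdefault(label, []).append((boxes[i], prob))
    results = {}
    for (label, data) in detections.items():
        b = np.array([d[0] for d in data])
        p = np.array([d[1] for d in data])
        results[label] = non_max_suppression(b, p)

    # Step #4: return the final bounding boxes, grouped by class label
    return results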
To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below!
https://pyimagesearch.com/2020/07/13/r-cnn-object-detection-with-keras-tensorflow-and-deep-learning/
In this tutorial, you will learn how to build an R-CNN object detector using Keras, TensorFlow, and Deep Learning. Today’s tutorial is the final part in our 4-part series on deep learning and object detection: Part 1: Turning any CNN image classifier into an object detector with Keras, TensorFlow, and OpenCV Part 2: OpenCV Selective Search for Object Detection Part 3: Region proposal for object detection with OpenCV, Keras, and TensorFlow Part 4: R-CNN object detection with Keras and TensorFlow (today’s tutorial) Last week, you learned how to use region proposals and Selective Search to replace the traditional computer vision object detection pipeline of image pyramids and sliding windows: Using Selective Search, we generated candidate regions (called “proposals”) that could contain an object of interest. These proposals were passed in to a pre-trained CNN to obtain the actual classifications. We then processed the results by applying confidence filtering and non-maxima suppression. Our method worked well enough — but it raised some questions: What if we wanted to train an object detection network on our own custom datasets? How can we train that network using Selective Search? And how will using Selective Search change our object detection inference script? In fact, these are the same questions that Girshick et al. had to consider in their seminal deep learning object detection paper Rich feature hierarchies for accurate object detection and semantic segmentation. Each of these questions will be answered in today’s tutorial — and by the time you’re done reading it, you’ll have a fully functioning R-CNN, similar (yet simplified) to the one Girshick et al.
implemented! To learn how to build an R-CNN object detector using Keras and TensorFlow, just keep reading. R-CNN object detection with Keras, TensorFlow, and Deep Learning Today’s tutorial on building an R-CNN object detector using Keras and TensorFlow is by far the longest tutorial in our series on deep learning object detectors. I would suggest you budget your time accordingly — it could take you anywhere from 40 to 60 minutes to read this tutorial in its entirety. Take it slow, as there are many details and nuances in the blog post (and don’t be afraid to read the tutorial 2-3x to ensure you fully comprehend it). We’ll start our tutorial by discussing the steps required to implement an R-CNN object detector using Keras and TensorFlow. From there, we’ll review the example object detection datasets we’ll be using here today. Next, we’ll implement our configuration file along with a helper utility function used to compute object detection accuracy via Intersection over Union (IoU). We’ll then build our object detection dataset by applying Selective Search.
Selective Search, along with a bit of post-processing logic, will enable us to identify regions of an input image that do and do not contain a potential object of interest. We’ll take these regions and use them as our training data, fine-tuning MobileNet (pre-trained on ImageNet) to classify and recognize objects from our dataset. Finally, we’ll implement a Python script that can be used for inference/prediction by applying Selective Search to an input image, classifying the region proposals generated by Selective Search, and then display the output R-CNN object detection results to our screen. Let’s get started! Steps to implementing an R-CNN object detector with Keras and TensorFlow Figure 1: Steps to build a R-CNN object detection with Keras, TensorFlow, and Deep Learning. Implementing an R-CNN object detector is a somewhat complex multistep process. If you haven’t yet, make sure you’ve read the previous tutorials in this series to ensure you have the proper knowledge and prerequisites: Turning any CNN image classifier into an object detector with Keras, TensorFlow, and OpenCVOpenCV Selective Search for Object DetectionRegion proposal for object detection with OpenCV, Keras, and TensorFlow I’ll be assuming you have a working knowledge of how Selective Search works, how region proposals can be utilized in an object detection pipeline, and how to fine-tune a network. With that said, below you can see our 6-step process to implementing an R-CNN object detector: Step #1: Build an object detection dataset using Selective SearchStep #2: Fine-tune a classification network (originally trained on ImageNet) for object detectionStep #3: Create an object detection inference script that utilizes Selective Search to propose regions that could contain an object that we would like to detectStep #4: Use our fine-tuned network to classify each region proposed via Selective SearchStep #5: Apply non-maxima suppression to suppress weak, overlapping bounding boxesStep #6: Return the final object detection results As I’ve already mentioned earlier, this tutorial is complex and covers many nuanced details. Therefore, don’t be too hard on yourself if you need to go over it multiple times to ensure you understand our R-CNN object detection implementation. With that in mind, let’s move on to reviewing our R-CNN project structure.
Our object detection dataset Figure 2: The raccoon object detection dataset is curated by Dat Tran. We will use the dataset to perform R-CNN object detection with Keras, TensorFlow, and Deep Learning. As Figure 2 shows, we’ll be training an R-CNN object detector to detect raccoons in input images. This dataset contains 200 images with 217 total raccoons (some images contain more than one raccoon). The dataset was originally curated by esteemed data scientist Dat Tran. The GitHub repository for the raccoon dataset can be found here; however, for convenience I have included the dataset in the “Downloads” associated with this tutorial. If you haven’t yet, make sure you use the “Downloads” section of this blog post to download the raccoon dataset and Python source code to allow you to follow along with the rest of this tutorial. Configuring your development environment To configure your system for this tutorial, I recommend following either of these tutorials: How to install TensorFlow 2.0 on UbuntuHow to install TensorFlow 2.0 on macOS Either tutorial will help you configure your system with all the necessary software for this blog post in a convenient Python virtual environment. Please note that PyImageSearch does not recommend or support Windows for CV/DL projects. Project structure If you haven’t yet, use the “Downloads” section to grab both the code and dataset for today’s tutorial.
Inside, you’ll find the following: $ tree --dirsfirst --filelimit 10 . ├── dataset │   ├── no_raccoon [2200 entries] │   └── raccoon [1560 entries] ├── images │   ├── raccoon_01.jpg │   ├── raccoon_02.jpg │   └── raccoon_03.jpg ├── pyimagesearch │   ├── __init__.py │   ├── config.py │   ├── iou.py │   └── nms.py ├── raccoons │   ├── annotations [200 entries] │   └── images [200 entries] ├── build_dataset.py ├── detect_object_rcnn.py ├── fine_tune_rcnn.py ├── label_encoder.pickle ├── plot.png └── raccoon_detector.h5 8 directories, 13 files As previously discussed, our raccoons/ dataset of images/ and annotations/ was curated and made available by Dat Tran. This dataset is not to be confused with the one that our build_dataset.py script produces — dataset/ — which is for the purpose of fine-tuning our MobileNet V2 model to create a raccoon classifier (raccoon_detector.h5). The downloads include a pyimagesearch module with the following: config.py: Holds our configuration settings, which will be used in our selection of Python scripts iou.py: Computes the Intersection over Union (IoU), an object detection evaluation metric nms.py: Performs non-maxima suppression (NMS) to eliminate overlapping boxes around objects The components of the pyimagesearch module will come in handy in the following three Python scripts, which represent the bulk of what we are learning in this tutorial: build_dataset.py: Takes Dat Tran’s raccoon dataset and creates a separate raccoon/ no_raccoon dataset, which we will use to fine-tune a MobileNet V2 model that is pre-trained on the ImageNet dataset fine_tune_rcnn.py: Trains our raccoon classifier by means of fine-tuning detect_object_rcnn.py: Brings all the pieces together to perform rudimentary R-CNN object detection, the key components being Selective Search and classification (note that this script does not accomplish true end-to-end R-CNN object detection by means of a model with a built-in Selective Search region proposal portion of the network) Note: We will not be reviewing nms.py; please refer to my tutorial on Non-Maximum Suppression for Object Detection in Python as needed. Implementing our object detection configuration file Before we get too far in our project, let’s first implement a configuration file that will store key constants and settings, which we will use across multiple Python scripts. Open up the config.py file in the pyimagesearch module, and insert the following code: # import the necessary packages import os # define the base path to the *original* input dataset and then use # the base path to derive the image and annotations directories ORIG_BASE_PATH = "raccoons" ORIG_IMAGES = os.path.sep.join([ORIG_BASE_PATH, "images"]) ORIG_ANNOTS = os.path.sep.join([ORIG_BASE_PATH, "annotations"]) We begin by defining paths to the original raccoon dataset images and object detection annotations (i.e., bounding box information) on Lines 6-8. Next, we define the paths to the dataset we will soon build: # define the base path to the *new* dataset after running our dataset # builder scripts and then use the base path to derive the paths to # our output class label directories BASE_PATH = "dataset" POSITVE_PATH = os.path.sep.join([BASE_PATH, "raccoon"]) NEGATIVE_PATH = os.path.sep.join([BASE_PATH, "no_raccoon"]) Here, we establish the paths to our positive (i.e,. there is a raccoon) and negative (i.e., no raccoon in the input image) example images (Lines 13-15). These directories will be populated when we run our build_dataset.py script. 
And now, we define the maximum number of Selective Search region proposals to be utilized for training and inference, respectively: # define the number of max proposals used when running selective # search for (1) gathering training data and (2) performing inference MAX_PROPOSALS = 2000 MAX_PROPOSALS_INFER = 200 Followed by setting the maximum number of positive and negative regions to use when building our dataset: # define the maximum number of positive and negative images to be # generated from each image MAX_POSITIVE = 30 MAX_NEGATIVE = 10 And we wrap up with model-specific constants: # initialize the input dimensions to the network INPUT_DIMS = (224, 224) # define the path to the output model and label binarizer MODEL_PATH = "raccoon_detector.h5" ENCODER_PATH = "label_encoder.pickle" # define the minimum probability required for a positive prediction # (used to filter out false-positive predictions) MIN_PROBA = 0.99 Line 28 sets the input spatial dimensions to our classification network (MobileNet, pre-trained on ImageNet).
We then define the output file paths to our raccoon classifier and label encoder (Lines 31 and 32). The minimum probability required for a positive prediction during inference (used to filter out false-positive detections) is set to 99% on Line 36. Measuring object detection accuracy with Intersection over Union (IoU) Figure 3: An example of detecting a stop sign in an image. The predicted bounding box is drawn in red while the ground-truth bounding box is drawn in green. Our goal is to compute the Intersection over Union between these bounding boxes, a ratio of the area of overlap to the area of union. ( image source) In order to measure how “good” a job our object detector is doing at predicting bounding boxes, we’ll be using the Intersection over Union (IoU) metric. The IoU method computes the ratio of the area of overlap to the area of the union between the predicted bounding box and the ground-truth bounding box: Figure 4: Computing the Intersection over Union is as simple as dividing the area of overlap between the bounding boxes by the area of union. ( image source) Examining this equation, you can see that Intersection over Union is simply a ratio: In the numerator, we compute the area of overlap between the predicted bounding box and the ground-truth bounding box. The denominator is the area of union, or more simply, the area encompassed by both the predicted bounding box and the ground-truth bounding box. Dividing the area of overlap by the area of union yields our final score — the Intersection over Union (hence the name).
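Restated as an equation, with A denoting the predicted bounding box and B the ground-truth box, the metric is simply:

\mathrm{IoU}(A, B) = \frac{\text{area of overlap}}{\text{area of union}} = \frac{|A \cap B|}{|A| + |B| - |A \cap B|}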
We’ll use IoU to measure object detection accuracy, including how much a given Selective Search proposal overlaps with a ground-truth bounding box (which is useful when we go to generate positive and negative examples for our training data). If you’re interested in learning more about IoU, be sure to refer to my tutorial, Intersection over Union (IoU) for object detection. Otherwise, let’s briefly review our IoU implementation now — open up the iou.py file in the pyimagesearch directory, and insert the following code: def compute_iou(boxA, boxB): # determine the (x, y)-coordinates of the intersection rectangle xA = max(boxA[0], boxB[0]) yA = max(boxA[1], boxB[1]) xB = min(boxA[2], boxB[2]) yB = min(boxA[3], boxB[3]) # compute the area of intersection rectangle interArea = max(0, xB - xA + 1) * max(0, yB - yA + 1) # compute the area of both the prediction and ground-truth # rectangles boxAArea = (boxA[2] - boxA[0] + 1) * (boxA[3] - boxA[1] + 1) boxBArea = (boxB[2] - boxB[0] + 1) * (boxB[3] - boxB[1] + 1) # compute the intersection over union by taking the intersection # area and dividing it by the sum of prediction + ground-truth # areas - the intersection area iou = interArea / float(boxAArea + boxBArea - interArea) # return the intersection over union value return iou The compute_iou function accepts two parameters, boxA and boxB, which are the ground-truth and predicted bounding boxes for which we seek to compute the Intersection over Union (IoU). Order of the parameters does not matter for the purposes of our computation. Inside, we begin by computing both the top-left and bottom-right (x, y)-coordinates of the intersection rectangle (Lines 3-6). Using the bounding box coordinates, we compute the intersection (overlapping area) of the bounding boxes (Line 9). This value is the numerator for the IoU formula. To determine the denominator, we need to derive the area of both the predicted and ground-truth bounding boxes (Lines 13 and 14). The Intersection over Union can then be calculated on Line 19 by dividing the intersection area (numerator) by the union area of the two bounding boxes (denominator), taking care to subtract out the intersection area (otherwise the intersection area would be doubly counted). Line 22 returns the IoU result.
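As a quick sanity check, you can call the function directly with two hypothetical boxes (these coordinates are made up for illustration and do not come from the raccoon dataset):

from pyimagesearch.iou import compute_iou

# two hypothetical boxes in (startX, startY, endX, endY) format
boxA = (0, 0, 10, 10)   # ground-truth box
boxB = (5, 5, 15, 15)   # predicted box

# with the +1 convention above, the intersection is a 6x6 = 36 pixel patch
# and each box covers 11x11 = 121 pixels, so:
# IoU = 36 / (121 + 121 - 36) = 36 / 206, roughly 0.175
print(compute_iou(boxA, boxB))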
Implementing our object detection dataset builder script Figure 5: Steps to build our dataset for R-CNN object detection with Keras, TensorFlow, and Deep Learning. Before we can create our R-CNN object detector, we first need to build our dataset, accomplishing Step #1 from our list of six steps for today’s tutorial. Our build_dataset.py script will: 1. Accept our input raccoons dataset 2. Loop over all images in the dataset 2a. Load the given input image 2b. Load and parse the bounding box coordinates for any raccoons in the input image 3. Run Selective Search on the input image 4. Use IoU to determine which region proposals from Selective Search sufficiently overlap with the ground-truth bounding boxes and which ones do not 5. Save region proposals as overlapping (contains raccoon) or not (no raccoon) Once our dataset is built, we will be able to work on Step #2 — fine-tuning an object detection network.
Now that we understand the dataset builder at a high level, let’s implement it. Open the build_dataset.py file, and follow along: # import the necessary packages from pyimagesearch.iou import compute_iou from pyimagesearch import config from bs4 import BeautifulSoup from imutils import paths import cv2 import os In addition to our IoU and configuration settings (Lines 2 and 3), this script requires BeautifulSoup, imutils, and OpenCV. If you followed the “Configuring your development environment” section above, your system has all of these tools at your disposal. Now that our imports are taken care of, let’s create two empty directories and build a list of all the raccoon images: # loop over the output positive and negative directories for dirPath in (config.POSITVE_PATH, config.NEGATIVE_PATH): # if the output directory does not exist yet, create it if not os.path.exists(dirPath): os.makedirs(dirPath) # grab all image paths in the input images directory imagePaths = list(paths.list_images(config.ORIG_IMAGES)) # initialize the total number of positive and negative images we have # saved to disk so far totalPositive = 0 totalNegative = 0 Our positive and negative directories will soon contain our raccoon or no raccoon images. Lines 10-13 create these directories if they don’t yet exist. Then, Line 16 grabs all input image paths in our raccoons dataset directory, storing them in the imagePaths list. Our totalPositive and totalNegative accumulators (Lines 20 and 21) will hold the final counts of our raccoon or no raccoon images, but more importantly, our filenames will be derived from the count as our loop progresses.
Speaking of such a loop, let’s begin looping over all of the imagePaths in our dataset: # loop over the image paths for (i, imagePath) in enumerate(imagePaths): # show a progress report print("[INFO] processing image {}/{}...".format(i + 1, len(imagePaths))) # extract the filename from the file path and use it to derive # the path to the XML annotation file filename = imagePath.split(os.path.sep)[-1] filename = filename[:filename.rfind(".")] annotPath = os.path.sep.join([config.ORIG_ANNOTS, "{}.xml".format(filename)]) # load the annotation file, build the soup, and initialize our # list of ground-truth bounding boxes contents = open(annotPath).read() soup = BeautifulSoup(contents, "html.parser") gtBoxes = [] # extract the image dimensions w = int(soup.find("width").string) h = int(soup.find("height").string) Inside our loop over imagePaths, Lines 31-34 derive the image path’s associated XML annotation file path (in PASCAL VOC format) — this file contains the ground-truth object detection annotations for the current image. From there, Lines 38 and 39 load and parse the XML object. Our gtBoxes list will soon hold our dataset’s ground-truth bounding boxes (Line 40). The first pieces of data we extract from our PASCAL VOC XML annotation file are the image dimensions (Lines 43 and 44). Next, we’ll grab bounding box coordinates from all the <object> elements in our annotation file: # loop over all 'object' elements for o in soup.find_all("object"): # extract the label and bounding box coordinates label = o.find("name").string xMin = int(o.find("xmin").string) yMin = int(o.find("ymin").string) xMax = int(o.find("xmax").string) yMax = int(o.find("ymax").string) # truncate any bounding box coordinates that may fall # outside the boundaries of the image xMin = max(0, xMin) yMin = max(0, yMin) xMax = min(w, xMax) yMax = min(h, yMax) # update our list of ground-truth bounding boxes gtBoxes.append((xMin, yMin, xMax, yMax)) Looping over all <object> elements from the XML file (i.e., the actual ground-truth bounding boxes), we: Extract the label as well as the bounding box coordinates (Lines 49-53) Ensure bounding box coordinates do not fall outside bounds of image spatial dimensions by truncating them accordingly (Lines 57-60) Update our list of ground-truth bounding boxes (Line 63) At this point, we need to load an image and perform Selective Search: # load the input image from disk image = cv2.imread(imagePath) # run selective search on the image and initialize our list of # proposed boxes ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation() ss.setBaseImage(image) ss.switchToSelectiveSearchFast() rects = ss.process() proposedRects = [] # loop over the rectangles generated by selective search for (x, y, w, h) in rects: # convert our bounding boxes from (x, y, w, h) to (startX, # startY, endX, endY) proposedRects.append((x, y, x + w, y + h)) Here, we load an image from the dataset (Line 66), perform Selective Search to find region proposals (Lines 70-73), and populate our proposedRects list with the results (Lines 74-80). Now that we have (1) ground-truth bounding boxes and (2) region proposals generated by Selective Search, we will use IoU to determine which regions overlap sufficiently with the ground-truth boxes and which do not: # initialize counters used to count the number of positive and # negative ROIs saved thus far positiveROIs = 0 negativeROIs = 0 # loop over the maximum number of region proposals for proposedRect in proposedRects[:config.MAX_PROPOSALS]:
MAX_PROPOSALS]: # unpack the proposed rectangle bounding box (propStartX, propStartY, propEndX, propEndY) = proposedRect # loop over the ground-truth bounding boxes for gtBox in gtBoxes: # compute the intersection over union between the two # boxes and unpack the ground-truth bounding box iou = compute_iou(gtBox, proposedRect) (gtStartX, gtStartY, gtEndX, gtEndY) = gtBox # initialize the ROI and output path roi = None outputPath = None We will refer to: positiveROIs as the number of region proposals for the current image that (1) sufficiently overlap with ground-truth annotations and (2) are saved to disk in the path contained in config. POSITIVE_PATH negativeROIs as the number of region proposals for the current image that (1) fail to meet our IoU threshold of 70% and (2) are saved to disk to the config. NEGATIVE_PATH We initialize both of these counters on Lines 84 and 85. Beginning on Line 88, we loop over region proposals generated by Selective Search (up to our defined maximum proposal count). Inside, we: Unpack the current bounding box generated by Selective Search (Line 90). Loop over all the ground-truth bounding boxes (Line 93). Compute the IoU between the region proposal bounding box and the ground-truth bounding box (Line 96). This iou value will serve as our threshold to determine if a region proposal is a positive ROI or negative ROI. Initialize the roi along with its outputPath (Lines 100 and 101). Let’s determine if this proposedRect and gtBox pair is a positive ROI: # check to see if the IOU is greater than 70% *and* that # we have not hit our positive count limit if iou > 0.7 and positiveROIs <= config.
MAX_POSITIVE: # extract the ROI and then derive the output path to # the positive instance roi = image[propStartY:propEndY, propStartX:propEndX] filename = "{}.png".format(totalPositive) outputPath = os.path.sep.join([config. POSITVE_PATH, filename]) # increment the positive counters positiveROIs += 1 totalPositive += 1 Assuming this particular region passes the check to see if we have an IoU > 70% and we have not yet hit our limit on positive examples for the current image (Line 105), we simply: Extract the positive roi via NumPy slicing (Line 108) Construct the outputPath to where the ROI will be exported (Lines 109-111) Increment our positive counters (Lines 114 and 115) In order to determine if this proposedRect and gtBox pair is a negative ROI, we first need to check whether we have a full overlap: # determine if the proposed bounding box falls *within* # the ground-truth bounding box fullOverlap = propStartX >= gtStartX fullOverlap = fullOverlap and propStartY >= gtStartY fullOverlap = fullOverlap and propEndX <= gtEndX fullOverlap = fullOverlap and propEndY <= gtEndY If the region proposal bounding box (proposedRect) falls entirely within the ground-truth bounding box (gtBox), then we have what I call a fullOverlap. The logic on Lines 119-122 inspects the (x, y)-coordinates to determine whether we have such a fullOverlap. We’re now ready to handle the case where our proposedRect and gtBox are considered a negative ROI: # check to see if there is not full overlap *and* the IoU # is less than 5% *and* we have not hit our negative # count limit if not fullOverlap and iou < 0.05 and \ negativeROIs <= config. MAX_NEGATIVE: # extract the ROI and then derive the output path to # the negative instance roi = image[propStartY:propEndY, propStartX:propEndX] filename = "{}.png".format(totalNegative) outputPath = os.path.sep.join([config. NEGATIVE_PATH, filename]) # increment the negative counters negativeROIs += 1 totalNegative += 1 Here, our conditional (Lines 127 and 128) checks to see if all of the following hold true: There is not full overlapThe IoU is sufficiently smallOur limit on the number of negative examples for the current image is not exceeded If all checks pass, we: Extract the negative roi (Line 131) Construct the path to where the ROI will be stored (Lines 132-134) Increment the negative counters (Lines 137 and 138) At this point, we’ve reached our final task for building the dataset: exporting the current roi to the appropriate directory: # check to see if both the ROI and output path are valid if roi is not None and outputPath is not None: # resize the ROI to the input dimensions of the CNN # that we'll be fine-tuning, then write the ROI to # disk roi = cv2.resize(roi, config. INPUT_DIMS, interpolation=cv2.INTER_CUBIC) cv2.imwrite(outputPath, roi) Assuming both the ROI and associated output path are not None, (Line 141), we simply resize the ROI according to our CNN input dimensions and write the ROI to disk (Lines 145-147). Recall that each ROI’s outputPath is based on either the config. POSITIVE_PATH or config. NEGATIVE_PATH as well as the current totalPositive or totalNegative count.
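As a quick reference, this is what a standard Intersection over Union computation looks like for two boxes in (startX, startY, endX, endY) format. Treat it as a minimal sketch of the logic behind the compute_iou helper called on Line 96; the exact implementation shipped with the "Downloads" for this tutorial may differ slightly in its details:

def compute_iou(boxA, boxB):
    # determine the (x, y)-coordinates of the intersection rectangle
    xA = max(boxA[0], boxB[0])
    yA = max(boxA[1], boxB[1])
    xB = min(boxA[2], boxB[2])
    yB = min(boxA[3], boxB[3])

    # compute the area of the intersection rectangle (zero if the
    # boxes do not overlap at all)
    interArea = max(0, xB - xA + 1) * max(0, yB - yA + 1)

    # compute the area of both the proposal and ground-truth boxes
    boxAArea = (boxA[2] - boxA[0] + 1) * (boxA[3] - boxA[1] + 1)
    boxBArea = (boxB[2] - boxB[0] + 1) * (boxB[3] - boxB[1] + 1)

    # IoU = intersection area / union area
    return interArea / float(boxAArea + boxBArea - interArea)

The important property is that the result falls in the range [0, 1], which is exactly what our 70% (positive) and 5% (negative) thresholds operate on.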
Based on this naming convention, our ROIs are sorted into either dataset/raccoon or dataset/no_raccoon. In the next section, we’ll put this script to work for us! Preparing our image dataset for object detection We are now ready to build our image dataset for R-CNN object detection. If you haven’t yet, use the “Downloads” section of this tutorial to download the source code and example image datasets. From there, open up a terminal, and execute the following command: $ time python build_dataset.py [INFO] processing image 1/200... [INFO] processing image 2/200... [INFO] processing image 3/200... ... [INFO] processing image 198/200... [INFO] processing image 199/200... [INFO] processing image 200/200... real 5m42.453s user 6m50.769s sys 1m23.245s As you can see, running Selective Search on our entire dataset of 200 images took roughly 5 minutes and 42 seconds. If you check the contents of the raccoon and no_raccoon subdirectories of dataset, you’ll see that we have 1,560 “raccoon” images and 2,200 “no raccoon” images: $ ls -l dataset/raccoon/*.png | wc -l 1560 $ ls -l dataset/no_raccoon/*.png | wc -l 2200 A sample of both classes can be seen below: Figure 6: A montage of our resulting raccoon dataset, which we will use to build a rudimentary R-CNN object detector with Keras and TensorFlow. As you can see from Figure 6 (left), the “No Raccoon” class has sample image patches generated by Selective Search that did not overlap significantly with any of the raccoon ground-truth bounding boxes. Then, in Figure 6 (right), we have our “Raccoon” class images. You’ll note that some of these images are similar to each other and in some cases are near-duplicates — that is in fact the intended behavior. Keep in mind that Selective Search attempts to identify regions of an image that could contain a potential object.
Therefore, it’s totally feasible that Selective Search could fire multiple times in the similar regions. You could choose to keep these regions (as I’ve done) or add additional logic that can be used to filter out regions that significantly overlap (I’m leaving that as an exercise to you). Fine-tuning a network for object detection with Keras and TensorFlow With our dataset created via the previous two sections (Step #1), we’re now ready to fine-tune a classification CNN to recognize both of these classes (Step #2). When we combine this classifier with Selective Search, we’ll be able to build our R-CNN object detector. For the purposes of this tutorial, I’ve chosen to fine-tune the MobileNet V2 CNN, which is pre-trained on the 1,000-class ImageNet dataset. I recommend that you read up on the concepts of transfer learning and fine-tuning if you are not familiar with them: Transfer Learning with Keras and Deep Learning (be sure to read from the beginning through the “Two types of transfer learning: feature extraction and fine tuning” section at a minimum)Fine-tuning with Keras and Deep Learning (I highly recommend reading this tutorial in its entirety) The result of fine-tuning MobileNet will be a classifier that distinguishes between our raccoon and no_raccoon classes. When you’re ready, open the fine_tune_rcnn.py file in your project directory structure, and let’s get started: # import the necessary packages from pyimagesearch import config from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.applications import MobileNetV2 from tensorflow.keras.layers import AveragePooling2D from tensorflow.keras.layers import Dropout from tensorflow.keras.layers import Flatten from tensorflow.keras.layers import Dense from tensorflow.keras.layers import Input from tensorflow.keras.models import Model from tensorflow.keras.optimizers import Adam from tensorflow.keras.applications.mobilenet_v2 import preprocess_input from tensorflow.keras.preprocessing.image import img_to_array from tensorflow.keras.preprocessing.image import load_img from tensorflow.keras.utils import to_categorical from sklearn.preprocessing import LabelBinarizer from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report from imutils import paths import matplotlib.pyplot as plt import numpy as np import argparse import pickle import os Phew! That’s a metric ton of imports we’ll be using for this script. Let’s break them down: config: Our Python configuration file consisting of paths and constants. ImageDataGenerator: For the purposes of data augmentation.
MobileNetV2: The MobileNet CNN architecture is common, so it is built-in to TensorFlow/Keras. For the purposes of fine-tuning, we’ll load the network with pre-trained ImageNet weights, chop off the network’s head and replace it, and tune/train until our network is performing well. tensorflow.keras.layers: A selection of CNN layer types are used to build/replace the head of MobileNet V2. Adam: An optimizer alternative to Stochastic Gradient Descent (SGD). LabelBinarizer and to_categorical: Used in conjunction to perform one-hot encoding of our class labels. train_test_split: Conveniently helps us segment our dataset into training and testing sets. classification_report: Computes a statistical summary of our model evaluation results. matplotlib: Python’s de facto plotting package will be used to generate accuracy/loss curves from our training history data. With our imports ready to go, let’s parse command line arguments and set our hyperparameter constants: # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-p", "--plot", type=str, default="plot.png", help="path to output loss/accuracy plot") args = vars(ap.parse_args()) # initialize the initial learning rate, number of epochs to train for, # and batch size INIT_LR = 1e-4 EPOCHS = 5 BS = 32 The --plot command line argument defines the path to our accuracy/loss plot (Lines 27-30).
We then establish training hyperparameters including our initial learning rate, number of training epochs, and batch size (Lines 34-36). Loading our dataset is straightforward, since we did all the hard work already in Step #1: # grab the list of images in our dataset directory, then initialize # the list of data (i.e., images) and class labels print("[INFO] loading images...") imagePaths = list(paths.list_images(config. BASE_PATH)) data = [] labels = [] # loop over the image paths for imagePath in imagePaths: # extract the class label from the filename label = imagePath.split(os.path.sep)[-2] # load the input image (224x224) and preprocess it image = load_img(imagePath, target_size=config. INPUT_DIMS) image = img_to_array(image) image = preprocess_input(image) # update the data and labels lists, respectively data.append(image) labels.append(label) Recall that our new dataset lives in the path defined by config. BASE_PATH. Line 41 grabs all the imagePaths located in the base path and its class subdirectories. From there, we seek to populate our data and labels lists (Lines 42 and 43). To do so, we define a loop over the imagePaths (Line 46) and proceed to: Extract the particular image’s class label directly from the path (Line 48) Load and pre-process the image, specifying the target_size according to the input dimensions of the MobileNet V2 CNN (Lines 51-53) Append the image and label to the data and labels lists We have a few more steps to take care of to prepare our data: # convert the data and labels to NumPy arrays data = np.array(data, dtype="float32") labels = np.array(labels) # perform one-hot encoding on the labels lb = LabelBinarizer() labels = lb.fit_transform(labels) labels = to_categorical(labels) # partition the data into training and testing splits using 75% of # the data for training and the remaining 25% for testing (trainX, testX, trainY, testY) = train_test_split(data, labels, test_size=0.20, stratify=labels, random_state=42) # construct the training image generator for data augmentation aug = ImageDataGenerator( rotation_range=20, zoom_range=0.15, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.15, horizontal_flip=True, fill_mode="nearest") Here we: Convert the data and label lists to NumPy arrays (Lines 60 and 61) One-hot encode our labels (Lines 64-66) Construct our training and testing data splits (Lines 70 and 71) Initialize our data augmentation object with settings for random mutations of our data to improve our model’s ability to generalize (Lines 74-81) Now that our data is ready, let’s prepare MobileNet V2 for fine-tuning: # load the MobileNetV2 network, ensuring the head FC layer sets are # left off baseModel = MobileNetV2(weights="imagenet", include_top=False, input_tensor=Input(shape=(224, 224, 3))) # construct the head of the model that will be placed on top of the # the base model headModel = baseModel.output headModel = AveragePooling2D(pool_size=(7, 7))(headModel) headModel = Flatten(name="flatten")(headModel) headModel = Dense(128, activation="relu")(headModel) headModel = Dropout(0.5)(headModel) headModel = Dense(2, activation="softmax")(headModel) # place the head FC model on top of the base model (this will become # the actual model we will train) model = Model(inputs=baseModel.input, outputs=headModel) # loop over all layers in the base model and freeze them so they will # *not* be updated during the first training process for layer in baseModel.layers: layer.trainable = False To ensure our MobileNet V2 CNN is ready to be fine-tuned, we 
use the following approach: Load MobileNet pre-trained on the ImageNet dataset, leaving off the fully-connected (FC) head Construct a new FC head Append the new FC head to the MobileNet base, resulting in our model Freeze the base layers of MobileNet (i.e., set them as not trainable) Take a step back to consider what we’ve just accomplished in this code block. The MobileNet base of our network has pre-trained weights that are frozen. We will only train the head of the network.
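If you would like to verify that the freeze actually took effect, a quick, optional sanity check such as the following can be run immediately after the loop over baseModel.layers (it assumes the baseModel and model variables from the code block we just reviewed):

# optional sanity check: confirm that only the new head is trainable
import numpy as np

trainableCount = int(np.sum([np.prod(w.shape.as_list())
    for w in model.trainable_weights]))
frozenCount = int(np.sum([np.prod(w.shape.as_list())
    for w in model.non_trainable_weights]))
print("[INFO] trainable parameters: {:,}".format(trainableCount))
print("[INFO] frozen parameters: {:,}".format(frozenCount))
print("[INFO] base model trainable? {}".format(
    any(layer.trainable for layer in baseModel.layers)))

The frozen count should dwarf the trainable count, since only the small FC head we just attached will be updated during training.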
Notice that the head of our network has a Softmax Classifier with 2 outputs corresponding to our raccoon and no_raccoon classes. So far, in this script, we’ve loaded our data, initialized our data augmentation object, and prepared for fine tuning. We’re now ready to fine-tune our model: # compile our model print("[INFO] compiling model...") opt = Adam(lr=INIT_LR) model.compile(loss="binary_crossentropy", optimizer=opt, metrics=["accuracy"]) # train the head of the network print("[INFO] training head...") H = model.fit( aug.flow(trainX, trainY, batch_size=BS), steps_per_epoch=len(trainX) // BS, validation_data=(testX, testY), validation_steps=len(testX) // BS, epochs=EPOCHS) We compile our model with the Adam optimizer and binary crossentropy loss. Note: If you are using this script as a basis for training with a dataset of three or more classes, ensure you do the following: (1) Use "categorical_crossentropy" loss on Lines 109 and 110, and (2) set your Softmax Classifier outputs accordingly on Line 95 (we’re using 2 in this tutorial because we have two classes). Training launches via Lines 114-119. Since TensorFlow 2.0 was released, the fit method can handle data augmentation generators, whereas previously we relied on the fit_generator method. For more details on these two methods, be sure to read my updated tutorial: How to use Keras fit and fit_generator (a hands-on tutorial). Once training draws to a close, our model is ready for evaluation on the test set: # make predictions on the testing set print("[INFO] evaluating network...") predIdxs = model.predict(testX, batch_size=BS) # for each image in the testing set we need to find the index of the # label with corresponding largest predicted probability predIdxs = np.argmax(predIdxs, axis=1) # show a nicely formatted classification report print(classification_report(testY.argmax(axis=1), predIdxs, target_names=lb.classes_)) Line 123 makes predictions on our testing set, and then Line 127 grabs all indices of the labels with the highest predicted probability. We then print our classification_report to the terminal for statistical analysis (Lines 130 and 131). Let’s go ahead and export both our (1) trained model and (2) label encoder: # serialize the model to disk print("[INFO] saving mask detector model...") model.save(config.
MODEL_PATH, save_format="h5") # serialize the label encoder to disk print("[INFO] saving label encoder...") f = open(config. ENCODER_PATH, "wb") f.write(pickle.dumps(lb)) f.close() Line 135 serializes our model to disk. For TensorFlow 2.0+, I recommend explicitly setting the save_format="h5" (HDF5 format). Our label encoder is serialized to disk in Python’s pickle format (Lines 139-141). To close out, we’ll plot our accuracy/loss curves from our training history: # plot the training loss and accuracy N = EPOCHS plt.style.use("ggplot") plt.figure() plt.plot(np.arange(0, N), H.history["loss"], label="train_loss") plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss") plt.plot(np.arange(0, N), H.history["accuracy"], label="train_acc") plt.plot(np.arange(0, N), H.history["val_accuracy"], label="val_acc") plt.title("Training Loss and Accuracy") plt.xlabel("Epoch #") plt.ylabel("Loss/Accuracy") plt.legend(loc="lower left") plt.savefig(args["plot"]) Using matplotlib, we plot the accuracy and loss curves for inspection (Lines 144-154). We export the resulting figure to the path contained in the --plot command line argument. Training our R-CNN object detection network with Keras and TensorFlow We are now ready to fine-tune our mobile such that we can create an R-CNN object detector! If you haven’t yet, go to the “Downloads” section of this tutorial to download the source code and sample dataset. From there, open up a terminal, and execute the following command: $ time python fine_tune_rcnn.py [INFO] loading images... [INFO] compiling model... [INFO] training head... Train for 94 steps, validate on 752 samples Train for 94 steps, validate on 752 samples Epoch 1/5 94/94 [==============================] - 77s 817ms/step - loss: 0.3072 - accuracy: 0.8647 - val_loss: 0.1015 - val_accuracy: 0.9728 Epoch 2/5 94/94 [==============================] - 74s 789ms/step - loss: 0.1083 - accuracy: 0.9641 - val_loss: 0.0534 - val_accuracy: 0.9837 Epoch 3/5 94/94 [==============================] - 71s 756ms/step - loss: 0.0774 - accuracy: 0.9784 - val_loss: 0.0433 - val_accuracy: 0.9864 Epoch 4/5 94/94 [==============================] - 74s 784ms/step - loss: 0.0624 - accuracy: 0.9781 - val_loss: 0.0367 - val_accuracy: 0.9878 Epoch 5/5 94/94 [==============================] - 74s 791ms/step - loss: 0.0590 - accuracy: 0.9801 - val_loss: 0.0340 - val_accuracy: 0.9891 [INFO] evaluating network... precision recall f1-score support no_raccoon 1.00 0.98 0.99 440 raccoon 0.97 1.00 0.99 312 accuracy 0.99 752 macro avg 0.99 0.99 0.99 752 weighted avg 0.99 0.99 0.99 752 [INFO] saving mask detector model... [INFO] saving label encoder... real 6m37.851s user 31m43.701s sys 33m53.058s Fine-tuning MobileNet on my 3Ghz Intel Xeon W processor took ~6m30 seconds, and as you can see, we are obtaining ~99% accuracy. And as our training plot shows, there are little signs of overfitting: Figure 7: Accuracy and loss curves for fine-tuning the MobileNet V2 classifier on the raccoon dataset.
This classifier is a key component in our elementary R-CNN object detection with Keras, TensorFlow, and Deep Learning. With our MobileNet model fine-tuned for raccoon prediction, we’re ready to put all the pieces together and create our R-CNN object detection pipeline! Putting the pieces together: Implementing our R-CNN object detection inference script Figure 8: Steps to build a R-CNN object detection with Keras, TensorFlow, and Deep Learning. So far, we’ve accomplished: Step #1: Build an object detection dataset using Selective SearchStep #2: Fine-tune a classification network (originally trained on ImageNet) for object detection At this point, we’re going to put our trained model to work to perform object detection inference on new images. Accomplishing our object detection inference script accounts for Step #3 – Step #6. Let’s review those steps now: Step #3: Create an object detection inference script that utilized Selective Search to propose regions that could contain an object that we would like to detectStep #4: Use our fine-tuned network to classify each region proposed via Selective SearchStep #5: Apply non-maxima suppression to suppress weak, overlapping bounding boxesStep #6: Return the final object detection results We will take Step #6 a bit further and display the results so we can visually verify that our system is working. Let’s implement the R-CNN object detection pipeline now — open up a new file, name it detect_object_rcnn.py, and insert the following code: # import the necessary packages from pyimagesearch.nms import non_max_suppression from pyimagesearch import config from tensorflow.keras.applications.mobilenet_v2 import preprocess_input from tensorflow.keras.preprocessing.image import img_to_array from tensorflow.keras.models import load_model import numpy as np import argparse import imutils import pickle import cv2 # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-i", "--image", required=True, help="path to input image") args = vars(ap.parse_args()) Most of this script’s imports should look familiar by this point if you’ve been following along. The one that sticks out is non_max_suppression (Line 2). Be sure to read my tutorial on Non-Maximum Suppression for Object Detection in Python if you want to study what NMS entails.
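We will not reproduce the pyimagesearch.nms module here, but if you want a feel for what the helper does, the following is a minimal, Malisiewicz-style sketch that takes the bounding boxes and their associated probabilities and returns the indexes of the boxes to keep. The version included with the "Downloads" may differ in its details:

import numpy as np

def non_max_suppression(boxes, probs, overlapThresh=0.3):
    # if there are no boxes, there is nothing to suppress
    if len(boxes) == 0:
        return []

    # grab the coordinates of the bounding boxes
    boxes = boxes.astype("float")
    (x1, y1, x2, y2) = (boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3])

    # compute the area of each box and sort the indexes by confidence
    # (ascending, so the most confident box is examined last)
    area = (x2 - x1 + 1) * (y2 - y1 + 1)
    idxs = np.argsort(probs)
    pick = []

    while len(idxs) > 0:
        # keep the most confident remaining box
        last = len(idxs) - 1
        i = idxs[last]
        pick.append(i)

        # compute the fraction of each remaining box covered by the
        # picked box
        xx1 = np.maximum(x1[i], x1[idxs[:last]])
        yy1 = np.maximum(y1[i], y1[idxs[:last]])
        xx2 = np.minimum(x2[i], x2[idxs[:last]])
        yy2 = np.minimum(y2[i], y2[idxs[:last]])
        w = np.maximum(0, xx2 - xx1 + 1)
        h = np.maximum(0, yy2 - yy1 + 1)
        overlap = (w * h) / area[idxs[:last]]

        # discard the picked box and any box that overlaps it too much
        idxs = np.delete(idxs, np.concatenate(([last],
            np.where(overlap > overlapThresh)[0])))

    # return the indexes of the boxes that survived suppression
    return pick

The key idea is to repeatedly keep the most confident remaining box and discard any box that overlaps it by more than overlapThresh.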
Our script accepts the --image command line argument, which points to our input image path (Lines 14-17). From here, let’s (1) load our model, (2) load our image, and (3) perform Selective Search: # load the our fine-tuned model and label binarizer from disk print("[INFO] loading model and label binarizer...") model = load_model(config. MODEL_PATH) lb = pickle.loads(open(config. ENCODER_PATH, "rb").read()) # load the input image from disk image = cv2.imread(args["image"]) image = imutils.resize(image, width=500) # run selective search on the image to generate bounding box proposal # regions print("[INFO] running selective search...") ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation() ss.setBaseImage(image) ss.switchToSelectiveSearchFast() rects = ss.process() Lines 21 and 22 load our fine-tuned raccoon model and associated label binarizer. We then load our input --image and resize it to a known width (Lines 25 and 26). Next, we perform Selective Search on our image to generate our region proposals (Lines 31-34). At this point, we’re going to extract each of our proposal ROIs and pre-process them: # initialize the list of region proposals that we'll be classifying # along with their associated bounding boxes proposals = [] boxes = [] # loop over the region proposal bounding box coordinates generated by # running selective search for (x, y, w, h) in rects[:config. MAX_PROPOSALS_INFER]: # extract the region from the input image, convert it from BGR to # RGB channel ordering, and then resize it to the required input # dimensions of our trained CNN roi = image[y:y + h, x:x + w] roi = cv2.cvtColor(roi, cv2.COLOR_BGR2RGB) roi = cv2.resize(roi, config. INPUT_DIMS, interpolation=cv2.INTER_CUBIC) # further preprocess the ROI roi = img_to_array(roi) roi = preprocess_input(roi) # update our proposals and bounding boxes lists proposals.append(roi) boxes.append((x, y, x + w, y + h)) First, we initialize a list to hold our ROI proposals and another to hold the (x, y)-coordinates of our bounding boxes (Lines 38 and 39). We define a loop over the region proposal bounding boxes generated by Selective Search (Line 43).
Inside the loop, we extract the roi via NumPy slicing and pre-process using the same steps as in our build_dataset.py script (Lines 47-54). Both the roi and (x, y)-coordinates are then added to the proposals and boxes lists (Lines 57 and 58). Next, we’ll classify all of our proposals: # convert the proposals and bounding boxes into NumPy arrays proposals = np.array(proposals, dtype="float32") boxes = np.array(boxes, dtype="int32") print("[INFO] proposal shape: {}".format(proposals.shape)) # classify each of the proposal ROIs using fine-tuned model print("[INFO] classifying proposals...") proba = model.predict(proposals) Lines 61 and 62 convert our proposals and boxes into NumPy arrays with the specified datatype. Calling the predict method on our batch of proposals performs inference and returns the predictions (Line 67). Keep in mind that we have used a classifier on our Selective Search region proposals here. We’re using a combination of classification and Selective Search to conduct object detection. Our boxes contain the locations (i.e., coordinates) of our original input --image for where our objects (either raccoon or no_raccoon) are. The remaining code blocks localize and annotate our raccoon predictions. Let’s go ahead and filter for all the raccoon predictions, dropping the no_raccoon results: # find the index of all predictions that are positive for the # "raccoon" class print("[INFO] applying NMS...") labels = lb.classes_[np.argmax(proba, axis=1)] idxs = np.where(labels == "raccoon")[0] # use the indexes to extract all bounding boxes and associated class # label probabilities associated with the "raccoon" class boxes = boxes[idxs] proba = proba[idxs][:, 1] # further filter indexes by enforcing a minimum prediction # probability be met idxs = np.where(proba >= config. MIN_PROBA) boxes = boxes[idxs] proba = proba[idxs] To filter for raccoon results, we: Extract all predictions that are positive for raccoon (Lines 72 and 73) Use indices to extract all bounding boxes and class label probabilities associated with the raccoon class (Lines 77 and 78) Further filter indexes by enforcing a minimum probability (Lines 82-84) We’re now going to visualize the results without applying NMS: # clone the original image so that we can draw on it clone = image.copy() # loop over the bounding boxes and associated probabilities for (box, prob) in zip(boxes, proba): # draw the bounding box, label, and probability on the image (startX, startY, endX, endY) = box cv2.rectangle(clone, (startX, startY), (endX, endY), (0, 255, 0), 2) y = startY - 10 if startY - 10 > 10 else startY + 10 text= "Raccoon: {:.2f}%".format(prob * 100) cv2.putText(clone, text, (startX, y), cv2.FONT_HERSHEY_SIMPLEX, 0.45, (0, 255, 0), 2) # show the output after *before* running NMS cv2.imshow("Before NMS", clone) Looping over bounding boxes and probabilities that are predicted to contain raccoons (Line 90), we: Extract the bounding box coordinates (Line 92) Draw the bounding box rectangle (Lines 93 and 94) Draw the label and probability text at the top-left corner of the bounding box (Lines 95-98) From there, we display the before NMS visualization (Line 101).
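If the indexing in that filtering step feels opaque, here is a tiny toy example (the numbers are made up, not output from our detector) showing how the class names, argmax, and np.where interact. It assumes the LabelBinarizer sorted the class names alphabetically, so column 1 of the probabilities corresponds to raccoon, just as in the script above:

import numpy as np

# pretend output for three region proposals (columns: no_raccoon, raccoon)
classes = np.array(["no_raccoon", "raccoon"])
proba = np.array([[0.95, 0.05],
    [0.20, 0.80],
    [0.40, 0.60]])

# map each row to its most likely class label
labels = classes[np.argmax(proba, axis=1)]
idxs = np.where(labels == "raccoon")[0]
print(idxs)                  # [1 2]
print(proba[idxs][:, 1])     # [0.8 0.6] -- raccoon probabilities only

# enforce a minimum confidence (0.70 in this toy example)
keep = np.where(proba[idxs][:, 1] >= 0.70)[0]
print(idxs[keep])            # [1]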
Let’s apply NMS and see how the result compares: # run non-maxima suppression on the bounding boxes boxIdxs = non_max_suppression(boxes, proba) # loop over the bounding box indexes for i in boxIdxs: # draw the bounding box, label, and probability on the image (startX, startY, endX, endY) = boxes[i] cv2.rectangle(image, (startX, startY), (endX, endY), (0, 255, 0), 2) y = startY - 10 if startY - 10 > 10 else startY + 10 text= "Raccoon: {:.2f}%".format(proba[i] * 100) cv2.putText(image, text, (startX, y), cv2.FONT_HERSHEY_SIMPLEX, 0.45, (0, 255, 0), 2) # show the output image *after* running NMS cv2.imshow("After NMS", image) cv2.waitKey(0) We apply non-maxima suppression (NMS) via Line 104, effectively eliminating overlapping rectangles around objects. From there, Lines 107-119 draw the bounding boxes, labels, and probabilities and display the after NMS results until a key is pressed. Great job implementing your elementary R-CNN object detection script using TensorFlow/Keras, OpenCV, and Python. R-CNN object detection results using Keras and TensorFlow At this point, we have fully implemented a bare-bones R-CNN object detection pipeline using Keras, TensorFlow, and OpenCV. Are you ready to see it in action? Start by using the “Downloads” section of this tutorial to download the source code, example dataset, and pre-trained R-CNN detector. From there, you can execute the following command: $ python detect_object_rcnn.py --image images/raccoon_01.jpg [INFO] loading model and label binarizer... [INFO] running selective search... [INFO] proposal shape: (200, 224, 224, 3) [INFO] classifying proposals... [INFO] applying NMS... Here, you can see that two raccoon bounding boxes were found after applying our R-CNN object detector: Figure 9: Results of R-CNN object detection before NMS has been applied. Our elementary R-CNN was created with Selective Search and Deep Learning using TensorFlow, Keras, and OpenCV. By applying non-maxima suppression, we can suppress the weaker one, leaving with the one correct bounding box: Figure 10: NMS has suppressed overlapping bounding boxes that were present in Figure 9. Let’s try another image: $ python detect_object_rcnn.py --image images/raccoon_02.jpg [INFO] loading model and label binarizer... [INFO] running selective search... [INFO] proposal shape: (200, 224, 224, 3) [INFO] classifying proposals... [INFO] applying NMS... Again, here we have two bounding boxes: Figure 11: Our R-CNN object detector built with Keras, TensorFlow, and Deep Learning has detected our raccoon.
In this example, NMS has not been applied. Applying non-maxima suppression to our R-CNN object detection output leaves us with the final object detection: Figure 12: After applying NMS to our R-CNN object detection results, only one bounding box remains around the raccoon. Let’s look at one final example: $ python detect_object_rcnn.py --image images/raccoon_03.jpg [INFO] loading model and label binarizer... [INFO] running selective search... [INFO] proposal shape: (200, 224, 224, 3) [INFO] classifying proposals... [INFO] applying NMS... Figure 13: R-CNN object detection with and without NMS yields the same result in this particular case. Using Python and Keras/TensorFlow and OpenCV we built an R-CNN object detector. As you can see, only one bounding box was detected, so the output of the before/after NMS is identical. So there you have it, building a simple R-CNN object detector isn’t as hard as it may seem! We were able to build a simplified R-CNN object detection pipeline using Keras, TensorFlow, and OpenCV in only 427 lines of code, including comments! I hope that you can use this pipeline when you start to build basic object detectors of your own.
Summary In this tutorial, you learned how to implement a basic R-CNN object detector using Keras, TensorFlow, and deep learning. Our R-CNN object detector was a stripped-down, bare-bones version of what Girshick et al. may have created during the initial experiments for their seminal object detection paper Rich feature hierarchies for accurate object detection and semantic segmentation. The R-CNN object detection pipeline we implemented was a 6-step process, including: Step #1: Building an object detection dataset using Selective SearchStep #2: Fine-tuning a classification network (originally trained on ImageNet) for object detectionStep #3: Creating an object detection inference script that utilizes Selective Search to propose regions that could contain an object that we would like to detectStep #4: Using our fine-tuned network to classify each region proposed via Selective SearchStep #5: Applying non-maxima suppression to suppress weak, overlapping bounding boxesStep #6: Returning the final object detection results Overall, our R-CNN object detector performed quite well! I hope you can use this implementation as a starting point for your own object detection projects. And if you would like to learn more about implementing your own custom deep learning object detectors, be sure to refer to my book, Deep Learning for Computer Vision with Python, where I cover object detection in detail.
To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below!
https://pyimagesearch.com/2019/12/30/label-smoothing-with-keras-tensorflow-and-deep-learning/
In this tutorial, you will learn two ways to implement label smoothing using Keras, TensorFlow, and Deep Learning. When training your own custom deep neural networks there are two critical questions that you should constantly be asking yourself: Am I overfitting to my training data? Will my model generalize to data outside my training and testing splits? Regularization methods are used to help combat overfitting and help our model generalize. Examples of regularization methods include dropout, L2 weight decay, data augmentation, etc. However, there is another regularization technique we haven’t discussed yet — label smoothing. Label smoothing: Turns “hard” class label assignments to “soft” label assignments. Operates directly on the labels themselves. Is dead simple to implement. Can lead to a model that generalizes better.
In the remainder of this tutorial, I’ll show you how to implement label smoothing and utilize it when training your own custom neural networks. To learn more about label smoothing with Keras and TensorFlow, just keep reading! Label smoothing with Keras, TensorFlow, and Deep Learning In the first part of this tutorial I’ll address three questions: What is label smoothing? Why would we want to apply label smoothing? How does label smoothing improve our output model? From there I’ll show you two methods to implement label smoothing using Keras and TensorFlow: Label smoothing by explicitly updating your labels list Label smoothing using your loss function We’ll then train our own custom models using both methods and examine the results. What is label smoothing and why would we want to use it? When performing image classification tasks we typically think of labels as hard, binary assignments. For example, let’s consider the following image from the MNIST dataset: Figure 1: Label smoothing with Keras, TensorFlow, and Deep Learning is a regularization technique with a goal of enabling your model to generalize to new data better.
This digit is clearly a “7”, and if we were to write out the one-hot encoded label vector for this data point it would look like the following: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0] Notice how we’re performing hard label assignment here: all entries in the vector are 0 except for the 8th index (which corresponds to the digit 7) which is a 1. Hard label assignment is natural to us and maps to how our brains want to efficiently categorize and store information in neatly labeled and packaged boxes. For example, we would look at Figure 1 and say something like: “I’m sure that’s a 7. I’m going to label it a 7 and put it in the ‘7’ box.” It would feel awkward and unintuitive to say the following: “Well, I’m sure that’s a 7. But even though I’m 100% certain that it’s a 7, I’m going to put 90% of that 7 in the ‘7’ box and then divide the remaining 10% into all boxes just so my brain doesn’t overfit to what a ‘7’ looks like.” If we were to apply soft label assignment to our one-hot encoded vector above it would now look like this: [0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.91 0.01 0.01] Notice how summing the list of values equals 1, just like in the original one-hot encoded vector. This type of label assignment is called soft label assignment. Unlike hard label assignments where class labels are binary (i.e., positive for one class and a negative example for all other classes), soft label assignment allows: The positive class to have the largest probability While all other classes have a very small probability So, why go through all the trouble? The answer is that we don’t want our model to become too confident in its predictions.
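If you want to check the arithmetic behind that soft vector yourself, a few lines of NumPy will do it. This is only a sketch of the idea; we will implement the real helper function shortly:

import numpy as np

# hard one-hot label for the digit "7" (10 classes)
hard = np.zeros(10, dtype="float32")
hard[7] = 1.0

# shrink the positive class and spread the remainder uniformly
factor = 0.1
soft = hard * (1.0 - factor) + (factor / hard.shape[0])

print(soft)        # [0.01 0.01 ... 0.91 ... 0.01]
print(soft.sum())  # still sums to (approximately) 1.0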
By applying label smoothing we can lessen the confidence of the model and prevent it from descending into deep crevices of the loss landscape where overfitting occurs. For a mathematically motivated discussion of label smoothing, I would recommend reading the following article by Lei Mao. Additionally, be sure to read Müller et al.’s 2019 paper, When Does Label Smoothing Help? as well as He et al.’s Bag of Tricks for Image Classification with Convolutional Neural Networks for detailed studies on label smoothing. In the remainder of this tutorial, I’ll show you how to implement label smoothing with Keras and TensorFlow. Project structure Go ahead and grab today’s files from the “Downloads” section of today’s tutorial. Once you have extracted the files, you can use the tree command as shown to view the project structure: $ tree --dirsfirst . ├── pyimagesearch │   ├── __init__.py │   ├── learning_rate_schedulers.py │   └── minigooglenet.py ├── label_smoothing_func.py ├── label_smoothing_loss.py ├── plot_func.png └── plot_loss.png 1 directory, 7 files Inside the pyimagesearch module you’ll find two files: learning_rate_schedulers.py : Be sure to refer to Keras Learning Rate Schedulers and Decay, a previous PyImageSearch tutorial.
minigooglenet.py : MiniGoogLeNet is the CNN architecture we will utilize. Be sure to refer to my book, Deep Learning for Computer Vision with Python, for more details of the model architecture. We will not be covering the above implementations today and will instead focus on our two label smoothing methods: Method #1 uses label smoothing by explicitly updating your labels list in label_smoothing_func.py . Method #2 covers label smoothing using your TensorFlow/Keras loss function in label_smoothing_loss.py . Method #1: Label smoothing by explicitly updating your labels list The first label smoothing implementation we’ll be looking at directly modifies our labels after one-hot encoding — all we need to do is implement a simple custom function. Let’s get started. Open up the label_smoothing_func.py file in your project structure and insert the following code: # set the matplotlib backend so figures can be saved in the background import matplotlib matplotlib.use("Agg") # import the necessary packages from pyimagesearch.learning_rate_schedulers import PolynomialDecay from pyimagesearch.minigooglenet import MiniGoogLeNet from sklearn.metrics import classification_report from sklearn.preprocessing import LabelBinarizer from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.callbacks import LearningRateScheduler from tensorflow.keras.optimizers import SGD from tensorflow.keras.datasets import cifar10 import matplotlib.pyplot as plt import numpy as np import argparse Lines 2-16 import our packages, modules, classes, and functions. In particular, we’ll work with the scikit-learn LabelBinarizer (Line 9). The heart of Method #1 lies in the smooth_labels function: def smooth_labels(labels, factor=0.1): # smooth the labels labels *= (1 - factor) labels += (factor / labels.shape[1]) # returned the smoothed labels return labels Line 18 defines the smooth_labels function. The function accepts two parameters: labels: Contains one-hot encoded labels for all data points in our dataset.
factor: The optional “smoothing factor” is set to 10% by default. The remainder of the smooth_labels function is best explained with a two-step example. To start, let’s assume that the following one-hot encoded vector is supplied to our function: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0] Notice how we have a hard label assignment here — the true class labels is a 1 while all others are 0. Line 20 reduces our hard assignment label of 1 by the supplied factor amount. With factor=0.1, the operation on Line 20 yields the following vector: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9, 0.0, 0.0] Notice how the hard assignment of 1.0 has been dropped to 0.9. The next step is to apply a very small amount of confidence to the rest of the class labels in the vector. We accomplish this task by taking factor and dividing it by the total number of possible class labels. In our case, there are 10 possible class labels so when factor=0.1, we, therefore, have 0.1 / 10 = 0.01 — that value is added to our vector on Line 21, resulting in: [0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.91 0.01 0.01] Notice how the “incorrect” classes here have a very small amount of confidence. It doesn’t seem like much, but in practice, it can help our model from overfitting. Finally, Line 24 returns the smoothed labels to the calling function.
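To make the two-step example concrete, you could exercise smooth_labels on a single one-hot vector like this (remember that the function expects a 2D floating point array of shape (num_samples, num_classes)):

import numpy as np

# a single one-hot encoded label for class index 7 (10 classes total)
labels = np.zeros((1, 10), dtype="float32")
labels[0, 7] = 1.0

print("[INFO] before smoothing: {}".format(labels[0]))
labels = smooth_labels(labels, factor=0.1)
print("[INFO] after smoothing: {}".format(labels[0]))
print("[INFO] sum of entries: {}".format(labels[0].sum()))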
Note: The smooth_labels function in part comes from Chengwei’s article where they discuss the Bag of Tricks for Image Classification with Convolutional Neural Networks paper. Be sure to read the article if you’re interested in implementations from the paper. Let’s continue on with our implementation: # construct the argument parse and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-s", "--smoothing", type=float, default=0.1, help="amount of label smoothing to be applied") ap.add_argument("-p", "--plot", type=str, default="plot.png", help="path to output plot file") args = vars(ap.parse_args()) Our two command line arguments include: --smoothing : The smoothing factor (refer to the smooth_labels function and example above). --plot : The path to the output plot file. Let’s prepare our hyperparameters and data: # define the total number of epochs to train for, initial learning # rate, and batch size NUM_EPOCHS = 70 INIT_LR = 5e-3 BATCH_SIZE = 64 # initialize the label names for the CIFAR-10 dataset labelNames = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"] # load the training and testing data, converting the images from # integers to floats print("[INFO] loading CIFAR-10 data...") ((trainX, trainY), (testX, testY)) = cifar10.load_data() trainX = trainX.astype("float") testX = testX.astype("float") # apply mean subtraction to the data mean = np.mean(trainX, axis=0) trainX -= mean testX -= mean Lines 36-38 initialize three training hyperparameters including the total number of epochs to train for, initial learning rate, and batch size. Lines 41 and 42 then initialize our class labelNames for the CIFAR-10 dataset. Lines 47-49 handle loading CIFAR-10 dataset. Mean subtraction, a form of normalization covered in the Practitioner Bundle of Deep Learning for Computer Vision with Python, is applied to the data via Lines 52-54. Let’s apply label smoothing via Method #1: # convert the labels from integers to vectors, converting the data # type to floats so we can apply label smoothing lb = LabelBinarizer() trainY = lb.fit_transform(trainY) testY = lb.transform(testY) trainY = trainY.astype("float") testY = testY.astype("float") # apply label smoothing to the *training labels only* print("[INFO] smoothing amount: {}".format(args["smoothing"])) print("[INFO] before smoothing: {}".format(trainY[0])) trainY = smooth_labels(trainY, args["smoothing"]) print("[INFO] after smoothing: {}".format(trainY[0])) Lines 58-62 one-hot encode the labels and convert them to floats.
Line 67 applies label smoothing using our smooth_labels function. From here we’ll prepare data augmentation and our learning rate scheduler: # construct the image generator for data augmentation aug = ImageDataGenerator( width_shift_range=0.1, height_shift_range=0.1, horizontal_flip=True, fill_mode="nearest") # construct the learning rate scheduler callback schedule = PolynomialDecay(maxEpochs=NUM_EPOCHS, initAlpha=INIT_LR, power=1.0) callbacks = [LearningRateScheduler(schedule)] # initialize the optimizer and model print("[INFO] compiling model...") opt = SGD(lr=INIT_LR, momentum=0.9) model = MiniGoogLeNet.build(width=32, height=32, depth=3, classes=10) model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"]) # train the network print("[INFO] training network...") H = model.fit_generator( aug.flow(trainX, trainY, batch_size=BATCH_SIZE), validation_data=(testX, testY), steps_per_epoch=len(trainX) // BATCH_SIZE, epochs=NUM_EPOCHS, callbacks=callbacks, verbose=1) Lines 71-75 instantiate our data augmentation object. Lines 78-80 initialize learning rate decay via a callback that will be executed at the start of each epoch. To learn about creating your own custom Keras callbacks, be sure to refer to the Starter Bundle of Deep Learning for Computer Vision with Python. We then compile and train our model (Lines 84-97). Once the model is fully trained, we go ahead and generate a classification report as well as a training history plot: # evaluate the network print("[INFO] evaluating network...") predictions = model.predict(testX, batch_size=BATCH_SIZE) print(classification_report(testY.argmax(axis=1), predictions.argmax(axis=1), target_names=labelNames)) # construct a plot that plots and saves the training history N = np.arange(0, NUM_EPOCHS) plt.style.use("ggplot") plt.figure() plt.plot(N, H.history["loss"], label="train_loss") plt.plot(N, H.history["val_loss"], label="val_loss") plt.plot(N, H.history["accuracy"], label="train_acc") plt.plot(N, H.history["val_accuracy"], label="val_acc") plt.title("Training Loss and Accuracy") plt.xlabel("Epoch #") plt.ylabel("Loss/Accuracy") plt.legend(loc="lower left") plt.savefig(args["plot"]) Method #2: Label smoothing using your TensorFlow/Keras loss function Our second method to implement label smoothing utilizes Keras/TensorFlow’s CategoricalCrossentropy class directly. The benefit here is that we don’t need to implement any custom function — label smoothing can be applied on the fly when instantiating the CategoricalCrossentropy class with the label_smoothing parameter, like so: CategoricalCrossentropy(label_smoothing=0.1) Again, the benefit here is that we don’t need any custom implementation. The downside is that we don’t have access to the raw labels list which would be a problem if you need it to compute your own custom metrics when monitoring the training process. With all that said, let’s learn how to utilize the CategoricalCrossentropy for label smoothing. Our implementation is very similar to the previous section but with a few exceptions — I’ll be calling out the differences along the way.
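If you are skeptical that the two methods really perform the same transformation, a tiny experiment makes it easy to check. This is just a sketch (assuming TensorFlow 2.x) that compares the loss-level smoothing against smoothing the labels yourself:

import numpy as np
import tensorflow as tf

y_true = np.array([[0.0, 1.0, 0.0]], dtype="float32")
y_pred = np.array([[0.1, 0.7, 0.2]], dtype="float32")

# Method #2: let the loss function smooth the labels for us
lossA = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)(
    y_true, y_pred)

# Method #1: smooth the labels ourselves, then use a plain loss
smoothing = 0.1
y_smooth = y_true * (1 - smoothing) + (smoothing / y_true.shape[1])
lossB = tf.keras.losses.CategoricalCrossentropy()(y_smooth, y_pred)

# the two values should agree (up to floating point error)
print(float(lossA), float(lossB))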
For a detailed review of our training script, refer to the previous section. Open up the label_smoothing_loss.py file in your directory structure and we’ll get started: # set the matplotlib backend so figures can be saved in the background import matplotlib matplotlib.use("Agg") # import the necessary packages from pyimagesearch.learning_rate_schedulers import PolynomialDecay from pyimagesearch.minigooglenet import MiniGoogLeNet from sklearn.metrics import classification_report from sklearn.preprocessing import LabelBinarizer from tensorflow.keras.losses import CategoricalCrossentropy from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.callbacks import LearningRateScheduler from tensorflow.keras.optimizers import SGD from tensorflow.keras.datasets import cifar10 import matplotlib.pyplot as plt import numpy as np import argparse # construct the argument parse and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-s", "--smoothing", type=float, default=0.1, help="amount of label smoothing to be applied") ap.add_argument("-p", "--plot", type=str, default="plot.png", help="path to output plot file") args = vars(ap.parse_args()) Lines 2-17 handle our imports. Most notably Line 10 imports CategoricalCrossentropy. Our --smoothing and --plot command line arguments are the same as in Method #1. Our next codeblock is nearly the same as Method #1 with the exception of the very last part: # define the total number of epochs to train for initial learning # rate, and batch size NUM_EPOCHS = 2 INIT_LR = 5e-3 BATCH_SIZE = 64 # initialize the label names for the CIFAR-10 dataset labelNames = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"] # load the training and testing data, converting the images from # integers to floats print("[INFO] loading CIFAR-10 data...") ((trainX, trainY), (testX, testY)) = cifar10.load_data() trainX = trainX.astype("float") testX = testX.astype("float") # apply mean subtraction to the data mean = np.mean(trainX, axis=0) trainX -= mean testX -= mean # convert the labels from integers to vectors lb = LabelBinarizer() trainY = lb.fit_transform(trainY) testY = lb.transform(testY) Here we: Initialize training hyperparameters (Lines 29-31). Initialize our CIFAR-10 class names (Lines 34 and 35). Load CIFAR-10 data (Lines 40-42). Apply mean subtraction (Lines 45-47). Each of those steps is the same as Method #1.
Lines 50-52 one-hot encode labels with a caveat compared to our previous method. The CategoricalCrossentropy class will take care of label smoothing for us, so there is no need to directly modify the trainY and testY lists, as we did previously. Let’s instantiate our data augmentation and learning rate scheduler callbacks: # construct the image generator for data augmentation aug = ImageDataGenerator( width_shift_range=0.1, height_shift_range=0.1, horizontal_flip=True, fill_mode="nearest") # construct the learning rate scheduler callback schedule = PolynomialDecay(maxEpochs=NUM_EPOCHS, initAlpha=INIT_LR, power=1.0) callbacks = [LearningRateScheduler(schedule)] And from there we will initialize our loss with the label smoothing parameter: # initialize the optimizer and loss print("[INFO] smoothing amount: {}".format(args["smoothing"])) opt = SGD(lr=INIT_LR, momentum=0.9) loss = CategoricalCrossentropy(label_smoothing=args["smoothing"]) print("[INFO] compiling model...") model = MiniGoogLeNet.build(width=32, height=32, depth=3, classes=10) model.compile(loss=loss, optimizer=opt, metrics=["accuracy"]) Lines 84 and 85 initialize our optimizer and loss function. The heart of Method #2 is here in the loss method with label smoothing: Notice how we’re passing in the label_smoothing parameter to the CategoricalCrossentropy class. This class will automatically apply label smoothing for us. We then compile the model, passing in our loss with label smoothing. To wrap up, we’ll train our model, evaluate it, and plot the training history: # train the network print("[INFO] training network...") H = model.fit_generator( aug.flow(trainX, trainY, batch_size=BATCH_SIZE), validation_data=(testX, testY), steps_per_epoch=len(trainX) // BATCH_SIZE, epochs=NUM_EPOCHS, callbacks=callbacks, verbose=1) # evaluate the network print("[INFO] evaluating network...") predictions = model.predict(testX, batch_size=BATCH_SIZE) print(classification_report(testY.argmax(axis=1), predictions.argmax(axis=1), target_names=labelNames)) # construct a plot that plots and saves the training history N = np.arange(0, NUM_EPOCHS) plt.style.use("ggplot") plt.figure() plt.plot(N, H.history["loss"], label="train_loss") plt.plot(N, H.history["val_loss"], label="val_loss") plt.plot(N, H.history["accuracy"], label="train_acc") plt.plot(N, H.history["val_accuracy"], label="val_acc") plt.title("Training Loss and Accuracy") plt.xlabel("Epoch #") plt.ylabel("Loss/Accuracy") plt.legend(loc="lower left") plt.savefig(args["plot"]) Label smoothing results Now that we’ve implemented our label smoothing scripts, let’s put them to work. Start by using the “Downloads” section of this tutorial to download the source code. From there, open up a terminal and execute the following command to apply label smoothing using our custom smooth_labels function: $ python label_smoothing_func.py --smoothing 0.1 [INFO] loading CIFAR-10 data... [INFO] smoothing amount: 0.1 [INFO] before smoothing: [0. 0.
[INFO] before smoothing: [0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
[INFO] after smoothing: [0.01 0.01 0.01 0.01 0.01 0.01 0.91 0.01 0.01 0.01]
[INFO] compiling model...
[INFO] training network...
Epoch 1/70
781/781 [==============================] - 115s 147ms/step - loss: 1.6987 - accuracy: 0.4482 - val_loss: 1.2606 - val_accuracy: 0.5488
Epoch 2/70
781/781 [==============================] - 98s 125ms/step - loss: 1.3924 - accuracy: 0.6066 - val_loss: 1.4393 - val_accuracy: 0.5419
Epoch 3/70
781/781 [==============================] - 96s 123ms/step - loss: 1.2696 - accuracy: 0.6680 - val_loss: 1.0286 - val_accuracy: 0.6458
Epoch 4/70
781/781 [==============================] - 96s 123ms/step - loss: 1.1806 - accuracy: 0.7133 - val_loss: 0.8514 - val_accuracy: 0.7185
Epoch 5/70
781/781 [==============================] - 95s 122ms/step - loss: 1.1209 - accuracy: 0.7440 - val_loss: 0.8533 - val_accuracy: 0.7155
...
Epoch 66/70
781/781 [==============================] - 94s 120ms/step - loss: 0.6262 - accuracy: 0.9765 - val_loss: 0.3728 - val_accuracy: 0.8910
Epoch 67/70
781/781 [==============================] - 94s 120ms/step - loss: 0.6267 - accuracy: 0.9756 - val_loss: 0.3806 - val_accuracy: 0.8924
Epoch 68/70
781/781 [==============================] - 95s 121ms/step - loss: 0.6245 - accuracy: 0.9775 - val_loss: 0.3659 - val_accuracy: 0.8943
Epoch 69/70
781/781 [==============================] - 94s 120ms/step - loss: 0.6245 - accuracy: 0.9773 - val_loss: 0.3657 - val_accuracy: 0.8936
Epoch 70/70
781/781 [==============================] - 94s 120ms/step - loss: 0.6234 - accuracy: 0.9778 - val_loss: 0.3649 - val_accuracy: 0.8938
[INFO] evaluating network...
              precision    recall  f1-score   support

    airplane       0.91      0.90      0.90      1000
  automobile       0.94      0.97      0.95      1000
        bird       0.84      0.86      0.85      1000
         cat       0.80      0.78      0.79      1000
        deer       0.90      0.87      0.89      1000
         dog       0.86      0.82      0.84      1000
        frog       0.88      0.95      0.91      1000
       horse       0.94      0.92      0.93      1000
        ship       0.94      0.94      0.94      1000
       truck       0.93      0.94      0.94      1000

    accuracy                           0.89     10000
   macro avg       0.89      0.89      0.89     10000
weighted avg       0.89      0.89      0.89     10000

Figure 2: The results of training using our Method #1 of Label smoothing with Keras, TensorFlow, and Deep Learning.

Here you can see we are obtaining ~89% accuracy on our testing set.
But what's really interesting to study is our training history plot in Figure 2. Notice that:
- Validation loss is significantly lower than the training loss.
- Yet the training accuracy is better than the validation accuracy.
That's quite strange behavior — typically, lower loss correlates with higher accuracy. How is it possible that the validation loss is lower than the training loss, yet the training accuracy is better than the validation accuracy? The answer lies in label smoothing — keep in mind that we only smoothed the training labels. The validation labels were not smoothed. Thus, you can think of the training labels as having additional "noise" in them. The ultimate goal of applying regularization when training our deep neural networks is to reduce overfitting and increase the ability of our model to generalize. Typically we achieve this goal by sacrificing training loss/accuracy during training time in hopes of a better generalizable model — that's the exact behavior we're seeing here.
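If you want to convince yourself of this numerically, note that the categorical cross-entropy against a fixed target can never drop below the entropy of that target. The short NumPy sketch below is standalone and purely illustrative (it simply reuses the smoothing formula from Method #1), and it shows why the smoothed training loss bottoms out well above zero while the unsmoothed validation loss is free to fall much lower:

# demonstrate the loss floor introduced by label smoothing
import numpy as np

K = 10       # number of CIFAR-10 classes
eps = 0.1    # label smoothing factor used above

# a hard one-hot label and its smoothed counterpart
hard = np.zeros(K)
hard[6] = 1.0
smooth = hard * (1.0 - eps) + (eps / K)

def entropy(q):
    # cross-entropy against a fixed target q is minimized when the
    # prediction equals q, and that minimum is the entropy of q
    q = q[q > 0]
    return float(-np.sum(q * np.log(q)))

print(smooth)           # [0.01 0.01 ... 0.91 ... 0.01], matching the log above
print(entropy(hard))    # 0.0  -> hard labels can be driven toward zero loss
print(entropy(smooth))  # ~0.5 -> a floor under the smoothed training loss

With eps=0.1 and 10 classes, that floor is roughly 0.5, which is consistent with the training loss above flattening out around 0.62 while the validation loss (computed on unsmoothed labels) drops to roughly 0.36.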
Next, let's use Keras/TensorFlow's CategoricalCrossentropy class when performing label smoothing:

$ python label_smoothing_loss.py --smoothing 0.1
[INFO] loading CIFAR-10 data...
[INFO] smoothing amount: 0.1
[INFO] compiling model...
[INFO] training network...
Epoch 1/70
781/781 [==============================] - 101s 130ms/step - loss: 1.6945 - accuracy: 0.4531 - val_loss: 1.4349 - val_accuracy: 0.5795
Epoch 2/70
781/781 [==============================] - 99s 127ms/step - loss: 1.3799 - accuracy: 0.6143 - val_loss: 1.3300 - val_accuracy: 0.6396
Epoch 3/70
781/781 [==============================] - 99s 126ms/step - loss: 1.2594 - accuracy: 0.6748 - val_loss: 1.3536 - val_accuracy: 0.6543
Epoch 4/70
781/781 [==============================] - 99s 126ms/step - loss: 1.1760 - accuracy: 0.7136 - val_loss: 1.2995 - val_accuracy: 0.6633
Epoch 5/70
781/781 [==============================] - 99s 127ms/step - loss: 1.1214 - accuracy: 0.7428 - val_loss: 1.1175 - val_accuracy: 0.7488
...
Epoch 66/70
781/781 [==============================] - 97s 125ms/step - loss: 0.6296 - accuracy: 0.9762 - val_loss: 0.7729 - val_accuracy: 0.8984
Epoch 67/70
781/781 [==============================] - 131s 168ms/step - loss: 0.6303 - accuracy: 0.9753 - val_loss: 0.7757 - val_accuracy: 0.8986
Epoch 68/70
781/781 [==============================] - 98s 125ms/step - loss: 0.6278 - accuracy: 0.9765 - val_loss: 0.7711 - val_accuracy: 0.9001
Epoch 69/70
781/781 [==============================] - 97s 124ms/step - loss: 0.6273 - accuracy: 0.9764 - val_loss: 0.7722 - val_accuracy: 0.9007
Epoch 70/70
781/781 [==============================] - 98s 126ms/step - loss: 0.6256 - accuracy: 0.9781 - val_loss: 0.7712 - val_accuracy: 0.9012
[INFO] evaluating network...
              precision    recall  f1-score   support

    airplane       0.90      0.93      0.91      1000
  automobile       0.94      0.97      0.96      1000
        bird       0.88      0.85      0.87      1000
         cat       0.83      0.78      0.81      1000
        deer       0.90      0.88      0.89      1000
         dog       0.87      0.84      0.85      1000
        frog       0.88      0.96      0.92      1000
       horse       0.93      0.92      0.92      1000
        ship       0.95      0.95      0.95      1000
       truck       0.94      0.94      0.94      1000

    accuracy                           0.90     10000
   macro avg       0.90      0.90      0.90     10000
weighted avg       0.90      0.90      0.90     10000

Figure 3: The results of training using our Method #2 of Label smoothing with Keras, TensorFlow, and Deep Learning. Here we are obtaining ~90% accuracy, but that does not mean that the CategoricalCrossentropy method is "better" than the smooth_labels technique — for all intents and purposes these results are "equal" and would follow the same distribution if the results were averaged over multiple runs.

Figure 3 displays the training history for the loss-based label smoothing method. Again, note that our validation loss is lower than our training loss yet our training accuracy is higher than our validation accuracy — this is totally normal behavior when using label smoothing, so don't be alarmed by it.

When should I apply label smoothing?

I recommend applying label smoothing when you are having trouble getting your model to generalize and/or your model is overfitting to your training set. When those situations happen, we need to apply regularization techniques. Label smoothing is just one type of regularization, however. Other types of regularization include:
- Dropout
- L1, L2, etc. weight decay
- Data augmentation
- Decreasing model capacity
You can mix and match these methods to combat overfitting and increase the ability of your model to generalize.
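The Figure 3 caption above notes that, for all intents and purposes, the two methods produce equivalent results. If you would like to verify that equivalence directly, here is a small sanity check (a sketch assuming TensorFlow 2.x; the label and prediction vectors are made up for illustration) showing that passing label_smoothing to the loss yields the same value as smoothing the labels yourself and using a plain categorical cross-entropy:

# sanity check: loss-level smoothing vs. manually smoothed labels
import tensorflow as tf

eps = 0.1

# a single made-up one-hot label and prediction (10 classes, sums to 1)
y_true = tf.constant([[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.]])
y_pred = tf.constant([[0.02, 0.02, 0.02, 0.02, 0.02, 0.02,
                       0.82, 0.02, 0.02, 0.02]])

# Method #2: let the loss apply label smoothing
loss_a = tf.keras.losses.CategoricalCrossentropy(label_smoothing=eps)(y_true, y_pred)

# Method #1: smooth the labels ourselves, then use a plain cross-entropy
y_smooth = y_true * (1.0 - eps) + (eps / 10.0)
loss_b = tf.keras.losses.CategoricalCrossentropy()(y_smooth, y_pred)

# the two values should match
print(float(loss_a), float(loss_b))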
Summary

In this tutorial you learned two methods to apply label smoothing using Keras, TensorFlow, and Deep Learning:
- Method #1: Label smoothing by updating your label lists using a custom label parsing function
- Method #2: Label smoothing using your loss function in TensorFlow/Keras
You can think of label smoothing as a form of regularization that improves the ability of your model to generalize to testing data, but perhaps at the expense of accuracy on your training set — typically this tradeoff is well worth it.
I normally recommend Method #1 of label smoothing when either:
- Your entire dataset fits into memory and you can smooth all labels in a single function call.
- You need direct access to your label variables.
Otherwise, Method #2 tends to be easier to utilize as (1) it's baked right into Keras/TensorFlow and (2) does not require any hand-implemented functions.
Regardless of which method you choose, they both do the same thing — smooth your labels, thereby attempting to improve the ability of your model to generalize. I hope you enjoyed the tutorial! To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), just enter your email address in the form below!
In this tutorial you will learn how to use the Movidius NCS to speed up face detection and face recognition on the Raspberry Pi by over 243%! If you’ve ever tried to perform deep learning-based face recognition on a Raspberry Pi, you may have noticed significant lag. Is there a problem with the face detection or face recognition models themselves? No, absolutely not. The problem is that your Raspberry Pi CPU simply can’t process the frames quickly enough. You need more computational horsepower. As the title to this tutorial suggests, we’re going to pair our Raspberry Pi with the Intel Movidius Neural Compute Stick coprocessor. The NCS Myriad processor will handle the more demanding face detection while the RPi CPU will handle extracting face embeddings. The RPi CPU will also handle the final machine learning classification using the results from the face embeddings. The process of offloading the most expensive deep learning task to the Movidius NCS frees up the Raspberry Pi CPU to handle the other tasks.
Each processor is then handling an appropriate load. We are certainly pushing our Raspberry Pi to the limit, but we don’t have much choice short of using a completely different single board computer such as an NVIDIA Jetson Nano. By the end of this tutorial, you’ll have a fully functioning face recognition script running at 6.29 FPS on the RPi and Movidius NCS, a 243% speedup compared to using just the RPi alone! Note: This tutorial includes reposted content from my new Raspberry Pi for Computer Vision book (Chapter 14 of the Hacker Bundle). You can learn more and pick up your copy here. To learn how to perform face recognition using the Raspberry Pi and Movidius Neural Compute Stick, just keep reading!

Raspberry Pi and Movidius NCS Face Recognition

In this tutorial, we will learn how to work with the Movidius NCS for face recognition. First, you’ll need an understanding of deep learning face recognition using deep metric learning and how to create a face recognition dataset. Without understanding these two concepts, you may feel lost reading this tutorial.
Prior to reading this tutorial, you should read any of the following:
- Face Recognition with OpenCV, Python, and deep learning, my first blog post on deep learning face recognition.
- OpenCV Face Recognition, my second blog post on deep learning face recognition using a model that comes with OpenCV. This article also includes a section entitled “Drawbacks, limitations, and how to obtain higher face recognition accuracy” that I highly recommend reading.
- Raspberry Pi for Computer Vision’s “Face Recognition on the Raspberry Pi” (Chapter 5 of the Hacker Bundle).
Additionally, you must read either of the following:
- How to build a custom face recognition dataset, a tutorial explaining three methods to build your face recognition dataset.
- Raspberry Pi for Computer Vision’s “Step #1: Gather your dataset” (Chapter 5, Section 5.4.2 of the Hacker Bundle).
Upon successfully reading and understanding those resources, you will be prepared for Raspberry Pi and Movidius NCS face recognition. In the remainder of this tutorial, we’ll begin by setting up our Raspberry Pi with OpenVINO, including installing the necessary software. From there, we’ll review our project structure, ensuring we are familiar with the layout of today’s downloadable zip. We’ll then review the process of extracting face embeddings with the NCS. We’ll train a machine learning model on top of the embeddings data.
Finally, we’ll develop a quick demo script to ensure that our faces are being recognized properly. Let’s dive in.

Configuring your Raspberry Pi + OpenVINO environment

Figure 1: Configuring OpenVINO on your Raspberry Pi for face recognition with the Movidius NCS.

This tutorial requires a Raspberry Pi (3B+ or 4B is recommended) and Movidius NCS2 (or higher once faster versions are released in the future). Lower Raspberry Pi and NCS models may struggle to keep up. Another option is to use a capable laptop/desktop without OpenVINO altogether. Configuring your Raspberry Pi with the Intel Movidius NCS for this project is admittedly challenging. I suggest you (1) pick up a copy of Raspberry Pi for Computer Vision, and (2) flash the included pre-configured .img to your microSD. The .img that comes included with the book is worth its weight in gold as it will save you countless hours of toiling and frustration. For the stubborn few who wish to configure their Raspberry Pi + OpenVINO on their own, here is a brief guide:

Head to my BusterOS install guide and follow all instructions to create an environment named cv.
The Raspberry Pi 4B model (either 1GB, 2GB, or 4GB) is recommended.

Head to my OpenVINO installation guide and create a 2nd environment named openvino. Be sure to use OpenVINO 4.1.1 as 4.1.2 has issues.

At this point, your RPi will have both a normal OpenCV environment as well as an OpenVINO-OpenCV environment. You will use the openvino environment for this tutorial. Now, simply plug your NCS2 into a blue USB 3.0 port (the RPi 4B has USB 3.0 for maximum speed) and start your environment using either of the following methods:

Option A: Use the shell script on my Pre-configured Raspbian .img (the same shell script is described in the “Recommended: Create a shell script for starting your OpenVINO environment” section of my OpenVINO installation guide). From here on, you can activate your OpenVINO environment with one simple command (as opposed to the two commands in Option B below):

$ source ~/start_openvino.sh
Starting Python 3.7 with OpenCV-OpenVINO 4.1.1 bindings...

Option B: The one-two punch method. Open a terminal and perform the following:

$ workon openvino
$ source ~/openvino/bin/setupvars.sh

The first command activates our OpenVINO virtual environment. The second command sets up the Movidius NCS with OpenVINO (and is very important). From there we fire up the Python 3 binary in the environment and import OpenCV.
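As a quick, purely illustrative sanity check, you can confirm that the OpenVINO build of OpenCV is the one your environment picked up before moving on:

# run inside the activated openvino environment
import cv2

# the exact string depends on your install, but the OpenVINO build
# typically reports something like "4.1.1-openvino"
print(cv2.__version__)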
Both Option A and Option B assume that you either are using my Pre-configured Raspbian .img or that you followed my OpenVINO installation guide and installed OpenVINO with your Raspberry Pi on your own.

Caveats:
- Some versions of OpenVINO struggle to read .mp4 videos. This is a known bug that PyImageSearch has reported to the Intel team. Our preconfigured .img includes a fix — Abhishek Thanki edited the source code and compiled OpenVINO from source. This blog post is long enough as is, so I cannot include the compile-from-source instructions. If you encounter this issue please encourage Intel to fix the problem, and either (A) compile from source using our customer portal instructions, or (B) pick up a copy of Raspberry Pi for Computer Vision and use the pre-configured .img.
- We will add to this list if we discover other caveats.

Project Structure

Go ahead and grab today’s .zip from the “Downloads” section of this blog post and extract the files. Our project is organized in the following manner:

|-- dataset
|   |-- abhishek
|   |-- adrian
|   |-- dave
|   |-- mcCartney
|   |-- sayak
|   |-- unknown
|-- face_detection_model
|   |-- deploy.prototxt
|   |-- res10_300x300_ssd_iter_140000.caffemodel
|-- face_embedding_model
|   |-- openface_nn4.small2.v1.t7
|-- output
|   |-- embeddings.pickle
|   |-- le.pickle
|   |-- recognizer.pickle
|-- setup.sh
|-- extract_embeddings.py
|-- train_model.py
|-- recognize_video.py

An example 5-person dataset/ is included. Each subdirectory contains 20 images for the respective person.
Our face detector will detect/localize a face in the image to be recognized. The pre-trained Caffe face detector files (provided by OpenCV) are included inside the face_detection_model/ directory. Be sure to refer to this deep learning face detection blog post to learn more about the detector and how it can be put to use. We will extract face embeddings with a pre-trained OpenFace PyTorch model included in the face_embedding_model/ directory. The openface_nn4.small2.v1.t7 file was trained by the team at Carnegie Mellon University as part of the OpenFace project. When we execute extract_embeddings.py, two pickle files will be generated. Both embeddings.pickle and le.pickle will be stored inside of the output/ directory if you so choose. The embeddings consist of a 128-d vector for each face in the dataset. We’ll then train a Support Vector Machine (SVM) model on top of the embeddings by executing the train_model.py script. The result of training our SVM will be serialized to recognizer.pickle in the output/ directory.
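To make that training step more concrete, here is a minimal sketch of what a script like train_model.py needs to do. It is illustrative only: it assumes embeddings.pickle stores a dictionary with "embeddings" and "names" keys, so refer to the actual script in the downloads for the real implementation.

# illustrative sketch: fit an SVM on pre-computed 128-d face embeddings
import pickle
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC

# assumption: embeddings.pickle holds {"embeddings": [...], "names": [...]}
data = pickle.loads(open("output/embeddings.pickle", "rb").read())

# encode the person names as integer labels
le = LabelEncoder()
labels = le.fit_transform(data["names"])

# fit an SVM on the 128-d embedding vectors
recognizer = SVC(C=1.0, kernel="linear", probability=True)
recognizer.fit(data["embeddings"], labels)

# serialize the classifier and label encoder for recognize_video.py
with open("output/recognizer.pickle", "wb") as f:
    f.write(pickle.dumps(recognizer))
with open("output/le.pickle", "wb") as f:
    f.write(pickle.dumps(le))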
Note: If you choose to use your own dataset (instead of the one I have supplied with the downloads), you should delete the files included in the output/ directory and generate new files associated with your own face dataset. The recognize_video.py script simply activates your camera and detects + recognizes faces in each frame.

Our Environment Setup Script

Our Movidius face recognition system will not work properly unless an additional system environment variable, OPENCV_DNN_IE_VPU_TYPE, is set. Be sure to set this environment variable in addition to starting your virtual environment. This may change in future revisions of OpenVINO, but for now, a shell script is provided in the project associated with this tutorial. Open up setup.sh and inspect the script:

#!/bin/sh

export OPENCV_DNN_IE_VPU_TYPE=Myriad2

The “shebang” (#!) on Line 1 indicates that this script is executable. Line 3 sets the environment variable using the export command. You could, of course, manually type the command in your terminal, but this shell script alleviates you from having to memorize the variable name and setting.
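To tie these pieces together, here is an illustrative sketch (not the full recognize_video.py) of how the environment variable and the NCS come into play when loading the face detector with OpenCV's dnn module. The file paths come from the project structure above; the backend and target calls are the standard way an OpenVINO build of OpenCV routes inference to a Myriad device, though the actual script in the downloads may differ in its details.

# illustrative sketch: point the face detector at the Movidius NCS
import os
import cv2

# confirm the variable exported by setup.sh is visible to this process
if os.environ.get("OPENCV_DNN_IE_VPU_TYPE") != "Myriad2":
    print("[WARN] did you source setup.sh before running this script?")

# load the Caffe face detector included in face_detection_model/
detector = cv2.dnn.readNetFromCaffe(
    "face_detection_model/deploy.prototxt",
    "face_detection_model/res10_300x300_ssd_iter_140000.caffemodel")

# route the detector's inference through the Inference Engine backend
# and onto the Myriad VPU (i.e., the Movidius NCS)
detector.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
detector.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)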