https://pyimagesearch.com/2021/03/15/mixing-normal-images-and-adversarial-images-when-training-cnns/
If you intend to follow this tutorial, I suggest you take the time to configure your deep learning development environment. You can utilize either of these two guides to install TensorFlow and Keras on your system: How to install TensorFlow 2.0 on UbuntuHow to install TensorFlow 2.0 on macOS Either tutorial will help you configure your system with all the necessary software for this blog post in a convenient Python virtual environment. Having problems configuring your development environment? Figure 5: Having trouble configuring your dev environment? Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch University — you’ll be up and running with this tutorial in a matter of minutes. All that said, are you: Short on time?Learning on your employer’s administratively locked system?Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?Ready to run the code right now on your Windows, macOS, or Linux systems? Then join PyImageSearch University today! Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides that are pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required.
And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux! Project structure Let’s start this tutorial by reviewing our project directory structure. Use the “Downloads” section of this guide to retrieve the source code. You’ll then be presented with the following directory: $ tree . --dirsfirst . ├── pyimagesearch │ ├── __init__.py │ ├── datagen.py │ ├── fgsm.py │ └── simplecnn.py └── train_mixed_adversarial_defense.py 1 directory, 5 files Our directory structure is essentially identical to last week’s tutorial on Defending against adversarial image attacks with Keras and TensorFlow. The primary difference is that: We’re adding a new function to our datagen.py file to handle mixing both training images and on-the-fly generated adversarial images at the same time. Our driver training script, train_mixed_adversarial_defense.py, has a few additional bells and whistles to handle mixed training. If you haven’t yet, I strongly encourage you to read the previous two tutorials in this series: Adversarial attacks with FGSM (Fast Gradient Sign Method)Defending against adversarial image attacks with Keras and TensorFlow They are considered required reading before you continue! Our basic CNN Our CNN architecture can be found inside the simplecnn.py file in our project structure.
I’ve already reviewed this model definition in detail during our Fast Gradient Sign Method tutorial, so I’m going to defer a complete explanation of the code to that guide. That said, I’ve included the full implementation of SimpleCNN for you to review below: # import the necessary packages from tensorflow.keras.models import Sequential from tensorflow.keras.layers import BatchNormalization from tensorflow.keras.layers import Conv2D from tensorflow.keras.layers import Activation from tensorflow.keras.layers import Flatten from tensorflow.keras.layers import Dropout from tensorflow.keras.layers import Dense Lines 2-8 import our required Python packages. We can then create the SimpleCNN architecture: class SimpleCNN: @staticmethod def build(width, height, depth, classes): # initialize the model along with the input shape model = Sequential() inputShape = (height, width, depth) chanDim = -1 # first CONV => RELU => BN layer set model.add(Conv2D(32, (3, 3), strides=(2, 2), padding="same", input_shape=inputShape)) model.add(Activation("relu")) model.add(BatchNormalization(axis=chanDim)) # second CONV => RELU => BN layer set model.add(Conv2D(64, (3, 3), strides=(2, 2), padding="same")) model.add(Activation("relu")) model.add(BatchNormalization(axis=chanDim)) # first (and only) set of FC => RELU layers model.add(Flatten()) model.add(Dense(128)) model.add(Activation("relu")) model.add(BatchNormalization()) model.add(Dropout(0.5)) # softmax classifier model.add(Dense(classes)) model.add(Activation("softmax")) # return the constructed network architecture return model The salient points of this architecture include: A first set of CONV => RELU => BN layers. The CONV layer learns a total of 32 3×3 filters with 2×2 strided convolution to reduce volume size. A second set of CONV => RELU => BN layers. Same as above, but this time the CONV layer learns 64 filters. A set of dense/fully-connected layers. The output of which is our softmax classifier used for returning probabilities for each class label. Using FGSM to generate adversarial images We use the Fast Gradient Sign Method (FGSM) to generate image adversaries. We’ve covered this implementation in detail earlier in this series, so you can refer there for a complete review of the code.
That said, if you open the fgsm.py file in your project directory structure, you will find the following code: # import the necessary packages from tensorflow.keras.losses import MSE import tensorflow as tf def generate_image_adversary(model, image, label, eps=2 / 255.0): # cast the image image = tf.cast(image, tf.float32) # record our gradients with tf.GradientTape() as tape: # explicitly indicate that our image should be tracked for # gradient updates tape.watch(image) # use our model to make predictions on the input image and # then compute the loss pred = model(image) loss = MSE(label, pred) # calculate the gradients of loss with respect to the image, then # compute the sign of the gradient gradient = tape.gradient(loss, image) signedGrad = tf.sign(gradient) # construct the image adversary adversary = (image + (signedGrad * eps)).numpy() # return the image adversary to the calling function return adversary At a high level, this code is:
Accepting a model that we want to “fool” into making incorrect predictions
Taking the model and using it to make predictions on the input image
Computing the loss of the model based on the ground-truth class label
Computing the gradients of the loss with respect to the image
Taking the sign of the gradient (either -1, 0, or 1) and then using the signed gradient to create the image adversary
The end result will be an output image that looks visually identical to the original but that the CNN will classify incorrectly. Again, you can refer to our FGSM guide for a detailed review of the code. Updating our data generator to mix normal images with adversarial images on the fly In this section, we are going to implement two functions:
generate_adversarial_batch: Generates a total of N adversarial images using our FGSM implementation.
generate_mixed_adverserial_batch: Generates a batch of N images, half of which are normal images and the other half adversarial.
We implemented the first method last week in our tutorial on Defending against adversarial image attacks with Keras and TensorFlow. The second function is brand new and exclusive to this tutorial. Let's get started with our data batch generators. Open the datagen.py file in our project structure and insert the following code: # import the necessary packages from .fgsm import generate_image_adversary from sklearn.utils import shuffle import numpy as np Lines 2-4 handle our required imports. We import generate_image_adversary from our fgsm module so that we can generate image adversaries, and the shuffle function so that we can jointly shuffle images and labels together.
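As a quick aside, if you want to see what generate_image_adversary does on a single sample before we wrap it in a generator, the following sketch (my own example, not part of the downloadable code) perturbs one MNIST test image and compares the model's prediction before and after the attack. It assumes model, testX, and testY have already been loaded and preprocessed exactly as in the training script later in this post:

# grab one test image and its one-hot label, generate the adversary,
# and compare the model's predictions on the clean vs. perturbed input
import numpy as np

image = testX[0].reshape(1, 28, 28, 1)
label = testY[0].reshape(1, 10)
adversary = generate_image_adversary(model, image, label, eps=0.1)

print("clean prediction:      ", np.argmax(model.predict(image)))
print("adversarial prediction:", np.argmax(model.predict(adversary)))
print("ground-truth label:    ", np.argmax(label))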
Below is the definition of our generate_adversarial_batch function, which we implemented last week: def generate_adversarial_batch(model, total, images, labels, dims, eps=0.01): # unpack the image dimensions into convenience variables (h, w, c) = dims # we're constructing a data generator here so we need to loop # indefinitely while True: # initialize our perturbed images and labels perturbImages = [] perturbLabels = [] # randomly sample indexes (without replacement) from the # input data idxs = np.random.choice(range(0, len(images)), size=total, replace=False) # loop over the indexes for i in idxs: # grab the current image and label image = images[i] label = labels[i] # generate an adversarial image adversary = generate_image_adversary(model, image.reshape(1, h, w, c), label, eps=eps) # update our perturbed images and labels lists perturbImages.append(adversary.reshape(h, w, c)) perturbLabels.append(label) # yield the perturbed images and labels yield (np.array(perturbImages), np.array(perturbLabels)) Since we discussed this function in detail in our previous post, I'm going to defer a complete discussion to that guide, but at a high level, you can see that this function:
Randomly samples N images (total) from our input images set (typically either our training or testing set)
Uses the FGSM to generate adversarial examples from our randomly sampled images
Rounds out by returning the adversarial images and labels to the calling function
The big takeaway here is that the generate_adversarial_batch method returns exclusively adversarial images. However, the goal of this post is mixed training containing both normal images and adversarial images. Therefore, we need to implement a second helper function: def generate_mixed_adverserial_batch(model, total, images, labels, dims, eps=0.01, split=0.5): # unpack the image dimensions into convenience variables (h, w, c) = dims # compute the total number of training images to keep along with # the number of adversarial images to generate totalNormal = int(total * split) totalAdv = int(total * (1 - split)) As the name suggests, generate_mixed_adverserial_batch creates a mix of both normal images and adversarial images. This method has several arguments, including:
model: The CNN we're training and using to generate adversarial images
total: The total number of images we want in each batch
images: The input set of images (typically either our training or testing split)
labels: The corresponding class labels belonging to the images
dims: The spatial dimensions of the input images
eps: A small epsilon value used for generating the adversarial images
split: Percentage of normal images vs. adversarial images; here, we are doing a 50/50 split
From there, we unpack the dims tuple into our height, width, and number of channels (Line 43). We also derive the total number of training images and the number of adversarial images based on our split (Lines 47 and 48).
Let’s now dive into the data generator itself: # we're constructing a data generator so we need to loop # indefinitely while True: # randomly sample indexes (without replacement) from the # input data and then use those indexes to sample our normal # images and labels idxs = np.random.choice(range(0, len(images)), size=totalNormal, replace=False) mixedImages = images[idxs] mixedLabels = labels[idxs] # again, randomly sample indexes from the input data, this # time to construct our adversarial images idxs = np.random.choice(range(0, len(images)), size=totalAdv, replace=False) Line 52 starts an infinite loop that will continue until the training process is complete. We then randomly sample a total of totalNormal images from our input set (Lines 56-59). Next, Lines 63 and 64 perform a second round of random sampling, this time for adversarial image generation. We can now loop over each of these idxs: # loop over the indexes for i in idxs: # grab the current image and label, then use that data to # generate the adversarial example image = images[i] label = labels[i] adversary = generate_image_adversary(model, image.reshape(1, h, w, c), label, eps=eps) # update the mixed images and labels lists mixedImages = np.vstack([mixedImages, adversary]) mixedLabels = np.vstack([mixedLabels, label]) # shuffle the images and labels together (mixedImages, mixedLabels) = shuffle(mixedImages, mixedLabels) # yield the mixed images and labels to the calling function yield (mixedImages, mixedLabels) For each image index, i, we: Grab the current image and label (Lines 70 and 71)Generate an adversarial image via FGSM (Lines 72 and 73)Update our mixedImages and mixedLabels list with our adversarial image and label (Lines 76 and 77) Line 80 jointly shuffles our mixedImages and mixedLabels. We perform this shuffling operation because the normal images and adversarial images were added together sequentially, meaning that the normal images appear at the front of the list while the adversarial images are at the back of the list.
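With the generator complete, a quick sanity check can help confirm it behaves as expected. The sketch below is my own addition (not part of the downloadable code); it pulls a single mixed batch and inspects its shape, assuming model, trainX, and trainY have been prepared as in the training script that follows:

# request one batch of 64 samples (32 normal + 32 adversarial with split=0.5)
# and verify the shapes match what model.fit expects
gen = generate_mixed_adverserial_batch(model, 64, trainX, trainY,
	(28, 28, 1), eps=0.1, split=0.5)
(batchImages, batchLabels) = next(gen)
print(batchImages.shape)  # (64, 28, 28, 1)
print(batchLabels.shape)  # (64, 10)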
As noted above, shuffling ensures our data samples are randomly distributed throughout the batch. The shuffled batch of data is then yielded to the calling function. Creating our mixed image and adversarial image training script With all of our helper functions implemented, we can create our training script. Open the train_mixed_adversarial_defense.py file in your project structure, and let's get to work: # import the necessary packages from pyimagesearch.simplecnn import SimpleCNN from pyimagesearch.datagen import generate_mixed_adverserial_batch from pyimagesearch.datagen import generate_adversarial_batch from tensorflow.keras.optimizers import Adam from tensorflow.keras.utils import to_categorical from tensorflow.keras.datasets import mnist import numpy as np Lines 2-8 import our required Python packages. Take note of our custom implementations, including:
SimpleCNN: The CNN architecture we'll be training.
generate_mixed_adverserial_batch: Generates batches containing both normal images and adversarial images together.
generate_adversarial_batch: Generates batches of exclusively adversarial images.
We'll be training SimpleCNN on the MNIST dataset, so let's load it and preprocess it now: # load MNIST dataset and scale the pixel values to the range [0, 1] print("[INFO] loading MNIST dataset...") (trainX, trainY), (testX, testY) = mnist.load_data() trainX = trainX / 255.0 testX = testX / 255.0 # add a channel dimension to the images trainX = np.expand_dims(trainX, axis=-1) testX = np.expand_dims(testX, axis=-1) # one-hot encode our labels trainY = to_categorical(trainY, 10) testY = to_categorical(testY, 10) Line 12 loads the MNIST digits dataset from disk. We then proceed to preprocess it by:
Scaling the pixel intensities from the range [0, 255] to [0, 1]
Adding a channel dimension to the images
One-hot encoding the labels
We can now compile our model: # initialize our optimizer and model print("[INFO] compiling model...") opt = Adam(lr=1e-3) model = SimpleCNN.build(width=28, height=28, depth=1, classes=10) model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"]) # train the simple CNN on MNIST print("[INFO] training network...") model.fit(trainX, trainY, validation_data=(testX, testY), batch_size=64, epochs=20, verbose=1) Lines 26-29 compile our model. We then train it on our trainX and trainY data (Lines 33-37). After training, the next step is to evaluate the model: # make predictions on the testing set for the model trained on # non-adversarial images (loss, acc) = model.evaluate(x=testX, y=testY, verbose=0) print("[INFO] normal testing images:") print("[INFO] loss: {:.4f}, acc: {:.4f}\n".format(loss, acc)) # generate a set of adversarial images from our test set (so we can # evaluate our model performance *before* and *after* mixed adversarial # training) print("[INFO] generating adversarial examples with FGSM...\n") (advX, advY) = next(generate_adversarial_batch(model, len(testX), testX, testY, (28, 28, 1), eps=0.1)) # re-evaluate the model on the adversarial images (loss, acc) = model.evaluate(x=advX, y=advY, verbose=0) print("[INFO] adversarial testing images:") print("[INFO] loss: {:.4f}, acc: {:.4f}\n".format(loss, acc)) Lines 41-43 evaluate the model on our original testing data. We then generate a set of exclusively adversarial images on Lines 49 and 50, and the model is re-evaluated, this time on the adversarial images (Lines 53-55).
As we’ll see in the next section, our model will perform well on the original testing data, but accuracy will plummet on the adversarial images. To help defend against adversarial attacks, we can fine-tune the model on data batches consisting of both normal images and adversarial examples. The following code block accomplishes this task: # lower the learning rate and re-compile the model (such that we can # fine-tune it on the mixed batches of normal images and dynamically # generated adversarial images) print("[INFO] re-compiling model...") opt = Adam(lr=1e-4) model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"]) # initialize our data generator to create data batches containing # a mix of both *normal* images and *adversarial* images print("[INFO] creating mixed data generator...") dataGen = generate_mixed_adverserial_batch(model, 64, trainX, trainY, (28, 28, 1), eps=0.1, split=0.5) # fine-tune our CNN on the adversarial images print("[INFO] fine-tuning network on dynamic mixed data...") model.fit( dataGen, steps_per_epoch=len(trainX) // 64, epochs=10, verbose=1) Lines 61-63 lower our learning rate and then recompile our model. From there, we create our data generator (Lines 68 and 69). Here we are telling our data generator to use our model to generate batches of data (with 64 total data points in each batch), sampling from our training data, with an equal 50/50 split for normal images and adversarial images. Passing in our dataGen to model.fit allows our CNN to be trained on these mixed batches. Let’s perform one final round of evaluation: # now that our model is fine-tuned we should evaluate it on the test # set (i.e., non-adversarial) again to see if performance has degraded (loss, acc) = model.evaluate(x=testX, y=testY, verbose=0) print("") print("[INFO] normal testing images *after* fine-tuning:") print("[INFO] loss: {:.4f}, acc: {:.4f}\n".format(loss, acc)) # do a final evaluation of the model on the adversarial images (loss, acc) = model.evaluate(x=advX, y=advY, verbose=0) print("[INFO] adversarial images *after* fine-tuning:") print("[INFO] loss: {:.4f}, acc: {:.4f}".format(loss, acc)) Lines 81-84 evaluate our CNN on our original testing set after fine-tuning on mixed batches. We then evaluate the CNN on our original adversarial images once again (Lines 87-89). Ideally, what we’ll see is balanced accuracy between our normal images and adversarial images, thus making our model more robust and capable of defending against an adversarial attack. Training our CNN on normal images and adversarial images We are now ready to train our CNN on both normal training images and adversarial images generated on the fly.
Start by accessing the “Downloads” section of this tutorial to retrieve the source code. From there, open a terminal and execute the following command: $ time python train_mixed_adversarial_defense.py [INFO] loading MNIST dataset... [INFO] compiling model... [INFO] training network... Epoch 1/20 938/938 [==============================] - 6s 6ms/step - loss: 0.2043 - accuracy: 0.9377 - val_loss: 0.0615 - val_accuracy: 0.9805 Epoch 2/20 938/938 [==============================] - 6s 6ms/step - loss: 0.0782 - accuracy: 0.9764 - val_loss: 0.0470 - val_accuracy: 0.9846 Epoch 3/20 938/938 [==============================] - 6s 6ms/step - loss: 0.0597 - accuracy: 0.9810 - val_loss: 0.0493 - val_accuracy: 0.9828 ... Epoch 18/20 938/938 [==============================] - 6s 6ms/step - loss: 0.0102 - accuracy: 0.9965 - val_loss: 0.0478 - val_accuracy: 0.9889 Epoch 19/20 938/938 [==============================] - 6s 6ms/step - loss: 0.0116 - accuracy: 0.9961 - val_loss: 0.0359 - val_accuracy: 0.9915 Epoch 20/20 938/938 [==============================] - 6s 6ms/step - loss: 0.0105 - accuracy: 0.9967 - val_loss: 0.0477 - val_accuracy: 0.9891 [INFO] normal testing images: [INFO] loss: 0.0477, acc: 0.9891 Above, you can see the output of training our CNN on the normal MNIST training set. Here, we obtain 99.67% accuracy on the training set and 98.91% accuracy on the testing set. Now, let’s see what happens when we generate a set of adversarial images with the Fast Gradient Sign Method: [INFO] generating adversarial examples with FGSM... [INFO] adversarial testing images: [INFO] loss: 14.0658, acc: 0.0188 Our accuracy plummets from 98.91% accuracy down to 1.88% accuracy. Clearly, our model is not handling adversarial examples well. What we’ll do now is lower the learning rate, re-compile the model, and then fine-tune using a data generator that includes both the original training images and adversarial images generated on the fly: [INFO] re-compiling model... [INFO] creating mixed data generator... [INFO] fine-tuning network on dynamic mixed data... Epoch 1/10 937/937 [==============================] - 162s 173ms/step - loss: 1.5721 - accuracy: 0.7653 Epoch 2/10 937/937 [==============================] - 146s 156ms/step - loss: 0.4189 - accuracy: 0.8875 Epoch 3/10 937/937 [==============================] - 146s 156ms/step - loss: 0.2861 - accuracy: 0.9154 ... Epoch 8/10 937/937 [==============================] - 146s 155ms/step - loss: 0.1423 - accuracy: 0.9541 Epoch 9/10 937/937 [==============================] - 145s 155ms/step - loss: 0.1307 - accuracy: 0.9580 Epoch 10/10 937/937 [==============================] - 146s 155ms/step - loss: 0.1234 - accuracy: 0.9604 Using this approach, we obtain 96.04% accuracy. And when we apply it to our final testing images, we arrive at the following: [INFO] normal testing images *after* fine-tuning: [INFO] loss: 0.0315, acc: 0.9906 [INFO] adversarial images *after* fine-tuning: [INFO] loss: 0.1190, acc: 0.9641 real 27m17.243s user 43m1.057s sys 14m43.389s After fine-tuning our model using the dynamic data generation process, we obtain 99.06% accuracy on the original testing images (up from 98.44% from last week’s method). Our adversarial image accuracy weighs in at 96.41%, which is down from 99% last week, but that makes sense in this context — keep in mind that we are not fine-tuning the model on just the adversarial examples like we did last week. Instead, we allow the model to “iteratively fool itself” and learn from the adversarial examples that it generates. 
Further accuracy could potentially be obtained by fine-tuning again on only the adversarial examples (without any original training samples).
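If you want to experiment with that idea, the generate_adversarial_batch helper we already wrote gives you everything you need. Below is a rough sketch of what such a final fine-tuning stage might look like; the learning rate and epoch count are arbitrary choices of mine, not values from this tutorial:

# fine-tune one more time, now on batches of *only* adversarial images
opt = Adam(lr=1e-5)
model.compile(loss="categorical_crossentropy", optimizer=opt,
	metrics=["accuracy"])
advGen = generate_adversarial_batch(model, 64, trainX, trainY,
	(28, 28, 1), eps=0.1)
model.fit(advGen, steps_per_epoch=len(trainX) // 64, epochs=5, verbose=1)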
Still, I’ll leave that as an exercise for you, the reader, to explore. What's next? We recommend PyImageSearch University. Course information: 84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024 ★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science? That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do.
My mission is to change education and how complex Artificial Intelligence topics are taught. If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. Inside PyImageSearch University you'll find: ✓ 84 courses on essential computer vision, deep learning, and OpenCV topics ✓ 84 Certificates of Completion ✓ 114+ hours of on-demand video ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques ✓ Pre-configured Jupyter Notebooks in Google Colab ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) ✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch ✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University Credits and references The FGSM and data generator implementation were inspired by Sebastian Theiler’s excellent article on adversarial attacks and defenses. A huge shoutout and thank you to Sebastian for sharing his knowledge. Summary In this tutorial, you learned how to modify a CNN’s training procedure to generate image batches that include: Normal training imagesAdversarial examples generated by the CNN This method is different from the one we learned last week, where we simply fine-tuned a CNN on a sample of adversarial images.
The benefit of today’s approach is that the CNN can better defend against adversarial examples by: Learning patterns from the original training examplesLearning patterns from the adversarial images generated on the fly Since the model can generate its own adversarial examples during every batch of training, it can continually learn from itself. Overall, I think you’ll find this approach more beneficial when training your own models to defend against adversarial attacks. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below! Download the Source Code and FREE 17-page Resource Guide Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Download the code! Website
https://pyimagesearch.com/2021/03/22/opencv-template-matching-cv2-matchtemplate/
Click here to download the source code to this post. In this tutorial, you will learn how to perform template matching using OpenCV and the cv2.matchTemplate function. Other than contour filtering and processing, template matching is arguably one of the simplest forms of object detection:
It's simple to implement, requiring only 2-3 lines of code
Template matching is computationally efficient
It doesn't require you to perform thresholding, edge detection, etc., to generate a binary image (as contour detection and processing does)
And with a basic extension, template matching can detect multiple instances of the same/similar object in an input image (which we'll cover next week)
Of course, template matching isn't perfect. Despite all the positives, template matching quickly fails if there are factors of variation in your input images, including changes to rotation, scale, viewing angle, etc. If your input images contain these types of variations, you should not use template matching — instead, utilize dedicated object detectors such as HOG + Linear SVM, Faster R-CNN, SSDs, YOLO, etc. But in situations where you know the rotation, scale, and viewing angle are constant, template matching can work wonders. To learn how to perform template matching with OpenCV, just keep reading. Looking for the source code to this post? Jump Right To The Downloads Section OpenCV Template Matching (cv2.matchTemplate) In the first part of this tutorial, we'll discuss what template matching is and how OpenCV implements template matching via the cv2.matchTemplate function. From there, we'll configure our development environment and review our project directory structure.
We’ll then implement template matching with OpenCV, apply it to a few example images, and discuss where it worked well, when it didn’t, and how to improve template matching results. What is template matching? Figure 1: An example of the template matching pipeline where we take the object/template we want to detect (left), the source image (middle), and then find the template in the source image (right). Template matching can be seen as a very basic form of object detection. Using template matching, we can detect objects in an input image using a “template” containing the object we want to detect. Essentially, what this means is that we require two images to apply template matching: Source image: This is the image we expect to find a match to our template in. Template image: The “object patch” we are searching for in the source image. To find the template in the source image, we slide the template from left-to-right and top-to-bottom across the source: Figure 2: Applying template matching is as simple as sliding the template from left-to-right and top-to-bottom across the source and computing a metric to indicate how good or bad the match is at each location. At each (x, y)-location, a metric is calculated to represent how “good” or “bad” the match is. Typically, we use the normalized correlation coefficient to determine how “similar” the pixel intensities of the two patches are: Figure 3: The equation for the normalized correlation coefficient for template matching.
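To build intuition for what that equation measures, here is a minimal NumPy sketch (my own illustration, not OpenCV's internal implementation) of the correlation-coefficient score for a single template position, assuming window and template are grayscale patches of identical size:

import numpy as np

def ccoeff_normed(window, template):
	# zero-mean both patches, then compute their normalized cross-correlation;
	# the score lies in [-1, 1], with values near 1 indicating a strong match
	t = template.astype("float64") - template.mean()
	w = window.astype("float64") - window.mean()
	denom = np.sqrt((t ** 2).sum() * (w ** 2).sum())
	return (t * w).sum() / denom if denom > 0 else 0.0

OpenCV evaluates an optimized version of this computation at every valid (x, y) offset of the template over the source image.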
For the full derivation of the correlation coefficient, including all other template matching methods OpenCV supports, refer to the OpenCV documentation. For each location of T over I, the computed metric is stored in our result matrix R. Every (x, y)-coordinate in the source image at which the template fits entirely within the image has a corresponding entry in the result matrix R: Figure 4: An example of layering the result matrix, R, over the source image. Notice how the result matrix's brightest region occurs at the coffee mug's upper-left corner, thus indicating correlation is at its maximum. Here, we can visualize our result matrix R overlaid on the original image. Notice how R is not quite the same size as the original source image. This is because the entire template must fit inside the source image for the correlation to be computed; if the template would exceed the source's boundaries, we do not compute the similarity metric. Bright locations of the result matrix R indicate the best matches, while dark regions indicate there is very little correlation between the source and template images. Notice how the result matrix's brightest region appears at the coffee mug's upper-left corner. While template matching is extremely simple and computationally efficient to apply, there are many limitations.
If there are any variations in object scale, rotation, or viewing angle, template matching will likely fail. In nearly all cases, you'll want to ensure that the template you are detecting is nearly identical to the object you want to detect in the source. Even small, minor deviations in appearance can dramatically affect template matching results and render it effectively useless. OpenCV's “cv2.matchTemplate” function Figure 5: OpenCV's “cv2.matchTemplate” function is used for template matching. We can apply template matching using OpenCV and the cv2.matchTemplate function: result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED) Here, you can see that we are providing the cv2.matchTemplate function with three parameters:
The input image that contains the object we want to detect
The template of the object (i.e., what we want to detect in the image)
The template matching method
Here, we are using the normalized correlation coefficient, which is typically the template matching method you'll want to use, but OpenCV supports other template matching methods as well. The output result from cv2.matchTemplate is a matrix with spatial dimensions:
Width: image.shape[1] - template.shape[1] + 1
Height: image.shape[0] - template.shape[0] + 1
We can then find the location in the result that has the maximum correlation coefficient, which corresponds to the most likely region the template can be found in (you'll learn how to do this later in this tutorial). It's also worth noting that if you only want certain pixels of the template to count toward the match (e.g., your template isn't perfectly rectangular), you can supply a mask, like the following: result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED, mask=mask) The mask must have the same spatial dimensions and data type as the template. Template pixels you want ignored during matching should be set to zero in the mask, while template pixels you want considered should be set to 255 (note that mask support depends on your OpenCV version and the matching method you choose). Configuring your development environment To follow this guide, you need to have the OpenCV library installed on your system.
Luckily, OpenCV is pip-installable: $ pip install opencv-contrib-python If you need help configuring your development environment for OpenCV, I highly recommend that you read my pip install OpenCV guide — it will have you up and running in a matter of minutes. Having problems configuring your development environment? Figure 6: Having trouble configuring your dev environment? Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch University — you’ll be up and running with this tutorial in a matter of minutes. All that said, are you: Short on time?Learning on your employer’s administratively locked system?Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?Ready to run the code right now on your Windows, macOS, or Linux systems? Then join PyImageSearch University today! Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides that are pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required. And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux!
Project structure Before we get too far, let's review our project directory structure. Start by accessing the “Downloads” section of this tutorial to retrieve the source code and example images. Your directory should look like the following: $ tree . --dirsfirst . ├── images │ ├── 8_diamonds.png │ ├── coke_bottle.png │ ├── coke_bottle_rotated.png │ ├── coke_logo.png │ └── diamonds_template.png └── single_template_matching.py 1 directory, 6 files We have a single Python script to review today, single_template_matching.py, which will perform template matching with OpenCV. Inside the images directory, we have five images to which we'll be applying template matching. We'll see each of these images later in the tutorial. Implementing template matching with OpenCV With our project directory structure reviewed, let's move on to implementing template matching with OpenCV. Open the single_template_matching.py file in your directory structure and insert the following code: # import the necessary packages import argparse import cv2 # construct the argument parser and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-i", "--image", type=str, required=True, help="path to input image where we'll apply template matching") ap.add_argument("-t", "--template", type=str, required=True, help="path to template image") args = vars(ap.parse_args()) On Lines 2 and 3, we import our required Python packages.
We only need argparse for command line argument parsing and cv2 for our OpenCV bindings. From there, we move on to parsing our command line arguments:
--image: The path to the input image on disk that we'll be applying template matching to (i.e., the image we want to detect objects in).
--template: The example template image we want to find instances of in the input image.
Next, let's prepare our image and template for template matching: # load the input image and template image from disk, then display # them on our screen print("[INFO] loading images...") image = cv2.imread(args["image"]) template = cv2.imread(args["template"]) cv2.imshow("Image", image) cv2.imshow("Template", template) # convert both the image and template to grayscale imageGray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) templateGray = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY) We start by loading our image and template, then displaying them on our screen. Template matching is typically applied to grayscale images, so Lines 22 and 23 convert both the image and the template to grayscale. Next, all that needs to be done is call cv2.matchTemplate: # perform template matching print("[INFO] performing template matching...") result = cv2.matchTemplate(imageGray, templateGray, cv2.TM_CCOEFF_NORMED) (minVal, maxVal, minLoc, maxLoc) = cv2.minMaxLoc(result) Lines 27 and 28 perform template matching itself via the cv2.matchTemplate function. We pass in three required arguments to this function:
The input image that we want to find objects in
The template image of the object we want to detect in the input image
The template matching method
Typically the normalized correlation coefficient (cv2.TM_CCOEFF_NORMED) works well in most situations, but you can refer to the OpenCV documentation for more details on other template matching methods. Once we've applied cv2.matchTemplate, we receive a result matrix with the following spatial dimensions:
Width: image.shape[1] - template.shape[1] + 1
Height: image.shape[0] - template.shape[0] + 1
The result matrix will have a large value (closer to 1) where a template match is more likely, and a small value (closer to 0) where a match is less likely. To find the location with the largest value, and therefore the most likely match, we make a call to cv2.minMaxLoc (Line 29), passing in the result matrix. Once we have the (x, y)-coordinates of the location with the largest normalized correlation coefficient (maxLoc), we can extract the coordinates and derive the bounding box coordinates: # determine the starting and ending (x, y)-coordinates of the # bounding box (startX, startY) = maxLoc endX = startX + template.shape[1] endY = startY + template.shape[0] Line 33 extracts the starting (x, y)-coordinates from maxLoc, derived from calling cv2.minMaxLoc in the previous code block.
Using the startX and startY coordinates, we derive the endX and endY coordinates on Lines 34 and 35 by adding the template width and height to startX and startY, respectively. The final step is to draw the detected bounding box on the image: # draw the bounding box on the image cv2.rectangle(image, (startX, startY), (endX, endY), (255, 0, 0), 3) # show the output image cv2.imshow("Output", image) cv2.waitKey(0) A call to cv2.rectangle on Line 38 draws the bounding box on the image. Lines 41 and 42 then show our output image on our screen. OpenCV template matching results We are now ready to apply template matching with OpenCV! Access the “Downloads” section of this tutorial to retrieve the source code and example images. From there, open a terminal and execute the following command: $ python single_template_matching.py --image images/coke_bottle.png \ --template images/coke_logo.png [INFO] loading images... [INFO] performing template matching... In this example, we have an input image containing a Coca-Cola bottle: Figure 7: Our example input image. Our goal is to detect the Coke logo in the image: Figure 8: The template image containing the Coke logo. Our goal is to detect the Coke logo in the input image. By applying OpenCV and the cv2.matchTemplate function, we can correctly localize where in the coke_bottle.png image the coke_logo.png image is: Figure 9: Successfully applying template matching with OpenCV. This method works because the Coca-Cola logo in coke_logo.png is the same size (in terms of scale) as the logo in coke_bottle.png.
Similarly, the logos are viewed at the same viewing angle and are not rotated. If the logos differed in scale or the viewing angle was different, the method would fail. For example, let’s try this example image, but this time I have rotated the Coca-Cola bottle slightly and scaled the bottle down: $ python single_template_matching.py \ --image images/coke_bottle_rotated.png \ --template images/coke_logo.png [INFO] loading images... [INFO] performing template matching... Figure 10: Left: An input image containing the same Coca-Cola bottle, but this image has been rotated and slightly scaled down. Middle: The template image we want to detect in the left image. Right: Output of applying template matching. Notice how we’ve failed to detect the Coca-Cola logo successfully. Notice how we have a false-positive detection! We have failed to detect the Coca-Cola logo now that the scale and rotation are different. The key point here is that template matching is tremendously sensitive to changes in rotation, viewing angle, and scale. When that happens, you may need to apply more advanced object detection techniques.
In the following example, we're working with a deck of cards and are trying to detect the “diamond” symbols on the eight of diamonds playing card: $ python single_template_matching.py --image images/8_diamonds.png \ --template images/diamonds_template.png [INFO] loading images... [INFO] performing template matching... Figure 11: Left: Input image containing multiple “diamond” symbols. Middle: The template of a “diamond” that we want to detect in the left image. Right: Output of applying basic template matching with OpenCV. Standard template matching is not designed to handle multiple detections. Here, 8_diamonds.png is our input image and diamonds_template.png is our template. We use OpenCV and the cv2.matchTemplate function to find all the diamond symbols (right)… …but what happened here? Why haven't all the diamond symbols been detected? The answer is that the cv2.matchTemplate function, by itself, cannot detect multiple objects! There is a solution, though — and I'll be covering multi-template matching with OpenCV in next week's tutorial. A note on false-positive detections with template matching You'll note that in our rotated Coca-Cola logo example, we failed to detect the Coke logo; however, our code still “reported” that the logo was found: Figure 12: Failing to detect the Coca-Cola logo using template matching.
Keep in mind that the cv2.matchTemplate function truly has no idea if the object was correctly found or not — it’s simply sliding the template image across the input image, computing a normalized correlation score, and then returning the location where the score is the largest. Template matching is an example of a “dumb algorithm.” There’s no machine learning going on, and it has no idea what is in the input image. To filter out false-positive detections, you should grab the maxVal and use an if statement to filter out scores that are below a certain threshold. Credits and References I would like to thank TheAILearner for their excellent article on template matching — I cannot take credit for the idea of using playing cards to demonstrate template matching. That was their idea, and it was an excellent one at that. Credits to them for coming up with that example, which I shamelessly used here, thank you. Additionally, the eight of diamonds image was obtained from the Reddit post by u/fireball_73. What's next? We recommend PyImageSearch University.
Course information: 84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024 ★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science? That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught. If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects.
Join me in computer vision mastery. Inside PyImageSearch University you'll find: ✓ 84 courses on essential computer vision, deep learning, and OpenCV topics ✓ 84 Certificates of Completion ✓ 114+ hours of on-demand video ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques ✓ Pre-configured Jupyter Notebooks in Google Colab ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) ✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch ✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University Summary In this tutorial, you learned how to perform template matching using OpenCV and the cv2.matchTemplate function. Template matching is a basic form of object detection. It’s very fast and efficient, but the downside is that it fails when the rotation, scale, or viewing angle of an object changes — when that happens, you need a more advanced object detection technique. Nevertheless, suppose you can control the scale or normalize the scale of objects in the environment where you are capturing photos. In that case, you could potentially get away with template matching and avoid the tedious task of labeling your data, training an object detector, and tuning its hyperparameters. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below!
Download the Source Code and FREE 17-page Resource Guide Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Download the code! Website
https://pyimagesearch.com/2021/04/05/opencv-face-detection-with-haar-cascades/
Click here to download the source code to this post. In this tutorial, you will learn how to perform face detection with OpenCV and Haar cascades. This guide, along with the next two, was inspired by an email I received from PyImageSearch reader, Angelos: Hi Adrian, I've been an avid reader of PyImageSearch for the last three years, thanks for all the blog posts! My company does a lot of face application work, including face detection, recognition, etc. We just started a new project using embedded hardware. I don't have the luxury of using OpenCV's deep learning face detector which you covered before, it's just too slow on my devices. What do you recommend I do? To start, I would recommend Angelos look into coprocessors such as the Movidius NCS and Google Coral USB Accelerator. Those devices can run computationally expensive deep learning-based face detectors (including OpenCV's deep learning face detector) in real-time. That said, I'm not sure if these coprocessors are even an option for Angelos. They may be cost-prohibitive, require too much power draw, etc.
I thought about Angelos’ question for a bit and then went back through the archives to see if I had a tutorial that could help him out. To my surprise, I realized I had never authored a dedicated tutorial on face detection with OpenCV’s Haar cascades! While we can obtain significantly higher accuracy and more robust face detections with deep learning face detectors, OpenCV’s Haar cascades still have their place: They are lightweightThey are super fast, even on resource-constrained devicesThe Haar cascade model size is tiny (930 KB) Yes, there are several problems with Haar cascades, namely that they are prone to false-positive detections and less accurate than their HOG + Linear SVM, SSD, YOLO, etc., counterparts. However, they are still useful and practical, especially on resource-constrained devices. Today you’ll learn how to perform face detection with OpenCV. Next week we’ll cover other Haar cascades included in OpenCV, namely eye and mouth detectors. And in two weeks, you’ll learn how to use dlib’s HOG + Linear SVM face detector and deep learning face detector. To learn how to perform face detection with OpenCV and Haar cascades, just keep reading. Looking for the source code to this post?
Jump Right To The Downloads Section OpenCV Face detection with Haar cascades In the first part of this tutorial, we’ll configure our development environment and then review our project directory structure. We’ll then implement two Python scripts: The first one will apply Haar cascades to detect faces in static imagesAnd the second script will utilize OpenCV’s Haar cascades to detect faces in real-time video streams We’ll wrap up the tutorial with a discussion of our results, including the limitations of Haar cascades. Configuring your development environment To follow this guide, you need to have the OpenCV library installed on your system. Luckily, OpenCV is pip-installable: $ pip install opencv-contrib-python If you need help configuring your development environment for OpenCV, I highly recommend that you read my pip install OpenCV guide — it will have you up and running in a matter of minutes. Having problems configuring your development environment? Figure 1: Having trouble configuring your dev environment? Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch University — you’ll be up and running with this tutorial in a matter of minutes. All that said, are you: Short on time?Learning on your employer’s administratively locked system?Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?Ready to run the code right now on your Windows, macOS, or Linux systems? Then join PyImageSearch University today!
Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides that are pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required. And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux! Project structure Before we can learn how to apply face detection with OpenCV’s Haar cascades, let’s first review our project directory structure. Start by accessing the “Downloads” section of this tutorial to retrieve the source code and example images: $ tree . --dirsfirst . ├── images │ ├── adrian_01.png │ ├── adrian_02.png │ └── messi.png ├── haar_face_detector.py ├── haarcascade_frontalface_default.xml └── video_face_detector.py 1 directory, 6 files We have two Python scripts to review today: haar_face_detector.py: Applies Haar cascade face detection to input images.video_face_detector.py: Performs real-time face detection with Haar cascades. The haarcascade_frontalface_default.xml file is our pre-trained face detector, provided by the developers and maintainers of the OpenCV library. The images directory then contains example images where we’ll apply Haar cascades. Implementing face detection with OpenCV and Haar Cascades Let’s get started implementing face detection with OpenCV and Haar cascades.
Open the haar_face_detector.py file in your project directory structure, and let's get to work: # import the necessary packages import argparse import imutils import cv2 # construct the argument parser and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-i", "--image", type=str, required=True, help="path to input image") ap.add_argument("-c", "--cascade", type=str, default="haarcascade_frontalface_default.xml", help="path to haar cascade face detector") args = vars(ap.parse_args()) Lines 2-4 import our required Python packages. We'll need argparse for command line argument parsing, imutils for OpenCV convenience functions, and cv2 for our OpenCV bindings. Lines 7-13 parse our required command line arguments, including:
--image: The path to the input image where we want to apply Haar cascade face detection.
--cascade: The path to the pre-trained Haar cascade detector residing on disk.
With our command line arguments parsed, we can load our Haar cascade from disk: # load the haar cascade face detector from disk print("[INFO] loading face detector...") detector = cv2.CascadeClassifier(args["cascade"]) # load the input image from disk, resize it, and convert it to # grayscale image = cv2.imread(args["image"]) image = imutils.resize(image, width=500) gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) A call to cv2.CascadeClassifier on Line 17 loads our face detector from disk. We then load our input image, resize it, and convert it to grayscale (we apply Haar cascades to grayscale images). The final step is detection and annotation: # detect faces in the input image using the haar cascade face # detector print("[INFO] performing face detection...") rects = detector.detectMultiScale(gray, scaleFactor=1.05, minNeighbors=5, minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE) print("[INFO] {} faces detected...".format(len(rects))) # loop over the bounding boxes for (x, y, w, h) in rects: # draw the face bounding box on the image cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2) # show the output image cv2.imshow("Image", image) cv2.waitKey(0) Lines 28-30 detect the faces in our input image, returning a list of bounding boxes: the top-left (x, y)-coordinates plus the width and height of each detected face. Let's take a look at what each of the detectMultiScale arguments means:
scaleFactor: How much the image size is reduced at each image scale. This value is used to create the scale pyramid, which lets us detect faces at multiple scales in the image (some faces may be closer to the foreground, and thus larger, while other faces may be smaller and in the background, hence the use of varying scales).
A value of 1.05 indicates that we are reducing the size of the image by 5% at each level in the pyramid.
minNeighbors: How many neighbors each window should have for the area in the window to be considered a face. The cascade classifier will detect multiple windows around a face. This parameter controls how many rectangles (neighbors) need to be detected for the window to be labeled a face.
minSize: A tuple of width and height (in pixels) indicating the window's minimum size. Bounding boxes smaller than this size are ignored. It is a good idea to start with (30, 30) and fine-tune from there.
Finally, given the list of bounding boxes, we loop over them individually and draw the bounding box around the face on Lines 34-36. Haar cascade face detection results Let's put our Haar cascade face detector to the test! Start by accessing the “Downloads” section of this tutorial to retrieve the source code, example images, and pre-trained Haar cascade face detector. From there, you can open a shell and execute the following command: $ python haar_face_detector.py --image images/messi.png [INFO] loading face detector... [INFO] performing face detection... [INFO] 2 faces detected... Figure 2: Applying face detection with OpenCV Haar cascades. As Figure 2 shows, we've been able to detect both faces in the input image successfully.
Let’s try another image: $ python haar_face_detector.py --image images/adrian_01.png [INFO] loading face detector... [INFO] performing face detection... [INFO] 1 faces detected... Figure 3: Successfully using Haar cascades for face detection. Sure enough, my face has been detected. The following image poses a bit of a problem, though, and demonstrates one of the largest limitations of Haar cascades, namely, false-positive detections: $ python haar_face_detector.py --image images/adrian_02.png [INFO] loading face detector... [INFO] performing face detection... [INFO] 2 faces detected... Figure 4: OpenCV’s Haar cascades are prone to false-positive detections. While you can see that my face was correctly detected, we also have a false-positive detection toward the bottom of the image. Haar cascades tend to be very sensitive to your choice in detectMultiScale parameters. The scaleFactor and minNeighbors being the ones you have to tune most often. When you end up with false-positive detections (or no face is detected at all), you should go back to your detectMultiScale function and attempt to tune the parameters by trial and error. For example, our original call to detectMultiScale looks like this: rects = detector.detectMultiScale(gray, scaleFactor=1.05, minNeighbors=5, minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE) Through experimentation, I found that I could still detect my face while removing the false-positive by updating the minNeighbors from 5 to 7: rects = detector.detectMultiScale(gray, scaleFactor=1.05, minNeighbors=7, minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE) After doing that, we obtain the correct results: $ python haar_face_detector.py --image images/adrian_02.png [INFO] loading face detector... [INFO] performing face detection... [INFO] 1 faces detected... Figure 5: Adjusting the parameters to OpenCV’s Haar cascade detection function can improve results. This update worked because the minNeighbors parameter is designed to help control false-positive detections. When applying face detection, Haar cascades are sliding a window from left-to-right and top-to-bottom across the image, computing integral images along the way.
When a Haar cascade thinks a face is in a region, it will return a higher confidence score. If there are enough high confidence scores in a given area, then the Haar cascade will report a positive detection. By increasing minNeighbors we can require that Haar cascades find more neighbors, thus removing the false-positive detection we saw in Figure 4. Again, the above example highlights the primary limitation of Haar cascades. While they are fast, you pay the price via: False-positive detectionsLess accuracy (as opposed to HOG + Linear SVM and deep learning-based face detectors)Manual parameter tuning That said, in resource-constrained environments, you just cannot beat the speed of Haar cascade face detection. Implementing real-time face detection with Haar cascades Our previous example demonstrated how to apply face detection with Haar cascades to single images. Let’s now learn how to perform face detection in real-time video streams: # import the necessary packages from imutils.video import VideoStream import argparse import imutils import time import cv2 Lines 2-6 import our required Python packages. The VideoStream class allows us to access our webcam. We have only a single command line argument to parse: # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-c", "--cascade", type=str, default="haarcascade_frontalface_default.xml", help="path to haar cascade face detector") args = vars(ap.parse_args()) The --cascade argument points to our pre-trained Haar cascade face detector residing on disk.
We then load the face detector and initialize our video stream: # load the haar cascade face detector from disk print("[INFO] loading face detector...") detector = cv2.CascadeClassifier(args["cascade"]) # initialize the video stream and allow the camera sensor to warm up print("[INFO] starting video stream...") vs = VideoStream(src=0).start() time.sleep(2.0) Let's start reading frames from the video stream: # loop over the frames from the video stream while True: # grab the frame from the video stream, resize it, and convert it # to grayscale frame = vs.read() frame = imutils.resize(frame, width=500) gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # perform face detection rects = detector.detectMultiScale(gray, scaleFactor=1.05, minNeighbors=5, minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE) Inside our while loop, we read the next frame from the camera, resize it to have a width of 500 pixels (smaller frames are faster to process), and convert it to grayscale. Lines 33-35 then perform face detection using our Haar cascade. The final step is to draw the bounding boxes of the detected faces on our frame: # loop over the bounding boxes for (x, y, w, h) in rects: # draw the face bounding box on the image cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2) # show the output frame cv2.imshow("Frame", frame) key = cv2.waitKey(1) & 0xFF # if the `q` key was pressed, break from the loop if key == ord("q"): break # do a bit of cleanup cv2.destroyAllWindows() vs.stop() Line 38 loops over the rects list, where each entry contains the starting x and y coordinates of the face along with the width (w) and height (h) of the bounding box. We then display the output frame on our screen. Real-time Haar cascade face detection results We are now ready to apply face detection in real-time with OpenCV! Be sure to access the "Downloads" section of this tutorial to retrieve the source code and pre-trained Haar cascade. From there, open a shell and execute the following command: $ python video_face_detector.py [INFO] loading face detector... [INFO] starting video stream... As you can see, our Haar cascade face detector is running in real-time without an issue! If you need to obtain real-time face detection, especially on embedded devices, then consider utilizing Haar cascade face detectors. Yes, they are not as accurate as more modern face detectors, and yes, they are prone to false-positive detections as well, but the benefit is that you'll gain tremendous speed, and you'll require less computational power. Otherwise, if you're on a laptop/desktop, or you can use a coprocessor such as the Movidius NCS or Google Coral USB Accelerator, then use deep learning-based face detection. You'll obtain far higher accuracy and still be able to apply face detection in real-time.
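To put a number on the speed claim, you can time the loop yourself. The snippet below is a rough benchmark of mine (not part of video_face_detector.py); it assumes the same webcam setup and default cascade as above and simply counts how many frames are processed per second:

import time
import cv2
import imutils
from imutils.video import VideoStream

detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
vs = VideoStream(src=0).start()
time.sleep(2.0)

numFrames = 0
start = time.time()
# process a fixed number of frames so the measurement has a clear end point
while numFrames < 200:
    frame = imutils.resize(vs.read(), width=500)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detector.detectMultiScale(gray, scaleFactor=1.05, minNeighbors=5,
        minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE)
    numFrames += 1

elapsed = time.time() - start
print("[INFO] approx. {:.2f} FPS".format(numFrames / elapsed))
vs.stop()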
Summary In this tutorial, you learned how to perform face detection with OpenCV and Haar cascades. While Haar cascades are significantly less accurate than their HOG + Linear SVM, SSD, YOLO, etc., counterparts, they are very fast and lightweight. This makes them suitable for use on embedded devices, particularly in situations where coprocessors like the Movidius NCS and Google Coral USB Accelerator are unavailable. Next week we'll discuss other OpenCV Haar cascades, including eye and mouth detectors.
https://pyimagesearch.com/2021/04/12/opencv-haar-cascades/
In this tutorial, you will learn about OpenCV Haar Cascades and how to apply them to real-time video streams. Haar cascades, first introduced by Viola and Jones in their seminal 2001 publication, Rapid Object Detection using a Boosted Cascade of Simple Features, are arguably OpenCV's most popular object detection algorithm. Sure, many algorithms are more accurate than Haar cascades (HOG + Linear SVM, SSDs, Faster R-CNN, YOLO, to name a few), but they are still relevant and useful today. One of the primary benefits of Haar cascades is that they are just so fast — it's hard to beat their speed. The downside to Haar cascades is that they tend to be prone to false-positive detections, require parameter tuning when being applied for inference/detection, and, in general, are not as accurate as the more "modern" algorithms we have today. That said, Haar cascades are an important part of the computer vision and image processing literature, are still used with OpenCV, and are still useful, particularly when working on resource-constrained devices where we cannot afford to use more computationally expensive object detectors. In the remainder of this tutorial, you'll learn about Haar cascades, including how to use them with OpenCV. To learn how to use OpenCV Haar cascades, just keep reading. OpenCV Haar Cascades In the first part of this tutorial, we'll review what Haar cascades are and how to use Haar cascades with the OpenCV library. From there, we'll configure our development environment and then review our project structure.
With our project directory structure reviewed, we’ll move on to apply our Haar cascades in real-time with OpenCV. We’ll wrap up this guide with a discussion of our results. What are Haar cascades? Figure 1: First introduced in 2001, Haar cascades are a class of object detection algorithms (image source). First published by Paul Viola and Michael Jones in their 2001 paper, Rapid Object Detection using a Boosted Cascade of Simple Features, this original work has become one of the most cited papers in computer vision literature. In their paper, Viola and Jones propose an algorithm that is capable of detecting objects in images, regardless of their location and scale in an image. Furthermore, this algorithm can run in real-time, making it possible to detect objects in video streams. Specifically, Viola and Jones focus on detecting faces in images. Still, the framework can be used to train detectors for arbitrary “objects,” such as cars, buildings, kitchen utensils, and even bananas. While the Viola-Jones framework certainly opened the door to object detection, it is now far surpassed by other methods, such as using Histogram of Oriented Gradients (HOG) + Linear SVM and deep learning.
We need to respect this algorithm and at least have a high-level understanding of what’s going on underneath the hood. Recall when we discussed image and convolutions and how we slid a small matrix across our image from left-to-right and top-to-bottom, computing an output value for each center pixel of the kernel? Well, it turns out that this sliding window approach is also extremely useful in the context of detecting objects in an image: Figure 2: An example of a sliding window, moving from left-to-right and top-to-bottom, to locate the face in the image. In Figure 2, we can see that we are sliding a fixed size window across our image at multiple scales. At each of these phases, our window stops, computes some features, and then classifies the region as Yes, this region does contain a face, or No, this region does not contain a face. This requires a bit of machine learning. We need a classifier that is trained in using positive and negative samples of a face: Positive data points are examples of regions containing a faceNegative data points are examples of regions that do not contain a face Given these positive and negative data points, we can “train” a classifier to recognize whether a given region of an image contains a face. Luckily for us, OpenCV can perform face detection out-of-the-box using a pre-trained Haar cascade: Figure 3: An example of detecting faces in images using OpenCV’s pre-trained Haar cascades. This ensures that we do not need to provide our own positive and negative samples, train our own classifier, or worry about getting the parameters tuned exactly right. Instead, we simply load the pre-trained classifier and detect faces in images.
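In fact, if you installed OpenCV via pip, the pre-trained XML files are already on disk, so "load the pre-trained classifier and detect faces" really is only a handful of lines. This is my own minimal sketch, assuming the opencv-python wheel (which exposes the cascade directory as cv2.data.haarcascades) and a placeholder image file:

import cv2

# the pip wheel ships the pre-trained XML files under cv2.data.haarcascades
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

# "example.jpg" is a placeholder; any image containing a face will do
gray = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2GRAY)
boxes = detector.detectMultiScale(gray, scaleFactor=1.05, minNeighbors=5)
print("{} face(s) found".format(len(boxes)))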
However, under the hood, OpenCV is doing something quite interesting. For each of the stops along the sliding window path, five rectangular features are computed: Figure 4: The 5 different types of Haar-like features extracted from an image patch. If you are familiar with wavelets, you may see that they bear some resemblance to Haar basis functions and Haar wavelets (where Haar cascades get their name). To obtain features for each of these five rectangular areas, we simply subtract the sum of pixels under the white region from the sum of pixels under the black region. Interestingly enough, these features have actual real importance in the context of face detection: Eye regions tend to be darker than cheek regions. The nose region is brighter than the eye region. Therefore, given these five rectangular regions and their corresponding difference of sums, we can form features that can classify parts of a face. Then, for an entire dataset of features, we use the AdaBoost algorithm to select which ones correspond to facial regions of an image. However, as you can imagine, using a fixed sliding window and sliding it across every (x, y)-coordinate of an image, followed by computing these Haar-like features, and finally performing the actual classification can be computationally expensive. To combat this, Viola and Jones introduced the concept of cascades or stages.
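As a quick aside, the region-sum arithmetic behind these Haar-like features is easy to sketch with an integral image. The toy example below is mine, not from the post; it assumes a grayscale patch of at least 24x24 pixels loaded from a placeholder file:

import cv2

patch = cv2.imread("face_patch.png", cv2.IMREAD_GRAYSCALE)  # placeholder patch
ii = cv2.integral(patch)  # summed area table with shape (h + 1, w + 1)

def region_sum(x, y, w, h):
    # sum of all pixels inside the rectangle via four integral-image lookups
    return int(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

# a two-rectangle feature: sum of the top region minus sum of the region below it
white = region_sum(0, 0, 24, 12)
black = region_sum(0, 12, 24, 12)
print("feature value:", white - black)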
At each stop along the sliding window path, the window must pass a series of tests where each subsequent test is more computationally expensive than the previous one. If any one test fails, the window is automatically discarded. Some Haar cascade benefits are that they’re very fast at computing Haar-like features due to the use of integral images (also called summed area tables). They are also very efficient for feature selection through the use of the AdaBoost algorithm. Perhaps most importantly, they can detect faces in images regardless of the location or scale of the face. Finally, the Viola-Jones algorithm for object detection is capable of running in real-time. Problems and limitations of Haar cascades However, it’s not all good news. The detector tends to be the most effective for frontal images of the face. Haar cascades are notoriously prone to false-positives — the Viola-Jones algorithm can easily report a face in an image when no face is present. Finally, as we’ll see in the rest of this lesson, it can be quite tedious to tune the OpenCV detection parameters.
There will be times when we can detect all the faces in an image. There will be other times when (1) regions of an image are falsely classified as faces, and/or (2) faces are missed entirely. If the Viola-Jones algorithm interests you, take a look at the official Wikipedia page and the original paper. The Wikipedia page does an excellent job of breaking the algorithm down into easy-to-digest pieces. How do you use Haar cascades with OpenCV? Figure 5: The official OpenCV GitHub maintains a repository of pre-trained Haar cascades. The OpenCV library maintains a repository of pre-trained Haar cascades. Most of these Haar cascades are used for face detection, eye detection, mouth detection, or full/partial body detection. Other pre-trained Haar cascades are provided, including one for Russian license plates and another for cat face detection. We can load a pre-trained Haar cascade from disk using the cv2.CascadeClassifier function: detector = cv2.CascadeClassifier(path) Once the Haar cascade is loaded into memory, we can make predictions with it using the detectMultiScale function: results = detector.detectMultiScale( gray, scaleFactor=1.05, minNeighbors=5, minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE) The result is a list of bounding boxes, where each box contains the starting x and y coordinates along with its width (w) and height (h). You will gain hands-on experience with both cv2.CascadeClassifier and detectMultiScale later in this tutorial.
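If you are curious which pre-trained cascades you already have locally, the pip wheel of opencv-python exposes the directory they live in; the following short snippet (my own, not from the post) simply lists them:

import os
import cv2

# cv2.data.haarcascades points at the XML files bundled with the pip wheel
for name in sorted(os.listdir(cv2.data.haarcascades)):
    if name.endswith(".xml"):
        print(name)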
Configuring your development environment To follow this guide, you need to have the OpenCV library installed on your system. Luckily, OpenCV is pip-installable: $ pip install opencv-contrib-python If you need help configuring your development environment for OpenCV, I highly recommend that you read my pip install OpenCV guide — it will have you up and running in a matter of minutes. Having problems configuring your development environment? Figure 6: Having trouble configuring your dev environment? Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch University — you’ll be up and running with this tutorial in a matter of minutes. All that said, are you: Short on time?Learning on your employer’s administratively locked system?Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?Ready to run the code right now on your Windows, macOS, or Linux systems? Then join PyImageSearch University today! Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides that are pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required.
And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux! Project structure Before we can learn about OpenCV’s Haar cascade functionality, we first need to review our project directory structure. Start by accessing the “Downloads” section of this tutorial to retrieve the source code and pre-trained Haar cascades: $ tree . --dirsfirst . ├── cascades │ ├── haarcascade_eye.xml │ ├── haarcascade_frontalface_default.xml │ └── haarcascade_smile.xml └── opencv_haar_cascades.py 1 directory, 4 files We will apply three Haar cascades to a real-time video stream. These Haar cascades reside in the cascades directory and include: haarcascade_frontalface_default.xml: Detects faceshaarcascade_eye.xml: Detects the left and right eyes on the facehaarcascade_smile.xml: While the filename suggests that this model is a “smile detector,” it actually detects the presence of the “mouth” on a face Our opencv_haar_cascades.py script will load these three Haar cascades from disk and apply them to a video stream, all in real-time. Implementing OpenCV Haar Cascade object detection (face, eyes, and mouth) With our project directory structure reviewed, we can implement our OpenCV Haar cascade detection script. Open the opencv_haar_cascades.py file in your project directory structure, and we can get to work: # import the necessary packages from imutils.video import VideoStream import argparse import imutils import time import cv2 import os Lines 2-7 import our required Python packages. We need VideoStream to access our webcam, argparse for command line arguments, imutils for our OpenCV convenience functions, time to insert a small sleep statement, cv2 for our OpenCV bindings, and os to build file paths, agnostic of which operating system you are on (Windows uses different path separators than Unix machines, such as macOS and Linux). We have only a single command line argument to parse: # construct the argument parser and parse the arguments ap = argparse.
ArgumentParser() ap.add_argument("-c", "--cascades", type=str, default="cascades", help="path to input directory containing haar cascades") args = vars(ap.parse_args()) The --cascades command line arguments point to the directory containing our pre-trained face, eye, and mouth Haar cascades. We proceed to load each of these Haar cascades from disk: # initialize a dictionary that maps the name of the haar cascades to # their filenames detectorPaths = { "face": "haarcascade_frontalface_default.xml", "eyes": "haarcascade_eye.xml", "smile": "haarcascade_smile.xml", } # initialize a dictionary to store our haar cascade detectors print("[INFO] loading haar cascades...") detectors = {} # loop over our detector paths for (name, path) in detectorPaths.items(): # load the haar cascade from disk and store it in the detectors # dictionary path = os.path.sep.join([args["cascades"], path]) detectors[name] = cv2.CascadeClassifier(path) Lines 17-21 define a dictionary that maps the name of the detector (key) to its corresponding file path (value). Line 25 initializes our detectors dictionary. It will have the same key as detectorPaths, but the value will be the Haar cascade once it’s been loaded from disk via cv2.CascadeClassifier. On Line 28, we loop over each of the Haar cascade names and paths, respectively. For each detector, we build the full file path, load it from disk, and store it in our detectors dictionary. With each of our three Haar cascades loaded from disk, we can move on to accessing our video stream: # initialize the video stream and allow the camera sensor to warm up print("[INFO] starting video stream...") vs = VideoStream(src=0).start() time.sleep(2.0) # loop over the frames from the video stream while True: # grab the frame from the video stream, resize it, and convert it # to grayscale frame = vs.read() frame = imutils.resize(frame, width=500) gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # perform face detection using the appropriate haar cascade faceRects = detectors["face"].detectMultiScale( gray, scaleFactor=1.05, minNeighbors=5, minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE) Lines 36-37 initialize our VideoStream, inserting a small time.sleep statement to allow our camera sensor to warm up. From there, we proceed to: Loop over frames from our video streamRead the next frameResize itConvert it to grayscale Once the frame has been converted to grayscale, we apply the face detector Haar cascade to locate any faces in the input frame. The next step is to loop over each of the face locations and apply our eye and mouth Haar cascades: # loop over the face bounding boxes for (fX, fY, fW, fH) in faceRects: # extract the face ROI faceROI = gray[fY:fY+ fH, fX:fX + fW] # apply eyes detection to the face ROI eyeRects = detectors["eyes"].detectMultiScale( faceROI, scaleFactor=1.1, minNeighbors=10, minSize=(15, 15), flags=cv2.CASCADE_SCALE_IMAGE) # apply smile detection to the face ROI smileRects = detectors["smile"].detectMultiScale( faceROI, scaleFactor=1.1, minNeighbors=10, minSize=(15, 15), flags=cv2.CASCADE_SCALE_IMAGE) Line 53 loops over all face bounding boxes. We then extract the face ROI on Line 55 using the bounding box information.
The next step is to apply our eye and mouth detectors to the face region. Eye detection is applied to the face ROI on Lines 58-60, while mouth detection is performed on Lines 63-65. And just like we looped over all face detections, we need to do the same for our eye and mouth detections: # loop over the eye bounding boxes for (eX, eY, eW, eH) in eyeRects: # draw the eye bounding box ptA = (fX + eX, fY + eY) ptB = (fX + eX + eW, fY + eY + eH) cv2.rectangle(frame, ptA, ptB, (0, 0, 255), 2) # loop over the smile bounding boxes for (sX, sY, sW, sH) in smileRects: # draw the smile bounding box ptA = (fX + sX, fY + sY) ptB = (fX + sX + sW, fY + sY + sH) cv2.rectangle(frame, ptA, ptB, (255, 0, 0), 2) # draw the face bounding box on the frame cv2.rectangle(frame, (fX, fY), (fX + fW, fY + fH), (0, 255, 0), 2) Lines 68-72 loop over all detected eye bounding boxes. However, notice how Lines 70 and 71 derive the eye bounding boxes relative to the original frame image dimensions. If we used the raw eX, eY, eW, and eH values, they would be in terms of the faceROI, not the original frame, hence why we add the face bounding box coordinates to the eye coordinates. We perform the same series of operations on Lines 75-79, this time for the mouth bounding boxes. Finally, we can wrap up by displaying our output frame on the screen: # show the output frame cv2.imshow("Frame", frame) key = cv2.waitKey(1) & 0xFF # if the `q` key was pressed, break from the loop if key == ord("q"): break # do a bit of cleanup cv2.destroyAllWindows() vs.stop() We then clean up by closing any windows opened by OpenCV and stopping our video stream. Haar cascade results We are now ready to apply Haar cascades with OpenCV! Be sure to access the “Downloads” section of this tutorial to retrieve the source code and example images. From there, pop open a terminal and execute the following command: $ python opencv_haar_cascades.py --cascades cascades [INFO] loading haar cascades... [INFO] starting video stream... The video above shows the results of applying our three OpenCV Haar cascades for face detection, eye detection, and mouth detection.
Our results run in real-time without a problem, but as you can see, the detections themselves are not the most accurate: We have no problem detecting my face, but the mouth and eye cascades fire several false-positives. When I blink, one of two things happens: (1) the eye region is no longer detected, or (2) it is incorrectly marked as a mouth. There also tend to be multiple mouth detections in many frames. OpenCV's face detection Haar cascades tend to be the most accurate. You should feel free to use them in your own applications where you can tolerate some false-positive detections and a bit of parameter tuning. That said, for facial structure detection, I strongly recommend you use facial landmarks instead — they are more stable and even faster than the eye and mouth Haar cascades themselves.
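If you do stick with the Haar cascades, one pragmatic filter (my own suggestion, not something from the tutorial's script) is to discard smile/mouth boxes whose top edge does not fall in the lower half of the face ROI, which removes many of the spurious detections near the eyes and forehead:

def filter_smiles(smileRects, faceH):
    # smile boxes are relative to the face ROI, exactly as detectMultiScale returns them;
    # keep only those that start in the lower half of the face
    kept = []
    for (sX, sY, sW, sH) in smileRects:
        if sY > faceH // 2:
            kept.append((sX, sY, sW, sH))
    return kept

# usage inside the frame loop shown earlier: smileRects = filter_smiles(smileRects, fH)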
Summary In this tutorial, you learned how to apply Haar cascades with OpenCV. Specifically, you learned how to apply Haar cascades for face detection, eye detection, and mouth detection. Our face detection results were the most stable and accurate. Unfortunately, in many cases, the eye detection and mouth detection results were unusable — for facial feature/part extraction, I instead suggest you use facial landmarks. I'll wrap up by saying that there are many more accurate face detection methods, including HOG + Linear SVM and deep learning-based object detectors such as SSDs, Faster R-CNN, and YOLO. Still, if you need pure speed, you just can't beat OpenCV's Haar cascades.
https://pyimagesearch.com/2020/12/07/comparing-images-for-similarity-using-siamese-networks-keras-and-tensorflow/
In this tutorial, you will learn how to compare two images for similarity (and whether or not they belong to the same or different classes) using siamese networks and the Keras/TensorFlow deep learning libraries. This blog post is part three in our three-part series on the basics of siamese networks: Part #1: Building image pairs for siamese networks with Python (post from two weeks ago); Part #2: Training siamese networks with Keras, TensorFlow, and Deep Learning (last week's tutorial); Part #3: Comparing images using siamese networks (this tutorial). Last week we learned how to train our siamese network. Our model performed well on our test set, correctly verifying whether two images belonged to the same or different classes. After training, we serialized the model to disk. Soon after last week's tutorial was published, I received an email from PyImageSearch reader Scott asking: "Hi Adrian — thanks for these guides on siamese networks. I've heard them mentioned in deep learning spaces but honestly was never really sure how they worked or what they did. This series really helped clear my doubts and has even helped me in one of my work projects. My question is: How do we take our trained siamese network and make predictions on it from images outside of the training and testing set? Is that possible?" You bet it is, Scott. And that's exactly what we are covering here today.
To learn how to compare images for similarity using siamese networks, just keep reading. Looking for the source code to this post? Jump Right To The Downloads Section Comparing images for similarity using siamese networks, Keras, and TensorFlow In the first part of this tutorial, we’ll discuss the basic process of how a trained siamese network can be used to predict the similarity between two image pairs and, more specifically, whether the two input images belong to the same or different classes. You’ll then learn how to configure your development environment for siamese networks using Keras and TensorFlow. Once your development environment is configured, we’ll review our project directory structure and then implement a Python script to compare images for similarity using our siamese network. We’ll wrap up this tutorial with a discussion of our results. How can siamese networks predict similarity between image pairs? Figure 1: Using siamese networks to compare two images for similarity results in a similarity score. The closer the score is to “1”, the more similar the images are (and are thus more likely to belong to the same class). Conversely, the closer the score is to “0”, the less similar the two images are.
In last week's tutorial you learned how to train a siamese network to verify whether a pair of digit images belongs to the same or different classes. We then serialized our siamese model to disk after training. The question then becomes: "How can we use our trained siamese network to predict the similarity between two images?" The answer is that we utilize the final layer in our siamese network implementation, which is a sigmoid activation function. The sigmoid activation function has an output in the range [0, 1], meaning that when we present an image pair to our siamese network, the model will output a value >= 0 and <= 1. A value of 0 means that the two images are completely dissimilar, while a value of 1 implies that the images are very similar. An example of such a similarity can be seen in Figure 1 at the top of this section: Comparing a "7" to a "0" has a low similarity score of only 0.02. However, comparing a "0" to another "0" has a very high similarity score of 0.93. A good rule of thumb is to use a similarity cutoff value of 0.5 (50%) as your threshold: If two image pairs have an image similarity of <= 0.5, then they belong to different classes. Conversely, if pairs have a predicted similarity of > 0.5, then they belong to the same class. In this manner you can use siamese networks to (1) compare images for similarity and (2) determine whether they belong to the same class or not.
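That decision rule is simple enough to capture in a couple of lines. The helper below is just my own sketch of the 0.5 rule of thumb described above, with the threshold exposed as a parameter in case your application calls for a stricter cutoff:

def same_class(similarity, threshold=0.5):
    # similarity is the sigmoid output of the siamese network, in the range [0, 1]
    return similarity > threshold

print(same_class(0.93))  # True  -> the pair very likely shows the same digit
print(same_class(0.02))  # False -> the pair very likely shows different digits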
Practical use cases of using siamese networks include: Face recognition: Given two separate images containing a face, determine if it’s the same person in both photos. Signature verification: When presented with two signatures, determine whether one is a forgery or not. Prescription pill identification: Given two prescription pills, determine whether they are the same medication or different medications. Configuring your development environment This series of tutorials on siamese networks utilizes Keras and TensorFlow. If you intend on following this tutorial or the previous two parts in this series, I suggest you take the time now to configure your deep learning development environment. You can utilize either of these two guides to install TensorFlow and Keras on your system: How to install TensorFlow 2.0 on UbuntuHow to install TensorFlow 2.0 on macOS Either tutorial will help you configure your system with all the necessary software for this blog post in a convenient Python virtual environment. Having problems configuring your development environment? Figure 2: Having trouble configuring your dev environment? Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch Plus —- you’ll be up and running with this tutorial in a matter of minutes.
All that said, are you: Short on time?Learning on your employer’s administratively locked system?Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?Ready to run the code right now on your Windows, macOS, or Linux system? Then join PyImageSearch Plus today! Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides that are pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required. And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux! Project structure Before we get too far into this tutorial, let’s first take a second and review our project directory structure. Start by making sure you use the “Downloads” section of this tutorial to download the source code and example images. From there, let’s take a look at the project: $ tree . --dirsfirst . ├── examples │ ├── image_01.png ... │ └── image_13.png ├── output │ ├── siamese_model │ │ ├── variables │ │ │ ├── variables.data-00000-of-00001 │ │ │ └── variables.index │ │ └── saved_model.pb │ └── plot.png ├── pyimagesearch │ ├── config.py │ ├── siamese_network.py │ └── utils.py ├── test_siamese_network.py └── train_siamese_network.py 4 directories, 21 files Inside the examples directory we have a number of example digits: Figure 3: Examples of digits we’ll be comparing for similarity using siamese networks implemented with Keras and TensorFlow.
We’ll be sampling pairs of these digits and then comparing them for similarity using our siamese network. The output directory contains the training history plot (plot.png) and our trained/serialized siamese network model (siamese_model/). Both of these files were generated in last week’s tutorial on training your own custom siamese network models — make sure you read that tutorial before you continue, as it’s required reading for today! The pyimagesearch module contains three Python files: config.py: Our configuration file storing important variables such as output file paths and training configurations (including image input dimensions, batch size, epochs, etc.) siamese_network.py: Our implementation of our siamese network architecture utils.py: Contains helper configuration functions to generate image pairs, compute Euclidean distances, and plot training history path The train_siamese_network.py script: Imports the configuration, siamese network implementation, and utility functionsLoads the MNIST dataset from diskGenerates image pairsCreates our training/testing dataset splitTrains our siamese networkSerializes the trained siamese network to disk I will not be covering these four scripts today, as I have already covered them in last week’s tutorial on how to train siamese networks. I’ve included these files in the project directory structure for today’s tutorial as a matter of completeness, but again, for a full review of these files, what they do, and how they work, refer back to last week’s tutorial. Finally, we have the focus of today’s tutorial, test_siamese_network.py. This script will: Load our trained siamese network model from disk Grab the paths to the sample digit images in the examples directory Randomly construct pairs of images from these samples Compare the pairs for similarity using the siamese network Let’s get to work! Implementing our siamese network image similarity script We are now ready to implement siamese networks for image similarity using Keras and TensorFlow. Start by making sure you use the “Downloads” section of this tutorial to download the source code, example images, and pre-trained siamese network model.
From there, open up test_siamese_network.py, and follow along: # import the necessary packages from pyimagesearch import config from pyimagesearch import utils from tensorflow.keras.models import load_model from imutils.paths import list_images import matplotlib.pyplot as plt import numpy as np import argparse import cv2 We start off by importing our required Python packages (Lines 2-9). Notable imports include: config: Contains important configurations, including the path to our trained/serialized siamese network model residing on disk utils: Contains the euclidean_distance function utilized in our Lambda layer of the siamese network — we need to import this package to suppress any UserWarnings about loading Lambda layers from disk load_model: The Keras/TensorFlow function used to load our trained siamese network from disk list_images: Grabs the paths to all images in our examples directory Let’s move on to parsing our command line arguments: # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-i", "--input", required=True, help="path to input directory of testing images") args = vars(ap.parse_args()) We only need a single argument here, --input, which is the path to our directory on disk containing the images we want to compare for similarity. When running this script, we’ll supply the path to the examples directory in our project. With our command line arguments parsed, we can now grab all testImagePaths in our --input directory: # grab the test dataset image paths and then randomly generate a # total of 10 image pairs print("[INFO] loading test dataset...") testImagePaths = list(list_images(args["input"])) np.random.seed(42) pairs = np.random.choice(testImagePaths, size=(10, 2)) # load the model from disk print("[INFO] loading siamese model...") model = load_model(config. MODEL_PATH) Line 20 grabs the paths to all of our example images containing digits we want to compare for similarity. Line 22 randomly generates a total of 10 pairs of images from these testImagePaths. Line 26 loads our siamese network from disk using the load_model function. With the siamese network loaded from disk, we can now compare images for similarity: # loop over all image pairs for (i, (pathA, pathB)) in enumerate(pairs): # load both the images and convert them to grayscale imageA = cv2.imread(pathA, 0) imageB = cv2.imread(pathB, 0) # create a copy of both the images for visualization purpose origA = imageA.copy() origB = imageB.copy() # add channel a dimension to both the images imageA = np.expand_dims(imageA, axis=-1) imageB = np.expand_dims(imageB, axis=-1) # add a batch dimension to both images imageA = np.expand_dims(imageA, axis=0) imageB = np.expand_dims(imageB, axis=0) # scale the pixel values to the range of [0, 1] imageA = imageA / 255.0 imageB = imageB / 255.0 # use our siamese model to make predictions on the image pair, # indicating whether or not the images belong to the same class preds = model.predict([imageA, imageB]) proba = preds[0][0] Line 29 starts a loop over all image pairs. 
For each image pair we: load the two images from disk (Lines 31 and 32), clone the two images so that we can draw/visualize them later (Lines 35 and 36), add a channel dimension (Lines 39 and 40) along with a batch dimension (Lines 43 and 44), and scale the pixel intensities from the range [0, 255] to [0, 1], just like we did when training our siamese network last week (Lines 47 and 48). Once imageA and imageB are preprocessed, we compare them for similarity by making a call to the .predict method on our siamese network model (Line 52), resulting in the probability/similarity score for the two images (Line 53).
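The exact same preprocessing has to be applied to any new image you want to feed the model, so it is worth pulling into a helper. This is my own refactoring sketch, not part of the original test_siamese_network.py, and the file paths in the usage note are placeholders:

import cv2
import numpy as np

def preprocess(path):
    # load as grayscale, add the channel and batch dimensions, scale to [0, 1]
    image = cv2.imread(path, 0)
    image = np.expand_dims(image, axis=-1)
    image = np.expand_dims(image, axis=0)
    return image.astype("float32") / 255.0

# usage with the loaded model:
# proba = model.predict([preprocess("digit_a.png"), preprocess("digit_b.png")])[0][0]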
The final step is to display the image pair and corresponding similarity score to our screen: # initialize the figure fig = plt.figure("Pair #{}".format(i + 1), figsize=(4, 2)) plt.suptitle("Similarity: {:.2f}".format(proba)) # show first image ax = fig.add_subplot(1, 2, 1) plt.imshow(origA, cmap=plt.cm.gray) plt.axis("off") # show the second image ax = fig.add_subplot(1, 2, 2) plt.imshow(origB, cmap=plt.cm.gray) plt.axis("off") # show the plot plt.show() Lines 56 and 57 create a matplotlib figure for the pair and display the similarity score as the title of the plot. Lines 60-67 plot each of the images in the pair on the figure, while Line 70 displays the output to our screen. Congrats on implementing siamese networks for image comparison and similarity! Let’s see the results of our hard work in the next section. Image similarity results using siamese networks with Keras and TensorFlow We are now ready to compare images for similarity using our siamese network! Before we examine the results, make sure you: Have read our previous tutorial on training siamese networks so you understand how our siamese network model was trained and generatedUse the “Downloads” section of this tutorial to download the source code, pre-trained siamese network, and example images From there, open up a terminal, and execute the following command: $ python test_siamese_network.py --input examples [INFO] loading test dataset... [INFO] loading siamese model... Figure 4: The results of comparing images for similarity using siamese networks and the Keras/TensorFlow deep learning libraries. Note: Are you getting an error related to TypeError: ('Keyword argument not understood:', 'groups')? If so, keep in mind that the pre-trained model included in the “Downloads” section of this tutorial was trained using TensorFlow 2.3. You should therefore be using TensorFlow 2.3 when running test_siamese_network.py. If you instead prefer to use a different version of TensorFlow, simply run train_siamese_network.py to train the model and generate a new siamese_model serialized to disk.
From there you’ll be able to run test_siamese_network.py without error. Figure 4 above displays a montage of our image similarity results. For the first image pair, one contains a “7”, while the other contains a “1” — clearly these are not the same image, and the similarity score is low at 42%. Our siamese network has correctly marked these images as belonging to different classes. The next image pair consists of two “0” digits. Our siamese network has predicted a very high similarity score of 97%, indicating that these two images belong to the same class. You can see the same pattern for all other image pairs in Figure 4. Images that have a high similarity score belong to the same class, while image pairs with low similarity scores belong to different classes. Since we used the sigmoid activation layer as the final layer in our siamese network (which has an output value in the range [0, 1]), a good rule of thumb is to use a similarity cutoff value of 0.5 (50%) as your threshold: If two image pairs have an image similarity of <= 0.5, then they belong to different classes. Conversely, if pairs have a predicted similarity of > 0.5, then they belong to the same class.
You can use this rule of thumb in your own projects when using siamese networks to compute image similarity.
Summary In this tutorial you learned how to compare two images for similarity and, more specifically, whether they belonged to the same or different classes. We accomplished this task using siamese networks along with the Keras and TensorFlow deep learning libraries. This post is the final part in our three part series on introduction to siamese networks.
For easy reference, here are links to each guide in the series: Part #1: Building image pairs for siamese networks with Python; Part #2: Training siamese networks with Keras, TensorFlow, and Deep Learning; Part #3: Comparing images for similarity using siamese networks, Keras, and TensorFlow (this tutorial). In the near future I'll be covering more advanced siamese network topics, including image triplets, contrastive loss, triplet loss, face recognition with siamese networks, and one-shot learning with siamese networks. Stay tuned for these tutorials; you don't want to miss them!
https://pyimagesearch.com/2020/12/21/detecting-aruco-markers-with-opencv-and-python/
In this tutorial you will learn how to detect ArUco markers in images and real-time video streams using OpenCV and Python. This blog post is part two in our three-part series on ArUco markers and fiducials: Generating ArUco markers with OpenCV and Python (last week's post); Detecting ArUco markers in images and video with OpenCV (today's tutorial); Automatically determining ArUco marker type with OpenCV (next week's post). Last week we learned what an ArUco dictionary is, how to select an ArUco dictionary appropriate to our task, how to generate ArUco markers using OpenCV, and how to create ArUco markers using online tools. Today we're going to learn how to actually detect ArUco markers using OpenCV. To learn how to detect ArUco markers in images and real-time video with OpenCV, just keep reading. Detecting ArUco markers with OpenCV and Python In the first part of this tutorial, you will learn about OpenCV's cv2.aruco module and how to detect ArUco markers in images and real-time video streams by: specifying your ArUco dictionary, creating the parameters to the ArUco detector (which is typically just a single line of code using the default values), and applying the cv2.aruco.detectMarkers function to actually detect the ArUco markers in your image or video stream. From there we'll review our project directory structure and implement two Python scripts: one Python script to detect ArUco markers in images, and another Python script to detect ArUco markers in real-time video streams. We'll wrap up this tutorial on ArUco marker detection using OpenCV with a discussion of our results. OpenCV ArUco marker detection Figure 1: Flowchart of steps required to detect ArUco markers with OpenCV. As I discussed in last week's tutorial, the OpenCV library comes with built-in ArUco support, both for generating ArUco markers and for detecting them. Detecting ArUco markers with OpenCV is a three-step process made possible via the cv2.aruco submodule: Step #1: Use the cv2.aruco.Dictionary_get function to grab the dictionary of ArUco markers we're using. Step #2: Define the ArUco detection parameters using
cv2.aruco.DetectorParameters_create. Step #3: Perform ArUco marker detection via the cv2.aruco.detectMarkers function. Most important to us, we need to learn how to use the detectMarkers function. Understanding the "cv2.aruco.detectMarkers" function We can define an ArUco marker detection procedure in, essentially, only 3-4 lines of code: arucoDict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_6X6_50) arucoParams = cv2.aruco.DetectorParameters_create() (corners, ids, rejected) = cv2.aruco.detectMarkers(image, arucoDict, parameters=arucoParams) The cv2.aruco.detectMarkers function accepts three arguments: image: The input image that we want to detect ArUco markers in; arucoDict: The ArUco dictionary we are using; parameters: The ArUco parameters used for detection (unless you have a good reason to modify the parameters, the default parameters returned by cv2.aruco.DetectorParameters_create are typically sufficient). After applying ArUco tag detection, the cv2.aruco.detectMarkers method returns three values: corners: A list containing the (x, y)-coordinates of our detected ArUco markers; ids: The ArUco IDs of the detected markers; rejected: A list of potential markers that were found but ultimately rejected because the inner code of the marker could not be parsed (visualizing the rejected markers is often useful for debugging purposes). Later in this post you will see how to use the cv2.aruco.detectMarkers function to detect ArUco markers in images and real-time video streams. Configuring your development environment In order to generate and detect ArUco markers, you need to have the OpenCV library installed. Luckily, OpenCV is pip-installable: $ pip install opencv-contrib-python If you need help configuring your development environment for OpenCV 4.3+, I highly recommend that you read my pip install opencv guide — it will have you up and running in a matter of minutes.
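Before moving on, here is how those three return values typically fit together in practice. This is a sketch of mine rather than the post's final script, the image path is a placeholder, and it assumes the legacy cv2.aruco API used throughout this series (OpenCV 4.5/4.6-era contrib builds; newer OpenCV releases moved this functionality into an ArucoDetector class):

import cv2

image = cv2.imread("example.png")  # placeholder image containing markers
arucoDict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_ARUCO_ORIGINAL)
arucoParams = cv2.aruco.DetectorParameters_create()
(corners, ids, rejected) = cv2.aruco.detectMarkers(image, arucoDict,
    parameters=arucoParams)

if ids is not None:
    # draw the detected markers on the image and report each marker's ID
    cv2.aruco.drawDetectedMarkers(image, corners, ids)
    for markerID in ids.flatten():
        print("[INFO] detected marker ID: {}".format(markerID))
else:
    print("[INFO] no markers detected ({} candidates rejected)".format(len(rejected)))

cv2.imshow("ArUco", image)
cv2.waitKey(0)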
Project structure Before we can learn how to detect ArUco tags in images, let's first review our project directory structure so you have a good idea on how our project is organized and what Python scripts we'll be using.
https://pyimagesearch.com/2020/12/21/detecting-aruco-markers-with-opencv-and-python/
Start by using the “Downloads” section of this tutorial to download the source code and example images. From there, we can inspect the project directory: $ tree . --dirsfirst . ├── images │ ├── example_01.png │ └── example_02.png ├── detect_aruco_image.py └── detect_aruco_video.py 2 directories, 9 files Today we’ll be reviewing two Python scripts: detect_aruco_image.py: Detects ArUco tags in images. The example images we’ll be applying this script to reside in the images/ directory. detect_aruco_video.py: Applies ArUco detection to real-time video streams. I’ll be using my webcam as an example, but you could pipe in frames from a video file residing on disk as well. With our project directory structure reviewed, we can move on to implementing ArUco tag detection with OpenCV! Detecting ArUco markers with OpenCV in images Ready to learn how to detect ArUco tags in images using OpenCV? Open up the detect_aruco_image.py file in your project directory, and let’s get to work: # import the necessary packages import argparse import imutils import cv2 import sys We start off by importing our required Python packages.
https://pyimagesearch.com/2020/12/21/detecting-aruco-markers-with-opencv-and-python/
We’ll use argparse to parse our command line arguments, imutils for resizing images, cv2 for our OpenCV bindings, and sys in the event that we need to prematurely exit our script. Next comes our command line arguments: # construct the argument parser and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-i", "--image", required=True, help="path to input image containing ArUCo tag") ap.add_argument("-t", "--type", type=str, default="DICT_ARUCO_ORIGINAL", help="type of ArUCo tag to detect") args = vars(ap.parse_args()) We have two command line arguments that we need to parse: --image: The path to the input image containing any ArUco tags we want to detect --type: The type of ArUco tags that we’ll be detecting Setting the --type argument correctly is absolutely critical to successfully detect ArUco tags in input images. Simply put: The --type argument that we supply here must be the same ArUco type used to generate the tags in the input images. If one type was used to generate ArUco tags and then you use a different type when trying to detect them, the detection will fail, and you’ll end up with zero detected ArUco tags. Therefore, you must make absolutely certain that the type used to generate the ArUco tags is the same type you are using for the detection phase. Note: Don’t know what ArUco dictionary was used to generate the tags in your input images? Don’t worry, I’ve got you covered. Next week I’ll be showing you one of the Python scripts in my personal arsenal that I break out when I can’t identify what type a given ArUco tag is. This script automatically identifies the ArUco tag type.
https://pyimagesearch.com/2020/12/21/detecting-aruco-markers-with-opencv-and-python/
Stay tuned for next week’s tutorial, where I’ll review it in detail. Next up comes our ARUCO_DICT, which enumerates each of the ArUco tag types that OpenCV supports: # define names of each possible ArUco tag OpenCV supports ARUCO_DICT = { "DICT_4X4_50": cv2.aruco. DICT_4X4_50, "DICT_4X4_100": cv2.aruco. DICT_4X4_100, "DICT_4X4_250": cv2.aruco. DICT_4X4_250, "DICT_4X4_1000": cv2.aruco. DICT_4X4_1000, "DICT_5X5_50": cv2.aruco. DICT_5X5_50, "DICT_5X5_100": cv2.aruco. DICT_5X5_100, "DICT_5X5_250": cv2.aruco. DICT_5X5_250, "DICT_5X5_1000": cv2.aruco. DICT_5X5_1000, "DICT_6X6_50": cv2.aruco.
https://pyimagesearch.com/2020/12/21/detecting-aruco-markers-with-opencv-and-python/
DICT_6X6_50, "DICT_6X6_100": cv2.aruco. DICT_6X6_100, "DICT_6X6_250": cv2.aruco. DICT_6X6_250, "DICT_6X6_1000": cv2.aruco. DICT_6X6_1000, "DICT_7X7_50": cv2.aruco. DICT_7X7_50, "DICT_7X7_100": cv2.aruco. DICT_7X7_100, "DICT_7X7_250": cv2.aruco. DICT_7X7_250, "DICT_7X7_1000": cv2.aruco. DICT_7X7_1000, "DICT_ARUCO_ORIGINAL": cv2.aruco. DICT_ARUCO_ORIGINAL, "DICT_APRILTAG_16h5": cv2.aruco. DICT_APRILTAG_16h5, "DICT_APRILTAG_25h9": cv2.aruco.
https://pyimagesearch.com/2020/12/21/detecting-aruco-markers-with-opencv-and-python/
DICT_APRILTAG_25h9, "DICT_APRILTAG_36h10": cv2.aruco. DICT_APRILTAG_36h10, "DICT_APRILTAG_36h11": cv2.aruco. DICT_APRILTAG_36h11 } The key to this dictionary is a human-readable string (i.e., the name of the ArUco tag type). The key then maps to the value, which is OpenCV’s unique identifier for the ArUco tag type. Using this dictionary we can take our input --type command line argument, pass it through ARUCO_DICT, and then obtain the unique identifier for the ArUco tag type. The following Python shell block shows you a simple example of how this lookup operation is performed: >>> print(args) {'type': 'DICT_5X5_100'} >>> arucoType = ARUCO_DICT[args["type"]] >>> print(arucoType) 5 >>> 5 == cv2.aruco. DICT_5X5_100 True >>> I covered the types of ArUco dictionaries, including their name conventions in my previous tutorial, Generating ArUco markers with OpenCV and Python. If you would like more information on ArUco dictionaries, please refer there; otherwise, simply understand that this dictionary lists out all possible ArUco tags that OpenCV can detect. Next, let’s move on to loading our input image from disk: # load the input image from disk and resize it print("[INFO] loading image...") image = cv2.imread(args["image"]) image = imutils.resize(image, width=600) # verify that the supplied ArUCo tag exists and is supported by # OpenCV if ARUCO_DICT.get(args["type"], None) is None: print("[INFO] ArUCo tag of '{}' is not supported".format( args["type"])) sys.exit(0) # load the ArUCo dictionary, grab the ArUCo parameters, and detect # the markers print("[INFO] detecting '{}' tags...".format(args["type"])) arucoDict = cv2.aruco. Dictionary_get(ARUCO_DICT[args["type"]]) arucoParams = cv2.aruco.
https://pyimagesearch.com/2020/12/21/detecting-aruco-markers-with-opencv-and-python/
DetectorParameters_create() (corners, ids, rejected) = cv2.aruco.detectMarkers(image, arucoDict, parameters=arucoParams) Lines 43 and 44 load our input image and then resize it to have a width of 600 pixels (such that the image can easily fit on our screen). If you have a high resolution input image that has small ArUco tags, you may need to adjust this resizing operation; otherwise, the ArUco tags may be too small to detect after the resizing operation. Line 48 checks to see if the ArUco --type name exists in the ARUCO_DICT. If it does not, then we exit the script, since we don’t have an ArUco dictionary available for the supplied --type. Otherwise, we: Load the ArUco dictionary using the --type and the ARUCO_DICT lookup (Line 56) Instantiate our ArUco detector parameters (Line 57) Apply ArUco detection using the cv2.aruco.detectMarkers function (Lines 58 and 59) The cv2.aruco.detectMarkers results in a 3-tuple of: corners: The (x, y)-coordinates of our detected ArUco markers ids: The identifiers of the ArUco markers (i.e., the ID encoded in the marker itself) rejected: A list of potential markers that were detected but ultimately rejected due to the code inside the marker not being able to be parsed Let’s now start visualizing the ArUco markers we have detected: # verify *at least* one ArUco marker was detected if len(corners) > 0: # flatten the ArUco IDs list ids = ids.flatten() # loop over the detected ArUCo corners for (markerCorner, markerID) in zip(corners, ids): # extract the marker corners (which are always returned in # top-left, top-right, bottom-right, and bottom-left order) corners = markerCorner.reshape((4, 2)) (topLeft, topRight, bottomRight, bottomLeft) = corners # convert each of the (x, y)-coordinate pairs to integers topRight = (int(topRight[0]), int(topRight[1])) bottomRight = (int(bottomRight[0]), int(bottomRight[1])) bottomLeft = (int(bottomLeft[0]), int(bottomLeft[1])) topLeft = (int(topLeft[0]), int(topLeft[1])) Line 62 makes a check to ensure at least one marker was detected. If so, we proceed to flatten the ArUco ids list (Line 64) and then loop over each of the corners and ids together. Each markerCorner is represented by a list of four (x, y)-coordinates (Line 70). These (x, y)-coordinates represent the top-left, top-right, bottom-right, and bottom-left corners of the ArUco tag (Line 71). Furthermore, the (x, y)-coordinates are always returned in that order. The topRight, bottomRight, bottomLeft, and topLeft variables are NumPy arrays; however, we need to cast them to integer values (int) such that we can use OpenCV’s drawing functions to visualize the markers on our image (Lines 74-77).
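Before we draw each marker by hand below, note that OpenCV also ships a convenience helper, cv2.aruco.drawDetectedMarkers, which renders the marker outlines and IDs for you. The manual approach that follows gives us full control over colors, centers, and label placement, but when all you need is a quick sanity check, a short sketch like this (reusing the corners and ids from the detectMarkers call above) is enough:

# quick visualization using OpenCV's built-in helper instead of drawing
# each corner manually -- outlines every detected marker and labels its ID
if len(corners) > 0:
	cv2.aruco.drawDetectedMarkers(image, corners, ids)
	cv2.imshow("Quick visualization", image)
	cv2.waitKey(0)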
https://pyimagesearch.com/2020/12/21/detecting-aruco-markers-with-opencv-and-python/
With the marker (x, y)-coordinates cast of integers, we can draw them on image: # draw the bounding box of the ArUCo detection cv2.line(image, topLeft, topRight, (0, 255, 0), 2) cv2.line(image, topRight, bottomRight, (0, 255, 0), 2) cv2.line(image, bottomRight, bottomLeft, (0, 255, 0), 2) cv2.line(image, bottomLeft, topLeft, (0, 255, 0), 2) # compute and draw the center (x, y)-coordinates of the ArUco # marker cX = int((topLeft[0] + bottomRight[0]) / 2.0) cY = int((topLeft[1] + bottomRight[1]) / 2.0) cv2.circle(image, (cX, cY), 4, (0, 0, 255), -1) # draw the ArUco marker ID on the image cv2.putText(image, str(markerID), (topLeft[0], topLeft[1] - 15), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2) print("[INFO] ArUco marker ID: {}".format(markerID)) # show the output image cv2.imshow("Image", image) cv2.waitKey(0) Lines 80-83 draw the bounding box of the ArUco tag on our image using cv2.line calls. We then compute the center (x, y)-coordinates of the ArUco marker and draw the center on the image via a call to cv2.circle (Lines 87-89). Our final visualization step is to draw the markerID on the image and print it to our terminal (Lines 92-95). The final output visualization is displayed to our screen on Lines 98 and 99. OpenCV ArUco marker detection results Let’s put our OpenCV ArUco detector to work! Use the “Downloads” section of this tutorial to download the source code and example images. From there, you can execute the following command: $ python detect_aruco_image.py --image images/example_01.png --type DICT_5X5_100 [INFO] loading image... [INFO] detecting 'DICT_5X5_100' tags... [INFO] ArUco marker ID: 42 [INFO] ArUco marker ID: 24 [INFO] ArUco marker ID: 70 [INFO] ArUco marker ID: 66 [INFO] ArUco marker ID: 87 Figure 3: Detecting ArUco tags in an input image using OpenCV. These ArUco tags were generated in last week’s tutorial on Generating ArUco markers with OpenCV and Python. This image contains the ArUco markers that we generated in last week’s blog post. I took each of the five individual ArUco markers and constructed a montage of them in a single image.
https://pyimagesearch.com/2020/12/21/detecting-aruco-markers-with-opencv-and-python/
As Figure 3 shows, we’ve been able to correctly detect each of the ArUco markers and extract their IDs. Let’s try a different image, this one containing ArUco markers not generated by us: $ python detect_aruco_image.py --image images/example_02.png --type DICT_ARUCO_ORIGINAL [INFO] loading image... [INFO] detecting 'DICT_ARUCO_ORIGINAL' tags... [INFO] ArUco marker ID: 241 [INFO] ArUco marker ID: 1007 [INFO] ArUco marker ID: 1001 [INFO] ArUco marker ID: 923 Figure 4: Detecting ArUco tags with OpenCV and Python. Figure 4 displays the results of our OpenCV ArUco detector. As you can see, I have detected each of the four ArUco markers on my Pantone color matching card (which we’ll be using in a number of upcoming tutorials, so get used to seeing it). Looking at the command line arguments to the above script, you may be wondering: “Hey Adrian, how did you know to use DICT_ARUCO_ORIGINAL and not some other ArUco dictionary.” The short answer is that I didn’t … at least, not initially. I actually have a “secret weapon” up my sleeve. I’ve put together a Python script that can automatically infer ArUco marker type, even if I don’t know what type of marker is in an image. I’ll be sharing that script with you next week, so be on the lookout for it. Detecting ArUco markers in real-time video streams with OpenCV In our previous section we learned how to detect ArUco markers in images … … but is it possible to detect ArUco markers in real-time video streams?
https://pyimagesearch.com/2020/12/21/detecting-aruco-markers-with-opencv-and-python/
The answer is yes, it absolutely is — and I’ll be showing you how to do so in this section. Open up the detect_aruco_video.py file in your project directory structure, and let’s get to work: # import the necessary packages from imutils.video import VideoStream import argparse import imutils import time import cv2 import sys Lines 2-7 import our required Python packages. These imports are identical to our previous script, with two exceptions: VideoStream: Used to access our webcam time: Inserts a small delay, allowing our camera sensor to warm up Let’s now parse our command line arguments: # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-t", "--type", type=str, default="DICT_ARUCO_ORIGINAL", help="type of ArUCo tag to detect") args = vars(ap.parse_args()) We only need a single command line argument here, --type, which is the type of ArUco tags we are going to detect in our video stream. Next we define the ARUCO_DICT, used to map the --type to OpenCV’s unique ArUco tag type: # define names of each possible ArUco tag OpenCV supports ARUCO_DICT = { "DICT_4X4_50": cv2.aruco. DICT_4X4_50, "DICT_4X4_100": cv2.aruco. DICT_4X4_100, "DICT_4X4_250": cv2.aruco. DICT_4X4_250, "DICT_4X4_1000": cv2.aruco. DICT_4X4_1000, "DICT_5X5_50": cv2.aruco. DICT_5X5_50, "DICT_5X5_100": cv2.aruco.
https://pyimagesearch.com/2020/12/21/detecting-aruco-markers-with-opencv-and-python/
DICT_5X5_100, "DICT_5X5_250": cv2.aruco. DICT_5X5_250, "DICT_5X5_1000": cv2.aruco. DICT_5X5_1000, "DICT_6X6_50": cv2.aruco. DICT_6X6_50, "DICT_6X6_100": cv2.aruco. DICT_6X6_100, "DICT_6X6_250": cv2.aruco. DICT_6X6_250, "DICT_6X6_1000": cv2.aruco. DICT_6X6_1000, "DICT_7X7_50": cv2.aruco. DICT_7X7_50, "DICT_7X7_100": cv2.aruco. DICT_7X7_100, "DICT_7X7_250": cv2.aruco. DICT_7X7_250, "DICT_7X7_1000": cv2.aruco.
https://pyimagesearch.com/2020/12/21/detecting-aruco-markers-with-opencv-and-python/
DICT_7X7_1000, "DICT_ARUCO_ORIGINAL": cv2.aruco. DICT_ARUCO_ORIGINAL, "DICT_APRILTAG_16h5": cv2.aruco. DICT_APRILTAG_16h5, "DICT_APRILTAG_25h9": cv2.aruco. DICT_APRILTAG_25h9, "DICT_APRILTAG_36h10": cv2.aruco. DICT_APRILTAG_36h10, "DICT_APRILTAG_36h11": cv2.aruco. DICT_APRILTAG_36h11 } Refer to the “Detecting ArUco markers with OpenCV in images” section above for a more detailed review of this code block. We can now load our ArUco dictionary: # verify that the supplied ArUCo tag exists and is supported by # OpenCV if ARUCO_DICT.get(args["type"], None) is None: print("[INFO] ArUCo tag of '{}' is not supported".format( args["type"])) sys.exit(0) # load the ArUCo dictionary and grab the ArUCo parameters print("[INFO] detecting '{}' tags...".format(args["type"])) arucoDict = cv2.aruco. Dictionary_get(ARUCO_DICT[args["type"]]) arucoParams = cv2.aruco. DetectorParameters_create() # initialize the video stream and allow the camera sensor to warm up print("[INFO] starting video stream...") vs = VideoStream(src=0).start() time.sleep(2.0) Lines 43-46 check to see if the ArUco tag --type exists in our ARUCO_DICT. If not, we exit the script.
https://pyimagesearch.com/2020/12/21/detecting-aruco-markers-with-opencv-and-python/
Otherwise, we load the arucoDict and grab the arucoParams for the detector (Lines 50 and 51). From there, we start our VideoStream and allow our camera sensor to warm up (Lines 55 and 56). We’re now ready to loop over frames from our video stream: # loop over the frames from the video stream while True: # grab the frame from the threaded video stream and resize it # to have a maximum width of 1000 pixels frame = vs.read() frame = imutils.resize(frame, width=1000) # detect ArUco markers in the input frame (corners, ids, rejected) = cv2.aruco.detectMarkers(frame, arucoDict, parameters=arucoParams) Line 62 grabs a frame from our video stream, which we then resize to have a width of 1000 pixels. We then apply the cv2.aruco.detectMarkers function to detect ArUco tags in the current frame. Let’s now parse the results of the ArUco tag detection: # verify *at least* one ArUco marker was detected if len(corners) > 0: # flatten the ArUco IDs list ids = ids.flatten() # loop over the detected ArUCo corners for (markerCorner, markerID) in zip(corners, ids): # extract the marker corners (which are always returned # in top-left, top-right, bottom-right, and bottom-left # order) corners = markerCorner.reshape((4, 2)) (topLeft, topRight, bottomRight, bottomLeft) = corners # convert each of the (x, y)-coordinate pairs to integers topRight = (int(topRight[0]), int(topRight[1])) bottomRight = (int(bottomRight[0]), int(bottomRight[1])) bottomLeft = (int(bottomLeft[0]), int(bottomLeft[1])) topLeft = (int(topLeft[0]), int(topLeft[1])) The above code block is essentially identical to the one from our detect_aruco_image.py script. Here we are: Verifying that at least one ArUco tag was detected (Line 70) Flattening the ArUco ids list (Line 72) Looping over all corners and ids together (Line 75) Extracting the marker corners in top-left, top-right, bottom-right, and bottom-left order (Lines 79 and 80) Converting the corner (x, y)-coordinates from NumPy array data types to Python integers such that we can draw the coordinates using OpenCV’s drawing functions (Lines 83-86) The final step here is to draw our ArUco tag bounding boxes just as we did in detect_aruco_image.py: # draw the bounding box of the ArUCo detection cv2.line(frame, topLeft, topRight, (0, 255, 0), 2) cv2.line(frame, topRight, bottomRight, (0, 255, 0), 2) cv2.line(frame, bottomRight, bottomLeft, (0, 255, 0), 2) cv2.line(frame, bottomLeft, topLeft, (0, 255, 0), 2) # compute and draw the center (x, y)-coordinates of the # ArUco marker cX = int((topLeft[0] + bottomRight[0]) / 2.0) cY = int((topLeft[1] + bottomRight[1]) / 2.0) cv2.circle(frame, (cX, cY), 4, (0, 0, 255), -1) # draw the ArUco marker ID on the frame cv2.putText(frame, str(markerID), (topLeft[0], topLeft[1] - 15), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2) # show the output frame cv2.imshow("Frame", frame) key = cv2.waitKey(1) & 0xFF # if the `q` key was pressed, break from the loop if key == ord("q"): break # do a bit of cleanup cv2.destroyAllWindows() vs.stop() Our visualization steps include: Drawing the outlines of the ArUco tag on the frame (Lines 89-92) Drawing the center of the ArUco tag (Lines 96-98) Displaying the ID of the detected ArUco tag (Lines 101-104) Finally, we display the output frame to our screen. If the q key is pressed while the window opened by OpenCV is active, we break from the script and cleanup our video pointers. OpenCV ArUco video detection results Ready to apply ArUco detection to real-time video streams? 
Start by using the “Downloads” section of this tutorial to download the source code and example images. From there, pop open a shell, and execute the following command: $ python detect_aruco_video.py As you can see, I’m easily able to detect the ArUco markers in real-time video.
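As noted back in the project structure section, you could also pipe in frames from a video file residing on disk rather than a webcam. A minimal variation of the frame loop using cv2.VideoCapture might look like the sketch below (the file name is hypothetical, and the dictionary/parameter setup simply mirrors the script's default --type):

# read frames from a video file on disk instead of a live webcam stream
import cv2

# same dictionary and parameters as the script's DICT_ARUCO_ORIGINAL default
arucoDict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_ARUCO_ORIGINAL)
arucoParams = cv2.aruco.DetectorParameters_create()

vs = cv2.VideoCapture("aruco_demo.mp4")  # hypothetical video file path

while True:
	# grab the next frame; 'grabbed' is False once the video is exhausted
	(grabbed, frame) = vs.read()
	if not grabbed:
		break

	# same detection call as in the webcam script
	(corners, ids, rejected) = cv2.aruco.detectMarkers(frame, arucoDict,
		parameters=arucoParams)

	# the drawing/visualization code from detect_aruco_video.py goes here
	cv2.imshow("Frame", frame)
	if cv2.waitKey(1) & 0xFF == ord("q"):
		break

vs.release()
cv2.destroyAllWindows()

Everything inside the loop stays the same; only the frame source changes.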
https://pyimagesearch.com/2020/12/21/detecting-aruco-markers-with-opencv-and-python/
What's next? We recommend PyImageSearch University. Course information: 84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024 ★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science? That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.
https://pyimagesearch.com/2020/12/21/detecting-aruco-markers-with-opencv-and-python/
If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. Inside PyImageSearch University you'll find: ✓ 84 courses on essential computer vision, deep learning, and OpenCV topics ✓ 84 Certificates of Completion ✓ 114+ hours of on-demand video ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques ✓ Pre-configured Jupyter Notebooks in Google Colab ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) ✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch ✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University Summary In this tutorial you learned how to detect ArUco markers in images and real-time video streams using OpenCV and Python. Detecting ArUco markers with OpenCV is a three-step process: Set what ArUco dictionary you are using. Define the parameters to the ArUco detector (typically the default options suffice). Apply the ArUco detector with OpenCV’s cv2.aruco.detectMarkers function.
https://pyimagesearch.com/2020/12/21/detecting-aruco-markers-with-opencv-and-python/
OpenCV’s ArUco marker detector is extremely fast and, as our results showed, is capable of detecting ArUco markers in real-time. Feel free to use this code as a starting point when using ArUco markers in your own computer vision pipelines. However, let’s say you are developing a computer vision project to automatically detect ArUco markers in images, but you don’t know what marker type is being used, and therefore, you can’t explicitly set the ArUco marker dictionary — what do you do then? How are you going to detect ArUco markers if you don’t know what marker type is being used? I’ll be answering that exact question in next week’s blog post. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below! Download the Source Code and FREE 17-page Resource Guide Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Download the code!
https://pyimagesearch.com/2020/12/28/determining-aruco-marker-type-with-opencv-and-python/
Click here to download the source code to this pos In this tutorial you will learn how to automatically determine ArUco marker type/dictionary with OpenCV and Python. Today’s tutorial is the final part of our three-part series on ArUco marker generation and detection: Generating ArUco markers with OpenCV and Python (tutorial from two weeks ago)Detecting ArUco markers in images and video with OpenCV (last week’s post)Automatically determining ArUco marker type with OpenCV and Python (today’s tutorial) So far in this series, we’ve learned how to generate and detect ArUco markers; however, these methods hinge on the fact that we already know what type of ArUco dictionary was used to generate the markers. That raises the question: What if you didn’t know the ArUco dictionary used to generate markers? Without knowing the ArUco dictionary used, you won’t be able to detect them in your images/video. When that happens you need a method that can automatically determine the ArUco marker type in an image — and that’s exactly what I’ll be showing you how to do today. To learn how to automatically determine ArUco marker type/dictionary with OpenCV, just keep reading. Looking for the source code to this post? Jump Right To The Downloads Section Determining ArUco marker type with OpenCV and Python In the first part of this tutorial, you will learn about the various types of ArUco markers and AprilTags. From there, you’ll implement a Python script that can automatically detect if any type of ArUco dictionary exists in an image or video stream, thereby allowing you to reliably detect ArUco markers even if you don’t know what ArUco dictionary was used to generate them! We’ll then review the results of our work and discuss next steps (hint: we’ll be doing some augmented reality starting next week).
https://pyimagesearch.com/2020/12/28/determining-aruco-marker-type-with-opencv-and-python/
Types of ArUco and AprilTag markers Figure 1: In this example image we have four ArUco markers, but we don’t know what dictionary was used to generate them, so how are we going to actually detect them? Two weeks ago we learned how to generate ArUco markers, and then last week we learned how to detect them in images and video — but what happens if we don’t already know the ArUco dictionary we’re using? Such a situation can arise when you’re developing a computer vision application where you did not generate the ArUco markers yourself. Instead, these markers may have been generated by another person or organization (or maybe you just need a general purpose algorithm to detect any ArUco type in an image or video stream). When such a situation arises, you need to be able to automatically infer ArUco dictionary type. At the time of this writing, the OpenCV library can detect 21 different types of AruCo/AprilTag markers. The following snippet of code shows the unique variable identifier assigned to each type of marker dictionary: # define names of each possible ArUco tag OpenCV supports ARUCO_DICT = { "DICT_4X4_50": cv2.aruco. DICT_4X4_50, "DICT_4X4_100": cv2.aruco. DICT_4X4_100, "DICT_4X4_250": cv2.aruco. DICT_4X4_250, "DICT_4X4_1000": cv2.aruco.
https://pyimagesearch.com/2020/12/28/determining-aruco-marker-type-with-opencv-and-python/
DICT_4X4_1000, "DICT_5X5_50": cv2.aruco. DICT_5X5_50, "DICT_5X5_100": cv2.aruco. DICT_5X5_100, "DICT_5X5_250": cv2.aruco. DICT_5X5_250, "DICT_5X5_1000": cv2.aruco. DICT_5X5_1000, "DICT_6X6_50": cv2.aruco. DICT_6X6_50, "DICT_6X6_100": cv2.aruco. DICT_6X6_100, "DICT_6X6_250": cv2.aruco. DICT_6X6_250, "DICT_6X6_1000": cv2.aruco. DICT_6X6_1000, "DICT_7X7_50": cv2.aruco. DICT_7X7_50, "DICT_7X7_100": cv2.aruco.
https://pyimagesearch.com/2020/12/28/determining-aruco-marker-type-with-opencv-and-python/
DICT_7X7_100, "DICT_7X7_250": cv2.aruco. DICT_7X7_250, "DICT_7X7_1000": cv2.aruco. DICT_7X7_1000, "DICT_ARUCO_ORIGINAL": cv2.aruco. DICT_ARUCO_ORIGINAL, "DICT_APRILTAG_16h5": cv2.aruco. DICT_APRILTAG_16h5, "DICT_APRILTAG_25h9": cv2.aruco. DICT_APRILTAG_25h9, "DICT_APRILTAG_36h10": cv2.aruco. DICT_APRILTAG_36h10, "DICT_APRILTAG_36h11": cv2.aruco. DICT_APRILTAG_36h11 } In the remainder of this tutorial, you will learn how to automatically check whether any of these ArUco types exists in an input image. To learn more about these ArUco types, please refer to this post. Configuring your development environment In order to generate and detect ArUco markers, you need to have the OpenCV library installed.
https://pyimagesearch.com/2020/12/28/determining-aruco-marker-type-with-opencv-and-python/
Luckily, OpenCV is pip-installable: $ pip install opencv-contrib-python If you need help configuring your development environment for OpenCV, I highly recommend that you read my pip install opencv guide — it will have you up and running in a matter of minutes. Having problems configuring your development environment? Figure 2: Having trouble configuring your dev environment? Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch Plus — you’ll be up and running with this tutorial in a matter of minutes. All that said, are you: Short on time?Learning on your employer’s administratively locked system?Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?Ready to run the code right now on your Windows, macOS, or Linux system? Then join PyImageSearch Plus today! Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides that are pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required. And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux!
https://pyimagesearch.com/2020/12/28/determining-aruco-marker-type-with-opencv-and-python/
Project structure Start by using the “Downloads” section of this tutorial to download the source code and example images. From there, let’s inspect the directory structure of our project: $ tree . --dirsfirst . ├── images │ ├── example_01.png │ ├── example_02.png │ └── example_03.png └── guess_aruco_type.py 1 directory, 4 files We have a single Python script today, guess_aruco_type.py. This script will examine the examples in the images/ directory and, with no prior knowledge of the ArUco tags in these images, will automatically determine the ArUco tag type. Such a script is extremely useful when you’re tasked with finding ArUco tags in images/video streams but aren’t sure what ArUco dictionary was used to generate these tags. Implementing our ArUco/AprilTag marker type identifier The method we’ll implement for our automatic ArUco/AprilTag type identifier is a bit of a hack, but my feeling is that a hack is just a heuristic that works in practice. Sometimes it’s OK to ditch the elegance and instead just get the damn solution — this script is an example of such a situation. Open up the guess_aruco_type.py file in your project directory structure, and insert the following code: # import the necessary packages import argparse import imutils import cv2 # construct the argument parser and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-i", "--image", required=True, help="path to input image containing ArUCo tag") args = vars(ap.parse_args()) We import our required Python packages on Lines 2-4 and then parse our command line arguments.
https://pyimagesearch.com/2020/12/28/determining-aruco-marker-type-with-opencv-and-python/
Only a single command line argument is required here, --image, which is the path to our input image. With the command line arguments parsed, we can move on to defining our ARUCO_DICT dictionary, which provides the names and unique variable identifiers for each of the ArUco dictionaries that OpenCV supports: # define names of each possible ArUco tag OpenCV supports ARUCO_DICT = { "DICT_4X4_50": cv2.aruco. DICT_4X4_50, "DICT_4X4_100": cv2.aruco. DICT_4X4_100, "DICT_4X4_250": cv2.aruco. DICT_4X4_250, "DICT_4X4_1000": cv2.aruco. DICT_4X4_1000, "DICT_5X5_50": cv2.aruco. DICT_5X5_50, "DICT_5X5_100": cv2.aruco. DICT_5X5_100, "DICT_5X5_250": cv2.aruco. DICT_5X5_250, "DICT_5X5_1000": cv2.aruco. DICT_5X5_1000, "DICT_6X6_50": cv2.aruco.
https://pyimagesearch.com/2020/12/28/determining-aruco-marker-type-with-opencv-and-python/
DICT_6X6_50, "DICT_6X6_100": cv2.aruco. DICT_6X6_100, "DICT_6X6_250": cv2.aruco. DICT_6X6_250, "DICT_6X6_1000": cv2.aruco. DICT_6X6_1000, "DICT_7X7_50": cv2.aruco. DICT_7X7_50, "DICT_7X7_100": cv2.aruco. DICT_7X7_100, "DICT_7X7_250": cv2.aruco. DICT_7X7_250, "DICT_7X7_1000": cv2.aruco. DICT_7X7_1000, "DICT_ARUCO_ORIGINAL": cv2.aruco. DICT_ARUCO_ORIGINAL, "DICT_APRILTAG_16h5": cv2.aruco. DICT_APRILTAG_16h5, "DICT_APRILTAG_25h9": cv2.aruco.
https://pyimagesearch.com/2020/12/28/determining-aruco-marker-type-with-opencv-and-python/
DICT_APRILTAG_25h9, "DICT_APRILTAG_36h10": cv2.aruco. DICT_APRILTAG_36h10, "DICT_APRILTAG_36h11": cv2.aruco. DICT_APRILTAG_36h11 } I covered the types of ArUco dictionaries, including their name conventions in my previous tutorial Generating ArUco markers with OpenCV and Python. If you would like more information on ArUco dictionaries please refer there; otherwise, simply understand that this dictionary lists out all possible ArUco tags that OpenCV can detect. We’ll exhaustively loop over this dictionary, load the ArUco detector for each entry, and then apply the detector to our input image. If we get a hit for a specific tag type, then we know that ArUco tag exists in the image. Speaking of which, let’s implement that logic now: # load the input image from disk and resize it print("[INFO] loading image...") image = cv2.imread(args["image"]) image = imutils.resize(image, width=600) # loop over the types of ArUco dictionaries for (arucoName, arucoDict) in ARUCO_DICT.items(): # load the ArUCo dictionary, grab the ArUCo parameters, and # attempt to detect the markers for the current dictionary arucoDict = cv2.aruco. Dictionary_get(arucoDict) arucoParams = cv2.aruco. DetectorParameters_create() (corners, ids, rejected) = cv2.aruco.detectMarkers( image, arucoDict, parameters=arucoParams) # if at least one ArUco marker was detected display the ArUco # name to our terminal if len(corners) > 0: print("[INFO] detected {} markers for '{}'".format( len(corners), arucoName)) Lines 39 and 40 load our input --image from disk and resize it. From there we loop over all possible ArUco dictionaries that OpenCV supports on Line 43.
https://pyimagesearch.com/2020/12/28/determining-aruco-marker-type-with-opencv-and-python/
For each ArUco dictionary we: Load the arucoDict via cv2.aruco. Dictionary_get Instantiate the ArUco detector parameters Apply cv2.aruco.detectMarkers to detect tags for the current arucoDict in the input image If the length of the resulting corners list is greater than zero (Line 53), then we know the current arucoDict had been used to (potentially) generate the ArUco tags in our input image. In that case we log the number of tags found in the image along with the name of the ArUco dictionary to our terminal so we can investigate further after running the script. Like I said, there isn’t much “elegance” to this script — it’s a downright hack. But that’s OK. Sometimes all you need is a good hack to unblock you and keep you moving forward on your project. ArUco marker type identification results Let’s put our ArUco marker type identifier to work! Make sure you use the “Downloads” section of this tutorial to download the source code and example images to this post. From there, pop open a terminal, and execute the following command: $ python guess_aruco_type.py --image images/example_01.png [INFO] loading image... [INFO] detected 2 markers for 'DICT_5X5_50' [INFO] detected 5 markers for 'DICT_5X5_100' [INFO] detected 5 markers for 'DICT_5X5_250' [INFO] detected 5 markers for 'DICT_5X5_1000' Figure 3: An example image containing ArUco tags generated with a 5×5 dictionary. These ArUco tags were generated in last week’s tutorial.
https://pyimagesearch.com/2020/12/28/determining-aruco-marker-type-with-opencv-and-python/
This image contains five example ArUco images (which we generated back in part 1 of this series on ArUco markers). The ArUco markers belong to the 5×5 class and either have IDs up to 50, 100, 250, or 1000, respectively. These results imply that: We know for a fact that these are 5×5 markers. We know that the markers detected in this image have IDs < 50. However, if there are more markers in other images, we may encounter ArUco 5×5 markers with values > 50. If we’re working with just this image, then it’s safe to assume DICT_5X5_50, but if we have more images, keep investigating and find the smallest ArUco dictionary that fits all unique IDs into it. Let’s try another example image: $ python guess_aruco_type.py --image images/example_02.png [INFO] loading image... [INFO] detected 1 markers for 'DICT_4X4_50' [INFO] detected 1 markers for 'DICT_4X4_100' [INFO] detected 1 markers for 'DICT_4X4_250' [INFO] detected 1 markers for 'DICT_4X4_1000' [INFO] detected 4 markers for 'DICT_ARUCO_ORIGINAL' Figure 4: Recognizing ArUco tag types in an image where I didn’t know what ArUco dictionary was used to generate them. Here you can see an example image containing a Pantone color matching card. OpenCV (incorrectly) thinks that these markers might be of the 4×4 class, but if you zoom in on the example image, you’ll see that that’s not true, since these are actually 6×6 markers with an additional bit of padding surrounding the marker. Furthermore, since only one marker was detected for the 4×4 class, and since there are four total markers in the image, we can therefore deduce that these must be DICT_ARUCO_ORIGINAL.
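If you want to turn that line of reasoning into code, one option is to wrap the exhaustive loop in a small helper that tallies detections per dictionary and returns the name with the most hits. To be clear, this heuristic is my own addition rather than part of the original script, and it only gives you a starting point; ties between related dictionaries (DICT_5X5_50 through DICT_5X5_1000, for example) still need the "smallest dictionary that covers all unique IDs" judgment described above. A rough sketch:

import cv2

def guess_aruco_type(image, aruco_dicts):
	# aruco_dicts is the same name -> flag mapping as the ARUCO_DICT above
	bestName, bestCount = None, 0

	# try every supported dictionary and remember which one matched the
	# largest number of markers in the image
	for (name, flag) in aruco_dicts.items():
		arucoDict = cv2.aruco.Dictionary_get(flag)
		arucoParams = cv2.aruco.DetectorParameters_create()
		(corners, _, _) = cv2.aruco.detectMarkers(image, arucoDict,
			parameters=arucoParams)

		if len(corners) > bestCount:
			bestName, bestCount = name, len(corners)

	# bestName stays None if no dictionary produced any detections
	return (bestName, bestCount)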
https://pyimagesearch.com/2020/12/28/determining-aruco-marker-type-with-opencv-and-python/
We’ll look at one final image, this one containing AprilTags: $ python guess_aruco_type.py --image images/example_03.png [INFO] loading image... [INFO] detected 3 markers for 'DICT_APRILTAG_36h11' Figure 5: OpenCV is able to correctly detect that these are AprilTags (and not ArUco tags). Here OpenCV can infer that we are most certainly looking at AprilTags. I hope you enjoyed this series of tutorials on ArUco markers and AprilTags! In the next few weeks, we’ll start looking at practical, real-world applications of ArUco markers, including how to incorporate them into our own computer vision and image processing pipelines. What's next? We recommend PyImageSearch University. Course information: 84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024 ★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science?
https://pyimagesearch.com/2020/12/28/determining-aruco-marker-type-with-opencv-and-python/
That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught. If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. Inside PyImageSearch University you'll find: ✓ 84 courses on essential computer vision, deep learning, and OpenCV topics ✓ 84 Certificates of Completion ✓ 114+ hours of on-demand video ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques ✓ Pre-configured Jupyter Notebooks in Google Colab ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) ✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch ✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc.
https://pyimagesearch.com/2020/12/28/determining-aruco-marker-type-with-opencv-and-python/
Click here to join PyImageSearch University Summary In this tutorial you learned how to automatically determine ArUco marker type, even if you don’t know what ArUco dictionary was originally used! Our method is a bit of a hack, as it requires us to exhaustively loop over all possible ArUco dictionaries and then attempt to detect that specific ArUco dictionary in the input image. That said, our hack works, so it’s hard to argue with it. Keep in mind that there’s nothing wrong with a “hack.” As I like to say, hack is just a heuristic that works. Starting next week you’ll get to see real-world examples of applying ArUco detection, including augmented reality. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below! Download the Source Code and FREE 17-page Resource Guide Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Download the code!
https://pyimagesearch.com/2020/12/28/determining-aruco-marker-type-with-opencv-and-python/
https://pyimagesearch.com/2021/01/04/opencv-augmented-reality-ar/
Click here to download the source code to this pos In this tutorial you will learn the basics of augmented reality with OpenCV. Augmented reality takes real-world environments and then enhances these environments through computer-generated procedures that perpetually enrich the environment. Typically, this is done using some combination of visual, auditory, and tactile/haptic interactions. Since PyImageSearch is a computer vision blog, we’ll be primarily focusing on the vision side of augmented reality, and more specifically: Taking an input imageDetecting markers/fiducialsSeamlessly transforming new images into the scene This tutorial focuses on the fundamentals of augmented reality with OpenCV. Next week I’ll show you how to perform real-time augmented reality with OpenCV. To learn how to perform augmented reality with OpenCV, just keep reading. Looking for the source code to this post? Jump Right To The Downloads Section OpenCV Augmented Reality (AR) In the first part of this tutorial, we’ll briefly discuss what augmented reality is, including how OpenCV can help facilitate augmented reality. From there we’ll configure our development environment for augmented reality and then review our directory structure for the project. We’ll then implement a Python script to perform basic augmented reality with OpenCV.
https://pyimagesearch.com/2021/01/04/opencv-augmented-reality-ar/
The tutorial will wrap up with a discussion of our results. What is augmented reality? Figure 1: Augmented reality enhances the world around us through computer-generated imagery, noises, tactile responses, etc (image source). We used to see the world only through our five senses: sight, hearing, smell, taste, and touch. That’s changing now. Smartphones are transforming the world, both literally and figuratively, for three of those senses: sight, hearing, and touch. Perhaps one day augmented reality will be able to enhance smell and taste as well. Augmented reality, as the name suggests, augments the real world around us with computer-generated perceptual information. Perhaps the biggest augmented reality success story in recent years is the Pokemon Go app (Figure 2). Figure 2: The popular Pokemon Go app is a great example of computer vision-based augmented reality (image source).
https://pyimagesearch.com/2021/01/04/opencv-augmented-reality-ar/
To play Pokemon Go, users open the app on their smartphone, which then accesses their camera. Players then observe the world through their camera, walking through real-world environments, including city streets, tranquil parks, and crowded bars and restaurants. The Pokemon Go app places creatures (called Pokemon) inside this virtual world. Players then must capture these Pokemon and collect all of them. Entire companies have been built surrounding augmented reality and virtual reality applications, including Oculus and MagicLeap. While augmented reality (as we understand it today) has existed since the late 1980s/early 1990s, it’s still very much in its infancy. We’ve made incredible strides in a short amount of time — and I believe the best is yet to come (and will likely be coming in the next 10-20 years). But before we can start building state-of-the-art augmented reality applications, we first need to learn the fundamentals. In this tutorial you will learn the basics of augmented reality with OpenCV. Configuring your development environment In order to learn the basics of augmented reality, you need to have the OpenCV library installed.
https://pyimagesearch.com/2021/01/04/opencv-augmented-reality-ar/
Luckily, OpenCV is pip-installable: $ pip install opencv-contrib-python If you need help configuring your development environment for OpenCV, I highly recommend that you read my pip install OpenCV guide — it will have you up and running in a matter of minutes. Having problems configuring your development environment? Figure 3: Having trouble configuring your dev environment? Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch Plus — you’ll be up and running with this tutorial in a matter of minutes. All that said, are you: Short on time?Learning on your employer’s administratively locked system?Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?Ready to run the code right now on your Windows, macOS, or Linux system? Then join PyImageSearch Plus today! Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides that are pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required. And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux!
https://pyimagesearch.com/2021/01/04/opencv-augmented-reality-ar/
Project structure Before we can implement augmented reality with OpenCV, we first need to review our project directory structure. Start by making sure you use the “Downloads” section of this tutorial to download the source code and example images. $ tree . --dirsfirst . ├── examples │ ├── input_01.jpg │ ├── input_02.jpg │ └── input_03.jpg ├── sources │ ├── antelope_canyon.jpg │ ├── jp.jpg │ └── squirrel.jpg ├── markers.pdf └── opencv_ar_image.py 2 directories, 7 files Inside the examples directory you will find a number of images containing a Pantone color match card with ArUco markers on it: Figure 4: Our three input images. We’ll be detecting the ArUco markers on the Pantone color match card and then transforming a source image onto the region. Just like we did in our series on ArUco markers, our goal is to detect each of the four ArUco tags, sort them in top-left, top-right, bottom-left, and bottom-right order, and then apply augmented reality by transforming a source image onto the card. Speaking of source images, we have a total of three source images in our sources directory: Figure 5: Our sample source images that will be transformed onto the input. You can insert your own source images as well. Once we’ve detected our surface, we’ll use OpenCV to transform each of these source images onto the card, resulting in an output similar to below: Figure 6: Sample output of applying augmented reality with OpenCV.
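Before digging into opencv_ar_image.py itself, it helps to see the core operation in isolation: once the four ArUco markers have been detected and their corners sorted into top-left, top-right, bottom-right, and bottom-left order, the source image is warped onto that quadrilateral with a homography. The sketch below only conveys the idea; the variable names and the simple masking step are my own simplifications, not necessarily what the full script does, and image, source, and dstPts are assumed to already exist (the input photo, the source image, and the four sorted card corners as a float32 array of shape 4x2):

import numpy as np
import cv2

# map the corners of the source image onto the four card corners (dstPts)
(srcH, srcW) = source.shape[:2]
srcPts = np.array([[0, 0], [srcW, 0], [srcW, srcH], [0, srcH]],
	dtype="float32")

# compute the homography and warp the source into the input image's frame
(H, _) = cv2.findHomography(srcPts, dstPts)
warped = cv2.warpPerspective(source, H, (image.shape[1], image.shape[0]))

# build a mask of the card region and paste the warped pixels over the input
mask = np.zeros(image.shape[:2], dtype="uint8")
cv2.fillConvexPoly(mask, dstPts.astype("int32"), 255)
output = image.copy()
output[mask > 0] = warped[mask > 0]

Next week's follow-up covers doing this in real time, but the transform itself is the same.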