https://pyimagesearch.com/2021/01/23/splitting-and-merging-channels-with-opencv/
Project structure

Let’s start by reviewing our project directory structure. Be sure to use the “Downloads” section of this tutorial to download the source code and example images:

$ tree . --dirsfirst
.
├── adrian.png
├── opencv_channels.py
└── opencv_logo.png
0 directories, 3 files

Inside our project, you’ll see that we have a single Python script, opencv_channels.py, which will show us how to:

Split an input image (adrian.png and opencv_logo.png) into its respective Red, Green, and Blue channels
Visualize each of the RGB channels
Merge the RGB channels back into the original image

Let’s get started!

How to split and merge channels with OpenCV

A color image consists of multiple channels: a Red, a Green, and a Blue component. We have seen that we can access these components via indexing into NumPy arrays. But what if we wanted to split an image into its respective components? As you’ll see, we’ll make use of the cv2.split function. But for the time being, let’s take a look at an example image in Figure 2: Figure 2: Top-left: Red channel of image.
Top-right: Green channel. Bottom-left: Blue channel. Bottom-right: Original input image. Here, we have (in order of appearance) the Red, Green, and Blue channels, along with the original image of myself on a trip to Florida. But given these representations, how do we interpret the different channels of the image? Let’s take a look at the sky’s color in the original image (bottom-right). Notice how the sky has a slightly blue tinge. And when we look at the blue channel image (bottom-left), we see that the blue channel is very light in the region that corresponds to the sky. This is because the blue channel pixels are very bright, indicating that they contribute heavily to the output image. Then, take a look at the black hoodie that I am wearing.
In each of the Red, Green, and Blue channels of the image, my black hoodie is very dark — indicating that each of these channels contributes very little to the hoodie region of the output image (giving it a very dark black color). When you investigate each channel individually rather than as a whole, you can visualize how much each channel contributes to the overall output image. Performing this exercise is extremely helpful, especially when applying methods such as thresholding and edge detection, which we’ll cover later in this module. Now that we have visualized our channels, let’s examine some code to accomplish this for us:

# import the necessary packages
import numpy as np
import argparse
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", type=str, default="opencv_logo.png",
    help="path to the input image")
args = vars(ap.parse_args())

Lines 2-4 import our required Python packages. We then parse our command line arguments on Lines 7-10. We only need a single argument here, --image, which points to our input image residing on disk. Let’s now load this image and split it into its respective channels:

# load the input image and grab each channel -- note how OpenCV
# represents images as NumPy arrays with channels in Blue, Green,
# Red ordering rather than Red, Green, Blue
image = cv2.imread(args["image"])
(B, G, R) = cv2.split(image)

# show each channel individually
cv2.imshow("Red", R)
cv2.imshow("Green", G)
cv2.imshow("Blue", B)
cv2.waitKey(0)

Line 15 loads our image from disk. We then split it into its Red, Green, and Blue channel components on Line 16 with a call to cv2.split. Usually, we think of images in the RGB color space — the red pixel first, the green pixel second, and the blue pixel third.
However, OpenCV stores RGB images as NumPy arrays in reverse channel order. Instead of storing an image in RGB order, it stores the image in BGR order. Thus we unpack the tuple in reverse order. Lines 19-22 then show each channel individually, as in Figure 2. Figure 3: Using OpenCV to split our input image into the Red, Green, and Blue channels, respectively. We can also merge the channels back together again using the cv2.merge function:

# merge the image back together again
merged = cv2.merge([B, G, R])
cv2.imshow("Merged", merged)
cv2.waitKey(0)
cv2.destroyAllWindows()

We simply specify our channels, again in BGR order, and then cv2.merge takes care of the rest for us (Line 25)! Notice how we reconstruct our original input image from each of the individual RGB channels: Figure 4: Merging the three channels with OpenCV to form our original input image. There is also a second method to visualize each channel’s color contribution. In Figure 3, we simply examine the single-channel representation of an image, which looks like a grayscale image. However, we can also visualize the color contribution of the image as a full RGB image, like this: Figure 5: A second method to visualize each channel’s color contribution.
The lighter the given region of a channel, the more it contributes to the output image. Using this method, we can visualize each channel in “color” rather than “grayscale.” This is strictly a visualization technique and not something we would use in a standard computer vision or image processing application. But that said, let’s investigate the code to see how to construct this representation:

# visualize each channel in color
zeros = np.zeros(image.shape[:2], dtype="uint8")
cv2.imshow("Red", cv2.merge([zeros, zeros, R]))
cv2.imshow("Green", cv2.merge([zeros, G, zeros]))
cv2.imshow("Blue", cv2.merge([B, zeros, zeros]))
cv2.waitKey(0)

To show the actual “color” of a channel, we first take apart the image using cv2.split and then reconstruct it, this time setting every channel except the current one to zero. On Line 31, we construct a NumPy array of zeros with the same width and height as our original image. Then, to construct the Red channel representation of the image, we make a call to cv2.merge, specifying our zeros array for the Green and Blue channels. We take similar approaches to the other channels in Lines 33 and 34. You can refer to Figure 5 for this code’s output visualization.

Channel splitting and merging results

To split and merge channels with OpenCV, be sure to use the “Downloads” section of this tutorial to download the source code.
Let’s execute our opencv_channels.py script to split each of the individual channels and visualize them:

$ python opencv_channels.py

You can refer to the previous section to see the script’s output. If you wish to supply a different image to the opencv_channels.py script, all you need to do is supply the --image command line argument:

$ python opencv_channels.py --image adrian.png

Here, you can see that we’ve taken the input image and split it into its respective Red, Green, and Blue channel components: Figure 6: Splitting an image into its respective channels with OpenCV. And here is the second visualization of each channel: Figure 7: Visualizing the amount each channel contributes to the image.
Summary

In this tutorial, you learned how to split and merge image channels using OpenCV and the cv2.split and cv2.merge functions.
While there are NumPy functions you can use for splitting and merging, I strongly encourage you to use the cv2.split and cv2.merge functions — they tend to be easier to read and understand from a code perspective.
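For completeness, the NumPy-only route alluded to above looks roughly like this. It is a standalone comparison, not part of opencv_channels.py, and it assumes image, B, G, and R come from the cv2.imread and cv2.split calls shown earlier:

# NumPy-only split and merge, shown purely for comparison with
# cv2.split and cv2.merge
print(np.array_equal(B, image[:, :, 0]))  # True -- channel 0 is Blue
print(np.array_equal(G, image[:, :, 1]))  # True -- channel 1 is Green
print(np.array_equal(R, image[:, :, 2]))  # True -- channel 2 is Red

# stacking the planes back together reproduces cv2.merge's output
merged_np = np.dstack([B, G, R])
print(np.array_equal(merged_np, cv2.merge([B, G, R])))  # True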
https://pyimagesearch.com/2021/01/25/detecting-low-contrast-images-with-opencv-scikit-image-and-python/
In this tutorial you will learn how to detect low contrast images using OpenCV and scikit-image. Whenever I teach the fundamentals of computer vision and image processing to students eager to learn, one of the first things I teach is: “It’s far easier to write code for images captured in controlled lighting conditions than in dynamic conditions with no guarantees.” The more you can control the environment, and most importantly the lighting, when you capture an image, the easier it will be to write code to process that image. With controlled lighting conditions you’re able to hard-code parameters, including the amount of blurring, edge detection bounds, thresholding limits, etc. Essentially, controlled conditions allow you to take advantage of your a priori knowledge of an environment and then write code that handles that specific environment rather than trying to handle every edge case or condition. Of course, controlling your environment and lighting conditions isn’t always possible … so what do you do then? Do you try to code a super complex image processing pipeline that handles every edge case? Well … you could do that — and probably waste weeks or months doing it and still likely not capture every edge case. Or, you can instead detect when low-quality images, specifically low contrast images, are presented to your pipeline. If a low contrast image is detected, you can throw the image out or alert the user to capture an image in better lighting conditions.
Doing so will make it far easier for you to develop image processing pipelines (and reduce your headaches along the way). To learn how to detect low contrast images with OpenCV and scikit-image, just keep reading.

Detecting low contrast images with OpenCV, scikit-image, and Python

In the first part of this tutorial, we’ll discuss what low contrast images are, the problems they cause for computer vision/image processing practitioners, and how we can programmatically detect these images. From there we’ll configure our development environment and review our project directory structure. With our project structure reviewed, we’ll move on to coding two Python scripts: one to detect low contrast in static images, and another to detect low contrast frames in real-time video streams. We’ll wrap up our tutorial with a discussion of our results.

What problems do low contrast images/frames create? And how can we detect them?

Figure 1: Left: Example of a low contrast image where it would be hard to detect the outline of the card. Right: Higher contrast image where detecting the card would be far easier for a computer vision/image processing pipeline.
A low contrast image has very little difference between light and dark regions, making it hard to see where the boundary of an object begins and the background of the scene starts. An example of a low contrast image is shown in Figure 1 (left). Here you can see a color matching/correction card on a background. Due to poor lighting conditions (i.e., not enough light), the boundaries of the card against the background are not well defined — by itself, an edge detection algorithm, such as the Canny edge detector, may struggle to detect the boundary of the card, especially if the Canny edge detector parameters are hard-coded. Figure 1 (right) shows an example image of “normal contrast”. We have more detail in this image due to better lighting conditions. Notice that the white of the color matching card sufficiently contrasts the background — it would be far easier for an image processing pipeline to detect the edges of the color matching card (compared to the left image). Whenever you’re tackling a computer vision or image processing problem, always start with the environment the image/frame is captured in. The more you can control and guarantee the lighting conditions, the easier a time you will have writing code to process the scene. However, there will be times when you cannot control the lighting conditions and any parameters you hard-coded into your pipeline (e.g.,
blur sizes, thresholding limits, Canny edge detection parameters, etc.) may result in incorrect/unusable output. When that inevitably happens, don’t throw in the towel. And certainly don’t start going down the rabbit hole of coding up complex image processing pipelines to handle every edge case. Instead, leverage low contrast image detection. Using low contrast image detection, you can programmatically detect images that are not sufficient for your image processing pipeline. In the remainder of this tutorial, you’ll learn how to detect low contrast images in both static scenes and real-time video streams. We’ll throw out images/frames that are low contrast and not suitable for our pipeline, while keeping only the ones that we know will produce usable results. By the end of this guide, you’ll have a good understanding of low contrast image detection, and you’ll be able to apply it to your own projects, thereby making your own pipelines easier to develop and more stable in production. Configuring your development environment In order to detect low contrast images, you need to have the OpenCV library as well as scikit-image installed.
Luckily, both of these are pip-installable:

$ pip install opencv-contrib-python
$ pip install scikit-image

If you need help configuring your development environment for OpenCV and scikit-image, I highly recommend that you read my pip install OpenCV guide — it will have you up and running in a matter of minutes.
Project structure

Before we get too far in this guide, let’s take a second to inspect our project directory structure. Start by using the “Downloads” section of this tutorial to download the source code, example images, and sample video:

$ tree . --dirsfirst
.
├── examples
│   ├── 01.jpg
│   ├── 02.jpg
│   └── 03.jpg
├── detect_low_contrast_image.py
├── detect_low_contrast_video.py
└── example_video.mp4
1 directory, 6 files

We have two Python scripts to review today:

detect_low_contrast_image.py: Performs low contrast detection in static images (i.e., images inside the examples directory)
detect_low_contrast_video.py: Applies low contrast detection to real-time video streams (in this case, example_video.mp4)

You can of course substitute in your own images and video files/streams as you see fit.

Implementing low contrast image detection with OpenCV

Let’s learn how to detect low contrast images with OpenCV and scikit-image! Open up the detect_low_contrast_image.py file in your project directory structure, and insert the following code.

# import the necessary packages
from skimage.exposure import is_low_contrast
from imutils.paths import list_images
import argparse
import imutils
import cv2

We start off on Lines 2-6 importing our required Python packages. Take special note of the is_low_contrast import from the scikit-image library. This function is used to detect low contrast images by examining an image’s histogram and then determining if the range of brightness spans less than a fractional amount of the full range. We’ll see how to use the is_low_contrast function later in this example.
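To make that description concrete, here is a rough sketch of the check for a uint8 grayscale image. This is only an approximation of the idea, not scikit-image’s actual code (the real implementation works off percentiles of the intensity histogram and the limits of the image’s data type):

# rough approximation of what is_low_contrast checks for a uint8
# grayscale image (scikit-image's real implementation is more general)
import numpy as np

def looks_low_contrast(gray, fraction_threshold=0.35):
    # measure the spread of pixel intensities, ignoring extreme outliers
    lo, hi = np.percentile(gray, (1, 99))
    # compare that spread to the full [0, 255] range of the data type
    return (hi - lo) / 255.0 < fraction_threshold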
We then import list_images to grab the paths to our images in the examples directory, argparse for command line arguments, imutils for image processing routines, and cv2 for our OpenCV bindings. Let’s move on to parsing our command line arguments:

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--input", required=True,
    help="path to input directory of images")
ap.add_argument("-t", "--thresh", type=float, default=0.35,
    help="threshold for low contrast")
args = vars(ap.parse_args())

We have two command line arguments, the first of which is required and the second optional:

--input: Path to our input directory of images residing on disk
--thresh: The threshold for low contrast

I’ve set the --thresh parameter to a default of 0.35, implying that an image will be considered low contrast “when the range of brightness spans less than this fraction of its data type’s full range” (official scikit-image documentation). Essentially, what this means is that if the range of brightness occupies less than 35% of the data type’s full range, then the image is considered low contrast. To make this a concrete example, consider that an image in OpenCV is represented by an unsigned 8-bit integer that has a range of values [0, 255]. If the distribution of pixel intensities occupies less than 35% of this [0, 255] range, then the image is considered low contrast. You can of course tune the --thresh parameter to whatever percentage you deem fitting for your application, but I’ve found that 35% is a good starting point. Moving on, let’s grab the image paths from our --input directory:

# grab the paths to the input images
imagePaths = sorted(list(list_images(args["input"])))

# loop over the image paths
for (i, imagePath) in enumerate(imagePaths):
    # load the input image from disk, resize it, and convert it to
    # grayscale
    print("[INFO] processing image {}/{}".format(i + 1,
        len(imagePaths)))
    image = cv2.imread(imagePath)
    image = imutils.resize(image, width=450)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # blur the image slightly and perform edge detection
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edged = cv2.Canny(blurred, 30, 150)

    # initialize the text and color to indicate that the input image
    # is *not* low contrast
    text = "Low contrast: No"
    color = (0, 255, 0)

Line 17 grabs the paths to our images in the examples directory. We then loop over each of these individual imagePaths on Line 20. For each imagePath we proceed to:

Load the image from disk
Resize it to have a width of 450 pixels
Convert the image to grayscale

From there we apply blurring (to reduce high frequency noise) and then apply the Canny edge detector (Lines 30 and 31) to detect edges in the input image.
Lines 35 and 36 make the assumption that the image is not low contrast, setting the text and color. The following code block handles the if/else condition if a low contrast image is detected:

    # check to see if the image is low contrast
    if is_low_contrast(gray, fraction_threshold=args["thresh"]):
        # update the text and color
        text = "Low contrast: Yes"
        color = (0, 0, 255)

    # otherwise, the image is *not* low contrast, so we can continue
    # processing it
    else:
        # find contours in the edge map and find the largest one,
        # which we'll assume is the outline of our color correction
        # card
        cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL,
            cv2.CHAIN_APPROX_SIMPLE)
        cnts = imutils.grab_contours(cnts)
        c = max(cnts, key=cv2.contourArea)

        # draw the largest contour on the image
        cv2.drawContours(image, [c], -1, (0, 255, 0), 2)

    # draw the text on the output image
    cv2.putText(image, text, (5, 25), cv2.FONT_HERSHEY_SIMPLEX, 0.8,
        color, 2)

    # show the output image and edge map
    cv2.imshow("Image", image)
    cv2.imshow("Edge", edged)
    cv2.waitKey(0)

Line 39 makes a call to scikit-image’s is_low_contrast function to detect whether our gray image is low contrast or not. Note how we are passing in the fraction_threshold, which is our --thresh command line argument. If the image is indeed low contrast, then we update our text and color variables (Lines 41 and 42). Otherwise, the image is not low contrast, so we can proceed with our image processing pipeline (Lines 46-56). Inside this code block we:

Find contours in our edge map
Find the largest contour in our cnts list (which we assume will be our card in the input image)
Draw the outline of the card on the image

Finally, we draw the text on the image and display both the image and edge map to our screen.

Low contrast image detection results

Let’s now apply low contrast image detection to our own images! Start by using the “Downloads” section of this tutorial to download the source code and example images:

$ python detect_low_contrast_image.py --input examples
[INFO] processing image 1/3
[INFO] processing image 2/3
[INFO] processing image 3/3

Figure 3: This example image is labeled as “low contrast”. Applying the Canny edge detector with hard-coded parameters shows that we cannot detect the outline of the card in the image. Ideally, we would discard this image from our pipeline due to its low quality.
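If you would rather have the script discard such images automatically instead of inspecting each result by hand, a hypothetical variation of the loop could simply collect the offending paths. This is not part of the tutorial’s script; imagePaths, args, is_low_contrast, and cv2 are assumed from the code shown above:

# hypothetical variation: collect low contrast image paths rather than
# displaying each result (not part of detect_low_contrast_image.py)
low_contrast_paths = []

for imagePath in imagePaths:
    gray = cv2.cvtColor(cv2.imread(imagePath), cv2.COLOR_BGR2GRAY)
    if is_low_contrast(gray, fraction_threshold=args["thresh"]):
        low_contrast_paths.append(imagePath)

print("[INFO] found {} low contrast image(s)".format(len(low_contrast_paths)))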
Our first image here is labeled as “low contrast”. As you can see, applying the Canny edge detector to the low contrast image results in us being unable to detect the outline of the card in the image. If we tried to process this image further and detect the card itself, we would end up detecting some other contour. Instead, by applying low contrast detection, we can simply ignore the image. Our second image has sufficient contrast, and as such, we are able to accurately compute the edge map and extract the contour associated with the card outline: Figure 4: This image is labeled as sufficient contrast. Our final image is also labeled as having sufficient contrast: Figure 5: Automatically detecting low contrast images with OpenCV and scikit-image. We are again able to compute the edge map, perform contour detection, and extract the contour associated with the outline of the card.

Implementing low contrast frame detection in real-time video streams

In this section you will learn how to implement low contrast frame detection in real-time video streams using OpenCV and Python. Open up the detect_low_contrast_video.py file in your project directory structure, and let’s get to work:

# import the necessary packages
from skimage.exposure import is_low_contrast
import numpy as np
import argparse
import imutils
import cv2

Our import statements here are nearly identical to our previous script. Note that again we are using scikit-image’s is_low_contrast function to detect low contrast frames.
We then have our command line arguments, both of which are optional:

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--input", type=str, default="",
    help="optional path to video file")
ap.add_argument("-t", "--thresh", type=float, default=0.35,
    help="threshold for low contrast")
args = vars(ap.parse_args())

The --input switch points to an (optional) video file on disk. By default this script will access your webcam, but if you want to supply a video file, you can do so here. The --thresh parameter is identical to that of our previous script. This argument controls the fraction_threshold parameter to the is_low_contrast function. Refer to the “Implementing low contrast image detection with OpenCV” section for a detailed description of this parameter. Let’s now access our video stream:

# grab a pointer to the input video stream
print("[INFO] accessing video stream...")
vs = cv2.VideoCapture(args["input"] if args["input"] else 0)

# loop over frames from the video stream
while True:
    # read a frame from the video stream
    (grabbed, frame) = vs.read()

    # if the frame was not grabbed then we've reached the end of
    # the video stream so exit the script
    if not grabbed:
        print("[INFO] no frame read from stream - exiting")
        break

    # resize the frame, convert it to grayscale, blur it, and then
    # perform edge detection
    frame = imutils.resize(frame, width=450)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edged = cv2.Canny(blurred, 30, 150)

    # initialize the text and color to indicate that the current
    # frame is *not* low contrast
    text = "Low contrast: No"
    color = (0, 255, 0)

Line 18 instantiates a pointer to our video stream. By default we’ll use our webcam; however, if you are using a video file, you can supply the --input command line argument. We then loop over frames from the video stream on Line 21. Inside the loop we:

Read the next frame
Detect whether we’ve reached the end of the video stream, and if so, break from the loop
Preprocess the frame by converting it to grayscale, blurring it, and applying the Canny edge detector

We also initialize our text and color variables with the assumption that the image is not low contrast.
Our next code block is essentially identical to our previous script:

    # check to see if the frame is low contrast, and if so, update
    # the text and color
    if is_low_contrast(gray, fraction_threshold=args["thresh"]):
        text = "Low contrast: Yes"
        color = (0, 0, 255)

    # otherwise, the frame is *not* low contrast, so we can continue
    # processing it
    else:
        # find contours in the edge map and find the largest one,
        # which we'll assume is the outline of our color correction
        # card
        cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL,
            cv2.CHAIN_APPROX_SIMPLE)
        cnts = imutils.grab_contours(cnts)
        c = max(cnts, key=cv2.contourArea)

        # draw the largest contour on the frame
        cv2.drawContours(frame, [c], -1, (0, 255, 0), 2)

Lines 45-47 check to see if the image is low contrast, and if so, we update our text and color variables. Otherwise, we proceed to:

Detect contours
Find the largest contour
Draw the largest contour on the frame

Our final code block draws the text on the output frame:

    # draw the text on the output frame
    cv2.putText(frame, text, (5, 25), cv2.FONT_HERSHEY_SIMPLEX, 0.8,
        color, 2)

    # stack the output frame and edge map next to each other
    output = np.dstack([edged] * 3)
    output = np.hstack([frame, output])

    # show the output to our screen
    cv2.imshow("Output", output)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

We also stack the edge map and frame side-by-side so we can more easily visualize the output. The output frame is then displayed to our screen.

Detecting low contrast frames in real-time

We are now ready to detect low contrast frames in real-time video streams! Use the “Downloads” section of this tutorial to download the source code, example images, and sample video file. From there, open up a terminal, and execute the following command:

$ python detect_low_contrast_video.py --input example_video.mp4
[INFO] accessing video stream...
[INFO] no frame read from stream - exiting

As our output shows, our low contrast frame detector is able to detect frames with low contrast and prevent them from proceeding down the rest of our image processing pipeline. Conversely, frames with sufficient contrast are allowed to proceed. We then apply edge detection to each of these frames, compute contours, and extract the contour/outline associated with the color correction card. You can apply the same approach to your own video streams.
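If you want to drop low contrast frames entirely rather than just annotate them, a minimal hypothetical adaptation of the loop might look like this. It assumes vs, imutils, args, is_low_contrast, and cv2 from the script above, and it is a sketch rather than the tutorial’s code:

# hypothetical adaptation: skip low contrast frames instead of
# labeling them (not part of detect_low_contrast_video.py)
while True:
    (grabbed, frame) = vs.read()
    if not grabbed:
        break

    frame = imutils.resize(frame, width=450)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # ignore this frame and grab the next one if it is low contrast
    if is_low_contrast(gray, fraction_threshold=args["thresh"]):
        continue

    # ... the rest of the per-frame processing would go here ...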
Summary

In this tutorial you learned how to detect low contrast images in both static scenes and real-time video streams. We used both the OpenCV library and the scikit-image package to develop our low contrast image detector. While simple, this method can be extremely effective when used in computer vision and image processing pipelines. One of the easiest ways to use this method is to provide feedback to your user. If a user provides your application with a low contrast image, alert them and request that they provide a higher-quality image.
Taking this approach allows you to place “guarantees” on the environment used to capture images that are ultimately presented to your pipeline. Furthermore, it helps the user understand that your application can only be used in certain scenarios and it’s on them to ensure they conform to your standards. The gist here is to not overcomplicate your image processing pipelines. It’s far easier to write OpenCV code when you can place guarantees on the lighting conditions and environment — try to enforce these standards any way you can.
https://pyimagesearch.com/2021/01/27/drawing-with-opencv/
In this tutorial, you will learn how to use OpenCV’s basic drawing functions. You will learn how to use OpenCV to draw lines, rectangles, and circles. You will also learn how to use OpenCV to draw on images and blank/empty arrays initialized with NumPy. To learn how to use OpenCV’s basic drawing functions, just keep reading.

Drawing with OpenCV

In the first part of this tutorial, we will briefly review OpenCV’s drawing functions. We will then configure our development environment and review our project directory structure. With the review taken care of, we will move on to implement two Python scripts: basic_drawing.py and image_drawing.py. These scripts will help you understand how to perform basic drawing functions with OpenCV. By the end of this guide, you will understand how to use OpenCV to draw lines, circles, and rectangles.

Drawing functions in OpenCV

OpenCV has a number of drawing functions you can use to draw various shapes, including polygons of irregular shapes, but the three most common OpenCV drawing functions you will see are:

cv2.line: Draws a line on an image, starting at a specified (x, y)-coordinate and ending at another (x, y)-coordinate
cv2.circle: Draws a circle on an image specified by the center (x, y)-coordinate and a supplied radius
cv2.rectangle: Draws a rectangle on an image specified by the top-left corner and bottom-right corner (x, y)-coordinates

We will cover these three drawing functions today. However, it’s worth noting that more advanced OpenCV drawing functions exist, including:

cv2.ellipse: Draws an ellipse on an image
cv2.polylines: Draws the outline of a polygon specified by a set of (x, y)-coordinates
cv2.fillPoly: Draws a polygon, but instead of drawing the outline, fills in the polygon
cv2.arrowedLine: Draws an arrow pointing from a starting (x, y)-coordinate to an ending (x, y)-coordinate

These OpenCV drawing functions are used less often but are still worth noting.
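For reference, here is a quick standalone sketch of those four less common calls. It is not part of this tutorial’s scripts, and the coordinates are arbitrary values chosen just to show the argument order:

# brief sketch of the less common drawing calls mentioned above
import numpy as np
import cv2

canvas = np.zeros((300, 300, 3), dtype="uint8")

# ellipse: center, (major, minor) axes, rotation, start/end angle, color
cv2.ellipse(canvas, (150, 150), (100, 50), 0, 0, 360, (0, 255, 0), 2)

# open polygon outline from a set of (x, y) points
pts = np.array([[25, 70], [25, 145], [75, 190], [150, 190]], dtype=np.int32)
cv2.polylines(canvas, [pts.reshape((-1, 1, 2))], False, (0, 0, 255), 2)

# filled polygon
tri = np.array([[200, 250], [250, 200], [280, 280]], dtype=np.int32)
cv2.fillPoly(canvas, [tri.reshape((-1, 1, 2))], (255, 0, 0))

# arrow from one (x, y)-coordinate to another
cv2.arrowedLine(canvas, (10, 10), (120, 40), (255, 255, 255), 2)

cv2.imshow("Extra shapes", canvas)
cv2.waitKey(0)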
We use them occasionally on the PyImageSearch blog.

Configuring your development environment

To follow along with this guide, you need to have the OpenCV library installed on your system. Luckily, OpenCV is pip-installable:

$ pip install opencv-contrib-python

If you need help configuring your development environment for OpenCV, I highly recommend that you read my pip install OpenCV guide — it will have your system up and running in a matter of minutes.
Project structure

Let’s start by reviewing our project directory structure for our OpenCV drawing project:

$ tree . --dirsfirst
.
├── adrian.png
├── basic_drawing.py
└── image_drawing.py
0 directories, 3 files

We have two Python scripts to review today:

basic_drawing.py: Initializes an empty NumPy array and utilizes OpenCV to draw lines, circles, and rectangles
image_drawing.py: Loads adrian.png from disk and then draws on the image (rather than an empty/blank NumPy array canvas)

We are now ready to get started!

Implementing basic drawing functions with OpenCV

Before we draw on actual images, let’s first learn how to initialize an empty NumPy array/image and draw on it. Open the basic_drawing.py file in your project directory structure, and let’s get to work.

# import the necessary packages
import numpy as np
import cv2

# initialize our canvas as a 300x300 pixel image with 3 channels
# (Red, Green, and Blue) with a black background
canvas = np.zeros((300, 300, 3), dtype="uint8")

Lines 2 and 3 import the packages we will be using. As a shortcut, we will create an alias for numpy as np.
You will see this convention utilized in all PyImageSearch tutorials that leverage NumPy (and in fact, you will commonly see this convention in the Python community as well!) We will also import cv2, so we can have access to the OpenCV library. Initializing our image is handled on Line 7. We construct a NumPy array using the np.zeros method with 300 rows and 300 columns, yielding a 300 x 300 pixel image. We also allocate space for 3 channels — one each for Red, Green, and Blue. As the name suggests, the np.zeros method fills every element in the array with an initial value of zero. Secondly, it’s important to draw your attention to the second argument of the np.zeros method: the data type, dtype. Since we represent our image as an RGB image with pixels in the range [0, 255], we must use an 8-bit unsigned integer, or uint8. There are many other data types that we can use (common ones include 32-bit integers and 32- and 64-bit floats), but we will mainly use uint8 for the majority of the examples in this lesson. Now that we have our canvas initialized, we can do some drawing:

# draw a green line from the top-left corner of our canvas to the
# bottom-right
green = (0, 255, 0)
cv2.line(canvas, (0, 0), (300, 300), green)
cv2.imshow("Canvas", canvas)
cv2.waitKey(0)

# draw a 3 pixel thick red line from the top-right corner to the
# bottom-left
red = (0, 0, 255)
cv2.line(canvas, (300, 0), (0, 300), red, 3)
cv2.imshow("Canvas", canvas)
cv2.waitKey(0)

The first thing we do on Line 11 is to define a tuple used to represent the color “green.”
Then, we draw a green line from point (0, 0), the top-left corner of the image, to point (300, 300), the bottom-right corner of the image, on Line 12. In order to draw the line, we make use of the cv2.line method: The first argument to this method is the image upon which we are going to draw. In this case, it’s our canvas. The second argument is the starting point of the line. We choose to start our line from the top-left corner of the image, at point (0, 0) — again, remember that the Python language is zero-indexed. We also need to supply an ending point for the line (the third argument). We define our ending point to be (300, 300), the bottom-right corner of the image. The last argument is the color of our line (in this case, green). Lines 13 and 14 show our image and then wait for a keypress (see Figure 2). Figure 2: Drawing lines with OpenCV.
As you can see, using the cv2.line function is quite simple! But there is one other important argument to consider in the cv2.line method: the thickness. On Lines 18-21, we define the color red as a tuple (again, in BGR rather than RGB format). We then draw a red line from the top-right corner of the image to the bottom-left. The last parameter to the method controls the thickness of the line — we decide to make the thickness 3 pixels. Again, we show our image and wait for a keypress: Figure 3: Drawing multiple lines with OpenCV. Drawing a line was simple enough. Now we can move on to drawing rectangles. Check out the code below for more details:

# draw a green 50x50 pixel square, starting at 10x10 and ending at 60x60
cv2.rectangle(canvas, (10, 10), (60, 60), green)
cv2.imshow("Canvas", canvas)
cv2.waitKey(0)

# draw another rectangle, this one red with 5 pixel thickness
cv2.rectangle(canvas, (50, 200), (200, 225), red, 5)
cv2.imshow("Canvas", canvas)
cv2.waitKey(0)

# draw a final rectangle (blue and filled in)
blue = (255, 0, 0)
cv2.rectangle(canvas, (200, 50), (225, 125), blue, -1)
cv2.imshow("Canvas", canvas)
cv2.waitKey(0)

On Line 24, we make use of the cv2.rectangle method. The signature of this method is identical to the cv2.line method above, but let’s explore each argument anyway: The first argument is the image upon which we want to draw our rectangle.
We want to draw on our canvas, so we pass it into the method. The second argument is the starting (x, y) position of our rectangle — here, we start our rectangle at point (10, 10). Then, we must provide an ending (x, y) point for the rectangle. We decide to end our rectangle at (60, 60), defining a region of 50 x 50 pixels (take a second to convince yourself that the resulting rectangle is 50 x 50). Finally, the last argument is the color of the rectangle we want to draw. Here, we are drawing a green rectangle. Just as we can control a line’s thickness, we can also control the rectangle’s thickness. Line 29 provides that thickness argument. Here, we draw a red rectangle that is 5 pixels thick, starting at point (50, 200) and ending at (200, 225). At this point, we have only drawn the outline of a rectangle. How do we draw a rectangle that is “completely filled”? Simple.
We just pass a negative value for the thickness argument. Line 35 demonstrates how to draw a rectangle of a solid color. We draw a blue rectangle, starting at (200, 50) and ending at (225, 125). By specifying -1 (or using the cv2.FILLED keyword) as the thickness, our rectangle is drawn as solid blue. Figure 4 displays the full output of drawing our lines and rectangles: Figure 4: Using OpenCV to draw lines and rectangles. As you can see, the output matches our code. We drew a green line from the top-left corner to the bottom-right corner, followed by a thicker red line from the top-right corner to the bottom-left corner. We were also able to draw a green rectangle, a slightly thicker red rectangle, and a completely filled blue rectangle. That’s great and all — but what about circles? How can we use OpenCV to draw circles?
Drawing circles is just as simple as drawing rectangles, but the function arguments are a little different:

# re-initialize our canvas as an empty array, then compute the
# center (x, y)-coordinates of the canvas
canvas = np.zeros((300, 300, 3), dtype="uint8")
(centerX, centerY) = (canvas.shape[1] // 2, canvas.shape[0] // 2)
white = (255, 255, 255)

# loop over increasing radii, from 25 pixels to 150 pixels in 25
# pixel increments
for r in range(0, 175, 25):
    # draw a white circle with the current radius size
    cv2.circle(canvas, (centerX, centerY), r, white)

# show our work of art
cv2.imshow("Canvas", canvas)
cv2.waitKey(0)

On Line 41, we re-initialize our canvas to blank: Figure 5: Re-initializing our canvas as a blank image. Line 42 calculates two variables: centerX and centerY. These two variables represent the (x, y)-coordinates of the image’s center. We calculate the center by examining the shape of our NumPy array and then dividing by two:

The height of the image can be found in canvas.shape[0] (number of rows)
The width is found in canvas.shape[1] (number of columns)

Finally, Line 43 defines a white pixel (i.e., the buckets for each of the Red, Green, and Blue components are “full”). Now, let’s draw some circles! On Line 45, we loop over several radius values, starting at 0 and ending at 150, incrementing by 25 at each step. The range function is exclusive; therefore, we specify a stopping value of 175 rather than 150. To demonstrate this for yourself, open a Python shell, and execute the following code:

$ python
>>> list(range(0, 175, 25))
[0, 25, 50, 75, 100, 125, 150]

Notice how the output of range stops at 150 and does not include 175. Line 49 handles the actual drawing of the circle: The first parameter is our canvas, the image upon which we want to draw the circle. We then need to supply the point around which we will draw our circle. We pass in a tuple of (centerX, centerY) so that our circles will be centered at the center of the image.
The third argument is the radius, r, of the circle we wish to draw. Finally, we pass in the color of our circle: in this case, white. Lines 52 and 53 then show our image and wait for a keypress: Figure 6: Drawing a bullseye with OpenCV. Check out Figure 6, and you will see that we have drawn a simple bullseye! The “dot” in the very center of the image is drawn with a radius of 0. The larger circles are drawn with ever-increasing radii sizes from our for loop. Not too bad. But what else can we do? Let’s do some abstract drawing:

# re-initialize our canvas once again
canvas = np.zeros((300, 300, 3), dtype="uint8")

# let's draw 25 random circles
for i in range(0, 25):
    # randomly generate a radius size between 5 and 200, generate a
    # random color, and then pick a random point on our canvas where
    # the circle will be drawn
    radius = np.random.randint(5, high=200)
    color = np.random.randint(0, high=256, size=(3,)).tolist()
    pt = np.random.randint(0, high=300, size=(2,))

    # draw our random circle on the canvas
    cv2.circle(canvas, tuple(pt), radius, color, -1)

# display our masterpiece to our screen
cv2.imshow("Canvas", canvas)
cv2.waitKey(0)

Our code starts on Line 59 with more looping. This time we aren’t looping over our radii’s size — we are instead going to draw 25 random circles, making use of NumPy’s random number capabilities through the np.random.randint function.
To draw a random circle, we need to generate three values: the radius of the circle, the color of the circle, and the pt — the (x, y)-coordinate of where the circle will be drawn. We generate a radius value in the range [5, 200] on Line 63. This value controls how large our circle will be. On Line 64, we randomly generate a color. As we know, the color of an RGB pixel consists of three values in the range [0, 255]. To get three random integers rather than only one integer, we pass the keyword argument size=(3,), instructing NumPy to return a list of three numbers. Lastly, we need an (x, y)-center point to draw our circle. We will generate a point in the range [0, 300), again using NumPy’s np.random.randint function. The drawing of our circle then takes place on Line 68, using the radius, color, and pt that we randomly generated. Notice how we use a thickness of -1, so our circles are drawn as a solid color and not just an outline.
Lines 71 and 72 show our masterpiece, which you can see in Figure 7: Figure 7: Drawing multiple circles with OpenCV. Notice how each circle has a different size, color, and placement on our canvas.

OpenCV basic drawing results

To execute our basic drawing script, be sure to access the “Downloads” section to retrieve the source code and example image. From there, you can execute the following command:

$ python basic_drawing.py

Your output should be identical to that of the previous section.

Drawing on images with OpenCV

Up until this point, we have only explored drawing shapes on a blank canvas. But what if we want to draw shapes on an existing image? It turns out that the code to draw shapes on an existing image is exactly the same as if we were drawing on a blank canvas generated from NumPy. To demonstrate this, let’s look at some code:

# import the necessary packages
import argparse
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", type=str, default="adrian.png",
    help="path to the input image")
args = vars(ap.parse_args())

Lines 2 and 3 import our required Python packages, while Lines 6-9 parse our command line arguments. We only need a single argument, --image, which is the path to our input image on disk.
By default, we set the --image command line argument to point to the adrian.png image in our project directory structure.

# load the input image from disk
image = cv2.imread(args["image"])

# draw a circle around my face, two filled in circles covering my
# eyes, and a rectangle over top of my mouth
cv2.circle(image, (168, 188), 90, (0, 0, 255), 2)
cv2.circle(image, (150, 164), 10, (0, 0, 255), -1)
cv2.circle(image, (192, 174), 10, (0, 0, 255), -1)
cv2.rectangle(image, (134, 200), (186, 218), (0, 0, 255), -1)

# show the output image
cv2.imshow("Output", image)
cv2.waitKey(0)

Line 12 loads our --image from disk. From there, we proceed to:

Draw an empty circle (not filled in) surrounding my head (Line 16)
Draw two filled in circles covering my eyes (Lines 17 and 18)
Draw a rectangle over my mouth (Line 19)

Our final output, image, is then displayed on our screen.

OpenCV image drawing results

Let’s see how we can use OpenCV to draw on an image versus a “blank canvas” generated by NumPy. Start by accessing the “Downloads” section of this guide to retrieve the source code and example image. You can then execute the following command:

$ python image_drawing.py

Figure 8: Drawing shapes on an image with OpenCV. Here, you can see that we have drawn an outlined circle surrounding my face, two filled circles over my eyes, and a filled rectangle over my mouth. In fact, there is no difference between drawing shapes on an image loaded from disk versus a blank NumPy array. As long as our image/canvas can be represented as a NumPy array, OpenCV will draw on it just the same.
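To illustrate that point with a small standalone sketch (not part of this tutorial’s scripts), the exact same drawing call works on both kinds of arrays:

# standalone illustration: the same drawing helper works on a blank
# canvas and on an image loaded from disk
import numpy as np
import cv2

def mark_center(img):
    # draw a red circle at the center of whatever array we are given
    (h, w) = img.shape[:2]
    cv2.circle(img, (w // 2, h // 2), 20, (0, 0, 255), 3)

canvas = np.zeros((300, 300, 3), dtype="uint8")
photo = cv2.imread("adrian.png")

mark_center(canvas)   # draws on the NumPy-generated canvas
mark_center(photo)    # draws on the loaded image with the same call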
Summary

In this tutorial, you learned how to draw with OpenCV. Specifically, you learned how to use OpenCV to draw lines, circles, and rectangles. Lines were drawn using the cv2.line function. We used the cv2.circle function to draw circles and the cv2.rectangle method to draw rectangles with OpenCV. Other drawing functions exist in OpenCV. However, these are the functions you will use most often.
https://pyimagesearch.com/2020/08/17/ocr-with-keras-tensorflow-and-deep-learning/
In this tutorial, you will learn how to train an Optical Character Recognition (OCR) model using Keras, TensorFlow, and deep learning. This post is the first in a two-part series on OCR with Keras and TensorFlow:

Part 1: Training an OCR model with Keras and TensorFlow (today’s post)
Part 2: Basic handwriting recognition with Keras and TensorFlow (next week’s post)

For now, we’ll primarily be focusing on how to train a custom Keras/TensorFlow model to recognize alphanumeric characters (i.e., the digits 0-9 and the letters A-Z). Building on today’s post, next week we’ll learn how we can use this model to correctly classify handwritten characters in custom input images. The goal of this two-part series is to obtain a deeper understanding of how deep learning is applied to the classification of handwriting, and more specifically, our goal is to:

Become familiar with some well-known, readily available handwriting datasets for both digits and letters
Understand how to train a deep learning model to recognize handwritten digits and letters
Gain experience in applying our custom-trained model to some real-world sample data
Understand some of the challenges with real-world noisy data and how we might want to augment our handwriting datasets to improve our model and results

We’ll be starting with the fundamentals of using well-known handwriting datasets and training a ResNet deep learning model on these data. To learn how to train an OCR model with Keras, TensorFlow, and deep learning, just keep reading.

OCR with Keras, TensorFlow, and Deep Learning

In the first part of this tutorial, we’ll discuss the steps required to implement and train a custom OCR model with Keras and TensorFlow. We’ll then examine the handwriting datasets that we’ll use to train our model. From there, we’ll implement a couple of helper/utility functions that will aid us in loading our handwriting datasets from disk and then preprocessing them. Given these helper functions, we’ll be able to create our custom OCR training script with Keras and TensorFlow.
After training, we’ll review the results of our OCR work. Let’s get started! Our deep learning OCR datasets Figure 1: We are using two datasets for our OCR training with Keras and TensorFlow. On the left, we have the standard MNIST 0-9 dataset. On the right, we have the Kaggle A-Z dataset from Sachin Patel, which is based on the NIST Special Database 19. In order to train our custom Keras and TensorFlow model, we’ll be utilizing two datasets: The standard MNIST 0-9 dataset by LeCun et al. The Kaggle A-Z dataset by Sachin Patel, based on the NIST Special Database 19 The standard MNIST dataset is built into popular deep learning frameworks, including Keras, TensorFlow, PyTorch, etc. A sample of the MNIST 0-9 dataset can be seen in Figure 1 (left). The MNIST dataset will allow us to recognize the digits 0-9. Each of these digits is contained in a 28 x 28 grayscale image.
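If you want to verify those numbers for yourself, a quick sanity check with Keras' built-in loader looks like the following (a small aside, not part of the project code; running it will download MNIST on first use):

# quick sanity check of the MNIST digits dataset that ships with Keras
from tensorflow.keras.datasets import mnist

((trainData, trainLabels), (testData, testLabels)) = mnist.load_data()
print(trainData.shape)                        # (60000, 28, 28) -- 28 x 28 grayscale digits
print(testData.shape)                         # (10000, 28, 28)
print(trainLabels.min(), trainLabels.max())   # 0 9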
You can read more about MNIST here. But what about the letters A-Z? The standard MNIST dataset doesn’t include examples of the characters A-Z — how are we going to recognize them? The answer is to use the NIST Special Database 19, which includes A-Z characters. This dataset actually covers 62 ASCII hexadecimal characters corresponding to the digits 0-9, capital letters A-Z, and lowercase letters a-z. To make the dataset easier to use, Kaggle user Sachin Patel has released the dataset in an easy-to-use CSV file. This dataset takes the capital letters A-Z from NIST Special Database 19 and rescales them to be 28 x 28 grayscale pixels to be in the same format as our MNIST data. For this project, we will be using just the Kaggle A-Z dataset, which will make our preprocessing a breeze. A sample of it can be seen in Figure 1 (right). We’ll be implementing methods and utilities that will allow us to: load both the datasets for MNIST 0-9 digits and Kaggle A-Z letters from disk; combine these datasets together into a single, unified character dataset; handle class label skew/imbalance from having a different number of samples per character; successfully train a Keras and TensorFlow model on the combined dataset; and plot the results of the training and visualize the output of the validation data. Configuring your OCR development environment To configure your system for this tutorial, I first recommend following either of these tutorials: How to install TensorFlow 2.0 on Ubuntu or How to install TensorFlow 2.0 on macOS. Either tutorial will help you configure your system with all the necessary software for this blog post in a convenient Python virtual environment. Project structure Let’s review the project structure.
Once you grab the files from the “Downloads” section of this article, you’ll be presented with the following directory structure: $ tree --dirsfirst --filelimit 10 . ├── pyimagesearch │   ├── az_dataset │   │   ├── __init__.py │   │   └── helpers.py │   ├── models │   │   ├── __init__.py │   │   └── resnet.py │   └── __init__.py ├── a_z_handwritten_data.csv ├── handwriting.model ├── plot.png └── train_ocr_model.py 3 directories, 9 files Once we unzip our download, we find that our ocr-keras-tensorflow/ directory contains the following: pyimagesearch module: includes the sub-modules az_dataset for I/O helper files and models for implementing the ResNet deep learning architecture a_z_handwritten_data.csv: contains the Kaggle A-Z dataset handwriting.model: where the deep learning ResNet model is saved plot.png: plots the results of the most recent run of training of ResNet train_ocr_model.py: the main driver file for training our ResNet model and displaying the results Now that we have the lay of the land, let’s dig into the I/O helper functions we will use to load our digits and letters. Our OCR dataset helper functions In order to train our custom Keras and TensorFlow OCR model, we first need to implement two helper utilities that will allow us to load both the Kaggle A-Z dataset and the MNIST 0-9 digits from disk. These I/O helper functions are appropriately named: load_az_dataset: for the Kaggle A-Z letters load_mnist_dataset: for the MNIST 0-9 digits They can be found in the helpers.py file of the az_dataset submodule of pyimagesearch. Let’s go ahead and examine this helpers.py file. We will begin with our import statements and then dig into our two helper functions: load_az_dataset and load_mnist_dataset. # import the necessary packages from tensorflow.keras.datasets import mnist import numpy as np Line 2 imports the MNIST dataset, mnist, which is now one of the standard datasets that conveniently comes with Keras in tensorflow.keras.datasets. Next, let’s dive into load_az_dataset, the helper function to load the Kaggle A-Z letter data. def load_az_dataset(datasetPath): # initialize the list of data and labels data = [] labels = [] # loop over the rows of the A-Z handwritten digit dataset for row in open(datasetPath): # parse the label and image from the row row = row.split(",") label = int(row[0]) image = np.array([int(x) for x in row[1:]], dtype="uint8") # images are represented as single channel (grayscale) images # that are 28x28=784 pixels -- we need to take this flattened # 784-d list of numbers and reshape them into a 28x28 matrix image = image.reshape((28, 28)) # update the list of data and labels data.append(image) labels.append(label) Our function load_az_dataset takes a single argument datasetPath, which is the location of the Kaggle A-Z CSV file (Line 5). Then, we initialize our arrays to store the data and labels (Lines 7 and 8).
Each row in Sachin Patel’s CSV file contains 785 columns — one column for the class label (i.e., “A-Z”) plus 784 columns corresponding to the 28 x 28 grayscale pixels. Let’s parse it. Beginning on Line 11, we are going to loop over each row of our CSV file and parse out the label and the associated image. Line 14 parses the label, which will be the integer label associated with a letter A-Z. For example, the letter “A” has a label corresponding to the integer “0” and the letter “Z” has an integer label value of “25”. Next, Line 15 parses our image and casts it as a NumPy array of unsigned 8-bit integers, which correspond to the grayscale values for each pixel from [0, 255]. We reshape our image (Line 20) from a flat 784-dimensional array to one that is 28 x 28, corresponding to the dimensions of each of our images. We will then append each image and label to our data and label arrays respectively (Lines 23 and 24). To finish up this function, we will convert the data and labels to NumPy arrays and return the image data and labels: # convert the data and labels to NumPy arrays data = np.array(data, dtype="float32") labels = np.array(labels, dtype="int") # return a 2-tuple of the A-Z data and labels return (data, labels) Presently, our image data and labels are just Python lists, so we are going to type cast them as NumPy arrays of float32 and int, respectively (Lines 27 and 28). Nice job implementing our first function! Our next I/O helper function, load_mnist_dataset, is considerably simpler.
def load_mnist_dataset(): # load the MNIST dataset and stack the training data and testing # data together (we'll create our own training and testing splits # later in the project) ((trainData, trainLabels), (testData, testLabels)) = mnist.load_data() data = np.vstack([trainData, testData]) labels = np.hstack([trainLabels, testLabels]) # return a 2-tuple of the MNIST data and labels return (data, labels) Line 33 loads our MNIST 0-9 digit data using Keras’s helper function, mnist.load_data. Notice that we don’t have to specify a datasetPath like we did for the Kaggle data because Keras, conveniently, has this dataset built-in. Keras’s mnist.load_data comes with a default split for training data, training labels, test data, and test labels. For now, we are just going to combine our training and test data for MNIST using np.vstack for our image data (Line 38) and np.hstack for our labels (Line 39). Later, in train_ocr_model.py, we will be combining our MNIST 0-9 digit data with our Kaggle A-Z letters. At that point, we will create our own custom split of test and training data. Finally, Line 42 returns the image data and associated labels to the calling function. Congratulations! You have now completed the I/O helper functions to load both the digit and letter samples to be used for OCR and deep learning. Next, we will examine our main driver file used for training and viewing the results.
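Before we do, here is a hypothetical usage sketch of the two helpers in isolation (the CSV filename is taken from the project structure above; the exact number of Kaggle A-Z samples, N, depends on your download):

# exercise the two I/O helpers on their own (run from the project root)
from pyimagesearch.az_dataset import load_az_dataset, load_mnist_dataset

(azData, azLabels) = load_az_dataset("a_z_handwritten_data.csv")
(digitsData, digitsLabels) = load_mnist_dataset()

print(digitsData.shape, digitsLabels.shape)   # (70000, 28, 28) (70000,) -- 60k train + 10k test stacked
print(azData.shape, azLabels.shape)           # (N, 28, 28) (N,) for N Kaggle A-Z samples

# A-Z labels are integers in [0, 25]; map the first one back to its letter
print(chr(int(azLabels[0]) + ord("A")))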
Training our OCR Model using Keras and TensorFlow In this section, we are going to train our OCR model using Keras, TensorFlow, and a PyImageSearch implementation of the very popular and successful deep learning architecture, ResNet. Remember to save your model for next week, when we will implement a custom solution for handwriting recognition. To get started, locate our primary driver file, train_ocr_model.py, which is found in the main directory, ocr-keras-tensorflow/. This file contains a reference to a file resnet.py, which is located in the models/ sub-directory under the pyimagesearch module. Note: Although we will not be doing a detailed walk-through of resnet.py in this blog, you can get a feel for the ResNet architecture with my blog post on Fine-tuning ResNet with Keras and Deep Learning. For more advanced details, please see my book, Deep Learning for Computer Vision with Python. Let’s take a moment to review train_ocr_model.py. Afterward, we will come back and break it down, step by step. First, we’ll review the packages that we will import: # set the matplotlib backend so figures can be saved in the background import matplotlib matplotlib.use("Agg") # import the necessary packages from pyimagesearch.models import ResNet from pyimagesearch.az_dataset import load_mnist_dataset from pyimagesearch.az_dataset import load_az_dataset from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.optimizers import SGD from sklearn.preprocessing import LabelBinarizer from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report from imutils import build_montages import matplotlib.pyplot as plt import numpy as np import argparse import cv2 This is a long list of import statements, but don’t worry. It means we have a lot of packages that have already been written to make our lives much easier. Starting off on Line 5, we will import matplotlib and set up the backend of it by writing the results to a file using matplotlib.use("Agg") (Line 6).
We then have some imports from our custom pyimagesearch module for our deep learning architecture and our I/O helper functions that we just reviewed: We import ResNet from our pyimagesearch.models, which contains our own custom implementation of the popular ResNet deep learning architecture (Line 9). Next, we import our I/O helper functions load_mnist_dataset (Line 10) and load_az_dataset (Line 11) from pyimagesearch.az_dataset. We have a couple of imports from the Keras module of TensorFlow, which greatly simplify our data augmentation and training: Line 12 imports ImageDataGenerator to help us efficiently augment our dataset. We then import SGD, the popular Stochastic Gradient Descent (SGD) optimization algorithm (Line 13). Following on, we import three helper functions from scikit-learn to help us label our data, split our testing and training data sets, and print out a nice classification report to show us our results: To convert our labels from integers to a vector in what is called one-hot encoding, we import LabelBinarizer (Line 14). To help us easily split out our testing and training data sets, we import train_test_split from scikit-learn (Line 15). From the metrics submodule, we import classification_report to print out a nicely formatted classification report (Line 16). Next, we will use a custom package that I wrote called imutils. From imutils, we import build_montages to help us build a montage from a list of images (Line 17). For more information on building montages, please refer to my Montages with OpenCV tutorial.
We will finally import Matplotlib (Line 18) and OpenCV (Line 21). Now, let’s review our three command line arguments: # construct the argument parser and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-a", "--az", required=True, help="path to A-Z dataset") ap.add_argument("-m", "--model", type=str, required=True, help="path to output trained handwriting recognition model") ap.add_argument("-p", "--plot", type=str, default="plot.png", help="path to output training history file") args = vars(ap.parse_args()) We have three arguments to review: --az: The path to the Kaggle A-Z dataset (Lines 25 and 26) --model: The path to output the trained handwriting recognition model (Lines 27 and 28) --plot: The path to output the training history file (Lines 29 and 30) So far, we have our imports, convenience function, and command line args ready to go. We have several steps remaining to set up the training for ResNet, compile it, and train it. Now, we will set up the training parameters for ResNet and load our digit and letter data using the helper functions that we already reviewed: # initialize the number of epochs to train for, initial learning rate, # and batch size EPOCHS = 50 INIT_LR = 1e-1 BS = 128 # load the A-Z and MNIST datasets, respectively print("[INFO] loading datasets...") (azData, azLabels) = load_az_dataset(args["az"]) (digitsData, digitsLabels) = load_mnist_dataset() Lines 35-37 initialize the parameters for the training of our ResNet model. Then, we load the data and labels for the Kaggle A-Z and MNIST 0-9 digits data, respectively (Lines 41 and 42), making use of the I/O helper functions that we reviewed at the beginning of the post. Next, we are going to perform a number of steps to prepare our data and labels to be compatible with our ResNet deep learning model in Keras and TensorFlow: # the MNIST dataset occupies the labels 0-9, so let's add 10 to every # A-Z label to ensure the A-Z characters are not incorrectly labeled # as digits azLabels += 10 # stack the A-Z data and labels with the MNIST digits data and labels data = np.vstack([azData, digitsData]) labels = np.hstack([azLabels, digitsLabels]) # each image in the A-Z and MNIST digits datasets are 28x28 pixels; # however, the architecture we're using is designed for 32x32 images, # so we need to resize them to 32x32 data = [cv2.resize(image, (32, 32)) for image in data] data = np.array(data, dtype="float32") # add a channel dimension to every image in the dataset and scale the # pixel intensities of the images from [0, 255] down to [0, 1] data = np.expand_dims(data, axis=-1) data /= 255.0 As we combine our letters and numbers into a single character data set, we want to remove any ambiguity where there is overlap in the labels so that each label in the combined character set is unique. Currently, our labels for A-Z go from [0, 25], corresponding to each letter of the alphabet. The labels for our digits go from 0-9, so there is overlap — which would be problematic if we were to just combine them directly. No problem!
There is a very simple fix. We will just add ten to all of our A-Z labels so they all have integer label values greater than our digit label values (Line 47). Now, we have a unified labeling schema for digits 0-9 and letters A-Z without any overlap in the values of the labels. Line 50 combines our data sets for our digits and letters into a single character dataset using np.vstack. Likewise, Line 51 unifies our corresponding labels for our digits and letters using np.hstack. Our ResNet architecture requires the images to have input dimensions of 32 x 32, but our input images currently have a size of 28 x 28. We resize each of the images using cv2.resize (Line 56). We have two final steps to prepare our data for use with ResNet. On Line 61, we will add an extra “channel” dimension to every image in the dataset to make it compatible with the ResNet model in Keras/TensorFlow. Finally, we will scale our pixel intensities from a range of [0, 255] down to [0.0, 1.0] (Line 62).
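Before moving on to the label preparation below, here is a toy illustration (with made-up, miniature label arrays) of the shift-and-stack trick described at the top of this block:

import numpy as np

# three pretend A-Z labels (A=0, B=1, Z=25) and three pretend digit labels
azLabels = np.array([0, 1, 25])
digitsLabels = np.array([0, 1, 9])

azLabels += 10                                # letters now occupy [10, 35]
labels = np.hstack([azLabels, digitsLabels])  # digits keep [0, 9]

print(labels)                                 # [10 11 35  0  1  9]
print(np.unique(labels))                      # [ 0  1  9 10 11 35] -- no collisions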
Our next step is to prepare the labels for ResNet, weight the labels to account for the skew in the number of times each class (character) is represented in the data, and partition the data into test and training splits: # convert the labels from integers to vectors le = LabelBinarizer() labels = le.fit_transform(labels) counts = labels.sum(axis=0) # account for skew in the labeled data classTotals = labels.sum(axis=0) classWeight = {} # loop over all classes and calculate the class weight for i in range(0, len(classTotals)): classWeight[i] = classTotals.max() / classTotals[i] # partition the data into training and testing splits using 80% of # the data for training and the remaining 20% for testing (trainX, testX, trainY, testY) = train_test_split(data, labels, test_size=0.20, stratify=labels, random_state=42) We instantiate a LabelBinarizer(Line 65), and then we convert the labels from integers to a vector of binaries with one-hot encoding (Line 66) using le.fit_transform. Lines 70-75 weight each class, based on the frequency of occurrence of each character. Next, we will use the scikit-learn train_test_split utility (Lines 79 and 80) to partition the data into 80% training and 20% testing. From there, we’ll augment our data using an image generator from Keras: # construct the image generator for data augmentation aug = ImageDataGenerator( rotation_range=10, zoom_range=0.05, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.15, horizontal_flip=False, fill_mode="nearest") We can improve the results of our ResNet classifier by augmenting the input data for training using an ImageDataGenerator. Lines 82-90 include various rotations, scaling the size, horizontal translations, vertical translations, and tilts in the images. For more details on data augmentation, see our Keras ImageDataGenerator and Data Augmentation tutorial. Now we are ready to initialize and compile the ResNet network: # initialize and compile our deep neural network print("[INFO] compiling model...") opt = SGD(lr=INIT_LR, decay=INIT_LR / EPOCHS) model = ResNet.build(32, 32, 1, len(le.classes_), (3, 3, 3), (64, 64, 128, 256), reg=0.0005) model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"]) Using the SGD optimizer and a standard learning rate decay schedule, we build our ResNet architecture (Lines 94-96). Each character/digit is represented as a 32×32 pixel grayscale image as is evident by the first three parameters to ResNet’s build method. Note: For more details on ResNet, be sure to refer to the Practitioner Bundle of Deep Learning for Computer Vision with Python where you’ll learn how to implement and tune the powerful architecture. Lines 97 and 98 compile our model with "categorical_crossentropy" loss and our established SGD optimizer.
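If you'd like a quick sanity check at this point (a continuation of the script above, not part of the original code), you can confirm that the label binarizer produced 36 classes (10 digits plus 26 letters) and that the network's output matches:

# optional sanity check, continuing the script above
print(len(le.classes_))   # 36 -- 10 digits plus 26 letters
model.summary()           # the final Dense layer should report 36 output units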
Please beware that if you are working with a 2-class only dataset (we are not), you would need to use the "binary_crossentropy" loss function. Next, we will train the network, define label names, and evaluate the performance of the network: # train the network print("[INFO] training network...") H = model.fit( aug.flow(trainX, trainY, batch_size=BS), validation_data=(testX, testY), steps_per_epoch=len(trainX) // BS, epochs=EPOCHS, class_weight=classWeight, verbose=1) # define the list of label names labelNames = "0123456789" labelNames += "ABCDEFGHIJKLMNOPQRSTUVWXYZ" labelNames = [l for l in labelNames] # evaluate the network print("[INFO] evaluating network...") predictions = model.predict(testX, batch_size=BS) print(classification_report(testY.argmax(axis=1), predictions.argmax(axis=1), target_names=labelNames)) We train our model using the model.fit method (Lines 102-108). The parameters are as follows: aug.flow: establishes in-line data augmentation (Line 103) validation_data: test input images (testX) and test labels (testY) (Line 104) steps_per_epoch: how many batches are run per each pass of the full training data (Line 105) epochs: the number of complete passes through the full data set during training (Line 106) class_weight: weights due to the imbalance of data samples for various classes (e.g., digits and letters) in the training data (Line 107) verbose: shows a progress bar during the training (Line 108) Note: Formerly, TensorFlow/Keras required use of a method called .fit_generator in order to train a model using data generators (such as data augmentation objects). Now, the .fit method can handle generators/data augmentation as well, making for more-consistent code. This also applies to the migration from .predict_generator to .predict. Be sure to check out my articles about fit and fit_generator as well as data augmentation. Next, we establish labels for each individual character. Lines 111-113 concatenate all of our digits and letters and form an array where each member of the array is a single digit or letter. In order to evaluate our model, we make predictions on the test set and print our classification report. We’ll see the report very soon in the next section!
Line 118 prints out the results using the convenient scikit-learn classification_report utility. We will save the model to disk, plot the results of the training history, and save the plot: # save the model to disk print("[INFO] serializing network...") model.save(args["model"], save_format="h5") # construct a plot that plots and saves the training history N = np.arange(0, EPOCHS) plt.style.use("ggplot") plt.figure() plt.plot(N, H.history["loss"], label="train_loss") plt.plot(N, H.history["val_loss"], label="val_loss") plt.title("Training Loss and Accuracy") plt.xlabel("Epoch #") plt.ylabel("Loss/Accuracy") plt.legend(loc="lower left") plt.savefig(args["plot"]) As we have finished our training, we need to save the model comprised of the architecture and final weights. We will save our model to disk as a Hierarchical Data Format version 5 (HDF5) file, which is specified by the save_format (Line 123). Next, we use matplotlib’s plt to generate a line plot for the training loss and validation set loss along with titles, labels for the axes, and a legend. The data for the training and validation losses come from the history of H, the results of model.fit from above with one point for every epoch (Lines 127-134). The plot of the training loss curves is saved to plot.png (Line 135). Finally, let’s code our visualization procedure so we can see whether our model is working or not: # initialize our list of output test images images = [] # randomly select a few testing characters for i in np.random.choice(np.arange(0, len(testY)), size=(49,)): # classify the character probs = model.predict(testX[np.newaxis, i]) prediction = probs.argmax(axis=1) label = labelNames[prediction[0]] # extract the image from the test data and initialize the text # label color as green (correct) image = (testX[i] * 255).astype("uint8") color = (0, 255, 0) # otherwise, the class label prediction is incorrect if prediction[0] != np.argmax(testY[i]): color = (0, 0, 255) # merge the channels into one image, resize the image from 32x32 # to 96x96 so we can better see it and then draw the predicted # label on the image image = cv2.merge([image] * 3) image = cv2.resize(image, (96, 96), interpolation=cv2.INTER_LINEAR) cv2.putText(image, label, (5, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.75, color, 2) # add the image to our list of output images images.append(image) # construct the montage for the images montage = build_montages(images, (96, 96), (7, 7))[0] # show the output montage cv2.imshow("OCR Results", montage) cv2.waitKey(0) Line 138 initializes our array of test images. Starting on Line 141, we randomly select 49 characters (to form a 7×7 grid) and proceed to: Classify the character using our ResNet-based model (Lines 143-145) Grab the individual character image from our test data (Line 149) Set an annotation text color as green (correct) or red (incorrect) via Lines 150-154 Create an RGB representation of our single channel image and resize it for inclusion in our visualization montage (Lines 159 and 160) Annotate the colored text label (Lines 161 and 162) Add the image to our output images array (Line 165) To close out, we assemble each annotated character image into an OpenCV Montage visualization grid, displaying the result until a key is pressed (Lines 168-172). Congratulations!
We learned a lot along the way! Next, we’ll see the results of our hard work. Keras and TensorFlow OCR training results Recall from the last section that our script (1) loads MNIST 0-9 digits and Kaggle A-Z letters, (2) trains a ResNet model on the dataset, and (3) produces a visualization so that we can ensure it is working properly. In this section, we’ll execute our OCR model training and visualization script. To get started, use the “Downloads” section of this tutorial to download the source code and datasets. From there, open up a terminal, and execute the command below: $ python train_ocr_model.py --az a_z_handwritten_data.csv --model handwriting.model [INFO] loading datasets... [INFO] compiling model... [INFO] training network... Epoch 1/50 2765/2765 [==============================] - 93s 34ms/step - loss: 0.9160 - accuracy: 0.8287 - val_loss: 0.4713 - val_accuracy: 0.9406 Epoch 2/50 2765/2765 [==============================] - 87s 31ms/step - loss: 0.4635 - accuracy: 0.9386 - val_loss: 0.4116 - val_accuracy: 0.9519 Epoch 3/50 2765/2765 [==============================] - 87s 32ms/step - loss: 0.4291 - accuracy: 0.9463 - val_loss: 0.3971 - val_accuracy: 0.9543 ... Epoch 48/50 2765/2765 [==============================] - 86s 31ms/step - loss: 0.3447 - accuracy: 0.9627 - val_loss: 0.3443 - val_accuracy: 0.9625 Epoch 49/50 2765/2765 [==============================] - 85s 31ms/step - loss: 0.3449 - accuracy: 0.9625 - val_loss: 0.3433 - val_accuracy: 0.9622 Epoch 50/50 2765/2765 [==============================] - 86s 31ms/step - loss: 0.3445 - accuracy: 0.9625 - val_loss: 0.3411 - val_accuracy: 0.9635 [INFO] evaluating network... precision recall f1-score support 0 0.52 0.51 0.51 1381 1 0.97 0.98 0.97 1575 2 0.87 0.96 0.92 1398 3 0.98 0.99 0.99 1428 4 0.90 0.95 0.92 1365 5 0.87 0.88 0.88 1263 6 0.95 0.98 0.96 1375 7 0.96 0.99 0.97 1459 8 0.95 0.98 0.96 1365 9 0.96 0.98 0.97 1392 A 0.98 0.99 0.99 2774 B 0.98 0.98 0.98 1734 C 0.99 0.99 0.99 4682 D 0.95 0.95 0.95 2027 E 0.99 0.99 0.99 2288 F 0.99 0.96 0.97 232 G 0.97 0.93 0.95 1152 H 0.97 0.95 0.96 1444 I 0.97 0.95 0.96 224 J 0.98 0.96 0.97 1699 K 0.98 0.96 0.97 1121 L 0.98 0.98 0.98 2317 M 0.99 0.99 0.99 2467 N 0.99 0.99 0.99 3802 O 0.94 0.94 0.94 11565 P 1.00 0.99 0.99 3868 Q 0.96 0.97 0.97 1162 R 0.98 0.99 0.99 2313 S 0.98 0.98 0.98 9684 T 0.99 0.99 0.99 4499 U 0.98 0.99 0.99 5802 V 0.98 0.99 0.98 836 W 0.99 0.98 0.98 2157 X 0.99 0.99 0.99 1254 Y 0.98 0.94 0.96 2172 Z 0.96 0.90 0.93 1215 accuracy 0.96 88491 macro avg 0.96 0.96 0.96 88491 weighted avg 0.96 0.96 0.96 88491 [INFO] serializing network... As you can see, our Keras/TensorFlow OCR model is obtaining ~96% accuracy on the testing set. The training history can be seen below: Figure 2: Here’s a plot of our training history. It shows little signs of overfitting, implying that our Keras and TensorFlow model is performing well on our OCR task. As evidenced by the plot, there are few signs of overfitting, implying that our Keras and TensorFlow model is performing well at our basic OCR task. Let’s take a look at some sample output from our testing set: Figure 3: We can see from our sample output that our Keras and TensorFlow OCR model is performing quite well in identifying our character set.
As you can see, our Keras/TensorFlow OCR model is performing quite well! And finally, if you check your current working directory, you should find a new file named handwriting.model: $ ls *.model handwriting.model This file is our serialized Keras and TensorFlow OCR model — we’ll be using it in next week’s tutorial on handwriting recognition. Applying our OCR model to handwriting recognition Figure 4: Next week, we will extend this tutorial to handwriting recognition. At this point, you’re probably thinking: Hey Adrian, It’s pretty cool that we trained a Keras/TensorFlow OCR model — but what good does it do just sitting on my hard drive? How can I use it to make predictions and actually recognize handwriting? Rest assured, that very question will be addressed in next week’s tutorial — stay tuned; you won’t want to miss it! What's next? We recommend PyImageSearch University. Course information: 84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024 ★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated?
Or has to involve complex mathematics and equations? Or requires a degree in computer science? That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught. If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. Inside PyImageSearch University you'll find: ✓ 84 courses on essential computer vision, deep learning, and OpenCV topics ✓ 84 Certificates of Completion ✓ 114+ hours of on-demand video ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques ✓ Pre-configured Jupyter Notebooks in Google Colab ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!)
✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch ✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University Summary In this tutorial, you learned how to train a custom OCR model using Keras and TensorFlow. Our model was trained to recognize alphanumeric characters including the digits 0-9 as well as the letters A-Z. Overall, our Keras and TensorFlow OCR model was able to obtain ~96% accuracy on our testing set. In next week’s tutorial, you’ll learn how to take our trained Keras/TensorFlow OCR model and use it for handwriting recognition on custom input images. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below!
https://pyimagesearch.com/2020/09/28/image-segmentation-with-mask-r-cnn-grabcut-and-opencv/
Click here to download the source code to this post In this tutorial, you will learn how to perform image segmentation with Mask R-CNN, GrabCut, and OpenCV. A couple months ago, you learned how to use the GrabCut algorithm to segment foreground objects from the background. GrabCut worked fairly well but required that we manually supply where in the input image the object was so that GrabCut could apply its segmentation magic. Mask R-CNN, on the other hand, can automatically predict both the bounding box and the pixel-wise segmentation mask of each object in an input image. The downside is that masks produced by Mask R-CNN aren’t always “clean” — there is typically a bit of background that “bleeds” into the foreground segmentation. That raises the following questions: Is it possible to combine Mask R-CNN and GrabCut together? Can we use Mask R-CNN to compute the initial segmentation and then refine it using GrabCut? We certainly can — and the rest of this tutorial will show you how. To learn how to perform image segmentation with Mask R-CNN, GrabCut, and OpenCV, just keep reading. Looking for the source code to this post?
Jump Right To The Downloads Section Image Segmentation with Mask R-CNN, GrabCut, and OpenCV In the first part of this tutorial, we’ll discuss why we may want to combine GrabCut with Mask R-CNN for image segmentation. From there, we’ll implement a Python script that: loads an input image from disk; computes a pixel-wise segmentation mask for each object in the input image; and applies GrabCut to the object via the mask to improve the image segmentation. We’ll then review the results of applying Mask R-CNN and GrabCut together. The “Summary” of the tutorial covers some of the limitations of this method. Why use GrabCut and Mask R-CNN together for image segmentation? Figure 1: What is the purpose of using GrabCut and Mask R-CNN together for image segmentation with OpenCV? Mask R-CNN is a state-of-the-art deep neural network architecture used for image segmentation. Using Mask R-CNN, we can automatically compute pixel-wise masks for objects in the image, allowing us to segment the foreground from the background. An example mask computed via Mask R-CNN can be seen in Figure 1 at the top of this section. On the top-left, we have an input image of a barn scene. Mask R-CNN has detected a horse and then automatically computed its corresponding segmentation mask (top-right). And on the bottom, we can see the results of applying the computed mask to the input image — notice how the horse has been automatically segmented.
However, the output of Mask R-CNN is far from a perfect mask. We can see that the background (e.g., dirt from the field the horse is standing on) is “bleeding” into the foreground. Our goal here is to refine this mask using GrabCut to obtain a better segmentation: Figure 2: Sometimes, GrabCut works well to refine the Mask R-CNN results. In this tutorial, we’ll seek to do just that using OpenCV. In the image above, you can see the output of applying GrabCut using the mask predicted by Mask R-CNN as the GrabCut seed. Notice how the segmentation is a bit tighter, specifically around the horse’s legs. Unfortunately, we’ve now lost the top of the horse’s head as well as its hooves. Using GrabCut and Mask R-CNN together can be a bit of a trade-off. In some cases, it will work very well — and in other cases, it will make your results worse.
It’s all highly dependent on your application and what types of images you are segmenting. In the rest of today’s tutorial, we’ll explore the results of applying Mask R-CNN and GrabCut together. Configuring your development environment This tutorial only requires that you have OpenCV installed in a Python virtual environment. For most readers, the best way to get started is to follow my pip install opencv tutorial, which instructs how to set up the environment and which Python packages you need on macOS, Ubuntu, or Raspbian. Alternatively, if you have a CUDA-capable GPU on hand, you can follow my OpenCV with CUDA installation guide. Project structure Go ahead and grab the code and Mask R-CNN deep learning model from the “Downloads” section of this blog post. Once you extract the .zip, you’ll be presented with the following files: $ tree --dirsfirst . ├── mask-rcnn-coco │   ├── colors.txt │   ├── frozen_inference_graph.pb │   ├── mask_rcnn_inception_v2_coco_2018_01_28.pbtxt │   └── object_detection_classes_coco.txt ├── example.jpg └── mask_rcnn_grabcut.py 1 directory, 6 files The mask-rcnn-coco/ directory contains a pre-trained Mask R-CNN TensorFlow model trained on the MS-COCO dataset. Class names are included in a separate text file in the folder. The logic for our Mask R-CNN and GrabCut image segmentation tutorial is housed in the mask_rcnn_grabcut.py Python script.
We’ll test our methodology, seeking to mask out objects from the included example.jpg photo. Implementing image segmentation with Mask R-CNN and GrabCut Let’s get started implementing Mask R-CNN and GrabCut together for image segmentation with OpenCV. Open up a new file, name it mask_rcnn_grabcut.py, and insert the following code: # import the necessary packages import numpy as np import argparse import imutils import cv2 import os # construct the argument parser and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-m", "--mask-rcnn", required=True, help="base path to mask-rcnn directory") ap.add_argument("-i", "--image", required=True, help="path to input image") ap.add_argument("-c", "--confidence", type=float, default=0.5, help="minimum probability to filter weak detections") ap.add_argument("-t", "--threshold", type=float, default=0.3, help="minimum threshold for pixel-wise mask segmentation") ap.add_argument("-u", "--use-gpu", type=bool, default=0, help="boolean indicating if CUDA GPU should be used") ap.add_argument("-e", "--iter", type=int, default=10, help="# of GrabCut iterations (larger value => slower runtime)") args = vars(ap.parse_args()) After importing necessary packages (Lines 2-6), we define our command line arguments (Lines 9-22): --mask-rcnn: The base path to our Mask R-CNN directory containing our pre-trained TensorFlow segmentation model and class names. --image: The path to our input photo for segmentation. --confidence: Probability value used to filter weak object detections (here we default this value to 50%). --threshold: Adjust this value to control the minimum threshold for pixel-wise mask segmentations. --use-gpu: A boolean indicating whether a CUDA-capable GPU should be used (which typically results in faster inference). --iter: The number of GrabCut iterations to perform. More iterations lead to a longer runtime.
From here, we’ll load our deep learning model’s labels and associate a random color with each: # load the COCO class labels our Mask R-CNN was trained on labelsPath = os.path.sep.join([args["mask_rcnn"], "object_detection_classes_coco.txt"]) LABELS = open(labelsPath).read().strip().split("\n") # initialize a list of colors to represent each possible class label np.random.seed(42) COLORS = np.random.randint(0, 255, size=(len(LABELS), 3), dtype="uint8") After loading our class LABELS (Lines 25-27), we generate a corresponding set of random COLORS (one for each class) via Lines 30-32. Let’s go ahead and load our pre-trained Mask R-CNN model: # derive the paths to the Mask R-CNN weights and model configuration weightsPath = os.path.sep.join([args["mask_rcnn"], "frozen_inference_graph.pb"]) configPath = os.path.sep.join([args["mask_rcnn"], "mask_rcnn_inception_v2_coco_2018_01_28.pbtxt"]) # load our Mask R-CNN trained on the COCO dataset (90 classes) # from disk print("[INFO] loading Mask R-CNN from disk...") net = cv2.dnn.readNetFromTensorflow(weightsPath, configPath) # check if we are going to use GPU if args["use_gpu"]: # set CUDA as the preferable backend and target print("[INFO] setting preferable backend and target to CUDA...") net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA) net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA) Lines 35-38 derive paths to our model’s configuration and pre-trained weights. Our model is TensorFlow-based. However, OpenCV’s DNN module is able to load the model and prepare it for inference using a CUDA-capable NVIDIA GPU, if desired (Lines 43-50). Now that our model is loaded, we’re ready to load our image and perform inference: # load our input image from disk and display it to our screen image = cv2.imread(args["image"]) image = imutils.resize(image, width=600) cv2.imshow("Input", image) # construct a blob from the input image and then perform a # forward pass of the Mask R-CNN, giving us (1) the bounding box # coordinates of the objects in the image along with (2) the # pixel-wise segmentation for each specific object blob = cv2.dnn.blobFromImage(image, swapRB=True, crop=False) net.setInput(blob) (boxes, masks) = net.forward(["detection_out_final", "detection_masks"]) We load our input --image from disk and display it to our screen prior to performing any segmentation actions (Lines 53-55). From there, we pre-process the input by constructing a blob (Line 61). To perform Mask R-CNN inference, we pass the blob through our network, resulting in both object bounding boxes and pixel-wise segmentation masks (Lines 62-64). Given each of our detections, now we’ll proceed to generate each of the following four visualization images: rcnnMask: R-CNN mask rcnnOutput: R-CNN masked output outputMask: GrabCut mask based on mask approximations from our Mask R-CNN (refer to the “GrabCut with OpenCV: Initialization with masks” section of our previous GrabCut tutorial) output: GrabCut + Mask R-CNN masked output Be sure to refer to this list so you can keep track of each of the output images over the remaining code blocks.
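As a quick aside, it can help to print the shapes of these two outputs right after the forward pass (the values shown are what this particular pre-trained COCO model typically returns; confirm them on your own run):

# continuing the script above, immediately after the net.forward call
print(boxes.shape)   # e.g., (1, 1, 100, 7) -- up to 100 detections, each [_, classID, conf, x1, y1, x2, y2]
print(masks.shape)   # e.g., (100, 90, 15, 15) -- a coarse 15x15 mask per detection, per COCO class

This is why the loop below indexes boxes.shape[2] for the number of detections and masks[i, classID] for the mask belonging to the predicted class.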
Let’s begin looping over the detections: # loop over the number of detected objects for i in range(0, boxes.shape[2]): # extract the class ID of the detection along with the # confidence (i.e., probability) associated with the # prediction classID = int(boxes[0, 0, i, 1]) confidence = boxes[0, 0, i, 2] # filter out weak predictions by ensuring the detected # probability is greater than the minimum probability if confidence > args["confidence"]: # show the class label print("[INFO] showing output for '{}'...".format( LABELS[classID])) # scale the bounding box coordinates back relative to the # size of the image and then compute the width and the # height of the bounding box (H, W) = image.shape[:2] box = boxes[0, 0, i, 3:7] * np.array([W, H, W, H]) (startX, startY, endX, endY) = box.astype("int") boxW = endX - startX boxH = endY - startY Line 67 begins our loop over the detection, at which point we proceed to: Extract the classID and confidence (Lines 71 and 72) Filter out weak predictions, based on our --confidence threshold (Line 76) Scale bounding box coordinates according to the original dimensions of the image (Lines 84 and 85) Extract bounding box coordinates, and determine the width and height of said box (Lines 86-88) From here, we’re ready to start working on generating our R-CNN mask and masked image: # extract the pixel-wise segmentation for the object, resize # the mask such that it's the same dimensions as the bounding # box, and then finally threshold to create a *binary* mask mask = masks[i, classID] mask = cv2.resize(mask, (boxW, boxH), interpolation=cv2.INTER_CUBIC) mask = (mask > args["threshold"]).astype("uint8") * 255 # allocate a memory for our output Mask R-CNN mask and store # the predicted Mask R-CNN mask in the GrabCut mask rcnnMask = np.zeros(image.shape[:2], dtype="uint8") rcnnMask[startY:endY, startX:endX] = mask # apply a bitwise AND to the input image to show the output # of applying the Mask R-CNN mask to the image rcnnOutput = cv2.bitwise_and(image, image, mask=rcnnMask) # show the output of the Mask R-CNN and bitwise AND operation cv2.imshow("R-CNN Mask", rcnnMask) cv2.imshow("R-CNN Output", rcnnOutput) cv2.waitKey(0) First, we extract the mask, resize it according to the bounding box dimensions, and binarize it (Lines 93-96). Then, we allocate memory for the output Mask R-CNN mask and store the object mask into the bounding box ROI (Lines 100 and 101). Applying a bitwise AND to both the image and the rcnnMask results in our rcnnOutput (Line 105). The first two images are then displayed via Lines 108-110 with a pause for inspection and a keypress. 
Now, we’re ready to perform mask-based GrabCut: # clone the Mask R-CNN mask (so we can use it when applying # GrabCut) and set any mask values greater than zero to be # "probable foreground" (otherwise they are "definite # background") gcMask = rcnnMask.copy() gcMask[gcMask > 0] = cv2.GC_PR_FGD gcMask[gcMask == 0] = cv2.GC_BGD # allocate memory for two arrays that the GrabCut algorithm # internally uses when segmenting the foreground from the # background and then apply GrabCut using the mask # segmentation method print("[INFO] applying GrabCut to '{}' ROI...".format( LABELS[classID])) fgModel = np.zeros((1, 65), dtype="float") bgModel = np.zeros((1, 65), dtype="float") (gcMask, bgModel, fgModel) = cv2.grabCut(image, gcMask, None, bgModel, fgModel, iterCount=args["iter"], mode=cv2.GC_INIT_WITH_MASK) Recall from my previous GrabCut tutorial that there are two means of performing segmentation with GrabCut: bounding box-based and mask-based (the method we’re about to perform). Line 116 clones the rcnnMask so that we can use it when applying GrabCut. We then set the “probable foreground” and “definite background” values (Lines 117 and 118). We also allocate arrays for the foreground and background models that OpenCV’s GrabCut algorithm needs internally (Lines 126 and 127). From there, we call cv2.grabCut with the necessary parameters (Lines 128-130), including our initialized mask (the result of our Mask R-CNN). I highly recommend referring to the “OpenCV GrabCut” section from my first GrabCut blog post if you need a refresher on what each of OpenCV’s GrabCut input parameters and 3-tuple return signature are. Regarding the return, we only care about the gcMask as we’ll see next.
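For reference, the values stored in gcMask come from OpenCV's four GrabCut constants, which are just small integers:

import cv2

# the four GrabCut mask classes and their integer values
print(cv2.GC_BGD)     # 0 -- definite background
print(cv2.GC_FGD)     # 1 -- definite foreground
print(cv2.GC_PR_BGD)  # 2 -- probable background
print(cv2.GC_PR_FGD)  # 3 -- probable foreground

Keeping those four values in mind makes the np.where call in the next block easier to read.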
Let’s go ahead and generate our final two output images: # set all definite background and probable background pixels # to 0 while definite foreground and probable foreground # pixels are set to 1, then scale the mask from the range # [0, 1] to [0, 255] outputMask = np.where( (gcMask == cv2.GC_BGD) | (gcMask == cv2.GC_PR_BGD), 0, 1) outputMask = (outputMask * 255).astype("uint8") # apply a bitwise AND to the image using our mask generated # by GrabCut to generate our final output image output = cv2.bitwise_and(image, image, mask=outputMask) # show the output GrabCut mask as well as the output of # applying the GrabCut mask to the original input image cv2.imshow("GrabCut Mask", outputMask) cv2.imshow("Output", output) cv2.waitKey(0) To start, we set all “definite background” and “probable background” pixels to 0, and set all “definite foreground” and “probable foreground” pixels to 1 (Lines 136 and 137). Then, Line 138 converts the mask to the [0, 255] range as 8-bit unsigned integers. Applying a bitwise AND to our original image and Mask R-CNN + GrabCut outputMask results in our output (Line 142). Our final two image visualizations are then displayed via the remaining lines. In the next section, we’ll inspect our results. Mask R-CNN and GrabCut image segmentation results We are now ready to apply Mask R-CNN and GrabCut for image segmentation. Make sure you used the “Downloads” section of this tutorial to download the source code, example image, and pre-trained Mask R-CNN weights. For reference, here is the input image that we’ll be applying GrabCut and Mask R-CNN to: Figure 3: Our input example photo consists of a horse, rider (person), person (in background by fence), dog, truck, and farther away objects that will likely be perceived as background. We’ll apply GrabCut and Mask R-CNN with OpenCV to segment the objects in the image. Open up a terminal, and execute the following command: $ python mask_rcnn_grabcut.py --mask-rcnn mask-rcnn-coco --image example.jpg [INFO] loading Mask R-CNN from disk... [INFO] showing output for 'horse'... [INFO] applying GrabCut to 'horse' ROI... [INFO] showing output for 'person'... [INFO] applying GrabCut to 'person' ROI... [INFO] showing output for 'dog'... [INFO] applying GrabCut to 'dog' ROI... [INFO] showing output for 'truck'... [INFO] applying GrabCut to 'truck' ROI... [INFO] showing output for 'person'... [INFO] applying GrabCut to 'person' ROI... Let’s now take a look at each individual image segmentation: Figure 4: Top-left: R-CNN mask of a horse.
Top-right: R-CNN masked output. Bottom-left: GrabCut mask generated from the R-CNN mask initialization. Bottom-right: R-CNN + GrabCut masked output. As you can see, the results aren’t ideal — parts of the horse are excluded from the output. Here, you can see that Mask R-CNN has detected a horse in the input image. We then pass in that mask through GrabCut to refine the mask in hopes of obtaining a better image segmentation. While we are able to remove the background by the horse’s legs, it unfortunately cuts off the hooves and the top of the horse’s head. Let’s now take a look at segmenting the rider sitting on top of the horse: Figure 5: Great image segmentation results with Mask R-CNN, GrabCut, and OpenCV of the person riding the horse. This segmentation is considerably better than the previous one; however, the hair on the person’s head is lost after applying GrabCut. Here is the output of segmenting the truck from the input image: Figure 6: In this case, Mask R-CNN performed really well in isolating the truck from the photo.
We then apply GrabCut, producing subpar segmentation results. Mask R-CNN does a really great job segmenting the truck; however, GrabCut thinks only the grille, hood, and windshield are in the foreground, removing the rest. This next image contains the visualizations for segmenting the second person (the one in the distance by the fence): Figure 7: Person segmentation with Mask R-CNN and GrabCut with OpenCV performs really well in this case. This is one of the best examples of how Mask R-CNN and GrabCut can be successfully used together for image segmentation. Notice how we have a significantly tighter segmentation — any background (such as the grass in the field) that has bled into the foreground has been removed after applying GrabCut. And finally, here is the output of applying Mask R-CNN and GrabCut to the dog: Figure 8: Image Segmentation with Mask R-CNN, GrabCut, and OpenCV of a dog results in the dog’s head and paws being excluded from the segmentation result. The mask produced by Mask R-CNN still has a significant amount of background in it. By applying GrabCut, we can remove that background, but unfortunately the top of the dog’s head is lost with it. Mixed results, limitations, and drawbacks After looking at the mixed results from this tutorial, you’re probably wondering why I even bothered to write a tutorial on using GrabCut and Mask R-CNN together — in many cases, it seemed that applying GrabCut to a Mask R-CNN mask actually made the results worse! And while that is true, there are still situations (such as the second person segmentation in Figure 7) where applying GrabCut to the Mask R-CNN mask actually improved the segmentation.
I used an image with a complex foreground/background to show you the limitations of this method, but images with less complexity will obtain better results. A great example could be segmenting clothes from an input image to build a fashion search engine. Instance segmentation networks such as Mask R-CNN, U-Net, etc. can predict the location and mask of each article of clothing, and from there, GrabCut can refine the mask. While there will certainly be mixed results when applying Mask R-CNN and GrabCut together for image segmentation, it can still be worth an experiment to see if your results improve. What's next? We recommend PyImageSearch University. Course information: 84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024 ★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations?
Or requires a degree in computer science? That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught. If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. Inside PyImageSearch University you'll find: ✓ 84 courses on essential computer vision, deep learning, and OpenCV topics ✓ 84 Certificates of Completion ✓ 114+ hours of on-demand video ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques ✓ Pre-configured Jupyter Notebooks in Google Colab ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) ✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch ✓ Easy one-click downloads for code, datasets, pre-trained models, etc.
✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University Summary In this tutorial, you learned how to perform image segmentation using Mask R-CNN, GrabCut, and OpenCV. We used the Mask R-CNN deep neural network to compute the initial foreground segmentation mask for a given object in an image. The mask from Mask R-CNN can be automatically computed but often has background that “bleeds” into the foreground segmentation mask. To remedy that problem, we used GrabCut to refine the mask produced by Mask R-CNN. In some cases, GrabCut produced image segmentations that were better than the original masks produced by Mask R-CNN. And in other cases, the resulting image segmentations were worse — we would have been better off just sticking with the masks produced by Mask R-CNN. The biggest limitation is that even with the masks/bounding boxes automatically produced by Mask R-CNN, GrabCut is still an algorithm that requires iterative, manual annotation to provide the best results. Since we’re not manually providing hints and suggestions to GrabCut, the masks cannot be improved further. Had we been using a photo editing software package like Photoshop, GIMP, etc.,
then we would have a nice, easy-to-use GUI that would allow us to provide hints to GrabCut as to what is foreground versus what is background. You should certainly try using GrabCut to refine your Mask R-CNN masks. In some cases, you’ll find that it works perfectly, and you’ll obtain higher quality image segmentations. And in other situations, you might be better off just using the Mask R-CNN masks. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below!
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
In this tutorial, you will learn how to break deep learning models using image-based adversarial attacks. We will implement our adversarial attacks using the Keras and TensorFlow deep learning libraries. Imagine it’s twenty years from now. Nearly all cars and trucks on the road have been replaced with autonomous vehicles, powered by Artificial Intelligence, deep learning, and computer vision — every turn, lane switch, acceleration, and brake is powered by a deep neural network. Now, imagine you’re on the highway. You’re sitting in the “driver’s seat” (is it really a “driver’s seat” if the car is doing the driving?) while your spouse is in the passenger seat, and your kids are in the back. Looking ahead, you see a large sticker plastered on the lane your car is driving in. It looks innocent enough. It’s just a big print of the graffiti artist Banksy’s popular Girl with Balloon work.
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
Some high school kids probably just put it there as part of a weird dare/practical joke. Figure 1: Performing an adversarial attack requires taking an input image (left), purposely perturbing it with a noise vector (middle), which forces the network to misclassify the input image, ultimately resulting in an incorrect classification, potentially with major consequences (right). A split second later, your car reacts by violently braking hard and then switching lanes as if the large art print plastered on the road is a human, an animal, or another vehicle. You’re jerked so hard that you feel the whiplash. Your spouse screams while Cheerios from your kid in the backseat rocket forward, hitting the windshield and bouncing all over the center console. You and your family are safe … but it could have been a lot worse. What happened? Why did your self-driving car react that way? Was it some sort of weird “bug” in the code/software your car is running? The answer is that the deep neural network powering the “sight” component of your vehicle just saw an adversarial image.
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
Adversarial images are: Images that have pixels purposely and intentionally perturbed to confuse and deceive models … but at the same time, look harmless and innocent to humans. These images cause deep neural networks to make incorrect predictions. Adversarial images are perturbed in such a way that the model is unable to correctly classify them. In fact, it may be impossible for humans to visually distinguish a normal image from one that has been perturbed for an adversarial attack — essentially, the two images will appear identical to the human eye. While not an exact (or correct) comparison, I like to explain adversarial attacks in the context of image steganography. Using steganography algorithms, we can embed data (such as plaintext messages) in an image without distorting the appearance of the image itself. This image can be innocently transmitted to the receiver, who can then extract the hidden message from the image. Similarly, adversarial attacks embed a message in an input image — but instead of a plaintext message meant for human consumption, an adversarial attack instead embeds a noise vector in the input image. This noise vector is purposely constructed to fool and confuse deep learning models. But how do adversarial attacks work?
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
And how can we defend against them? This tutorial, along with the rest of the posts in this series, will cover those exact questions. To learn how to break deep learning models with adversarial attacks and images using Keras/TensorFlow, just keep reading. Looking for the source code to this post? Jump Right To The Downloads Section Adversarial images and attacks with Keras and TensorFlow In the first part of this tutorial, we’ll discuss what adversarial attacks are and how they impact deep learning models. From there, we’ll implement three separate Python scripts: The first one will be a helper utility used to load and parse class labels from the ImageNet dataset. Our next Python script will perform basic image classification using ResNet, pre-trained on the ImageNet dataset (thereby demonstrating “standard” image classification). The final Python script will perform an adversarial attack and construct an adversarial image that purposely confuses our ResNet model, even though the two images look identical to the human eye. Let’s get started! What are adversarial images and adversarial attacks? And how do they impact deep learning models?
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
Figure 2: When performing an adversarial attack, we present an input image (left) to our neural network. We then use gradient descent to construct the noise vector (middle). This noise vector is added to the input image, resulting in a misclassification (right). (Image source: Figure 1 of Explaining and Harnessing Adversarial Examples) In 2014, Goodfellow et al. published a paper entitled Explaining and Harnessing Adversarial Examples, which showed an intriguing property of deep neural networks — it’s possible to purposely perturb an input image such that the neural network misclassifies it. This type of perturbation is called an adversarial attack. The classic example of an adversarial attack can be seen in Figure 2 above. On the left, we have our input image which our neural network correctly classifies as “panda” with 57.7% confidence. In the middle, we have a noise vector, which, to the human eye, appears to be random. However, it’s far from random.
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
Instead, the pixels in the noise vector are “equal to the sign of the elements of the gradient of the cost function with respect to the input image” (Goodfellow et al.). We then add this noise vector to the input image, which produces the output (right) in Figure 2. To us, this image appears identical to the input; however, our neural network now classifies the image as a “gibbon” (a small ape, similar to a monkey) with 99.7% confidence. Creepy, right? A brief history of adversarial attacks and images Figure 3: A timeline of adversarial machine learning and security of deep neural network publications (Image source: Figure 8 of Can Machine Learning Be Secure?) Adversarial machine learning is not a new field, nor are these attacks specific to deep neural networks. In 2006, Barreno et al. published a paper entitled Can Machine Learning Be Secure? This paper discussed adversarial attacks, including proposed defenses against them. Back in 2006, the top state-of-the-art machine learning models included Support Vector Machines (SVMs) and Random Forests (RFs) — it’s been shown that both these types of models are susceptible to adversarial attacks.
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
With the rise in popularity of deep neural networks starting in 2012, it was hoped that these highly non-linear models would be less susceptible to attacks; however, Goodfellow et al. (among others) dashed these hopes. It turns out that deep neural networks are susceptible to adversarial attacks, just like their predecessors. For more information on the history of adversarial attacks, I recommend reading Biggio and Roli’s excellent 2017 paper, Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning. Why are adversarial attacks and images a problem? Figure 4: Why are adversarial attacks such a problem? Why should we be concerned? (image source) The example at the top of this tutorial outlined why adversarial attacks could cause massive damage to health, life, and property. Examples with less severe consequences could include a group of hackers identifying that a specific model is being used by Google for spam filtering in Gmail, or that a given model is being used by Facebook to automatically detect pornography in their NSFW filter. If these hackers wanted to flood Gmail users with emails that bypass Gmail’s spam filters, or upload massive amounts of pornography to Facebook that bypasses their NSFW filters, they could theoretically do so.
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
These are all examples of adversarial attacks with less severe consequences. An adversarial attack in a scenario with higher consequences could include hacker-terrorists identifying that a specific deep neural network is being used for nearly all self-driving cars in the world (imagine if Tesla had a monopoly on the market and was the only self-driving car producer). Adversarial images could then be strategically placed along roads and highways, causing massive pileups, property damage, and even injury/death to passengers in the vehicles. Adversarial attacks are limited only by your imagination, your knowledge of a given model, and how much access you have to the model itself. Can we defend against adversarial attacks? The good news is that we can help reduce the impact of adversarial attacks (but not necessarily eliminate them completely). That topic won’t be covered in today’s tutorial, but will be covered in a future tutorial on PyImageSearch. Configuring your development environment To configure your system for this tutorial, I recommend following either of these tutorials: How to install TensorFlow 2.0 on Ubuntu or How to install TensorFlow 2.0 on macOS. Either tutorial will help you configure your system with all the necessary software for this blog post in a convenient Python virtual environment. That said, are you: Short on time? Learning on your employer’s administratively locked laptop? Wanting to skip the hassle of fighting with package managers, bash/ZSH profiles, and virtual environments? Ready to run the code right now (and experiment with it to your heart’s content)? Then join PyImageSearch Plus today!
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
Gain access to PyImageSearch tutorial Jupyter Notebooks that run on Google’s Colab ecosystem in your browser — no installation required! Project structure Start by using the “Downloads” section of this tutorial to download the source code and example images. From there, let’s inspect our project directory structure. $ tree --dirsfirst . ├── pyimagesearch │   ├── __init__.py │   ├── imagenet_class_index.json │   └── utils.py ├── adversarial.png ├── generate_basic_adversary.py ├── pig.jpg └── predict_normal.py 1 directory, 7 files Inside the pyimagesearch module, we have two files: imagenet_class_index.json: A JSON file, which maps ImageNet class labels to human-readable strings. We’ll be using this JSON file to determine the integer index for a particular class label — this integer index will aid us when we construct our adversarial image attack. utils.py: Contains a simple Python helper function used to load and parse the imagenet_class_index.json. We then have two Python scripts that we’ll be reviewing today: predict_normal.py: Accepts an input image (pig.jpg), loads our ResNet50 model, and classifies it. The output of this script will be the ImageNet class label index of the predicted class label. generate_basic_adversary.py: Using the output of our predict_normal.py script, we’ll construct an adversarial attack that is able to fool ResNet.
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
The output of this script (adversarial.png) will be saved to disk. Ready to implement your first adversarial attack with Keras and TensorFlow? Let’s dive in. Our ImageNet class label/index helper utility Before we can perform either normal image classification or classification with an image perturbed via an adversarial attack, we first need to create a Python helper function used to load and parse the class labels of the ImageNet dataset. We have provided a JSON file that contains the ImageNet class label indexes, identifiers, and human-readable strings inside the imagenet_class_index.json file in the pyimagesearch module of our project directory structure. I’ve included the first few lines of this JSON file below: { "0": [ "n01440764", "tench" ], "1": [ "n01443537", "goldfish" ], "2": [ "n01484850", "great_white_shark" ], "3": [ "n01491361", "tiger_shark" ], ... "106": [ "n01883070", "wombat" ], ... Here you can see that the file is a dictionary. The key to the dictionary is the integer class label index, while the value is a 2-tuple consisting of: the ImageNet unique identifier for the label, and the human-readable class label. Our goal is to implement a Python function that will parse the JSON file by: accepting an input class label, and returning the integer class label index of the corresponding label. Essentially, we are inverting the key/value relationship in the imagenet_class_index.json file. Let’s start implementing our helper function now. Open up the utils.py file in the pyimagesearch module, and insert the following code: # import necessary packages import json import os def get_class_idx(label): # build the path to the ImageNet class label mappings file labelPath = os.path.join(os.path.dirname(__file__), "imagenet_class_index.json") Lines 2 and 3 import our required Python packages. We’ll be using the json Python module to load our JSON file, while the os package will be used to construct file paths, agnostic of which operating system you are using.
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
We then define our get_class_idx helper function. The goal of this function is to accept an input class label and then obtain the integer index of the prediction (i.e., which index out of the 1,000 class labels that a model trained on ImageNet would be able to predict). Line 7 constructs the path to the imagenet_class_index.json, which lives inside the pyimagesearch module. Let’s load the contents of that JSON file now: # open the ImageNet class mappings file and load the mappings as # a dictionary with the human-readable class label as the key and # the integer index as the value with open(labelPath) as f: imageNetClasses = {labels[1]: int(idx) for (idx, labels) in json.load(f).items()} # check to see if the input class label has a corresponding # integer index value, and if so return it; otherwise return # a None-type value return imageNetClasses.get(label, None) Lines 13-15 open the labelPath file and proceed to invert the key/value relationship such that the key is the human-readable label string and the value is the integer index that corresponds to that label. In order to obtain the integer index for the input label, we make a call to the .get method of the imageNetClasses dictionary (Line 20) — this call will return either the integer index of the label (if it exists in the dictionary) or, if the label does not exist in imageNetClasses, None. This value is then returned to the calling function. Let’s put our get_class_idx helper function to work in the following section. Normal image classification without adversarial attacks using Keras and TensorFlow With our ImageNet class label/index helper function implemented, let’s first create an image classification script that performs basic classification with no adversarial attacks. This script will demonstrate that our ResNet model is performing as we would expect it to (i.e., making correct predictions). Later in this tutorial, you’ll discover how to construct an adversarial image such that it confuses ResNet. Let’s get started with our basic image classification script — open up the predict_normal.py file in your project directory structure, and insert the following code: # import necessary packages from pyimagesearch.utils import get_class_idx from tensorflow.keras.applications import ResNet50 from tensorflow.keras.applications.resnet50 import decode_predictions from tensorflow.keras.applications.resnet50 import preprocess_input import numpy as np import argparse import imutils import cv2 We import our required Python packages on Lines 2-9.
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
These will all look fairly standard to you if you’ve ever worked with Keras, TensorFlow, and OpenCV before. That said, if you are new to Keras and TensorFlow, I strongly encourage you to read my Keras Tutorial: How to get started with Keras, Deep Learning, and Python guide. Additionally, you may want to read my book Deep Learning for Computer Vision with Python to obtain a deeper understanding of how to train your own custom neural networks. With all that said, take notice of Line 2, where we import our get_class_idx function, which we defined in the previous section — this function will allow us to obtain the integer index of the top predicted label from our ResNet50 model. Let’s move on to defining our preprocess_image helper function: def preprocess_image(image): # swap color channels, preprocess the image, and add in a batch # dimension image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) image = preprocess_input(image) image = cv2.resize(image, (224, 224)) image = np.expand_dims(image, axis=0) # return the preprocessed image return image The preprocess_image method accepts a single required argument, the image that we wish to preprocess. We preprocess the image by: (1) swapping the image from BGR to RGB channel ordering, (2) calling the preprocess_input function, which performs ResNet50-specific preprocessing and scaling, (3) resizing the image to 224×224, and (4) adding in a batch dimension. The preprocessed image is then returned to the calling function. Next, let’s parse our command line arguments: # construct the argument parser and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-i", "--image", required=True, help="path to input image") args = vars(ap.parse_args()) We only need a single command line argument here, --image, which is the path to our input image residing on disk. If you’ve never worked with command line arguments and argparse before, I suggest you read the following tutorial. Let’s now load our input image from disk and preprocess it: # load image from disk and make a clone for annotation print("[INFO] loading image...") image = cv2.imread(args["image"]) output = image.copy() # preprocess the input image output = imutils.resize(output, width=400) preprocessedImage = preprocess_image(image) A call to cv2.imread loads our input image from disk.
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
We clone it on Line 31 so we can later draw on it/annotate it with the final output class label prediction. We resize the output image to have a width of 400 pixels, such that it fits on our screen. We also call our preprocess_image function on the input image to prepare it for classification by ResNet. With our image preprocessed, we can load ResNet and classify the image: # load the pre-trained ResNet50 model print("[INFO] loading pre-trained ResNet50 model...") model = ResNet50(weights="imagenet") # make predictions on the input image and parse the top-3 predictions print("[INFO] making predictions...") predictions = model.predict(preprocessedImage) predictions = decode_predictions(predictions, top=3)[0] On Line 39 we load ResNet from disk with weights pre-trained on the ImageNet dataset. Lines 43 and 44 make predictions on our preprocessed image, which we then decode using the decode_predictions helper function in Keras/TensorFlow. Let’s now loop over the top-3 predictions from the network and display the class labels: # loop over the top three predictions for (i, (imagenetID, label, prob)) in enumerate(predictions): # print the ImageNet class label ID of the top prediction to our # terminal (we'll need this label for our next script which will # perform the actual adversarial attack) if i == 0: print("[INFO] {} => {}".format(label, get_class_idx(label))) # display the prediction to our screen print("[INFO] {}. {}: {:.2f}%".format(i + 1, label, prob * 100)) Line 47 begins a loop over the top-3 predictions. If this is the first prediction (i.e., the top-1 prediction), we display the human-readable label to our terminal and then look up the ImageNet integer index of the corresponding label using our get_class_idx function. We also display the top-3 labels and corresponding probability to our terminal. The final step is to draw the top-1 prediction on the output image: # draw the top-most predicted label on the image along with the # confidence score text = "{}: {:.2f}%".format(predictions[0][1], predictions[0][2] * 100) cv2.putText(output, text, (3, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2) # show the output image cv2.imshow("Output", output) cv2.waitKey(0) The output image is displayed on our screen until the window opened by OpenCV is selected and a key is pressed.
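As a quick sanity check before running the full script, you can exercise the get_class_idx helper on its own. The snippet below is a hypothetical usage example (it assumes you run it from the project root so the pyimagesearch module is importable); the expected index for "hog" comes from the results in the next section:

# hypothetical sanity check for the get_class_idx helper defined in pyimagesearch/utils.py
from pyimagesearch.utils import get_class_idx
# a valid ImageNet label returns its integer index -- "hog" maps to 341
print(get_class_idx("hog"))
# an unknown label falls through the .get() call and returns None
print(get_class_idx("not_a_real_label"))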
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
Non-adversarial image classification results We are now ready to perform basic image classification (i.e., no adversarial attack) with ResNet. Start by using the “Downloads” section of this tutorial to download the source code and example images. From there, open up a terminal and execute the following command: $ python predict_normal.py --image pig.jpg [INFO] loading image... [INFO] loading pre-trained ResNet50 model... [INFO] making predictions... [INFO] hog => 341 [INFO] 1. hog: 99.97% [INFO] 2. wild_boar: 0.03% [INFO] 3. piggy_bank: 0.00% Figure 5: Our pre-trained ResNet model is able to correctly classify this image as “hog”. Here you can see that we have classified an input image of a pig, with 99.97% confidence. Additionally, take note of the “hog” ImageNet label ID (341) — we’ll be using this class label ID in the next section, where we will perform an adversarial attack on the hog input image. Implementing adversarial images and attacks with Keras and TensorFlow We will now learn how to implement adversarial attacks with Keras and TensorFlow. Open up the generate_basic_adversary.py file in our project directory structure, and insert the following code: # import necessary packages from tensorflow.keras.optimizers import Adam from tensorflow.keras.applications import ResNet50 from tensorflow.keras.losses import SparseCategoricalCrossentropy from tensorflow.keras.applications.resnet50 import decode_predictions from tensorflow.keras.applications.resnet50 import preprocess_input import tensorflow as tf import numpy as np import argparse import cv2 We start by importing our required Python packages on Lines 2-10.
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
You’ll notice that we are once again using the ResNet50 architecture with its corresponding preprocess_input function (for preprocessing/scaling input images) and decode_predictions utility to decode output predictions and display the human-readable ImageNet labels. The SparseCategoricalCrossentropy class computes the categorical cross-entropy loss between the labels and predictions. By using the sparse implementation of categorical cross-entropy, we do not have to explicitly one-hot encode our class labels like we would if we were using scikit-learn’s LabelBinarizer or Keras/TensorFlow’s to_categorical utility. Just like we had a preprocess_image utility in our predict_normal.py script, we also need one for this script: def preprocess_image(image): # swap color channels, resize the input image, and add a batch # dimension image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) image = cv2.resize(image, (224, 224)) image = np.expand_dims(image, axis=0) # return the preprocessed image return image This implementation is identical to the one above with the exception of leaving out the preprocess_input function call — you’ll see why we are leaving out that call once we start constructing our adversarial image. Next up, we have a simple helper utility, clip_eps: def clip_eps(tensor, eps): # clip the values of the tensor to a given range and return it return tf.clip_by_value(tensor, clip_value_min=-eps, clip_value_max=eps) The goal of this function is to accept an input tensor and then clip any values inside the input to the range [-eps, eps]. The clipped tensor is then returned to the calling function. We now arrive at the generate_adversaries function, which is the meat of our adversarial attack: def generate_adversaries(model, baseImage, delta, classIdx, steps=50): # iterate over the number of steps for step in range(0, steps): # record our gradients with tf.GradientTape() as tape: # explicitly indicate that our perturbation vector should # be tracked for gradient updates tape.watch(delta) The generate_adversaries method is the workhorse of our script. This function accepts four required parameters and an optional fifth one: model: Our ResNet50 model (you could swap in a different pre-trained model such as VGG16, MobileNet, etc. if you prefer).
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
baseImage: The original non-perturbed input image that we wish to construct an adversarial attack for, causing our model to misclassify it. delta: Our noise vector, which will be added to the baseImage, ultimately causing the misclassification. We’ll update this delta vector by means of gradient descent. classIdx: The integer class label index we obtained by running the predict_normal.py script. steps: Number of gradient descent steps to perform (defaults to 50 steps). Line 29 starts a loop over our number of steps. We then use GradientTape to record our gradients. Calling the .watch method of the tape explicitly indicates that our perturbation vector should be tracked for updates. We can now construct our adversarial image: # add our perturbation vector to the base image and # preprocess the resulting image adversary = preprocess_input(baseImage + delta) # run this newly constructed image tensor through our # model and calculate the loss with respect to the # *original* class index predictions = model(adversary, training=False) loss = -sccLoss(tf.convert_to_tensor([classIdx]), predictions) # check to see if we are logging the loss value, and if # so, display it to our terminal if step % 5 == 0: print("step: {}, loss: {}...".format(step, loss.numpy())) # calculate the gradients of loss with respect to the # perturbation vector gradients = tape.gradient(loss, delta) # update the weights, clip the perturbation vector, and # update its value optimizer.apply_gradients([(gradients, delta)]) delta.assign_add(clip_eps(delta, eps=EPS)) # return the perturbation vector return delta Line 38 constructs our adversary image by adding the delta perturbation vector to the baseImage. The result of this addition is passed through ResNet50’s preprocess_input function to scale and normalize the resulting adversarial image.
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
From there, the following takes place: Line 43 takes our model and makes predictions on the newly constructed adversary. Lines 44 and 45 calculate the loss with respect to the original classIdx (i.e., the integer index of the top-1 ImageNet class label, which we obtained by running predict_normal.py). Lines 49-51 show our resulting loss every five steps. Outside of the with statement now, we calculate the gradients of the loss with respect to our perturbation vector (Line 55). We can then update the delta vector and clip any values that fall outside the [-EPS, EPS] range. Finally, we return the resulting perturbation vector to the calling function — the final delta value will allow us to construct the adversarial attack used to fool our model. With the workhorse of our adversarial script implemented, let’s move on to parsing our command line arguments: # construct the argument parser and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-i", "--input", required=True, help="path to original input image") ap.add_argument("-o", "--output", required=True, help="path to output adversarial image") ap.add_argument("-c", "--class-idx", type=int, required=True, help="ImageNet class ID of the predicted label") args = vars(ap.parse_args()) Our adversarial attack Python script requires three command line arguments: --input: The path to the input image (i.e., pig.jpg) residing on disk. --output: The output adversarial image after constructing the attack (adversarial.png) --class-idx: The integer class label index from the ImageNet dataset. We obtained this value by running predict_normal.py in the “Non-adversarial image classification results” section of this tutorial.
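Before moving on, it is worth summarizing what the generate_adversaries loop is actually optimizing. As a rough sketch of the code above (my own summary, not a formula quoted from the Goodfellow et al. paper), the attack searches for a perturbation delta that maximizes the loss of the original class while staying inside an L-infinity budget:

\max_{\|\delta\|_\infty \le \epsilon} \; \mathcal{L}\big(f(x + \delta),\, y_{\text{orig}}\big)

Here x is the input image, y_orig is classIdx, f is ResNet50, \mathcal{L} is the sparse categorical cross-entropy, and \epsilon is the EPS clipping constant defined in the next code block. Since Keras optimizers minimize, the code negates the loss and lets Adam drive that negated value down, clipping delta back into [-EPS, EPS] after each step.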
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
We can now perform a couple of initializations and load/preprocess our --input image: # define the epsilon and learning rate constants EPS = 2 / 255.0 LR = 0.1 # load the input image from disk and preprocess it print("[INFO] loading image...") image = cv2.imread(args["input"]) image = preprocess_image(image) Line 76 defines our epsilon (EPS) value used for clipping tensors when constructing the adversarial image. An EPS value of 2 / 255.0 is a standard value used in adversarial publications and tutorials (the following guide is also helpful if you’re interested in learning more about this “default” value). We then define our learning rate on Line 77. A value of LR = 0.1 was obtained by empirical tuning — you may need to update this value when constructing your own adversarial images. Lines 81 and 82 load our input image from disk and preprocess it using our preprocess_image helper function. Next, we can load our ResNet model: # load the pre-trained ResNet50 model for running inference print("[INFO] loading pre-trained ResNet50 model...") model = ResNet50(weights="imagenet") # initialize optimizer and loss function optimizer = Adam(learning_rate=LR) sccLoss = SparseCategoricalCrossentropy() Line 86 loads the ResNet50 model, pre-trained on the ImageNet dataset. We’ll use the Adam optimizer, along with the sparse categorical cross-entropy loss, when updating our perturbation vector. Let’s now construct our adversarial image: # create a tensor based off the input image and initialize the # perturbation vector (we will update this vector via training) baseImage = tf.constant(image, dtype=tf.float32) delta = tf.Variable(tf.zeros_like(baseImage), trainable=True) # generate the perturbation vector to create an adversarial example print("[INFO] generating perturbation...") deltaUpdated = generate_adversaries(model, baseImage, delta, args["class_idx"]) # create the adversarial example, swap color channels, and save the # output image to disk print("[INFO] creating adversarial example...") adverImage = (baseImage + deltaUpdated).numpy().squeeze() adverImage = np.clip(adverImage, 0, 255).astype("uint8") adverImage = cv2.cvtColor(adverImage, cv2.COLOR_RGB2BGR) cv2.imwrite(args["output"], adverImage) Line 94 constructs a tensor from our input image, while Line 95 initializes delta, our perturbation vector. To actually construct and update the delta vector, we make a call to generate_adversaries, passing in our ResNet50 model, input image, perturbation vector, and integer class label index.
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
The generate_adversaries function runs, updating the delta perturbation vector along the way, resulting in deltaUpdated, the final noise vector. We construct our final adversarial image (adverImage) on Line 105 by adding the deltaUpdated vector to baseImage. Afterward, we proceed to post-process the resulting adversarial image by: (1) clipping any values that fall outside the range [0, 255], (2) converting the image to an unsigned 8-bit integer (so that OpenCV can now operate on the image), and (3) swapping the color channel ordering from RGB to BGR. After the above post-processing steps, we write the output adversarial image to disk. The real question is, can our newly constructed adversarial image fool our ResNet model? The next code block will address that question: # run inference with this adversarial example, parse the results, # and display the top-1 predicted result print("[INFO] running inference on the adversarial example...") preprocessedImage = preprocess_input(baseImage + deltaUpdated) predictions = model.predict(preprocessedImage) predictions = decode_predictions(predictions, top=3)[0] label = predictions[0][1] confidence = predictions[0][2] * 100 print("[INFO] label: {} confidence: {:.2f}%".format(label, confidence)) # draw the top-most predicted label on the adversarial image along # with the confidence score text = "{}: {:.2f}%".format(label, confidence) cv2.putText(adverImage, text, (3, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2) # show the output image cv2.imshow("Output", adverImage) cv2.waitKey(0) We once again construct our adversarial image on Line 113 by adding the delta noise vector to our original input image, but this time we call ResNet’s preprocess_input utility on it. The resulting preprocessed image is passed through ResNet, after which we grab the top-3 predictions and decode them (Lines 114 and 115). We then grab the label and corresponding probability/confidence of the top-1 prediction and display these values to our terminal (Lines 116-119). The final step is to draw the top prediction on our output adversarial image and display it to our screen. Results of adversarial images and attacks Ready to see an adversarial attack in action? Make sure you used the “Downloads” section of this tutorial to download the source code and example images.
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
From there, you can open up a terminal and execute the following command: $ python generate_basic_adversary.py --input pig.jpg --output adversarial.png --class-idx 341 [INFO] loading image... [INFO] loading pre-trained ResNet50 model... [INFO] generating perturbation... step: 0, loss: -0.0004124982515349984... step: 5, loss: -0.0010656398953869939... step: 10, loss: -0.005332294851541519... step: 15, loss: -0.06327803432941437... step: 20, loss: -0.7707189321517944... step: 25, loss: -3.4659299850463867... step: 30, loss: -7.515471935272217... step: 35, loss: -13.503922462463379... step: 40, loss: -16.118188858032227... step: 45, loss: -16.118192672729492... [INFO] creating adversarial example... [INFO] running inference on the adversarial example... [INFO] label: wombat confidence: 100.00% Figure 6: Previously, this input image was correctly classified as “hog” but is now classified as “wombat” due to our adversarial attack! Our input pig.jpg, which was correctly classified as “hog” in the previous section, is now labeled as a “wombat”! I’ve placed the original pig.jpg image next to the adversarial image generated by our generate_basic_adversary.py script below: Figure 7: On the left, we have our original input image, which is correctly classified. On the right, we have our output adversarial image, which is incorrectly classified as “wombat” — the human eye is unable to spot any differences between these images. On the left is the original hog image, while on the right we have the output adversarial image, which is incorrectly classified as a “wombat”. As you can see, there is no perceptible difference between the two images — our human eyes are unable to see any difference between them, but to ResNet, they are totally different. That’s all well and good, but we clearly don’t have control over the final class label in the adversarial image. That raises the question: Is it possible to control what the final output class label of the input image is? The answer is yes — and I’ll be covering that question in next week’s tutorial. I’ll conclude by saying that it’s easy to get scared of adversarial images and adversarial attacks if you let your imagination get the best of you.
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
But as we’ll see in a later tutorial on PyImageSearch, we can actually defend against these types of attacks. More on that later. Credits This tutorial would not have been possible without the research of Goodfellow, Szegedy, and many other deep learning researchers. Additionally, I want to call out that the implementation used in today’s tutorial is inspired by TensorFlow’s official implementation of the Fast Gradient Sign Method. I strongly suggest you take a look at their example, which does a fantastic job explaining the more theoretical and mathematically motivated aspects of this tutorial.
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
Summary In this tutorial, you learned about adversarial attacks, how they work, and the threat they pose to a world becoming more and more reliant on Artificial Intelligence and deep neural networks. We then implemented a basic adversarial attack algorithm using the Keras and TensorFlow deep learning libraries. Using adversarial attacks, we can purposely perturb an input image such that: (1) the input image is misclassified, yet (2) to the human eye, the perturbed image looks identical to the original. However, using the method applied here today, we have absolutely no control over what the final class label of the image is — all we’re doing is creating and embedding a noise vector that causes the deep neural network to misclassify the image. But what if we could control what the final target class label is? For example, is it possible to take an image of a “dog” and construct an adversarial attack such that the Convolutional Neural Network thinks the image is a “cat”? The answer is yes — and we’ll be covering that exact topic in next week’s tutorial.
https://pyimagesearch.com/2020/10/19/adversarial-images-and-attacks-with-keras-and-tensorflow/
https://pyimagesearch.com/2020/10/26/targeted-adversarial-attacks-with-keras-and-tensorflow/
In this tutorial, you will learn how to perform targeted adversarial attacks and construct targeted adversarial images using Keras, TensorFlow, and Deep Learning. Last week’s tutorial covered untargeted adversarial learning, which is the process of: Step #1: Accepting an input image and determining its class label using a pre-trained CNN. Step #2: Constructing a noise vector that purposely perturbs the resulting image when added to the input image, in such a way that: Step #2a: The input image is incorrectly classified by the pre-trained CNN. Step #2b: Yet, to the human eye, the perturbed image is indistinguishable from the original. With untargeted adversarial learning, we don’t care what the new class label of the input image is, provided that it is incorrectly classified by the CNN. For example, the following image shows that we have applied adversarial learning to take an input correctly classified as “hog” and perturb it such that the image is now incorrectly classified as “wombat”: Figure 1: On the left, we have our input image, which is correctly classified as a “hog”. By constructing an adversarial attack, we can perturb the input image such that it is incorrectly classified (right). However, we have no control over what the final incorrect class label is — can we somehow modify our adversarial attack algorithm such that we have control over the final output label? In untargeted adversarial learning, we have no control over what the final, perturbed class label is. But what if we wanted to have control? Is that possible? It absolutely is — and in order to control the class label of the perturbed image, we need to apply targeted adversarial learning. The remainder of this tutorial will show you how to apply targeted adversarial learning.
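In loss terms, the difference between the two settings can be sketched as follows (this is a common formulation for gradient-based attacks and a simplification on my part, not necessarily the exact objective used in the code later in this post). An untargeted attack maximizes the loss of the original label, while a targeted attack minimizes the loss of the label we choose:

\text{untargeted: } \max_{\|\delta\|_\infty \le \epsilon} \mathcal{L}\big(f(x+\delta),\, y_{\text{orig}}\big) \qquad \text{targeted: } \min_{\|\delta\|_\infty \le \epsilon} \mathcal{L}\big(f(x+\delta),\, y_{\text{target}}\big)

where x is the input image, \delta is the perturbation, \epsilon bounds how large the perturbation may be, and y_target is the class we want the perturbed image to be assigned.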
https://pyimagesearch.com/2020/10/26/targeted-adversarial-attacks-with-keras-and-tensorflow/
To learn how to perform targeted adversarial learning with Keras and TensorFlow, just keep reading. Looking for the source code to this post? Jump Right To The Downloads Section Targeted adversarial attacks with Keras and TensorFlow In the first part of this tutorial, we’ll briefly discuss what adversarial attacks and adversarial images are. I’ll then explain the difference between targeted adversarial attacks versus untargeted ones. Next, we’ll review our project directory structure, and from there, we’ll implement a Python script that will apply targeted adversarial learning using Keras and TensorFlow. We’ll wrap up this tutorial with a discussion of our results. What are adversarial attacks? And what are image adversaries? Figure 2: When performing an adversarial attack, we present an input image (left) to our neural network. We then use gradient descent to construct the noise vector (middle).
https://pyimagesearch.com/2020/10/26/targeted-adversarial-attacks-with-keras-and-tensorflow/
This noise vector is added to the input image, resulting in a misclassification (right). (Image source: Figure 1 of Explaining and Harnessing Adversarial Examples) If you are new to adversarial attacks and have not heard of adversarial images before, I suggest you first read my blog post, Adversarial images and attacks with Keras and TensorFlow before reading this guide. The gist is that adversarial images are purposely constructed to fool pre-trained models. For example, if a pre-trained CNN is able to correctly classify an input image, an adversarial attack seeks to take that very same image and: Perturb it such that the image is now incorrectly classified … yet the new, perturbed image looks identical to the original (at least to the human eye). It’s important to understand how adversarial attacks work and how adversarial images are constructed — knowing this will help you train your CNNs such that they can defend against these types of adversarial attacks (a topic that I will cover in a future tutorial). How is a targeted adversarial attack different from an untargeted one? Figure 3: When performing an untargeted adversarial attack, we have no control over the output class label. However, when performing a targeted adversarial attack, we are able to incorporate label information into the gradient update process. Figure 3 above visually shows the difference between an untargeted adversarial attack and a targeted one. When constructing an untargeted adversarial attack, we have no control over what the final output class label of the perturbed image will be — our only goal is to force the model to incorrectly classify the input image. Figure 3 (top) is an example of an untargeted adversarial attack.
https://pyimagesearch.com/2020/10/26/targeted-adversarial-attacks-with-keras-and-tensorflow/
Here, we input the image of a “pig” — the adversarial attack algorithm then perturbs the input image such that it’s misclassified as a “wombat”, but again, we did not specify what the target class label should be (and frankly, the untargeted algorithm doesn’t care, as long as the input image is now incorrectly classified). On the other hand, targeted adversarial attacks give us more control over what the final predicted label of the perturbed image is. Figure 3 (bottom) is an example of a targeted adversarial attack. We once again input our image of a “pig”, but we also supply the target class label of the perturbed image (which in this case is a “Lakeland terrier”, a type of dog). Our targeted adversarial attack algorithm is then able to perturb the input image of the pig such that it is now misclassified as a Lakeland terrier. You’ll learn how to perform such a targeted adversarial attack in the remainder of this tutorial. Configuring your development environment To configure your system for this tutorial, I recommend following either of these tutorials: How to install TensorFlow 2.0 on Ubuntu or How to install TensorFlow 2.0 on macOS. Either tutorial will help you configure your system with all the necessary software for this blog post in a convenient Python virtual environment. That said, are you: Short on time? Learning on your employer’s administratively locked laptop? Wanting to skip the hassle of fighting with package managers, bash/ZSH profiles, and virtual environments? Ready to run the code right now (and experiment with it to your heart’s content)? Then join PyImageSearch Plus today! Gain access to our PyImageSearch tutorial Jupyter Notebooks, which run on Google’s Colab ecosystem in your browser — no installation required.
https://pyimagesearch.com/2020/10/26/targeted-adversarial-attacks-with-keras-and-tensorflow/
Project structure Before we can start implementing targeted adversarial attack with Keras and TensorFlow, we first need to review our project directory structure. Start by using the “Downloads” section of this tutorial to download the source code and example images. From there, inspect the directory structure: $ tree --dirsfirst . ├── pyimagesearch │   ├── __init__.py │   ├── imagenet_class_index.json │   └── utils.py ├── adversarial.png ├── generate_targeted_adversary.py ├── pig.jpg └── predict_normal.py 1 directory, 7 files Our directory structure is identical to last week’s guide on Adversarial images and attacks with Keras and TensorFlow. The pyimagesearch module contains utils.py, a helper utility that loads and parses the ImageNet class label indexes located in imagenet_class_index.json. We covered this helper function in last week’s tutorial and will not be covering the implementation here today — I suggest you read my previous tutorial for more details on it. We then have two Python scripts: predict_normal.py: Accepts an input image (pig.jpg), loads our ResNet50 model, and classifies it. The output of this script will be the ImageNet class label index of the predicted class label. This script was also covered in last week’s tutorial, and I will not be reviewing it here. Please refer back to my Adversarial images and attacks with Keras and TensorFlow guide if you would like a review of the implementation.
https://pyimagesearch.com/2020/10/26/targeted-adversarial-attacks-with-keras-and-tensorflow/
generate_targeted_adversary.py: Using the output of our predict_normal.py script, we’ll apply a targeted adversarial attack that allows us to perturb the input image such that it is misclassified as a label of our choosing. The output, adversarial.png, will be serialized to disk. Let’s get to work implementing targeted adversarial attacks! Step #1: Obtaining original class label predictions using our pre-trained CNN Before we can perform a targeted adversarial attack, we must first determine what the predicted class label from a pre-trained CNN is. For the purposes of this tutorial, we’ll be using the ResNet architecture, pre-trained on the ImageNet dataset. For any given input image, we’ll need to: (1) load the image, (2) preprocess it, (3) pass it through ResNet, (4) obtain the class label prediction, and (5) determine the integer index of the class label. Once we have both the integer index of the predicted class label and the target class label we want the network to assign to the image, we’ll be able to perform a targeted adversarial attack. Let’s get started by obtaining the class label prediction and index of the following image of a pig: Figure 4: Our input image of a “pig”. We’ll be performing a targeted adversarial attack such that this image is incorrectly classified as a “Lakeland terrier” (a type of dog). To accomplish this task, we’ll be using the predict_normal.py script in our project directory structure. This script was reviewed in last week’s tutorial, so we won’t be reviewing it here today — if you’re interested in seeing the code behind this script, refer to my previous tutorial.
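If you skipped last week's post, the five steps listed above condense to just a few lines. The following is only a condensed sketch (it reuses the pyimagesearch.utils helper and the preprocessing order described in the previous tutorial; it is not the full predict_normal.py script):

from pyimagesearch.utils import get_class_idx
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import decode_predictions
from tensorflow.keras.applications.resnet50 import preprocess_input
import numpy as np
import cv2
# (1) load and (2) preprocess the image the same way predict_normal.py does
image = cv2.imread("pig.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = preprocess_input(image)
image = cv2.resize(image, (224, 224))
image = np.expand_dims(image, axis=0)
# (3) pass it through ResNet and (4) grab the top prediction
model = ResNet50(weights="imagenet")
preds = decode_predictions(model.predict(image), top=1)[0]
label = preds[0][1]
# (5) map the human-readable label back to its ImageNet integer index
print(label, "=>", get_class_idx(label))   # expected output: hog => 341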
https://pyimagesearch.com/2020/10/26/targeted-adversarial-attacks-with-keras-and-tensorflow/
With all that said, start by using the “Downloads” section of this tutorial to download the source code and example images. $ python predict_normal.py --image pig.jpg [INFO] loading image... [INFO] loading pre-trained ResNet50 model... [INFO] making predictions... [INFO] hog => 341 [INFO] 1. hog: 99.97% [INFO] 2. wild_boar: 0.03% [INFO] 3. piggy_bank: 0.00% Figure 5: Our pre-trained ResNet model is able to correctly classify this image as “hog”. Here you can see that our input pig.jpg image is classified as a “hog” with 99.97% confidence. In our next section, you’ll learn how to perturb this image such that it’s misclassified as a “Lakeland terrier” (a type of dog). But for now, make note of Line 5 of our terminal output, which shows that the ImageNet class label index of the predicted label “hog” is 341 — we’ll need this value in the next section. Step #2: Implementing targeted adversarial attacks with Keras and TensorFlow We are now ready to implement targeted adversarial attacks and construct a targeted adversarial image using Keras and TensorFlow. Open up the generate_targeted_adversary.py file in your project directory structure, and insert the following code: # import necessary packages from tensorflow.keras.optimizers import Adam from tensorflow.keras.applications import ResNet50 from tensorflow.keras.losses import SparseCategoricalCrossentropy from tensorflow.keras.applications.resnet50 import decode_predictions from tensorflow.keras.applications.resnet50 import preprocess_input import tensorflow as tf import numpy as np import argparse import cv2 We start by importing our required Python packages on Lines 2-10.