https://pyimagesearch.com/2020/02/17/autoencoders-with-keras-tensorflow-and-deep-learning/
Why on earth would I apply deep learning and go through the trouble of training a network? This question, although a legitimate one, contains a common misconception about autoencoders. Yes, during training our goal is to teach the network to reconstruct its input data — but the true value of the autoencoder lives inside that latent-space representation. Keep in mind that autoencoders compress our input data and, more to the point, when we train autoencoders, what we really care about are the encoder and the latent-space representation it produces. The decoder is used to train the autoencoder end-to-end, but in practical applications, we often (but not always) care more about the encoder and the latent space. Later in this tutorial, we’ll be training an autoencoder on the MNIST dataset. The MNIST dataset consists of digits that are 28×28 pixels with a single channel, implying that each digit is represented by 28 x 28 = 784 values. The autoencoder we’ll be training here will be able to compress those digits into a vector of only 16 values — a reduction of nearly 98%! So what can we do if an input data point is compressed into such a small vector? That’s where things get really interesting.
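As a concrete preview (my own sketch, not code from this post), this is what "caring about the encoder" looks like in practice: once the autoencoder below is trained, you keep only the encoder and use it to compress images. The encoder and testX names refer to objects we build later in this tutorial.

# hypothetical usage sketch -- assumes the `encoder` model and the
# preprocessed `testX` array defined later in this tutorial are in scope
latentVectors = encoder.predict(testX)
print(testX.shape)          # (10000, 28, 28, 1) -> 784 values per digit
print(latentVectors.shape)  # (10000, 16) -> just 16 values per digit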
What are applications of autoencoders?

Figure 3: Autoencoders are typically used for dimensionality reduction, denoising, and anomaly/outlier detection. Outside of computer vision, they are extremely useful for Natural Language Processing (NLP) and text comprehension. In this tutorial, we’ll use Python and Keras/TensorFlow to train a deep learning autoencoder. (image source)

Autoencoders are typically used for:

Dimensionality reduction (i.e., think PCA but more powerful/intelligent).
Denoising (ex., removing noise and preprocessing images to improve OCR accuracy).
Anomaly/outlier detection (ex., detecting mislabeled data points in a dataset or detecting when an input data point falls well outside our typical data distribution).

Outside of the computer vision field, you’ll see autoencoders applied to Natural Language Processing (NLP) and text comprehension problems, including understanding the semantic meaning of words, constructing word embeddings, and even text summarization.

How are autoencoders different from GANs?

If you’ve done any prior work with Generative Adversarial Networks (GANs), you might be wondering how autoencoders are different from GANs.
Both GANs and autoencoders are generative models; however, an autoencoder essentially learns an identity function via compression. The autoencoder accepts our input data, compresses it down to the latent-space representation, and then attempts to reconstruct the input using just the latent-space vector. Typically, the latent-space representation has far fewer dimensions than the original input data. GANs, on the other hand:

Accept a low-dimensional input.
Build a high-dimensional space from it.
Generate the final output, which is not part of the original training data but ideally passes as such.

Furthermore, GANs have an evolving loss landscape, which autoencoders do not. As a GAN is trained, the generative model generates “fake” images that are then mixed with actual “real” images — the discriminator model must then determine which images are “real” vs. “fake/generated”. As the generative model becomes better and better at generating fake images that can fool the discriminator, the loss landscape evolves and changes (this is one of the reasons why training GANs is so damn hard). While both GANs and autoencoders are generative models, most of their similarities end there.
Autoencoders cannot generate new, realistic data points that could be considered “passable” by humans. Instead, autoencoders are primarily used as a method to compress input data points into a latent-space representation. That latent-space representation can then be used for compression, denoising, anomaly detection, etc. For more details on the differences between GANs and autoencoders, I suggest giving this thread on Quora a read.

Configuring your development environment

To follow along with today’s tutorial on autoencoders, you should use TensorFlow 2.0. I have two installation tutorials for TF 2.0 and associated packages to bring your development system up to speed:

How to install TensorFlow 2.0 on Ubuntu (Ubuntu 18.04 OS; CPU and optional NVIDIA GPU)
How to install TensorFlow 2.0 on macOS (Catalina and Mojave OSes)

Please note: PyImageSearch does not support Windows — refer to our FAQ.

Project structure

Be sure to grab the “Downloads” associated with the blog post. From there, extract the .zip and inspect the file/folder layout:

$ tree --dirsfirst
.
├── pyimagesearch
│   ├── __init__.py
│   └── convautoencoder.py
├── output.png
├── plot.png
└── train_conv_autoencoder.py
1 directory, 5 files

We will review two Python scripts today:

convautoencoder.py: Contains the ConvAutoencoder class and build method required to assemble our neural network with tf.keras.
train_conv_autoencoder.py: Trains a digits autoencoder on the MNIST dataset.
Once the autoencoder is trained, we’ll loop over a number of output examples and write them to disk for later inspection. Our training script produces both a plot.png figure and an output.png image. The output image contains side-by-side samples of the original versus reconstructed image. In the next section, we will implement our autoencoder with the high-level Keras API built into TensorFlow.

Implementing a convolutional autoencoder with Keras and TensorFlow

Before we can train an autoencoder, we first need to implement the autoencoder architecture itself. To do so, we’ll be using Keras and TensorFlow. My implementation loosely follows Francois Chollet’s own implementation of autoencoders on the official Keras blog. My primary contribution here is to go into a bit more detail regarding the implementation itself. Open up the convautoencoder.py file in your project structure, and insert the following code:

# import the necessary packages
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import Conv2DTranspose
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Reshape
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras import backend as K
import numpy as np

class ConvAutoencoder:
	@staticmethod
	def build(width, height, depth, filters=(32, 64), latentDim=16):
		# initialize the input shape to be "channels last" along with
		# the channels dimension itself
		inputShape = (height, width, depth)
		chanDim = -1

We begin with a selection of imports from tf.keras and one from NumPy. If you don’t have TensorFlow 2.0 installed on your system, refer to the “Configuring your development environment” section above.
Our ConvAutoencoder class contains one static method, build, which accepts five parameters:

width: Width of the input image in pixels.
height: Height of the input image in pixels.
depth: Number of channels (i.e., depth) of the input volume.
filters: A tuple that contains the set of filters for convolution operations. By default, this parameter includes both 32 and 64 filters.
latentDim: The number of neurons in our fully-connected (Dense) latent vector. By default, if this parameter is not passed, the value is set to 16.

From there, we initialize the inputShape and channel dimension (we assume “channels last” ordering). We’re now ready to initialize our input and begin adding layers to our network:

		# define the input to the encoder
		inputs = Input(shape=inputShape)
		x = inputs

		# loop over the number of filters
		for f in filters:
			# apply a CONV => RELU => BN operation
			x = Conv2D(f, (3, 3), strides=2, padding="same")(x)
			x = LeakyReLU(alpha=0.2)(x)
			x = BatchNormalization(axis=chanDim)(x)

		# flatten the network and then construct our latent vector
		volumeSize = K.int_shape(x)
		x = Flatten()(x)
		latent = Dense(latentDim)(x)

		# build the encoder model
		encoder = Model(inputs, latent, name="encoder")

Lines 25 and 26 define the input to the encoder. With our inputs ready, we loop over the number of filters and add our sets of CONV => LeakyReLU => BN layers (Lines 29-33).
Next, we flatten the network and construct our latent vector (Lines 36-38) — this is our actual latent-space representation (i.e., the “compressed” data representation). We then build our encoder model (Line 41). If we were to do a print(encoder.summary()) of the encoder, assuming 28×28 single channel images (depth=1), filters=(32, 64), and latentDim=16, we would have the following:

Model: "encoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 28, 28, 1)]       0
_________________________________________________________________
conv2d (Conv2D)              (None, 14, 14, 32)        320
_________________________________________________________________
leaky_re_lu (LeakyReLU)      (None, 14, 14, 32)        0
_________________________________________________________________
batch_normalization (BatchNo (None, 14, 14, 32)        128
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 7, 7, 64)          18496
_________________________________________________________________
leaky_re_lu_1 (LeakyReLU)    (None, 7, 7, 64)          0
_________________________________________________________________
batch_normalization_1 (Batch (None, 7, 7, 64)          256
_________________________________________________________________
flatten (Flatten)            (None, 3136)              0
_________________________________________________________________
dense (Dense)                (None, 16)                50192
=================================================================
Total params: 69,392
Trainable params: 69,200
Non-trainable params: 192
_________________________________________________________________

Here we can observe that:

Our encoder begins by accepting a 28x28x1 input volume.
We then apply two rounds of CONV => RELU => BN, each with a 3×3 strided convolution. The strided convolution allows us to reduce the spatial dimensions of our volumes.
After applying our final batch normalization, we end up with a 7x7x64 volume, which is flattened into a 3136-dim vector.
Our fully-connected layer (i.e., the Dense layer) serves as our latent-space representation.
Next, let’s learn how the decoder model can take this latent-space representation and reconstruct the original input image:

		# start building the decoder model which will accept the
		# output of the encoder as its inputs
		latentInputs = Input(shape=(latentDim,))
		x = Dense(np.prod(volumeSize[1:]))(latentInputs)
		x = Reshape((volumeSize[1], volumeSize[2], volumeSize[3]))(x)

		# loop over our number of filters again, but this time in
		# reverse order
		for f in filters[::-1]:
			# apply a CONV_TRANSPOSE => RELU => BN operation
			x = Conv2DTranspose(f, (3, 3), strides=2,
				padding="same")(x)
			x = LeakyReLU(alpha=0.2)(x)
			x = BatchNormalization(axis=chanDim)(x)

To start building the decoder model, we:

Construct the input to the decoder model based on the latentDim (Lines 45 and 46).
Accept the 1D latentDim vector and turn it into a 2D volume so that we can start applying convolution (Line 47).
Loop over the number of filters, this time in reverse order, while applying a CONV_TRANSPOSE => RELU => BN operation (Lines 51-56). Transposed convolution is used to increase the spatial dimensions (i.e., width and height) of the volume.

Let’s finish creating our autoencoder:

		# apply a single CONV_TRANSPOSE layer used to recover the
		# original depth of the image
		x = Conv2DTranspose(depth, (3, 3), padding="same")(x)
		outputs = Activation("sigmoid")(x)

		# build the decoder model
		decoder = Model(latentInputs, outputs, name="decoder")

		# our autoencoder is the encoder + decoder
		autoencoder = Model(inputs, decoder(encoder(inputs)),
			name="autoencoder")

		# return a 3-tuple of the encoder, decoder, and autoencoder
		return (encoder, decoder, autoencoder)

Wrapping up, we:

Apply a final CONV_TRANSPOSE layer used to recover the original channel depth of the image (1 channel for single channel/grayscale images or 3 channels for RGB images) on Line 60.
Apply a sigmoid activation function (Line 61).
Build the decoder model, and add it with the encoder to the autoencoder (Lines 64-68). The autoencoder becomes the encoder + decoder.
Return a 3-tuple of the encoder, decoder, and autoencoder.

If we were to complete a print(decoder.summary()) operation here, we would have the following:

Model: "decoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_2 (InputLayer)         [(None, 16)]              0
_________________________________________________________________
dense_1 (Dense)              (None, 3136)              53312
_________________________________________________________________
reshape (Reshape)            (None, 7, 7, 64)          0
_________________________________________________________________
conv2d_transpose (Conv2DTran (None, 14, 14, 64)        36928
_________________________________________________________________
leaky_re_lu_2 (LeakyReLU)    (None, 14, 14, 64)        0
_________________________________________________________________
batch_normalization_2 (Batch (None, 14, 14, 64)        256
_________________________________________________________________
conv2d_transpose_1 (Conv2DTr (None, 28, 28, 32)        18464
_________________________________________________________________
leaky_re_lu_3 (LeakyReLU)    (None, 28, 28, 32)        0
_________________________________________________________________
batch_normalization_3 (Batch (None, 28, 28, 32)        128
_________________________________________________________________
conv2d_transpose_2 (Conv2DTr (None, 28, 28, 1)         289
_________________________________________________________________
activation (Activation)      (None, 28, 28, 1)         0
=================================================================
Total params: 109,377
Trainable params: 109,185
Non-trainable params: 192
_________________________________________________________________

The decoder accepts our 16-dim latent representation from the encoder and then builds a new fully-connected layer of 3136-dim, which is the product of 7 x 7 x 64 = 3136. Using our new 3136-dim FC layer, we reshape it into a 3D volume of 7 x 7 x 64. From there we can start applying our CONV_TRANSPOSE => RELU => BN operations.
Unlike standard strided convolution, which is used to decrease volume size, our transposed convolution is used to increase volume size. Finally, a transposed convolution layer is applied to recover the original channel depth of the image. Since our images are grayscale, we learn a single filter, the output of which is a 28 x 28 x 1 volume (i.e., the dimensions of the original MNIST digit images). A print(autoencoder.summary()) operation shows the composed nature of the encoder and decoder:

Model: "autoencoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 28, 28, 1)]       0
_________________________________________________________________
encoder (Model)              (None, 16)                69392
_________________________________________________________________
decoder (Model)              (None, 28, 28, 1)         109377
=================================================================
Total params: 178,769
Trainable params: 178,385
Non-trainable params: 384
_________________________________________________________________

The input to our encoder is the original 28 x 28 x 1 images from the MNIST dataset. Our encoder then learns a 16-dim latent-space representation of the data, after which the decoder reconstructs the original 28 x 28 x 1 images. In the next section, we will develop our script to train our autoencoder.

Creating the convolutional autoencoder training script

With our autoencoder architecture implemented, let’s move on to the training script. Open up the train_conv_autoencoder.py in your project directory structure, and insert the following code:

# set the matplotlib backend so figures can be saved in the background
import matplotlib
matplotlib.use("Agg")

# import the necessary packages
from pyimagesearch.convautoencoder import ConvAutoencoder
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt
import numpy as np
import argparse
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-s", "--samples", type=int, default=8,
	help="# of samples to visualize when decoding")
ap.add_argument("-o", "--output", type=str, default="output.png",
	help="path to output visualization file")
ap.add_argument("-p", "--plot", type=str, default="plot.png",
	help="path to output plot file")
args = vars(ap.parse_args())

On Lines 2-12, we handle our imports. We’ll use the "Agg" backend of matplotlib so that we can export our training plot to disk.
We need our custom ConvAutoencoder architecture class which we implemented in the previous section. We will use the Adam optimizer as we train on the MNIST benchmarking dataset. For visualization, we’ll employ OpenCV. Next, we’ll parse three command line arguments, all of which are optional:

--samples: The number of output samples for visualization. By default this value is set to 8.
--output: The path to the output visualization image. We’ll name our visualization output.png by default.
--plot: The path to our matplotlib output plot. A default of plot.png is assigned if this argument is not provided in the terminal.

Now we’ll set a couple of hyperparameters and preprocess our MNIST dataset:

# initialize the number of epochs to train for and batch size
EPOCHS = 25
BS = 32

# load the MNIST dataset
print("[INFO] loading MNIST dataset...")
((trainX, _), (testX, _)) = mnist.load_data()

# add a channel dimension to every image in the dataset, then scale
# the pixel intensities to the range [0, 1]
trainX = np.expand_dims(trainX, axis=-1)
testX = np.expand_dims(testX, axis=-1)
trainX = trainX.astype("float32") / 255.0
testX = testX.astype("float32") / 255.0

Lines 25 and 26 initialize the number of training epochs and batch size. From there, we’ll work with our MNIST dataset.
TensorFlow/Keras has a handy load_data method that we can call on mnist to grab the data (Line 30). From there, Lines 34-37 (1) add a channel dimension to every image in the dataset and (2) scale the pixel intensities to the range [0, 1]. We’re now ready to build and train our autoencoder:

# construct our convolutional autoencoder
print("[INFO] building autoencoder...")
(encoder, decoder, autoencoder) = ConvAutoencoder.build(28, 28, 1)
opt = Adam(lr=1e-3)
autoencoder.compile(loss="mse", optimizer=opt)

# train the convolutional autoencoder
H = autoencoder.fit(
	trainX, trainX,
	validation_data=(testX, testX),
	epochs=EPOCHS,
	batch_size=BS)

To build the convolutional autoencoder, we call the build method on our ConvAutoencoder class and pass the necessary arguments (Line 41). Recall that this results in the (encoder, decoder, autoencoder) tuple — going forward in this script, we only need the autoencoder for training and predictions. We initialize our Adam optimizer with an initial learning rate of 1e-3 and compile the model with mean-squared error loss (Lines 42 and 43). From there, we fit (train) our autoencoder on the MNIST data (Lines 46-50).
Let’s go ahead and plot our training history:

# construct a plot that plots and saves the training history
N = np.arange(0, EPOCHS)
plt.style.use("ggplot")
plt.figure()
plt.plot(N, H.history["loss"], label="train_loss")
plt.plot(N, H.history["val_loss"], label="val_loss")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
plt.savefig(args["plot"])

And from there, we’ll make predictions on our testing set:

# use the convolutional autoencoder to make predictions on the
# testing images, then initialize our list of output images
print("[INFO] making predictions...")
decoded = autoencoder.predict(testX)
outputs = None

# loop over our number of output samples
for i in range(0, args["samples"]):
	# grab the original image and reconstructed image
	original = (testX[i] * 255).astype("uint8")
	recon = (decoded[i] * 255).astype("uint8")

	# stack the original and reconstructed image side-by-side
	output = np.hstack([original, recon])

	# if the outputs array is empty, initialize it as the current
	# side-by-side image display
	if outputs is None:
		outputs = output

	# otherwise, vertically stack the outputs
	else:
		outputs = np.vstack([outputs, output])

# save the outputs image to disk
cv2.imwrite(args["output"], outputs)

Line 67 makes predictions on the test set. We then loop over the number of --samples passed as a command line argument (Line 71) so that we can build our visualization. Inside the loop, we:

Grab both the original and reconstructed images (Lines 73 and 74).
Stack the pair of images side-by-side (Line 77).
Stack the pairs vertically (Lines 81-86).

Finally, we output the visualization image to disk (Line 89). In the next section, we’ll see the results of our hard work.

Training the convolutional autoencoder with Keras and TensorFlow

We are now ready to see our autoencoder in action! Make sure you use the “Downloads” section of this post to download the source code — from there you can execute the following command:

$ python train_conv_autoencoder.py
[INFO] loading MNIST dataset...
[INFO] building autoencoder...
Train on 60000 samples, validate on 10000 samples
Epoch 1/25
60000/60000 [==============================] - 68s 1ms/sample - loss: 0.0188 - val_loss: 0.0108
Epoch 2/25
60000/60000 [==============================] - 68s 1ms/sample - loss: 0.0104 - val_loss: 0.0096
Epoch 3/25
60000/60000 [==============================] - 68s 1ms/sample - loss: 0.0094 - val_loss: 0.0086
Epoch 4/25
60000/60000 [==============================] - 68s 1ms/sample - loss: 0.0088 - val_loss: 0.0086
Epoch 5/25
60000/60000 [==============================] - 68s 1ms/sample - loss: 0.0084 - val_loss: 0.0080
...
Epoch 20/25
60000/60000 [==============================] - 83s 1ms/sample - loss: 0.0067 - val_loss: 0.0069
Epoch 21/25
60000/60000 [==============================] - 83s 1ms/sample - loss: 0.0066 - val_loss: 0.0069
Epoch 22/25
60000/60000 [==============================] - 83s 1ms/sample - loss: 0.0066 - val_loss: 0.0068
Epoch 23/25
60000/60000 [==============================] - 83s 1ms/sample - loss: 0.0066 - val_loss: 0.0068
Epoch 24/25
60000/60000 [==============================] - 83s 1ms/sample - loss: 0.0065 - val_loss: 0.0067
Epoch 25/25
60000/60000 [==============================] - 83s 1ms/sample - loss: 0.0065 - val_loss: 0.0068
[INFO] making predictions...

Figure 4: Our deep learning autoencoder training history plot was generated with matplotlib. Our autoencoder was trained with Keras, TensorFlow, and Deep Learning.

As Figure 4 and the terminal output demonstrate, our training process was able to minimize the reconstruction loss of the autoencoder. But how well did the autoencoder do at reconstructing the training data? The answer is: very well.

Figure 5: A sample of Keras/TensorFlow deep learning autoencoder inputs (left) and outputs (right).

In Figure 5, on the left is our original image while the right is the reconstructed digit predicted by the autoencoder.
As you can see, the digits are nearly indistinguishable from each other! At this point, you may be thinking: Great … so I can train a network to reconstruct my original image. But you said that what really matters is the internal latent-space representation. How can I access that representation, and how can I use it for denoising and anomaly/outlier detection? Those are great questions — I’ll be addressing both in my next two tutorials here on PyImageSearch, so stay tuned!

What's next? We recommend PyImageSearch University.

Course information: 84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024 ★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled

I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations?
Or requires a degree in computer science? That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught. If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. Inside PyImageSearch University you'll find:

✓ 84 courses on essential computer vision, deep learning, and OpenCV topics
✓ 84 Certificates of Completion
✓ 114+ hours of on-demand video
✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques
✓ Pre-configured Jupyter Notebooks in Google Colab
✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!)
✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch
✓ Easy one-click downloads for code, datasets, pre-trained models, etc.
✓ Access on mobile, laptop, desktop, etc.

Click here to join PyImageSearch University

Summary

In this tutorial, you learned the fundamentals of autoencoders. Autoencoders are generative models that consist of an encoder and a decoder model. When trained, the encoder takes an input data point and learns a latent-space representation of the data. This latent-space representation is a compressed representation of the data, allowing the model to represent it in far fewer parameters than the original data. The decoder model then takes the latent-space representation and attempts to reconstruct the original data point from it. When trained end-to-end, the encoder and decoder function in a composed manner. In practice, we use autoencoders for dimensionality reduction, compression, denoising, and anomaly detection. After we understood the fundamentals, we implemented a convolutional autoencoder using Keras and TensorFlow. In next week’s tutorial, we’ll learn how to use a convolutional autoencoder for denoising.
To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), just enter your email address in the form below!
https://pyimagesearch.com/2020/02/24/denoising-autoencoders-with-keras-tensorflow-and-deep-learning/
Click here to download the source code to this post

In this tutorial, you will learn how to use autoencoders to denoise images using Keras, TensorFlow, and Deep Learning. Today’s tutorial is part two in our three-part series on the applications of autoencoders:

Autoencoders with Keras, TensorFlow, and Deep Learning (last week’s tutorial)
Denoising autoencoders with Keras, TensorFlow, and Deep Learning (today’s tutorial)
Anomaly detection with Keras, TensorFlow, and Deep Learning (next week’s tutorial)

Last week you learned the fundamentals of autoencoders, including how to train your very first autoencoder using Keras and TensorFlow — however, the real-world application of that tutorial was admittedly a bit limited due to the fact that we needed to lay the groundwork. Today, we’re going to take a deeper dive and learn how autoencoders can be used for denoising, also called “noise reduction,” which is the process of removing noise from a signal. The term “noise” here could be:

Produced by a faulty or poor quality image sensor
Random variations in brightness or color
Quantization noise
Artifacts due to JPEG compression
Image perturbations produced by an image scanner or threshold post-processing
Poor paper quality (crinkles and folds) when trying to perform OCR

From the perspective of image processing and computer vision, you should think of noise as anything that could be removed by a really good pre-processing filter. Our goal is to train an autoencoder to perform such pre-processing — we call such models denoising autoencoders. To learn how to train a denoising autoencoder with Keras and TensorFlow, just keep reading! Looking for the source code to this post? Jump Right To The Downloads Section

Denoising autoencoders with Keras, TensorFlow, and Deep Learning

In the first part of this tutorial, we’ll discuss what denoising autoencoders are and why we may want to use them. From there I’ll show you how to implement and train a denoising autoencoder using Keras and TensorFlow. We’ll wrap up this tutorial by examining the results of our denoising autoencoder.
What are denoising autoencoders, and why would we use them?

Figure 1: A denoising autoencoder processes a noisy image, generating a clean image on the output side. Can we learn how to train denoising autoencoders with Keras, TensorFlow, and Deep Learning today in less than an hour? (image source)

Denoising autoencoders are an extension of simple autoencoders; however, it’s worth noting that denoising autoencoders were not originally meant to automatically denoise an image. Instead, the denoising autoencoder procedure was invented to help:

The hidden layers of the autoencoder learn more robust filters
Reduce the risk of overfitting in the autoencoder
Prevent the autoencoder from learning a simple identity function

In Vincent et al.’s 2008 ICML paper, Extracting and Composing Robust Features with Denoising Autoencoders, the authors found that they could improve the robustness of their internal layers (i.e., latent-space representation) by purposely introducing noise to their signal. Noise was stochastically (i.e., randomly) added to the input data, and then the autoencoder was trained to recover the original, nonperturbed signal.
From an image processing standpoint, we can train an autoencoder to perform automatic image pre-processing for us. A great example would be pre-processing an image to improve the accuracy of an optical character recognition (OCR) algorithm. If you’ve ever applied OCR before, you know how just a little bit of the wrong type of noise (ex., printer ink smudges, poor image quality during the scan, etc.) can dramatically hurt the performance of your OCR method. Using denoising autoencoders, we can automatically pre-process the image, improve the quality, and therefore increase the accuracy of the downstream OCR algorithm. If you’re interested in learning more about denoising autoencoders, I would strongly encourage you to read this article as well as Bengio and Delalleau’s paper, Justifying and Generalizing Contrastive Divergence. For more information on denoising autoencoders for OCR-related preprocessing, take a look at this dataset on Kaggle.

Configuring your development environment

To follow along with today’s tutorial on autoencoders, you should use TensorFlow 2.0. I have two installation tutorials for TF 2.0 and associated packages to bring your development system up to speed:

How to install TensorFlow 2.0 on Ubuntu (Ubuntu 18.04 OS; CPU and optional NVIDIA GPU)
How to install TensorFlow 2.0 on macOS (Catalina and Mojave OSes)

Please note: PyImageSearch does not support Windows — refer to our FAQ.

Project structure

Go ahead and grab the .zip from the “Downloads” section of today’s tutorial. From there, extract the zip. You’ll be presented with the following project layout:
$ tree --dirsfirst
.
├── pyimagesearch
│   ├── __init__.py
│   └── convautoencoder.py
├── output.png
├── plot.png
└── train_denoising_autoencoder.py
1 directory, 5 files

The pyimagesearch module contains the ConvAutoencoder class. We reviewed this class in our previous tutorial; however, we’ll briefly walk through it again today. The heart of today’s tutorial is inside the train_denoising_autoencoder.py Python training script. This script is different from the previous tutorial in one main way: We will purposely add noise to our MNIST training images using a random normal distribution centered at 0.5 with a standard deviation of 0.5. The purpose of adding noise to our training data is so that our autoencoder can effectively remove noise from an input image (i.e., denoise).

Implementing our denoising autoencoder with Keras and TensorFlow

The denoising autoencoder we’ll be implementing today is essentially identical to the one we implemented in last week’s tutorial on autoencoder fundamentals. We’ll review the model architecture here today as a matter of completeness, but make sure you refer to last week’s guide for more details. With that said, open up the convautoencoder.py file in your project structure, and insert the following code:

# import the necessary packages
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import Conv2DTranspose
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Reshape
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras import backend as K
import numpy as np

class ConvAutoencoder:
	@staticmethod
	def build(width, height, depth, filters=(32, 64), latentDim=16):
		# initialize the input shape to be "channels last" along with
		# the channels dimension itself
		inputShape = (height, width, depth)
		chanDim = -1

		# define the input to the encoder
		inputs = Input(shape=inputShape)
		x = inputs

Imports include tf.keras and NumPy. Our ConvAutoencoder class contains one static method, build, which accepts five parameters:

width: Width of the input image in pixels
height: Height of the input image in pixels
depth: Number of channels (i.e., depth) of the input volume
filters: A tuple that contains the set of filters for convolution operations. By default, if this parameter is not provided by the caller, we’ll add two sets of CONV => RELU => BN with 32 and 64 filters
latentDim: The number of neurons in our fully-connected (Dense) latent vector.
By default, if this parameter is not passed, the value is set to 16. From there, we initialize the inputShape and define the Input to the encoder (Lines 25 and 26). Let’s begin building our encoder’s filters:

		# loop over the number of filters
		for f in filters:
			# apply a CONV => RELU => BN operation
			x = Conv2D(f, (3, 3), strides=2, padding="same")(x)
			x = LeakyReLU(alpha=0.2)(x)
			x = BatchNormalization(axis=chanDim)(x)

		# flatten the network and then construct our latent vector
		volumeSize = K.int_shape(x)
		x = Flatten()(x)
		latent = Dense(latentDim)(x)

		# build the encoder model
		encoder = Model(inputs, latent, name="encoder")

Using Keras’ functional API, we loop over the number of filters and add our sets of CONV => RELU => BN layers (Lines 29-33). We then flatten the network and construct our latent vector (Lines 36-38). The latent-space representation is the compressed form of our data. From there, we build the encoder portion of our autoencoder (Line 41). Next, we’ll use our latent-space representation to reconstruct the original input image:

		# start building the decoder model which will accept the
		# output of the encoder as its inputs
		latentInputs = Input(shape=(latentDim,))
		x = Dense(np.prod(volumeSize[1:]))(latentInputs)
		x = Reshape((volumeSize[1], volumeSize[2], volumeSize[3]))(x)

		# loop over our number of filters again, but this time in
		# reverse order
		for f in filters[::-1]:
			# apply a CONV_TRANSPOSE => RELU => BN operation
			x = Conv2DTranspose(f, (3, 3), strides=2,
				padding="same")(x)
			x = LeakyReLU(alpha=0.2)(x)
			x = BatchNormalization(axis=chanDim)(x)

		# apply a single CONV_TRANSPOSE layer used to recover the
		# original depth of the image
		x = Conv2DTranspose(depth, (3, 3), padding="same")(x)
		outputs = Activation("sigmoid")(x)

		# build the decoder model
		decoder = Model(latentInputs, outputs, name="decoder")

		# our autoencoder is the encoder + decoder
		autoencoder = Model(inputs, decoder(encoder(inputs)),
			name="autoencoder")

		# return a 3-tuple of the encoder, decoder, and autoencoder
		return (encoder, decoder, autoencoder)

Here, we take the latent input and use a fully-connected layer to reshape it into a 3D volume (i.e., the image data). We loop over our filters again, but in reverse order, applying CONV_TRANSPOSE => RELU => BN layers, where the CONV_TRANSPOSE layer’s purpose is to increase the volume size. Finally, we build the decoder model and construct the autoencoder. Remember, the concept of an autoencoder — discussed last week — consists of both the encoder and decoder components.
Implementing the denoising autoencoder training script

Let’s now implement the training script used to:

Add stochastic noise to the MNIST dataset
Train a denoising autoencoder on the noisy dataset
Automatically recover the original digits from the noise

My implementation follows Francois Chollet’s own implementation of denoising autoencoders on the official Keras blog — my primary contribution here is to go into a bit more detail regarding the implementation itself. Open up the train_denoising_autoencoder.py file, and insert the following code:

# set the matplotlib backend so figures can be saved in the background
import matplotlib
matplotlib.use("Agg")

# import the necessary packages
from pyimagesearch.convautoencoder import ConvAutoencoder
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt
import numpy as np
import argparse
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-s", "--samples", type=int, default=8,
	help="# of samples to visualize when decoding")
ap.add_argument("-o", "--output", type=str, default="output.png",
	help="path to output visualization file")
ap.add_argument("-p", "--plot", type=str, default="plot.png",
	help="path to output plot file")
args = vars(ap.parse_args())

On Lines 2-12 we handle our imports. We’ll use the "Agg" backend of matplotlib so that we can export our training plot to disk. Our custom ConvAutoencoder class implemented in the previous section contains the autoencoder architecture itself. Modeling after Chollet’s example, we will also use the Adam optimizer. Our script accepts three optional command line arguments:

--samples: The number of output samples for visualization. By default this value is set to 8.
--output: The path to the output visualization image. We’ll name our visualization output.png by default.
--plot: The path to our matplotlib output plot. A default of plot.png is assigned if this argument is not provided in the terminal.

Next, we initialize hyperparameters and preprocess our MNIST dataset:

# initialize the number of epochs to train for and batch size
EPOCHS = 25
BS = 32

# load the MNIST dataset
print("[INFO] loading MNIST dataset...")
((trainX, _), (testX, _)) = mnist.load_data()

# add a channel dimension to every image in the dataset, then scale
# the pixel intensities to the range [0, 1]
trainX = np.expand_dims(trainX, axis=-1)
testX = np.expand_dims(testX, axis=-1)
trainX = trainX.astype("float32") / 255.0
testX = testX.astype("float32") / 255.0

Our training epochs will be 25 and we’ll use a batch size of 32. We go ahead and grab the MNIST dataset (Line 30) while Lines 34-37 (1) add a channel dimension to every image in the dataset, and (2) scale the pixel intensities to the range [0, 1]. At this point, we’ll deviate from last week’s tutorial:

# sample noise from a random normal distribution centered at 0.5 (since
# our images lie in the range [0, 1]) and a standard deviation of 0.5
trainNoise = np.random.normal(loc=0.5, scale=0.5, size=trainX.shape)
testNoise = np.random.normal(loc=0.5, scale=0.5, size=testX.shape)
trainXNoisy = np.clip(trainX + trainNoise, 0, 1)
testXNoisy = np.clip(testX + testNoise, 0, 1)

To add random noise to the MNIST digits, we use NumPy’s random normal distribution centered at 0.5 with a standard deviation of 0.5 (Lines 41-44). The following figure shows an example of how our images look before (left) adding noise and after (right):

Figure 2: Prior to training a denoising autoencoder on MNIST with Keras, TensorFlow, and Deep Learning, we take input images (left) and deliberately add noise to them (right).

As you can see, our images are quite corrupted — recovering the original digit from the noise will require a powerful model. Luckily, our denoising autoencoder will be up to the task:

# construct our convolutional autoencoder
print("[INFO] building autoencoder...")
(encoder, decoder, autoencoder) = ConvAutoencoder.build(28, 28, 1)
opt = Adam(lr=1e-3)
autoencoder.compile(loss="mse", optimizer=opt)

# train the convolutional autoencoder
H = autoencoder.fit(
	trainXNoisy, trainX,
	validation_data=(testXNoisy, testX),
	epochs=EPOCHS,
	batch_size=BS)

# construct a plot that plots and saves the training history
N = np.arange(0, EPOCHS)
plt.style.use("ggplot")
plt.figure()
plt.plot(N, H.history["loss"], label="train_loss")
plt.plot(N, H.history["val_loss"], label="val_loss")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
plt.savefig(args["plot"])

Line 48 builds our denoising autoencoder, passing the necessary arguments. Using our Adam optimizer with an initial learning rate of 1e-3, we go ahead and compile the autoencoder with mean-squared error loss (Lines 49 and 50). Training is launched via Lines 53-57.
Using the training history data, H, Lines 60-69 plot the loss, saving the resulting figure to disk. Let’s write a quick loop that will help us visualize the denoising autoencoder results:

# use the convolutional autoencoder to make predictions on the
# testing images, then initialize our list of output images
print("[INFO] making predictions...")
decoded = autoencoder.predict(testXNoisy)
outputs = None

# loop over our number of output samples
for i in range(0, args["samples"]):
	# grab the original image and reconstructed image
	original = (testXNoisy[i] * 255).astype("uint8")
	recon = (decoded[i] * 255).astype("uint8")

	# stack the original and reconstructed image side-by-side
	output = np.hstack([original, recon])

	# if the outputs array is empty, initialize it as the current
	# side-by-side image display
	if outputs is None:
		outputs = output

	# otherwise, vertically stack the outputs
	else:
		outputs = np.vstack([outputs, output])

# save the outputs image to disk
cv2.imwrite(args["output"], outputs)

We go ahead and use our trained autoencoder to remove the noise from the images in our testing set (Line 74). We then grab N --samples worth of original and reconstructed data, and put together a visualization montage (Lines 78-93). Line 96 writes the visualization figure to disk for inspection.

Training the denoising autoencoder with Keras and TensorFlow

To train your denoising autoencoder, make sure you use the “Downloads” section of this tutorial to download the source code. From there, open up a terminal and execute the following command:

$ python train_denoising_autoencoder.py --output output_denoising.png \
	--plot plot_denoising.png
[INFO] loading MNIST dataset...
[INFO] building autoencoder...
Train on 60000 samples, validate on 10000 samples
Epoch 1/25
60000/60000 [==============================] - 85s 1ms/sample - loss: 0.0285 - val_loss: 0.0191
Epoch 2/25
60000/60000 [==============================] - 83s 1ms/sample - loss: 0.0187 - val_loss: 0.0211
Epoch 3/25
60000/60000 [==============================] - 84s 1ms/sample - loss: 0.0177 - val_loss: 0.0174
Epoch 4/25
60000/60000 [==============================] - 84s 1ms/sample - loss: 0.0171 - val_loss: 0.0170
Epoch 5/25
60000/60000 [==============================] - 83s 1ms/sample - loss: 0.0167 - val_loss: 0.0177
...
Epoch 21/25
60000/60000 [==============================] - 67s 1ms/sample - loss: 0.0146 - val_loss: 0.0161
Epoch 22/25
60000/60000 [==============================] - 67s 1ms/sample - loss: 0.0145 - val_loss: 0.0164
Epoch 23/25
60000/60000 [==============================] - 67s 1ms/sample - loss: 0.0145 - val_loss: 0.0158
Epoch 24/25
60000/60000 [==============================] - 67s 1ms/sample - loss: 0.0144 - val_loss: 0.0155
Epoch 25/25
60000/60000 [==============================] - 66s 1ms/sample - loss: 0.0144 - val_loss: 0.0157
[INFO] making predictions...

Figure 3: Example results from training a deep learning denoising autoencoder with Keras and TensorFlow on the MNIST benchmarking dataset. Inside our training script, we added random noise with NumPy to the MNIST images.

Training the denoising autoencoder on my iMac Pro with a 3 GHz Intel Xeon W processor took ~32.20 minutes. As Figure 3 shows, our training process was stable and shows no signs of overfitting.

Denoising autoencoder results

Our denoising autoencoder has been successfully trained, but how did it perform when removing the noise we added to the MNIST dataset?
To answer that question, take a look at Figure 4:

Figure 4: The results of removing noise from MNIST images using a denoising autoencoder trained with Keras, TensorFlow, and Deep Learning.

On the left we have the original MNIST digits that we added noise to, while on the right we have the output of the denoising autoencoder — we can clearly see that the denoising autoencoder was able to recover the original signal (i.e., digit) from the image while removing the noise. More advanced denoising autoencoders can be used to automatically pre-process images to facilitate better OCR accuracy.
Summary

In this tutorial, you learned about denoising autoencoders, which, as the name suggests, are models that are used to remove noise from a signal.
In the context of computer vision, denoising autoencoders can be seen as very powerful filters that can be used for automatic pre-processing. For example, a denoising autoencoder could be used to automatically pre-process an image, improving its quality for an OCR algorithm and thereby increasing OCR accuracy. To demonstrate a denoising autoencoder in action, we added noise to the MNIST dataset, greatly degrading the image quality to the point where any model would struggle to correctly classify the digit in the image. Using our denoising autoencoder, we were able to remove the noise from the image, recovering the original signal (i.e., the digit). In next week’s tutorial, you’ll learn about another real-world application of autoencoders — anomaly and outlier detection. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), just enter your email address in the form below!
https://pyimagesearch.com/2020/03/02/anomaly-detection-with-keras-tensorflow-and-deep-learning/
Click here to download the source code to this post

In this tutorial, you will learn how to perform anomaly and outlier detection using autoencoders, Keras, and TensorFlow. Back in January, I showed you how to use standard machine learning models to perform anomaly detection and outlier detection in image datasets. Our approach worked well enough, but it begged the question: Could deep learning be used to improve the accuracy of our anomaly detector? To answer such a question would require us to dive further down the rabbit hole and answer questions such as:

What model architecture should we use?
Are some deep neural network architectures better than others for anomaly/outlier detection?
How do we handle the class imbalance problem?
What if we wanted to train an unsupervised anomaly detector?

This tutorial addresses all of these questions, and by the end of it, you’ll be able to perform anomaly detection in your own image datasets using deep learning. To learn how to perform anomaly detection with Keras, TensorFlow, and Deep Learning, just keep reading! Looking for the source code to this post? Jump Right To The Downloads Section

Anomaly detection with Keras, TensorFlow, and Deep Learning

In the first part of this tutorial, we’ll discuss anomaly detection, including:

What makes anomaly detection so challenging
Why traditional deep learning methods are not sufficient for anomaly/outlier detection
How autoencoders can be used for anomaly detection

From there, we’ll implement an autoencoder architecture that can be used for anomaly detection using Keras and TensorFlow. We’ll then train our autoencoder model in an unsupervised fashion. Once the autoencoder is trained, I’ll show you how you can use the autoencoder to identify outliers/anomalies in both your training/testing set as well as in new images that are not part of your dataset splits.
What is anomaly detection?

Figure 1: In this tutorial, we will detect anomalies with Keras, TensorFlow, and Deep Learning (image source).

To quote my intro to anomaly detection tutorial: Anomalies are defined as events that deviate from the standard, happen rarely, and don’t follow the rest of the “pattern.” Examples of anomalies include:

Large dips and spikes in the stock market due to world events
Defective items in a factory/on a conveyor belt
Contaminated samples in a lab

Depending on your exact use case and application, anomalies typically occur only 0.001-1% of the time — that’s an incredibly small fraction of the time. The problem is only compounded by the fact that there is a massive imbalance in our class labels. By definition, anomalies will rarely occur, so the majority of our data points will be of valid events. To detect anomalies, machine learning researchers have created algorithms such as Isolation Forests, One-class SVMs, Elliptic Envelopes, and Local Outlier Factor to help detect such events; however, all of these methods are rooted in traditional machine learning. What about deep learning? Can deep learning be used for anomaly detection as well? The answer is yes — but you need to frame the problem correctly.
How can deep learning and autoencoders be used for anomaly detection?

As I discussed in my intro to autoencoder tutorial, autoencoders are a type of unsupervised neural network that can:

Accept an input set of data
Internally compress the data into a latent-space representation
Reconstruct the input data from the latent representation

To accomplish this task, an autoencoder uses two components: an encoder and a decoder. The encoder accepts the input data and compresses it into the latent-space representation. The decoder then attempts to reconstruct the input data from the latent space. When trained in an end-to-end fashion, the hidden layers of the network learn filters that are robust and even capable of denoising the input data. However, what makes autoencoders so special from an anomaly detection perspective is the reconstruction loss. When we train an autoencoder, we typically measure the mean-squared-error (MSE) between:

The input image
The reconstructed image from the autoencoder

The lower the loss, the better a job the autoencoder is doing at reconstructing the image. Let’s now suppose that we trained an autoencoder on the entirety of the MNIST dataset:

Figure 2: Samples from the MNIST handwritten digit benchmarking dataset. We will use MNIST to develop an unsupervised autoencoder with Keras, TensorFlow, and deep learning.

We then present the autoencoder with a digit and tell it to reconstruct it:

Figure 3: Reconstructing a digit from MNIST with autoencoders, Keras, TensorFlow, and deep learning.
We would expect the autoencoder to do a really good job at reconstructing the digit, as that is exactly what the autoencoder was trained to do — and if we were to look at the MSE between the input image and the reconstructed image, we would find that it’s quite low. Let’s now suppose we presented our autoencoder with a photo of an elephant and asked it to reconstruct it: Figure 4: When we attempt to reconstruct an image with an autoencoder, but the result has a high MSE, we have an outlier. In this tutorial, we will detect anomalies with autoencoders, Keras, and deep learning. Since the autoencoder has never seen an elephant before, and more to the point, was never trained to reconstruct an elephant, our MSE will be very high. If the MSE of the reconstruction is high, then we likely have an outlier. Alon Agmon does a great job explaining this concept in more detail in this article. Configuring your development environment To follow along with today’s tutorial on anomaly detection, I recommend you use TensorFlow 2.0. To configure your system and install TensorFlow 2.0, you can follow either my Ubuntu or macOS guide: How to install TensorFlow 2.0 on Ubuntu (Ubuntu 18.04 OS; CPU and optional NVIDIA GPU) How to install TensorFlow 2.0 on macOS (Catalina and Mojave OSes) Please note: PyImageSearch does not support Windows — refer to our FAQ. Project structure Go ahead and grab the code from the “Downloads” section of this post. Once you’ve unzipped the project, you’ll be presented with the following structure:

$ tree --dirsfirst
.
├── output
│   ├── autoencoder.model
│   └── images.pickle
├── pyimagesearch
│   ├── __init__.py
│   └── convautoencoder.py
├── find_anomalies.py
├── plot.png
├── recon_vis.png
└── train_unsupervised_autoencoder.py

2 directories, 8 files

Our convautoencoder.py file contains the ConvAutoencoder class which is responsible for building a Keras/TensorFlow autoencoder implementation. We will train an autoencoder with unlabeled data inside train_unsupervised_autoencoder.py, resulting in the following outputs:
autoencoder.model: The serialized, trained autoencoder model.
images.pickle: A serialized set of unlabeled images for us to find anomalies in.
plot.png: A plot consisting of our training loss curves.
recon_vis.png: A visualization figure that compares samples of ground-truth digit images versus each reconstructed image.
From there, we will develop an anomaly detector inside find_anomalies.py and apply our autoencoder to reconstruct data and find anomalies. Implementing our autoencoder for anomaly detection with Keras and TensorFlow The first step to anomaly detection with deep learning is to implement our autoencoder script. Our convolutional autoencoder implementation is identical to the ones from our introduction to autoencoders post as well as our denoising autoencoders tutorial; however, we’ll review it here as a matter of completeness — if you want additional details on autoencoders, be sure to refer to those posts. Open up convautoencoder.py and inspect it:

# import the necessary packages
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import Conv2DTranspose
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Reshape
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras import backend as K
import numpy as np

class ConvAutoencoder:
    @staticmethod
    def build(width, height, depth, filters=(32, 64), latentDim=16):
        # initialize the input shape to be "channels last" along with
        # the channels dimension itself
        inputShape = (height, width, depth)
        chanDim = -1

        # define the input to the encoder
        inputs = Input(shape=inputShape)
        x = inputs

        # loop over the number of filters
        for f in filters:
            # apply a CONV => RELU => BN operation
            x = Conv2D(f, (3, 3), strides=2, padding="same")(x)
            x = LeakyReLU(alpha=0.2)(x)
            x = BatchNormalization(axis=chanDim)(x)

        # flatten the network and then construct our latent vector
        volumeSize = K.int_shape(x)
        x = Flatten()(x)
        latent = Dense(latentDim)(x)

        # build the encoder model
        encoder = Model(inputs, latent, name="encoder")

Imports include tf.keras and NumPy. Our ConvAutoencoder class contains one static method, build, which accepts five parameters:
width: Width of the input images.
height: Height of the input images.
depth: Number of channels in the images.
filters: Number of filters the encoder and decoder will learn, respectively.
latentDim: Dimensionality of the latent-space representation.
The Input is then defined for the encoder at which point we use Keras’ functional API to loop over our filters and add our sets of CONV => LeakyReLU => BN layers. We then flatten the network and construct our latent vector. The latent-space representation is the compressed form of our data. In the above code block we used the encoder portion of our autoencoder to construct our latent-space representation — this same representation will now be used to reconstruct the original input image:

        # start building the decoder model which will accept the
        # output of the encoder as its inputs
        latentInputs = Input(shape=(latentDim,))
        x = Dense(np.prod(volumeSize[1:]))(latentInputs)
        x = Reshape((volumeSize[1], volumeSize[2], volumeSize[3]))(x)

        # loop over our number of filters again, but this time in
        # reverse order
        for f in filters[::-1]:
            # apply a CONV_TRANSPOSE => RELU => BN operation
            x = Conv2DTranspose(f, (3, 3), strides=2, padding="same")(x)
            x = LeakyReLU(alpha=0.2)(x)
            x = BatchNormalization(axis=chanDim)(x)

        # apply a single CONV_TRANSPOSE layer used to recover the
        # original depth of the image
        x = Conv2DTranspose(depth, (3, 3), padding="same")(x)
        outputs = Activation("sigmoid")(x)

        # build the decoder model
        decoder = Model(latentInputs, outputs, name="decoder")

        # our autoencoder is the encoder + decoder
        autoencoder = Model(inputs, decoder(encoder(inputs)),
            name="autoencoder")

        # return a 3-tuple of the encoder, decoder, and autoencoder
        return (encoder, decoder, autoencoder)

Here, we take the latent input and use a fully-connected layer to reshape it into a 3D volume (i.e., the image data). We loop over our filters once again, but this time in reverse order, applying a series of CONV_TRANSPOSE => RELU => BN layers. The CONV_TRANSPOSE layer’s purpose is to increase the volume size back to the original image spatial dimensions. Finally, we build the decoder model and construct the autoencoder.
Recall that an autoencoder consists of both the encoder and decoder components. We then return a 3-tuple of the encoder, decoder, and autoencoder. Again, if you need further details on the implementation of our autoencoder, be sure to review the aforementioned tutorials. Implementing the anomaly detection training script With our autoencoder implemented, we are now ready to move on to our training script. Open up the train_unsupervised_autoencoder.py file in your project directory, and insert the following code:

# set the matplotlib backend so figures can be saved in the background
import matplotlib
matplotlib.use("Agg")

# import the necessary packages
from pyimagesearch.convautoencoder import ConvAutoencoder
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.datasets import mnist
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import numpy as np
import argparse
import random
import pickle
import cv2

Imports include our implementation of ConvAutoencoder, the mnist dataset, and a few imports from TensorFlow, scikit-learn, and OpenCV. Given that we’re performing unsupervised learning, next we’ll define a function to build an unsupervised dataset:

def build_unsupervised_dataset(data, labels, validLabel=1,
    anomalyLabel=3, contam=0.01, seed=42):
    # grab all indexes of the supplied class label that are *truly*
    # that particular label, then grab the indexes of the image
    # labels that will serve as our "anomalies"
    validIdxs = np.where(labels == validLabel)[0]
    anomalyIdxs = np.where(labels == anomalyLabel)[0]

    # randomly shuffle both sets of indexes
    random.shuffle(validIdxs)
    random.shuffle(anomalyIdxs)

    # compute the total number of anomaly data points to select
    i = int(len(validIdxs) * contam)
    anomalyIdxs = anomalyIdxs[:i]

    # use NumPy array indexing to extract both the valid images and
    # "anomaly" images
    validImages = data[validIdxs]
    anomalyImages = data[anomalyIdxs]

    # stack the valid images and anomaly images together to form a
    # single data matrix and then shuffle the rows
    images = np.vstack([validImages, anomalyImages])
    np.random.seed(seed)
    np.random.shuffle(images)

    # return the set of images
    return images

Our build_unsupervised_dataset function accepts a labeled dataset (i.e., for supervised learning) and turns it into an unlabeled dataset (i.e., for unsupervised learning). The function accepts a set of input data and labels, including the valid label and the anomaly label. Given that validLabel=1 by default, only MNIST numeral ones are selected; however, we’ll also contaminate our dataset with a set of numeral three images (anomalyLabel=3). The contam percentage is used to help us sample and select anomaly datapoints. From our set of labels (and using the valid label), we generate a list of validIdxs (Line 22).
The exact same process is applied to grab anomalyIdxs (Line 23). We then proceed to randomly shuffle the indices (Lines 26 and 27). Given our anomaly contamination percentage, we reduce our set of anomalyIdxs (Lines 30 and 31). Lines 35 and 36 then build two sets of images: (1) valid images and (2) anomaly images. Each of these lists is stacked to form a single data matrix and then shuffled and returned (Lines 40-45). Notice that the labels have been intentionally discarded, effectively making our dataset ready for unsupervised learning. Our next function will help us visualize predictions made by our unsupervised autoencoder:

def visualize_predictions(decoded, gt, samples=10):
    # initialize our list of output images
    outputs = None

    # loop over our number of output samples
    for i in range(0, samples):
        # grab the original image and reconstructed image
        original = (gt[i] * 255).astype("uint8")
        recon = (decoded[i] * 255).astype("uint8")

        # stack the original and reconstructed image side-by-side
        output = np.hstack([original, recon])

        # if the outputs array is empty, initialize it as the current
        # side-by-side image display
        if outputs is None:
            outputs = output

        # otherwise, vertically stack the outputs
        else:
            outputs = np.vstack([outputs, output])

    # return the output images
    return outputs

The visualize_predictions function is a helper method used to visualize the input images to our autoencoder as well as their corresponding output reconstructions. Both the original and reconstructed (recon) images will be arranged side-by-side and stacked vertically according to the samples parameter. This code should look familiar if you read either my introduction to autoencoders guide or denoising autoencoder tutorial. Now that we’ve defined our imports and necessary functions, we’ll go ahead and parse our command line arguments:
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", type=str, required=True,
    help="path to output dataset file")
ap.add_argument("-m", "--model", type=str, required=True,
    help="path to output trained autoencoder")
ap.add_argument("-v", "--vis", type=str, default="recon_vis.png",
    help="path to output reconstruction visualization file")
ap.add_argument("-p", "--plot", type=str, default="plot.png",
    help="path to output plot file")
args = vars(ap.parse_args())

Our script accepts four command line arguments, all of which are output file paths:
--dataset: Defines the path to our output dataset file
--model: Specifies the path to our output trained autoencoder
--vis: An optional argument that specifies the output visualization file path. By default, I’ve named this file recon_vis.png; however, you are welcome to override it with a different path and filename
--plot: Optionally indicates the path to our output training history plot. By default, the plot will be named plot.png in the current working directory
We’re now ready to prepare our data for training:

# initialize the number of epochs to train for, initial learning rate,
# and batch size
EPOCHS = 20
INIT_LR = 1e-3
BS = 32

# load the MNIST dataset
print("[INFO] loading MNIST dataset...")
((trainX, trainY), (testX, testY)) = mnist.load_data()

# build our unsupervised dataset of images with a small amount of
# contamination (i.e., anomalies) added into it
print("[INFO] creating unsupervised dataset...")
images = build_unsupervised_dataset(trainX, trainY, validLabel=1,
    anomalyLabel=3, contam=0.01)

# add a channel dimension to every image in the dataset, then scale
# the pixel intensities to the range [0, 1]
images = np.expand_dims(images, axis=-1)
images = images.astype("float32") / 255.0

# construct the training and testing split
(trainX, testX) = train_test_split(images, test_size=0.2,
    random_state=42)

First, we initialize three hyperparameters: (1) the number of training epochs, (2) the initial learning rate, and (3) our batch size (Lines 86-88). Line 92 loads MNIST while Lines 97 and 98 build our unsupervised dataset with 1% contamination (i.e., anomalies) added into it. From here forward, our dataset does not have labels, and our autoencoder will attempt to learn patterns without prior knowledge of what the data is. Now that we’ve built our unsupervised dataset, it consists of 99% numeral ones and 1% numeral threes (i.e., anomalies/outliers). From there, we preprocess our dataset by adding a channel dimension and scaling pixel intensities to the range [0, 1] (Lines 102 and 103). Using scikit-learn’s convenience function, we then split data into 80% training and 20% testing sets (Lines 106 and 107). Our data is ready to go, so let’s build our autoencoder and train it:

# construct our convolutional autoencoder
print("[INFO] building autoencoder...")
(encoder, decoder, autoencoder) = ConvAutoencoder.build(28, 28, 1)
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
autoencoder.compile(loss="mse", optimizer=opt)

# train the convolutional autoencoder
H = autoencoder.fit(
    trainX, trainX,
    validation_data=(testX, testX),
    epochs=EPOCHS,
    batch_size=BS)

# use the convolutional autoencoder to make predictions on the
# testing images, construct the visualization, and then save it
# to disk
print("[INFO] making predictions...")
decoded = autoencoder.predict(testX)
vis = visualize_predictions(decoded, testX)
cv2.imwrite(args["vis"], vis)

We construct our autoencoder with the Adam optimizer and compile it with mean-squared-error loss (Lines 111-113).
Lines 116-120 launch the training procedure with TensorFlow/Keras.
Our autoencoder will attempt to learn how to reconstruct the original input images. Images that cannot be easily reconstructed will have a large loss value. Once training is complete, we’ll need a way to evaluate and visually inspect our results. Luckily, we have our visualize_predictions convenience function in our back pocket. Lines 126-128 make predictions on the test set, build a visualization image from the results, and write the output image to disk. From here, we’ll wrap up:

# construct a plot that plots and saves the training history
N = np.arange(0, EPOCHS)
plt.style.use("ggplot")
plt.figure()
plt.plot(N, H.history["loss"], label="train_loss")
plt.plot(N, H.history["val_loss"], label="val_loss")
plt.title("Training Loss")
plt.xlabel("Epoch #")
plt.ylabel("Loss")
plt.legend(loc="lower left")
plt.savefig(args["plot"])

# serialize the image data to disk
print("[INFO] saving image data...")
f = open(args["dataset"], "wb")
f.write(pickle.dumps(images))
f.close()

# serialize the autoencoder model to disk
print("[INFO] saving autoencoder...")
autoencoder.save(args["model"], save_format="h5")

To close out, we: Plot our training history loss curves and export the resulting plot to disk (Lines 131-140) Serialize our unsupervised, sampled MNIST dataset to disk as a Python pickle file so that we can use it to find anomalies in the find_anomalies.py script (Lines 144-146) Save our trained autoencoder (Line 150) Fantastic job developing the unsupervised autoencoder training script. Training our anomaly detector using Keras and TensorFlow To train our anomaly detector, make sure you use the “Downloads” section of this tutorial to download the source code. From there, fire up a terminal and execute the following command:

$ python train_unsupervised_autoencoder.py \
    --dataset output/images.pickle \
    --model output/autoencoder.model
[INFO] loading MNIST dataset...
[INFO] creating unsupervised dataset...
[INFO] building autoencoder...
Train on 5447 samples, validate on 1362 samples
Epoch 1/20
5447/5447 [==============================] - 7s 1ms/sample - loss: 0.0421 - val_loss: 0.0405
Epoch 2/20
5447/5447 [==============================] - 6s 1ms/sample - loss: 0.0129 - val_loss: 0.0306
Epoch 3/20
5447/5447 [==============================] - 6s 1ms/sample - loss: 0.0045 - val_loss: 0.0088
Epoch 4/20
5447/5447 [==============================] - 6s 1ms/sample - loss: 0.0033 - val_loss: 0.0037
Epoch 5/20
5447/5447 [==============================] - 6s 1ms/sample - loss: 0.0029 - val_loss: 0.0027
...
Epoch 16/20
5447/5447 [==============================] - 6s 1ms/sample - loss: 0.0018 - val_loss: 0.0020
Epoch 17/20
5447/5447 [==============================] - 6s 1ms/sample - loss: 0.0018 - val_loss: 0.0020
Epoch 18/20
5447/5447 [==============================] - 6s 1ms/sample - loss: 0.0017 - val_loss: 0.0021
Epoch 19/20
5447/5447 [==============================] - 6s 1ms/sample - loss: 0.0018 - val_loss: 0.0021
Epoch 20/20
5447/5447 [==============================] - 6s 1ms/sample - loss: 0.0016 - val_loss: 0.0019
[INFO] making predictions...
[INFO] saving image data...
[INFO] saving autoencoder...

Figure 5: In this plot we have our loss curves from training an autoencoder with Keras, TensorFlow, and deep learning. Training the entire model took ~2 minutes on my 3GHz Intel Xeon processor, and as our training history plot in Figure 5 shows, our training is quite stable.
Furthermore, we can look at our output recon_vis.png visualization file to see that our autoencoder has learned to correctly reconstruct the 1 digit from the MNIST dataset: Figure 6: Reconstructing a handwritten digit using a deep learning autoencoder trained with Keras and TensorFlow.
Before proceeding to the next section, you should verify that both the autoencoder.model and images.pickle files have been correctly saved to your output directory:

$ ls output/
autoencoder.model	images.pickle

You’ll be needing these files in the next section. Implementing our script to find anomalies/outliers using the autoencoder Our goal is to now: Take our pre-trained autoencoder Use it to make predictions (i.e., reconstruct the digits in our dataset) Measure the MSE between the original input images and reconstructions Compute quantiles for the MSEs, and use these quantiles to identify outliers and anomalies Open up the find_anomalies.py file, and let’s get started:

# import the necessary packages
from tensorflow.keras.models import load_model
import numpy as np
import argparse
import pickle
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", type=str, required=True,
    help="path to input image dataset file")
ap.add_argument("-m", "--model", type=str, required=True,
    help="path to trained autoencoder")
ap.add_argument("-q", "--quantile", type=float, default=0.999,
    help="q-th quantile used to identify outliers")
args = vars(ap.parse_args())

We’ll begin with imports and command line arguments. The load_model import from tf.keras enables us to load the serialized autoencoder model from disk. Command line arguments include:
--dataset: The path to our input dataset pickle file that was exported to disk as a result of our unsupervised training script
--model: Our trained autoencoder path
--quantile: The q-th quantile to identify outliers
From here, we’ll (1) load our autoencoder and data, and (2) make predictions:

# load the model and image data from disk
print("[INFO] loading autoencoder and image data...")
autoencoder = load_model(args["model"])
images = pickle.loads(open(args["dataset"], "rb").read())

# make predictions on our image data and initialize our list of
# reconstruction errors
decoded = autoencoder.predict(images)
errors = []

# loop over all original images and their corresponding
# reconstructions
for (image, recon) in zip(images, decoded):
    # compute the mean squared error between the ground-truth image
    # and the reconstructed image, then add it to our list of errors
    mse = np.mean((image - recon) ** 2)
    errors.append(mse)

Lines 20 and 21 load the autoencoder and images data from disk. We then pass the set of images through our autoencoder to make predictions and attempt to reconstruct the inputs (Line 25). Looping over the original and reconstructed images, Lines 30-34 compute the mean squared error between the ground-truth and reconstructed image, building a list of errors. From here, we’ll detect the anomalies:

# compute the q-th quantile of the errors which serves as our
# threshold to identify anomalies -- any data point that our model
# reconstructed with > threshold error will be marked as an outlier
thresh = np.quantile(errors, args["quantile"])
idxs = np.where(np.array(errors) >= thresh)[0]
print("[INFO] mse threshold: {}".format(thresh))
print("[INFO] {} outliers found".format(len(idxs)))

Line 39 computes the q-th quantile of the errors — this value will serve as our threshold to detect outliers. Measuring each error against the thresh, Line 40 determines the indices of all anomalies in the data. Thus, any MSE with a value >= thresh is considered an outlier.
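To make the thresholding step concrete, here is a small standalone example — separate from find_anomalies.py, using made-up error values — of how np.quantile behaves:

# a minimal sketch with hypothetical MSE values, using the same
# quantile-based thresholding as find_anomalies.py
import numpy as np

errors = np.array([0.001, 0.002, 0.002, 0.003, 0.050])

# the 0.8 quantile: 80% of errors fall at or below this value
thresh = np.quantile(errors, 0.8)
idxs = np.where(errors >= thresh)[0]

print("threshold:", thresh)      # ~0.012 via linear interpolation
print("outlier indices:", idxs)  # [4] -- the 0.050 error stands out

Notice that the quantile value directly controls how aggressive the detector is: a lower quantile flags more images as outliers, while a value close to 1.0 flags only the most extreme reconstruction errors.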
Next, we’ll loop over anomaly indices in our dataset:

# initialize the outputs array
outputs = None

# loop over the indexes of images with a high mean squared error term
for i in idxs:
    # grab the original image and reconstructed image
    original = (images[i] * 255).astype("uint8")
    recon = (decoded[i] * 255).astype("uint8")

    # stack the original and reconstructed image side-by-side
    output = np.hstack([original, recon])

    # if the outputs array is empty, initialize it as the current
    # side-by-side image display
    if outputs is None:
        outputs = output

    # otherwise, vertically stack the outputs
    else:
        outputs = np.vstack([outputs, output])

# show the output visualization
cv2.imshow("Output", outputs)
cv2.waitKey(0)

Inside the loop, we arrange each original and recon image side-by-side, vertically stacking all results as an outputs image. Lines 66 and 67 display the resulting image. Anomaly detection with deep learning results We are now ready to detect anomalies in our dataset using deep learning and our trained Keras/TensorFlow model. Start by making sure you’ve used the “Downloads” section of this tutorial to download the source code — from there you can execute the following command to detect anomalies in our dataset:

$ python find_anomalies.py --dataset output/images.pickle \
    --model output/autoencoder.model
[INFO] loading autoencoder and image data...
[INFO] mse threshold: 0.02863757349550724
[INFO] 7 outliers found

With an MSE threshold of ~0.0286, which corresponds to the 99.9% quantile, our autoencoder was able to find seven outliers, five of which are correctly labeled as such: Figure 7: Shown are anomalies that have been detected from reconstructing data with a Keras-based autoencoder. Despite the fact that the autoencoder was only trained on 1% of all 3 digits in the MNIST dataset (67 total samples), the autoencoder does a surprisingly good job at reconstructing them, given the limited data — but we can see that the MSE for these reconstructions was higher than the rest. Furthermore, the 1 digits that were incorrectly labeled as outliers could be considered suspicious as well. Deep learning practitioners can use autoencoders to spot outliers in their datasets even if the image was correctly labeled! Images that are correctly labeled but demonstrate a problem for a deep neural network architecture should be indicative of a subclass of images that are worth exploring more — autoencoders can help you spot these outlier subclasses. My autoencoder anomaly detection accuracy is not good enough. What should I do?
Figure 8: Anomaly detection with unsupervised deep learning models is an active area of research and is far from solved. (image source: Figure 4 of Deep Learning for Anomaly Detection: A Survey by Chalapathy and Chawla) Unsupervised learning, and specifically anomaly/outlier detection, is far from a solved area of machine learning, deep learning, and computer vision — there is no off-the-shelf solution for anomaly detection that is 100% correct. I would recommend you read the 2019 survey paper, Deep Learning for Anomaly Detection: A Survey, by Chalapathy and Chawla for more information on the current state-of-the-art in deep learning-based anomaly detection. While promising, keep in mind that the field is rapidly evolving — anomaly/outlier detection is far from a solved problem.
Summary In this tutorial, you learned how to perform anomaly and outlier detection using Keras, TensorFlow, and Deep Learning. Traditional classification architectures are not sufficient for anomaly detection because: They are not meant to be used in an unsupervised manner They struggle to handle severe class imbalance And therefore, they struggle to correctly recall the outliers Autoencoders, on the other hand: Are naturally suited for unsupervised problems Learn to both encode and reconstruct input images Can detect outliers by measuring the error between the input image and the reconstructed image We trained our autoencoder on the MNIST dataset in an unsupervised fashion by removing the class labels, grabbing all samples with a label of 1, and then contaminating the dataset with 1% of the 3 samples. As our results demonstrated, our autoencoder was able to pick out many of the 3 digits that were used to “contaminate” our 1’s. If you enjoyed this tutorial on deep learning-based anomaly detection, be sure to let me know in the comments! Your feedback helps guide me on what tutorials to write in the future.
https://pyimagesearch.com/2020/03/09/grad-cam-visualize-class-activation-maps-with-keras-tensorflow-and-deep-learning/
In this tutorial, you will learn how to visualize class activation maps for debugging deep neural networks using an algorithm called Grad-CAM. We’ll then implement Grad-CAM using Keras and TensorFlow. While deep learning has facilitated unprecedented accuracy in image classification, object detection, and image segmentation, one of the biggest problems with deep learning models is model interpretability, a core component in model understanding and model debugging. In practice, deep learning models are treated as “black box” methods, and many times we have no reasonable idea as to: Where the network is “looking” in the input image Which series of neurons activated in the forward-pass during inference/prediction How the network arrived at its final output That raises an interesting question — how can you trust the decisions of a model if you cannot properly validate how it arrived there? To help deep learning practitioners visually debug their models and properly understand where a model is “looking” in an image, Selvaraju et al. created Gradient-weighted Class Activation Mapping, or more simply, Grad-CAM: “Grad-CAM uses the gradients of any target concept (say logits for ‘dog’ or even a caption), flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept.” Using Grad-CAM, we can visually validate where our network is looking, verifying that it is indeed looking at the correct patterns in the image and activating around those patterns. If the network is not activating around the proper patterns/objects in the image, then we know: Our network hasn’t properly learned the underlying patterns in our dataset Our training procedure needs to be revisited We may need to collect additional data And most importantly, our model is not ready for deployment. Grad-CAM is a tool that should be in any deep learning practitioner’s toolbox — take the time to learn how to apply it now. To learn how to use Grad-CAM to debug your deep neural networks and visualize class activation maps with Keras and TensorFlow, just keep reading!
Grad-CAM: Visualize class activation maps with Keras, TensorFlow, and Deep Learning In the first part of this article, I’ll share with you a cautionary tale on the importance of debugging and visually verifying that your convolutional neural network is “looking” at the right places in an image. From there, we’ll dive into Grad-CAM, an algorithm that can be used to visualize the class activation maps of a Convolutional Neural Network (CNN), thereby allowing you to verify that your network is “looking” and “activating” at the correct locations. We’ll then implement Grad-CAM using Keras and TensorFlow. After our Grad-CAM implementation is complete, we’ll look at a few examples of visualizing class activation maps. Why would we want to visualize class activation maps in Convolutional Neural Networks? Figure 1: Deep learning models are often criticized for being “black box” algorithms where we don’t know what is going on under the hood. Using Gradient-weighted Class Activation Mapping (Grad-CAM), deep learning practitioners can visualize CNN layer activation heatmaps with Keras/TensorFlow. Visualizations like this allow us to peek inside the “black box,” helping engineers avoid the fate of the urban legend’s unfortunate AI team that built a cloud detector when the Army wanted a tank detector. (image source) There’s an old urban legend in the computer vision community that researchers use to caution budding machine learning practitioners against the dangers of deploying a model without first verifying that it’s working properly.
In this tale, the United States Army wanted to use neural networks to automatically detect camouflaged tanks. Researchers assigned to the project gathered a dataset of 200 images: 100 of which contained camouflaged tanks hiding in trees 100 of which did not contain tanks and were images solely of trees/forest The researchers took this dataset and then split it into an even 50/50 training and testing split, ensuring the class labels were balanced. A neural network was trained on the training set and obtained 100% accuracy. The researchers were incredibly pleased with this result and eagerly applied it to their testing data. Once again, they obtained 100% accuracy. The researchers called the Pentagon, excited with the news that they had just “solved” camouflaged tank detection. A few weeks later, the research team received a call from the Pentagon — they were extremely unhappy with the performance of the camouflaged tank detector. The neural network that performed so well in the lab was performing terribly in the field. Flummoxed, the researchers returned to their experiments, training model after model using different training procedures, only to arrive at the same result — 100% accuracy on both their training and testing sets. It wasn’t until one clever researcher visually inspected their dataset and finally realized the problem: Photos of camouflaged tanks were captured on sunny days Images of the forest (without tanks) were captured on cloudy days Essentially, the U.S. Army had created a multimillion dollar cloud detector.
While not true, this old urban legend does a good job illustrating the importance of model interpretability. Had the research team had an algorithm like Grad-CAM, they would have noticed that the model was activating around the presence/absence of clouds, and not the tanks themselves (hence their problem). Grad-CAM would have saved taxpayers millions of dollars, and, not to mention, allowed the researchers to save face with the Pentagon — after a catastrophe like that, it’s unlikely they would be getting any more work or research grants. What is Gradient-weighted Class Activation Mapping (Grad-CAM) and why would we use it? Figure 2: Visualizations of Grad-CAM activation maps applied to an image of a dog and cat with Keras, TensorFlow and deep learning. (image source: Figure 1 of Selvaraju et al.) As a deep learning practitioner, it’s your responsibility to ensure your model is performing correctly. One way you can do that is to debug your model and visually validate that it is “looking” and “activating” at the correct locations in an image. To help deep learning practitioners debug their networks, Selvaraju et al. published a novel paper entitled, Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization.
This method is: Easily implemented Works with nearly any Convolutional Neural Network architecture Can be used to visually debug where a network is looking in an image Grad-CAM works by (1) finding the final convolutional layer in the network and then (2) examining the gradient information flowing into that layer. The output of Grad-CAM is a heatmap visualization for a given class label (either the top, predicted label or an arbitrary label we select for debugging). We can use this heatmap to visually verify where in the image the CNN is looking. For more information on how Grad-CAM works, I would recommend you read Selvaraju et al.’s paper as well as this excellent article by Divyanshu Mishra (just note that their implementation will not work with TensorFlow 2.0 while ours does work with TF 2.0). Configuring your development environment In order to use our Grad-CAM implementation, we need to configure our system with a few software packages including: TensorFlow (2.0 recommended) OpenCV imutils Luckily, each of these packages is pip-installable. My personal recommendation is for you to follow one of my TensorFlow 2.0 installation tutorials: How to install TensorFlow 2.0 on Ubuntu (Ubuntu 18.04 OS; CPU and optional NVIDIA GPU) How to install TensorFlow 2.0 on macOS (Catalina and Mojave OSes) Please note: PyImageSearch does not support Windows — refer to our FAQ. While we do not support Windows, the code presented in this blog post will work on Windows with a properly configured system. Either of those tutorials will teach you how to configure a Python virtual environment with all the necessary software for this tutorial. I highly encourage virtual environments for Python work — industry considers them a best practice as well.
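Assuming your virtual environment is already active, installing the three packages is typically a one-liner (these are the usual PyPI package names; your platform may require slightly different variants, so treat this as a sketch rather than the post’s official install command):

$ pip install tensorflow opencv-contrib-python imutils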
If you’ve never worked with a Python virtual environment, you can learn more about them in this RealPython article. Once your system is configured, you are ready to follow the rest of this tutorial. Project structure Let’s inspect our tutorial’s project structure. But first, be sure to grab the code and example images from the “Downloads” section of this blog post. From there, extract the files, and use the tree command in your terminal:

$ tree --dirsfirst
.
├── images
│   ├── beagle.jpg
│   ├── soccer_ball.jpg
│   └── space_shuttle.jpg
├── pyimagesearch
│   ├── __init__.py
│   └── gradcam.py
└── apply_gradcam.py

2 directories, 6 files

The pyimagesearch module today contains the Grad-CAM implementation inside the GradCAM class. Our apply_gradcam.py driver script accepts any of our sample images/ and applies either a VGG16 or ResNet CNN trained on ImageNet to both (1) compute the Grad-CAM heatmap and (2) display the results in an OpenCV window. Let’s dive into the implementation. Implementing Grad-CAM using Keras and TensorFlow Despite the fact that the Grad-CAM algorithm is relatively straightforward, I struggled to find a TensorFlow 2.0-compatible implementation. The closest one I found was in tf-explain; however, that method could only be used when training — it could not be used after a model had been trained.
Therefore, I decided to create my own Grad-CAM implementation, basing my work on that of tf-explain, ensuring that my Grad-CAM implementation: Is compatible with Keras and TensorFlow 2.0 Could be used after a model was already trained And could also be easily modified to work as a callback during training (not covered in this post) Let’s dive into our Keras and TensorFlow Grad-CAM implementation. Open up the gradcam.py file in your project directory structure, and let’s get started:

# import the necessary packages
from tensorflow.keras.models import Model
import tensorflow as tf
import numpy as np
import cv2

class GradCAM:
    def __init__(self, model, classIdx, layerName=None):
        # store the model, the class index used to measure the class
        # activation map, and the layer to be used when visualizing
        # the class activation map
        self.model = model
        self.classIdx = classIdx
        self.layerName = layerName

        # if the layer name is None, attempt to automatically find
        # the target output layer
        if self.layerName is None:
            self.layerName = self.find_target_layer()

Before we define the GradCAM class, we need to import several packages. These include a TensorFlow Model for which we will construct our gradient model, NumPy for mathematical calculations, and OpenCV. Our GradCAM class and constructor are then defined beginning on Lines 7 and 8. The constructor accepts and stores: A TensorFlow model which we’ll use to compute a heatmap The classIdx — a specific class index that we’ll use to measure our class activation heatmap An optional CONV layerName of the model in case we want to visualize the heatmap of a specific layer of our CNN; otherwise, if a specific layer name is not provided, we will automatically infer the final CONV/POOL layer of the model architecture (Lines 18 and 19) Now that our constructor is defined and our class attributes are set, let’s define a method to find our target layer:

    def find_target_layer(self):
        # attempt to find the final convolutional layer in the network
        # by looping over the layers of the network in reverse order
        for layer in reversed(self.model.layers):
            # check to see if the layer has a 4D output
            if len(layer.output_shape) == 4:
                return layer.name

        # otherwise, we could not find a 4D layer so the GradCAM
        # algorithm cannot be applied
        raise ValueError("Could not find 4D layer. Cannot apply GradCAM.")

Our find_target_layer function loops over all layers in the network in reverse order, during which time it checks to see if the current layer has a 4D output (implying a CONV or POOL layer). If we find such a 4D output, we return that layer’s name (Lines 24-27). Otherwise, if the network does not have a 4D output, then we cannot apply Grad-CAM, at which point we raise a ValueError exception, causing our program to stop (Line 31). In our next function, we’ll compute our visualization heatmap, given an input image:

    def compute_heatmap(self, image, eps=1e-8):
        # construct our gradient model by supplying (1) the inputs
        # to our pre-trained model, (2) the output of the (presumably)
        # final 4D layer in the network, and (3) the output of the
        # softmax activations from the model
        gradModel = Model(
            inputs=[self.model.inputs],
            outputs=[self.model.get_layer(self.layerName).output,
                self.model.output])

Line 33 defines the compute_heatmap method, which is the heart of our Grad-CAM.
Let’s take this implementation one step at a time to learn how it works. First, our Grad-CAM requires that we pass in the image for which we want to visualize class activation mappings. From there, we construct our gradModel (Lines 38-41), which consists of both an input and an output: inputs: The standard image input to the model outputs: The outputs of the layerName class attribute used to generate the class activation mappings. Notice how we call get_layer on the model itself while also grabbing the output of that specific layer. Once our gradient model is constructed, we’ll proceed to compute gradients:

        # record operations for automatic differentiation
        with tf.GradientTape() as tape:
            # cast the image tensor to a float-32 data type, pass the
            # image through the gradient model, and grab the loss
            # associated with the specific class index
            inputs = tf.cast(image, tf.float32)
            (convOutputs, predictions) = gradModel(inputs)
            loss = predictions[:, self.classIdx]

        # use automatic differentiation to compute the gradients
        grads = tape.gradient(loss, convOutputs)

Going forward, we need to understand the definition of automatic differentiation and what TensorFlow calls a gradient tape. First, automatic differentiation is the process of computing a value and computing derivatives of that value (CS321 Toronto, Wikipedia). TensorFlow 2.0 provides an implementation of automatic differentiation through what it calls a gradient tape: “TensorFlow provides the tf.GradientTape API for automatic differentiation — computing the gradient of a computation with respect to its input variables. TensorFlow ‘records’ all operations executed inside the context of a tf.GradientTape onto a ‘tape’.
TensorFlow then uses that tape and the gradients associated with each recorded operation to compute the gradients of a ‘recorded’ computation using reverse mode differentiation” (TensorFlow’s Automatic differentiation and gradient tape tutorial). I suggest you spend some time on TensorFlow’s GradientTape documentation, specifically the gradient method, which we will now use. We start recording operations for automatic differentiation using GradientTape (Line 44). Line 48 accepts the input image and casts it to a 32-bit floating point type. A forward pass through the gradient model (Line 49) produces the convOutputs and predictions of the layerName layer. We then extract the loss associated with our predictions and the specific classIdx we are interested in (Line 50). Notice that our inference stops at the specific layer we are concerned about. We do not need to compute a full forward pass. Line 53 uses automatic differentiation to compute the gradients, which we will call grads. Given our gradients, we’ll now compute guided gradients:

        # compute the guided gradients
        castConvOutputs = tf.cast(convOutputs > 0, "float32")
        castGrads = tf.cast(grads > 0, "float32")
        guidedGrads = castConvOutputs * castGrads * grads

        # the convolution and guided gradients have a batch dimension
        # (which we don't need) so let's grab the volume itself and
        # discard the batch
        convOutputs = convOutputs[0]
        guidedGrads = guidedGrads[0]

First, we find all outputs and gradients with a value > 0 and cast them from a binary mask to a 32-bit floating point data type (Lines 56 and 57).
Then we compute the guided gradients by multiplication (Line 58). Keep in mind that both castConvOutputs and castGrads contain only values of 1’s and 0’s; therefore, during this multiplication, if any of castConvOutputs, castGrads, or grads is zero, then the output value for that particular index in the volume will be zero. Essentially, what we are doing here is keeping only the locations where both castConvOutputs and castGrads are positive and multiplying them by the gradients themselves — this operation will allow us to visualize where in the volume the network is activating later in the compute_heatmap function. The convolution and guided gradients have a batch dimension that we don’t need. Lines 63 and 64 grab the volume itself and discard the batch from convOutputs and guidedGrads. We’re closing in on our visualization heatmap; let’s continue:

        # compute the average of the gradient values, and using them
        # as weights, compute the ponderation of the filters with
        # respect to the weights
        weights = tf.reduce_mean(guidedGrads, axis=(0, 1))
        cam = tf.reduce_sum(tf.multiply(weights, convOutputs), axis=-1)

Line 69 computes the weights of the gradient values by computing the mean of the guidedGrads, which is essentially a 1 x 1 x N average across the volume. We then take those weights and sum the ponderated (i.e., mathematically weighted) maps into the Grad-CAM visualization (cam) on Line 70. Our next step is to generate the output heatmap associated with our image:

        # grab the spatial dimensions of the input image and resize
        # the output class activation map to match the input image
        # dimensions
        (w, h) = (image.shape[2], image.shape[1])
        heatmap = cv2.resize(cam.numpy(), (w, h))

        # normalize the heatmap such that all values lie in the range
        # [0, 1], scale the resulting values to the range [0, 255],
        # and then convert to an unsigned 8-bit integer
        numer = heatmap - np.min(heatmap)
        denom = (heatmap.max() - heatmap.min()) + eps
        heatmap = numer / denom
        heatmap = (heatmap * 255).astype("uint8")

        # return the resulting heatmap to the calling function
        return heatmap

We grab the original dimensions of the input image and scale our cam mapping to the original image dimensions (Lines 75 and 76). From there, we perform min-max rescaling to the range [0, 1] and then convert the pixel values back to the range [0, 255] (Lines 81-84). Finally, the last step of our compute_heatmap method returns the heatmap to the caller.
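For readers who prefer the math, the weighting and summing steps above mirror the standard Grad-CAM formulation from Selvaraju et al. (restated here from the paper; note that this implementation substitutes guided-gradient masking plus min-max normalization for the paper’s explicit ReLU):

\alpha_k^c = \frac{1}{Z} \sum_{i} \sum_{j} \frac{\partial y^c}{\partial A_{ij}^{k}}
\qquad
L_{\text{Grad-CAM}}^{c} = \text{ReLU}\!\left( \sum_{k} \alpha_k^c A^{k} \right)

Here A^k is the k-th feature map of the target convolutional layer, y^c is the score for class c, and Z is the number of spatial positions — the 1/Z double sum corresponds to the tf.reduce_mean over axes (0, 1), and the weighted sum over k corresponds to the tf.reduce_sum on the following line.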
Given that we have computed our heatmap, now we’d like a method to transparently overlay the Grad-CAM heatmap on our input image. Let’s go ahead and define such a utility:

    def overlay_heatmap(self, heatmap, image, alpha=0.5,
        colormap=cv2.COLORMAP_VIRIDIS):
        # apply the supplied color map to the heatmap and then
        # overlay the heatmap on the input image
        heatmap = cv2.applyColorMap(heatmap, colormap)
        output = cv2.addWeighted(image, alpha, heatmap, 1 - alpha, 0)

        # return a 2-tuple of the color mapped heatmap and the output,
        # overlaid image
        return (heatmap, output)

Our heatmap produced by the previous compute_heatmap function is a single channel, grayscale representation of where the network activated in the image — larger values correspond to a higher activation, smaller values to a lower activation. In order to overlay the heatmap, we first need to apply a pseudo/false-color to the heatmap. To do so, we will use OpenCV’s built-in VIRIDIS colormap (i.e., cv2.COLORMAP_VIRIDIS). The color range of VIRIDIS is shown below: Figure 3: The VIRIDIS color map will be applied to our Grad-CAM heatmap so that we can visualize deep learning activation maps with Keras and TensorFlow. (image source) Notice how darker input grayscale values will result in a dark purple RGB color, while lighter input grayscale values will map to a light green or yellow. Line 93 applies the VIRIDIS color map to the input heatmap. From there, we transparently overlay the heatmap on our output visualization (Line 94). The alpha channel is directly weighted into the BGR image (i.e., we are not adding an alpha channel to the image). To learn more about transparent overlays, I suggest you read my Transparent overlays with OpenCV tutorial.
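In equation form, the blend computed by cv2.addWeighted is plain alpha blending (this is standard OpenCV behavior, stated here for clarity rather than anything unique to this post):

\text{output} = \alpha \cdot \text{image} + (1 - \alpha) \cdot \text{heatmap} + \gamma

with \gamma = 0 in our call, so at \alpha = 0.5 each output pixel is an even per-channel mix of the original image and the colorized heatmap.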
Finally, Line 98 returns a 2-tuple of the heatmap (with the VIRIDIS colormap applied) along with the output visualization image. Creating the Grad-CAM visualization script With our Grad-CAM implementation complete, we can now move on to the driver script used to apply it for class activation mapping. As stated previously, our apply_gradcam.py driver script accepts an image and performs inference using either a VGG16 or ResNet CNN trained on ImageNet to both (1) compute the Grad-CAM heatmap and (2) display the results in an OpenCV window. You will be able to use this visualization script to actually “see” what is going on under the hood of your deep learning model, which many critics say is too much of a “black box,” especially when it comes to public safety concerns such as self-driving cars. Let’s dive in by opening up apply_gradcam.py in your project structure and inserting the following code:

# import the necessary packages
from pyimagesearch.gradcam import GradCAM
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.applications import imagenet_utils
import numpy as np
import argparse
import imutils
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to the input image")
ap.add_argument("-m", "--model", type=str, default="vgg",
    choices=("vgg", "resnet"),
    help="model to be used")
args = vars(ap.parse_args())

This script’s most notable imports are our GradCAM implementation, the ResNet/VGG architectures, and OpenCV. Our script accepts two command line arguments:
--image: The path to our input image which we seek to both classify and apply Grad-CAM to.
--model: The deep learning model we would like to apply. By default, we will use VGG16 with our Grad-CAM. Alternatively, you can specify ResNet50.
Your choices in this example are limited to vgg or resnet, entered directly in your terminal when you type the command, but you can modify this script to work with your own architectures as well. Given the --model argument, let’s load our model:

# initialize the model to be VGG16
Model = VGG16

# check to see if we are using ResNet
if args["model"] == "resnet":
    Model = ResNet50

# load the pre-trained CNN from disk
print("[INFO] loading model...")
model = Model(weights="imagenet")

Lines 23-31 load either VGG16 or ResNet50 with pre-trained ImageNet weights. Alternatively, you could load your own model; we’re using VGG16 and ResNet50 in our example for the sake of simplicity. Next, we’ll load and preprocess our --image:

# load the original image from disk (in OpenCV format) and then
# resize the image to its target dimensions
orig = cv2.imread(args["image"])
resized = cv2.resize(orig, (224, 224))

# load the input image from disk (in Keras/TensorFlow format) and
# preprocess it
image = load_img(args["image"], target_size=(224, 224))
image = img_to_array(image)
image = np.expand_dims(image, axis=0)
image = imagenet_utils.preprocess_input(image)

Given our input image (provided via command line argument), Line 35 loads it from disk in OpenCV BGR format while Line 40 loads the same image in TensorFlow/Keras RGB format. Our first preprocessing step resizes the image to 224×224 pixels (Line 36 and Line 40). If at this stage we inspect the .shape of our image, you’ll notice the shape of the NumPy array is (224, 224, 3) — each image is 224 pixels wide and 224 pixels tall, and has 3 channels (one for each of the Red, Green, and Blue channels, respectively). However, before we can pass our image through our CNN for classification, we need to expand the dimensions to be (1, 224, 224, 3). Why do we do this? When classifying images using Deep Learning and Convolutional Neural Networks, we often send images through the network in “batches” for efficiency. Thus, it’s actually quite rare to pass only one image at a time through the network — unless of course, you only have one image to classify and apply Grad-CAM to (like we do).
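If the shape bookkeeping feels abstract, this tiny standalone snippet — separate from the driver script — shows exactly what adding the batch dimension does:

# a minimal, standalone illustration of the batch dimension
import numpy as np

image = np.zeros((224, 224, 3), dtype="float32")  # one RGB image
print(image.shape)   # (224, 224, 3)

# add a leading batch dimension so the CNN sees a batch of one
batch = np.expand_dims(image, axis=0)
print(batch.shape)   # (1, 224, 224, 3)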
Thus, we convert the image to an array and add a batch dimension (Lines 41 and 42). We then preprocess the image on Line 43 by subtracting the mean RGB pixel intensity computed from the ImageNet dataset (i.e., mean subtraction). For the purposes of classification (i.e., not Grad-CAM yet), next we’ll make predictions on the image with our model:

# use the network to make predictions on the input image and find
# the class label index with the largest corresponding probability
preds = model.predict(image)
i = np.argmax(preds[0])

# decode the ImageNet predictions to obtain the human-readable label
decoded = imagenet_utils.decode_predictions(preds)
(imagenetID, label, prob) = decoded[0][0]
label = "{}: {:.2f}%".format(label, prob * 100)
print("[INFO] {}".format(label))

Line 47 performs inference, passing our image through our CNN. We then find the class label index with the largest corresponding probability (Lines 48-53). Alternatively, you could hardcode the class label index you want to visualize if you believe your model is struggling with a particular class label and you want to visualize the class activation mappings for it. At this point, we’re ready to compute our Grad-CAM heatmap visualization:

# initialize our gradient class activation map and build the heatmap
cam = GradCAM(model, i)
heatmap = cam.compute_heatmap(image)

# resize the resulting heatmap to the original input image dimensions
# and then overlay heatmap on top of the image
heatmap = cv2.resize(heatmap, (orig.shape[1], orig.shape[0]))
(heatmap, output) = cam.overlay_heatmap(heatmap, orig, alpha=0.5)

To apply Grad-CAM, we instantiate a GradCAM object with our model and highest probability class index, i (Line 57). Then we compute the heatmap — the heart of Grad-CAM lies in the compute_heatmap method (Line 58). We then scale/resize the heatmap to our original input dimensions and overlay the heatmap on our output image with 50% alpha transparency (Lines 62 and 63). Finally, we produce a stacked visualization consisting of (1) the original image, (2) the heatmap, and (3) the heatmap transparently overlaid on the original image with the predicted class label:

# draw the predicted label on the output image
cv2.rectangle(output, (0, 0), (340, 40), (0, 0, 0), -1)
cv2.putText(output, label, (10, 25), cv2.FONT_HERSHEY_SIMPLEX,
    0.8, (255, 255, 255), 2)

# display the original image and resulting heatmap and output image
# to our screen
output = np.vstack([orig, heatmap, output])
output = imutils.resize(output, height=700)
cv2.imshow("Output", output)
cv2.waitKey(0)

Lines 66-68 draw the predicted class label on the top of the output Grad-CAM image. We then stack our three images for visualization, resize to a known height that will fit on our screen, and display the result in an OpenCV window (Lines 72-75).
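Before we run the examples, one quick aside: pointing Grad-CAM at your own trained network instead of VGG16/ResNet50 is mostly a matter of loading it and letting GradCAM infer the final convolutional layer. Here is a hedged sketch — the my_model.h5 path is a hypothetical placeholder, and the random input stands in for a real preprocessed image:

# hedged sketch: "my_model.h5" is a hypothetical placeholder for your
# own serialized Keras model expecting 224x224 RGB inputs
from tensorflow.keras.models import load_model
from pyimagesearch.gradcam import GradCAM
import numpy as np

model = load_model("my_model.h5")

# stand-in for a real preprocessed image batch of shape (1, H, W, 3)
image = np.random.rand(1, 224, 224, 3).astype("float32")

# visualize the class activation map for the top predicted class
preds = model.predict(image)
i = np.argmax(preds[0])

cam = GradCAM(model, i)  # layerName is inferred automatically
heatmap = cam.compute_heatmap(image)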
In the next section, we’ll apply Grad-CAM to three sample images and see if the results meet our expectations. Visualizing class activation maps with Grad-CAM, Keras, and TensorFlow To use Grad-CAM to visualize class activation maps, make sure you use the “Downloads” section of this tutorial to download our Keras and TensorFlow Grad-CAM implementation. From there, open up a terminal, and execute the following command:

$ python apply_gradcam.py --image images/space_shuttle.jpg
[INFO] loading model...
[INFO] space_shuttle: 100.00%

Figure 4: Visualizing Grad-CAM activation maps with Keras, TensorFlow, and deep learning applied to a space shuttle photo. Here you can see that VGG16 has correctly classified our input image as space shuttle with 100% confidence — and by looking at our Grad-CAM output in Figure 4, we can see that VGG16 is correctly activating around patterns on the space shuttle, verifying that the network is behaving as expected. Let’s try another image:

$ python apply_gradcam.py --image images/beagle.jpg
[INFO] loading model...
[INFO] beagle: 73.94%

Figure 5: Applying Grad-CAM to visualize activation maps with Keras, TensorFlow, and deep learning applied to a photo of my beagle, Janie. This time, we are passing in an image of my dog, Janie. VGG16 correctly labels the image as beagle. Examining the Grad-CAM output in Figure 5, we can see that VGG16 is activating around the face of Janie, indicating that my dog’s face is an important characteristic used by the network to classify her as a beagle. Let’s examine one final image, this time using the ResNet architecture:

$ python apply_gradcam.py --image images/soccer_ball.jpg --model resnet
[INFO] loading model...
[INFO] soccer_ball: 99.97%

Figure 6: In this visualization, we have applied Grad-CAM with Keras, TensorFlow, and deep learning applied to a soccer ball photo. Our soccer ball is correctly classified with 99.97% confidence, but what is more interesting is the class activation visualization in Figure 6 — notice how our network is effectively ignoring the soccer field, activating only around the soccer ball.
This activation behavior verifies that our model has correctly learned the soccer ball class during training. After training your own CNNs, I would strongly encourage you to apply Grad-CAM and visually verify that your model is learning the patterns that you think it is learning (and not some other pattern that occurs by happenstance in your dataset). What's next? We recommend PyImageSearch University. Course information: 84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024 ★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science? That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms.
And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught. If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. Inside PyImageSearch University you'll find: ✓ 84 courses on essential computer vision, deep learning, and OpenCV topics ✓ 84 Certificates of Completion ✓ 114+ hours of on-demand video ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques ✓ Pre-configured Jupyter Notebooks in Google Colab ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) ✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch ✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University Summary In this tutorial, you learned about Grad-CAM, an algorithm that can be used to visualize class activation maps and debug your Convolutional Neural Networks, ensuring that your network is “looking” at the correct locations in an image. Keep in mind that even if your network is performing well on your training and testing sets, there is still a chance that your high accuracy is the result of accident or happenstance!
Your “high accuracy” model may be activating under patterns you did not notice or perceive in the image dataset. I would suggest you make a conscious effort to incorporate Grad-CAM into your own deep learning pipelines and visually verify that your model is performing correctly. The last thing you want to do is deploy a model that you think is performing well but in reality is activating under patterns irrelevant to the objects in images you want to recognize. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), just enter your email address in the form below!
Click here to download the source code to this post In this tutorial, you will learn how to automatically detect COVID-19 in a hand-created X-ray image dataset using Keras, TensorFlow, and Deep Learning. Like most people in the world right now, I’m genuinely concerned about COVID-19. I find myself constantly analyzing my personal health and wondering if/when I will contract it. The more I worry about it, the more it turns into a painful mind game of legitimate symptoms combined with hypochondria: I woke up this morning feeling a bit achy and run down. As I pulled myself out of bed, I noticed my nose was running (although it’s now reported that a runny nose is not a symptom of COVID-19). By the time I made it to the bathroom to grab a tissue, I was coughing as well. At first, I didn’t think much of it — I have pollen allergies and due to the warm weather on the eastern coast of the United States, spring has come early this year. My allergies were likely just acting up. But my symptoms didn’t improve throughout the day. I’m actually sitting here, writing this tutorial, with a thermometer in my mouth; and glancing down I see that it reads 99.4° Fahrenheit.
My body runs a bit cooler than most, typically in the 97.4°F range. Anything above 99°F is a low-grade fever for me. Cough and low-grade fever? That could be COVID-19…or it could simply be my allergies. It’s impossible to know without a test, and that “not knowing” is what makes this situation so scary from a visceral human level. As humans, there is nothing more terrifying than the unknown. Despite my anxieties, I try to rationalize them away. I’m in my early 30s, very much in shape, and my immune system is strong. I’ll quarantine myself (just in case), rest up, and pull through just fine — COVID-19 doesn’t scare me from my own personal health perspective (at least that’s what I keep telling myself). That said, I am worried about my older relatives, including anyone that has pre-existing conditions, or those in a nursing home or hospital.
They are vulnerable and it would be truly devastating to see them go due to COVID-19. Instead of sitting idly by and letting whatever is ailing me keep me down (be it allergies, COVID-19, or my own personal anxieties), I decided to do what I do best — focus on the overall CV/DL community by writing code, running experiments, and educating others on how to use computer vision and deep learning in practical, real-world applications. That said, I’ll be honest, this is not the most scientific article I’ve ever written. Far from it, in fact. The methods and datasets used would not be worthy of publication. But they serve as a starting point for those who need to feel like they’re doing something to help. I care about you and I care about this community. I want to do what I can to help — this blog post is my way of mentally handling a tough time, while simultaneously helping others in a similar situation. I hope you see it as such. Inside of today’s tutorial, you will learn how to:
Sample an open source dataset of X-ray images for patients who have tested positive for COVID-19
Sample “normal” (i.e., not infected) X-ray images from healthy patients
Train a CNN to automatically detect COVID-19 in X-ray images via the dataset we created
Evaluate the results from an educational perspective
Disclaimer: I’ve hinted at this already but I’ll say it explicitly here.
The methods and techniques used in this post are meant for educational purposes only. This is not a scientifically rigorous study, nor will it be published in a journal. This article is for readers who are interested in (1) Computer Vision/Deep Learning and want to learn via practical, hands-on methods and (2) are inspired by current events. I kindly ask that you treat it as such. To learn how you could detect COVID-19 in X-ray images by using Keras, TensorFlow, and Deep Learning, just keep reading! Looking for the source code to this post? Jump Right To The Downloads Section Detecting COVID-19 in X-ray images with Keras, TensorFlow, and Deep Learning In the first part of this tutorial, we’ll discuss how COVID-19 could be detected in chest X-rays of patients. From there, we’ll review our COVID-19 chest X-ray dataset. I’ll then show you how to train a deep learning model using Keras and TensorFlow to predict COVID-19 in our image dataset. Disclaimer This blog post on automatic COVID-19 detection is for educational purposes only.
It is not meant to be a reliable, highly accurate COVID-19 diagnosis system, nor has it been professionally or academically vetted. My goal is simply to inspire you and open your eyes to how studying computer vision/deep learning and then applying that knowledge to the medical field can make a big impact on the world. Simply put: You don’t need a degree in medicine to make an impact in the medical field — deep learning practitioners working closely with doctors and medical professionals can solve complex problems, save lives, and make the world a better place. My hope is that this tutorial inspires you to do just that. But with that said, researchers, journal curators, and peer review systems are being overwhelmed with submissions containing COVID-19 prediction models of questionable quality. Please do not take the code/model from this post and submit it to a journal or Open Science — you’ll only add to the noise. Furthermore, if you intend on performing research using this post (or any other COVID-19 article you find online), make sure you refer to the TRIPOD guidelines on reporting predictive models. As you’re likely aware, artificial intelligence applied to the medical domain can have very real consequences. Only publish or deploy such models if you are a medical expert, or closely consulting with one. How could COVID-19 be detected in X-ray images?
Figure 1: Example of an X-ray image taken from a patient with a positive test for COVID-19. Using X-ray images we can train a machine learning classifier to detect COVID-19 using Keras and TensorFlow. COVID-19 tests are currently hard to come by — there are simply not enough of them and they cannot be manufactured fast enough, which is causing panic. When there’s panic, there are nefarious people looking to take advantage of others, namely by selling fake COVID-19 test kits after finding victims on social media platforms and chat applications. Given that there are limited COVID-19 testing kits, we need to rely on other diagnosis measures. For the purposes of this tutorial, I decided to explore X-ray images, as doctors frequently use X-rays and CT scans to diagnose pneumonia, lung inflammation, abscesses, and/or enlarged lymph nodes. Since COVID-19 attacks the epithelial cells that line our respiratory tract, we can use X-rays to analyze the health of a patient’s lungs. And given that nearly all hospitals have X-ray imaging machines, it could be possible to use X-rays to test for COVID-19 without the dedicated test kits. A drawback is that X-ray analysis requires a radiology expert and takes significant time — which is precious when people are sick around the world. Therefore, developing an automated analysis system could save medical professionals valuable time.
Note: There are newer publications that suggest CT scans are better for diagnosing COVID-19, but all we have to work with for this tutorial is an X-ray image dataset. Secondly, I am not a medical expert and I presume there are other, more reliable, methods that doctors and medical professionals will use to detect COVID-19 outside of the dedicated test kits. Our COVID-19 patient X-ray image dataset Figure 2: CoronaVirus (COVID-19) chest X-ray image data. On the left we have positive (i.e., infected) X-ray images, whereas on the right we have negative samples. These images are used to train a deep learning model with TensorFlow and Keras to automatically predict whether a patient has COVID-19 (i.e., coronavirus). The COVID-19 X-ray image dataset we’ll be using for this tutorial was curated by Dr. Joseph Cohen, a postdoctoral fellow at the University of Montreal. One week ago, Dr. Cohen started collecting X-ray images of COVID-19 cases and publishing them in the following GitHub repo. Inside the repo you’ll find examples of COVID-19 cases, as well as MERS, SARS, and ARDS. In order to create the COVID-19 X-ray image dataset for this tutorial, I:
Parsed the metadata.csv file found in Dr. Cohen’s repository.
Selected all rows that are:
Positive for COVID-19 (i.e., ignoring MERS, SARS, and ARDS cases).
Posteroanterior (PA) view of the lungs. I used the PA view as, to my knowledge, that was the view used for my “healthy” cases, as discussed below; however, I’m sure that a medical professional will be able to clarify and correct me if I am incorrect (which I very well may be, this is just an example). In total, that left me with 25 X-ray images of positive COVID-19 cases (Figure 2, left). The next step was to sample X-ray images of healthy patients. To do so, I used Kaggle’s Chest X-Ray Images (Pneumonia) dataset and sampled 25 X-ray images from healthy patients (Figure 2, right). There are a number of problems with Kaggle’s Chest X-Ray dataset, namely noisy/incorrect labels, but it served as a good enough starting point for this proof of concept COVID-19 detector. After gathering my dataset, I was left with 50 total images, equally split with 25 images of COVID-19 positive X-rays and 25 images of healthy patient X-rays. I’ve included my sample dataset in the “Downloads” section of this tutorial, so you do not have to recreate it. Additionally, I have included my Python scripts used to generate the dataset in the downloads as well, but these scripts will not be reviewed in this tutorial as they are outside the scope of the post. Project structure Go ahead and grab today’s code and data from the “Downloads” section of this tutorial.
From there, extract the files and you’ll be presented with the following directory structure:

$ tree --dirsfirst --filelimit 10
.
├── dataset
│   ├── covid [25 entries]
│   └── normal [25 entries]
├── build_covid_dataset.py
├── sample_kaggle_dataset.py
├── train_covid19.py
├── plot.png
└── covid19.model

3 directories, 5 files

Our coronavirus (COVID-19) chest X-ray data is in the dataset/ directory where our two classes of data are separated into covid/ and normal/. Both of my dataset building scripts are provided; however, we will not be reviewing them today. Instead, we will review the train_covid19.py script which trains our COVID-19 detector. Let’s dive in and get to work! Implementing our COVID-19 training script using Keras and TensorFlow Now that we’ve reviewed our image dataset along with the corresponding directory structure for our project, let’s move on to fine-tuning a Convolutional Neural Network to automatically diagnose COVID-19 using Keras, TensorFlow, and deep learning. Open up the train_covid19.py file in your directory structure and insert the following code:

# import the necessary packages
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import AveragePooling2D
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse
import cv2
import os

This script takes advantage of TensorFlow 2.0 and Keras deep learning libraries via a selection of tensorflow.keras imports. Additionally, we use scikit-learn, the de facto Python library for machine learning, matplotlib for plotting, and OpenCV for loading and preprocessing images in the dataset. To learn how to install TensorFlow 2.0 (including relevant scikit-learn, OpenCV, and matplotlib libraries), just follow my Ubuntu or macOS guide. With our imports taken care of, next we will parse command line arguments and initialize hyperparameters:

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True,
	help="path to input dataset")
ap.add_argument("-p", "--plot", type=str, default="plot.png",
	help="path to output loss/accuracy plot")
ap.add_argument("-m", "--model", type=str, default="covid19.model",
	help="path to output serialized model")
args = vars(ap.parse_args())

# initialize the initial learning rate, number of epochs to train for,
# and batch size
INIT_LR = 1e-3
EPOCHS = 25
BS = 8

Our three command line arguments (Lines 24-31) include:
--dataset: The path to our input dataset of chest X-ray images.
--plot: An optional path to an output training history plot. By default the plot is named plot.png unless otherwise specified via the command line.
--model: The optional path to our output COVID-19 model; by default it will be named covid19.model.
From there we initialize our initial learning rate, number of training epochs, and batch size hyperparameters (Lines 35-37). We’re now ready to load and preprocess our X-ray data:

# grab the list of images in our dataset directory, then initialize
# the list of data (i.e., images) and class labels
print("[INFO] loading images...")
imagePaths = list(paths.list_images(args["dataset"]))
data = []
labels = []

# loop over the image paths
for imagePath in imagePaths:
	# extract the class label from the filename
	label = imagePath.split(os.path.sep)[-2]

	# load the image, swap color channels, and resize it to be a fixed
	# 224x224 pixels while ignoring aspect ratio
	image = cv2.imread(imagePath)
	image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
	image = cv2.resize(image, (224, 224))

	# update the data and labels lists, respectively
	data.append(image)
	labels.append(label)

# convert the data and labels to NumPy arrays while scaling the pixel
# intensities to the range [0, 1]
data = np.array(data) / 255.0
labels = np.array(labels)

To load our data, we grab all paths to images in the --dataset directory (Line 42). Then, for each imagePath, we:
Extract the class label (either covid or normal) from the path (Line 49).
Load the image, and preprocess it by converting to RGB channel ordering, and resizing it to 224×224 pixels so that it is ready for our Convolutional Neural Network (Lines 53-55).
Update our data and labels lists respectively (Lines 58 and 59).
We then scale pixel intensities to the range [0, 1] and convert both our data and labels to NumPy array format (Lines 63 and 64). Next we will one-hot encode our labels and create our training/testing splits:

# perform one-hot encoding on the labels
lb = LabelBinarizer()
labels = lb.fit_transform(labels)
labels = to_categorical(labels)

# partition the data into training and testing splits using 80% of
# the data for training and the remaining 20% for testing
(trainX, testX, trainY, testY) = train_test_split(data, labels,
	test_size=0.20, stratify=labels, random_state=42)

# initialize the training data augmentation object
trainAug = ImageDataGenerator(
	rotation_range=15,
	fill_mode="nearest")

One-hot encoding of labels takes place on Lines 67-69, meaning that our data will be in the following format:

[[0. 1.]
 [0. 1.]
 [0. 1.]
 ...
 [1. 0.]
 [1. 0.]
 [1. 0.]]

Each encoded label consists of a two element array with one of the elements being “hot” (i.e., 1) versus “not” (i.e., 0). Lines 73 and 74 then construct our data split, reserving 80% of the data for training and 20% for testing. In order to ensure that our model generalizes, we perform data augmentation by setting the random image rotation setting to 15 degrees clockwise or counterclockwise. Lines 77-79 initialize the data augmentation generator object. From here we will initialize our VGGNet model and set it up for fine-tuning:

# load the VGG16 network, ensuring the head FC layer sets are left
# off
baseModel = VGG16(weights="imagenet", include_top=False,
	input_tensor=Input(shape=(224, 224, 3)))

# construct the head of the model that will be placed on top of
# the base model
headModel = baseModel.output
headModel = AveragePooling2D(pool_size=(4, 4))(headModel)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(64, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(2, activation="softmax")(headModel)

# place the head FC model on top of the base model (this will become
# the actual model we will train)
model = Model(inputs=baseModel.input, outputs=headModel)

# loop over all layers in the base model and freeze them so they will
# *not* be updated during the first training process
for layer in baseModel.layers:
	layer.trainable = False

Lines 83 and 84 instantiate the VGG16 network with weights pre-trained on ImageNet, leaving off the FC layer head. From there, we construct a new fully-connected layer head consisting of POOL => FC => SOFTMAX layers (Lines 88-93) and append it on top of VGG16 (Line 97). We then freeze the CONV weights of VGG16 such that only the FC layer head will be trained (Lines 101 and 102); this completes our fine-tuning setup. We’re now ready to compile and train our COVID-19 (coronavirus) deep learning model:

# compile our model
print("[INFO] compiling model...")
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
model.compile(loss="binary_crossentropy", optimizer=opt,
	metrics=["accuracy"])

# train the head of the network
print("[INFO] training head...")
H = model.fit_generator(
	trainAug.flow(trainX, trainY, batch_size=BS),
	steps_per_epoch=len(trainX) // BS,
	validation_data=(testX, testY),
	validation_steps=len(testX) // BS,
	epochs=EPOCHS)

Lines 106-108 compile the network with learning rate decay and the Adam optimizer. Given that this is a 2-class problem, we use "binary_crossentropy" loss rather than categorical crossentropy.
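One subtlety worth seeing in isolation: with only two classes, LabelBinarizer produces a single column, which is exactly why to_categorical is applied afterward. A quick standalone sketch (separate from train_covid19.py, using made-up labels for illustration):

# standalone sketch: LabelBinarizer yields one column for a 2-class
# problem, so to_categorical expands it to the two-column one-hot
# format our 2-node softmax head expects
from sklearn.preprocessing import LabelBinarizer
from tensorflow.keras.utils import to_categorical

labels = ["covid", "covid", "normal", "normal"]
lb = LabelBinarizer()
binary = lb.fit_transform(labels)
print(binary.shape)                  # (4, 1) -- a single column
print(to_categorical(binary).shape)  # (4, 2) -- one-hot, two columns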
To kick off our COVID-19 neural network training process, we make a call to Keras’ fit_generator method, while passing in our chest X-ray data via our data augmentation object (Lines 112-117). Next, we’ll evaluate our model:

# make predictions on the testing set
print("[INFO] evaluating network...")
predIdxs = model.predict(testX, batch_size=BS)

# for each image in the testing set we need to find the index of the
# label with corresponding largest predicted probability
predIdxs = np.argmax(predIdxs, axis=1)

# show a nicely formatted classification report
print(classification_report(testY.argmax(axis=1), predIdxs,
	target_names=lb.classes_))

For evaluation, we first make predictions on the testing set and grab the prediction indices (Lines 121-125). We then generate and print out a classification report using scikit-learn’s helper utility (Lines 128 and 129). Next we’ll compute a confusion matrix for further statistical evaluation:

# compute the confusion matrix and use it to derive the raw
# accuracy, sensitivity, and specificity
cm = confusion_matrix(testY.argmax(axis=1), predIdxs)
total = sum(sum(cm))
acc = (cm[0, 0] + cm[1, 1]) / total
sensitivity = cm[0, 0] / (cm[0, 0] + cm[0, 1])
specificity = cm[1, 1] / (cm[1, 0] + cm[1, 1])

# show the confusion matrix, accuracy, sensitivity, and specificity
print(cm)
print("acc: {:.4f}".format(acc))
print("sensitivity: {:.4f}".format(sensitivity))
print("specificity: {:.4f}".format(specificity))

Here we:
Generate a confusion matrix (Line 133)
Use the confusion matrix to derive the accuracy, sensitivity, and specificity (Lines 135-137) and print each of these metrics (Lines 141-143)
We then plot our training accuracy/loss history for inspection, outputting the plot to an image file:

# plot the training loss and accuracy
N = EPOCHS
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy on COVID-19 Dataset")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
plt.savefig(args["plot"])

Finally we serialize our tf.keras COVID-19 classifier model to disk:

# serialize the model to disk
print("[INFO] saving COVID-19 detector model...")
model.save(args["model"], save_format="h5")

Training our COVID-19 detector with Keras and TensorFlow With our train_covid19.py script implemented, we are now ready to train our automatic COVID-19 detector. Make sure you use the “Downloads” section of this tutorial to download the source code, COVID-19 X-ray dataset, and pre-trained model. From there, open up a terminal and execute the following command to train the COVID-19 detector:

$ python train_covid19.py --dataset dataset
[INFO] loading images...
[INFO] compiling model...
[INFO] training head...
Epoch 1/25
5/5 [==============================] - 20s 4s/step - loss: 0.7169 - accuracy: 0.6000 - val_loss: 0.6590 - val_accuracy: 0.5000
Epoch 2/25
5/5 [==============================] - 0s 86ms/step - loss: 0.8088 - accuracy: 0.4250 - val_loss: 0.6112 - val_accuracy: 0.9000
Epoch 3/25
5/5 [==============================] - 0s 99ms/step - loss: 0.6809 - accuracy: 0.5500 - val_loss: 0.6054 - val_accuracy: 0.5000
Epoch 4/25
5/5 [==============================] - 1s 100ms/step - loss: 0.6723 - accuracy: 0.6000 - val_loss: 0.5771 - val_accuracy: 0.6000
...
Epoch 22/25
5/5 [==============================] - 0s 99ms/step - loss: 0.3271 - accuracy: 0.9250 - val_loss: 0.2902 - val_accuracy: 0.9000
Epoch 23/25
5/5 [==============================] - 0s 99ms/step - loss: 0.3634 - accuracy: 0.9250 - val_loss: 0.2690 - val_accuracy: 0.9000
Epoch 24/25
5/5 [==============================] - 27s 5s/step - loss: 0.3175 - accuracy: 0.9250 - val_loss: 0.2395 - val_accuracy: 0.9000
Epoch 25/25
5/5 [==============================] - 1s 101ms/step - loss: 0.3655 - accuracy: 0.8250 - val_loss: 0.2522 - val_accuracy: 0.9000
[INFO] evaluating network...
              precision    recall  f1-score   support

       covid       0.83      1.00      0.91         5
      normal       1.00      0.80      0.89         5

    accuracy                           0.90        10
   macro avg       0.92      0.90      0.90        10
weighted avg       0.92      0.90      0.90        10

[[5 0]
 [1 4]]
acc: 0.9000
sensitivity: 1.0000
specificity: 0.8000
[INFO] saving COVID-19 detector model...

Automatic COVID-19 diagnosis from X-ray image results Disclaimer: The following section does not claim, nor does it intend to “solve”, COVID-19 detection. It is written in the context, and from the results, of this tutorial only. It is an example for budding computer vision and deep learning practitioners so they can learn about various metrics, including raw accuracy, sensitivity, and specificity (and the tradeoffs we must consider when working with medical applications). Again, this section/tutorial does not claim to solve COVID-19 detection. As you can see from the results above, our automatic COVID-19 detector is obtaining ~90-92% accuracy on our sample dataset based solely on X-ray images — no other data, including geographical location, population density, etc.
was used to train this model. We are also obtaining 100% sensitivity and 80% specificity implying that:
Of patients that do have COVID-19 (i.e., true positives), we could accurately identify them as “COVID-19 positive” 100% of the time using our model.
Of patients that do not have COVID-19 (i.e., true negatives), we could accurately identify them as “COVID-19 negative” only 80% of the time using our model.
As our training history plot shows, our network is not overfitting, despite having very limited training data: Figure 3: This deep learning training history plot showing accuracy and loss curves demonstrates that our model is not overfitting despite limited COVID-19 X-ray training data used in our Keras/TensorFlow model. Being able to detect COVID-19-positive patients with 100% sensitivity is great; however, our 80% specificity (the true negative rate) is a bit concerning: we don’t want to mistakenly classify someone as “COVID-19 positive”, quarantine them with other COVID-19 positive patients, and then infect a person who never actually had the virus. Just as important is avoiding false negatives: the last thing we want to do is tell a patient they are “COVID-19 negative”, have them go home, and then infect their family and friends, thereby transmitting the disease further. Balancing sensitivity and specificity is incredibly challenging when it comes to medical applications, especially infectious diseases that can be rapidly transmitted, such as COVID-19. When it comes to medical computer vision and deep learning, we must always be mindful of the fact that our predictive models can have very real consequences — a missed diagnosis can cost lives. Again, these results are gathered for educational purposes only.
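Before moving on, if you want to verify these metrics yourself, here is a small standalone sketch (my own addition) that recomputes all three numbers from the confusion matrix printed during evaluation:

# recompute accuracy, sensitivity, and specificity from the printed
# confusion matrix [[5 0], [1 4]] (rows = true covid/normal,
# columns = predicted covid/normal)
import numpy as np

cm = np.array([[5, 0],
               [1, 4]])
acc = (cm[0, 0] + cm[1, 1]) / cm.sum()  # (5 + 4) / 10 = 0.90
sensitivity = cm[0, 0] / cm[0].sum()    # 5 / 5 = 1.00
specificity = cm[1, 1] / cm[1].sum()    # 4 / 5 = 0.80
print(acc, sensitivity, specificity)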
This article and the accompanying results are not intended to be a journal article, nor do they conform to the TRIPOD guidelines on reporting predictive models. I would suggest you refer to these guidelines for more information, if you are so interested. Limitations, improvements, and future work Figure 4: Currently, artificial intelligence (AI) experts and deep learning practitioners are suffering from a lack of quality COVID-19 data to effectively train automatic image-based detection systems. ( image source) One of the biggest limitations of the method discussed in this tutorial is data. We simply don’t have enough (reliable) data to train a COVID-19 detector. Hospitals are already overwhelmed with the number of COVID-19 cases, and given patients’ rights and confidentiality, it becomes even harder to assemble quality medical image datasets in a timely fashion. I imagine in the next 12-18 months we’ll have more high quality COVID-19 image datasets; but for the time being, we can only make do with what we have. I have done my best (given my current mental state and physical health) to put together a tutorial for my readers who are interested in applying computer vision and deep learning to the COVID-19 pandemic given my limited time and resources; however, I must remind you that I am not a trained medical expert. For the COVID-19 detector to be deployed in the field, it would have to go through rigorous testing by trained medical professionals, working hand-in-hand with expert deep learning practitioners. The method covered here today is certainly not such a method, and is meant for educational purposes only.
Furthermore, we need to be concerned with what the model is actually “learning”. As I discussed in last week’s Grad-CAM tutorial, it’s possible that our model is learning patterns that are not relevant to COVID-19, and instead are just variations between the two data splits (i.e., positive versus negative COVID-19 diagnosis). It would take a trained medical professional and rigorous testing to validate the results coming out of our COVID-19 detector. And finally, future (and better) COVID-19 detectors will be multi-modal. Right now we are using only image data (i.e., X-rays) — better automatic COVID-19 detectors should leverage multiple data sources not limited to just images, including patient vitals, population density, geographical location, etc. Image data by itself is typically not sufficient for these types of applications. For these reasons, I must once again stress that this tutorial is meant for educational purposes only — it is not meant to be a robust COVID-19 detector. If you believe that yourself or a loved one has COVID-19, you should follow the protocols outlined by the Center for Disease Control (CDC), World Health Organization (WHO), or local country, state, or jurisdiction. I hope you enjoyed this tutorial and found it educational. It’s also my hope that this tutorial serves as a starting point for anyone interested in applying computer vision and deep learning to automatic COVID-19 detection.
What’s next? I typically end my blog posts by recommending one of my books/courses, so that you can learn more about applying Computer Vision and Deep Learning to your own projects. Out of respect for the severity of the coronavirus, I am not going to do that — this isn’t the time or the place. Instead, what I will say is we’re in a very scary season of life right now. Like all seasons, it will pass, but we need to hunker down and prepare for a cold winter — it’s likely that the worst has yet to come. To be frank, I feel incredibly depressed and isolated. I see:​ Stock markets tanking. Countries locking down their borders. Massive sporting events being cancelled. Some of the world’s most popular bands postponing their tours.
And locally, my favorite restaurants and coffee shops shuttering their doors. That’s all on the macro-level — but what about the micro-level? What about us as individuals? It’s too easy to get caught up in the global statistics. We see numbers like 6,000 dead and 160,000 confirmed cases (with potentially multiple orders of magnitude more due to the lack of COVID-19 testing kits and the fact that some people are choosing to self-quarantine). When we think in those terms we lose sight of ourselves and our loved ones. We need to take things day-by-day. We need to think at the individual level for our own mental health and sanity. We need safe spaces where we can retreat to. When I started PyImageSearch over 5 years ago, I knew it was going to be a safe space.
I set the example for what PyImageSearch was to become and I still do to this day. For this reason, I don’t allow harassment in any shape or form, including, but not limited to, racism, sexism, xenophobia, elitism, bullying, etc. The PyImageSearch community is special. People here respect others — and if they don’t, I remove them. Perhaps one of my favorite displays of kind, accepting, and altruistic human character came when I ran PyImageConf 2018 — attendees were overwhelmed with how friendly and welcoming the conference was. Dave Snowdon, software engineer and PyImageConf attendee said: PyImageConf was without a doubt the most friendly and welcoming conference I’ve been to. The technical content was also great too! It was a privilege to meet and learn from some of the people who’ve contributed their time to build the tools that we rely on for our work (and play). David Stone, Doctor of Engineering and professor at Virginia Commonwealth University shared the following: Thanks for putting together PyImageConf. I also agree that it was the most friendly conference that I have attended.
Why do I say all this? Because I know you may be scared right now. I know you might be at your wits’ end (trust me, I am too). And most importantly, because I want PyImageSearch to be your safe space. You might be a student home from school after your semester prematurely ended, disappointed that your education has been put on hold. You may be a developer, totally lost after your workplace chained its doors for the foreseeable future. You may be a researcher, frustrated that you can’t continue your experiments and authoring that novel paper. You might be a parent, trying, unsuccessfully, to juggle two kids and a mandatory “work from home” requirement. Or, you may be like me — just trying to get through the day by learning a new skill, algorithm, or technique. I’ve received a number of emails from PyImageSearch readers who want to use this downtime to study Computer Vision and Deep Learning rather than going stir crazy in their homes.
I respect that and I want to help, and to a degree, I believe it is my moral obligation to help how I can: To start, there are over 350 free tutorials you can learn from on the PyImageSearch blog. I publish a new tutorial every Monday at 10AM EST. I’ve categorized, cross-referenced, and compiled these tutorials on my “Get Started” page. The most popular topics on the “Get Started” page include “Deep Learning” and “Face Applications”. All these guides are 100% free. Use them to study and learn from. That said, many readers have also been requesting that I run a sale on my books and courses. At first, I was a bit hesitant about it — the last thing I want is for people to think I’m somehow using the coronavirus as a scheme to “make money”. But the truth is, being a small business owner who is not only responsible for myself and my family, but the lives and families of my teammates, can be terrifying and overwhelming at times — people’s lives and small businesses will be destroyed by this virus. To that end, just like:
Bands and performers are offering discounted “online only” shows
Restaurants are offering home delivery
Fitness coaches are offering training sessions online
…I’ll be following suit.
Starting tomorrow I’ll be running a sale on PyImageSearch books. This sale isn’t meant for profit and it’s certainly not planned (I’ve spent my entire weekend, sick, trying to put all this together). Instead, it’s a sale to help people, like me (and perhaps like yourself), who are struggling to find their safe space during this mess. Let myself and PyImageSearch become your retreat. I typically only run one big sale per year (Black Friday), but given how many people are requesting it, I believe it’s something that I need to do for those who want to use this downtime to study and/or as a distraction from the rest of the world. Feel free to join in or not. It’s totally okay. We all process these tough times in our own ways. But if you need rest, if you need a haven, if you need a retreat through education — I’ll be here. Thank you and stay safe.
Summary In this tutorial you learned how you could use Keras, TensorFlow, and Deep Learning to train an automatic COVID-19 detector on a dataset of X-ray images. High quality, peer reviewed image datasets for COVID-19 don’t exist (yet), so we had to work with what we had, namely Joseph Cohen’s GitHub repo of open-source X-ray images: We sampled 25 images from Cohen’s dataset, taking only the posteroanterior (PA) view of COVID-19 positive cases. We then sampled 25 images of healthy patients using Kaggle’s Chest X-Ray Images (Pneumonia) dataset. From there we used Keras and TensorFlow to train a COVID-19 detector that was capable of obtaining 90-92% accuracy on our testing set with 100% sensitivity and 80% specificity (given our limited dataset).
Keep in mind that the COVID-19 detector covered in this tutorial is for educational purposes only (refer to my “Disclaimer” at the top of this tutorial). My goal is to inspire deep learning practitioners, such as yourself, and open your eyes to how deep learning and computer vision can make a big impact on the world. I hope you enjoyed this blog post. To download the source code to this post (including the pre-trained COVID-19 diagnosis model), just enter your email address in the form below!
Click here to download the source code to this post In this tutorial, you will learn how to use TensorFlow’s GradientTape function to create custom training loops to train Keras models. Today’s tutorial was inspired by a question I received from PyImageSearch reader Timothy: Hi Adrian, I just read your tutorial on Grad-CAM and noticed that you used a function named GradientTape when computing gradients. I’ve heard GradientTape is a brand new function in TensorFlow 2.0 and that it can be used for automatic differentiation and writing custom training loops, but I can’t find many examples of it online. Could you shed some light on how to use GradientTape for custom training loops? Timothy is correct on both fronts:
GradientTape is a brand-new function in TensorFlow 2.0
And it can be used to write custom training loops (both for Keras models and models implemented in “pure” TensorFlow)
One of the largest criticisms of the TensorFlow 1.x low-level API, as well as the Keras high-level API, was that it made it very challenging for deep learning researchers to write custom training loops that could:
Customize the data batching process
Handle multiple inputs and/or outputs with different spatial dimensions
Utilize a custom loss function
Access gradients for specific layers and update them in a unique manner
That’s not to say you couldn’t create custom training loops with Keras and TensorFlow 1.x. You could; it was just a bit of a bear and ultimately one of the driving reasons why some researchers ended up switching to PyTorch — they simply didn’t want the headache anymore and desired a better way to implement their training procedures. That all changed in TensorFlow 2.0. With the TensorFlow 2.0 release, we now have the GradientTape function, which makes it easier than ever to write custom training loops for both TensorFlow and Keras models, thanks to automatic differentiation. Whether you’re a deep learning practitioner or a seasoned researcher, you should learn how to use the GradientTape function — it allows you to create custom training loops for models implemented in Keras’ easy-to-use API, giving you the best of both worlds. You just can’t beat that combination.
To learn how to use TensorFlow’s GradientTape function to train Keras models, just keep reading! Looking for the source code to this post? Jump Right To The Downloads Section Using TensorFlow and GradientTape to train a Keras model In the first part of this tutorial, we will discuss automatic differentiation, including how it’s different from classical methods for differentiation, such as symbolic differentiation and numerical differentiation. We’ll then discuss the four components, at a bare minimum, required to create custom training loops to train a deep neural network. Afterward, we’ll show you how to use TensorFlow’s GradientTape function to implement such a custom training loop. Finally, we’ll use our custom training loop to train a Keras model and check results. GradientTape: What is automatic differentiation? Figure 1: Using TensorFlow and GradientTape to train a Keras model requires conceptual knowledge of automatic differentiation — a set of techniques to automatically compute the derivative of a function by applying the chain rule. ( image source) Automatic differentiation (also called computational differentiation) refers to a set of techniques that can automatically compute the derivative of a function by repeatedly applying the chain rule. To quote Wikipedia’s excellent article on automatic differentiation: Automatic differentiation exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.)
and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program. Unlike classical differentiation algorithms such as symbolic differentiation (which is inefficient) and numerical differentiation (which is prone to discretization and round-off errors), automatic differentiation is fast and efficient, and best of all, it can compute partial derivatives with respect to many inputs (which is exactly what we need when applying gradient descent to train our models). To learn more about the inner-workings of automatic differentiation algorithms, I would recommend reviewing the slides to this University of Toronto lecture as well as working through this example by Chi-Feng Wang. 4 components of a deep neural network training loop with TensorFlow, GradientTape, and Keras When implementing custom training loops with Keras and TensorFlow, you need to define, at a bare minimum, four components:
Component 1: The model architecture
Component 2: The loss function used when computing the model loss
Component 3: The optimizer used to update the model weights
Component 4: The step function that encapsulates the forward and backward pass of the network
Each of these components could be simple or complex, but at a bare minimum, you will need all four when creating a custom training loop for your own models. Once you’ve defined them, GradientTape takes care of the rest. Project structure Go ahead and grab the “Downloads” to today’s blog post and unzip the code. You’ll be presented with the following project:

$ tree
.
└── gradient_tape_example.py

0 directories, 1 file

Today’s zip consists of only one Python file — our GradientTape example script. Our Python script will use GradientTape to train a custom CNN on the MNIST dataset (TensorFlow will download MNIST if you don’t have it already cached on your system).
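Before we open the training script, here is a minimal, standalone sketch of automatic differentiation with GradientTape (not part of the downloaded code): computing the derivative of y = x² at x = 3, which should be 2x = 6:

# a standalone sketch of automatic differentiation with GradientTape:
# record operations on the tape, then ask for dy/dx
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
	y = x * x
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())  # 6.0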
Let’s jump into the implementation of GradientTape next. Implementing the TensorFlow and GradientTape training script Let’s learn how to use TensorFlow’s GradientTape function to implement a custom training loop to train a Keras model. Open up the gradient_tape_example.py file in your project directory structure, and let’s get started:

# import the necessary packages
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import categorical_crossentropy
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.datasets import mnist
import tensorflow as tf
import numpy as np
import time
import sys

We begin with our imports from TensorFlow 2.0 and NumPy. If you inspect carefully, you won’t see GradientTape; we can access it via tf.GradientTape. We will be using the MNIST dataset (mnist) for our example in this tutorial. Let’s go ahead and build our model using TensorFlow/Keras’ Sequential API:

def build_model(width, height, depth, classes):
	# initialize the input shape and channels dimension to be
	# "channels last" ordering
	inputShape = (height, width, depth)
	chanDim = -1

	# build the model using Keras' Sequential API
	model = Sequential([
		# CONV => RELU => BN => POOL layer set
		Conv2D(16, (3, 3), padding="same", input_shape=inputShape),
		Activation("relu"),
		BatchNormalization(axis=chanDim),
		MaxPooling2D(pool_size=(2, 2)),

		# (CONV => RELU => BN) * 2 => POOL layer set
		Conv2D(32, (3, 3), padding="same"),
		Activation("relu"),
		BatchNormalization(axis=chanDim),
		Conv2D(32, (3, 3), padding="same"),
		Activation("relu"),
		BatchNormalization(axis=chanDim),
		MaxPooling2D(pool_size=(2, 2)),

		# (CONV => RELU => BN) * 3 => POOL layer set
		Conv2D(64, (3, 3), padding="same"),
		Activation("relu"),
		BatchNormalization(axis=chanDim),
		Conv2D(64, (3, 3), padding="same"),
		Activation("relu"),
		BatchNormalization(axis=chanDim),
		Conv2D(64, (3, 3), padding="same"),
		Activation("relu"),
		BatchNormalization(axis=chanDim),
		MaxPooling2D(pool_size=(2, 2)),

		# first (and only) set of FC => RELU layers
		Flatten(),
		Dense(256),
		Activation("relu"),
		BatchNormalization(),
		Dropout(0.5),

		# softmax classifier
		Dense(classes),
		Activation("softmax")
	])

	# return the built model to the calling function
	return model

Here we define our build_model function used to construct the model architecture (Component #1 of creating a custom training loop). The function accepts the shape parameters for our data:
width and height: The spatial dimensions of each input image
depth: The number of channels for our images (1 for grayscale as in the case of MNIST or 3 for RGB color images)
classes: The number of unique class labels in our dataset
Our model is representative of a VGG-esque architecture (i.e., inspired by the variants of VGGNet), as it contains 3×3 convolutions and stacking of CONV => RELU => BN layers before a POOL to reduce volume size. Fifty percent dropout (randomly disconnecting neurons) is added to the set of FC => RELU layers, as it is proven to increase model generalization. Once our model is built, Line 67 returns it to the caller.
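If you’d like to sanity-check the architecture before wiring up the training loop, a quick optional usage sketch (my addition, not in the original script):

# optional: build the model with MNIST's dimensions and inspect it
model = build_model(28, 28, 1, 10)
model.summary()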
Let’s work on Components 2, 3, and 4:

def step(X, y):
	# keep track of our gradients
	with tf.GradientTape() as tape:
		# make a prediction using the model and then calculate the
		# loss
		pred = model(X)
		loss = categorical_crossentropy(y, pred)

	# calculate the gradients using our tape and then update the
	# model weights
	grads = tape.gradient(loss, model.trainable_variables)
	opt.apply_gradients(zip(grads, model.trainable_variables))

Our step function accepts training images X and their corresponding class labels y (in our example, MNIST images and labels). Now let’s record our gradients by:
Gathering predictions on our training data using our model (Line 74)
Computing the loss (Component #2 of creating a custom training loop) on Line 75
We then calculate our gradients using tape.gradient, passing our loss and trainable variables (Line 79). We use our optimizer to update the model weights using the gradients on Line 80 (Component #3). The step function as a whole rounds out Component #4, encapsulating our forward and backward pass of data using our GradientTape and then updating our model weights. With both our build_model and step functions defined, now we’ll prepare data:

# initialize the number of epochs to train for, batch size, and
# initial learning rate
EPOCHS = 25
BS = 64
INIT_LR = 1e-3

# load the MNIST dataset
print("[INFO] loading MNIST dataset...")
((trainX, trainY), (testX, testY)) = mnist.load_data()

# add a channel dimension to every image in the dataset, then scale
# the pixel intensities to the range [0, 1]
trainX = np.expand_dims(trainX, axis=-1)
testX = np.expand_dims(testX, axis=-1)
trainX = trainX.astype("float32") / 255.0
testX = testX.astype("float32") / 255.0

# one-hot encode the labels
trainY = to_categorical(trainY, 10)
testY = to_categorical(testY, 10)

Lines 84-86 initialize our training epochs, batch size, and initial learning rate. We then load MNIST data (Line 90) and proceed to preprocess it by:
Adding a single channel dimension (Lines 94 and 95)
Scaling pixel intensities to the range [0, 1] (Lines 96 and 97)
One-hot encoding our labels (Lines 100 and 101)
Note: As GradientTape is an advanced concept, you should be familiar with these preprocessing steps. If you need to brush up on these fundamentals, definitely consider picking up a copy of Deep Learning for Computer Vision with Python. With our data in hand and ready to go, we’ll build our model:

# build our model and initialize our optimizer
print("[INFO] creating model...")
model = build_model(28, 28, 1, 10)
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)

Here we build our CNN architecture utilizing our build_model function while passing the shape of our data. The shape consists of 28×28 pixel images with a single channel and 10 classes corresponding to digits 0-9 in MNIST.
We then initialize our Adam optimizer with a standard learning rate decay schedule.

We’re now ready to train our model with our GradientTape:

# compute the number of batch updates per epoch
numUpdates = int(trainX.shape[0] / BS)

# loop over the number of epochs
for epoch in range(0, EPOCHS):
	# show the current epoch number
	print("[INFO] starting epoch {}/{}...".format(
		epoch + 1, EPOCHS), end="")
	sys.stdout.flush()
	epochStart = time.time()

	# loop over the data in batch size increments
	for i in range(0, numUpdates):
		# determine starting and ending slice indexes for the current
		# batch
		start = i * BS
		end = start + BS

		# take a step
		step(trainX[start:end], trainY[start:end])

	# show timing information for the epoch
	epochEnd = time.time()
	elapsed = (epochEnd - epochStart) / 60.0
	print("took {:.4} minutes".format(elapsed))

Line 109 computes the number of batch updates we will conduct during each epoch.

From there, we begin looping over our training epochs on Line 112. Inside, we:

Print the epoch number and grab the epochStart timestamp (Lines 114-117)
Loop over our data in batch-sized increments (Line 120); inside, we use the step function to compute a forward and backward pass and then update the model weights
Display how long the training epoch took (Lines 130-132)

Finally, we’ll calculate the loss and accuracy on the testing set:

# in order to calculate accuracy using Keras' functions we first need
# to compile the model
model.compile(optimizer=opt, loss=categorical_crossentropy,
	metrics=["acc"])

# now that the model is compiled we can compute the accuracy
(loss, acc) = model.evaluate(testX, testY)
print("[INFO] test accuracy: {:.4f}".format(acc))

In order to use Keras’ evaluate helper function to evaluate the accuracy of the model on our testing set, we first need to compile our model (Lines 136 and 137).

Lines 140 and 141 then evaluate and print the accuracy of our model in our terminal.

At this point, we have both trained and evaluated a model with GradientTape. In the next section, we’ll put our script to work for us.

Training our Keras model with TensorFlow and GradientTape

To see our GradientTape custom training loop in action, make sure you use the “Downloads” section of this tutorial to download the source code.

From there, open up a terminal and execute the following command:

$ time python gradient_tape_example.py
[INFO] loading MNIST dataset...
[INFO] creating model...
[INFO] starting epoch 1/25...took 1.039 minutes
[INFO] starting epoch 2/25...took 1.039 minutes
[INFO] starting epoch 3/25...took 1.023 minutes
[INFO] starting epoch 4/25...took 1.031 minutes
[INFO] starting epoch 5/25...took 0.9819 minutes
[INFO] starting epoch 6/25...took 0.9909 minutes
[INFO] starting epoch 7/25...took 1.029 minutes
[INFO] starting epoch 8/25...took 1.035 minutes
[INFO] starting epoch 9/25...took 1.039 minutes
[INFO] starting epoch 10/25...took 1.019 minutes
[INFO] starting epoch 11/25...took 1.029 minutes
[INFO] starting epoch 12/25...took 1.023 minutes
[INFO] starting epoch 13/25...took 1.027 minutes
[INFO] starting epoch 14/25...took 0.9743 minutes
[INFO] starting epoch 15/25...took 0.9678 minutes
[INFO] starting epoch 16/25...took 0.9633 minutes
[INFO] starting epoch 17/25...took 0.964 minutes
[INFO] starting epoch 18/25...took 0.9634 minutes
[INFO] starting epoch 19/25...took 0.9638 minutes
[INFO] starting epoch 20/25...took 0.964 minutes
[INFO] starting epoch 21/25...took 0.9638 minutes
[INFO] starting epoch 22/25...took 0.9636 minutes
[INFO] starting epoch 23/25...took 0.9631 minutes
[INFO] starting epoch 24/25...took 0.9629 minutes
[INFO] starting epoch 25/25...took 0.9633 minutes
10000/10000 [==============================] - 1s 141us/sample - loss: 0.0441 - acc: 0.9927
[INFO] test accuracy: 0.9927

real	24m57.643s
user	72m57.355s
sys	115m42.568s

Our model obtains 99.27% accuracy on our testing set after we trained it using our GradientTape custom training procedure.
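Two simple refinements are worth mentioning here (both are my own suggestions and are not used in the script above). First, shuffling the training data at the start of each epoch prevents the model from seeing the batches in the same order every time; second, decorating step with the @tf.function decorator compiles the forward/backward pass into a static TensorFlow graph, which typically shortens those per-epoch times considerably. A sketch of both, expressed as changes to the script above:

# refinement #1: shuffle the data at the top of each epoch
# (this snippet would go inside the `for epoch in ...` loop)
idxs = np.random.permutation(trainX.shape[0])
trainX = trainX[idxs]
trainY = trainY[idxs]

# refinement #2: compile the step function into a graph
@tf.function
def step(X, y):
	with tf.GradientTape() as tape:
		pred = model(X)
		loss = categorical_crossentropy(y, pred)

	grads = tape.gradient(loss, model.trainable_variables)
	opt.apply_gradients(zip(grads, model.trainable_variables))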
As I mentioned earlier in this tutorial, this guide is meant to be a gentle introduction to using GradientTape for custom training loops.

At a bare minimum, you need to define the four components of a training procedure: the model architecture, the loss function, the optimizer, and the step function. Each of these components can be incredibly simple or extremely complex, but each of them must be present.

In future tutorials, I’ll cover more advanced use cases of GradientTape, but in the meantime, if you’re interested in learning more about the GradientTape method, I would suggest you refer to the official TensorFlow documentation as well as this excellent article by Sebastian Theiler.
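To make that four-component checklist concrete, here is a bare-bones template (a sketch with a tiny stand-in model, not the CNN from this tutorial):

import tensorflow as tf

# 1. model architecture (a tiny stand-in classifier)
model = tf.keras.Sequential([
	tf.keras.layers.Dense(10, activation="softmax")
])

# 2. loss function
lossFn = tf.keras.losses.CategoricalCrossentropy()

# 3. optimizer
opt = tf.keras.optimizers.Adam()

# 4. step function: forward pass, loss, gradients, weight update
def step(X, y):
	with tf.GradientTape() as tape:
		loss = lossFn(y, model(X))

	grads = tape.gradient(loss, model.trainable_variables)
	opt.apply_gradients(zip(grads, model.trainable_variables))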
Summary

In this tutorial, you learned how to use TensorFlow’s GradientTape function, a brand-new method in TensorFlow 2.0, to implement a custom training loop.
We then used our custom training loop to train a Keras model.

Using GradientTape gives us the best of both worlds:

We can implement our own custom training procedures
And we can still enjoy the easy-to-use Keras API

This tutorial covered a basic custom training loop — future tutorials will explore more advanced use cases.

To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), just enter your email address in the form below!
https://pyimagesearch.com/2020/03/30/autoencoders-for-content-based-image-retrieval-with-keras-and-tensorflow/
In this tutorial, you will learn how to use convolutional autoencoders to create a Content-based Image Retrieval system (i.e., image search engine) using Keras and TensorFlow.

A few weeks ago, I authored a series of tutorials on autoencoders:

Part 1: Intro to autoencoders
Part 2: Denoising autoencoders
Part 3: Anomaly detection with autoencoders

The tutorials were a big hit; however, one topic I did not touch on was Content-based Image Retrieval (CBIR), which is really just a fancy academic term for image search engines.

Image search engines are similar to text search engines, only instead of presenting the search engine with a text query, you provide an image query. The image search engine then finds all visually similar/relevant images in its database and returns them to you (just as a text search engine would return links to articles, blog posts, etc.).

Deep learning-based CBIR and image retrieval can be framed as a form of unsupervised learning:

When training the autoencoder, we do not use any class labels
The autoencoder is then used to compute the latent-space vector representation for each image in our dataset (i.e., our “feature vector” for a given image)
Then, at search time, we compute the distance between the latent-space vectors — the smaller the distance, the more relevant/visually similar two images are

We can thus break up the CBIR project into three distinct phases:

Phase #1: Train the autoencoder
Phase #2: Extract features from all images in our dataset by computing their latent-space representations using the autoencoder
Phase #3: Compare latent-space vectors to find all relevant images in the dataset

I’ll show you how to implement each of these phases in this tutorial, leaving you with a fully functioning autoencoder and image retrieval system.

To learn how to use autoencoders for image retrieval with Keras and TensorFlow, just keep reading!

Autoencoders for Content-based Image Retrieval with Keras and TensorFlow

In the first part of this tutorial, we’ll discuss how autoencoders can be used for image retrieval and building image search engines.

From there, we’ll implement a convolutional autoencoder that we’ll then train on our image dataset.

Once the autoencoder is trained, we’ll compute feature vectors for each image in our dataset. Computing the feature vector for a given image requires only a forward pass of the image through the network — the output of the encoder (i.e., the latent-space representation) will serve as our feature vector.
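To make that forward pass concrete, here is a short sketch of the feature extraction step (this mirrors what the index_images.py script does; the layer name “encoded” refers to the name we will assign in the implementation later in this tutorial):

from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Model, load_model
import numpy as np

# load the trained autoencoder from disk, then build a second model
# that maps an input image directly to its latent-space vector
autoencoder = load_model("output/autoencoder.h5")
encoder = Model(inputs=autoencoder.input,
	outputs=autoencoder.get_layer("encoded").output)

# preprocess the images just as we do during training
((trainX, _), (_, _)) = mnist.load_data()
trainX = np.expand_dims(trainX, axis=-1).astype("float32") / 255.0

# each image is now quantified by a 16-d feature vector
features = encoder.predict(trainX)
print(features.shape)  # (60000, 16)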
After all images are encoded, we can then compare vectors by computing the distance between them. Images with a smaller distance will be more similar than images with a larger distance.

Finally, we will review the results of applying our autoencoder for image retrieval.

How can autoencoders be used for image retrieval and image search engines?

Figure 1: The process of using an autoencoder for an image search engine using Keras and TensorFlow. Top: We train an autoencoder on our input dataset in an unsupervised fashion. Bottom: We use the autoencoder to extract and store features in an index and then search the index with a query image’s feature vector, finding the most similar images via a distance metric.

As discussed in my intro to autoencoders tutorial, autoencoders:

Accept an input set of data (i.e., the input)
Internally compress the input data into a latent-space representation (i.e., a single vector that compresses and quantifies the input)
Reconstruct the input data from this latent representation (i.e., the output)

To build an image retrieval system with an autoencoder, what we really care about is that latent-space representation vector. Once an autoencoder has been trained to encode images, we can:

Use the encoder portion of the network to compute the latent-space representation of each image in our dataset — this representation serves as our feature vector that quantifies the contents of an image
Compare the feature vector from our query image to all feature vectors in our dataset (typically you would use either the Euclidean or cosine distance)

Feature vectors that have a smaller distance will be considered more similar, while images with a larger distance will be deemed less similar.

We can then sort our results based on the distance (from smallest to largest) and finally display the image retrieval results to the end user.
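A simplified sketch of that comparison-and-sort step using the Euclidean distance (the function and variable names here are illustrative, not taken verbatim from the search.py script):

import numpy as np

def perform_search(queryFeatures, index, maxResults=10):
	# initialize our list of (distance, image index) results
	results = []

	# loop over every feature vector in the index
	for (i, features) in enumerate(index):
		# compute the Euclidean distance between the query features
		# and the current feature vector, then store the result
		d = np.linalg.norm(queryFeatures - features)
		results.append((d, i))

	# sort by distance (smallest, i.e., most similar, first) and
	# keep only the top results
	results = sorted(results)[:maxResults]
	return results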
Project structure

Go ahead and grab this tutorial’s files from the “Downloads” section. From there, extract the .zip, and open the folder for inspection:

$ tree --dirsfirst
.
├── output
│   ├── autoencoder.h5
│   ├── index.pickle
│   ├── plot.png
│   └── recon_vis.png
├── pyimagesearch
│   ├── __init__.py
│   └── convautoencoder.py
├── index_images.py
├── search.py
└── train_autoencoder.py

2 directories, 9 files

This tutorial consists of three Python driver scripts:

train_autoencoder.py: Trains an autoencoder on the MNIST handwritten digits dataset using the ConvAutoencoder CNN/class
index_images.py: Using the encoder portion of our trained autoencoder, we’ll compute feature vectors for each image in the dataset and add the features to a searchable index
search.py: Queries our index for similar images using a similarity metric

Our output/ directory contains our trained autoencoder and index. Training also results in a training history plot and visualization image that can be exported to the output/ folder.

Implementing our convolutional autoencoder architecture for image retrieval

Before we can train our autoencoder, we must first implement the architecture itself. To do so, we’ll be using Keras and TensorFlow.

We’ve already implemented convolutional autoencoders a handful of times before on the PyImageSearch blog, so while I’ll be covering the complete implementation here today, you’ll want to refer to my intro to autoencoders tutorial for more details.

Open up the convautoencoder.py file in the pyimagesearch module, and let’s get to work:

# import the necessary packages
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import Conv2DTranspose
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Reshape
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras import backend as K
import numpy as np

Imports include a selection from tf.keras as well as NumPy.

We’ll go ahead and define our autoencoder class next:

class ConvAutoencoder:
	@staticmethod
	def build(width, height, depth, filters=(32, 64), latentDim=16):
		# initialize the input shape to be "channels last" along with
		# the channels dimension itself
		inputShape = (height, width, depth)
		chanDim = -1

		# define the input to the encoder
		inputs = Input(shape=inputShape)
		x = inputs

		# loop over the number of filters
		for f in filters:
			# apply a CONV => RELU => BN operation
			x = Conv2D(f, (3, 3), strides=2, padding="same")(x)
			x = LeakyReLU(alpha=0.2)(x)
			x = BatchNormalization(axis=chanDim)(x)

		# flatten the network and then construct our latent vector
		volumeSize = K.int_shape(x)
		x = Flatten()(x)
		latent = Dense(latentDim, name="encoded")(x)

Our ConvAutoencoder class contains one static method, build, which accepts five parameters: (1) width, (2) height, (3) depth, (4) filters, and (5) latentDim.

The Input is then defined for the encoder, at which point we use Keras’ functional API to loop over our filters and add our sets of CONV => LeakyReLU => BN layers (Lines 21-33).
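For the default parameters, it is worth checking the compression arithmetic (my own sanity check, assuming the default filters=(32, 64) and latentDim=16 with 28×28×1 MNIST inputs):

# each stride-2 convolution halves the spatial dimensions with
# "same" padding: 28 -> 14 -> 7
dim = 28
for f in (32, 64):
	dim //= 2

# the volume entering Flatten is 7 x 7 x 64 = 3136 values, which
# the Dense "encoded" layer then compresses down to just 16
print(dim * dim * 64)  # 3136

In other words, the 784 pixel values of each input digit are ultimately distilled into a 16-d latent vector, which is precisely what makes that vector useful as a compact image descriptor.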
We then flatten the network and construct our latent vector (Lines 36-38). The latent-space representation is the compressed form of our data — once trained, the output of this layer will be our feature vector used to quantify and represent the contents of the input image.

From here, we will construct the decoder portion of the network, which accepts the encoder’s output as its input:

		# start building the decoder model which will accept the
		# output of the encoder as its inputs
		x = Dense(np.prod(volumeSize[1:]))(latent)
		x = Reshape((volumeSize[1], volumeSize[2],
			volumeSize[3]))(x)

		# loop over our number of filters again, but this time in
		# reverse order
		for f in filters[::-1]:
			# apply a CONV_TRANSPOSE => RELU => BN operation
			x = Conv2DTranspose(f, (3, 3), strides=2,
				padding="same")(x)
			x = LeakyReLU(alpha=0.2)(x)
			x = BatchNormalization(axis=chanDim)(x)

		# apply a single CONV_TRANSPOSE layer used to recover the
		# original depth of the image
		x = Conv2DTranspose(depth, (3, 3), padding="same")(x)
		outputs = Activation("sigmoid", name="decoded")(x)

		# construct our autoencoder model
		autoencoder = Model(inputs, outputs, name="autoencoder")

		# return the autoencoder model
		return autoencoder

The decoder model accepts the output of the encoder as its inputs (Lines 42 and 43).

Looping over filters in reverse order, we construct CONV_TRANSPOSE => LeakyReLU => BN layer blocks (Lines 47-52). A single CONV_TRANSPOSE layer followed by a sigmoid activation then recovers the original depth of the image (Lines 56 and 57).

We wrap up by constructing and returning our autoencoder model (Lines 60-63).

For more details on our implementation, be sure to refer to our intro to autoencoders with Keras and TensorFlow tutorial.

Creating the autoencoder training script using Keras and TensorFlow

With our autoencoder implemented, let’s move on to the training script (Phase #1).

Open the train_autoencoder.py script, and insert the following code:

# set the matplotlib backend so figures can be saved in the background
import matplotlib
matplotlib.use("Agg")

# import the necessary packages
from pyimagesearch.convautoencoder import ConvAutoencoder
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt
import numpy as np
import argparse
import cv2

On Lines 2-12, we handle our imports. We’ll use the "Agg" backend of matplotlib so that we can export our training plot to disk.
We need our custom ConvAutoencoder architecture class from the previous section. We will take advantage of the Adam optimizer as we train on the MNIST benchmarking dataset. For visualization, we’ll employ OpenCV in the visualize_predictions helper function:

def visualize_predictions(decoded, gt, samples=10):
	# initialize our list of output images
	outputs = None

	# loop over our number of output samples
	for i in range(0, samples):
		# grab the original image and reconstructed image
		original = (gt[i] * 255).astype("uint8")
		recon = (decoded[i] * 255).astype("uint8")

		# stack the original and reconstructed image side-by-side
		output = np.hstack([original, recon])

		# if the outputs array is empty, initialize it as the current
		# side-by-side image display
		if outputs is None:
			outputs = output

		# otherwise, vertically stack the outputs
		else:
			outputs = np.vstack([outputs, output])

	# return the output images
	return outputs

Inside the visualize_predictions helper, we compare our original ground-truth input images (gt) to the output reconstructed images from the autoencoder (decoded) and generate a side-by-side comparison montage.

Line 16 initializes our list of output images. We then loop over the samples:

Grabbing both the original and reconstructed images (Lines 21 and 22)
Stacking the pair of images side-by-side (Line 25)
Stacking the pairs vertically (Lines 29-34)

Finally, we return the visualization image to the caller (Line 37).

We’ll need a few command line arguments for our script to run from our terminal/command line:

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-m", "--model", type=str, required=True,
	help="path to output trained autoencoder")
ap.add_argument("-v", "--vis", type=str, default="recon_vis.png",
	help="path to output reconstruction visualization file")
ap.add_argument("-p", "--plot", type=str, default="plot.png",
	help="path to output plot file")
args = vars(ap.parse_args())

Here we parse three command line arguments:

--model: Points to the path of our trained output autoencoder — the result of executing this script
--vis: The path to the output visualization image. We’ll name our visualization recon_vis.png by default
--plot: The path to our matplotlib output plot. A default of plot.png is assigned if this argument is not provided in the terminal

Now that our imports, helper function, and command line arguments are ready, we’ll prepare to train our autoencoder:

# initialize the number of epochs to train for, initial learning rate,
# and batch size
EPOCHS = 20
INIT_LR = 1e-3
BS = 32

# load the MNIST dataset
print("[INFO] loading MNIST dataset...")
((trainX, _), (testX, _)) = mnist.load_data()

# add a channel dimension to every image in the dataset, then scale
# the pixel intensities to the range [0, 1]
trainX = np.expand_dims(trainX, axis=-1)
testX = np.expand_dims(testX, axis=-1)
trainX = trainX.astype("float32") / 255.0
testX = testX.astype("float32") / 255.0

# construct our convolutional autoencoder
print("[INFO] building autoencoder...")
autoencoder = ConvAutoencoder.build(28, 28, 1)
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
autoencoder.compile(loss="mse", optimizer=opt)

# train the convolutional autoencoder
H = autoencoder.fit(
	trainX, trainX,
	validation_data=(testX, testX),
	epochs=EPOCHS,
	batch_size=BS)

Hyperparameter constants, including the number of training epochs, learning rate, and batch size, are defined on Lines 51-53.
Our autoencoder (and therefore our CBIR system) will be trained on the MNIST handwritten digits dataset, which we load from disk on Line 57.
To preprocess MNIST images, we add a channel dimension to the training/testing sets (Lines 61 and 62) and scale pixel intensities to the range [0, 1] (Lines 63 and 64).

With our data ready to go, Lines 68-70 compile our autoencoder with the Adam optimizer and mean-squared error loss.

Lines 73-77 then fit our model to the data (i.e., train our autoencoder).

Once the model is trained, we’ll make predictions with it:

# use the convolutional autoencoder to make predictions on the
# testing images, construct the visualization, and then save it
# to disk
print("[INFO] making predictions...")
decoded = autoencoder.predict(testX)
vis = visualize_predictions(decoded, testX)
cv2.imwrite(args["vis"], vis)

# construct a plot that plots and saves the training history
N = np.arange(0, EPOCHS)
plt.style.use("ggplot")
plt.figure()
plt.plot(N, H.history["loss"], label="train_loss")
plt.plot(N, H.history["val_loss"], label="val_loss")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
plt.savefig(args["plot"])

# serialize the autoencoder model to disk
print("[INFO] saving autoencoder...")
autoencoder.save(args["model"], save_format="h5")

Lines 83 and 84 make predictions on the testing set and generate our autoencoder visualization using our helper function. Line 85 writes the visualization to disk using OpenCV.

Finally, we plot the training history (Lines 88-97) and serialize our autoencoder to disk (Line 101).

In the next section, we’ll put the training script to work.

Training the autoencoder

We are now ready to train our convolutional autoencoder for image retrieval.

Make sure you use the “Downloads” section of this tutorial to download the source code, and from there, execute the following command to start the training process:

$ python train_autoencoder.py --model output/autoencoder.h5 \
	--vis output/recon_vis.png --plot output/plot.png
[INFO] loading MNIST dataset...
[INFO] building autoencoder...
Train on 60000 samples, validate on 10000 samples
Epoch 1/20
60000/60000 [==============================] - 73s 1ms/sample - loss: 0.0182 - val_loss: 0.0124
Epoch 2/20
60000/60000 [==============================] - 73s 1ms/sample - loss: 0.0101 - val_loss: 0.0092
Epoch 3/20
60000/60000 [==============================] - 73s 1ms/sample - loss: 0.0090 - val_loss: 0.0084
...
Epoch 18/20
60000/60000 [==============================] - 72s 1ms/sample - loss: 0.0065 - val_loss: 0.0067
Epoch 19/20
60000/60000 [==============================] - 73s 1ms/sample - loss: 0.0065 - val_loss: 0.0067
Epoch 20/20
60000/60000 [==============================] - 73s 1ms/sample - loss: 0.0064 - val_loss: 0.0067
[INFO] making predictions...
[INFO] saving autoencoder...

On my 3GHz Intel Xeon W processor, the entire training process took ~24 minutes.

Looking at the plot in Figure 2, we can see that the training process was stable with no signs of overfitting:

Figure 2: Training an autoencoder with Keras and TensorFlow for Content-based Image Retrieval (CBIR).