https://pyimagesearch.com/2019/10/07/is-rectified-adam-actually-better-than-adam/
Using the Fashion MNIST dataset with Adam shows signs of overfitting past epoch 30. RAdam appears more stable in this experiment. We now evaluate GoogLeNet trained on Fashion MNIST using Adam and Rectified Adam. Below is the classification report from the Adam optimizer: precision recall f1-score support top 0.84 0.89 0.86 1000 trouser 1.00 0.99 0.99 1000 pullover 0.87 0.94 0.90 1000 dress 0.95 0.88 0.91 1000 coat 0.95 0.85 0.89 1000 sandal 0.98 0.99 0.98 1000 shirt 0.79 0.82 0.81 1000 sneaker 0.99 0.90 0.94 1000 bag 0.98 0.99 0.99 1000 ankle boot 0.91 0.99 0.95 1000 micro avg 0.92 0.92 0.92 10000 macro avg 0.93 0.92 0.92 10000 weighted avg 0.93 0.92 0.92 10000 As well as the output from the Rectified Adam optimizer: precision recall f1-score support top 0.91 0.83 0.87 1000 trouser 0.99 0.99 0.99 1000 pullover 0.94 0.85 0.89 1000 dress 0.96 0.86 0.90 1000 coat 0.90 0.91 0.90 1000 sandal 0.98 0.98 0.98 1000 shirt 0.70 0.88 0.78 1000 sneaker 0.97 0.96 0.96 1000 bag 0.98 0.99 0.99 1000 ankle boot 0.97 0.97 0.97 1000 micro avg 0.92 0.92 0.92 10000 macro avg 0.93 0.92 0.92 10000 weighted avg 0.93 0.92 0.92 10000 This time both optimizers obtain 93% accuracy, but what’s more interesting is to take a look at the training history plot in Figure 11. Here we can see that training loss starts to diverge past epoch 30 for the Adam optimizer — this divergence grows wider and wider as we continue training. At this point, we should start to be concerned about overfitting using Adam. On the other hand, Rectified Adam’s performance is stable with no signs of overfitting. In this particular experiment, it’s clear that Rectified Adam is generalizing better, and had we wished to deploy this model to production, the Rectified Adam optimizer version would be the one to go with. Fashion MNIST – ResNet Figure 12: Which deep learning optimizer is better — Adam or Rectified Adam (RAdam) — using the ResNet CNN on the Fashion MNIST dataset? Our final experiment compares Adam vs. Rectified Adam optimizer trained on the Fashion MNIST dataset using ResNet.
Below is the output of the Adam optimizer: precision recall f1-score support top 0.89 0.83 0.86 1000 trouser 0.99 0.99 0.99 1000 pullover 0.84 0.93 0.88 1000 dress 0.94 0.83 0.88 1000 coat 0.93 0.85 0.89 1000 sandal 0.99 0.92 0.95 1000 shirt 0.71 0.85 0.78 1000 sneaker 0.88 0.99 0.93 1000 bag 1.00 0.98 0.99 1000 ankle boot 0.98 0.93 0.95 1000 micro avg 0.91 0.91 0.91 10000 macro avg 0.92 0.91 0.91 10000 weighted avg 0.92 0.91 0.91 10000 Here is the output of the Rectified Adam optimizer: precision recall f1-score support top 0.88 0.86 0.87 1000 trouser 0.99 0.99 0.99 1000 pullover 0.91 0.87 0.89 1000 dress 0.96 0.83 0.89 1000 coat 0.86 0.92 0.89 1000 sandal 0.98 0.98 0.98 1000 shirt 0.72 0.80 0.75 1000 sneaker 0.95 0.96 0.96 1000 bag 0.98 0.99 0.99 1000 ankle boot 0.97 0.96 0.96 1000 micro avg 0.92 0.92 0.92 10000 macro avg 0.92 0.92 0.92 10000 weighted avg 0.92 0.92 0.92 10000 Both models obtain 92% accuracy, but take a look at the training history plot in Figure 12. You can observe that Adam optimizer results in lower loss and that the validation loss follows the training curve. The Rectified Adam loss is arguably more stable with fewer fluctuations (as compared to standard Adam). Exactly which one is “better” in this experiment would be dependent on how well the model generalizes to images outside the training, validation, and testing set. Further experiments would be required to mark the winner here, but my gut tells me that it’s Rectified Adam as (1) accuracy on the testing set is identical, and (2) lower loss doesn’t necessarily mean better generalization (in some cases it means that the model may fail to generalize well) — but at the same time, training/validation loss are near identical for Adam. Without further experiments it’s hard to make the call. Adam vs. Rectified Adam Experiments with CIFAR-10 Figure 13: The CIFAR-10 benchmarking dataset has 10 classes. We will use it for Rectified Adam experimentation to evaluate if RAdam or Adam is the better choice (image source). In these experiments, we’ll be comparing Adam vs. Rectified Adam performance using MiniVGGNet, GoogLeNet, and ResNet, all trained on the CIFAR-10 dataset. CIFAR-10 – MiniVGGNet Figure 14: Is the RAdam or Adam deep learning optimizer better using the MiniVGGNet CNN on the CIFAR-10 dataset?
Our next experiment compares Adam to Rectified Adam by training MiniVGGNet on the CIFAR-10 dataset. Below is the output of training using the Adam optimizer: precision recall f1-score support airplane 0.90 0.79 0.84 1000 automobile 0.90 0.93 0.91 1000 bird 0.90 0.63 0.74 1000 cat 0.78 0.68 0.73 1000 deer 0.83 0.79 0.81 1000 dog 0.81 0.76 0.79 1000 frog 0.70 0.95 0.81 1000 horse 0.85 0.91 0.88 1000 ship 0.93 0.89 0.91 1000 truck 0.77 0.95 0.85 1000 micro avg 0.83 0.83 0.83 10000 macro avg 0.84 0.83 0.83 10000 weighted avg 0.84 0.83 0.83 10000 And here is the output from Rectified Adam: precision recall f1-score support airplane 0.84 0.72 0.78 1000 automobile 0.89 0.84 0.86 1000 bird 0.80 0.41 0.54 1000 cat 0.66 0.43 0.52 1000 deer 0.66 0.65 0.66 1000 dog 0.72 0.55 0.62 1000 frog 0.48 0.96 0.64 1000 horse 0.84 0.75 0.79 1000 ship 0.87 0.88 0.88 1000 truck 0.68 0.95 0.79 1000 micro avg 0.71 0.71 0.71 10000 macro avg 0.74 0.71 0.71 10000 weighted avg 0.74 0.71 0.71 10000 Here the Adam optimizer (84% accuracy) beats out Rectified Adam (74% accuracy). Furthermore, validation loss is lower than training loss for the majority of training, implying that we can “train harder” by reducing our regularization strength and potentially increasing model capacity. CIFAR-10 – GoogLeNet Figure 15: Which is a better deep learning optimizer with the GoogLeNet CNN? The training accuracy/loss plot shows results from using Adam and RAdam as part of automated deep learning experiment data collection. Next, let’s check out GoogLeNet trained on CIFAR-10 using Adam and Rectified Adam. Here is the output of Adam: precision recall f1-score support airplane 0.89 0.92 0.91 1000 automobile 0.92 0.97 0.94 1000 bird 0.90 0.87 0.88 1000 cat 0.79 0.86 0.82 1000 deer 0.92 0.85 0.89 1000 dog 0.92 0.81 0.86 1000 frog 0.87 0.96 0.91 1000 horse 0.95 0.91 0.93 1000 ship 0.96 0.92 0.94 1000 truck 0.90 0.94 0.92 1000 micro avg 0.90 0.90 0.90 10000 macro avg 0.90 0.90 0.90 10000 weighted avg 0.90 0.90 0.90 10000 And here is the output of Rectified Adam: precision recall f1-score support airplane 0.88 0.88 0.88 1000 automobile 0.93 0.95 0.94 1000 bird 0.84 0.82 0.83 1000 cat 0.79 0.75 0.77 1000 deer 0.89 0.82 0.85 1000 dog 0.89 0.77 0.82 1000 frog 0.80 0.96 0.87 1000 horse 0.89 0.92 0.91 1000 ship 0.95 0.92 0.93 1000 truck 0.88 0.95 0.91 1000 micro avg 0.87 0.87 0.87 10000 macro avg 0.87 0.87 0.87 10000 weighted avg 0.87 0.87 0.87 10000 The Adam optimizer obtains 90% accuracy, slightly beating out the 87% accuracy of Rectified Adam. However, Figure 15 tells an interesting story — past epoch 20 there is quite the divergence between Adam’s training and validation loss. While the Adam optimized model obtained higher accuracy, there are signs of overfitting as validation loss is essentially stagnant past epoch 30. Additional experiments would be required to mark a true winner but I imagine it would be Rectified Adam after some additional hyperparameter tuning.
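As an aside, the per-class tables shown throughout these experiments follow scikit-learn's classification report format. Below is a minimal sketch of how such a report can be generated after training; note that model, testX, testY, and labelNames are placeholder names for this illustration, not the exact variables from this post's training script:

# Hypothetical evaluation snippet: produce a per-class precision/recall/f1
# table like the ones shown above (variable names are placeholders and the
# labels are assumed to be one-hot encoded).
from sklearn.metrics import classification_report

predictions = model.predict(testX, batch_size=64)
print(classification_report(testY.argmax(axis=1),
    predictions.argmax(axis=1), target_names=labelNames))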
CIFAR-10 – ResNet Figure 16: This Keras deep learning tutorial helps to answer the question: Is Rectified Adam or Adam the better deep learning optimizer? One of the 24 experiments uses the ResNet CNN and CIFAR-10 dataset. Next, let’s check out ResNet trained using Adam and Rectified Adam on CIFAR-10. Below you can find the output of the standard Adam optimizer: precision recall f1-score support airplane 0.80 0.92 0.86 1000 automobile 0.92 0.96 0.94 1000 bird 0.93 0.74 0.82 1000 cat 0.93 0.63 0.75 1000 deer 0.95 0.80 0.87 1000 dog 0.77 0.88 0.82 1000 frog 0.75 0.97 0.84 1000 horse 0.90 0.92 0.91 1000 ship 0.93 0.93 0.93 1000 truck 0.91 0.93 0.92 1000 micro avg 0.87 0.87 0.87 10000 macro avg 0.88 0.87 0.87 10000 weighted avg 0.88 0.87 0.87 10000 As well as the output from Rectified Adam: precision recall f1-score support airplane 0.86 0.86 0.86 1000 automobile 0.89 0.95 0.92 1000 bird 0.85 0.72 0.78 1000 cat 0.78 0.66 0.71 1000 deer 0.83 0.81 0.82 1000 dog 0.82 0.70 0.76 1000 frog 0.72 0.95 0.82 1000 horse 0.86 0.90 0.87 1000 ship 0.94 0.90 0.92 1000 truck 0.84 0.93 0.88 1000 micro avg 0.84 0.84 0.84 10000 macro avg 0.84 0.84 0.83 10000 weighted avg 0.84 0.84 0.83 10000 Adam is the winner here, obtaining 88% accuracy versus Rectified Adam’s 84%. Adam vs. Rectified Adam Experiments with CIFAR-100 Figure 17: The CIFAR-100 classification dataset is the brother of CIFAR-10 and includes more classes of images. ( image source) The CIFAR-100 dataset is the bigger brother of the CIFAR-10 dataset. As the name suggests, CIFAR-100 includes 100 class labels versus the 10 class labels of CIFAR-10. While there are more class labels in CIFAR-100, there are actually fewer images per class (CIFAR-10 has 6,000 images per class while CIFAR-100 only has 600 images per class). CIFAR-100 is, therefore, a more challenging dataset than CIFAR-10. In this section, we’ll investigate Adam vs. Rectified Adam’s performance on the CIFAR-100 dataset.
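For reference, both CIFAR datasets ship with Keras/tf.keras, so loading CIFAR-100 and confirming the split described above only takes a couple of lines. A quick sketch using the standalone keras package (swap in tensorflow.keras if that is what you have installed):

from keras.datasets import cifar100

# load the 100 fine-grained class labels; the data downloads on first use
((trainX, trainY), (testX, testY)) = cifar100.load_data(label_mode="fine")
print(trainX.shape, testX.shape)   # (50000, 32, 32, 3) (10000, 32, 32, 3)
print(len(set(trainY.flatten())))  # 100 classes (500 train + 100 test images each)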
CIFAR-100 – MiniVGGNet Figure 18: Will RAdam stand up to Adam as a preferable deep learning optimizer? How does Rectified Adam stack up to SGD? In this experiment (one of 24), we train MiniVGGNet on the CIFAR-100 dataset and analyze the results. Let’s apply Adam and Rectified Adam to the MiniVGGNet architecture trained on CIFAR-100. Below is the output from the Adam optimizer: precision recall f1-score support apple 0.94 0.76 0.84 100 aquarium_fish 0.69 0.66 0.67 100 baby 0.56 0.45 0.50 100 bear 0.45 0.22 0.30 100 beaver 0.31 0.14 0.19 100 bed 0.48 0.59 0.53 100 bee 0.60 0.69 0.64 100 beetle 0.51 0.49 0.50 100 bicycle 0.50 0.65 0.57 100 bottle 0.74 0.63 0.68 100 bowl 0.51 0.38 0.44 100 boy 0.45 0.37 0.41 100 bridge 0.64 0.68 0.66 100 bus 0.42 0.57 0.49 100 butterfly 0.52 0.50 0.51 100 camel 0.61 0.33 0.43 100 can 0.44 0.68 0.54 100 castle 0.74 0.71 0.72 100 caterpillar 0.78 0.40 0.53 100 cattle 0.58 0.48 0.52 100 chair 0.72 0.80 0.76 100 chimpanzee 0.74 0.64 0.68 100 clock 0.39 0.62 0.48 100 cloud 0.88 0.46 0.61 100 cockroach 0.80 0.66 0.73 100 couch 0.56 0.27 0.36 100 crab 0.43 0.52 0.47 100 crocodile 0.34 0.32 0.33 100 cup 0.74 0.73 0.73 100 ..."d" - "t" classes omitted for brevity wardrobe 0.67 0.87 0.76 100 whale 0.67 0.58 0.62 100 willow_tree 0.52 0.44 0.48 100 wolf 0.40 0.48 0.44 100 woman 0.39 0.19 0.26 100 worm 0.66 0.56 0.61 100 micro avg 0.53 0.53 0.53 10000 macro avg 0.58 0.53 0.53 10000 weighted avg 0.58 0.53 0.53 10000 And here is the output from Rectified Adam: precision recall f1-score support apple 0.82 0.70 0.76 100 aquarium_fish 0.57 0.46 0.51 100 baby 0.55 0.26 0.35 100 bear 0.22 0.11 0.15 100 beaver 0.17 0.18 0.17 100 bed 0.47 0.37 0.42 100 bee 0.49 0.47 0.48 100 beetle 0.32 0.52 0.39 100 bicycle 0.36 0.64 0.46 100 bottle 0.74 0.40 0.52 100 bowl 0.47 0.29 0.36 100 boy 0.54 0.26 0.35 100 bridge 0.38 0.43 0.40 100 bus 0.34 0.35 0.34 100 butterfly 0.40 0.34 0.37 100 camel 0.37 0.19 0.25 100 can 0.57 0.45 0.50 100 castle 0.50 0.57 0.53 100 caterpillar 0.50 0.21 0.30 100 cattle 0.47 0.35 0.40 100 chair 0.54 0.72 0.62 100 chimpanzee 0.59 0.47 0.53 100 clock 0.29 0.37 0.33 100 cloud 0.77 0.60 0.67 100 cockroach 0.57 0.64 0.60 100 couch 0.42 0.18 0.25 100 crab 0.25 0.50 0.33 100 crocodile 0.30 0.28 0.29 100 cup 0.71 0.60 0.65 100 ..."d" - "t" classes omitted for brevity wardrobe 0.61 0.82 0.70 100 whale 0.57 0.39 0.46 100 willow_tree 0.36 0.27 0.31 100 wolf 0.32 0.39 0.35 100 woman 0.35 0.09 0.14 100 worm 0.62 0.32 0.42 100 micro avg 0.41 0.41 0.41 10000 macro avg 0.46 0.41 0.41 10000 weighted avg 0.46 0.41 0.41 10000 The Adam optimizer is the clear winner (58% accuracy) over Rectified Adam (46% accuracy). And just like in our CIFAR-10 experiments, we can likely improve our model performance further by relaxing regularization and increasing model capacity. CIFAR-100 – GoogLeNet Figure 19: Adam vs. RAdam optimizer on the CIFAR-100 dataset using GoogLeNet. Let’s now perform the same experiment, only this time use GoogLeNet. 
Here’s the output from the Adam optimizer: precision recall f1-score support apple 0.95 0.80 0.87 100 aquarium_fish 0.88 0.66 0.75 100 baby 0.59 0.39 0.47 100 bear 0.47 0.28 0.35 100 beaver 0.20 0.53 0.29 100 bed 0.79 0.56 0.65 100 bee 0.78 0.69 0.73 100 beetle 0.56 0.58 0.57 100 bicycle 0.91 0.63 0.75 100 bottle 0.80 0.71 0.75 100 bowl 0.46 0.37 0.41 100 boy 0.49 0.47 0.48 100 bridge 0.80 0.61 0.69 100 bus 0.62 0.60 0.61 100 butterfly 0.34 0.64 0.44 100 camel 0.93 0.37 0.53 100 can 0.42 0.69 0.52 100 castle 0.94 0.50 0.65 100 caterpillar 0.28 0.77 0.41 100 cattle 0.56 0.55 0.55 100 chair 0.85 0.77 0.81 100 chimpanzee 0.95 0.58 0.72 100 clock 0.56 0.62 0.59 100 cloud 0.88 0.68 0.77 100 cockroach 0.82 0.74 0.78 100 couch 0.66 0.40 0.50 100 crab 0.40 0.72 0.52 100 crocodile 0.36 0.47 0.41 100 cup 0.65 0.68 0.66 100 ..."d" - "t" classes omitted for brevity wardrobe 0.86 0.82 0.84 100 whale 0.40 0.80 0.53 100 willow_tree 0.46 0.62 0.53 100 wolf 0.86 0.37 0.52 100 woman 0.56 0.31 0.40 100 worm 0.79 0.57 0.66 100 micro avg 0.56 0.56 0.56 10000 macro avg 0.66 0.56 0.57 10000 weighted avg 0.66 0.56 0.57 10000 And here is the output from Rectified Adam: precision recall f1-score support apple 0.93 0.76 0.84 100 aquarium_fish 0.72 0.77 0.74 100 baby 0.53 0.54 0.53 100 bear 0.47 0.26 0.34 100 beaver 0.26 0.22 0.24 100 bed 0.53 0.49 0.51 100 bee 0.52 0.62 0.56 100 beetle 0.50 0.55 0.52 100 bicycle 0.67 0.79 0.72 100 bottle 0.78 0.62 0.69 100 bowl 0.41 0.42 0.41 100 boy 0.45 0.41 0.43 100 bridge 0.59 0.72 0.65 100 bus 0.45 0.53 0.49 100 butterfly 0.27 0.58 0.37 100 camel 0.56 0.50 0.53 100 can 0.58 0.68 0.63 100 castle 0.81 0.73 0.77 100 caterpillar 0.51 0.52 0.51 100 cattle 0.56 0.59 0.58 100 chair 0.68 0.76 0.72 100 chimpanzee 0.83 0.73 0.78 100 clock 0.46 0.56 0.50 100 cloud 0.88 0.69 0.78 100 cockroach 0.79 0.68 0.73 100 couch 0.44 0.39 0.41 100 crab 0.46 0.47 0.46 100 crocodile 0.40 0.40 0.40 100 cup 0.76 0.62 0.68 100 ..."d" - "t" classes omitted for brevity wardrobe 0.76 0.87 0.81 100 whale 0.56 0.61 0.59 100 willow_tree 0.65 0.30 0.41 100 wolf 0.61 0.55 0.58 100 woman 0.39 0.30 0.34 100 worm 0.62 0.61 0.62 100 micro avg 0.57 0.57 0.57 10000 macro avg 0.59 0.57 0.57 10000 weighted avg 0.59 0.57 0.57 10000 The Adam optimizer obtains 66% accuracy, better than Rectified Adam’s 59%. However, looking at Figure 19 we can see that the validation loss from Adam is quite unstable — towards the end of training validation loss even starts to increase, a sign of overfitting.
CIFAR-100 – ResNet Figure 20: Training a ResNet model on the CIFAR-100 dataset using both RAdam and Adam for comparison. Which deep learning optimizer is actually better for this experiment? Below we can find the output of training ResNet using Adam on the CIFAR-100 dataset: precision recall f1-score support apple 0.80 0.89 0.84 100 aquarium_fish 0.86 0.75 0.80 100 baby 0.75 0.40 0.52 100 bear 0.71 0.29 0.41 100 beaver 0.40 0.40 0.40 100 bed 0.91 0.59 0.72 100 bee 0.71 0.76 0.73 100 beetle 0.82 0.42 0.56 100 bicycle 0.54 0.89 0.67 100 bottle 0.93 0.62 0.74 100 bowl 0.75 0.36 0.49 100 boy 0.43 0.49 0.46 100 bridge 0.54 0.78 0.64 100 bus 0.68 0.48 0.56 100 butterfly 0.34 0.71 0.46 100 camel 0.72 0.68 0.70 100 can 0.69 0.60 0.64 100 castle 0.96 0.69 0.80 100 caterpillar 0.57 0.62 0.60 100 cattle 0.91 0.51 0.65 100 chair 0.79 0.82 0.80 100 chimpanzee 0.80 0.79 0.79 100 clock 0.41 0.86 0.55 100 cloud 0.89 0.74 0.81 100 cockroach 0.85 0.78 0.81 100 couch 0.73 0.44 0.55 100 crab 0.42 0.70 0.53 100 crocodile 0.47 0.55 0.51 100 cup 0.88 0.75 0.81 100 ..."d" - "t" classes omitted for brevity wardrobe 0.79 0.85 0.82 100 whale 0.58 0.75 0.65 100 willow_tree 0.71 0.37 0.49 100 wolf 0.79 0.64 0.71 100 woman 0.42 0.49 0.45 100 worm 0.48 0.80 0.60 100 micro avg 0.63 0.63 0.63 10000 macro avg 0.68 0.63 0.63 10000 weighted avg 0.68 0.63 0.63 10000 And here is the output of Rectified Adam: precision recall f1-score support apple 0.86 0.72 0.78 100 aquarium_fish 0.56 0.62 0.59 100 baby 0.49 0.43 0.46 100 bear 0.36 0.20 0.26 100 beaver 0.27 0.17 0.21 100 bed 0.45 0.42 0.43 100 bee 0.54 0.61 0.57 100 beetle 0.47 0.55 0.51 100 bicycle 0.45 0.69 0.54 100 bottle 0.64 0.54 0.59 100 bowl 0.39 0.31 0.35 100 boy 0.43 0.35 0.38 100 bridge 0.52 0.67 0.59 100 bus 0.34 0.47 0.40 100 butterfly 0.33 0.39 0.36 100 camel 0.47 0.37 0.41 100 can 0.49 0.55 0.52 100 castle 0.76 0.67 0.71 100 caterpillar 0.43 0.43 0.43 100 cattle 0.56 0.45 0.50 100 chair 0.63 0.78 0.70 100 chimpanzee 0.70 0.71 0.71 100 clock 0.38 0.49 0.43 100 cloud 0.80 0.61 0.69 100 cockroach 0.73 0.72 0.73 100 couch 0.49 0.36 0.42 100 crab 0.27 0.45 0.34 100 crocodile 0.32 0.26 0.29 100 cup 0.63 0.49 0.55 100 ..."d" - "t" classes omitted for brevity wardrobe 0.68 0.84 0.75 100 whale 0.53 0.54 0.54 100 willow_tree 0.60 0.29 0.39 100 wolf 0.38 0.35 0.36 100 woman 0.33 0.29 0.31 100 worm 0.59 0.63 0.61 100 micro avg 0.50 0.50 0.50 10000 macro avg 0.51 0.50 0.49 10000 weighted avg 0.51 0.50 0.49 10000 The Adam optimizer (68% accuracy) crushes Rectified Adam (51% accuracy) here, but we need to be careful of overfitting. As Figure 20 shows there is quite the divergence between training and validation loss when using the Adam optimizer. But on the other hand, Rectified Adam really stagnates past epoch 20. I would be inclined to go with the Adam optimized model here as it obtains significantly higher accuracy; however, I would suggest running some generalization tests using both the Adam and Rectified Adam versions of the model. What can we take away from these experiments? One of the first takeaways comes from looking at the training plots of the experiments — using the Rectified Adam optimizer can lead to more stable training. When training with Rectified Adam we see there are significantly fewer fluctuations, spikes, and drops in validation loss (as compared to standard Adam). Furthermore, the Rectified Adam validation loss is much more likely to follow training loss, in some cases near exactly.
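The training history plots referenced throughout these experiments (Figures 11-20) are standard Keras loss/accuracy curves. A minimal sketch of producing one, assuming H is the History object returned by model.fit (the variable names and history keys are illustrative; older Keras versions use "acc"/"val_acc" instead of "accuracy"/"val_accuracy"):

import numpy as np
import matplotlib.pyplot as plt

# H = model.fit(...) was called earlier; plot the per-epoch training curves
N = len(H.history["loss"])
plt.figure()
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_accuracy"], label="val_acc")
plt.xlabel("Epoch #")
plt.ylabel("Loss / Accuracy")
plt.legend()
plt.savefig("training_plot.png")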
Keep in mind that raw accuracy isn’t everything when it comes to training your own custom neural networks — stability matters as well, as it goes hand-in-hand with generalization. Whenever I’m training a custom CNN I’m not only looking for high-accuracy models, I’m also looking for stability. Stability typically implies that a model is converging nicely and will ideally generalize well. In this regard, Rectified Adam delivers on its promises from the Liu et al. paper. Secondly, you should note that Adam obtains lower loss than Rectified Adam in every single experiment. This behavior is not necessarily a bad thing — it could imply that Rectified Adam is generalizing better, but it’s hard to say without running further experiments using images outside the respective training and testing sets. Again, keep in mind that lower loss does not necessarily mean a better model! When you encounter very low loss (especially loss near zero) your model may be overfitting to your training set.
You need to obtain mastery-level experience operating these three optimizers
Figure 21: Mastering deep learning optimizers is like driving a car.
You know your car and you drive it well no matter the road condition. On the other hand, if you get in an unfamiliar car, something doesn’t feel right until you have a few hours cumulatively behind the wheel. Optimizers are no different. I suggest that SGD be your daily driver until you are comfortable trying alternatives. Then you can mix in RMSprop and Adam. Learn how to use them before jumping into the latest deep learning optimizer. Becoming familiar with a given optimization algorithm is similar to mastering how to drive a car — you drive your own car better than other people’s cars because you’ve spent so much time driving it; you understand your car and its intricacies. Oftentimes, a given optimizer is chosen to train a network on a dataset not because the optimizer itself is better, but because the driver (i.e., you, the deep learning practitioner) is more familiar with the optimizer and understands the "art" behind tuning its respective parameters. As a deep learning practitioner you should gain experience operating a wide variety of optimizers, but in my opinion, you should focus your efforts on learning how to train networks using the following three optimizers:
SGD
RMSprop
Adam
You might be surprised to see SGD included in this list — isn’t SGD an older, less efficient optimizer than the newer adaptive methods, including Adam, Adagrad, Adadelta, etc.? Yes, it absolutely is.
But here’s the thing — nearly every state-of-the-art computer vision model is trained using SGD. Consider the ImageNet classification challenge for example:
AlexNet (there’s no mention in the paper, but both the official implementation and CaffeNet used SGD)
VGGNet (Section 3.1, Training)
ResNet (Section 3.4, Implementation)
SqueezeNet (it’s not mentioned in the paper, but SGD was used in their solver.prototxt)
Every single one of those classification networks was trained using SGD. Now let’s consider the object detection networks trained on the COCO dataset:
Faster R-CNN (Section 3.1.3, Training RPNs and Section 3.2, Sharing Features for RPN and Fast R-CNN, as well as Girshick et al.’s GitHub solver.prototxt)
Single Shot Detectors (Section 3, Experimental Results)
RetinaNet (Section 4.1, Optimization)
YOLO (I can’t find mention of it in the paper but I believe Joseph Redmon used SGD, correct me if I’m wrong)
You guessed it — SGD was used to train all of them. Yes, SGD may be the "old, unsexy" optimizer compared to its younger counterparts, but here’s the thing: standard SGD just works. That’s not to say that you shouldn’t learn how to use other optimizers — you absolutely should! But before you go down that rabbit hole, obtain a mastery level of SGD first. From there, start exploring other optimizers — I typically recommend RMSprop and Adam. And if you find Adam is working well, consider replacing Adam with Rectified Adam to see if you can get an additional boost in accuracy (sort of like how replacing ReLUs with ELUs can usually give you a small boost). Once you understand how to use those optimizers on a variety of datasets, continue your studies and explore other optimizers as well.
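To make the "swap the optimizer, keep everything else fixed" workflow concrete, here is a minimal sketch in the standalone Keras API of that era. The RAdam class comes from the third-party keras-radam package, one readily available Rectified Adam implementation (whether that exact package was used for this post's experiments is an assumption, and model is a placeholder for an already-defined network):

from keras.optimizers import SGD, RMSprop, Adam
from keras_radam import RAdam  # third-party: pip install keras-radam

# pick one optimizer; the rest of the training script stays identical
opt = SGD(lr=1e-2, momentum=0.9)   # the classic "daily driver"
# opt = RMSprop(lr=1e-3)           # adaptive alternative
# opt = Adam(lr=1e-3)              # adaptive alternative
# opt = RAdam()                    # drop-in Rectified Adam swap

model.compile(loss="categorical_crossentropy", optimizer=opt,
    metrics=["accuracy"])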
All that said, if you’re new to deep learning, don’t immediately try jumping into the more "advanced" optimizers — you’ll only run into trouble later in your deep learning career.
Summary
In this tutorial, we investigated the claims from Liu et al. that the Rectified Adam optimizer outperforms the standard Adam optimizer in terms of:
Better accuracy (or at least identical accuracy when compared to Adam)
And in fewer epochs than standard Adam
To evaluate those claims we trained three CNN models:
ResNet
GoogLeNet
MiniVGGNet
These models were trained on four datasets:
MNIST
Fashion MNIST
CIFAR-10
CIFAR-100
Each combination of the model architecture and dataset was trained using two optimizers:
Adam
Rectified Adam
In total, we ran 3 x 4 x 2 = 24 different experiments used to compare standard Adam to Rectified Adam. The result?
In each and every experiment Rectified Adam either performed worse or obtained identical accuracy compared to standard Adam. That said, training with Rectified Adam was more stable than standard Adam, likely implying that Rectified Adam could generalize better (but additional experiments would be required to validate that claim). Liu et al.’s study of warmup can be utilized in adaptive learning rate optimizers and will likely help future researchers build on their work and create even better optimizers. For the time being, my personal opinion is that you’re better off sticking with standard Adam for your initial experiments. If you find that Adam is working well for your experiments, substitute in Rectified Adam to see if you can improve your accuracy. You should especially try to use the Rectified Adam optimizer if you notice that Adam is working well, but you need better generalization. The second takeaway from this guide is that you should obtain mastery-level experience operating these three optimizers:
SGD
RMSprop
Adam
You should especially learn how to operate SGD. Yes, SGD is "less sexy" compared to the newer adaptive learning rate methods, but nearly every computer vision state-of-the-art architecture has been trained using it. Learn how to operate these three optimizers first.
Once you have a good understanding of how they work and how to tune their respective hyperparameters, then move on to other optimizers. If you need help learning how to use these optimizers and tune their hyperparameters, be sure to refer to Deep Learning for Computer Vision with Python where I cover my tips, suggestions, and best practices in detail. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), just enter your email address in the form below!
https://pyimagesearch.com/2019/10/21/keras-vs-tf-keras-whats-the-difference-in-tensorflow-2-0/
In this tutorial you’ll discover the difference between Keras and tf.keras, including what’s new in TensorFlow 2.0. Today’s tutorial is inspired by an email I received last Tuesday from PyImageSearch reader, Jeremiah. Jeremiah asks: Hi Adrian, I saw that TensorFlow 2.0 was released a few days ago. TensorFlow developers seem to be promoting Keras, or rather, something called tf.keras, as the recommended high-level API for TensorFlow 2.0. But I thought Keras was its own separate package? I’m so confused on "which Keras package" I should be using when training my own networks. Secondly, is TensorFlow 2.0 worth upgrading to? I’ve seen a few tutorials in the deep learning blogosphere discussing TensorFlow 2.0 but with all the confusion regarding Keras, tf.keras, and TensorFlow 2.0, I’m at a loss for where to start. Could you shed some light on this area? Great questions, Jeremiah.
Just in case you didn’t hear, the long-awaited TensorFlow 2.0 was officially released on September 30th. And while it’s certainly a time for celebration, many deep learning practitioners such as Jeremiah are scratching their heads: What does the TensorFlow 2.0 release mean for me as a Keras user? Am I supposed to use the keras package for training my own neural networks? Or should I be using the tf.keras submodule inside TensorFlow 2.0 instead? Are there TensorFlow 2.0 features that I should care about as a Keras user? The transition from TensorFlow 1.x to TensorFlow 2.0 is going to be a bit of a rocky one, at least to start, but with the right understanding, you’ll be able to navigate the migration with ease. Inside the rest of this tutorial, I’ll be discussing the similarities between Keras, tf.keras , and the TensorFlow 2.0 release, including the features you should care about. To learn the difference between Keras, tf.keras, and TensorFlow 2.0, just keep reading! Keras vs. tf.keras: What’s the difference in TensorFlow 2.0? In the first part of this tutorial, we’ll discuss the intertwined history between Keras and TensorFlow, including how their joint popularities fed each other, growing and nurturing each other, leading us to where we are today.
I’ll then discuss why you should be using tf.keras for all your future deep learning projects and experiments. Next, I’ll discuss the concept of a "computational backend" and how TensorFlow’s popularity enabled it to become Keras’ most prevalent backend, paving the way for Keras to be integrated into the tf.keras submodule of TensorFlow. Finally, we’ll discuss some of the most popular TensorFlow 2.0 features you should care about as a Keras user, including:
Sessions and eager execution
Automatic differentiation
Model and layer subclassing
Better multi-GPU/distributed training support
Included in TensorFlow 2.0 is a complete ecosystem comprised of TensorFlow Lite (for mobile and embedded devices) and TensorFlow Extended (for developing production machine learning pipelines and deploying production models). Let’s get started!
The intertwined relationship between Keras and TensorFlow
Figure 1: Keras and TensorFlow have a complicated history together. Read this section for the CliffsNotes of their love affair. With TensorFlow 2.0, you should be using tf.keras rather than the separate Keras package.
Understanding the complicated, intertwined relationship between Keras and TensorFlow is like listening to the love story of two high school sweethearts who start dating, break up, and eventually find their way together — it’s long, detailed, and at some points even contradictory. Instead of recalling the full love story for you, we’ll review the CliffsNotes:
Keras was originally created and developed by Google AI Developer/Researcher, Francois Chollet. Francois committed and released the first version of Keras to his GitHub on March 27th, 2015.
Initially, Francois developed Keras to facilitate his own research and experiments. However, with the explosion of deep learning popularity, many developers, programmers, and machine learning practitioners flocked to Keras due to its easy-to-use API. Back then, there weren’t too many deep learning libraries available — the popular ones included Torch, Theano, and Caffe. The problem with these libraries was that it was like trying to write assembly/C++ to perform your experiments — tedious, time-consuming, and inefficient. Keras, on the other hand, was extremely easy to use, making it possible for researchers and developers to iterate on their experiments faster. In order to train your own custom neural networks, Keras required a backend. A backend is a computational engine — it builds the network graph/topology, runs the optimizers, and performs the actual number crunching. To understand the concept of a backend, consider building a website from scratch. Here you may use the PHP programming language and a SQL database. Your SQL database is your backend.
You could use MySQL, PostgreSQL, or SQL Server as your database; however, your PHP code used to interact with the database will not change (provided you’re using some sort of MVC paradigm that abstracts the database layer, of course). Essentially, PHP doesn’t care what database is being used, as long as it plays with PHP’s rules. The same is true with Keras. You can think of the backend as your database and Keras as your programming language used to access the database. You can swap in whatever backend you like, and as long as it abides by certain rules, your code doesn’t have to change. Therefore, you can think of Keras as a set of abstractions that makes it easier to perform deep learning (side note: While Keras always enabled rapid prototyping, it was not flexible enough for researchers. That’s changing in TensorFlow 2.0 — more on that later in this article). Originally, Keras’ default backend was Theano, and it remained the default until v1.1.0. At the same time, Google had released TensorFlow, a symbolic math library used for machine learning and training neural networks. Keras started supporting TensorFlow as a backend, and slowly but surely, TensorFlow became the most popular backend, resulting in TensorFlow being the default backend starting from the release of Keras v1.1.0.
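As a concrete illustration of the swappable-backend idea: the standalone keras package reads its backend from the "backend" key of the ~/.keras/keras.json configuration file, and you can query the active engine at runtime. A tiny sketch, assuming the standalone keras package is installed:

from keras import backend as K

# prints whichever engine keras.json selected, e.g. "tensorflow" or "theano"
print(K.backend())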
Once TensorFlow became the default backend for Keras, by definition, both TensorFlow and Keras usage grew together — you could not have Keras without TensorFlow, and if you installed Keras on your system, you were also installing TensorFlow. Similarly, TensorFlow users were becoming increasingly more drawn to the simplicity of the high-level Keras API. The tf.keras submodule was introduced in TensorFlow v1.10.0, the first step in integrating Keras directly within the TensorFlow package itself. The tf.keras package is/was separate from the keras package you would install via pip (i.e., pip install keras). The original keras package was not subsumed into tensorflow to ensure compatibility and so that they could both organically develop. However, that’s now changing — when Google announced TensorFlow 2.0 in June 2019, they declared that Keras is now the official high-level API of TensorFlow for quick and easy model design and training. With the release of Keras 2.3.0, Francois has stated that:
This is the first release of Keras that brings the keras package in sync with tf.keras.
It is the final release of Keras that will support multiple backends (i.e., Theano, CNTK, etc.).
And most importantly, going forward all deep learning practitioners should switch their code to TensorFlow 2.0 and the tf.keras package.
The original keras package will still receive bug fixes, but moving forward, you should be using tf.keras. As you can tell, the history between Keras and TensorFlow is long, complicated, and intertwined.
But the most important takeaway for you, as a Keras user, is that you should be using TensorFlow 2.0 and tf.keras for future projects.
Start using tf.keras in all future projects
Figure 2: What’s the difference between Keras and tf.keras in TensorFlow 2.0?
On September 17th, 2019 Keras v2.3.0 was officially released — in the release Francois Chollet (the creator and chief maintainer of Keras) stated that:
Keras v2.3.0 is the first release of Keras that brings keras in sync with tf.keras.
It will be the last major release to support backends other than TensorFlow (i.e., Theano, CNTK, etc.).
And most importantly, deep learning practitioners should start moving to TensorFlow 2.0 and the tf.keras package.
For the majority of your projects, that’s as simple as changing your import lines from:
from keras... import ...
To prefacing the import with tensorflow:
from tensorflow.keras... import ...
If you are using custom training loops or using Sessions then you’ll have to update your code to use the new GradientTape feature, but overall, it’s fairly easy to update your code. To help you in (automatically) updating your code from keras to tf.keras, Google has released a script named tf_upgrade_v2, which, as the name suggests, analyzes your code and reports which lines need to be updated — the script can even perform the upgrade process for you. You can refer here to learn more about automatically updating your code to TensorFlow 2.0.
Computational "backends" for Keras
Figure 3: What computational backends does Keras support? What does it mean to use Keras directly in TensorFlow via tf.keras?
As I mentioned earlier in this post, Keras relies on the concept of a computational backend. The computational backend performs all the "heavy lifting" in terms of constructing a graph of the model, numeric computation, etc.
Keras then sits on top of this computational engine as an abstraction, making it easier for deep learning developers/practitioners to implement and train their models. Originally, Keras supported Theano as its preferred computational backend — it then later supported other backends, including CNTK and mxnet, to name a few. However, the most popular backend, by far, was TensorFlow, which eventually became the default computation backend for Keras. As more and more TensorFlow users started using Keras for its easy-to-use high-level API, TensorFlow developers had to seriously consider subsuming the Keras project into a separate module in TensorFlow called tf.keras. TensorFlow v1.10 was the first release of TensorFlow to include a branch of keras inside tf.keras. Now that TensorFlow 2.0 is released, both keras and tf.keras are in sync; they remain separate projects, but developers should start using tf.keras moving forward as the keras package will only receive bug fixes. To quote Francois Chollet, the creator and maintainer of Keras: This is also the last major release of multi-backend Keras. Going forward, we recommend that users consider switching their Keras code to tf.keras in TensorFlow 2.0. It implements the same Keras 2.3.0 API (so switching should be as easy as changing the Keras import statements), but it has many advantages for TensorFlow users, such as support for eager execution, distribution, TPU training, and generally far better integration between low-level TensorFlow and high-level concepts like Layer and Model. It is also better maintained.
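Chollet's point that switching "should be as easy as changing the Keras import statements" looks like this in practice. A hypothetical before/after (the specific classes imported are only illustrative):

# Before: standalone Keras package
# from keras.models import Sequential
# from keras.layers import Dense
# from keras.optimizers import Adam

# After: the Keras API bundled with TensorFlow 2.0
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

For larger codebases, the tf_upgrade_v2 script mentioned above can automate most of this rewrite.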
If you’re both a Keras and TensorFlow user, you should consider switching your code over to TensorFlow 2.0 and tf.keras.
Sessions and Eager Execution in TensorFlow 2.0
Figure 4: Eager execution is a more Pythonic way of working with dynamic computational graphs. TensorFlow 2.0 supports eager execution (as does PyTorch). You can take advantage of eager execution and sessions with TensorFlow 2.0 and tf.keras. (image source)
TensorFlow 1.10+ users that utilize the Keras API within tf.keras will be familiar with creating a Session to train their model:

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    session.run(tf.tables_initializer())
    model.fit(X_train, y_train, validation_data=(X_valid, y_valid),
        epochs=10, batch_size=64)

Creating the Session object and requiring the entire model graph to be built ahead of time was a bit of a pain, so TensorFlow 2.0 introduced the concept of Eager Execution, thereby simplifying the code to:

model.fit(X_train, y_train, validation_data=(X_valid, y_valid),
    epochs=10, batch_size=64)

The benefit of Eager Execution is that the entire model graph does not have to be built. Instead, operations are evaluated immediately, making it easier to get started building your models (as well as debugging them). For more details on Eager Execution, including how to use it with TensorFlow 2.0, refer to this article. And if you want a comparison on Eager Execution vs. Sessions and the impact it has on the speed of training a model, refer to this page.
Automatic differentiation and GradientTape with TensorFlow 2.0
Figure 5: How is TensorFlow 2.0 better at handling custom layers or loss functions?
The answer lies in automatic differentiation and GradientTape. (image source)
If you’re a researcher who needed to implement custom layers or loss functions, you likely didn’t like TensorFlow 1.x (and rightfully so). TensorFlow 1.x’s custom implementations were clunky to say the least — a lot was left to be desired. With the release of TensorFlow 2.0 that is starting to change — it’s now far easier to implement your own custom losses. One way it’s becoming easier is through automatic differentiation and the GradientTape implementation. To utilize GradientTape all we need to do is implement our model architecture:

# Define our model architecture
model = tf.keras.Sequential([
    tf.keras.layers.Dropout(rate=0.2, input_shape=X.shape[1:]),
    tf.keras.layers.Dense(units=64, activation='relu'),
    tf.keras.layers.Dropout(rate=0.2),
    tf.keras.layers.Dense(units=1, activation='sigmoid')
])

Define our loss function and optimizer:

# Define loss and optimizer
loss_func = tf.keras.losses.BinaryCrossentropy()
optimizer = tf.keras.optimizers.Adam()

Create the function responsible for performing a single batch update:

def train_loop(features, labels):
    # Define the GradientTape context
    with tf.GradientTape() as tape:
        # Get the probabilities
        predictions = model(features)
        # Calculate the loss
        loss = loss_func(labels, predictions)
    # Get the gradients
    gradients = tape.gradient(loss, model.trainable_variables)
    # Update the weights
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss

And then train the model:

# Train the model
# (assumes `time` has been imported and `dataset` is a tf.data.Dataset defined earlier)
def train_model():
    start = time.time()
    for epoch in range(10):
        for step, (x, y) in enumerate(dataset):
            loss = train_loop(x, y)
        print('Epoch %d: last batch loss = %.4f' % (epoch, float(loss)))
    print("It took {} seconds".format(time.time() - start))

# Initiate training
train_model()

The GradientTape magic handles differentiation for us behind the scenes, making it far easier to work with custom losses and layers. And speaking of custom layer and model implementations, be sure to refer to the next section.
Model and layer subclassing in TensorFlow 2.0
TensorFlow 2.0 and tf.keras provide us with three separate methods to implement our own custom models:
Sequential
Functional
Subclassing
Both the sequential and functional paradigms have been inside Keras for quite a while, but the subclassing feature is still unknown to many deep learning practitioners. I’ll be doing a dedicated tutorial on the three methods next week, but for the time being, let’s take a look at how to implement a simple CNN based on the seminal LeNet architecture using (1) TensorFlow 2.0, (2) tf.keras, and (3) the model subclassing feature:

class LeNet(tf.keras.Model):
    def __init__(self):
        super(LeNet, self).__init__()
        self.conv2d_1 = tf.keras.layers.Conv2D(filters=6, kernel_size=(3, 3),
            activation='relu', input_shape=(32, 32, 1))
        self.average_pool = tf.keras.layers.AveragePooling2D()
        self.conv2d_2 = tf.keras.layers.Conv2D(filters=16, kernel_size=(3, 3),
            activation='relu')
        self.flatten = tf.keras.layers.Flatten()
        self.fc_1 = tf.keras.layers.Dense(120, activation='relu')
        self.fc_2 = tf.keras.layers.Dense(84, activation='relu')
        self.out = tf.keras.layers.Dense(10, activation='softmax')

    def call(self, input):
        x = self.conv2d_1(input)
        x = self.average_pool(x)
        x = self.conv2d_2(x)
        x = self.average_pool(x)
        x = self.flatten(x)
        x = self.fc_2(self.fc_1(x))
        return self.out(x)

lenet = LeNet()

Notice how the LeNet class is a subclass of Model. The constructor (i.e., the init) of LeNet defines each of the individual layers inside the model. The call method then performs the forward pass, enabling you to customize the forward pass as you see fit. The benefit of using model subclassing is that your model:
Becomes fully-customizable.
Enables you to implement and utilize your own custom loss implementations.
And since your architecture inherits the Model class, you can still call methods like .fit(), .compile(), and .evaluate(), thereby maintaining the easy-to-use (and familiar) Keras API.
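Because LeNet inherits from tf.keras.Model, the subclassed network can be compiled and fit exactly like a Sequential model. A brief, hypothetical usage sketch (padding MNIST's 28x28 digits to the 32x32x1 input the class above expects is my own illustration, not code from this post):

import numpy as np
import tensorflow as tf

# load MNIST and pad the 28x28 digits to the 32x32x1 input LeNet expects
((trainX, trainY), _) = tf.keras.datasets.mnist.load_data()
trainX = np.pad(trainX, ((0, 0), (2, 2), (2, 2)), mode="constant")
trainX = trainX[..., np.newaxis] / 255.0

lenet.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
    metrics=["accuracy"])
lenet.fit(trainX, trainY, batch_size=64, epochs=5)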
If you’re interested in learning more about LeNet, you can refer to this previous article.
TensorFlow 2.0 introduces better multi-GPU and distributed training support
Figure 6: Is TensorFlow 2.0 better with multiple GPU training? Yes, with the single worker MirroredStrategy. (image source)
TensorFlow 2.0 and tf.keras provide better multi-GPU and distributed training through their MirroredStrategy. To quote the TensorFlow 2.0 documentation, "The MirroredStrategy supports synchronous distributed training on multiple GPUs on one machine". If you want to use multiple machines (each having potentially multiple GPUs), you should take a look at the MultiWorkerMirroredStrategy. Or, if you are using Google’s cloud for training, check out the TPUStrategy. For now though, let’s assume you are on a single machine that has multiple GPUs and you want to ensure all of your GPUs are used for training. You can accomplish this by first creating your MirroredStrategy:

strategy = tf.distribute.MirroredStrategy()
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))

You then need to declare your model architecture and compile it within the scope of the strategy:

# Call the distribution scope context manager
with strategy.scope():
    # Define a model to fit the above data
    model = tf.keras.Sequential([
        tf.keras.layers.Dropout(rate=0.2, input_shape=X.shape[1:]),
        tf.keras.layers.Dense(units=64, activation='relu'),
        tf.keras.layers.Dropout(rate=0.2),
        tf.keras.layers.Dense(units=1, activation='sigmoid')
    ])

    # Compile the model
    model.compile(loss='binary_crossentropy', optimizer='adam',
        metrics=['accuracy'])

And from there you can call .fit to train the model:

# Train the model
model.fit(X, y, epochs=5)

Provided your machine has multiple GPUs, TensorFlow will take care of the multi-GPU training for you.
TensorFlow 2.0 is an ecosystem, including TF 2.0, TF Lite, TFX, quantization, and deployment
Figure 7: What is new in the TensorFlow 2.0 ecosystem? Should I use Keras separately or should I use tf.keras?
TensorFlow 2.0 is more than a computational engine and a deep learning library for training neural networks — it’s so much more. With TensorFlow Lite (TF Lite) we can train, optimize, and quantize models that are designed to run on resource-constrained devices such as smartphones and other embedded devices (i.e., Raspberry Pi, Google Coral, etc.). Or, if you need to deploy your model to production, you can use TensorFlow Extended (TFX), an end-to-end platform for model deployment.
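To make the TF Lite piece concrete, converting a trained tf.keras model into a .tflite flat buffer is only a few lines with the TF 2.0 converter API. A minimal sketch (model and the output filename are placeholders):

import tensorflow as tf

# convert a trained tf.keras model to a TensorFlow Lite flat buffer
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional: weight quantization
tflite_model = converter.convert()

# write the serialized model to disk for deployment on an embedded device
with open("model.tflite", "wb") as f:
    f.write(tflite_model)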
Once your research and experiments are complete, you can leverage TFX to prepare the model for production and scale your model using Google’s ecosystem. With TensorFlow 2.0 we are truly starting to see a better, more efficient bridge between research, experimentation, model preparation/quantization, and deployment to production. I’m truly excited about the release of TensorFlow 2.0 and the impact it will have on the deep learning community.
Credits
All code examples from this post came from TensorFlow 2.0’s official examples. Be sure to refer to the complete code examples provided by Francois Chollet for more details. Additionally, definitely check out Sayak Paul’s Ten Important Updates from TensorFlow 2.0 article which helped inspire today’s blog post.
Summary
In this tutorial, you learned about Keras, tf.keras, and TensorFlow 2.0. The first important takeaway is that deep learning practitioners using the keras package should start using tf.keras inside TensorFlow 2.0. Not only will you enjoy the added speed and optimization of TensorFlow 2.0, but you’ll also receive new feature updates — the latest release of the keras package (v2.3.0) will be the last release to support multiple backends and feature updates. Moving forward, the keras package will receive only bug fixes. You should seriously consider moving to tf.keras and TensorFlow 2.0 in your future projects. The second takeaway is that TensorFlow 2.0 is more than a GPU-accelerated deep learning library. Not only do you have the ability to train your own models using TensorFlow 2.0 and tf.keras, but you can now:
Take those models and prepare them for mobile/embedded deployment using TensorFlow Lite (TF Lite).
Deploy the models to production using TensorFlow Extended (TF Extended).
From my perspective, I’ve already started porting my original keras code to tf.keras. I would suggest you start doing the same. I hope you enjoyed today’s tutorial — I’ll be back with new TensorFlow 2.0 and tf.keras tutorials soon. To be notified when future tutorials are published here on PyImageSearch (and receive my free 17-page Resource Guide PDF on Computer Vision, Deep Learning, and OpenCV), just enter your email address in the form below!
https://pyimagesearch.com/2019/11/11/detecting-natural-disasters-with-keras-and-deep-learning/
In this tutorial, you will learn how to automatically detect natural disasters (earthquakes, floods, wildfires, cyclones/hurricanes) with up to 95% accuracy using Keras, Computer Vision, and Deep Learning. I remember the first time I ever experienced a natural disaster — I was just a kid in kindergarten, no more than 6-7 years old. We were outside for recess, playing on the jungle gym, running around like the wild animals that young children are. Rain was in the forecast. It was cloudy. And very humid. My mother had given me a coat to wear outside, but I was hot and uncomfortable — the humidity made the cotton/polyester blend stick to my skin. The coat, just like the air around me, was suffocating. All of a sudden the sky changed from "normal rain clouds" to an ominous green. The recess monitor reached into her pocket, grabbed her whistle, and blew it, indicating it was time for us to settle our wild animal antics and come inside for schooling.
After recess we would typically sit in a circle around the teacher’s desk for show-and-tell. But not this time. We were immediately rushed into the hallway and were told to cover our heads with our hands — a tornado had just touched down near our school. Just the thought of a tornado is enough to scare a kid. But to actually experience one? That’s something else entirely. The wind picked up dramatically, an angry tempest howling and berating our school with tree branches, rocks, and whatever loose debris was not tied down. The entire ordeal couldn’t have lasted more than 5-10 minutes — but it felt like a terrifying eternity. It turned out that we were safe the entire time. After the tornado had touched down it started carving a path through the cornfields away from our school, not toward it.
We were lucky. It’s interesting how experiences as a young kid, especially the ones that scare you, shape you and mold you after you grow up. A few days after the event my mom took me to the local library. I picked out every book on tornados and hurricanes that I could find. Even though I only had a basic reading level at the time, I devoured them, studying the pictures intently until I could recreate them in my mind — imagining what it would be like to be inside one of those storms. Later, in graduate school, I experienced the historic June 29th, 2012 derecho that delivered 60+ MPH sustained winds and gusts of over 100 MPH, knocking down power lines and toppling large trees. That storm killed 29 people, injured hundreds of others, and caused loss of electricity and power in parts of the United States east coast for over 6 days, an unprecedented amount of time in the modern-day United States. Natural disasters cannot be prevented — but they can be detected, giving people precious time to get to safety. In this tutorial, you’ll learn how we can use Computer Vision and Deep Learning to help detect natural disasters. To learn how to detect natural disasters with Keras, Computer Vision, and Deep Learning, just keep reading!
Detecting Natural Disasters with Keras and Deep Learning In the first part of this tutorial, we’ll discuss how computer vision and deep learning algorithms can be used to automatically detect natural disasters in images and video streams. From there we’ll review our natural disaster dataset which consists of four classes: Cyclone/hurricane Earthquake Flood Wildfire We’ll then design a set of experiments that will: Help us fine-tune VGG16 (pre-trained on ImageNet) on our dataset. Find optimal learning rates. Train our model and obtain > 95% accuracy! Let’s get started! How can computer vision and deep learning detect natural disasters? Figure 1: We can detect natural disasters with Keras and Deep Learning using a dataset of natural disaster images. (image source) Natural disasters cannot be prevented — but they can be detected. All around the world we use sensors to monitor for natural disasters: Seismic sensors (seismometers) and vibration sensors (seismoscopes) are used to monitor for earthquakes (and downstream tsunamis).
Radar maps are used to detect the signature “hook echo” of a tornado (i.e., a hook that extends from the radar echo). Flood sensors are used to measure moisture levels while water level sensors monitor the height of water along a river, stream, etc. Wildfire sensors are still in their infancy but hopefully will be able to detect trace amounts of smoke and fire. Each of these sensors is highly specialized to the task at hand — detect a natural disaster early, alert people, and allow them to get to safety. Using computer vision we can augment existing sensors, thereby increasing the accuracy of natural disaster detectors, and most importantly, allow people to take precautions, stay safe, and prevent/reduce the number of deaths and injuries that happen due to these disasters. Our natural disasters image dataset Figure 2: A dataset of natural disaster images. We’ll use this dataset to train a natural disaster detector with Keras and Deep Learning. The dataset we are using here today was curated by PyImageSearch reader, Gautam Kumar. Gautam used Google Images to gather a total of 4,428 images belonging to four separate classes: Cyclone/Hurricane: 928 images Earthquake: 1,350 Flood: 1,073 Wildfire: 1,077 He then trained a Convolutional Neural Network to recognize each of the natural disaster cases. Gautam shared his work on his LinkedIn profile, gathering the attention of many deep learning practitioners (myself included).
I asked him if he would be willing to (1) share his dataset with the PyImageSearch community and (2) allow me to write a tutorial using the dataset. Gautam agreed, and here we are today! I again want to give a big, heartfelt thank you to Gautam for his hard work and contribution — be sure to thank him if you have the chance! Downloading the natural disasters dataset Figure 3: Gautam Kumar’s dataset for detecting natural disasters with Keras and deep learning. You can use this link to download the original natural disasters dataset via Google Drive. After you download the archive you should unzip it and inspect the contents: $ tree --dirsfirst --filelimit 10 Cyclone_Wildfire_Flood_Earthquake_Database Cyclone_Wildfire_Flood_Earthquake_Database ├── Cyclone [928 entries] ├── Earthquake [1350 entries] ├── Flood [1073 entries] ├── Wildfire [1077 entries] └── readme.txt 4 directories, 1 file Here you can see that each of the natural disasters has its own directory with examples of each class residing inside its respective parent directory. Project structure Using the tree command, let’s review today’s project available via the “Downloads” section of this tutorial: $ tree --dirsfirst --filelimit 10 . ├── Cyclone_Wildfire_Flood_Earthquake_Database │   ├── Cyclone [928 entries] │   ├── Earthquake [1350 entries] │   ├── Flood [1073 entries] │   ├── Wildfire [1077 entries] │   └── readme.txt ├── output │   ├── natural_disaster.model │   │   ├── assets │   │   ├── variables │   │   │   ├── variables.data-00000-of-00002 │   │   │   ├── variables.data-00001-of-00002 │   │   │   └── variables.index │   │   └── saved_model.pb │   ├── clr_plot.png │   ├── lrfind_plot.png │   └── training_plot.png ├── pyimagesearch │   ├── __init__.py │   ├── clr_callback.py │   ├── config.py │   └── learningratefinder.py ├── videos │   ├── floods_101_nat_geo.mp4 │   ├── fort_mcmurray_wildfire.mp4 │   ├── hurricane_lorenzo.mp4 │   ├── san_andreas.mp4 │   └── terrific_natural_disasters_compilation.mp4 ├── Cyclone_Wildfire_Flood_Earthquake_Database.zip ├── train.py └── predict.py 11 directories, 20 files Our project contains: The natural disaster dataset. Refer to the previous two sections. An output/ directory where our model and plots will be stored.
The results from my experiment are included. Our pyimagesearch module containing our Cyclical Learning Rate Keras callback, a configuration file, and Keras Learning Rate Finder. A selection of videos/ for testing the video classification prediction script. Our training script, train.py . This script will perform fine-tuning on a VGG16 model pre-trained on the ImageNet dataset. Our video classification prediction script, predict.py , which performs a rolling average prediction to classify the video in real-time. Our configuration file Our project is going to span multiple Python files, so to keep our code tidy and organized (and ensure that we don’t have a multitude of command line arguments), let’s instead create a configuration file to store all important paths and variables. Open up the config.py file inside the pyimagesearch module and insert the following code: # import the necessary packages import os # initialize the path to the input directory containing our dataset # of images DATASET_PATH = "Cyclone_Wildfire_Flood_Earthquake_Database" # initialize the class labels in the dataset CLASSES = ["Cyclone", "Earthquake", "Flood", "Wildfire"] The os module import allows us to build OS-agnostic paths directly in this config file (Line 2). Line 6 specifies the root path to our natural disaster dataset. Line 7 provides the names of class labels (i.e. the names of the subdirectories in the dataset).
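As a quick, optional sanity check (this snippet is illustrative and not part of the tutorial’s code), you can confirm that the entries in CLASSES line up with the subdirectory names on disk, since the class labels come directly from those directories:

import os
from pyimagesearch import config

# list the subdirectories of the dataset root and compare them against
# the CLASSES list defined in the configuration file
subdirs = sorted(d for d in os.listdir(config.DATASET_PATH) if os.path.isdir(os.path.join(config.DATASET_PATH, d)))
print(subdirs)                            # ['Cyclone', 'Earthquake', 'Flood', 'Wildfire']
print(sorted(config.CLASSES) == subdirs)  # should print True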
Let’s define our dataset splits: # define the size of the training, validation (which comes from the # train split), and testing splits, respectively TRAIN_SPLIT = 0.75 VAL_SPLIT = 0.1 TEST_SPLIT = 0.25 Lines 13-15 house our training, testing, and validation split sizes. Take note that the validation split is 10% of the training split (not 10% of all the data). Next, we’ll define our training parameters: # define the minimum learning rate, maximum learning rate, batch size, # step size, CLR method, and number of epochs MIN_LR = 1e-6 MAX_LR = 1e-4 BATCH_SIZE = 32 STEP_SIZE = 8 CLR_METHOD = "triangular" NUM_EPOCHS = 48 Lines 19 and 20 contain the minimum and maximum learning rate for Cyclical Learning Rates (CLR).We’ll learn how to set these learning rate values in the “Finding our initial learning rate” section below. Lines 21-24 define the batch size, step size, CLR method, and the number of training epochs. From there we’ll define the output paths: # set the path to the serialized model after training MODEL_PATH = os.path.sep.join(["output", "natural_disaster.model"]) # define the path to the output learning rate finder plot, training # history plot and cyclical learning rate plot LRFIND_PLOT_PATH = os.path.sep.join(["output", "lrfind_plot.png"]) TRAINING_PLOT_PATH = os.path.sep.join(["output", "training_plot.png"]) CLR_PLOT_PATH = os.path.sep.join(["output", "clr_plot.png"]) Lines 27-33 define the following output paths: Serialized model after training Learning rate finder plot Training history plot CLR plot Implementing our training script with Keras Our training procedure will consist of two steps: Step #1: Use our learning rate finder to find optimal learning rates to fine-tune our VGG16 CNN on our dataset. Step #2: Use our optimal learning rates in conjunction with Cyclical Learning Rates (CLR) to obtain a high accuracy model. Our train.py file will handle both of these steps. Go ahead and open up train.py in your favorite code editor and insert the following code: # set the matplotlib backend so figures can be saved in the background import matplotlib matplotlib.use("Agg") # import the necessary packages from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.applications import VGG16 from tensorflow.keras.layers import Dropout from tensorflow.keras.layers import Flatten from tensorflow.keras.layers import Dense from tensorflow.keras.layers import Input from tensorflow.keras.models import Model from tensorflow.keras.optimizers import SGD from sklearn.preprocessing import LabelBinarizer from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report from pyimagesearch.learningratefinder import LearningRateFinder from pyimagesearch.clr_callback import CyclicLR from pyimagesearch import config from imutils import paths import matplotlib.pyplot as plt import numpy as np import argparse import pickle import cv2 import sys import os Lines 2-27 import necessary packages including: matplotlib : For plotting (using the "Agg" backend so plot images can be saved to disk). tensorflow : Imports including our VGG16 CNN, data augmentation, layer types, and SGD optimizer. scikit-learn : Imports including a label binarizer, dataset splitting function, and an evaluation reporting tool.
LearningRateFinder : Our Keras Learning Rate Finder class. CyclicLR : A Keras callback that oscillates learning rates, known as Cyclical Learning Rates. CLRs lead to faster convergence and typically require fewer experiments for hyperparameter updates. config : The custom configuration settings we reviewed in the previous section. paths : Includes a function for listing the image paths in a directory tree. cv2 : OpenCV for preprocessing and display. Let’s parse command line arguments and grab our image paths: # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-f", "--lr-find", type=int, default=0, help="whether or not to find optimal learning rate") args = vars(ap.parse_args()) # grab the paths to all images in our dataset directory and initialize # our lists of images and class labels print("[INFO] loading images...") imagePaths = list(paths.list_images(config. DATASET_PATH)) data = [] labels = [] Recall that most of our settings are in config.py . There is one exception.
The --lr-find command line argument tells our script whether or not to find the optimal learning rate (Lines 30-33). Line 38 grabs paths to all images in our dataset. We then initialize two synchronized lists to hold our image data and labels (Lines 39 and 40). Let’s populate the data and labels lists now: # loop over the image paths for imagePath in imagePaths: # extract the class label label = imagePath.split(os.path.sep)[-2] # load the image, convert it to RGB channel ordering, and resize # it to be a fixed 224x224 pixels, ignoring aspect ratio image = cv2.imread(imagePath) image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) image = cv2.resize(image, (224, 224)) # update the data and labels lists, respectively data.append(image) labels.append(label) # convert the data and labels to NumPy arrays print("[INFO] processing data...") data = np.array(data, dtype="float32") labels = np.array(labels) # perform one-hot encoding on the labels lb = LabelBinarizer() labels = lb.fit_transform(labels) Lines 43-55 loop over imagePaths, while: Extracting the class label from the path (Line 45). Loading and preprocessing the image (Lines 49-51). Images are converted to RGB channel ordering and resized to 224×224 for VGG16. Adding the preprocessed image to the data list (Line 54). Adding the label to the labels list (Lines 55). Line 59 performs a final preprocessing step by converting the data to a "float32" datatype NumPy array. Similarly, Line 60 converts labels to an array so that Lines 63 and 64 can perform one-hot encoding.
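If you are curious what the one-hot encoding produced by LabelBinarizer looks like, here is a tiny standalone demonstration using the four class names from this dataset (illustrative only):

from sklearn.preprocessing import LabelBinarizer

# fit the binarizer on a handful of example labels and inspect the result
lb = LabelBinarizer()
encoded = lb.fit_transform(["Cyclone", "Earthquake", "Flood", "Wildfire", "Flood"])
print(lb.classes_)   # ['Cyclone' 'Earthquake' 'Flood' 'Wildfire']
print(encoded)
# [[1 0 0 0]
#  [0 1 0 0]
#  [0 0 1 0]
#  [0 0 0 1]
#  [0 0 1 0]]

Each label becomes a four-element vector with a single 1 in the column corresponding to its class, which is exactly the format the softmax head of our network expects.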
From here, we’ll partition our data and set up data augmentation: # partition the data into training and testing splits (trainX, testX, trainY, testY) = train_test_split(data, labels, test_size=config. TEST_SPLIT, random_state=42) # take the validation split from the training split (trainX, valX, trainY, valY) = train_test_split(trainX, trainY, test_size=config. VAL_SPLIT, random_state=84) # initialize the training data augmentation object aug = ImageDataGenerator( rotation_range=30, zoom_range=0.15, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.15, horizontal_flip=True, fill_mode="nearest") Lines 67-72 construct training, testing, and validation splits. Lines 75-82 instantiate our data augmentation object. Read more about data augmentation in my previous posts as well as in the Practitioner Bundle of Deep Learning for Computer Vision with Python. At this point we’ll set up our VGG16 model for fine-tuning: # load the VGG16 network, ensuring the head FC layer sets are left # off baseModel = VGG16(weights="imagenet", include_top=False, input_tensor=Input(shape=(224, 224, 3))) # construct the head of the model that will be placed on top of the # the base model headModel = baseModel.output headModel = Flatten(name="flatten")(headModel) headModel = Dense(512, activation="relu")(headModel) headModel = Dropout(0.5)(headModel) headModel = Dense(len(config. CLASSES), activation="softmax")(headModel) # place the head FC model on top of the base model (this will become # the actual model we will train) model = Model(inputs=baseModel.input, outputs=headModel) # loop over all layers in the base model and freeze them so they will # *not* be updated during the first training process for layer in baseModel.layers: layer.trainable = False # compile our model (this needs to be done after our setting our # layers to being non-trainable print("[INFO] compiling model...") opt = SGD(lr=config. MIN_LR, momentum=0.9) model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"]) Lines 86 and 87 load VGG16 using pre-trained ImageNet weights (but without the fully-connected layer head). Lines 91-95 create a new fully-connected layer head followed by Line 99 which adds the new FC layer to the body of VGG16. Lines 103 and 104 mark the body of VGG16 as not trainable — we will be training (i.e. fine-tuning) only the FC layer head.
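If you want to verify that the freeze actually took effect before compiling, a quick check along these lines works (a hypothetical snippet run in the context of train.py, not part of the original script):

from tensorflow.keras import backend as K

# count the parameters that will and will not receive gradient updates
trainable = int(sum(K.count_params(w) for w in model.trainable_weights))
frozen = int(sum(K.count_params(w) for w in model.non_trainable_weights))
print("trainable params: {:,}".format(trainable))  # only the new FC head
print("frozen params: {:,}".format(frozen))        # the frozen VGG16 base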
Lines 109-111 then compile our model with the Stochastic Gradient Descent (SGD ) optimizer and our specified minimum learning rate. The first time you run the script, you should set the --lr-find command line argument to use the Keras Learning Rate Finder to determine the optimal learning rate. Let’s see how that works: # check to see if we are attempting to find an optimal learning rate # before training for the full number of epochs if args["lr_find"] > 0: # initialize the learning rate finder and then train with learning # rates ranging from 1e-10 to 1e+1 print("[INFO] finding learning rate...") lrf = LearningRateFinder(model) lrf.find( aug.flow(trainX, trainY, batch_size=config. BATCH_SIZE), 1e-10, 1e+1, stepsPerEpoch=np.ceil((trainX.shape[0] / float(config. BATCH_SIZE))), epochs=20, batchSize=config. BATCH_SIZE) # plot the loss for the various learning rates and save the # resulting plot to disk lrf.plot_loss() plt.savefig(config. LRFIND_PLOT_PATH) # gracefully exit the script so we can adjust our learning rates # in the config and then train the network for our full set of # epochs print("[INFO] learning rate finder complete") print("[INFO] examine plot and adjust learning rates before training") sys.exit(0) Line 115 checks to see if we should attempt to find optimal learning rates. Assuming so, we: Initialize LearningRateFinder (Line 119). Start training with a 1e-10 learning rate and exponentially increase it until we hit 1e+1 (Lines 120-125). Plot the loss vs. learning rate and save the resulting figure (Lines 129 and 130).
Gracefully exit the script after printing a message instructing the user to inspect the learning rate finder plot (Lines 135-137). After this code executes we now need to: Step #1: Review the generated plot. Step #2: Update config.py with our MIN_LR and MAX_LR, respectively. Step #3: Train the network on our full dataset. Assuming we have completed Steps #1 and #2, let’s now handle Step #3 where our minimum and maximum learning rate have already been found and updated in the config. In this case, it is time to initialize our Cyclical Learning Rate class and commence training: # otherwise, we have already defined a learning rate space to train # over, so compute the step size and initialize the cyclic learning # rate method stepSize = config. STEP_SIZE * (trainX.shape[0] // config. BATCH_SIZE) clr = CyclicLR( mode=config. CLR_METHOD, base_lr=config. MIN_LR, max_lr=config.
https://pyimagesearch.com/2019/11/11/detecting-natural-disasters-with-keras-and-deep-learning/
MAX_LR, step_size=stepSize) # train the network print("[INFO] training network...") H = model.fit_generator( aug.flow(trainX, trainY, batch_size=config. BATCH_SIZE), validation_data=(valX, valY), steps_per_epoch=trainX.shape[0] // config. BATCH_SIZE, epochs=config. NUM_EPOCHS, callbacks=[clr], verbose=1) Lines 142-147 initialize our CyclicLR . Lines 151-157 then train our model using .fit_generator with our aug data augmentation object and our clr callback. Upon training completion, we proceed to evaluate and save our model : # evaluate the network and show a classification report print("[INFO] evaluating network...") predictions = model.predict(testX, batch_size=config. BATCH_SIZE) print(classification_report(testY.argmax(axis=1), predictions.argmax(axis=1), target_names=config. CLASSES)) # serialize the model to disk print("[INFO] serializing network to '{}'...".format(config. MODEL_PATH)) model.save(config. MODEL_PATH) Line 161 makes predictions on our test set.
Those predictions are passed into Lines 162 and 163 which print a classification report summary. Line 167 serializes and saves the fine-tuned model to disk. Finally, let’s plot both our training history and CLR history: # construct a plot that plots and saves the training history N = np.arange(0, config. NUM_EPOCHS) plt.style.use("ggplot") plt.figure() plt.plot(N, H.history["loss"], label="train_loss") plt.plot(N, H.history["val_loss"], label="val_loss") plt.plot(N, H.history["accuracy"], label="train_acc") plt.plot(N, H.history["val_accuracy"], label="val_acc") plt.title("Training Loss and Accuracy") plt.xlabel("Epoch #") plt.ylabel("Loss/Accuracy") plt.legend(loc="lower left") plt.savefig(config. TRAINING_PLOT_PATH) # plot the learning rate history N = np.arange(0, len(clr.history["lr"])) plt.figure() plt.plot(N, clr.history["lr"]) plt.title("Cyclical Learning Rate (CLR)") plt.xlabel("Training Iterations") plt.ylabel("Learning Rate") plt.savefig(config. CLR_PLOT_PATH) Lines 170-181 generate a plot of our training history and save the plot to disk. Note: In TensorFlow 2.0, the history dictionary keys have changed from acc to accuracy and val_acc to val_accuracy . It is especially confusing since “accuracy” is spelled out now, but “validation” is not. Take special care with this nuance depending on your TensorFlow version. Lines 184-190 plot our Cyclical Learning Rate history and save the figure to disk.
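If you need the plotting code to work across TensorFlow/Keras versions, one defensive option (a small sketch, not part of the original script) is to look up whichever key is actually present in the history dictionary:

# fall back to the old-style "acc"/"val_acc" keys when the new-style
# "accuracy"/"val_accuracy" keys are not present
acc_key = "accuracy" if "accuracy" in H.history else "acc"
val_acc_key = "val_" + acc_key
plt.plot(N, H.history[acc_key], label="train_acc")
plt.plot(N, H.history[val_acc_key], label="val_acc")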
Finding our initial learning rate Before we attempt to fine-tune our model to recognize natural disasters, let’s first use our learning rate finder to find an optimal set of learning rate ranges. Using this optimal learning rate range we’ll then be able to apply Cyclical Learning Rates to improve our model accuracy. Make sure you have both: Used the “Downloads” section of this tutorial to download the source code. Downloaded the dataset using the “Downloading the natural disasters dataset” section above. From there, open up a terminal and execute the following command: $ python train.py --lr-find 1 [INFO] loading images... [INFO] processing data... [INFO] compiling model... [INFO] finding learning rate... Epoch 1/20 94/94 [==============================] - 29s 314ms/step - loss: 9.7411 - accuracy: 0.2664 Epoch 2/20 94/94 [==============================] - 28s 295ms/step - loss: 9.5912 - accuracy: 0.2701 Epoch 3/20 94/94 [==============================] - 27s 291ms/step - loss: 9.4601 - accuracy: 0.2731 ... Epoch 12/20 94/94 [==============================] - 27s 290ms/step - loss: 2.7111 - accuracy: 0.7764 Epoch 13/20 94/94 [==============================] - 27s 286ms/step - loss: 5.9785 - accuracy: 0.6084 Epoch 14/20 47/94 [==============>...............] - ETA: 13s - loss: 10.8441 - accuracy: 0.3261 [INFO] learning rate finder complete [INFO] examine plot and adjust learning rates before training Provided the train.py script exited without error, you should now have a file named lrfind_plot.png in your output directory. Take a second now to inspect this image: Figure 4: Using a Keras Learning Rate Finder to find the optimal learning rates to fine tune our CNN on our natural disaster dataset. We will use the dataset to train a model for detecting natural disasters with the Keras deep learning framework. Examining the plot you can see that our model initially starts to learn and gain traction around 1e-6 . Our loss continues to drop until approximately 1e-4 where it starts to rise again, a sure sign of overfitting. Our optimal learning rate range is, therefore, 1e-6 to 1e-4 .
Update our learning rates Now that we know our optimal learning rates, let’s go back to our config.py file and update them accordingly: # define the minimum learning rate, maximum learning rate, batch size, # step size, CLR method, and number of epochs MIN_LR = 1e-6 MAX_LR = 1e-4 BATCH_SIZE = 32 STEP_SIZE = 8 CLR_METHOD = "triangular" NUM_EPOCHS = 48 Notice on Lines 19 and 20 (highlighted) of our configuration file that the MIN_LR and MAX_LR learning rate values are freshly updated. These values were found by inspecting our Keras Learning Rate Finder plot in the section above. Training the natural disaster detection model with Keras We can now fine-tune our model to recognize natural disasters! Execute the following command which will train our network over the full set of epochs: $ python train.py [INFO] loading images... [INFO] processing data... [INFO] compiling model... [INFO] training network... Epoch 1/48 93/93 [==============================] - 32s 343ms/step - loss: 8.5819 - accuracy: 0.3254 - val_loss: 2.5915 - val_accuracy: 0.6829 Epoch 2/48 93/93 [==============================] - 30s 320ms/step - loss: 4.2144 - accuracy: 0.6194 - val_loss: 1.2390 - val_accuracy: 0.8573 Epoch 3/48 93/93 [==============================] - 29s 316ms/step - loss: 2.5044 - accuracy: 0.7605 - val_loss: 1.0052 - val_accuracy: 0.8862 Epoch 4/48 93/93 [==============================] - 30s 322ms/step - loss: 2.0702 - accuracy: 0.8011 - val_loss: 0.9150 - val_accuracy: 0.9070 Epoch 5/48 93/93 [==============================] - 29s 313ms/step - loss: 1.5996 - accuracy: 0.8366 - val_loss: 0.7397 - val_accuracy: 0.9268 ... Epoch 44/48 93/93 [==============================] - 28s 304ms/step - loss: 0.2180 - accuracy: 0.9275 - val_loss: 0.2608 - val_accuracy: 0.9476 Epoch 45/48 93/93 [==============================] - 29s 315ms/step - loss: 0.2521 - accuracy: 0.9178 - val_loss: 0.2693 - val_accuracy: 0.9449 Epoch 46/48 93/93 [==============================] - 29s 312ms/step - loss: 0.2330 - accuracy: 0.9284 - val_loss: 0.2687 - val_accuracy: 0.9467 Epoch 47/48 93/93 [==============================] - 29s 310ms/step - loss: 0.2120 - accuracy: 0.9322 - val_loss: 0.2646 - val_accuracy: 0.9476 Epoch 48/48 93/93 [==============================] - 29s 311ms/step - loss: 0.2237 - accuracy: 0.9318 - val_loss: 0.2664 - val_accuracy: 0.9485 [INFO] evaluating network... precision recall f1-score support Cyclone 0.99 0.97 0.98 205 Earthquake 0.96 0.93 0.95 362 Flood 0.90 0.94 0.92 267 Wildfire 0.96 0.97 0.96 273 accuracy 0.95 1107 macro avg 0.95 0.95 0.95 1107 weighted avg 0.95 0.95 0.95 1107 [INFO] serializing network to 'output/natural_disaster.model'... Here you can see that we are obtaining 95% accuracy when recognizing natural disasters in the testing set! Examining our training plot we can see that our validation loss follows our training loss, implying there is little overfitting within our dataset itself: Figure 5: Training history accuracy/loss curves for creating a natural disaster classifier using Keras and deep learning. Finally, we have our learning rate plot which shows our our CLR callback oscillates the learning rate between our MIN_LR and MAX_LR, respectively: Figure 6: Cyclical learning rates are used with Keras and deep learning for detecting natural disasters. 
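As a quick sanity check on Figure 6, you can work out the length of one triangular cycle from the settings above. This is back-of-the-envelope arithmetic that assumes the 93 steps per epoch shown in the training log:

# one half cycle = STEP_SIZE * (batches per epoch) iterations
steps_per_epoch = 93                 # from the training output above
half_cycle = 8 * steps_per_epoch     # STEP_SIZE = 8 -> 744 iterations
full_cycle = 2 * half_cycle          # 1,488 iterations
print(full_cycle / steps_per_epoch)  # 16.0 epochs per cycle, i.e. 3 full cycles over 48 epochs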
Implementing our natural disaster prediction script Now that our model has been trained, let’s see how we can use it to make predictions on images/video it has never seen before — and thereby pave the way for an automatic natural disaster detection system. To create this script we’ll take advantage of the temporal nature of videos, specifically the assumption that subsequent frames in a video will have similar semantic contents. By performing rolling prediction accuracy we’ll be able to “smooth out” the predictions and avoid “prediction flickering”. I have already covered this near-identical script in-depth in my Video Classification with Keras and Deep Learning article.
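To make the rolling average concrete before we look at the script, here is a tiny standalone demonstration using made-up two-class probabilities (the values are purely illustrative):

from collections import deque
import numpy as np

# keep only the most recent predictions and average them before deciding
Q = deque(maxlen=3)
for preds in ([0.9, 0.1], [0.2, 0.8], [0.6, 0.4]):
    Q.append(np.array(preds))
    avg = np.array(Q).mean(axis=0)
    print(np.round(avg, 2), "->", avg.argmax())

Notice that the second frame on its own would have flipped the prediction to class 1, but the averaged result stays with class 0; this is the “flicker” the rolling average is designed to suppress.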
Be sure to refer to that article for the full background and more-detailed code explanations. To accomplish natural disaster video classification let’s inspect predict.py: # import the necessary packages from tensorflow.keras.models import load_model from pyimagesearch import config from collections import deque import numpy as np import argparse import cv2 # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-i", "--input", required=True, help="path to our input video") ap.add_argument("-o", "--output", required=True, help="path to our output video") ap.add_argument("-s", "--size", type=int, default=128, help="size of queue for averaging") ap.add_argument("-d", "--display", type=int, default=-1, help="whether or not output frame should be displayed to screen") args = vars(ap.parse_args()) Lines 2-7 load necessary packages and modules. In particular, we’ll be using deque from Python’s collections module to assist with our rolling average algorithm. Lines 10-19 parse command line arguments including the path to our input/output videos, size of our rolling average queue, and whether we will display the output frame to our screen while the video is being generated. Let’s go ahead and load our natural disaster classification model and initialize our queue + video stream: # load the trained model from disk print("[INFO] loading model and label binarizer...") model = load_model(config. MODEL_PATH) # initialize the predictions queue Q = deque(maxlen=args["size"]) # initialize the video stream, pointer to output video file, and # frame dimensions print("[INFO] processing video...") vs = cv2.VideoCapture(args["input"]) writer = None (W, H) = (None, None) With our model , Q , and vs ready to go, we’ll begin looping over frames: # loop over frames from the video file stream while True: # read the next frame from the file (grabbed, frame) = vs.read() # if the frame was not grabbed, then we have reached the end # of the stream if not grabbed: break # if the frame dimensions are empty, grab them if W is None or H is None: (H, W) = frame.shape[:2] # clone the output frame, then convert it from BGR to RGB # ordering and resize the frame to a fixed 224x224 output = frame.copy() frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) frame = cv2.resize(frame, (224, 224)) frame = frame.astype("float32") Lines 38-47 grab a frame and store its dimensions. Lines 51-54 duplicate our frame for output purposes and then preprocess it for classification. The preprocessing steps are, and must be, the same as those that we performed for training. Now let’s make a natural disaster prediction on the frame: # make predictions on the frame and then update the predictions # queue preds = model.predict(np.expand_dims(frame, axis=0))[0] Q.append(preds) # perform prediction averaging over the current history of # previous predictions results = np.array(Q).mean(axis=0) i = np.argmax(results) label = config.
CLASSES[i] Lines 58 and 59 perform inference and add the predictions to our queue. Line 63 performs a rolling average prediction of the predictions available in the Q . Lines 64 and 65 then extract the highest probability class label so that we can annotate our frame: # draw the activity on the output frame text = "activity: {}".format(label) cv2.putText(output, text, (35, 50), cv2.FONT_HERSHEY_SIMPLEX, 1.25, (0, 255, 0), 5) # check if the video writer is None if writer is None: # initialize our video writer fourcc = cv2.VideoWriter_fourcc(*"MJPG") writer = cv2.VideoWriter(args["output"], fourcc, 30, (W, H), True) # write the output frame to disk writer.write(output) # check to see if we should display the output frame to our # screen if args["display"] > 0: # show the output image cv2.imshow("Output", output) key = cv2.waitKey(1) & 0xFF # if the `q` key was pressed, break from the loop if key == ord("q"): break # release the file pointers print("[INFO] cleaning up...") writer.release() vs.release() Lines 68-70 annotate the natural disaster activity in the corner of the output frame. Lines 73-80 handle writing the output frame to a video file. If the --display flag is set, Lines 84-91 display the frame to the screen and capture keypresses. Otherwise, processing continues until completion at which point the loop is finished and we perform cleanup (Lines 95 and 96). Predicting natural disasters with Keras For the purposes of this tutorial, I downloaded example natural disaster videos via YouTube — the exact videos are listed in the “Credits” section below. You can either use your own example videos or download the videos via the credits list. Either way, make sure you have used the “Downloads” section of this tutorial to download the source code and pre-trained natural disaster prediction model. Once downloaded you can use the following command to launch the predict.py script: $ python predict.py --input videos/terrific_natural_disasters_compilation.mp4 \ --output output/natural_disasters_output.avi [INFO] processing video... [INFO] cleaning up... Here you can see a sample result of our model correctly classifying this video clip as “flood”: Figure 7: Natural disaster “flood” classification with Keras and Deep Learning.
The following example comes from the 2016 Fort McMurray wildfire: Figure 8: Detecting “wildfires” and other natural disasters with Keras, deep learning, and computer vision. For fun, I then tried applying the natural disaster detector to the movie San Andreas (2015): Figure 9: Detecting “earthquake” damage with Keras, deep learning, and Python. Notice how our model was able to correctly label the video clip as an (overly dramatized) earthquake. You can find a full demo video below:
Credits Dataset curator: Gautam Kumar Video sources for the demo: Hurricane Lorenzo Wreaks Havoc on The Azores on its way to UK Fort McMurray wildfire: A timeline of a disaster Floods 101 | National Geographic 9.6 Magnitude Earthquake (Scenes from the film San Andreas 2015) Terrific National Disasters Compilation Audio for the demo video: Bensound’s “Epic” Summary In this tutorial, you learned how to use computer vision and the Keras deep learning library to automatically detect natural disasters from images.
To create our natural disaster detector we fine-tuned VGG16 (pre-trained on ImageNet) on a dataset of 4,428 images belonging to four classes: Cyclone/hurricane Earthquake Flood Wildfire After our model was trained we evaluated it on the testing set, finding that it obtained 95% classification accuracy. Using this model you can continue to perform research in natural disaster detection, ultimately helping save lives and reduce injury. I hope you enjoyed this post!
https://pyimagesearch.com/2019/11/18/fire-and-smoke-detection-with-keras-and-deep-learning/
In this tutorial, you will learn how to detect fire and smoke using Computer Vision, OpenCV, and the Keras Deep Learning library. Today’s tutorial is inspired by an email I received last week from PyImageSearch reader, Daniel. Daniel writes: Hey Adrian, I’m not sure if you’ve seen the news, but my home state of California has been absolutely ravaged by wildfires over the past few weeks. My family lives in the Los Angeles area, not too far from the Getty fire. It’s hard not to be concerned about our home and our safety. It’s a scary situation and it got me thinking: Do you think computer vision could be used to detect wildfires? What about fires that start in people’s homes? If you could write a tutorial on the topic I would appreciate it. I’d love to learn from it and do my part to help others. The short answer is, yes, computer vision and deep learning can be used to detect wildfires: IoT/Edge devices equipped with cameras can be deployed strategically throughout hillsides, ridges, and high elevation areas, automatically monitoring for signs of smoke or fire.
Drones and quadcopters can be flown above areas prone to wildfires, strategically scanning for smoke. Satellites can be used to take photos of large acreage areas while computer vision and deep learning algorithms process these images, looking for signs of smoke. That’s all fine and good for wildfires — but what if you wanted to monitor your own home for smoke or fire? The answer there is to augment existing sensors to aid in fire/smoke detection: Existing smoke detectors use a light source and a photoelectric sensor to detect whether light from the source is being scattered by airborne particles (implying smoke is present). You could then distribute temperature sensors around the house to monitor the temperature of each room. Cameras could also be placed in areas where fires are likely to start (kitchen, garage, etc.). Each individual sensor could be used to trigger an alarm, or you could relay the sensor information to a central hub that aggregates and analyzes the sensor data, computing a probability of a home fire. Unfortunately, that’s all easier said than done. While there are hundreds of computer vision/deep learning practitioners around the world actively working on fire and smoke detection (including PyImageSearch Gurus member, David Bonn), it’s still an open-ended problem. That said, today I’ll help you get your start in smoke and fire detection — by the end of this tutorial, you’ll have a deep learning model capable of detecting fire in images (I’ve even included my pre-trained model to get you up and running immediately).
To learn how to create your own fire and smoke detector with Computer Vision, Deep Learning, and Keras, just keep reading! Fire and smoke detection with Keras and Deep Learning Figure 1: Wildfires can quickly become out of control and endanger lives in many parts of the world. In this article, we will learn to conduct fire and smoke detection with Keras and deep learning. In the first part of this tutorial we’ll discuss the two datasets we’ll be using for fire and smoke detection. From there we’ll review our directory structure for the project and then implement FireDetectionNet, the CNN architecture we’ll be using to detect fire and smoke in images/video. Next, we’ll train our fire detection model and analyze the classification accuracy and results. We’ll wrap up the tutorial by discussing some of the limitations and drawbacks of the approach, including how you can improve and extend the method. Our fire and smoke dataset Figure 2: Today’s fire detection dataset is curated by Gautam Kumar and pruned by David Bonn (both of whom are PyImageSearch readers). We will put the dataset to work with Keras and deep learning to create a fire/smoke detector.
The dataset we’ll be using for fire and smoke examples was curated by PyImageSearch reader, Gautam Kumar. Gautam gathered a total of 1,315 images by searching Google Images for queries related to the term “fire”, “smoke”, etc. However, the original dataset has not been cleansed of extraneous, irrelevant images that are not related to fire and smoke (i.e., examples of famous buildings before a fire occurred). Fellow PyImageSearch reader, David Bonn, took the time to manually go through the fire/smoke images and identify ones that should not be included. Note: I took the list of extraneous images identified by David and then created a shell script to delete them from the dataset. The shell script can be found in the “Downloads” section of this tutorial. The 8-scenes dataset Figure 3: We will combine Gautam’s fire dataset with the 8-scenes natural image dataset so that we can classify Fire vs. Non-fire using Keras and deep learning. The dataset we’ll be using for Non-fire examples is called 8-scenes as it contains 2,688 image examples belonging to eight natural scene categories (all without fire): Coast Mountain Forest Open country Street Inside city Tall buildings Highways The dataset was originally curated by Oliva and Torralba in their 2001 paper, Modeling the shape of the scene: a holistic representation of the spatial envelope. The 8-scenes dataset is a natural complement to our fire/smoke dataset as it depicts natural scenes as they should look without fire or smoke present. While this dataset has 8 unique classes, we will consider the dataset as a single Non-fire class when we combine it with Gautam’s Fire dataset.
Project structure Figure 4: The project structure for today’s tutorial on fire and smoke detection with deep learning using the Keras/TensorFlow framework. Go ahead and grab today’s .zip from the source code and pre-trained model using the “Downloads” section of this blog post. From there you can unzip it on your machine and your project will look like Figure 4. There is an exception: neither dataset .zip (white arrows) will be present yet. We will download, extract, and prune the datasets in the next section. Our output/ directory contains: Our serialized fire detection model. We will train the model today with Keras and deep learning. The Learning Rate Finder plot will be generated and inspected for the optimal learning rate prior to training. A training history plot will be generated upon completion of the training process. The examples/ subdirectory will be populated by predict_fire.py with sample images that will be annotated for demonstration and verification purposes.
Our pyimagesearch module holds: config.py : Our customizable configuration. FireDetectionNet : Our Keras Convolutional Neural Network class designed specifically for detecting fire and smoke. LearningRateFinder : A Keras class for assisting in the process of finding the optimal learning rate for deep learning training. The root of the project contains three scripts: prune.sh : A simple bash script that removes irrelevant images from Gautam’s fire dataset. train.py : Our Keras deep learning training script. This script has two modes of operation: (1) Learning Rate Finder mode,and (2) training mode. predict_fire.py : A quick and dirty script which samples images from our dataset, generating annotated Fire/Non-fire images for verification. Let’s move on to preparing our Fire/Non-fire dataset in the next section. Preparing our Fire and Non-fire combined dataset Preparing our Fire and Non-fire dataset involves a four-step process: Step #1: Ensure you followed the instructions in the previous section to grab and unzip today’s files from the “Downloads” section. Step #2: Download and extract the fire/smoke dataset into the project.
Step #3: Prune the fire/smoke dataset for extraneous, irrelevant files. Step #4: Download and extract the 8-scenes dataset into the project. The result of Steps #2-4 will be a dataset consisting of two classes: Fire Non-fire Combining datasets is a tactic I often use. It saves valuable time and often leads to a great model. Let’s begin putting our combined dataset together. Step #2: Download and extract the fire/smoke dataset into the project. Download the fire/smoke dataset using this link. Store the .zip in the keras-fire-detection/ project directory that you extracted in the last section. Once downloaded, unzip the dataset: $ unzip Robbery_Accident_Fire_Database2.zip Step #3: Prune the dataset for extraneous, irrelevant files. Execute the prune.sh script to delete the extraneous, irrelevant files from the fire dataset: $ sh prune.sh At this point, we have Fire data.
Now we need Non-fire data for our two-class problem. Step #4: Download and extract the 8-scenes dataset into the project. Download the 8-scenes dataset using this link. Store the .zip in the keras-fire-detection/ project directory alongside the Fire dataset. Once downloaded, navigate to the project folder and unarchive the dataset: $ unzip spatial_envelope_256x256_static_8outdoorcategories.zip Review Project + Dataset Structure At this point, it is time to inspect our directory structure once more. Yours should be identical to mine: $ tree --dirsfirst --filelimit 10 . ├── Robbery_Accident_Fire_Database2 │ ├── Accident [887 entries] │ ├── Fire [1315 entries] │ ├── Robbery [2073 entries] │ └── readme.txt ├── spatial_envelope_256x256_static_8outdoorcategories [2689 entries] ├── output │ ├── examples [48 entries] │ ├── fire_detection.model │ │ ├── assets │ │ ├── variables │ │ │ ├── variables.data-00000-of-00002 │ │ │ ├── variables.data-00001-of-00002 │ │ │ └── variables.index │ │ └── saved_model.pb │ ├── lrfind_plot.png │ └── training_plot.png ├── pyimagesearch │ ├── __init__.py │ ├── config.py │ ├── firedetectionnet.py │ └── learningratefinder.py ├── Robbery_Accident_Fire_Database.zip ├── spatial_envelope_256x256_static_8outdoorcategories.zip ├── prune.sh ├── train.py └── predict_fire.py 11 directories, 16 files Ensure your dataset is pruned (i.e. the Fire/ directory should have exactly 1,315 entries and not the previous 1,405 entries). Our configuration file This project will span multiple Python files that will need to be executed, so let’s store all important variables in a single config.py file. Open up config.py now and insert the following code: # import the necessary packages import os # initialize the path to the fire and non-fire dataset directories FIRE_PATH = os.path.sep.join(["Robbery_Accident_Fire_Database2", "Fire"]) NON_FIRE_PATH = "spatial_envelope_256x256_static_8outdoorcategories" # initialize the class labels in the dataset CLASSES = ["Non-Fire", "Fire"] We’ll use the os module for combining paths (Line 2). Lines 5-7 contain paths to our (1) Fire images, and (2) Non-fire images.
Line 10 is a list of our two class names. Let’s set a handful of training parameters: # define the size of the training and testing split TRAIN_SPLIT = 0.75 TEST_SPLIT = 0.25 # define the initial learning rate, batch size, and number of epochs INIT_LR = 1e-2 BATCH_SIZE = 64 NUM_EPOCHS = 50 Lines 13 and 14 define the size of our training and testing dataset splits. Lines 17-19 contain three hyperparameters — the initial learning rate, batch size, and number of epochs to train for. From here, we’ll define a few paths: # set the path to the serialized model after training MODEL_PATH = os.path.sep.join(["output", "fire_detection.model"]) # define the path to the output learning rate finder plot and # training history plot LRFIND_PLOT_PATH = os.path.sep.join(["output", "lrfind_plot.png"]) TRAINING_PLOT_PATH = os.path.sep.join(["output", "training_plot.png"]) Lines 22-27 include paths to: Our yet-to-be-trained serialized fire detection model. The Learning Rate Finder plot which we will analyze to set our initial learning rate. A training accuracy/loss history plot. To wrap up our config we’ll define settings for prediction spot-checking: # define the path to the output directory that will store our final # output with labels/annotations along with the number of images to # sample OUTPUT_IMAGE_PATH = os.path.sep.join(["output", "examples"]) SAMPLE_SIZE = 50 Our prediction script will sample and annotate images using our model. Lines 32 and 33 include the path to output directory where we’ll store output classification results and the number of images to sample. Implementing our fire detection Convolutional Neural Network Figure 5: FireDetectionNet is a deep learning fire/smoke classification network built with the Keras deep learning framework. In this section we’ll implement FireDetectionNet, a Convolutional Neural Network used to detect smoke and fire in images.
This network utilizes depthwise separable convolution rather than standard convolution as depthwise separable convolution: Is more efficient, as Edge/IoT devices will have limited CPU and power draw. Requires less memory, as again, Edge/IoT devices have limited RAM. Requires less computation, as we have limited CPU horsepower. Can perform better than standard convolution in some cases, which can lead to a better fire/smoke detector (we will put rough numbers on these savings just after the code listing below). Let’s get started implementing FireDetectionNet now — open up the firedetectionnet.py file and insert the following code: # import the necessary packages from tensorflow.keras.models import Sequential from tensorflow.keras.layers import BatchNormalization from tensorflow.keras.layers import SeparableConv2D from tensorflow.keras.layers import MaxPooling2D from tensorflow.keras.layers import Activation from tensorflow.keras.layers import Flatten from tensorflow.keras.layers import Dropout from tensorflow.keras.layers import Dense class FireDetectionNet: @staticmethod def build(width, height, depth, classes): # initialize the model along with the input shape to be # "channels last" and the channels dimension itself model = Sequential() inputShape = (height, width, depth) chanDim = -1 Our TensorFlow 2.0 Keras imports span from Lines 2-9. We will use Keras’ Sequential API to build our fire detection CNN. Line 11 defines our FireDetectionNet class. We begin by defining the build method on Line 13. The build method accepts parameters including dimensions of our images (width, height, depth) as well as the number of classes we will be training our model to recognize (i.e. this parameter affects the softmax classifier head shape). We then initialize the model and inputShape (Lines 16-18).
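To make the efficiency claim above concrete, here is a rough parameter count for a single 3×3 convolution mapping 32 channels to 64 channels, ignoring biases (a back-of-the-envelope comparison, not numbers taken from the architecture code):

# standard convolution: k * k * c_in * c_out weights
# depthwise separable:  k * k * c_in (depthwise) + c_in * c_out (pointwise)
k, c_in, c_out = 3, 32, 64
standard = k * k * c_in * c_out            # 18,432
separable = k * k * c_in + c_in * c_out    # 288 + 2,048 = 2,336
print(round(standard / separable, 1))      # roughly 7.9x fewer weights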
From here we’ll define our first set of CONV => RELU => POOL layers: # CONV => RELU => POOL model.add(SeparableConv2D(16, (7, 7), padding="same", input_shape=inputShape)) model.add(Activation("relu")) model.add(BatchNormalization(axis=chanDim)) model.add(MaxPooling2D(pool_size=(2, 2))) These layers use a larger kernel size to both (1) reduce the input volume spatial dimensions faster, and (2) detect larger color blobs that contain fire. We’ll then define more CONV => RELU => POOL layer sets: # CONV => RELU => POOL model.add(SeparableConv2D(32, (3, 3), padding="same")) model.add(Activation("relu")) model.add(BatchNormalization(axis=chanDim)) model.add(MaxPooling2D(pool_size=(2, 2))) # (CONV => RELU) * 2 => POOL model.add(SeparableConv2D(64, (3, 3), padding="same")) model.add(Activation("relu")) model.add(BatchNormalization(axis=chanDim)) model.add(SeparableConv2D(64, (3, 3), padding="same")) model.add(Activation("relu")) model.add(BatchNormalization(axis=chanDim)) model.add(MaxPooling2D(pool_size=(2, 2))) Lines 34-40 allow our model to learn richer features by stacking two sets of CONV => RELU before applying a POOL . From here we’ll create our fully-connected head of the network: # first set of FC => RELU layers model.add(Flatten()) model.add(Dense(128)) model.add(Activation("relu")) model.add(BatchNormalization()) model.add(Dropout(0.5)) # second set of FC => RELU layers model.add(Dense(128)) model.add(Activation("relu")) model.add(BatchNormalization()) model.add(Dropout(0.5)) # softmax classifier model.add(Dense(classes)) model.add(Activation("softmax")) # return the constructed network architecture return model Lines 43-53 add two sets of FC => RELU layers. Lines 56 and 57 append our Softmax classifier prior to Line 60 returning the model . Creating our training script Our training script will be responsible for: Loading our Fire and Non-fire combined dataset from disk. Instantiating our FireDetectionNet architecture. Finding our optimal learning rate by using our LearningRateFinder class. Taking the optimal learning rate and training our network for the full set of epochs. Let’s get started! Open up the train.py file in your directory structure and insert the following code: # set the matplotlib backend so figures can be saved in the background import matplotlib matplotlib.use("Agg") # import the necessary packages from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.optimizers import SGD from tensorflow.keras.utils import to_categorical from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report from pyimagesearch.learningratefinder import LearningRateFinder from pyimagesearch.firedetectionnet import FireDetectionNet from pyimagesearch import config from imutils import paths import matplotlib.pyplot as plt import numpy as np import argparse import cv2 import sys Lines 1-19 handle our imports: matplotlib : For generating plots with Python.
Line 3 sets the backend so we can save our plots as image files. tensorflow.keras : Our TensorFlow 2.0 imports including data augmentation, stochastic gradient descent optimizer, and one-hot label encoder. sklearn : Two imports for dataset splitting and classification reporting. LearningRateFinder : A class we will use for finding an optimal learning rate prior to training. When we operate our script in this mode, it will generate a plot for us to (1) manually inspect and (2) insert the optimal learning rate into our configuration file. FireDetectionNet : The fire/smoke Convolutional Neural Network (CNN) that we built in the previous section. config : Our configuration file of settings for this training script (it also contains settings for our prediction script). paths : Contains functions from my imutils package to list images in a directory tree. argparse : For parsing command line argument flags. cv2 : OpenCV is used for loading and preprocessing images.
Now that we’ve imported packages, let’s define a reusable function to load our dataset: def load_dataset(datasetPath): # grab the paths to all images in our dataset directory, then # initialize our lists of images imagePaths = list(paths.list_images(datasetPath)) data = [] # loop over the image paths for imagePath in imagePaths: # load the image and resize it to be a fixed 128x128 pixels, # ignoring aspect ratio image = cv2.imread(imagePath) image = cv2.resize(image, (128, 128)) # add the image to the data lists data.append(image) # return the data list as a NumPy array return np.array(data, dtype="float32") Our load_dataset helper function assists with loading, preprocessing, and preparing both the Fire and Non-fire datasets. Line 21 defines the function which accepts a path to the dataset. Line 24 grabs all image paths in the dataset. Lines 28-35 loop over the imagePaths . Images are loaded, resized to 128×128 dimensions, and added to the data list. Line 38 returns the data in NumPy array format. We’ll now parse a single command line argument: # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-f", "--lr-find", type=int, default=0, help="whether or not to find optimal learning rate") args = vars(ap.parse_args()) The --lr-find flag sets the mode for our script. If the flag is set to 1 , then we’ll be in our learning rate finder mode, generating a learning rate plot for us to inspect. Otherwise, our script will operate in training mode and train the network for the full set of epochs (i.e. when the --lr-find flag is not present).
Let’s go ahead and load our data now: # load the fire and non-fire images print("[INFO] loading data...") fireData = load_dataset(config.FIRE_PATH) nonFireData = load_dataset(config.NON_FIRE_PATH) # construct the class labels for the data fireLabels = np.ones((fireData.shape[0],)) nonFireLabels = np.zeros((nonFireData.shape[0],)) # stack the fire data with the non-fire data, then scale the data # to the range [0, 1] data = np.vstack([fireData, nonFireData]) labels = np.hstack([fireLabels, nonFireLabels]) data /= 255 Lines 48 and 49 load and resize the Fire and Non-fire images. Lines 52 and 53 construct labels for both classes (1 for Fire and 0 for Non-fire). Subsequently, we stack the data and labels into a single NumPy array (i.e. combine the datasets) via Lines 57 and 58. Line 59 scales pixel intensities to the range [0, 1]. We have three more steps to prepare our data: # perform one-hot encoding on the labels and account for skew in the # labeled data labels = to_categorical(labels, num_classes=2) classTotals = labels.sum(axis=0) classWeight = classTotals.max() / classTotals # construct the training and testing split (trainX, testX, trainY, testY) = train_test_split(data, labels, test_size=config.TEST_SPLIT, random_state=42) First, we perform one-hot encoding on our labels (Line 63). Then, we account for skew in our dataset (Lines 64 and 65). To do so, we compute the classWeight to weight Fire images more than Non-fire images during the gradient update (as we have over 2x more Non-fire images than Fire images).
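To make the class weighting concrete, here is approximately what those two lines compute for this dataset (the counts below assume all 2,688 Non-fire and 1,315 Fire images were loaded; this is an illustration rather than code from the project):

# illustrative class weight computation (approximate counts)
import numpy as np

classTotals = np.array([2688.0, 1315.0])        # [Non-fire, Fire] after one-hot encoding
classWeight = classTotals.max() / classTotals   # => [1.0, ~2.04]
print(classWeight)

In other words, each Fire image contributes roughly twice as much to the gradient update as a Non-fire image, compensating for the class imbalance.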
Lines 68 and 69 construct training and testing splits based on our config (in my config I have the split set to 75% training/25% testing). Next, we’ll initialize data augmentation and compile our FireDetectionNet model: # initialize the training data augmentation object aug = ImageDataGenerator( rotation_range=30, zoom_range=0.15, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.15, horizontal_flip=True, fill_mode="nearest") # initialize the optimizer and model print("[INFO] compiling model...") opt = SGD(lr=config.INIT_LR, momentum=0.9, decay=config.INIT_LR / config.NUM_EPOCHS) model = FireDetectionNet.build(width=128, height=128, depth=3, classes=2) model.compile(loss="binary_crossentropy", optimizer=opt, metrics=["accuracy"]) Lines 74-79 instantiate our data augmentation object. We then build and compile our FireDetectionNet model (Lines 83-88). Note that our initial learning rate and decay are set as we initialize our SGD optimizer.
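One detail worth making explicit is the decay term passed to SGD above. Using the values we will end up with in config.py (an initial learning rate of 1e-2 and 50 epochs), and assuming Keras' standard time-based decay schedule for the legacy SGD optimizer, the arithmetic looks like this:

# illustrative only: how the SGD decay value is derived from the config
INIT_LR = 1e-2
NUM_EPOCHS = 50

decay = INIT_LR / NUM_EPOCHS   # 2e-4
# with time-based decay the effective rate is roughly
# lr = INIT_LR / (1 + decay * iterations), so it shrinks slightly after every batch update
print(decay)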
Let’s handle our Learning Rate Finder mode: # check to see if we are attempting to find an optimal learning rate # before training for the full number of epochs if args["lr_find"] > 0: # initialize the learning rate finder and then train with learning # rates ranging from 1e-10 to 1e+1 print("[INFO] finding learning rate...") lrf = LearningRateFinder(model) lrf.find( aug.flow(trainX, trainY, batch_size=config.BATCH_SIZE), 1e-10, 1e+1, stepsPerEpoch=np.ceil((trainX.shape[0] / float(config.BATCH_SIZE))), epochs=20, batchSize=config.BATCH_SIZE, classWeight=classWeight) # plot the loss for the various learning rates and save the # resulting plot to disk lrf.plot_loss() plt.savefig(config.LRFIND_PLOT_PATH) # gracefully exit the script so we can adjust our learning rates # in the config and then train the network for our full set of # epochs print("[INFO] learning rate finder complete") print("[INFO] examine plot and adjust learning rates before training") sys.exit(0) Line 92 checks to see if we should attempt to find optimal learning rates. Assuming so, we: Initialize LearningRateFinder (Line 96). Start training with a 1e-10 learning rate and exponentially increase it until we hit 1e+1 (Lines 97-103). Plot the loss vs. learning rate and save the resulting figure (Lines 107 and 108). Gracefully exit the script after printing a couple of messages to the user (Line 115). After this code executes we now need to: Step #1: Manually inspect the generated learning rate plot. Step #2: Update config.py with our INIT_LR (i.e., the optimal learning rate we determined by analyzing the plot). Step #3: Train the network on our full dataset. Assuming we have completed Step #1 and Step #2, let’s now handle Step #3, where our initial learning rate has been determined and updated in the config.
In this case, it is time to handle training mode in our script: # train the network print("[INFO] training network...") H = model.fit_generator( aug.flow(trainX, trainY, batch_size=config.BATCH_SIZE), validation_data=(testX, testY), steps_per_epoch=trainX.shape[0] // config.BATCH_SIZE, epochs=config.NUM_EPOCHS, class_weight=classWeight, verbose=1) Lines 119-125 train our fire detection model using data augmentation and our skewed dataset class weighting. Be sure to review my .fit_generator tutorial.
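Note: fit_generator was the standard API for generator-based training when this tutorial was written. If you are running TensorFlow 2.1 or newer, fit_generator is deprecated and model.fit accepts generators directly; a rough, untested equivalent of the call above would be the following (newer releases also expect class_weight to be a dictionary rather than an array):

# rough equivalent for newer TensorFlow versions (my adaptation, not the downloaded code)
H = model.fit(
    aug.flow(trainX, trainY, batch_size=config.BATCH_SIZE),
    validation_data=(testX, testY),
    steps_per_epoch=trainX.shape[0] // config.BATCH_SIZE,
    epochs=config.NUM_EPOCHS,
    class_weight=dict(enumerate(classWeight)),
    verbose=1)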
Finally, we’ll evaluate the model, serialize it to disk, and plot the training history: # evaluate the network and show a classification report print("[INFO] evaluating network...") predictions = model.predict(testX, batch_size=config.BATCH_SIZE) print(classification_report(testY.argmax(axis=1), predictions.argmax(axis=1), target_names=config.CLASSES)) # serialize the model to disk print("[INFO] serializing network to '{}'...".format(config.MODEL_PATH)) model.save(config.MODEL_PATH) # construct a plot that plots and saves the training history N = np.arange(0, config.NUM_EPOCHS) plt.style.use("ggplot") plt.figure() plt.plot(N, H.history["loss"], label="train_loss") plt.plot(N, H.history["val_loss"], label="val_loss") plt.plot(N, H.history["accuracy"], label="train_acc") plt.plot(N, H.history["val_accuracy"], label="val_acc") plt.title("Training Loss and Accuracy") plt.xlabel("Epoch #") plt.ylabel("Loss/Accuracy") plt.legend(loc="lower left") plt.savefig(config.TRAINING_PLOT_PATH) Lines 129-131 make predictions on test data and print a classification report in our terminal. Line 135 serializes the model and saves it to disk. We’ll load this model again in our prediction script. Lines 138-149 generate a historical plot of accuracy/loss curves during training. We will inspect this plot for overfitting or underfitting. Training the fire detection model with Keras Training our fire detection model is broken down into three steps: Step #1: Run the train.py script with the --lr-find command line argument to find our optimal learning rate. Step #2: Update Line 17 of our configuration file (config.py) to set our INIT_LR value as the optimal learning rate. Step #3: Execute the train.py script again, but this time let it train for the full set of epochs. Start by using the “Downloads” section of this tutorial to download the source code.
From there you can perform Step #1 by executing the following command: $ python train.py --lr-find 1 [INFO] loading data... [INFO] finding learning rate... Epoch 1/20 47/47 [==============================] - 10s 221ms/step - loss: 1.2949 - accuracy: 0.4923 Epoch 2/20 47/47 [==============================] - 11s 228ms/step - loss: 1.3315 - accuracy: 0.4897 Epoch 3/20 47/47 [==============================] - 10s 218ms/step - loss: 1.3409 - accuracy: 0.4860 Epoch 4/20 47/47 [==============================] - 10s 215ms/step - loss: 1.3973 - accuracy: 0.4770 Epoch 5/20 47/47 [==============================] - 10s 219ms/step - loss: 1.3170 - accuracy: 0.4957 ... Epoch 15/20 47/47 [==============================] - 10s 216ms/step - loss: 0.5097 - accuracy: 0.7728 Epoch 16/20 47/47 [==============================] - 10s 217ms/step - loss: 0.5507 - accuracy: 0.7345 Epoch 17/20 47/47 [==============================] - 10s 220ms/step - loss: 0.7554 - accuracy: 0.7089 Epoch 18/20 47/47 [==============================] - 10s 220ms/step - loss: 1.1833 - accuracy: 0.6606 Epoch 19/20 37/47 [======================>.......] - ETA: 2s - loss: 3.1446 - accuracy: 0.6338 [INFO] learning rate finder complete [INFO] examine plot and adjust learning rates before training Figure 6: Analyzing our optimal deep learning rate finder plot. We will use the optimal learning rate to train a fire/smoke detector using Keras and Python. Examining Figure 6 above, you can see that our network is able to gain traction and start to learn around 1e-5. The lowest loss can be found between 1e-2 and 1e-1; however, at 1e-1 we can see loss starting to increase sharply, implying that the learning rate is too large and training is beginning to diverge. To be safe we should use an initial learning rate of 1e-2. Let’s now move on to Step #2. Open up config.py and scroll to Lines 16-19 where we set our training hyperparameters: # define the initial learning rate, batch size, and number of epochs INIT_LR = 1e-2 BATCH_SIZE = 64 NUM_EPOCHS = 50 Here we see our initial learning rate (INIT_LR) value — we need to set this value to 1e-2 (as our code indicates). The final step (Step #3) is to train FireDetectionNet for the full set of NUM_EPOCHS: $ python train.py [INFO] loading data... [INFO] compiling model... [INFO] training network... Epoch 1/50 46/46 [==============================] - 11s 233ms/step - loss: 0.6813 - accuracy: 0.6974 - val_loss: 0.6583 - val_accuracy: 0.6464 Epoch 2/50 46/46 [==============================] - 11s 232ms/step - loss: 0.4886 - accuracy: 0.7631 - val_loss: 0.7774 - val_accuracy: 0.6464 Epoch 3/50 46/46 [==============================] - 10s 224ms/step - loss: 0.4414 - accuracy: 0.7845 - val_loss: 0.9470 - val_accuracy: 0.6464 Epoch 4/50 46/46 [==============================] - 10s 222ms/step - loss: 0.4193 - accuracy: 0.7917 - val_loss: 1.0790 - val_accuracy: 0.6464 Epoch 5/50 46/46 [==============================] - 10s 224ms/step - loss: 0.4015 - accuracy: 0.8070 - val_loss: 1.2034 - val_accuracy: 0.6464 ...
Epoch 46/50 46/46 [==============================] - 10s 222ms/step - loss: 0.1935 - accuracy: 0.9275 - val_loss: 0.2985 - val_accuracy: 0.8781 Epoch 47/50 46/46 [==============================] - 10s 221ms/step - loss: 0.1812 - accuracy: 0.9244 - val_loss: 0.2325 - val_accuracy: 0.9031 Epoch 48/50 46/46 [==============================] - 10s 226ms/step - loss: 0.1857 - accuracy: 0.9241 - val_loss: 0.2788 - val_accuracy: 0.8911 Epoch 49/50 46/46 [==============================] - 11s 229ms/step - loss: 0.2065 - accuracy: 0.9129 - val_loss: 0.2177 - val_accuracy: 0.9121 Epoch 50/50 46/46 [==============================] - 63s 1s/step - loss: 0.1842 - accuracy: 0.9316 - val_loss: 0.2376 - val_accuracy: 0.9111 [INFO] evaluating network... precision recall f1-score support Non-Fire 0.96 0.90 0.93 647 Fire 0.83 0.94 0.88 354 accuracy 0.91 1001 macro avg 0.90 0.92 0.91 1001 weighted avg 0.92 0.91 0.91 1001 [INFO] serializing network to 'output/fire_detection.model'... Figure 7: Accuracy/loss curves for training a fire and smoke detection deep learning model with Keras and Python. Learning is a bit volatile here but you can see that we are obtaining 92% accuracy. Making predictions on fire/non-fire images Given our trained fire detection model, let’s now learn how to: Load the trained model from disk.
Sample random images from our dataset. Classify each input image using our model. Open up predict_fire.py and insert the following code: # import the necessary packages from tensorflow.keras.models import load_model from pyimagesearch import config from imutils import paths import numpy as np import imutils import random import cv2 import os # load the trained model from disk print("[INFO] loading model...") model = load_model(config.MODEL_PATH) Lines 2-9 handle our imports, namely load_model, so that we can load our serialized TensorFlow/Keras model from disk. Let’s grab 25 random images from our combined dataset: # grab the paths to the fire and non-fire images, respectively print("[INFO] predicting...") firePaths = list(paths.list_images(config.FIRE_PATH)) nonFirePaths = list(paths.list_images(config.NON_FIRE_PATH)) # combine the two image path lists, randomly shuffle them, and sample # them imagePaths = firePaths + nonFirePaths random.shuffle(imagePaths) imagePaths = imagePaths[:config.SAMPLE_SIZE] Lines 17 and 18 grab image paths from our combined dataset while Lines 22-24 sample 25 random image paths. From here, we’ll loop over each of the individual image paths and perform fire detection inference: # loop over the sampled image paths for (i, imagePath) in enumerate(imagePaths): # load the image and clone it image = cv2.imread(imagePath) output = image.copy() # resize the input image to be a fixed 128x128 pixels, ignoring # aspect ratio image = cv2.resize(image, (128, 128)) image = image.astype("float32") / 255.0 # make predictions on the image preds = model.predict(np.expand_dims(image, axis=0))[0] j = np.argmax(preds) label = config.CLASSES[j] # draw the activity on the output frame text = label if label == "Non-Fire" else "WARNING! Fire!"
Fire!" output = imutils.resize(output, width=500) cv2.putText(output, text, (35, 50), cv2.FONT_HERSHEY_SIMPLEX, 1.25, (0, 255, 0), 5) # write the output image to disk filename = "{}.png".format(i) p = os.path.sep.join([config. OUTPUT_IMAGE_PATH, filename]) cv2.imwrite(p, output) Line 27 begins a loop over our sampled image paths: We load and preprocess the image just as in training (Lines 29-35). Make predictions and grab the highest probability label (Lines 38-40). Annotate the label in the top corner of the image (Lines 43-46). Save the output image to disk (Lines 49-51). Fire detection results To see our fire detector in action make sure you use the “Downloads” section of this tutorial to download the source code and pre-trained model. From there you can execute the following command: $ python predict_fire.py [INFO] loading model... [INFO] predicting... Figure 8: Fire and smoke detection with Keras, deep learning, and Python. I’ve included a set sample of results in Figure 8 — notice how our model was able to correctly predict “fire” and “non-fire” in each of them. Limitations and drawbacks Our results are not perfect, however.
Here are a few examples of incorrect classifications: Figure 9: Examples of incorrect fire/smoke detection. The image on the left in particular is troubling — a sunset will cast shades of reds and oranges across the sky, creating an “inferno”-like effect. It appears that in those situations our fire detection model will struggle considerably. So, where are these incorrect classifications coming from? The answer lies in the dataset itself. To start, we only worked with raw image data. Smoke and fire can be better detected with video as fires start off as a smolder, slowly build to a critical point, and then erupt into massive flames. Such a pattern is better detected in video streams than in still images. Secondly, our datasets are quite small. Combining the two datasets we only had a total of 4,003 images.
Fire and smoke datasets are hard to come by, making it extremely challenging to create high accuracy models. Finally, our datasets are not necessarily representative of the problem. Many of the example images in our fire/smoke dataset contained examples of professional photos captured by news reports. Fires don’t look like that in the wild. In order to improve our fire and smoke detection model, we need better data. Future efforts in fire/smoke detection research should focus less on the actual deep learning architectures/training methods and more on the actual dataset gathering and curation process, ensuring the dataset better represents how fires start, smolder, and spread in natural scene images.
Summary In this tutorial, you learned how to create a smoke and fire detector using Computer Vision, Deep Learning, and the Keras library. To build our smoke and fire detector we utilized two datasets: A dataset of fire/smoke examples (1,315 images), curated by PyImageSearch reader Gautam Kumar. A dataset of non-fire/non-smoke examples (2,688 images) containing examples of 8 natural outdoor scenes (forests, coastlines, mountains, open country, etc.). This dataset was originally put together by Oliva and Torralba for their 2001 paper, Modeling the shape of the scene: a holistic representation of the spatial envelope. We then designed FireDetectionNet — a Convolutional Neural Network for smoke and fire detection. This network was trained on our two datasets. Once our network was trained we evaluated it on our testing set and found that it obtained 92% accuracy. However, there are a number of limitations and drawbacks to this approach: First, we only worked with image data.
Smoke and fire can be better detected with video as fires start off as a smolder, slowly build to a critical point, and then erupt into massive flames. Secondly, our datasets are small. Combining the two datasets we only had a total of 4,003 images. Fire and smoke datasets are hard to come by, making it extremely challenging to create high accuracy models. Building on the previous point, our datasets are not necessarily representative of the problem. Many of the example images in our fire/smoke dataset are of professional photos captured by news reports. Fires don’t look like that in the wild. The point is this: Fire and smoke detection is a solvable problem…but we need better datasets. Luckily, PyImageSearch Gurus member David Bonn is actively working on this problem and discussing it in the PyImageSearch Gurus Community forums. If you’re interested in learning more about his project, be sure to connect with him.
I hope you enjoyed this tutorial! To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), just enter your email address in the form below.
https://pyimagesearch.com/2019/12/09/how-to-install-tensorflow-2-0-on-ubuntu/
In this tutorial, you will learn to install TensorFlow 2.0 on your Ubuntu system either with or without a GPU. There are a number of important updates in TensorFlow 2.0, including eager execution, automatic differentiation, and better multi-GPU/distributed training support, but the most important update is that Keras is now the official high-level deep learning API for TensorFlow. In short — you should be using the Keras implementation inside TensorFlow 2.0 (i.e., tf.keras) when training your own deep neural networks. The official Keras package will still receive bug fixes, but all new features and implementations will be inside tf.keras. Both Francois Chollet (the creator of Keras) as well as the TensorFlow developers and maintainers recommend you use tf.keras moving forward. Furthermore, if you own a copy of my book, Deep Learning for Computer Vision with Python, you should use this guide to install TensorFlow 2.0 on your Ubuntu system. Inside this tutorial, you’ll learn how to install TensorFlow 2.0 on Ubuntu. Alternatively, click here for my macOS + TensorFlow 2.0 installation instructions. To learn how to install TensorFlow 2.0 on Ubuntu, just keep reading. How to install TensorFlow 2.0 on Ubuntu In the first part of this tutorial we’ll discuss the pre-configured deep learning development environments that are a part of my book, Deep Learning for Computer Vision with Python.
From there, you’ll learn why you should use TensorFlow 2.0, including the Keras implementation inside of TensorFlow 2.0. We’ll then configure and install TensorFlow 2.0 on our Ubuntu system. Let’s begin. Pre-configured deep learning environments Figure 1: My deep learning Virtual Machine with TensorFlow, Keras, OpenCV, and all other Deep Learning and Computer Vision libraries you need, pre-configured and pre-installed. When it comes to working with deep learning and Python I highly recommend that you use a Unix-based environment. Deep learning tools can be more easily configured and installed on Linux, allowing you to develop and run neural networks quickly. Of course, configuring your own deep learning + Python + Linux development environment can be quite the tedious task, especially if you are new to Linux, a beginner at working the command line/terminal, or a novice when compiling and installing packages by hand. In order to help you jump start your deep learning + Python education, I have created two pre-configured environments: Pre-configured VirtualBox Ubuntu Virtual Machine (VM) with all necessary deep learning libraries you need to be successful (including Keras, TensorFlow, scikit-learn, scikit-image, OpenCV, and others) pre-configured and pre-installed. Pre-configured Deep Learning Amazon Machine Image (AMI) which runs on Amazon Web Service’s (AWS) Elastic Compute (EC2) infrastructure. This environment is free for anyone on the internet to use regardless of whether you are a DL4CV customer of mine or not (cloud/GPU fees apply).
Deep learning libraries are pre-installed including both those listed in #1 in addition to TFOD API, Mask R-CNN, RetinaNet, and mxnet. I strongly urge you to consider using my pre-configured environments if you are working through my books. Using a pre-configured environment is not cheating — they simply allow you to focus on learning rather than the job of a system administrator. If you are more familiar with Microsoft Azure’s infrastructure, be sure to check out their Data Science Virtual Machine (DSVM), including my review of the environment. The Azure team maintains a great environment for you and I cannot speak highly enough about the support they provided while I ensured that all of my deep learning chapters ran successfully on their system. That said, pre-configured environments are not for everyone. In the remainder of this tutorial, we will serve as the “deep learning systems administrators” installing TensorFlow 2.0 on our bare metal Ubuntu machine. Why TensorFlow 2.0 and where is Keras? Figure 2: Keras and TensorFlow have a complicated history together. When installing TensorFlow 2.0 on Ubuntu, keep in mind that Keras is the official high-level API built into TensorFlow.
It seems like every day that there is a war on Twitter about the best deep learning framework. The problem is that these discussions are counterproductive to everyone’s time. What we should be talking about is your new model architecture and how you’ve applied it to solve a problem. That said, I use Keras as my daily deep learning library and as the primary teaching tool on this blog. If you can pick up Keras, you’ll be perfectly comfortable in TensorFlow, PyTorch, mxnet, or any other similar framework. They are all just different ratcheting wrenches in your toolbox that can accomplish the same goal. Francois Chollet (chief maintainer/developer of Keras), committed his first version of Keras to his GitHub on March 27th, 2015. Since then, the software has undergone many changes and iterations. Earlier in 2019, the tf.keras submodule was introduced into TensorFlow v1.10.0. Now with TensorFlow 2.0, Keras is the official high-level API of TensorFlow.
The keras package will only receive bug fixes from here forward. If you want to use Keras now, you need to use TensorFlow 2.0. To learn more about the marriage of Keras and TensorFlow, be sure to read my previous article. TensorFlow 2.0 has a bunch of new features, including: The integration of Keras into TensorFlow via tf.keras Sessions and eager execution Automatic differentiation Model and layer subclassing Better multi-GPU/distributed training support TensorFlow Lite for mobile/embedded devices TensorFlow Extended for deploying production models Long story short — if you would like to use Keras for deep learning, then you need to install TensorFlow 2.0 going forward. Configuring your TensorFlow 2.0 + Ubuntu deep learning system The following instructions for installing TensorFlow 2.0 on your machine assume: You have administrative access to your system You can open a terminal and or you have an active SSH connection to the target machine You know how to operate the command line. Let’s get started! Step #1: Install Ubuntu + TensorFlow 2.0 deep learning dependencies This step is for both GPU users and non-GPU users. Our Ubuntu install instructions assume you are working with Ubuntu 18.04 LTS. These instructions are tested on 18.04.3. We’ll begin by opening a terminal and updating our system: $ sudo apt-get update $ sudo apt-get upgrade From there we’ll install compiler tools: $ sudo apt-get install build-essential cmake unzip pkg-config $ sudo apt-get install gcc-6 g++-6 And then we’ll install screen, a tool used for multiple terminals in the same window — I often use it for remote SSH connections: $ sudo apt-get install screen From there we’ll install X windows libraries and OpenGL libraries: $ sudo apt-get install libxmu-dev libxi-dev libglu1-mesa libglu1-mesa-dev Along with image and video I/O libraries: $ sudo apt-get install libjpeg-dev libpng-dev libtiff-dev $ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev $ sudo apt-get install libxvidcore-dev libx264-dev Next, we’ll install optimization libraries: $ sudo apt-get install libopenblas-dev libatlas-base-dev liblapack-dev gfortran And HDF5 for working with large datasets: $ sudo apt-get install libhdf5-serial-dev We also need our Python 3 development libraries including TK and GTK GUI support: $ sudo apt-get install python3-dev python3-tk python-imaging-tk $ sudo apt-get install libgtk-3-dev If you have a GPU, continue to Step #2.
Otherwise, if you do not have a GPU, skip to Step #3. Step #2 (GPU-only): Install NVIDIA drivers, CUDA, and cuDNN Figure 3: How to install TensorFlow 2.0 for a GPU machine. This step is only for GPU users. In this step, we will install NVIDIA GPU drivers, CUDA, and cuDNN for TensorFlow 2.0 on Ubuntu. We need to add an apt-get repository so that we can install NVIDIA GPU drivers. This can be accomplished in your terminal: $ sudo add-apt-repository ppa:graphics-drivers/ppa $ sudo apt-get update Go ahead and install your NVIDIA graphics driver: $ sudo apt-get install nvidia-driver-418 And then issue the reboot command and wait for your system to restart: $ sudo reboot now Once you are back at your terminal/SSH connection, run the nvidia-smi command to query your GPU and check its status: $ nvidia-smi Fri Nov 22 03:14:45 2019 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 430.50 Driver Version: 430.50 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp. A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla V100-SXM2... Off | 00000000:00:1E.0 Off | 0 | | N/A 41C P0 39W / 300W | 0MiB / 16160MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ The nvidia-smi command output is useful to see the health and usage of your GPU. Let’s go ahead and download CUDA 10.0. I’m recommending CUDA 10.0 from this point forward as it is now very reliable and mature.
The following commands will both download and install CUDA 10.0 right from your terminal $ cd ~ $ mkdir installers $ cd installers/ $ wget https://developer.nvidia.com/compute/cuda/10.0/Prod/local_installers/cuda_10.0.130_410.48_linux $ mv cuda_10.0.130_410.48_linux cuda_10.0.130_410.48_linux.run $ chmod +x cuda_10.0.130_410.48_linux.run $ sudo ./cuda_10.0.130_410.48_linux.run --override Note: As you follow these commands take note of the line-wrapping due to long URLs/filenames. You will be prompted to accept the End User License Agreement (EULA). During the process, you may encounter the following error: Please make sure that PATH includes /usr/local/cuda-10.0/bin LD_LIBRARY_PATH includes /usr/local/cuda-10.0/lib64, or, add /usr/local/cuda-10.0/lib64 to /etc/ld.so.conf and run ldconfig as root To uninstall the CUDA Toolkit, run the uninstall script in /usr/local/cuda-10.0/bin Please see CUDA_Installation_Guide_Linux.pdf in /usr/local/cuda-10.0/doc/pdf for detailed information on setting up CUDA. *WARNING: Incomplete installation! This installation did not install the CUDA Driver. A driver of version at least 384.00 is required for CUDA 10.0 functionality to work. To install the driver using this installer, run the following command, replacing with the name of this run file: sudo .run -silent -driver Logfile is /tmp/cuda_install_25774.log You may safely ignore this error message. Now let’s update our bash profile using nano (you can use vim or emacs if you are more comfortable with them): $ nano ~/.bashrc Insert the following lines at the bottom of the profile: # NVIDIA CUDA Toolkit export PATH=/usr/local/cuda-10.0/bin:$PATH export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64 Save the file (ctrl + x , y , enter ) and exit to your terminal. Figure 4: How to install TensorFlow 2.0 on Ubuntu with an NVIDIA CUDA GPU. Then, source the profile: $ source ~/.bashrc From here we’ll query CUDA to ensure that it is successfully installed: $ nvcc -V nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2018 NVIDIA Corporation Built on Sat_Aug_25_21:08:01_CDT_2018 Cuda compilation tools, release 10.0, V10.0.130 If your output shows that CUDA is built, then you’re now ready to install cuDNN — the CUDA compatible deep neural net library.
Go ahead and download cuDNN v7.6.4 for CUDA 10.0 from the following link: https://developer.nvidia.com/rdp/cudnn-archive Make sure you select: Download cuDNN v7.6.4 (September 27, 2019), for CUDA 10.0 cuDNN Library for Linux And then allow the .tgz file to download (you may need to create an account on NVIDIA’s website to download the cuDNN files). You then may need to SCP (secure copy) it from your home machine to your remote deep learning box: $ scp ~/Downloads/cudnn-10.0-linux-x64-v7.6.4.38.tgz \ username@your_ip_address:~/installers Back on your GPU development system, let’s install cuDNN: $ cd ~/installers $ tar -zxf cudnn-10.0-linux-x64-v7.6.4.38.tgz $ cd cuda $ sudo cp -P lib64/* /usr/local/cuda/lib64/ $ sudo cp -P include/* /usr/local/cuda/include/ $ cd ~ At this point, we have installed: NVIDIA GPU v418 drivers CUDA 10.0 cuDNN 7.6.4 for CUDA 10.0 The hard part is certainly behind us now — GPU installations can be challenging. Great job setting up your GPU! Continue on to Step #3. Step #3: Install pip and virtual environments This step is for both GPU users and non-GPU users. In this step, we will set up pip and Python virtual environments. We will use the de-facto Python package manager, pip. Note: While you are welcome to opt for Anaconda (or alternatives), I’ve still found pip to be more ubiquitous in the community. Feel free to use Anaconda if you so wish, just understand that I cannot provide support for it. Let’s download and install pip: $ wget https://bootstrap.pypa.io/get-pip.py $ sudo python3 get-pip.py To complement pip, I recommend using both virtualenv and virtualenvwrapper to manage virtual environments. Virtual environments are a best practice when it comes to Python development.
They allow you to test different versions of Python libraries in sequestered development and production environments. I use them daily and you should too for all Python development. In other words, do not install TensorFlow 2.0 and associated Python packages directly to your system environment. It will only cause problems later. Let’s install my preferred virtual environment tools now: $ pip3 install virtualenv virtualenvwrapper Note: Your system may require that you use the sudo command to install the above virtual environment tools. This will only be required once — from here forward, do not use sudo . From here, we need to update our bash profile to accommodate virtualenvwrapper . Open up the ~/.bashrc file with Nano or another text editor: $ nano ~/.bashrc And insert the following lines at the end of the file: # virtualenv and virtualenvwrapper export WORKON_HOME=$HOME/.local/bin/.virtualenvs export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3 export VIRTUALENVWRAPPER_VIRTUALENV=$HOME/.local/bin/virtualenv source $HOME/.local/bin/virtualenvwrapper.sh Save the file (ctrl + x , y , enter ) and exit to your terminal. Go ahead and source/load the changes into your profile: $ source ~/.bashrc Output will be displayed in your terminal indicating that virtualenvwrapper is installed. If you encounter errors here, you need to address them before moving on.
Usually, errors at this point are due to typos in your ~/.bashrc file. Now we’re ready to create your Python 3 deep learning virtual environment named dl4cv: $ mkvirtualenv dl4cv -p python3 You can create similar virtual environments with different names (and packages therein) as needed. On my personal system, I have many virtual environments. For developing and testing software for my book, Deep Learning for Computer Vision with Python, I like to name (or precede the name of) the environment with dl4cv. That said, feel free to use the nomenclature that makes the most sense to you. Great job setting up virtual environments on your system! Step #4: Install TensorFlow 2.0 into your dl4cv virtual environment This step is for both GPU users and non-GPU users. In this step, we’ll install TensorFlow 2.0 with pip. Ensure that you are still in your dl4cv virtual environment (typically the virtual environment name precedes your bash prompt). If not, no worries.
Simply activate the environment with the following command: $ workon dl4cv A prerequisite of TensorFlow 2.0 is NumPy for numerical processing. Go ahead and install NumPy and TensorFlow 2.0 using pip: $ pip install numpy $ pip install tensorflow==2.0.0 # or tensorflow-gpu==2.0.0 To install TensorFlow 2.0 for a GPU be sure to replace tensorflow with tensorflow-gpu. You should NOT have both installed — use either tensorflow for a CPU install or tensorflow-gpu for a GPU install, not both! Great job installing TensorFlow 2.0! Step #5: Install TensorFlow 2.0 associated packages into your dl4cv virtual environment Figure 5: A fully-fledged TensorFlow 2.0 + Ubuntu deep learning environment requires additional Python libraries as well. This step is for both GPU users and non-GPU users. In this step, we will install additional packages needed for common deep learning development with TensorFlow 2.0. Ensure that you are still in your dl4cv virtual environment (typically the virtual environment name precedes your bash prompt). If not, no worries. Simply activate the environment with the following command: $ workon dl4cv We begin by installing standard image processing libraries including OpenCV: $ pip install opencv-contrib-python $ pip install scikit-image $ pip install pillow $ pip install imutils These image processing libraries will allow us to perform image I/O, various preprocessing techniques, as well as graphical display.
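If you would like to confirm the image libraries installed cleanly before continuing, a quick import check from inside the dl4cv environment is usually enough (the version string you see may differ from mine):

$ workon dl4cv
$ python
>>> import cv2, skimage, PIL, imutils
>>> cv2.__version__
'4.1.2'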
From there, let’s install machine learning libraries and support libraries, the most notable two being scikit-learn and matplotlib: $ pip install scikit-learn $ pip install matplotlib $ pip install progressbar2 $ pip install beautifulsoup4 $ pip install pandas Scikit-learn is an especially important library when it comes to machine learning. We will use a number of features from this library including classification reports, label encoders, and machine learning models. Great job installing associated image processing and machine learning libraries. Step #6: Test your TensorFlow 2.0 install This step is for both GPU users and non-GPU users. As a quick sanity test, we’ll test our TensorFlow 2.0 install. Fire up a Python shell in your dl4cv environment and ensure that you can import the following packages: $ workon dl4cv $ python >>> import tensorflow as tf >>> tf.__version__ 2.0.0 >>> import tensorflow.keras >>> import cv2 >>> cv2.__version__ 4.1.2 If you configured your system with an NVIDIA GPU, be sure to check if TensorFlow 2.0’s installation is able to take advantage of your GPU: $ workon dl4cv $ python >>> import tensorflow as tf >>> tf.test.is_gpu_available() True Great job testing your TensorFlow 2.0 installation on Ubuntu. Accessing your TensorFlow 2.0 virtual environment At this point, your TensorFlow 2.0 dl4cv environment is ready to go. Whenever you would like to execute TensorFlow 2.0 code (such as from my deep learning book), be sure to use the workon command: $ workon dl4cv Your bash prompt will be preceded with (dl4cv) indicating that you are “inside” the TensorFlow 2.0 virtual environment. If you need to get back to your system-level environment, you can deactivate the current virtual environment: $ deactivate Frequently Asked Questions (FAQ) Q: These instructions seem really complicated. Do you have a pre-configured environment?
A: Yes, the instructions can be daunting. I recommend brushing up on your Linux command line skills prior to following these instructions. I do offer two pre-configured environments for my book: Pre-configured Deep Learning Virtual Machine: My VirtualBox VM is included with your purchase of my deep learning book. Just download the VirtualBox and import the VM into VirtualBox. From there, boot it up and you’ll be running example code in a matter of minutes. Pre-configured Amazon Machine Image (EC2 AMI): Free for everyone on the internet. You can use this environment with no strings attached even if you don’t own my deep learning book (AWS charges apply, of course). Again, compute resources on AWS are not free — you will need to pay for cloud/GPU fees but not the AMI itself. Arguably, working on a deep learning rig in the cloud is cheaper and less time-consuming than keeping a deep learning box on-site. Free hardware upgrades, no system admin headaches, no calls to hardware vendors about warranty policies, no power bills, pay only for what you use.
This is the best option if you have a few one-off projects and don’t want to drain your bank account with hardware expenses. Q: Why didn’t we install Keras? A: Keras is officially part of TensorFlow as of TensorFlow v1.10.0. By installing TensorFlow 2.0 the Keras API is inherently installed. Keras has been deeply embedded into TensorFlow and tf.keras is the primary high-level API in TensorFlow 2.0. The legacy functions that come with TensorFlow play nicely with tf.keras now. In order to understand the difference between Keras and tf.keras in a more detailed manner, check out my recent blog post. You may now import Keras using the following statement in your Python programs: $ workon dl4cv $ python >>> import tensorflow.keras >>> Q: Which version of Ubuntu should I use? A: Ubuntu 18.04.3 is “Long Term Support” (LTS) and is perfectly appropriate. There are plenty of legacy systems using Ubuntu 16.04 as well, but if you are building a new system, I would recommend Ubuntu 18.04.3 at this point.
Currently, I do not advise using Ubuntu 19.04, as usually when a new Ubuntu OS is released, there are Aptitude package conflicts. Q: I’m really stuck. Something is not working. Can you help me? A: I really love helping readers and I would love to help you configure your deep learning development environment. That said, I receive 100+ emails and blog post comments per day — I simply don’t have the time to get to them all. Customers of mine receive support priority over non-customers due to the number of requests my team and I receive. Please consider becoming a customer by browsing my library of books and courses. My personal recommendation is that you grab a copy of Deep Learning for Computer Vision with Python — that book includes access to my pre-configured deep learning development environments that have TensorFlow, Keras, OpenCV, etc. pre-installed. You’ll be up and running in a matter of minutes.
Summary In this tutorial, you learned how to install TensorFlow 2.0 on Ubuntu (either with or without GPU support). Now that your TensorFlow 2.0 + Ubuntu deep learning rig is configured, I would suggest picking up a copy of Deep Learning for Computer Vision with Python. You’ll be getting a great education and you’ll learn how to successfully apply Deep Learning to your own projects. To be notified when future tutorials are published here on PyImageSearch, just enter your email address in the form below!
https://pyimagesearch.com/2019/12/16/training-a-custom-dlib-shape-predictor/
In this tutorial, you will learn how to train your own custom dlib shape predictor. You’ll then learn how to take your trained dlib shape predictor and use it to predict landmarks on input images and real-time video streams. Today kicks off a brand new two-part series on training custom shape predictors with dlib: Part #1: Training a custom dlib shape predictor (today’s tutorial) Part #2: Tuning dlib shape predictor hyperparameters to balance speed, accuracy, and model size (next week’s tutorial) Shape predictors, also called landmark predictors, are used to predict key (x, y)-coordinates of a given “shape”. The most common, well-known shape predictor is dlib’s facial landmark predictor used to localize individual facial structures, including the: Eyes Eyebrows Nose Lips/mouth Jawline Facial landmarks are used for face alignment (a method to improve face recognition accuracy), building a “drowsiness detector” to detect tired, sleepy drivers behind the wheel, face swapping, virtual makeover applications, and much more. However, just because facial landmarks are the most popular type of shape predictor, doesn’t mean we can’t train a shape predictor to localize other shapes in an image! For example, you could use a shape predictor to: Automatically localize the four corners of a piece of paper when building a computer vision-based document scanner. Detect the key, structural joints of the human body (feet, knees, elbows, etc.). Localize the tips of your fingers when building an AR/VR application. Today we’ll be exploring shape predictors in more detail, including how you can train your own custom shape predictor using the dlib library. To learn how to train your own dlib shape predictor, just keep reading!
Training a custom dlib shape predictor In the first part of this tutorial, we’ll briefly discuss what shape/landmark predictors are and how they can be used to predict specific locations on structural objects. From there we’ll review the iBUG 300-W dataset, a common dataset used to train shape predictors that localize specific locations on the human face (i.e., facial landmarks). I’ll then show you how to train your own custom dlib shape predictor, resulting in a model that can balance speed, accuracy, and model size. Finally, we’ll put our shape predictor to the test and apply it to a set of input images/video streams, demonstrating that our shape predictor is capable of running in real-time. We’ll wrap up the tutorial with a discussion of next steps. What are shape/landmark predictors? Figure 1: Training a custom dlib shape predictor on facial landmarks (image source). Shape/landmark predictors are used to localize specific (x, y)-coordinates on an input “shape”. The term “shape” is arbitrary, but it’s assumed that the shape is structural in nature.
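To make that idea concrete before we train anything, here is a minimal sketch of how a trained dlib shape predictor is used at inference time. The file names below are placeholders (we will train and save our own predictor later in this tutorial), and the snippet assumes dlib and OpenCV are installed:

# minimal sketch of running a trained dlib shape predictor (placeholder file names)
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("custom_shape_predictor.dat")

image = cv2.imread("example.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detect objects (faces here), then predict the (x, y)-coordinates for each one
for rect in detector(gray, 1):
    shape = predictor(gray, rect)
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    print("predicted {} landmark points".format(len(points)))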