Deep Learning for Vision Systems

MOHAMED ELGENDY

MANNING
Shelter Island
For online information and ordering of this and other Manning books, please visit www.manning.com. The publisher offers discounts on this book when ordered in quantity. For more information, please contact

Special Sales Department
Manning Publications Co.
20 Baldwin Road
PO Box 761
Shelter Island, NY 11964
Email: orders@manning.com

©2020 by Manning Publications Co. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by means electronic, mechanical, photocopying, or otherwise, without prior written permission of the publisher.

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in the book, and Manning Publications was aware of a trademark claim, the designations have been printed in initial caps or all caps.

Recognizing the importance of preserving what has been written, it is Manning's policy to have the books we publish printed on acid-free paper, and we exert our best efforts to that end. Recognizing also our responsibility to conserve the resources of our planet, Manning books are printed on paper that is at least 15 percent recycled and processed without the use of elemental chlorine.

Development editor: Jenny Stout
Technical development editor: Alain Couniot
Review editor: Ivan Martinović
Production editor: Lori Weidert
Copy editor: Tiffany Taylor
Proofreader: Keri Hales
Technical proofreader: Al Krinker
Typesetter: Dennis Dalinnik
Cover designer: Marija Tudor

Manning Publications Co.
20 Baldwin Road
PO Box 761
Shelter Island, NY 11964

ISBN: 9781617296192
Printed in the United States of America
To my mom, Huda, who taught me perseverance and kindness
To my dad, Ali, who taught me patience and purpose
To my loving and supportive wife, Amanda, who always inspires me to keep climbing
To my two-year-old daughter, Emily, who teaches me every day that AI still has a long way to go to catch up with even the tiniest humans
contents

preface
acknowledgments
about this book
about the author
about the cover illustration

PART 1  DEEP LEARNING FOUNDATION

1  Welcome to computer vision
   1.1  Computer vision: What is visual perception? / Vision systems / Sensing devices / Interpreting devices
   1.2  Applications of computer vision: Image classification / Object detection and localization / Generating art (style transfer) / Creating images / Face recognition / Image recommendation system
   1.3  Computer vision pipeline: The big picture
   1.4  Image input: Image as functions / How computers see images / Color images
   1.5  Image preprocessing: Converting color images to grayscale to reduce computation complexity
   1.6  Feature extraction: What is a feature in computer vision? / What makes a good (useful) feature? / Extracting features (handcrafted vs. automatic extracting)
   1.7  Classifier learning algorithm

2  Deep learning and neural networks
   2.1  Understanding perceptrons: What is a perceptron? / How does the perceptron learn? / Is one neuron enough to solve complex problems?
   2.2  Multilayer perceptrons: Multilayer perceptron architecture / What are hidden layers? / How many layers, and how many nodes in each layer? / Some takeaways from this section
   2.3  Activation functions: Linear transfer function / Heaviside step function (binary classifier) / Sigmoid/logistic function / Softmax function / Hyperbolic tangent function (tanh) / Rectified linear unit / Leaky ReLU
   2.4  The feedforward process: Feedforward calculations / Feature learning
   2.5  Error functions: What is the error function? / Why do we need an error function? / Error is always positive / Mean square error / Cross-entropy / A final note on errors and weights
   2.6  Optimization algorithms: What is optimization? / Batch gradient descent / Stochastic gradient descent / Mini-batch gradient descent / Gradient descent takeaways
   2.7  Backpropagation: What is backpropagation? / Backpropagation takeaways

3  Convolutional neural networks
   3.1  Image classification using MLP: Input layer / Hidden layers / Output layer / Putting it all together / Drawbacks of MLPs for processing images
   3.2  CNN architecture: The big picture / A closer look at feature extraction / A closer look at classification
   3.3  Basic components of a CNN: Convolutional layers / Pooling layers or subsampling / Fully connected layers
   3.4  Image classification using CNNs: Building the model architecture / Number of parameters (weights)
   3.5  Adding dropout layers to avoid overfitting: What is overfitting? / What is a dropout layer? / Why do we need dropout layers? / Where does the dropout layer go in the CNN architecture?
   3.6  Convolution over color images (3D images): How do we perform a convolution on a color image? / What happens to the computational complexity?
   3.7  Project: Image classification for color images

4  Structuring DL projects and hyperparameter tuning
   4.1  Defining performance metrics: Is accuracy the best metric for evaluating a model? / Confusion matrix / Precision and recall / F-score
   4.2  Designing a baseline model
   4.3  Getting your data ready for training: Splitting your data for train/validation/test / Data preprocessing
   4.4  Evaluating the model and interpreting its performance: Diagnosing overfitting and underfitting / Plotting the learning curves
   4.5  Improving the network and tuning hyperparameters: Collecting more data vs. tuning hyperparameters / Parameters vs. hyperparameters / Neural network hyperparameters / Network architecture
   4.6  Learning and optimization: Learning rate and decay schedule / A systematic approach to find the optimal learning rate / Learning rate decay and adaptive learning / Mini-batch size
   4.7  Optimization algorithms: Gradient descent with momentum / Adam / Number of epochs and early stopping criteria / Early stopping
   4.8  Regularization techniques to avoid overfitting: L2 regularization / Dropout layers / Data augmentation
   4.9  Batch normalization: The covariate shift problem / Covariate shift in neural networks / How does batch normalization work? / Batch normalization implementation in Keras / Batch normalization recap
   4.10 Project: Achieve high accuracy on image classification

PART 2  IMAGE CLASSIFICATION AND DETECTION

5  Advanced CNN architectures
   5.1  CNN design patterns
   5.2  LeNet-5: LeNet architecture / LeNet-5 implementation in Keras / Setting up the learning hyperparameters / LeNet performance on the MNIST dataset
   5.3  AlexNet: AlexNet architecture / Novel features of AlexNet / AlexNet implementation in Keras / Setting up the learning hyperparameters / AlexNet performance
   5.4  VGGNet: Novel features of VGGNet / VGGNet configurations / Learning hyperparameters
   5.5  Inception and GoogLeNet: Novel features of Inception / Inception module: Naive version / Inception module with dimensionality reduction / Inception architecture / GoogLeNet in Keras / Learning hyperparameters / Inception performance on the CIFAR dataset
   5.6  ResNet: Novel features of ResNet / Residual blocks / ResNet implementation in Keras / Learning hyperparameters / ResNet performance on the CIFAR dataset

6  Transfer learning
   6.1  What problems does transfer learning solve?
   6.2  What is transfer learning?
   6.3  How transfer learning works: How do neural networks learn features? / Transferability of features extracted at later layers
   6.4  Transfer learning approaches: Using a pretrained network as a classifier / Using a pretrained network as a feature extractor / Fine-tuning
   6.5  Choosing the appropriate level of transfer learning: Scenario 1: Target dataset is small and similar to the source dataset / Scenario 2: Target dataset is large and similar to the source dataset / Scenario 3: Target dataset is small and different from the source dataset / Scenario 4: Target dataset is large and different from the source dataset / Recap of the transfer learning scenarios
   6.6  Open source datasets: MNIST / Fashion-MNIST / CIFAR / ImageNet / MS COCO / Google Open Images / Kaggle
   6.7  Project 1: A pretrained network as a feature extractor
   6.8  Project 2: Fine-tuning

7  Object detection with R-CNN, SSD, and YOLO
   7.1  General object detection framework: Region proposals / Network predictions / Non-maximum suppression (NMS) / Object-detector evaluation metrics
   7.2  Region-based convolutional neural networks (R-CNNs): R-CNN / Fast R-CNN / Faster R-CNN / Recap of the R-CNN family
   7.3  Single-shot detector (SSD): High-level SSD architecture / Base network / Multi-scale feature layers / Non-maximum suppression
   7.4  You only look once (YOLO): How YOLOv3 works / YOLOv3 architecture
   7.5  Project: Train an SSD network in a self-driving car application: Step 1: Build the model / Step 2: Model configuration / Step 3: Create the model / Step 4: Load the data / Step 5: Train the model / Step 6: Visualize the loss / Step 7: Make predictions

PART 3  GENERATIVE MODELS AND VISUAL EMBEDDINGS

8  Generative adversarial networks (GANs)
   8.1  GAN architecture: Deep convolutional GANs (DCGANs) / The discriminator model / The generator model / Training the GAN / GAN minimax function
   8.2  Evaluating GAN models: Inception score / Fréchet inception distance (FID) / Which evaluation scheme to use
   8.3  Popular GAN applications: Text-to-photo synthesis / Image-to-image translation (Pix2Pix GAN) / Image super-resolution GAN (SRGAN) / Ready to get your hands dirty?
   8.4  Project: Building your own GAN

9  DeepDream and neural style transfer
   9.1  How convolutional neural networks see the world: Revisiting how neural networks work / Visualizing CNN features
   9.2  DeepDream: How the DeepDream algorithm works / DeepDream implementation in Keras
   9.3  Neural style transfer: Content loss / Style loss / Total variance loss / Network training

10  Visual embeddings
   10.1  Applications of visual embeddings: Face recognition / Image recommendation systems / Object re-identification
   10.2  Learning embedding
   10.3  Loss functions: Problem setup and formalization / Cross-entropy loss / Contrastive loss / Triplet loss / Naive implementation and runtime analysis of losses
   10.4  Mining informative data: Dataloader / Informative data mining: Finding useful triplets / Batch all (BA) / Batch hard (BH) / Batch weighted (BW) / Batch sample (BS)
   10.5  Project: Train an embedding network: Fashion: Get me items similar to this / Vehicle re-identification / Implementation / Testing a trained model
   10.6  Pushing the boundaries of current accuracy

appendix A  Getting set up
index
preface

Two years ago, I decided to write a book to teach deep learning for computer vision from an intuitive perspective. My goal was to develop a comprehensive resource that takes learners from knowing only the basics of machine learning to building advanced deep learning algorithms that they can apply to solve complex computer vision problems.

The problem: In short, as of this moment, there are no books out there that teach deep learning for computer vision the way I wanted to learn about it. As a beginner machine learning engineer, I wanted to read one book that would take me from point A to point Z. I planned to specialize in building modern computer vision applications, and I wished that I had a single resource that would teach me everything I needed to do two things: 1) use neural networks to build an end-to-end computer vision application, and 2) be comfortable reading and implementing research papers to stay up-to-date with the latest industry advancements.

I found myself jumping between online courses, blogs, papers, and YouTube videos to create a comprehensive curriculum for myself. It's challenging to try to comprehend what is happening under the hood on a deeper level: not just a basic understanding, but how the concepts and theories make sense mathematically. It was impossible to find one comprehensive resource that (horizontally) covered the most important topics that I needed to learn to work on complex computer vision applications while also diving deep enough (vertically) to help me understand the math that makes the magic work.
As a beginner, I searched but couldn't find anything to meet these needs. So now I've written it. My goal has been to write a book that not only teaches the content I wanted when I was starting out, but also levels up your ability to learn on your own. My solution is a comprehensive book that dives deep both horizontally and vertically:

■ Horizontally—This book explains most topics that an engineer needs to learn to build production-ready computer vision applications, from neural networks and how they work to the different types of neural network architectures and how to train, evaluate, and tune the network.
■ Vertically—The book dives a level or two deeper than the code and explains intuitively (and gently) how the math works under the hood, to empower you to be comfortable reading and implementing research papers or even inventing your own techniques.

At the time of writing, I believe this is the only deep learning for vision systems resource that is taught this way. Whether you are looking for a job as a computer vision engineer, want to gain a deeper understanding of advanced neural network algorithms in computer vision, or want to build your product or startup, I wrote this book with you in mind. I hope you enjoy it.
acknowledgments

This book was a lot of work. No, make that really a lot of work! But I hope you will find it valuable. There are quite a few people I'd like to thank for helping me along the way.

I would like to thank the people at Manning who made this book possible: publisher Marjan Bace and everyone on the editorial and production teams, including Jennifer Stout, Tiffany Taylor, Lori Weidert, Katie Tennant, and many others who worked behind the scenes.

Many thanks go to the technical peer reviewers led by Alain Couniot—Al Krinker, Albert Choy, Alessandro Campeis, Bojan Djurkovic, Burhan ul haq, David Fombella Pombal, Ishan Khurana, Ita Cirovic Donev, Jason Coleman, Juan Gabriel Bono, Juan José Durillo Barrionuevo, Michele Adduci, Millad Dagdoni, Peter Hraber, Richard Vaughan, Rohit Agarwal, Tony Holdroyd, Tymoteusz Wolodzko, and Will Fuger—and the active readers who contributed their feedback in the book forums. Their contributions included catching typos, code errors, and technical mistakes, as well as making valuable topic suggestions. Each pass through the review process and each piece of feedback implemented through the forum topics shaped and molded the final version of this book.

Finally, thank you to the entire Synapse Technology team. You've created something that's incredibly cool. Thank you to Simanta Guatam, Aleksandr Patsekin, Jay Patel, and others for answering my questions and brainstorming ideas for the book.
about this book

Who should read this book

If you know the basic machine learning framework, can hack around in Python, and want to learn how to build and train advanced, production-ready neural networks to solve complex computer vision problems, I wrote this book for you. The book was written for anyone with intermediate Python experience and basic machine learning understanding who wishes to explore training deep neural networks and learn to apply deep learning to solve computer vision problems.

When I started writing the book, my primary goal was as follows: "I want to write a book to grow readers' skills, not teach them content." To achieve this goal, I had to keep an eye on two main tenets:

1 Teach you how to learn. I don't want to read a book that just goes through a set of scientific facts. I can get that on the internet for free. If I read a book, I want to finish it having grown my skillset so I can study the topic further. I want to learn how to think about the presented solutions and come up with my own.

2 Go very deep. If I'm successful in satisfying the first tenet, that makes this one easy. If you learn how to learn new concepts, that allows me to dive deep without worrying that you might fall behind. This book doesn't avoid the math part of the learning, because understanding the mathematical equations will empower you with the best skill in the AI world: the ability to read research papers, compare innovations, and make the right decisions about implementing new concepts in your own problems. But I promise to introduce only the mathematical concepts you need, and I promise to present them in a way that
doesn't interrupt your flow, so you can follow the concepts without the math part if you prefer.

How this book is organized: A roadmap

This book is structured into three parts. The first part explains deep learning in detail as a foundation for the remaining topics. I strongly recommend that you not skip this section, because it dives deep into neural network components and definitions and explains all the notions required to be able to understand how neural networks work under the hood. After reading part 1, you can jump directly to topics of interest in the remaining chapters. Part 2 explains deep learning techniques to solve object classification and detection problems, and part 3 explains deep learning techniques to generate images and visual embeddings. In several chapters, practical projects implement the topics discussed.

About the code

All of this book's code examples use open source frameworks that are free to download. We will be using Python, TensorFlow, Keras, and OpenCV. Appendix A walks you through the complete setup. I also recommend that you have access to a GPU if you want to run the book projects on your machine, because chapters 6–10 contain more complex projects to train deep networks that will take a long time on a regular CPU. Another option is to use a cloud environment like Google Colab for free or other paid options.

Examples of source code occur both in numbered listings and in line with normal text. In both cases, source code is formatted in a fixed-width font like this to separate it from ordinary text. Sometimes code is also in bold to highlight code that has changed from previous steps in the chapter, such as when a new feature adds to an existing line of code.

In many cases, the original source code has been reformatted; we've added line breaks and reworked indentation to accommodate the available page space in the book. In rare cases, even this was not enough, and listings include line-continuation markers (➥). Additionally, comments in the source code have often been removed from the listings when the code is described in the text. Code annotations accompany many of the listings, highlighting important concepts.

The code for the examples in this book is available for download from the Manning website at www.manning.com/books/deep-learning-for-vision-systems and from GitHub at https://github.com/moelgendy/deep_learning_for_vision_systems.

liveBook discussion forum

Purchase of Deep Learning for Vision Systems includes free access to a private web forum run by Manning Publications where you can make comments about the book, ask technical questions, and receive help from the author and from other users. To
access the forum, go to https://livebook.manning.com/#!/book/deep-learning-for-vision-systems/discussion. You can also learn more about Manning's forums and the rules of conduct at https://livebook.manning.com/#!/discussion.

Manning's commitment to our readers is to provide a venue where a meaningful dialogue between individual readers and between readers and the author can take place. It is not a commitment to any specific amount of participation on the part of the author, whose contribution to the forum remains voluntary (and unpaid). We suggest you try asking the author some challenging questions lest his interest stray! The forum and the archives of previous discussions will be accessible from the publisher's website as long as the book is in print.
about the author

Mohamed Elgendy is the vice president of engineering at Rakuten, where he is leading the development of its AI platform and products. Previously, he served as head of engineering at Synapse Technology, building proprietary computer vision applications to detect threats at security checkpoints worldwide. At Amazon, Mohamed built and managed the central AI team that serves as a deep learning think tank for Amazon engineering teams like AWS and Amazon Go. He also developed the deep learning for computer vision curriculum at Amazon's Machine University. Mohamed regularly speaks at AI conferences like Amazon's DevCon, O'Reilly's AI conference, and Google's I/O.
about the cover illustration

The figure on the cover of Deep Learning for Vision Systems depicts Ibn al-Haytham, an Arab mathematician, astronomer, and physicist who is often referred to as "the father of modern optics" due to his significant contributions to the principles of optics and visual perception. The illustration is modified from the frontispiece of a fifteenth-century edition of Johannes Hevelius's work Selenographia.

In his book Kitab al-Manazir (Book of Optics), Ibn al-Haytham was the first to explain that vision occurs when light reflects from an object and then passes to one's eyes. He was also the first to demonstrate that vision occurs in the brain, rather than in the eyes—and many of these concepts are at the heart of modern vision systems. You will see the correlation when you read chapter 1 of this book.

Ibn al-Haytham has been a great inspiration for me as I work and innovate in this field. By honoring his memory on the cover of this book, I hope to inspire fellow practitioners that our work can live and inspire others for thousands of years.
Part 1
Deep learning foundation

Computer vision is a technological area that's been advancing rapidly thanks to the tremendous advances in artificial intelligence and deep learning that have taken place in the past few years. Neural networks now help self-driving cars to navigate around other cars, pedestrians, and other obstacles; and recommender agents are getting smarter about suggesting products that resemble other products. Face-recognition technologies are becoming more sophisticated, too, enabling smartphones to recognize faces before unlocking a phone or a door.

Computer vision applications like these and others have become a staple in our daily lives. However, by moving beyond the simple recognition of objects, deep learning has given computers the power to imagine and create new things, like art that didn't exist previously, new human faces, and other objects. Part 1 of this book looks at the foundations of deep learning, different forms of neural networks, and structured projects that go a bit further with concepts like hyperparameter tuning.
1  Welcome to computer vision

This chapter covers
■ Components of the vision system
■ Applications of computer vision
■ Understanding the computer vision pipeline
■ Preprocessing images and extracting features
■ Using classifier learning algorithms

Hello! I'm very excited that you are here. You are making a great decision—to grasp deep learning (DL) and computer vision (CV). The timing couldn't be more perfect. CV is an area that's been advancing rapidly, thanks to the huge AI and DL advances of recent years. Neural networks are now allowing self-driving cars to figure out where other cars and pedestrians are and navigate around them. We are using CV applications in our daily lives more and more with all the smart devices in our homes—from security cameras to door locks. CV is also making face recognition work better than ever: smartphones can recognize faces for unlocking, and smart locks can unlock doors. I wouldn't be surprised if sometime in the near future, your couch or television is able to recognize specific people in your house and react according to their personal preferences. It's not just about recognizing
objects—DL has given computers the power to imagine and create new things like artwork; new objects; and even unique, realistic human faces.

The main reason that I'm excited about deep learning for computer vision, and what drew me to this field, is how rapid advances in AI research are enabling new applications to be built every day and across different industries, something not possible just a few years ago. The unlimited possibilities of CV research are what inspired me to write this book. By learning these tools, perhaps you will be able to invent new products and applications. Even if you end up not working on CV per se, you will find many concepts in this book useful for some of your DL algorithms and architectures. That is because while the main focus is CV applications, this book covers the most important DL architectures, such as artificial neural networks (ANNs), convolutional networks (CNNs), generative adversarial networks (GANs), transfer learning, and many more, which are transferable to other domains like natural language processing (NLP) and voice user interfaces (VUIs).

The high-level layout of this chapter is as follows:

■ Computer vision intuition—We will start with visual perception intuition and learn the similarities between human and machine vision systems. We will look at how vision systems have two main components: a sensing device and an interpreting device. Each is tailored to fulfill a specific task.
■ Applications of CV—Here, we will take a bird's-eye view of the DL algorithms used in different CV applications. We will then discuss vision in general for different creatures.
■ Computer vision pipeline—Finally, we will zoom in on the second component of vision systems: the interpreting device. We will walk through the sequence of steps taken by vision systems to process and understand image data. These are referred to as a computer vision pipeline. The CV pipeline is composed of four main steps: image input, image preprocessing, feature extraction, and an ML model to interpret the image. We will talk about image formation and how computers see images. Then, we will quickly review image-processing techniques and extracting features.

Ready? Let's get started!

1.1 Computer vision

The core concept of any AI system is that it can perceive its environment and take actions based on its perceptions. Computer vision is concerned with the visual perception part: it is the science of perceiving and understanding the world through images and videos by constructing a physical model of the world so that an AI system can then take appropriate actions. For humans, vision is only one aspect of perception. We perceive the world through our sight, but also through sound, smell, and our other senses. It is similar with AI systems—vision is just one way to understand the world. Depending on the application you are building, you select the sensing device that best captures the world.
1.1.1 What is visual perception?

Visual perception, at its most basic, is the act of observing patterns and objects through sight or visual input. With an autonomous vehicle, for example, visual perception means understanding the surrounding objects and their specific details—such as pedestrians, or whether there is a particular lane the vehicle needs to be centered in—and detecting traffic signs and understanding what they mean. That's why the word perception is part of the definition. We are not just looking to capture the surrounding environment. We are trying to build systems that can actually understand that environment through visual input.

1.1.2 Vision systems

In past decades, traditional image-processing techniques were considered CV systems, but that is not totally accurate. A machine processing an image is completely different from that machine understanding what's happening within the image, which is not a trivial task. Image processing is now just a piece of a bigger, more complex system that aims to interpret image content.

HUMAN VISION SYSTEMS

At the highest level, vision systems are pretty much the same for humans, animals, insects, and most living organisms. They consist of a sensor or an eye to capture the image and a brain to process and interpret the image. The system then outputs a prediction of the image components based on the data extracted from the image (figure 1.1).

Figure 1.1 The human vision system uses the eye (a sensing device responsible for capturing images of the environment) and the brain (an interpreting device responsible for understanding the image content) to sense and interpret an image.

Let's see how the human vision system works. Suppose we want to interpret the image of dogs in figure 1.1. We look at it and directly understand that the image consists of a bunch of dogs (three, to be specific). It comes pretty naturally to us to classify
and detect objects in this image because we have been trained over the years to identify dogs.

Suppose someone shows you a picture of a dog for the first time—you definitely don't know what it is. Then they tell you that this is a dog. After a couple experiments like this, you will have been trained to identify dogs. Now, in a follow-up exercise, they show you a picture of a horse. When you look at the image, your brain starts analyzing the object features: hmmm, it has four legs, long face, long ears. Could it be a dog? "Wrong: this is a horse," you're told. Then your brain adjusts some parameters in its algorithm to learn the differences between dogs and horses. Congratulations! You just trained your brain to classify dogs and horses. Can you add more animals to the equation, like cats, tigers, cheetahs, and so on? Definitely. You can train your brain to identify almost anything. The same is true of computers. You can train machines to learn and identify objects, but humans are much more intuitive than machines. It takes only a few images for you to learn to identify most objects, whereas with machines, it takes thousands or, in more complex cases, millions of image samples to learn to identify objects.

AI VISION SYSTEMS

Scientists were inspired by the human vision system and in recent years have done an amazing job of copying visual ability with machines. To mimic the human vision system, we need the same two main components: a sensing device to mimic the function of the eye and a powerful algorithm to mimic the brain function in interpreting and classifying image content (figure 1.2).

Figure 1.2 The components of the computer vision system are a sensing device and an interpreting device.

The ML perspective

Let's look at the previous example from the machine learning perspective: You learned to identify dogs by looking at examples of several dog-labeled images. This approach is called supervised learning. Labeled data is data for which you already know the target answer. You were shown a sample image of a dog and told that it was a dog. Your brain learned to associate the features you saw with this label: dog.

You were then shown a different object, a horse, and asked to identify it. At first, your brain thought it was a dog, because you hadn't seen horses before, and your brain confused horse features with dog features. When you were told that your prediction was wrong, your brain adjusted its parameters to learn horse features. "Yes, both have four legs, but the horse's legs are longer. Longer legs indicate a horse." We can run this experiment many times until the brain makes no mistakes. This is called training by trial and error.
1.1.3 Sensing devices

Vision systems are designed to fulfill a specific task. An important aspect of design is selecting the best sensing device to capture the surroundings of a specific environment, whether that is a camera, radar, X-ray, CT scan, Lidar, or a combination of devices to provide the full scene of an environment to fulfill the task at hand.

Let's look at the autonomous vehicle (AV) example again. The main goal of the AV vision system is to allow the car to understand the environment around it and move from point A to point B safely and in a timely manner. To fulfill this goal, vehicles are equipped with a combination of cameras and sensors that can detect 360 degrees of movement—pedestrians, cyclists, vehicles, roadwork, and other objects—from up to three football fields away. Here are some of the sensing devices usually used in self-driving cars to perceive the surrounding area:

■ Lidar, a radar-like technique, uses invisible pulses of light to create a high-resolution 3D map of the surrounding area.
■ Cameras can see street signs and road markings but cannot measure distance.
■ Radar can measure distance and velocity but cannot see in fine detail.

Medical diagnosis applications use X-rays or CT scans as sensing devices. Or maybe you need to use some other type of radar to capture the landscape for agricultural vision systems. There are a variety of vision systems, each designed to perform a particular task. The first step in designing vision systems is to identify the task they are built for. This is something to keep in mind when designing end-to-end vision systems.

Recognizing images

Animals, humans, and insects all have eyes as sensing devices. But not all eyes have the same structure, output image quality, and resolution. They are tailored to the specific needs of the creature. Bees, for instance, and many other insects, have compound
eyes that consist of multiple lenses (as many as 30,000 lenses in a single compound eye). Compound eyes have low resolution, which makes them not so good at recognizing objects at a far distance. But they are very sensitive to motion, which is essential for survival while flying at high speed. Bees don't need high-resolution pictures. Their vision systems are built to allow them to pick up the smallest movements while flying fast. Compound eyes are low resolution but sensitive to motion.

1.1.4 Interpreting devices

Computer vision algorithms are typically employed as interpreting devices. The interpreter is the brain of the vision system. Its role is to take the output image from the sensing device and learn features and patterns to identify objects. So we need to build a brain. Simple!

Scientists were inspired by how our brains work and tried to reverse engineer the central nervous system to get some insight on how to build an artificial brain. Thus, artificial neural networks (ANNs) were born (figure 1.3).

In figure 1.3, we can see an analogy between biological neurons and artificial systems. Both contain a main processing element, a neuron, with input signals (x1, x2, …, xn) and an output.

The learning behavior of biological neurons inspired scientists to create a network of neurons that are connected to each other. Imitating how information is processed in the human brain, each artificial neuron fires a signal to all the neurons that it's connected to when enough of its input signals are activated. Thus, neurons have a very simple mechanism on the individual level (as you will see in the next chapter); but when you have millions of these neurons stacked in layers and connected together, each neuron is connected to thousands of other neurons, yielding a learning behavior. Building a multilayer neural network is called deep learning (figure 1.4).
DL methods learn representations through a sequence of transformations of data through layers of neurons. In this book, we will explore different DL architectures, such as ANNs and convolutional neural networks, and how they are used in CV applications.

Figure 1.3 The similarities between biological neurons and artificial systems: in the biological neuron, dendrites carry information coming from other neurons and synapses pass information output to other neurons; the artificial neuron takes input signals x1, x2, …, xn and produces an output f(x).

Figure 1.4 Deep learning involves layers of neurons in a network.
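To make the individual neuron's mechanism concrete, here is a minimal sketch of the artificial neuron in figure 1.3, written as an illustration for this chapter rather than taken from the book's code: it sums its weighted input signals x1…xn, adds a bias, and "fires" when the result is large enough. The weights, bias, and step activation are assumed values for the example; chapter 2 introduces perceptrons and activation functions properly.

```python
import numpy as np

def artificial_neuron(x, weights, bias):
    """A single artificial neuron: a weighted sum of inputs followed by an activation.

    x, weights: 1D arrays of the same length (the input signals x1..xn and their weights).
    bias: scalar offset added to the weighted sum.
    """
    z = np.dot(weights, x) + bias          # weighted sum of the input signals
    return 1.0 if z > 0 else 0.0           # simple step activation: "fire" or not

# Illustrative (made-up) values: three input signals and their weights
x = np.array([0.9, 0.1, 0.4])
w = np.array([0.5, -0.2, 0.3])
print(artificial_neuron(x, w, bias=-0.3))  # prints 1.0 -> the neuron fires
```

Stacking many such units into connected layers, as figure 1.4 suggests, is what turns this simple mechanism into a network that can learn.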
CAN MACHINE LEARNING ACHIEVE BETTER PERFORMANCE THAN THE HUMAN BRAIN?

Well, if you had asked me this question 10 years ago, I would've probably said no, machines cannot surpass the accuracy of a human. But let's take a look at the following two scenarios:

■ Suppose you were given a book of 10,000 dog images, classified by breed, and you were asked to learn the properties of each breed. How long would it take you to study the 130 breeds in 10,000 images? And if you were given a test of 100 dog images and asked to label them based on what you learned, out of the 100, how many would you get right? Well, a neural network that is trained in a couple of hours can achieve more than 95% accuracy.
■ On the creation side, a neural network can study the patterns in the strokes, colors, and shading of a particular piece of art. Based on this analysis, it can then transfer the style from the original artwork into a new image and create a new piece of original art within a few seconds.

Recent AI and DL advances have allowed machines to surpass human visual ability in many image classification and object detection applications, and capacity is rapidly expanding to many other applications. But don't take my word for it. In the next section, we'll discuss some of the most popular CV applications using DL technology.

1.2 Applications of computer vision

Computers began to be able to recognize human faces in images decades ago, but now AI systems are rivaling the ability of humans to classify objects in photos and videos. Thanks to the dramatic evolution in both computational power and the amount of data available, AI and DL have managed to achieve superhuman performance on many complex visual perception tasks like image search and captioning, image and video classification, and object detection. Moreover, deep neural networks are not restricted to CV tasks: they are also successful at natural language processing and voice user interface tasks. In this book, we'll focus on visual applications that are applied in CV tasks.

DL is used in many computer vision applications to recognize objects and their behavior. In this section, I'm not going to attempt to list all the CV applications that are out there. I would need an entire book for that. Instead, I'll give you a bird's-eye view of some of the most popular DL algorithms and their possible applications across different industries. Among these industries are autonomous cars, drones, robots, in-store cameras, and medical diagnostic scanners that can detect lung cancer in early stages.

1.2.1 Image classification

Image classification is the task of assigning to an image a label from a predefined set of categories. A convolutional neural network is a neural network type that truly shines in processing and classifying images in many different applications:

■ Lung cancer diagnosis—Lung cancer is a growing problem. The main reason lung cancer is very dangerous is that when it is diagnosed, it is usually in the middle or
late stages. When diagnosing lung cancer, doctors typically use their eyes to examine CT scan images, looking for small nodules in the lungs. In the early stages, the nodules are usually very small and hard to spot. Several CV companies decided to tackle this challenge using DL technology. Almost every lung cancer starts as a small nodule, and these nodules appear in a variety of shapes that doctors take years to learn to recognize. Doctors are very good at identifying mid- and large-size nodules, such as 6–10 mm. But when nodules are 4 mm or smaller, sometimes doctors have difficulty identifying them. DL networks, specifically CNNs, are now able to learn these features automatically from X-ray and CT scan images and detect small nodules early, before they become deadly (figure 1.5).

Figure 1.5 Vision systems are now able to learn patterns in X-ray images to identify tumors in earlier stages of development.

■ Traffic sign recognition—Traditionally, standard CV methods were employed to detect and classify traffic signs, but this approach required time-consuming manual work to handcraft important features in images. Instead, by applying DL to this problem, we can create a model that reliably classifies traffic signs, learning to identify the most appropriate features for this problem by itself (figure 1.6).

Figure 1.6 Vision systems can detect traffic signs with very high performance.

NOTE Increasing numbers of image classification tasks are being solved with convolutional neural networks. Due to their high recognition rate and fast execution, CNNs have enhanced most CV tasks, both pre-existing and new. Just like the cancer diagnosis and traffic sign examples, you can feed tens or hundreds of thousands of images into a CNN to label them into as many classes as you want. Other image classification examples include identifying people and objects, classifying different animals (like cats versus dogs versus horses), different breeds of animals, types of land suitable for agriculture, and so on. In short, if you have a set of labeled images, convolutional networks can classify them into a set of predefined classes.
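To give a concrete flavor of "feeding labeled images into a CNN," here is a minimal Keras sketch. It is my own illustration rather than a listing from the book, and the 32 × 32 RGB input size and 10 output classes are assumptions for the example; later chapters build, train, and tune real CNN architectures step by step.

```python
from tensorflow import keras
from tensorflow.keras import layers

# A small CNN that maps a 32 x 32 RGB image to one of 10 assumed classes.
model = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),   # one probability per class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would then be a single call on a labeled image set, for example:
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```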
1.2.2 Object detection and localization

Image classification problems are the most basic applications for CNNs. In these problems, each image contains only one object, and our task is to identify it. But if we aim to reach human levels of understanding, we have to add complexity to these networks so they can recognize multiple objects and their locations in an image. To do that, we can build object detection systems like YOLO (you only look once), SSD (single-shot detector), and Faster R-CNN, which not only classify images but also can locate and detect each object in images that contain multiple objects. These DL systems can look at an image, break it up into smaller regions, and label each region with a class so that a variable number of objects in a given image can be localized and labeled (figure 1.7). You can imagine that such a task is a basic prerequisite for applications like autonomous systems.

1.2.3 Generating art (style transfer)

Neural style transfer, one of the most interesting CV applications, is used to transfer the style from one image to another. The basic idea of style transfer is this: you take one image—say, of a city—and then apply a style of art to that image—say, The Starry Night (by Vincent Van Gogh)—and output the same city from the original image, but looking as though it was painted by Van Gogh (figure 1.8).

This is actually a neat application. The astonishing thing, if you know any painters, is that it can take days or even weeks to finish a painting, and yet here is an application that can paint a new image inspired by an existing style in a matter of seconds.
Figure 1.7 Deep learning systems can segment objects in an image (for example, labeling regions such as bicycle, clouds, and pedestrian).

Figure 1.8 Style transfer from Van Gogh's The Starry Night onto the original image, producing a piece of art that feels as though it was created by the original artist

1.2.4 Creating images

Although the earlier examples are truly impressive CV applications of AI, this is where I see the real magic happening: the magic of creation. In 2014, Ian Goodfellow invented a new DL model that can imagine new things called generative adversarial networks (GANs). The name makes them sound a little intimidating, but I promise you that they are not. A GAN is an evolved CNN architecture that is
considered a major advancement in DL. So when you understand CNNs, GANs will make a lot more sense to you.

GANs are sophisticated DL models that generate stunningly accurate synthesized images of objects, people, and places, among other things. If you give them a set of images, they can make entirely new, realistic-looking images. For example, StackGAN is one of the GAN architecture variations that can use a textual description of an object to generate a high-resolution image of the object matching that description. This is not just running an image search on a database. These "photos" have never been seen before and are totally imaginary (figure 1.9).

Figure 1.9 Generative adversarial networks (GANs) can create new, "made-up" images from a set of existing images. The text descriptions shown in the figure include "This small blue bird has a short, pointy beak and brown on its wings" and "This bird is completely red with black wings and a pointy beak."

The GAN is one of the most promising advancements in machine learning in recent years. Research into GANs is new, and the results are overwhelmingly promising. Most of the applications of GANs so far have been for images. But it makes you wonder: if machines are given the power of imagination to create pictures, what else can they create? In the future, will your favorite movies, music, and maybe even books be created by computers? The ability to synthesize one data type (text) to another (image) will eventually allow us to create all sorts of entertainment using only detailed text descriptions.

GANs create artwork

In October 2018, an AI-created painting called The Portrait of Edmond Belamy sold for $432,500. The artwork features a fictional person named Edmond de Belamy, possibly French and—to judge by his dark frock coat and plain white collar—a man of the church.

The artwork was created by a team of three 25-year-old French students using GANs. The network was trained on a dataset of 15,000 portraits painted between the fourteenth and twentieth centuries, and then it created one of its own. The team printed the image, framed it, and signed it with part of a GAN algorithm.

AI-generated artwork featuring a fictional person named Edmond de Belamy sold for $432,500.
1.2.5 Face recognition

Face recognition (FR) allows us to exactly identify or tag an image of a person. Day-to-day applications include searching for celebrities on the web and auto-tagging friends and family in images. Face recognition is a form of fine-grained classification.

The famous Handbook of Face Recognition (Li et al., Springer, 2011) categorizes two modes of an FR system:

■ Face identification—Face identification involves one-to-many matches that compare a query face image against all the template images in the database to determine the identity of the query face. Another face recognition scenario involves a watchlist check by city authorities, where a query face is matched to a list of suspects (one-to-few matches).
■ Face verification—Face verification involves a one-to-one match that compares a query face image against a template face image whose identity is being claimed (figure 1.10).

1.2.6 Image recommendation system

In this task, a user seeks to find similar images with respect to a given query image. Shopping websites provide product suggestions (via images) based on the selection of a particular product, for example, showing a variety of shoes similar to those the user selected. An example of an apparel search is shown in figure 1.11.
Figure 1.10 Example of face verification (left) and face recognition (right)

Figure 1.11 Apparel search. The leftmost image in each row is the query/clicked image, and the subsequent columns show similar apparel. (Source: Liu et al., 2016.)
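Both face recognition modes, and image recommendation as well, ultimately come down to comparing feature vectors (embeddings) of images, a topic chapter 10 develops in depth. The sketch below only illustrates the one-to-one and one-to-many matching just described; the embedding values, the gallery names, and the distance threshold are all made up for the example, and a real system would compute the embeddings with a trained network.

```python
import numpy as np

def euclidean(a, b):
    """Distance between two image embeddings (smaller means more similar)."""
    return np.linalg.norm(a - b)

# Made-up 4-dimensional embeddings; a real system would compute these with a CNN.
gallery = {
    "person_1": np.array([0.9, 0.1, 0.3, 0.7]),
    "person_2": np.array([0.2, 0.8, 0.5, 0.1]),
    "person_3": np.array([0.4, 0.4, 0.9, 0.6]),
}
query = np.array([0.85, 0.15, 0.35, 0.65])

# Face verification: one-to-one match against a single claimed identity.
THRESHOLD = 0.5  # assumed decision threshold
claimed = "person_1"
print("verified:", euclidean(query, gallery[claimed]) < THRESHOLD)

# Face identification: one-to-many match against every template in the database.
best_match = min(gallery, key=lambda name: euclidean(query, gallery[name]))
print("identified as:", best_match)
```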
1.3 Computer vision pipeline: The big picture

Okay, now that I have your attention, let's dig one level deeper into CV systems. Remember that earlier in this chapter, we discussed how vision systems are composed of two main components: sensing devices and interpreting devices (figure 1.12 offers a reminder). In this section, we will take a look at the pipeline the interpreting device component uses to process and understand images.

Figure 1.12 Focusing on the interpreting device in computer vision systems

Applications of CV vary, but a typical vision system uses a sequence of distinct steps to process and analyze image data. These steps are referred to as a computer vision pipeline. Many vision applications follow the flow of acquiring images and data, processing that data, performing some analysis and recognition steps, and then finally making a prediction based on the extracted information (figure 1.13).

Figure 1.13 The computer vision pipeline, which takes input data, processes it, extracts information, and then sends it to the machine learning model to learn. The four stages shown are (1) input data: images or videos (image frames); (2) preprocessing: getting the data ready by standardizing images, color transformation, and more; (3) feature extraction: finding distinguishing information about the image; and (4) ML model: learning from the extracted features to predict and classify objects.

Let's apply the pipeline in figure 1.13 to an image classifier example. Suppose we have an image of a motorcycle, and we want the model to predict the probability of the object from the following classes: motorcycle, car, and dog (see figure 1.14).
Figure 1.14 Using the machine learning model to predict the probability of the motorcycle object from the motorcycle, car, and dog classes. In the figure, the classifier outputs P(motorcycle) = 0.85, P(car) = 0.14, and P(dog) = 0.01.

DEFINITIONS An image classifier is an algorithm that takes in an image as input and outputs a label or "class" that identifies that image. A class (also called a category) in machine learning is the output category of your data.

Here is how the image flows through the classification pipeline:

1 A computer receives visual input from an imaging device like a camera. This input is typically captured as an image or a sequence of images forming a video.
2 Each image is then sent through some preprocessing steps whose purpose is to standardize the images. Common preprocessing steps include resizing an image, blurring, rotating, changing its shape, or transforming the image from one color to another, such as from color to grayscale. Only by standardizing the images—for example, making them the same size—can you then compare them and further analyze them.
3 We extract features. Features are what help us define objects, and they are usually information about object shape or color. For example, some features that distinguish a motorcycle are the shape of the wheels, headlights, mudguards, and so on. The output of this process is a feature vector that is a list of unique shapes that identify the object.
4 The features are fed into a classification model. This step looks at the feature vector from the previous step and predicts the class of the image. Pretend that you are the classifier model for a few minutes, and let's go through the classification process (a minimal code sketch of the whole pipeline follows this list). You look at the list of features in the feature vector one by one and try to determine what's in the image:
  a First you see a wheel feature; could this be a car, a motorcycle, or a dog? Clearly it is not a dog, because dogs don't have wheels (at least, normal dogs, not robots). Then this could be an image of a car or a motorcycle.
  b You move on to the next feature, the headlights. There is a higher probability that this is a motorcycle than a car.
  c The next feature is rear mudguards—again, there is a higher probability that it is a motorcycle.
  d The object has only two wheels; this is closer to a motorcycle.
  e And you keep going through all the features like the body shape, pedal, and so on, until you arrive at a best guess of the object in the image.
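The following is a highly simplified sketch of those four steps in code, written for this discussion rather than taken from the book's projects. The preprocess, extract_features, and classify helpers are made-up stand-ins that return fixed toy values; a real system would implement each stage (for example with OpenCV and a trained model), as later chapters show.

```python
import numpy as np

CLASSES = ["motorcycle", "car", "dog"]

def preprocess(image, size=(32, 32)):
    """Step 2: standardize the input, e.g. resize and convert to grayscale."""
    # Stand-in: a real pipeline would resize/convert here (e.g. with OpenCV).
    return image

def extract_features(image):
    """Step 3: turn the image into a feature vector (wheels, headlights, ...)."""
    # Stand-in values; a real extractor would compute these from the pixels.
    return np.array([0.9, 0.8, 0.7])   # e.g. wheel-ness, headlight-ness, mudguard-ness

def classify(features):
    """Step 4: map the feature vector to a probability per class."""
    scores = np.array([features.sum(), features.sum() * 0.5, 0.1])  # toy scores
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()              # softmax turns scores into probabilities

image = np.zeros((64, 64))              # Step 1: input image (placeholder)
probs = classify(extract_features(preprocess(image)))
for name, p in zip(CLASSES, probs):
    print(f"P({name}) = {p:.2f}")
```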
The output of this process is the probability of each class. As you can see in our example, the dog has the lowest probability, 1%, whereas there is an 85% probability that this is a motorcycle. You can see that, although the model was able to predict the right class with the highest probability, it is still a little confused about distinguishing between cars and motorcycles—it predicted that there is a 14% chance this is an image of a car. Since we know that it is a motorcycle, we can say that our ML classification algorithm is 85% accurate. Not bad! To improve this accuracy, we may need to do more of step 1 (acquire more training images), or step 2 (more processing to remove noise), or step 3 (extract better features), or step 4 (change the classifier algorithm and tune some hyperparameters), or even allow more training time. The many different approaches we can take to improve the performance of our model all lie in one or more of the pipeline steps.

That was the big picture of how images flow through the CV pipeline. Next, we'll zoom in one level deeper on each of the pipeline steps.

1.4 Image input

In CV applications, we deal with images or video data. Let's talk about grayscale and color images for now, and in later chapters, we will talk about videos, since videos are just stacked sequential frames of images.

1.4.1 Image as functions

An image can be represented as a function of two variables x and y, which define a two-dimensional area. A digital image is made of a grid of pixels. The pixel is the raw building block of an image. Every image consists of a set of pixels in which their values represent the intensity of light that appears in a given place in the image. Let's take a look at the motorcycle example again after applying the pixel grid to it (figure 1.15).

Figure 1.15 Images consist of raw building blocks called pixels. The pixel values represent the intensity of light that appears in a given place in the image. In the example grayscale image of size 32 × 16, F(12, 13) = 255 is a white pixel, F(18, 9) = 190 is a gray pixel, and F(20, 7) = 0 is a black pixel.
The image in figure 1.15 has a size of 32 × 16. This means the dimensions of the image are 32 pixels wide and 16 pixels tall. The x-axis goes from 0 to 31, and the y-axis from 0 to 15. Overall, the image has 512 (32 × 16) pixels. In this grayscale image, each pixel contains a value that represents the intensity of light on that specific pixel. The pixel values range from 0 to 255. Since the pixel value represents the intensity of light, the value 0 represents very dark pixels (black), 255 is very bright (white), and the values in between represent the intensity on the grayscale.

You can see that the image coordinate system is similar to the Cartesian coordinate system: images are two-dimensional and lie on the x-y plane. The origin (0, 0) is at the top left of the image. To represent a specific pixel, we use the following notation: F as a function, and x, y as the location of the pixel in x- and y-coordinates. For example, the pixel located at x = 12 and y = 13 is white; this is represented by the following function: F(12, 13) = 255. Similarly, the pixel (20, 7) that lies on the front of the motorcycle is black, represented as F(20, 7) = 0.

Grayscale => F(x, y) gives the intensity at position (x, y)

That was for grayscale images. How about color images? In color images, instead of representing the value of the pixel by just one number, the value is represented by three numbers representing the intensity of each color in the pixel. In an RGB system, for example, the value of the pixel is represented by three numbers: the intensity of red, intensity of green, and intensity of blue. There are other color systems for images like HSV and Lab. All follow the same concept when representing the pixel value (more on color images soon). Here is the function representing color images in the RGB system:

Color image in RGB => F(x, y) = [ red(x, y), green(x, y), blue(x, y) ]

Thinking of an image as a function is very useful in image processing. We can think of an image as a function of F(x, y) and operate on it mathematically to transform it to a new image function G(x, y). Let's take a look at the image transformation examples in table 1.1.

Table 1.1 Image transformation example functions
  Darken the image:                G(x, y) = 0.5 * F(x, y)
  Brighten the image:              G(x, y) = 2 * F(x, y)
  Move an object down 150 pixels:  G(x, y) = F(x, y + 150)
  Remove the gray in an image to transform it into black and white:  G(x, y) = { 0 if F(x, y) < 130, 255 otherwise }
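To make the transformations in table 1.1 concrete, here is a minimal sketch in Python. The 3 × 3 pixel grid and its values are made up purely for illustration, and NumPy is just one convenient way to operate on the whole image at once; clipping to the 0 to 255 range is a practical detail that the table leaves implicit:

import numpy as np

# A tiny 3 x 3 grayscale "image" with made-up pixel intensities (0 to 255)
F = np.array([[ 10,  60, 130],
              [200, 255,  90],
              [  0, 140, 180]], dtype=np.float32)

darker = np.clip(0.5 * F, 0, 255)            # G(x, y) = 0.5 * F(x, y)
brighter = np.clip(2.0 * F, 0, 255)          # G(x, y) = 2 * F(x, y), capped at pure white
black_and_white = np.where(F < 130, 0, 255)  # G(x, y) = 0 if F(x, y) < 130, else 255

print(black_and_white)

Each operation transforms the image pixel by pixel, which is exactly the "image as a function" idea in action.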
1.4.2 How computers see images
When we look at an image, we see objects, landscape, colors, and so on. But that's not the case with computers. Consider figure 1.16. Your human brain can process it and immediately know that it is a picture of a motorcycle. To a computer, the image looks like a 2D matrix of the pixels' values, which represent intensities across the color spectrum. There is no context here, just a massive pile of data.

Figure 1.16 What we see versus what computers see: a computer sees images as matrices of values. The values represent the intensity of the pixels across the color spectrum. For example, grayscale images range between pixel values of 0 for black and 255 for white.

The image in figure 1.16 is of size 24 × 24. This size indicates the width and height of the image: there are 24 pixels horizontally and 24 vertically. That means there is a total of 576 (24 × 24) pixels. If the image is 700 × 500, then the dimensionality of the matrix will be (700, 500), where each pixel in the matrix represents the intensity of brightness in that pixel. Zero represents black, and 255 represents white.

1.4.3 Color images
In grayscale images, each pixel represents the intensity of only one color, whereas in the standard RGB system, color images have three channels (red, green, and blue). In other words, color images are represented by three matrices: one represents the intensity of red in the pixel, one represents green, and one represents blue (figure 1.17).

As you can see in figure 1.17, the color image is composed of three channels: red, green, and blue. Now the question is, how do computers see this image? Again, they see the matrix, unlike grayscale images, where we had only one channel. In this case, we will have three matrices stacked on top of each other; that's why it's a 3D matrix. The dimensionality of 700 × 700 color images is (700, 700, 3). Let's say the first matrix represents the red channel; then each element of that matrix represents an intensity of red color in that pixel, and likewise with green and blue. Each pixel in a color
image has three numbers (0 to 255) associated with it. These numbers represent intensity of red, green, and blue color in that particular pixel. If we take the pixel (0,0) as an example, we will see that it represents the top-left pixel of the image of green grass. When we view this pixel in the color image, it looks like figure 1.18. The example in figure 1.19 shows some shades of the color green and their RGB values.

Figure 1.17 Color images are represented by red, green, and blue channels, and matrices can be used to indicate those colors' intensity. The top-left pixel of the grass image, for example, is F(0, 0) = [11, 102, 35].

Figure 1.18 An image of green grass is actually made of three colors of varying intensity: red 11 + green 102 + blue 35 = forest green (11, 102, 35).

Figure 1.19 Different shades of green mean different intensities of the three image colors (red, green, blue). For example, forest green is RGB (11, 102, 35), olive green is (112, 130, 56), jungle green is (41, 171, 135), mint green is (152, 251, 152), lime green is (199, 234, 70), and jade green is (0, 168, 107).
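To see this three-channel representation in code, here is a small sketch that builds a color image as a 3D array and reads back the RGB values of its top-left pixel. The 2 × 2 image is artificial, and its pixel values are simply the green shades listed in figure 1.19:

import numpy as np

# A hypothetical 2 x 2 color image: shape is (height, width, 3) for the RGB channels
image = np.array([[[ 11, 102,  35], [112, 130,  56]],
                  [[ 41, 171, 135], [152, 251, 152]]], dtype=np.uint8)

print(image.shape)     # (2, 2, 3): two rows, two columns, three channels
r, g, b = image[0, 0]  # the top-left pixel, F(0, 0)
print(r, g, b)         # 11 102 35: the forest-green pixel from figure 1.18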
23 Image preprocessing 1.5 Image preprocessing In machine learning (ML) projects, you usually go through a data preprocessing or cleaning step. As an ML engineer, you will spend a good amount of time cleaning up and preparing the data before you build your learning model. The goal of this step is to make your data ready for the ML model to make it easier to analyze and process computationally. The same thing is true with images. Based on the problem you are solving and the dataset in hand, some data massaging is required before you feed your images to the ML model. Image processing could involve simple tasks like image resizing. Later, you will learn that in order to feed a dataset of images to a convolutional network, the images all have to be the same size. Other processing tasks can take place, like geometric and color transformation, converting color to grayscale, and many more. We will cover various image-processing techniques throughout the chapters of this book and in the projects. The acquired data is usually messy and comes from different sources. To feed it to the ML model (or neural network), it needs to be standardized and cleaned up. Pre- processing is used to conduct steps that will reduce the complexity and increase the accuracy of the applied algorithm. We can’t write a unique algorithm for each of the conditions in which an image is taken; thus, when we acquire an image, we convert it into a form that would allow a general algorithm to solve it. The following subsections describe some data-preprocessing techniques. 1.5.1 Converting color images to grayscale to reduce computation complexity Sometimes you will find it useful to remove unnecessary information from your images to reduce space or computational complexity. For example, suppose you want to convert your colored images to grayscale, because for many objects, color is notHow do computers see color? Computers see an image as matrices. Grayscale images have one channel (gray); thus, we can represent grayscale images in a 2D matrix, where each element rep- resents the intensity of brightness in that particular pixel. Remember, 0 means black and 255 means white. Grayscale images have one channel, whereas color images have three channels: red, green, and blue. We can represent color images in a 3D matrix where the depth is three. We’ve also seen how images can be treated as functions of space. This concept allows us to operate on images mathematically and change or extract information from them. Treating images as functions is the basis of many image-processing tech- niques, such as converting color to grayscale or scaling an image. Each of these steps is just operating mathematical equations to transform an image pixel by pixel. Grayscale: f(x, y) gives the intensity at position ( x, y) Color image: f(x, y) = [ red ( x, y), green ( x, y), blue ( x, y) ]
24 CHAPTER 1Welcome to computer vision necessary to recognize and interpret an image. Grayscale can be good enough for rec- ognizing certain objects. Since color images contain more information than black- and-white images, they can add unnecessary complexity and take up more space in memory. Remember that color images are represented in three channels, which means that converting them to grayscale will reduce the number of pixels that need to be processed (figure 1.20). In this example, you can see how patterns of brightness and darkness (intensity) can be used to define the shape and characteristics of many objects. However, in other applications, color is important to define certain objects, like skin cancer detection, which relies heavily on skin color (red rashes). Standardizing images —As you will see in chapter 3, one important constraint that exists in some ML algorithms, such as CNNs, is the need to resize the images in your dataset to unified dimensions. This implies that your images must be pre- processed and scaled to have identical widths and heights before being fed to the learning algorithm. Data augmentation —Another common preprocessing technique involves aug- menting the existing dataset with modified versions of the existing images. Scal- ing, rotations, and other affine transformations are typically used to enlarge your dataset and expose the neural network to a wide variety of variations ofBicycleClouds Pedestrian Figure 1.20 Converting color images to grayscale results in a reduced number of pixels that need to be processed. This could be a good approach for applications that do not rely a lot on the color information loss due to the conversion.
25 Image preprocessing your images. This makes it more likely that your model will recognize objects when they appear in any form and shape. Figure 1.21 shows an example of image augmentation applied to a butterfly image. Other techniques —Many more preprocessing techniques are available to get your images ready for training an ML model. In some projects, you might need to remove the background color from your images to reduce noise. Other projects might require that you brighten or darken your images. In short, any adjustments that you need to apply to your dataset are part of preprocessing. You will selectWhen is color important? Converting an image to grayscale might not be a good decision for some problems. There are a number of applications for which color is very important: for example, building a diagnostic system to identify red skin rashes in medical images. This appli- cation relies heavily on the intensity of the red color in the skin. Removing colors from the image will make it harder to solve this problem. In general, color images provide very helpful information in many medical applications. Another example of the importance of color in images is lane-detection applications in a self-driving car, where the car has to identify the difference between yellow and white lines, because they are treated differently. Grayscale images do not provide enough information to distinguish between the yellow and white lines. The rule of thumb to identify the importance of colors in your problem is to look at the image with the human eye. If you are able to identify the object you are looking for in a gray image, then you probably have enough information to feed to your model. If not, then you definitely need more information (colors) for your model. The same rule can be applied for most other preprocessing techniques that we will discuss. YellowWhite Grayscale-based image processors cannot differentiate between color images.
26 CHAPTER 1Welcome to computer vision the appropriate processing techniques based on the dataset at hand and the problem you are solving. You will see many image-processing techniques through- out this book, helping you build your intuition of which ones you need when working on your own projects. No free lunch theorem This is a phrase that was introduced by David Wolpert and William Macready in “No Free Lunch Theorems for Optimizations” ( IEEE Transactions on Evolutionary Compu- tation 1, 67). You will often hear this said when a team is working on an ML project. It means that no one prescribed recipe fits all models. When working on ML proj- ects, you will need to make many choices like building your neural network architec- ture, tuning hyperparameters, and applying the appropriate data preprocessing techniques. While there are some rule-of-thumb approaches to tackle certain prob- lems, there is really no single recipe that is guaranteed to work well in all situations.Data augmentationOriginal image De-texturized De-colorized Edge enhanced Salient edge map Flip/rotate Figure 1.21 Image-augmentation techniques create modified versions of the input image to provide more examples for the ML model to learn from.
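As a quick illustration of these preprocessing steps, the following sketch resizes an image, converts it to grayscale, and creates a horizontally flipped copy as a simple augmentation. OpenCV is only one possible library for this, the file name is a placeholder, and the 224 × 224 target size is just an example; the right choices depend on your dataset and model:

import cv2

image = cv2.imread("motorcycle.jpg")                    # hypothetical input file

resized = cv2.resize(image, (224, 224))                 # unify dimensions for the model
grayscale = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)   # drop color to reduce complexity
flipped = cv2.flip(resized, 1)                          # horizontal flip as a simple augmentation

print(resized.shape, grayscale.shape)                   # (224, 224, 3) (224, 224)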
27 Feature extraction 1.6 Feature extraction Feature extraction is a core component of the CV pipeline. In fact, the entire DL model works around the idea of extracting useful features that clearly define the objects in the image. So we’ll spend a little more time here, because it is important that you understand what a feature is, what a vector of features is, and why we extract features. DEFINITION A feature in machine learning is an individual measurable prop- erty or characteristic of an observed phenomenon. Features are the input that you feed to your ML model to output a prediction or classification. Suppose you want to predict the price of a house: your input features (properties) might include square_foot , number_of_rooms , bathrooms , and so on, and the model will output the predicted price based on the values of your fea- tures. Selecting good features that clearly distinguish your objects increases the predictive power of ML algorithms. 1.6.1 What is a feature in computer vision? In CV, a feature is a measurable piece of data in your image that is unique to that spe- cific object. It may be a distinct color or a specific shape such as a line, edge, or image segment. A good feature is used to distinguish objects from one another. For example, if I give you a feature like a wheel and ask you to guess whether an object is a motorcy- cle or a dog, what would your guess be? A motorcycle. Correct! In this case, the wheel is a strong feature that clearly distinguishes between motorcycles and dogs. However, if I give you the same feature (a wheel) and ask you to guess whether an object is a bicycle or a motorcycle, this feature is not strong enough to distinguish between those objects. You need to look for more features like a mirror, license plate, or maybe a pedal, that collectively describe an object. In ML projects, we want to transform the raw data (image) into a feature vector to show to our learning algorithm, which can learn the characteristics of the object (figure 1.22). In the figure, we feed the raw input image of a motorcycle into a feature extraction algorithm. Let’s treat the feature extraction algorithm as a black box for now, and we will come back to it. For now, we need to know that the extraction algorithm produces a vector that contains a list of features. This feature vector is a 1D array that makes a robust representation of the object.You must make certain assumptions about the dataset and the problem you are try- ing to solve. For some datasets, it is best to convert the colored images to grayscale, while for other datasets, you might need to keep or adjust the color images. The good news is that, unlike traditional machine learning, DL algorithms require min- imum data preprocessing because, as you will see soon, neural networks do most of the heavy lifting in processing an image and extracting features.
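Going back to the house-price example in the definition above, a feature vector is just an ordered list of such measurable properties. The following sketch shows what that looks like in code; the feature values, weights, and bias are made up for illustration, since a real model would learn its parameters from data:

import numpy as np

# Hypothetical feature vector: [square_foot, number_of_rooms, bathrooms]
features = np.array([1500.0, 3.0, 2.0])

# Hypothetical learned parameters of a simple linear price model
weights = np.array([120.0, 9000.0, 7500.0])
bias = 20000.0

predicted_price = np.dot(weights, features) + bias
print(predicted_price)   # 1500*120 + 3*9000 + 2*7500 + 20000 = 242000.0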
28 CHAPTER 1Welcome to computer vision 1.6.2 What makes a good (useful) feature? Machine learning models are only as good as the features you provide. That means coming up with good features is an important job in building ML models. But what makes a good feature? And how can you tell? Feature generalizability It is important to point out that figure 1.22 reflects features extracted from just one motorcycle. A very important characteristic of a feature is repeatability . The feature should be able to detect motorcycles in general, not just this specific one. So in real- world problems, a feature is not an exact copy of a piece of the input image. If we take the wheel feature, for example, the feature doesn’t look exactly like the wheel of one particular motorcycle. Instead, it looks like a circular shape with some patterns that identify wheels in all images in the training dataset. When the feature extractor sees thousands of images of motorcycles, it recognizes patterns that define wheels in general, regardless of where they appear in the image and what type of motorcycle they are part of. Input data Features Feature extraction algorithm Figure 1.22 Example input image fed to a feature-extraction algorithm to find patterns within the image and create the feature vector Feature after looking at thousands of images Feature after looking at one image Features need to detect general patterns.
Let's discuss this with an example. Suppose we want to build a classifier to tell the difference between two types of dogs: Greyhound and Labrador. Let's take two features, the dogs' height and their eye color, and evaluate them (figure 1.23).

Figure 1.23 Example of Greyhound and Labrador dogs

Let's begin with height. How useful do you think this feature is? Well, on average, Greyhounds tend to be a couple of inches taller than Labradors, but not always. There is a lot of variation in the dog world. So let's evaluate this feature across different values in both breeds' populations. Let's visualize the height distribution on a toy example in the histogram in figure 1.24.

Figure 1.24 A visualization of the height distribution on a toy dogs dataset (number of dogs versus height in inches)

From the histogram, we can see that if the dog's height is 20 inches or less, there is more than an 80% probability that the dog is a Labrador. On the other side of the histogram, if we look at dogs that are taller than 30 inches, we can be pretty confident
30 CHAPTER 1Welcome to computer vision the dog is a Greyhound. Now, what about the data in the middle of the histogram (heights from 20 to 30 inches)? We can see that the probability of each type of dog is pretty close. The thought process in this case is as follows: if height ≤ 20: return higher probability to Labrador if height ≥ 30: return higher probability to Greyhound if 20 < height < 30: look for other features to classify the object So the height of the dog in this case is a useful feature because it helps (adds informa- tion) in distinguishing between both dog types. We can keep it. But it doesn’t distin- guish between Greyhounds and Labradors in all cases, which is fine. In ML projects, there is usually no one feature that can classify all objects on its own. That’s why, in machine learning, we almost always need multiple features, where each feature cap- tures a different type of information. If only one feature would do the job, we could just write if-else statements instead of bothering with training a classifier. TIP Similar to what we did earlier with color conversion (color versus gray- scale), to figure out which features you should use for a specific problem, do a thought experiment. Pretend you are the classifier. If you want to differentiate between Greyhounds and Labradors, what information do you need to know? You might ask about the hair length, the body size, the color, and so on. For another quick example of a non-useful feature to drive this idea home, let’s look at dog eye color. For this toy example, imagine that we have only two eye colors, blue and brown. Figure 1.25 shows what a histogram might look like for this example. Blue eyes Brown eyesLabrador Greyhound Figure 1.25 A visualization of the eye color distribution in a toy dogs dataset
31 Feature extraction It is clear that for most values, the distribution is about 50/50 for both types. So practi- cally, this feature tells us nothing, because it doesn’t correlate with the type of dog. Hence, it doesn’t distinguish between Greyhounds and Labradors. 1.6.3 Extracting features (handcrafted vs. automatic extracting) This is a large topic in machine learning that could take up an entire book. It’s typi- cally described in the context of a topic called feature engineering. In this book, we are only concerned with extracting features in images. So I’ll touch on the idea very quickly in this chapter and build on it in later chapters. TRADITIONAL MACHINE LEARNING USING HANDCRAFTED FEATURES In traditional ML problems, we spend a good amount of time in manual feature selec- tion and engineering. In this process, we rely on our domain knowledge (or partner with domain experts) to create features that make ML algorithms work better. We then feed the produced features to a classifier like a support vector machine (SVM) or AdaBoost to predict the output (figure 1.26). Some of the handcrafted feature sets are these: Histogram of oriented gradients (HOG) Haar Cascades Scale-invariant feature transform (SIFT) Speeded-Up Robust Feature (SURF)What makes a good feature for object recognition? A good feature will help us recognize an object in all the ways it may appear. Charac- teristics of a good feature follow: Identifiable Easily tracked and compared Consistent across different scales, lighting conditions, and viewing angles Still visible in noisy images or when only part of an object is visible InputFeature extraction (handcrafted)Learning algorithm SVM or AdaBoost Output Car Not a car Figure 1.26 Traditional machine learning algorithms require handcrafted feature extraction.
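To make the handcrafted pipeline in figure 1.26 concrete, here is a small sketch that extracts HOG features with scikit-image and feeds them to an SVM from scikit-learn. The random arrays stand in for a real image dataset, and the HOG parameters are common defaults rather than tuned values; the point is only to show the shape of the "handcrafted features plus classical classifier" workflow:

import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

# Placeholder dataset: 20 random 64 x 64 grayscale "images" with alternating labels
rng = np.random.default_rng(0)
images = rng.random((20, 64, 64))
labels = np.array([0, 1] * 10)

# Step 1: handcrafted feature extraction (HOG) turns each image into a feature vector
features = np.array([hog(img, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                     for img in images])

# Step 2: a classical classifier (SVM) learns from the feature vectors
classifier = SVC(kernel="linear")
classifier.fit(features, labels)
print(classifier.predict(features[:3]))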
32 CHAPTER 1Welcome to computer vision DEEP LEARNING USING AUTOMATICALLY EXTRACTED FEATURES In DL, however, we do not need to manually extract features from the image. The net- work extracts features automatically and learns their importance on the output by applying weights to its connections. You just feed the raw image to the network, and while it passes through the network layers, the network identifies patterns within the image with which to create features (figure 1.27). Neural networks can be thought of as feature extractors plus classifiers that are end-to-end trainable, as opposed to tradi- tional ML models that use handcrafted features. How do neural networks distinguish useful features from non-useful features? You might get the impression that neural networks only understand the most useful features, but that’s not entirely true. Neural networks scoop up all the features avail- able and give them random weights. During the training process, the neural network adjusts these weights to reflect their importance and how they should impact the out- put prediction. The patterns with the highest appearance frequency will have higher weights and are considered more useful features. Features with the lowest weights will have very little impact on the output. This learning process will be discussed in deeper detail in the next chapter.Input Feature extraction and classification Output Car Not a car Figure 1.27 A deep neural network passes the input image through its layers to automatically extract features and classify the object. No handcrafted features are needed. OutputFeaturesW W WW 2 3 41Weights NeuronX2 X3X1 ... Xn Weighting different features to reflect their importance in identifying the object
33 Classifier learning algorithm WHY USE FEATURES ? The input image has too much extra information that is not necessary for classifica- tion. Therefore, the first step after preprocessing the image is to simplify it by extract- ing the important information and throwing away nonessential information. By extracting important colors or image segments, we can transform complex and large image data into smaller sets of features. This makes the task of classifying images based on their features simpler and faster. Consider the following example. Suppose we have a dataset of 10,000 images of motorcycles, each of 1,000 width by 1,000 height. Some images have solid backgrounds, and others have busy backgrounds of unnecessary data. When these thousands of images are fed to the feature extraction algorithms, we lose all the unnecessary data that is not important to identify motorcycles, and we only keep a consolidated list of useful features that can be fed directly to the classifier (figure 1.28). This process is a lot sim- pler than having the classifier look at the raw dataset of 10,000 images to learn the properties of motorcycles. 1.7 Classifier learning algorithm Here is what we have discussed so far regarding the classifier pipeline: Input image —We’ve seen how images are represented as functions, and that com- puters see images as a 2D matrix for grayscale images and a 3D matrix (three channels) for colored images. Image preprocessing —We discussed some image-preprocessing techniques to clean up our dataset and make it ready as input to the ML algorithm. Feature extraction —We converted our large dataset of images into a vector of use- ful features that uniquely describe the objects in the image.Feature extractionFeatures vectorImages dataset of 10,000 images ... ... ...Classifier algorithm Figure 1.28 Extracting and consolidating features from thousands of images in one feature vector to be fed to the classifier
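To give a sense of what the classifier stage at the end of this pipeline can look like in code, here is a small Keras sketch of a network that takes a feature vector and outputs one probability per class. The feature-vector length of 128 and the three classes (motorcycle, car, dog) are illustrative only, and the model is untrained, so its predictions are meaningless; the point is the shape of the data flowing in and out:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Hypothetical setup: each image has already been reduced to a 128-value feature vector,
# and we want probabilities for three classes (motorcycle, car, dog)
model = Sequential([
    Dense(64, activation="relu", input_dim=128),
    Dense(3, activation="softmax"),    # output layer: one probability per class
])

fake_features = np.random.random((10, 128))   # placeholder feature vectors for 10 images
print(model.predict(fake_features).shape)     # (10, 3): class probabilities per image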
34 CHAPTER 1Welcome to computer vision Now it is time to feed the extracted feature vector to the classifier to output a class label for the images (for example, motorcycle or otherwise). As we discussed in the previous section, the classification task is done one of these ways: traditional ML algorithms like SVMs, or deep neural network algorithms like CNNs. While traditional ML algorithms might get decent results for some problems, CNNs truly shine in processing and classifying images in the most complex problems. In this book, we will discuss neural networks and how they work in detail. For now, I want you to know that neural networks automatically extract useful features from your dataset, and they act as a classifier to output class labels for your images. Input images pass through the layers of the neural network to learn their features layer by layer (figure 1.29). The deeper your network is (the more layers), the more it will learn the features of the dataset: hence the name deep learning . More layers come with some trade-offs that we will discuss in the next two chapters. The last layer of the neural net- work usually acts as the classifier that outputs the class label. Summary Both human and machine vision systems contain two basic components: a sens- ing device and an interpreting device. The interpreting process consists of four steps: input the data, preprocess it, do feature extraction, and produce a machine learning model.Deep learning classifier Network layers Input image ...MotorcycleOutput Not motorcycle... ... ...... ... Feature extraction layers (The input image flows through the network layers to learn its features. Early layers detect patterns in the image, then later layers detect patterns within patterns, and so on, until it creates the feature vector.)Classification layer (Looks at the feature vector extracted by the previous layer and fires the upper node if it sees the features of a motorcycle or the lower node if it doesn’t.) Figure 1.29 Input images pass through the layers of a neural network so it can learn features layer by layer.
35 Summary An image can be represented as a function of x and y. Computers see an image as a matrix of pixel values: one channel for grayscale images and three channels for color images. Image-processing techniques vary for each problem and dataset. Some of these techniques are converting images to grayscale to reduce complexity, resizing images to a uniform size to fit your neural network, and data augmentation. Features are unique properties in the image that are used to classify its objects. Traditional ML algorithms use several feature-extraction methods.
36Deep learning and neural networks In the last chapter, we discussed the computer vision (CV) pipeline components: the input image, preprocessing, extracting features, and the learning algorithm (classifier). We also discussed that in traditional ML algorithms, we manually extract features that produce a vector of features to be classified by the learning algorithm, whereas in deep learning (DL), neural networks act as both the feature extractor and the classifier. A neural network automatically recognizes patterns and extracts features from the image and classifies them into labels (figure 2.1). In this chapter, we will take a short pause from the CV context to open the DL algorithm box from figure 2.1. We will dive deeper into how neural networks learn features and make predictions. Then, in the next chapter, we will comeThis chapter covers Understanding perceptrons and multilayer perceptrons Working with the different types of activation functions Training networks with feedforward, error functions, and error optimization Performing backpropagation
37 Understanding perceptrons back to CV applications with one of the most popular DL architectures: convolutional neural networks. The high-level layout of this chapter is as follows: We will begin with the most basic component of the neural network: the perceptron , a neural network that contains only one neuron. Then we will move on to a more complex neural network architecture that con- tains hundreds of neurons to solve more complex problems. This network is called a multilayer perceptron (MLP), where neurons are stacked in hidden layers . Here, you will learn the main components of the neural network architecture: the input layer, hidden layers, weight connections, and output layer. You will learn that the network training process consists of three main steps: 1Feedforward operation 2Calculating the error 3Error optimization: using backpropagation and gradient descent to select the most optimum parameters that minimize the error function We will dive deep into each of these steps. You will see that building a neural network requires making necessary design decisions: choosing an optimizer, cost function, and activation functions, as well as designing the architecture of the network, including how many layers should be connected to each other and how many neurons should be in each layer. Ready? Let’s get started! 2.1 Understanding perceptrons Let’s take a look at the artificial neural network (ANN) diagram from chapter 1 (fig- ure 2.2). You can see that ANNs consist of many neurons that are structured in layers to perform some kind of calculations and predict an output. This architecture can beFeature extractorFeatures vectorTraditional machine learning flow Traditional ML algorithmOutput Input Deep learning algorithmDeep learning flow Output Input Figure 2.1 Traditional ML algorithms require manual feature extraction. A deep neural network automatically extracts features by passing the input image through its layers.
38 CHAPTER 2Deep learning and neural networks also called a multilayer perceptron , which is more intuitive because it implies that the net- work consists of perceptrons structured in multiple layers. Both terms, MLP and ANN, are used interchangeably to describe this neural network architecture. In the MLP diagram in figure 2.2, each node is called a neuron . We will discuss how MLP networks work soon, but first let’s zoom in on the most basic component of the neural network: the perceptron. Once you understand how a single perceptron works, it will become more intuitive to understand how multiple perceptrons work together to learn data features. 2.1.1 What is a perceptron? The most simple neural network is the perceptron, which consists of a single neuron. Conceptually, the perceptron functions in a manner similar to a biological neuron (figure 2.3). A biological neuron receives electrical signals from its dendrites , modu- lates the electrical signals in various amounts, and then fires an output signal through its synapses only when the total strength of the input signals exceeds a certain thresh- old. The output is then fed to another neuron, and so forth. To model the biological neuron phenomenon, the artificial neuron performs two consecutive functions: it calculates the weighted sum of the inputs to represent the total strength of the input signals, and it applies a step function to the result to determine whether to fire the output 1 if the signal exceeds a certain threshold or 0 if the signal doesn’t exceed the threshold. As we discussed in chapter 1, not all input features are equally useful or important. To represent that, each input node is assigned a weight value, called its connection weight , to reflect its importance.InputArtificial neural network (ANN) Layers of neuronsOutput Figure 2.2 An artificial neural network consists of layers of nodes, or neurons connected with edges.
39 Understanding perceptrons In the perceptron diagram in figure 2.4, you can see the following: Input vector —The feature vector that is fed to the neuron. It is usually denoted with an uppercase X to represent a vector of inputs ( x1, x2, . . ., xn). Weights vector —Each x1 is assigned a weight value w1 that represents its impor- tance to distinguish between different input datapoints.Connection weights Not all input features are equally important (or useful) features. Each input feature (x1) is assigned its own weight ( w1) that reflects its importance in the decision-making process. Inputs assigned greater weight have a greater effect on the output. If the weight is high, it amplifies the input signal; and if the weight is low, it diminishes the input signal. In common representations of neural networks, the weights are repre- sented by lines or edges from the input node to the perceptron. For example, if you are predicting a house price based on a set of features like size, neighborhood, and number of rooms, there are three input features ( x1, x2, and x3). Each of these inputs will have a different weight value that represents its effect on the final decision. For example, if the size of the house has double the effect on the price compared with the neighborhood, and the neighborhood has double the effect compared with the number of rooms, you will see weights something like 8, 4, and 2, respectively. How the connection values are assigned and how the learning happens is the core of the neural network training process. This is what we will discuss for the rest of this chapter. Biological neuron Artificial neuron Neuron Flow of informationDendrites (information coming from other neurons) Synapses (information output to other neurons)fx( ) Output xx n2 ...x1Input Neuron Figure 2.3 Artificial neurons were inspired by biological neurons. Different neurons are connected to each other by synapses that carry information.
Neuron functions —The calculations performed within the neuron to modulate the input signals: the weighted sum and step activation function.
Output —Controlled by the type of activation function you choose for your network. There are different activation functions, as we will discuss in detail in this chapter. For a step function, the output is either 0 or 1. Other activation functions produce probability output or float numbers. The output node represents the perceptron prediction.

Let's take a deeper look at the weighted sum and step function calculations that happen inside the neuron.

WEIGHTED SUM FUNCTION
Also known as a linear combination, the weighted sum function is the sum of all inputs multiplied by their weights, and then added to a bias term. This function produces a straight line represented in the following equation:

z = Σxi · wi + b (bias)
z = x1 · w1 + x2 · w2 + x3 · w3 + … + xn · wn + b

Here is how we implement the weighted sum in Python, where X is the input vector (uppercase X), w is the weights vector, and b is the y-intercept:

import numpy as np

z = np.dot(w.T, X) + b

Figure 2.4 Input vectors are fed to the neuron, with weights assigned to represent importance. Calculations performed within the neuron are weighted sum and activation functions.
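To make the weighted sum concrete, here is a tiny worked example with made-up numbers for three inputs, their weights, and the bias:

import numpy as np

X = np.array([0.5, 0.2, 0.1])   # input vector (x1, x2, x3), values chosen for illustration
w = np.array([0.4, 0.3, 0.9])   # connection weights (w1, w2, w3)
b = 0.05                        # bias term

z = np.dot(w.T, X) + b          # weighted sum: x1*w1 + x2*w2 + x3*w3 + b
print(z)                        # 0.5*0.4 + 0.2*0.3 + 0.1*0.9 + 0.05 = 0.4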
What is a bias in the perceptron, and why do we add it?
Let's brush up our memory on some linear algebra concepts to help understand what's happening under the hood. Here is the function of the straight line:

y = mx + b (the equation of a straight line)

The function of a straight line is represented by the equation (y = mx + b), where b is the y-intercept. To be able to define a line, you need two things: the slope of the line and a point on the line. The bias is that point on the y-axis. Bias allows you to move the line up and down on the y-axis to better fit the prediction with the data. Without the bias (b), the line always has to go through the origin point (0,0), and you will get a poorer fit. To visualize the importance of bias, look at the graph of the straight line and try to separate the circles from the star using a line that passes through the origin (0,0). It is not possible.

The input layer can be given biases by introducing an extra input node that always has a value of 1, as you can see in the next figure. In neural networks, the value of the bias (b) is treated as an extra weight and is learned and adjusted by the neuron to minimize the cost function, as we will learn in the following sections of this chapter.

The input layer can be given biases by introducing an extra input that always has a value of 1 (the bias node's weight w0, together with the weights w1 through wm on inputs x1 through xm, feeds the net input function and then the activation function).
STEP ACTIVATION FUNCTION
In both artificial and biological neural networks, a neuron does not just output the bare input it receives. Instead, there is one more step, called an activation function; this is the decision-making unit of the brain. In ANNs, the activation function takes the same weighted sum input from before (z = Σxi · wi + b) and activates (fires) the neuron if the weighted sum is higher than a certain threshold. This activation happens based on the activation function calculations. Later in this chapter, we'll review the different types of activation functions and their general purpose in the broader context of neural networks. The simplest activation function used by the perceptron algorithm is the step function that produces a binary output (0 or 1). It basically says that if the summed input ≥ 0, it "fires" (output = 1); else (summed input < 0), it doesn't fire (output = 0) (figure 2.5):

y = g(z), where g is an activation function and z is the weighted sum = Σxi · wi + b
output = 1 if w · x + b ≥ 0
output = 0 if w · x + b < 0

This is how the step function looks in Python:

def step_function(z):
    if z >= 0:
        return 1
    else:
        return 0

Figure 2.5 The step function produces a binary output (0 or 1). If the summed input ≥ 0, it "fires" (output = 1); else (summed input < 0) it doesn't fire (output = 0). z is the weighted sum = Σxi · wi + b
2.1.2 How does the perceptron learn?
The perceptron uses trial and error to learn from its mistakes. It uses the weights as knobs by tuning their values up and down until the network is trained (figure 2.6).

Figure 2.6 Weights are tuned up and down during the learning process to optimize the value of the loss function.

The perceptron's learning logic goes like this:

1 The neuron calculates the weighted sum and applies the activation function to make a prediction ŷ. This is called the feedforward process:
  ŷ = activation(Σxi · wi + b)
2 It compares the output prediction with the correct label to calculate the error:
  error = y – ŷ
3 It then updates the weight. If the prediction is too high, it adjusts the weight to make a lower prediction the next time, and vice versa.
4 Repeat!

This process is repeated many times, and the neuron continues to update the weights to improve its predictions until step 2 produces a very small error (close to zero), which means the neuron's prediction is very close to the correct value. At this point, we can stop the training and save the weight values that yielded the best results to apply to future cases where the outcome is unknown.
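These four steps can be written as a compact training loop. The update rule below (adjusting each weight by the error times the input, scaled by a small learning rate) is the classic perceptron learning rule and is one common way to implement step 3; the learning rate, epoch count, and toy data are illustrative choices, not prescriptions from this chapter:

import numpy as np

def step_function(z):
    return 1 if z >= 0 else 0

def train_perceptron(X, y, learning_rate=0.1, epochs=20):
    weights = np.zeros(X.shape[1])
    bias = 0.0
    for _ in range(epochs):
        for x_i, label in zip(X, y):
            prediction = step_function(np.dot(weights, x_i) + bias)  # step 1: feedforward
            error = label - prediction                               # step 2: calculate the error
            weights += learning_rate * error * x_i                   # step 3: update the weights
            bias += learning_rate * error                            # the bias is updated the same way
    return weights, bias

# Toy linearly separable data: points away from the origin are labeled 1
X = np.array([[0.0, 0.0], [0.0, 1.5], [1.5, 0.0], [1.0, 1.0]])
y = np.array([0, 1, 1, 1])
weights, bias = train_perceptron(X, y)
print(weights, bias)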
2.1.3 Is one neuron enough to solve complex problems?
The short answer is no, but let's see why. The perceptron is a linear function. This means the trained neuron will produce a straight line that separates our data.

Suppose we want to train a perceptron to predict whether a player will be accepted into the college squad. We collect all the data from previous years and train the perceptron to predict whether players will be accepted based on only two features (height and age). The trained perceptron will find the best weights and bias values to produce the straight line that best separates the accepted from non-accepted (best fit). The line has this equation:

z = height · w1 + age · w2 + b

After the training is complete on the training data, we can start using the perceptron to predict with new players. When we get a player who is 150 cm in height and 12 years old, we compute the previous equation with the values (150, 12). When plotted in a graph (figure 2.7), you can see that it falls below the line: the neuron is predicting that this player will not be accepted. If it falls above the line, then the player will be accepted.

Figure 2.7 Linearly separable data can be separated by a straight line (here, height in cm versus age in years).

In figure 2.7, the single perceptron works fine because our data was linearly separable. This means the training data can be separated by a straight line. But life isn't always that simple. What happens when we have a more complex dataset that cannot be separated by a straight line (nonlinear dataset)? As you can see in figure 2.8, a single straight line will not separate our training data. We say that it does not fit our data. We need a more complex network for more complex data like this. What if we built a network with two perceptrons? This would produce two lines. Would that help us separate the data better?

Okay, this is definitely better than the straight line. But, I still see some color mispredictions. Can we add more neurons to make the function fit better? Now you are getting it. Conceptually, the more neurons we add, the better the network will fit our
45 Multilayer perceptrons training data. In fact, if we add too many neurons, this will make the network overfit the training data (not good). But we will talk about this later. The general rule here is that the more complex our network is, the better it learns the features of our data. 2.2 Multilayer perceptrons We saw that a single perceptron works great with simple datasets that can be separated by a line. But, as you can imagine, the real world is much more complex than that. This is where neural networks can show their full potential. Linear vs. nonlinear problems Linear datasets —The data can be split with a single straight line. Nonlinear datasets —The data cannot be split with a single straight line. We need more than one line to form a shape that splits the data. Look at this 2D data. In the linear problem, the stars and dots can be easily classified by drawing a single straight line. In nonlinear data, a single line will not separate both shapes.cm Neuron 1 Neuron 2210 200 140150160170180190 Height 130 120 10 11 12 13 14 15 16 17 18 19 Age Figure 2.8 In a nonlinear dataset, a single straight line cannot separate the training data. A network with two perceptrons can produce two lines and help separate the data further in this example. Linear (can be split by one straight line)(need more than oneNonlinear line to split the data) Examples of linear data and nonlinear data
46 CHAPTER 2Deep learning and neural networks To split a nonlinear dataset, we need more than one line. This means we need to come up with an architecture to use tens and hundreds of neurons in our neural net- work. Let’s look at the example in figure 2.9. Remember that a perceptron is a linear function that produces a straight line. So in order to fit this data, we try to create a triangle-like shape that splits the dark dots. It looks like three lines would do the job. Figure 2.9 is an example of a small neural network that is used to model nonlinear data. In this network, we used three neurons stacked together in one layer called a hidden layer , so called because we don’t see the output of these layers during the training process. 2.2.1 Multilayer perceptron architecture We’ve seen how a neural network can be designed to have more than one neuron. Let’s expand on this idea with a more complex dataset. The diagram in figure 2.10 is from the Tensorflow playground website ( https:/ /playground.tensorflow.org ). We try to model a spiral dataset to distinguish between two classes. In order to fit this dataset, we need to build a neural network that contains tens of neurons. A very common neu- ral network architecture is to stack the neurons in layers on top of each other, called hidden layers . Each layer has n number of neurons. Layers are connected to each other by weight connections. This leads to the multilayer perceptron (MLP) architecture in the figure. The main components of the neural network architecture are as follows: Input layer —Contains the feature vector. Hidden layers —The neurons are stacked on top of each other in hidden layers. They are called “hidden” layers because we don’t see or control the input going into these layers or the output. All we do is feed the feature vector to the input layer and see the output coming out of the output layer. Weight connections (edges) —Weights are assigned to each connection between the nodes to reflect the importance of their influence on the final output predic- tion. In graph network terms, these are called edges connecting the nodes .Input features Output Hidden layer x x1 2Figure 2.9 A perceptron is a linear function that produces a straight line. So to fit this data, we need three perceptrons to create a triangle-like shape that splits the dark dots.
47 Multilayer perceptrons Output layer —We get the answer or prediction from our model from the output layer. Depending on the setup of the neural network, the final output may be a real-valued output (regression problem) or a set of probabilities (classification problem). This is determined by the type of activation function we use in the neurons in the output layer. We’ll discuss the different types of activation func- tions in the next section. We discussed the input layer, weights, and output layer. The next area of this architec- ture is the hidden layers. 2.2.2 What are hidden layers? This is where the core of the feature-learning process takes place. When you look at the hidden layer nodes in figure 2.10, you see that the early layers detect simple pat- terns to learn low-level features (straight lines). Later layers detect patterns within patterns to learn more complex features and shapes, then patterns within patterns within patterns, and so on. This concept will come in handy when we discuss convolu- tional networks in later chapters. For now, know that, in neural networks, we stack hid- den layers to learn complex features from each other until we fit our data. So when you are designing your neural network, if your network is not fitting the data, the solu- tion could be adding more hidden layers. 2.2.3 How many layers, and how many nodes in each layer? As a machine learning engineer, you will mostly be designing your network and tun- ing its hyperparameters. While there is no single prescribed recipe that fits all models, we will try throughout this book to build your hyperparameter tuning intuition, asX16 neurons 6 neurons 6 neuronsSix hidden layers Input features6 neurons 6 neurons 2 neuronsOutput X2 These are the new features that are learned after each layer. Figure 2.10 Tensorflow playground example representation of the feature learning in a deep neural network
48 CHAPTER 2Deep learning and neural networks well as recommend some starting points. The number of layers and the number of neurons in each layer are among the important hyperparameters you will be design- ing when working with neural networks. A network can have one or more hidden layers (technically, as many as you want). Each layer has one or more neurons (again, as many as you want). Your main job, as a machine learning engineer, is to design these layers. Usually, when we have two or more hidden layers, we call this a deep neural network . The general rule is this: the deeper your network is, the more it will fit the training data. But too much depth is not a good thing, because the network can fit the training data so much that it fails to generalize when you show it new data (overfitting); also, it becomes more computa- tionally expensive. So your job is to build a network that is not too simple (one neu- ron) and not too complex for your data. It is recommended that you read about different neural network architectures that are successfully implemented by others to build an intuition about what is too simple for your problem. Start from that point, maybe three to five layers (if you are training on a CPU), and observe the network performance. If it is performing poorly (underfitting), add more layers. If you see signs of overfitting (discussed later), then decrease the number of layers. To build a sense of how neural networks perform when you add more layers, play around with the Tensorflow playground ( https:/ /playground.tensorflow.org ). Fully connected layers It is important to call out that the layers in classical MLP network architectures are fully connected to the next hidden layer. In the following figure, notice that each node in a layer is connected to all nodes in the previous layer. This is called a fully con- nected network . These edges are the weights that represent the importance of this node to the output value. n_units n_units n_out Input features Hidden layer 1 Hidden layer 2 Output layer210 A fully connected network
49 Multilayer perceptrons In later chapters, we will discuss other variations of neural network architecture (like convolutional and recurrent networks). For now, know that this is the most basic neu- ral network architecture, and it can be referred to by any of these names: ANN, MLP, fully connected network, or feedforward network. Let’s do a quick exercise to find out how many edges we have in our example. Sup- pose that we designed an MLP network with two hidden layers, and each has five neurons: Weights_0_1 : (4 nodes in the input layer) × (5 nodes in layer 1) + 5 biases [1 bias per neuron] = 25 edges Weights_1_2 : 5 × 5 nodes + 5 biases = 30 edges Weights_2_output : 5 × 3 nodes + 3 bias = 18 edges Total edges (weights) in this network = 73 We have a total of 73 weights in this very simple network. The values of these weights are randomly initialized, and then the network performs feedforward and backpropagation to learn the best values of weights that most fit our model to the training data. To see the number of weights in this network, try to build this simple network in Keras as follows: model = Sequential([ Dense(5, input_dim=4), Dense(5), Dense(3) ]) And print the model summary: model.summary() The output will be as follows: _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense (Dense) (None, 5) 25 _________________________________________________________________ dense_1 (Dense) (None, 5) 30 _________________________________________________________________ dense_2 (Dense) (None, 3) 18 ================================================================= Total params: 73 Trainable params: 73 Non-trainable params: 0
50 CHAPTER 2Deep learning and neural networks 2.2.4 Some takeaways from this section Let’s recap what we’ve discussed so far: We talked about the analogy between biological and artificial neurons: both have inputs and a neuron that does some calculations to modulate the input signals and create output. We zoomed in on the artificial neuron’s calculations to explore its two main functions: weighted sum and the activation function. We saw that the network assigns random weights to all the edges. These weight parameters reflect the usefulness (or importance) of these features on the out- put prediction. Finally, we saw that perceptrons contain a single neuron. They are linear func- tions that produce a straight line to split linear data. In order to split more com- plex data (nonlinear), we need to apply more than one neuron in our network to form a multilayer perceptron. The MLP architecture contains input features, connection weights, hidden lay- ers, and an output layer. We discussed the high-level process of how the perceptron learns. The learning process is a repetition of three main steps: feedforward calculations to produce a prediction (weighted sum and activation), calculating the error, and back- propagating the error and updating the weights to minimize the error. We should also keep in mind some of the important points about neural network hyperparameters: Number of hidden layers —You can have as many layers as you want, each with as many neurons as you want. The general idea is that the more neurons you have, the better your network will learn the training data. But if you have too many neurons, this might lead to a phenomenon called overfitting : the network learned the training set so much that it memorized it instead of learning its fea- tures. Thus, it will fail to generalize. To get the appropriate number of layers, start with a small network, and observe the network performance. Then start adding layers until you get satisfying results. Activation function —There are many types of activation functions, the most pop- ular being ReLU and softmax. It is recommended that you use ReLU activation in the hidden layers and Softmax for the output layer (you will see how this is implemented in most projects in this book). Error function —Measures how far the network’s prediction is from the true label. Mean square error is common for regression problems, and cross-entropy is common for classification problems. Optimizer —Optimization algorithms are used to find the optimum weight values that minimize the error. There are several optimizer types to choose from. In this chapter, we discuss batch gradient descent, stochastic gradient descent, and
51 Activation functions mini-batch gradient descent. Adam and RMSprop are two other popular opti- mizers that we don’t discuss. Batch size —Mini-batch size is the number of sub-samples given to the network, after which parameter update happens. Bigger batch sizes learn faster but require more memory space. A good default for batch size might be 32. Also try 64, 128, 256, and so on. Number of epochs —The number of times the entire training dataset is shown to the network while training. Increase the number of epochs until the validation accu- racy starts decreasing even when training accuracy is increasing (overfitting). Learning rate —One of the optimizer’s input parameters that we tune. Theoreti- cally, a learning rate that is too small is guaranteed to reach the minimum error (if you train for infinity time). A learning rate that is too big speeds up the learning but is not guaranteed to find the minimum error. The default lr value of the optimizer in most DL libraries is a reasonable start to get decent results. From there, go down or up by one order of magnitude. We will discuss the learning rate in detail in chapter 4. 2.3 Activation functions When you are building your neural network, one of the design decisions that you will need to make is what activation function to use for your neurons’ calculations. Activa- tion functions are also referred to as transfer functions or nonlinearities because they transform the linear combination of a weighted sum into a nonlinear model. An acti- vation function is placed at the end of each perceptron to decide whether to activate this neuron. More on hyperparameters Other hyperparameters that we have not discussed yet include dropout and regular- ization. We will discuss hyperparameter tuning in detail in chapter 4, after we cover convolutional neural networks in chapter 3. In general, the best way to tune hyperparameters is by trial and error. By getting your hands dirty with your own projects as well as learning from other existing neural net- work architectures, you will start to develop intuition about good starting points for your hyperparameters. Learn to analyze your network’s performance and understand which hyperparameter you need to tune for each symptom. And this is what we are going to do in this book. By understanding the reasoning behind these hyperparameters and observing the network performance in the projects at the end of the chapters, you will develop a feel for which hyperparameter to tune for a particular effect. For example, if you see that your error value is not decreasing and keeps oscillating, then you might fix that by reducing the learning rate. Or, if you see that the network is performing poorly in learning the training data, this might mean that the network is underfitting and you need to build a more complex model by adding more neurons and hidden layers.
52 CHAPTER 2Deep learning and neural networks Why use activation functions at all? Why not just calculate the weighted sum of our network and propagate that through the hidden layers to produce an output? The purpose of the activation function is to introduce nonlinearity into the net- work. Without it, a multilayer perceptron will perform similarly to a single perceptron no matter how many layers we add. Activation functions are needed to restrict the out- put value to a certain finite value. Let’s revisit the example of predicting whether a player gets accepted (figure 2.11). First, the model calculates the weighted sum and produces the linear function ( z): z = height · w1 + age · w2 + b The output of this function has no bound. z could literally be any number. We use an activation function to wrap the prediction values to a finite value. In this example, we use a step function where if z > 0, then above the line (accepted) and if z < 0, then below the line (rejected). So without the activation function, we just have a linear function that produces a number, but no decision is made in this perceptron. The activation function is what decides whether to fire this perceptron. There are infinite activation functions. In fact, the last few years have seen a lot of progress in the creation of state-of-the-art activations. However, there are still relatively few activations that account for the vast majority of activation needs. Let’s dive deeper into some of the most common types of activation functions.x 210cm 200 140150160170180190 Height 130 120 10 11 12 13 14 15 16 17 18 19 Ageb Figure 2.11 This example revisits the prediction of whether a player gets accepted from section 2.1.
2.3.1 Linear transfer function
A linear transfer function, also called an identity function, indicates that the function passes a signal through unchanged. In practical terms, the output will be equal to the input, which means we don't actually have an activation function. So no matter how many layers our neural network has, all it is doing is computing a linear activation function or, at most, scaling the weighted average coming in. But it doesn't transform the input into a nonlinear function.

activation(z) = z = wx + b

The composition of two linear functions is a linear function, so unless you throw a nonlinear activation function in your neural network, you are not computing any interesting functions no matter how deep you make your network. No learning here!

To understand why, let's calculate the derivative of the activation z(x) = w · x + b, where w = 4 and b = 0. When we plot this function, it looks like figure 2.12. Then the derivative of z(x) = 4x is z'(x) = 4 (figure 2.13). The derivative of a linear function is constant: it does not depend on the input value x. This means that every time we do a backpropagation, the gradient will be the same. And this is a big problem: we are not really improving the error, since the gradient is pretty much the same. This will be clearer when we discuss backpropagation later in this chapter.

Figure 2.12 The plot for the activation function f(x) = 4x
Figure 2.13 The plot for the derivative of z(x) = 4x is z'(x) = 4.

2.3.2 Heaviside step function (binary classifier)
The step function produces a binary output. It basically says that if the input x > 0, it fires (output y = 1); else (input ≤ 0), it doesn't fire (output y = 0). It is mainly used in binary classification problems like true or false, spam or not spam, and pass or fail (figure 2.14).

Figure 2.14 Step functions are commonly used in binary classification problems because they transform the input into 0 or 1: the output is 0 if w · x + b ≤ 0 and 1 if w · x + b > 0.
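The book lists Python implementations for sigmoid, ReLU, and leaky ReLU later in this section; for completeness, an equivalent sketch of the step function might look like this (the function name and code are mine, not from the text):

def step(z):
    # Heaviside step activation: fire (return 1) only when the
    # weighted sum z = w*x + b is greater than zero
    return 1 if z > 0 else 0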
2.3.3 Sigmoid/logistic function
This is one of the most common activation functions. It is often used in binary classifiers to predict the probability of a class when you have two classes. The sigmoid squishes all the values to a probability between 0 and 1, which reduces extreme values or outliers in the data without removing them. Sigmoid or logistic functions convert infinite continuous variables (any value in the range –∞ to +∞) into simple probabilities between 0 and 1. It is also called the S-shape curve because when plotted in a graph, it produces an S-shaped curve. While the step function is used to produce a discrete answer (pass or fail), sigmoid is used to produce the probability of passing and the probability of failing (figure 2.15):

σ(z) = 1 / (1 + e^(–z))

Here is how sigmoid is implemented in Python:

import numpy as np                      # imports numpy

def sigmoid(x):                         # sigmoid activation function
    return 1 / (1 + np.exp(-x))

Figure 2.15 While the step function is used to produce a discrete answer (pass or fail), sigmoid is used to produce the probability of passing or failing.
Just-in-time linear algebra (optional)
Let's take a deeper dive into the math side of the sigmoid function to understand the problem it helps solve and how the sigmoid equation is derived. Suppose that we are trying to predict whether patients have diabetes based on only one feature: their age. When we plot the data we have about our patients, we get the linear model shown in the figure:

z = β0 + β1 · age

(Figure: the linear model we get when we plot our data about our patients, with the predicted probability p on the vertical axis and age on the horizontal axis.)

In this plot, you can observe the balance of probabilities that should go from 0 to 1. Note that when patients are below the age of 25, the predicted probabilities are negative; meanwhile, they are higher than 1 (100%) when patients are older than 43 years old. This is a clear example of why linear functions do not work in most cases. Now, how do we fix this to give us probabilities within the range 0 < probability < 1?

First, we need to do something to eliminate all the negative probability values. The exponential function is a great solution for this problem because the exponent of anything (and I mean anything) is always going to be positive. So let's apply that to our linear equation to calculate the probability (p):

p = exp(z) = exp(β0 + β1 · age)

This equation ensures that we always get probabilities greater than 0. Now, what about the values that are higher than 1? We need to do something about them. With proportions, any given number divided by a number that is greater than it will give us a number smaller than 1. Let's do exactly that to the previous equation. We divide the equation by its value plus a small value: either 1 or a (in some cases very small) value—let's call it epsilon (ε):

p = exp(z) / (exp(z) + ε)

If you take ε = 1 and divide the numerator and denominator by exp(z), you get

p = 1 / (1 + exp(–z))

When we plot the probability of this equation, we get the S shape of the sigmoid function, where probability is no longer below 0 or above 1. In fact, as patients' ages grow, the probability asymptotically gets closer to 1; and as the weights move down, the function asymptotically gets closer to 0 but is never outside the 0 < p < 1 range. This is the plot of the sigmoid function and logistic regression.

(Figure: as patients get older, the probability asymptotically gets closer to 1. This is the plot of the sigmoid function and logistic regression.)

2.3.4 Softmax function
The softmax function is a generalization of the sigmoid function. It is used to obtain classification probabilities when we have more than two classes. It forces the outputs of a neural network to sum to 1 (for example, 0 < output < 1). A very common use case in deep learning problems is to predict a single class out of many options (more than two). The softmax equation is as follows:

σ(x_j) = e^(x_j) / Σ_i e^(x_i)

Figure 2.16 shows an example of the softmax function.

Figure 2.16 The softmax function transforms the input values to probability values between 0 and 1. For example, the inputs 1.2, 0.9, and 0.4 map to the probabilities 0.46, 0.34, and 0.20.
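Although the book does not list code for softmax at this point, a minimal NumPy sketch in the same style as the sigmoid snippet above could look like the following; the printed values reproduce the example in figure 2.16:

import numpy as np

def softmax(x):
    # Subtracting the max is a common numerical-stability trick;
    # it does not change the result.
    e = np.exp(x - np.max(x))
    return e / e.sum()

print(np.round(softmax([1.2, 0.9, 0.4]), 2))   # [0.46 0.34 0.2]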
TIP Softmax is the go-to function that you will often use at the output layer of a classifier when you are working on a problem where you need to predict a class from among more than two classes. Softmax works fine if you are classifying two classes as well; it will basically work like a sigmoid function. By the end of this section, I'll tell you my recommendations about when to use each activation function.

2.3.5 Hyperbolic tangent function (tanh)
The hyperbolic tangent function is a shifted version of the sigmoid function. Instead of squeezing the signal values between 0 and 1, tanh squishes all values to the range –1 to 1. Tanh almost always works better than the sigmoid function in hidden layers because it has the effect of centering your data so that the mean of the data is close to zero rather than 0.5, which makes learning for the next layer a little bit easier:

tanh(x) = sinh(x) / cosh(x) = (e^x – e^(–x)) / (e^x + e^(–x))

One of the downsides of both sigmoid and tanh functions is that if (z) is very large or very small, then the gradient (or derivative, or slope) of the function becomes very small (close to zero), which will slow down gradient descent (figure 2.17). This is when the ReLU activation function (explained next) provides a solution.

Figure 2.17 If (z) is very large or very small, then the gradient (or derivative or slope) of this function becomes very small (close to zero).
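Following the same pattern as the sigmoid snippet earlier, a tanh implementation might look like the sketch below (this is my illustration; in practice you could simply call np.tanh, which computes the same thing):

import numpy as np

def tanh(x):
    # (e^x - e^-x) / (e^x + e^-x), the formula above
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))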
2.3.6 Rectified linear unit
The rectified linear unit (ReLU) activation function activates a node only if the input is above zero. If the input is below zero, the output is always zero. But when the input is higher than zero, it has a linear relationship with the output variable. The ReLU function is represented as follows:

f(x) = max(0, x)

At the time of writing, ReLU is considered the state-of-the-art activation function because it works well in many different situations, and it tends to train better than sigmoid and tanh in hidden layers (figure 2.18).

Figure 2.18 The ReLU function eliminates all negative values of the input by transforming them into zeros.

Here is how ReLU is implemented in Python:

def relu(x):          # ReLU activation function
    if x < 0:
        return 0
    else:
        return x

2.3.7 Leaky ReLU
One disadvantage of ReLU activation is that the derivative is equal to zero when (x) is negative. Leaky ReLU is a ReLU variation that tries to mitigate this issue. Instead of having the function be zero when x < 0, leaky ReLU introduces a small negative slope (around 0.01) when (x) is negative. It usually works better than the ReLU function, although it's not used as much in practice. Take a look at the leaky ReLU graph in figure 2.19; can you see the leak?

f(x) = max(0.01x, x)

Why 0.01? Some people like to use this as another hyperparameter to tune, but that would be overkill, since you already have other, bigger problems to worry about. Feel free to try different values (0.1, 0.01, 0.002) in your model and see how they work.

Figure 2.19 Instead of having the function be zero when x < 0, leaky ReLU introduces a small negative slope (around 0.01) when (x) is negative.

Here is how leaky ReLU is implemented in Python:

def leaky_relu(x):    # leaky ReLU activation function with a 0.01 leak
    if x < 0:
        return x * 0.01
    else:
        return x

Table 2.1 summarizes the various activation functions we've discussed in this section.

Table 2.1 A cheat sheet of the most common activation functions

Linear transfer function (identity function): The signal passes through it unchanged. It remains a linear function. Almost never used. Equation: f(x) = x

Heaviside step function (binary classifier): Produces a binary output of 0 or 1. Mainly used in binary classification to give a discrete value. Equation: output = 0 if w · x + b ≤ 0; 1 if w · x + b > 0

Sigmoid/logistic function: Squishes all the values to a probability between 0 and 1, which reduces extreme values or outliers in the data. Usually used to classify two classes. Equation: σ(z) = 1 / (1 + e^(–z))

Softmax function: A generalization of the sigmoid function. Used to obtain classification probabilities when we have more than two classes. Equation: σ(x_j) = e^(x_j) / Σ_i e^(x_i)

Hyperbolic tangent function (tanh): Squishes all values to the range –1 to 1. Tanh almost always works better than the sigmoid function in hidden layers. Equation: tanh(x) = sinh(x) / cosh(x) = (e^x – e^(–x)) / (e^x + e^(–x))

Rectified linear unit (ReLU): Activates a node only if the input is above zero. Always recommended for hidden layers. Better than tanh. Equation: f(x) = max(0, x)

Leaky ReLU: Instead of having the function be zero when x < 0, leaky ReLU introduces a small negative slope (around 0.01) when (x) is negative. Equation: f(x) = max(0.01x, x)
62 CHAPTER 2Deep learning and neural networks 2.4 The feedforward process Now that you understand how to stack perceptrons in layers, connect them with weights/edges, perform a weighted sum function, and apply activation functions, let’s implement the complete forward-pass calculations to produce a prediction output. The process of computing the linear combination and applying the activation func- tion is called feedforward . We briefly discussed feedforward several times in the previ- ous sections; let’s take a deeper look at what happens in this process. The term feedforward is used to imply the forward direction in which the informa- tion flows from the input layer through the hidden layers, all the way to the output layer. This process happens through the implementation of two consecutive functions: the weighted sum and the activation function. In short, the forward pass is the calcula- tions through the layers to make a prediction. Let’s take a look at the simple three-layer neural network in figure 2.20 and explore each of its components: Layers —This network consists of an input layer with three input features, and three hidden layers with 3, 4, 1 neurons in each layer.Hyperparameter alert Due to the number of activation functions, it may appear to be an overwhelming task to select the appropriate activation function for your network. While it is important to select a good activation function, I promise this is not going to be a challenging task when you design your network. There are some rules of thumb that you can start with, and then you can tune the model as needed. If you are not sure what to use, here are my two cents about choosing an activation function: For hidden layers —In most cases, you can use the ReLU activation function (or leaky ReLU) in hidden layers, as you will see in the projects that we will build throughout this book. It is increasingly becoming the default choice because it is a bit faster to compute than other activation functions. More importantly, it reduces the likelihood of the gradient vanishing because it does not saturate for large input values—as opposed to the sigmoid and tanh acti- vation functions, which saturate at ~ 1. Remember, the gradient is the slope. When the function plateaus, this will lead to no slope; hence, the gradient starts to vanish. This makes it harder to descend to the minimum error (we will talk more about this phenomenon, called vanishing/exploding gradients , in later chapters). For the output layer —The softmax activation function is generally a good choice for most classification problems when the classes are mutually exclu- sive. The sigmoid function serves the same purpose when you are doing binary classification. For regression problems, you can simply use no activa- tion function at all, since the weighted sum node produces the continuous output that you need: for example, if you want to predict house pricing based on the prices of other houses in the same neighborhood.
Weights and biases (w, b) —The edges between nodes are assigned random weights denoted as Wab(n), where (n) indicates the layer number and (ab) indicates the weighted edge connecting the ath neuron in layer (n) to the bth neuron in the previous layer (n – 1). For example, W23(2) is the weight that connects the second node in layer 2 to the third node in layer 1 (a22 to a13). (Note that you may see different notations for Wab(n) in other DL literature, which is fine as long as you follow one convention for your entire network.) The biases are treated similarly to weights because they are randomly initialized, and their values are learned during the training process. So, for convenience, from this point forward we are going to represent the biases with the same notation that we gave for the weights (w). In DL literature, you will mostly find all weights and biases represented as (w) for simplicity.

Activation functions σ(x) —In this example, we are using the sigmoid function σ(x) as the activation function.

Node values (a) —We will calculate the weighted sum, apply the activation function, and assign this value to the node amn, where n is the layer number and m is the node index in the layer. For example, a23 means node number 2 in layer 3.

Figure 2.20 A simple three-layer neural network: three input features, a first hidden layer of three neurons, a second hidden layer of four neurons, and an output layer of one neuron, with a weight Wab(n) on every edge.
2.4.1 Feedforward calculations
We have all we need to start the feedforward calculations:

a1(1) = σ(w11(1)x1 + w21(1)x2 + w31(1)x3)
a2(1) = σ(w12(1)x1 + w22(1)x2 + w32(1)x3)
a3(1) = σ(w13(1)x1 + w23(1)x2 + w33(1)x3)

Then we do the same calculations for layer 2 (a1(2), a2(2), a3(2), and a4(2)), all the way to the output prediction in layer 3:

yˆ = a1(3) = σ(w11(3)a1(2) + w12(3)a2(2) + w13(3)a3(2) + w14(3)a4(2))

And there you have it! You just calculated the feedforward pass of this three-layer neural network. Let's take a moment to reflect on what we just did. Take a look at how many equations we need to solve for such a small network. What happens when we have a more complex problem with hundreds of nodes in the input layer and hundreds more in the hidden layers? It is more efficient to use matrices to pass through multiple inputs at once. Doing this allows for big computational speedups, especially when using tools like NumPy, where we can implement this with one line of code.

Let's see how the matrices computation looks (figure 2.21). All we did here is simply stack the inputs and weights in matrices and multiply them together. The intuitive way to read this equation is from the right to the left. Start at the far right and follow with me: We stack all the inputs together in one vector (row, column), in this case (3, 1). We multiply the input vector by the weights matrix from layer 1 (W(1)) and then apply the sigmoid function. We multiply that result by the weights matrix for layer 2 ⇒ σ · W(2), and then by the weights matrix for layer 3 ⇒ σ · W(3). If we had a fourth layer, we would multiply the result from the previous step by σ · W(4), and so on, until we get the final prediction output yˆ.

Here is a simplified representation of this matrices formula:

yˆ = σ · W(3) · σ · W(2) · σ · W(1) · (x)

Figure 2.21 Reading from left to right, we stack the inputs together in one vector, multiply the input vector by the weights matrix from layer 1, apply the sigmoid function, and multiply the result.
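To make the matrix form concrete, here is a minimal NumPy sketch of the three-layer forward pass above. The weight values are random and the biases are omitted for brevity; this is my illustration, not code from the book.

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Input vector: 3 features, shape (3, 1)
x = np.array([[0.5], [0.1], [0.9]])

# Randomly initialized weight matrices:
# W1 maps 3 inputs -> 3 neurons, W2 maps 3 -> 4, W3 maps 4 -> 1
rng = np.random.default_rng(42)
W1 = rng.normal(size=(3, 3))
W2 = rng.normal(size=(4, 3))
W3 = rng.normal(size=(1, 4))

# y_hat = sigmoid(W3 . sigmoid(W2 . sigmoid(W1 . x)))
a1 = sigmoid(W1 @ x)       # layer 1 activations, shape (3, 1)
a2 = sigmoid(W2 @ a1)      # layer 2 activations, shape (4, 1)
y_hat = sigmoid(W3 @ a2)   # output prediction, shape (1, 1)

print(y_hat)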
2.4.2 Feature learning
The nodes in the hidden layers (ai) are the new features that are learned after each layer. For example, if you look at figure 2.20, you see that we have three feature inputs (x1, x2, and x3). After computing the forward pass in the first layer, the network learns patterns, and these features are transformed into three new features with different values (a1(1), a2(1), a3(1)). Then, in the next layer, the network learns patterns within the patterns and produces new features (a1(2), a2(2), a3(2), and a4(2)), and so forth. The produced features after each layer are not totally understood, and we don't see them, nor do we have much control over them. It is part of the neural network magic. That's why they are called hidden layers. What we do is this: we look at the final output prediction and keep tuning some parameters until we are satisfied by the network's performance.

To reiterate, let's see this in a small example. In figure 2.22, you see a small neural network to estimate the price of a house based on three features: how many bedrooms it has, how big it is, and which neighborhood it is in. You can see that the original input feature values 3, 2000, and 1 were transformed into new feature values after performing the feedforward process in the first layer (a1, a2, a3, a4). Then they were transformed again to a prediction output value (yˆ). When training a neural network, we see the prediction output and compare it with the true price to calculate the error and repeat the process until we get the minimum error.

To help visualize the feature-learning process, let's take another look at figure 2.9 (repeated here in figure 2.23) from the TensorFlow playground. You can see that the first layer learns basic features like lines and edges. The second layer begins to learn more complex features like corners. The process continues until the last layers of the network learn even more complex feature shapes like circles and spirals that fit the dataset.
66 CHAPTER 2Deep learning and neural networks That is how a neural network learns new features: via the network’s hidden layers. First, they recognize patterns in the data. Then, they recognize patterns within patterns; then patterns within patterns within patterns, and so on. The deeper the network is, the more it learns about the training data.Bedrooms Square feet Neighborhood (mapped to an ID number) WeightsInput features Hidden layerOutput prediction ( ) ŷ Weights3 2,000 1Final price estimate New feature a4New feature a1 New feature a2 New feature a3 Figure 2.22 A small neural network to estimate the price of a house based on three features: how many bedrooms it has, how big it is, and which neighborhood it is in x16 neurons 6 neurons 6 neuronsSix hidden layers Input features6 neurons 6 neurons 2 neuronsOutput x2 These are the new features that are learned after each layer. Figure 2.23 Learning features in multiple hidden layers
2.5 Error functions
So far, you have learned how to implement the forward pass in neural networks to produce a prediction that consists of the weighted sum plus activation operations. Now, how do we evaluate the prediction that the network just produced? More importantly, how do we know how far this prediction is from the correct answer (the label)? The answer is this: measure the error. The selection of an error function is another important aspect of the design of a neural network. Error functions can also be referred to as cost functions or loss functions, and these terms are used interchangeably in DL literature.

Vectors and matrices refresher
If you understood the matrix calculations we just did in the feedforward discussion, feel free to skip this sidebar. If you are still not convinced, hang tight: this sidebar is for you. The feedforward calculations are a set of matrix multiplications. While you will not do these calculations by hand, because there are a lot of great DL libraries that do them for you with just one line of code, it is valuable to understand the mathematics that happens under the hood so you can debug your network. Especially because this is very trivial and interesting, let's quickly review matrix calculations.

Let's start with some basic definitions of matrix dimensions: a scalar is a single number, a vector is an array of numbers, a matrix is a 2D array, and a tensor is an n-dimensional array with n > 2.

We will follow the conventions used in most mathematical literature: scalars are written in lowercase and italics (for instance, n); vectors are written in lowercase, italics, and bold type (for instance, x); matrices are written in uppercase, italics, and bold (for instance, X); and matrix dimensions are written as (row × column).

Multiplication:
Scalar multiplication —Simply multiply the scalar number by all the numbers in the matrix. Note that scalar multiplication doesn't change the matrix dimensions. For example (matrices are written here row by row, with rows separated by semicolons),

2 · [10 6; 4 3] = [2 · 10  2 · 6; 2 · 4  2 · 3] = [20 12; 8 6]

Matrix multiplication —When multiplying two matrices, such as in the case of (row 1 × column 1) × (row 2 × column 2), column 1 and row 2 must be equal to each other, and the product will have the dimensions (row 1 × column 2). For example,

[3 4 2] (1 × 3) · [13 9 7; 8 7 4; 6 4 0] (3 × 3) = [x y z] (1 × 3)

where x = 3 · 13 + 4 · 8 + 2 · 6 = 83, and the same for y = 63 and z = 37.

Now that you know the matrix multiplication rules, pull out a piece of paper and work through the dimensions of the matrices in the earlier neural network example, the matrix equation yˆ = σ · W(3) · σ · W(2) · σ · W(1) · (x) from figure 2.21.

The last thing I want you to understand about matrices is transposition. With transposition, you can convert a row vector to a column vector and vice versa, where the shape (m × n) is inverted and becomes (n × m). The superscript (A^T) is used for transposed matrices. For example, the row vector A = [2 8] becomes the column vector A^T = [2; 8], and the matrix A = [1 4 7; 2 5 8; 3 6 9] becomes A^T = [1 2 3; 4 5 6; 7 8 9]: every row of A becomes a column of A^T.
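If you would rather check these rules in code than on paper, here is a small NumPy sketch (mine, not from the book) that reproduces the numeric examples from the sidebar above:

import numpy as np

# Scalar multiplication: the shape does not change
A = np.array([[10, 6],
              [4, 3]])
print(2 * A)              # [[20 12]
                          #  [ 8  6]]

# Matrix multiplication: (1 x 3) @ (3 x 3) -> (1 x 3)
v = np.array([[3, 4, 2]])
M = np.array([[13, 9, 7],
              [8, 7, 4],
              [6, 4, 0]])
print(v @ M)              # [[83 63 37]]

# Transposition: a (1 x 2) row vector becomes a (2 x 1) column vector
x = np.array([[2, 8]])
print(x.T)                # [[2]
                          #  [8]]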
69 Error functions 2.5.1 What is the error function? The error function is a measure of how “wrong” the neural network prediction is with respect to the expected output (the label). It quantifies how far we are from the cor- rect solution. For example, if we have a high loss, then our model is not doing a good job. The smaller the loss, the better the job the model is doing. The larger the loss, the more our model needs to be trained to increase its accuracy. 2.5.2 Why do we need an error function? Calculating error is an optimization problem, something all machine learning engi- neers love (mathematicians, too). Optimization problems focus on defining an error function and trying to optimize its parameters to get the minimum error (more on optimization in the next section). But for now, know that, in general, when we are working on an optimization problem, if we are able to define the error function for the problem, we have a very good shot at solving it by running optimization algo- rithms to minimize the error function. In optimization problems, our ultimate goal is to find the optimum variables (weights) that would minimize the error function as much as we can. If we don’t know how far from the target we are, how will we know what to change in the next iteration? The process of minimizing this error is called error function optimization . We will review several optimization methods in the next section. But for now, all we need to know from the error function is how far we are from the correct prediction, or how much we missed the desired degree of performance. 2.5.3 Error is always positive Consider this scenario: suppose we have two data points that we are trying to get our network to predict correctly. If the first gives an error of 10 and the second gives an error of –10, then our average error is zero! This is misleading because “error = 0” means our network is producing perfect predictions, when, in fact, it missed by 10 twice. We don’t want that. We want the error of each prediction to be positive, so the errors don’t cancel each other when we take the average error. Think of an archer aiming at a target and missing by 1 inch. We are not really concerned about which direction they missed; all we need to know is how far each shot is from the target. A visualization of loss functions of two separate models plotted over time is shown in figure 2.24. You can see that model #1 is doing a better job of minimizing error, whereas model #2 starts off better until epoch 6 and then plateaus. Different loss functions will give different errors for the same prediction, and thus have a considerable effect on the performance of the model. A thorough discussion of loss functions is outside the scope of this book. Instead, we will focus on the two most commonly used loss functions: mean squared error (and its variations), usually used for regression problems, and cross-entropy, used for classification problems.
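Before moving on to those two loss functions, a three-line check (not from the book) makes the point from section 2.5.3 concrete: signed errors can cancel out and hide how badly the model is doing, while absolute or squared errors cannot.

import numpy as np

errors = np.array([10.0, -10.0])   # the two misses from the example above
print(errors.mean())               # 0.0   -> misleadingly looks perfect
print(np.abs(errors).mean())       # 10.0  -> mean absolute error
print((errors ** 2).mean())        # 100.0 -> mean squared error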
Figure 2.24 A visualization of the loss functions of two separate models plotted over time

2.5.4 Mean square error
Mean squared error (MSE) is commonly used in regression problems that require the output to be a real value (like house pricing). Instead of just comparing the prediction output with the label (yˆi – yi), the error is squared and averaged over the number of data points, as you see in this equation:

E(W, b) = (1/N) Σ_{i=1..N} (yˆi – yi)^2

MSE is a good choice for a few reasons. The square ensures the error is always positive, and larger errors are penalized more than smaller errors. Also, it makes the math nice, which is always a plus. The notations in the formula are listed in table 2.2.

Table 2.2 Meanings of notation used in regression problems
E(W, b): The loss function. Also annotated as J(W, b) in other literature.
W: Weights matrix. In some literature, the weights are denoted by the theta sign (θ).
b: Biases vector.
N: Number of training examples.
yˆi: Prediction output. Also notated as h_{w,b}(X) in some DL literature.
yi: The correct output (the label).
(yˆi – yi): Usually called the residual.

MSE is quite sensitive to outliers, since it squares the error value. This might not be an issue for the specific problem that you are solving. In fact, this sensitivity to outliers might be beneficial in some cases. For example, if you are predicting a stock price, you would want to take outliers into account, and sensitivity to outliers would be a good thing. In other scenarios, you wouldn't want to build a model that is skewed by outliers, such as predicting a house price in a city. In that case, you are more interested in the median and less in the mean. A variation of MSE called mean absolute error (MAE) was developed just for this purpose. It averages the absolute error over the entire dataset without taking the square of the error:

E(W, b) = (1/N) Σ_{i=1..N} |yˆi – yi|
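As a quick sanity check of the two regression losses, here is a minimal NumPy sketch (the numbers are made up for illustration, not from the book):

import numpy as np

y_true = np.array([3.0, 2.5, 4.0])   # labels
y_pred = np.array([2.8, 3.0, 3.5])   # network predictions

mse = np.mean((y_pred - y_true) ** 2)
mae = np.mean(np.abs(y_pred - y_true))

print(mse)   # 0.18
print(mae)   # 0.4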
2.5.5 Cross-entropy
Cross-entropy is commonly used in classification problems because it quantifies the difference between two probability distributions. For example, suppose that for a specific training instance, we are trying to classify a dog image out of three possible classes (dogs, cats, fish). The true distribution for this training instance is as follows:

P(cat) = 0.0    P(dog) = 1.0    P(fish) = 0.0

We can interpret this "true" distribution to mean that the training instance has 0% probability of being a cat, 100% probability of being a dog, and 0% probability of being a fish. Now, suppose our machine learning algorithm predicts the following probability distribution:

P(cat) = 0.2    P(dog) = 0.3    P(fish) = 0.5

How close is the predicted distribution to the true distribution? That is what the cross-entropy loss function determines. We can use this formula:

E(W, b) = – Σ_{i=1..m} yi log(pi)

where (y) is the target probability, (p) is the predicted probability, and (m) is the number of classes. The sum is over the three classes: cat, dog, and fish. In this case, the loss is 1.2:

E = – (0.0 · log(0.2) + 1.0 · log(0.3) + 0.0 · log(0.5)) = 1.2

So that is how "wrong" or "far away" our prediction is from the true distribution. Let's do this one more time, just to show how the loss changes when the network makes better predictions. In the previous example, we showed the network an image of a dog, and it predicted that the image was 30% likely to be a dog, which was very far from the target prediction. In later iterations, the network learns some patterns and gets the predictions a little better, up to 50%:

P(cat) = 0.3    P(dog) = 0.5    P(fish) = 0.2

Then we calculate the loss again:

E = – (0.0 · log(0.3) + 1.0 · log(0.5) + 0.0 · log(0.2)) = 0.69

You see that when the network makes a better prediction (dog is up to 50% from 30%), the loss decreases from 1.2 to 0.69. In the ideal case, when the network predicts that the image is 100% likely to be a dog, the cross-entropy loss will be 0 (feel free to try the math). To calculate the cross-entropy error across all the training examples (n), we use this general formula:

E(W, b) = – Σ_{i=1..n} Σ_{j=1..m} yij log(pij)

NOTE It is important to note that you will not be doing these calculations by hand. Understanding how things work under the hood gives you better intuition when you are designing your neural network. In DL projects, we usually use libraries like TensorFlow, PyTorch, and Keras, where the error function is generally a parameter choice.
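The two loss values above are easy to reproduce in code. This small sketch (mine, not the book's) uses the natural logarithm, which is what yields 1.2 and 0.69:

import numpy as np

def cross_entropy(y_true, y_pred):
    # -sum(target * log(prediction)) over the classes
    return -np.sum(y_true * np.log(y_pred))

target = np.array([0.0, 1.0, 0.0])                        # cat, dog, fish

print(cross_entropy(target, np.array([0.2, 0.3, 0.5])))   # ~1.20
print(cross_entropy(target, np.array([0.3, 0.5, 0.2])))   # ~0.69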
2.5.6 A final note on errors and weights
As mentioned before, in order for the neural network to learn, it needs to minimize the error function as much as possible (0 is ideal). The lower the error, the higher the accuracy of the model in predicting values. How do we minimize the error? Let's look at a perceptron with a single input to understand the relationship between the weight and the error: a single input X, connected by a weight W to a neuron f(x) that produces the output Y.

Suppose the input x = 0.3, and its label (goal prediction) is y = 0.8. The prediction output (yˆ) of this perceptron is calculated as follows:

yˆ = w · x = w · 0.3

And the error, in its simplest form, is calculated by comparing the prediction yˆ and the label y:

error = |yˆ – y| = |(w · x) – y| = |w · 0.3 – 0.8|

If you look at this error function, you will notice that the input (x) and the goal prediction (y) are fixed values. They will never change for these specific data points. The only two variables that we can change in this equation are the error and the weight. Now, if we want to get to the minimum error, which variable can we play with? Correct: the weight! The weight acts as a knob that the network needs to adjust up and down until it gets the minimum error. This is how the network learns: by adjusting weight. When we plot the error function with respect to the weight, we get the graph shown in figure 2.25.

As mentioned before, we initialize the network with random weights. The weight lies somewhere on this curve, and our mission is to make it descend this curve to its optimal value with the minimum error. The process of finding the goal weights of the neural network happens by adjusting the weight values in an iterative process using an optimization algorithm.

Figure 2.25 The network learns by adjusting weight. When we plot the error function J(w) with respect to weight, we get this type of graph: starting from a random weight, we descend the slope toward the goal weight at the minimum.
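To see the knob effect numerically, here is a tiny sketch (not from the book) that evaluates |w · 0.3 – 0.8| for a few candidate weights; the error bottoms out near w ≈ 2.67, where w · 0.3 ≈ 0.8:

x, y = 0.3, 0.8

for w in [0.0, 1.0, 2.0, 2.67, 3.0, 4.0]:
    error = abs(w * x - y)              # the simple error from the text
    print(f"w = {w:4.2f}  ->  error = {error:.3f}")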
74 CHAPTER 2Deep learning and neural networks 2.6 Optimization algorithms Training a neural network involves showing the network many examples (a training dataset); the network makes predictions through feedforward calculations and com- pares them with the correct labels to calculate the error. Finally, the neural network needs to adjust the weights (on all edges) until it gets the minimum error value, which means maximum accuracy. Now, all we need to do is build algorithms that can find the optimum weights for us. 2.6.1 What is optimization? Ahh, optimization! A topic that is dear to my heart, and dear to every machine learn- ing engineer (mathematicians too). Optimization is a way of framing a problem to maximize or minimize some value. The best thing about computing an error function is that we turn the neural network into an optimization problem where our goal is to minimize the error . Suppose you want to optimize your commute from home to work. First, you need to define the metric that you are optimizing (the error function). Maybe you want to optimize the cost of the commute, or the time, or the distance. Then, based on that specific loss function, you work on minimizing its value by changing some parameters. Changing the parameters to minimize (or maximize) a value is called optimization . If you choose the loss function to be the cost, maybe you will choose a longer commute that will take two hours, or (hypothetically) you might walk for five hours to minimize the cost. On the other hand, if you want to optimize the time spent commuting, maybe you will spend $50 to take a cab that will decrease the commute time to 20 min- utes. Based on the loss function you defined, you can start changing your parameters to get the results you want. TIP In neural networks, optimizing the error function means updating the weights and biases until we find the optimal weights , or the best values for the weights to produce the minimum error. Let’s look at the space that we are trying to optimize: In a neural network of the simplest form, a perceptron with one input, we have only one weight. We can easily plot the error (that we are trying to minimize) with respect to this weight, represented by the 2D curve in figure 2.26 (repeated from earlier). But what if we have two weights? If we graph all the possible values of the two weights, we get a 3D plane of the error (figure 2.27). What about more than two weights? Your network will probably have hundreds or thousands of weights (because each edge in your network has its own weight value).X YW f(x)
75 Optimization algorithms Since we humans are only equipped to understand a maximum of 3 dimensions, it is impossible for us to visualize error graphs when we have 10 weights, not to mention hundreds or thousands of weight parameters. So, from this point on, we will study the error function using the 2D or 3D plane of the error. In order to optimize the model, our goal is to search this space to find the best weights that will achieve the lowest pos- sible error. Why do we need an optimization algorithm? Can’t we just brute-force through a lot of weight values until we get the minimum error? Suppose we used a brute-force approach where we just tried a lot of different possi- ble weights (say 1,000 values) and found the weight that produced the minimum error. Could that work? Well, theoretically, yes. This approach might work when we2 1.8 1.6 1.4 1.2 1 0.8 0.6 0.4 0.2Cost function: ( )Jw 0 0 –5 5 10 15 20 25 30 35 40 wSlopeStarting weight Goal weight Figure 2.26 The error function with respect to its weight for a single perceptron is a 2D curve. Error ww 1280 300 250 200 150 100 5050 0100150200300 250 060 40 20 Goal weight Figure 2.27 Graphing all possible values of two weights gives a 3D error plane.
76 CHAPTER 2Deep learning and neural networks have very few inputs and only one or two neurons in our network. Let me try to con- vince you that this approach wouldn’t scale. Let’s take a look at a scenario where we have a very simple neural network. Suppose we want to predict house prices based on only four features (inputs) and one hidden layer of five neurons (see figure 2.28). As you can see, we have 20 edges (weights) from the input to the hidden layer, plus 5 weights from the hidden layer to the output prediction, totaling 25 weight variables that need to be adjusted for optimum values. To brute-force our way through a simple neural network of this size, if we are trying 1,000 different values for each weight, then we will have a total of 1075 combinations: 1,000 × 1,000 × . . . × 1,000 = 1,00025 = 1075 combinations Let’s say we were able to get our hands on the fastest supercomputer in the world: Sun- way TaihuLight, which operates at a speed of 93 petaflops ⇒ 93 × 1015 floating-pointPrice Input layer Hidden layer Output layerArea (feet )2 Bedrooms Distance to city (miles) Agex x1 2 xx 43y Figure 2.28 If we want to predict house prices based on only four features (inputs) and one hidden layer of five neurons, we’ll have 20 edges (weights) from the input to the hidden layer, plus 5 weights from the hidden layer to the output prediction.
77 Optimization algorithms operations per second (FLOPs). In the best-case scenario, this supercomputer would need = 1.08 × 1058 seconds = 3.42 × 1050 years That is a huge number: it’s longer than the universe has existed. Who has that kind of time to wait for the network to train? Remember that this is a very simple neural net- work that usually takes a few minutes to train using smart optimization algorithms. In the real world, you will be building more complex networks that have thousands of inputs and tens of hidden layers, and you will be required to train them in a matter of hours (or days, or sometimes weeks). So we have to come up with a different approach to find the optimal weights. Hopefully I have convinced you that brute-forcing through the optimization pro- cess is not the answer. Now, let’s study the most popular optimization algorithm for neural networks: gradient descent. Gradient descent has several variations: batch gradi- ent descent (BGD), stochastic gradient descent (SGD), and mini-batch GD (MB-GD). 2.6.2 Batch gradient descent The general definition of a gradient (also known as a derivative ) is that it is the function that tells you the slope or rate of change of the line that is tangent to the curve at any given point. It is just a fancy term for the slope or steepness of the curve (figure 2.29). Gradient descent simply means updating the weights iteratively to descend the slope of the error curve until we get to the point with minimum error. Let’s take a look at the error function that we introduced earlier with respect to the weights. At the initial weight point, we calculate the derivative of the error function to get the slope (direc- tion) of the next step. We keep repeating this process to take steps down the curve until we reach the minimum error (figure 2.30).1075 93 1015×---------------------- a ef bcdSlope at point aSlope at point cSlope at point d Slope at point f Slope at point e Slope at point bFigure 2.29 A gradient is the function that describes the rate of change of the line that is tangent to a curve at any given point.
78 CHAPTER 2Deep learning and neural networks HOW DOES GRADIENT DESCENT WORK ? To visualize how gradient descent works, let’s plot the error function in a 3D graph (figure 2.31) and go through the process step by step. The random initial weight (starting weight) is at point A, and our goal is to descend this error mountain to the goal w1 and w2 weight values, which produce the minimum error value. The way we do that is by taking a series of steps down the curve until we get the minimum error. In order to descend the error mountain, we need to determine two things for each step: The step direction (gradient) The step size (learning rate)Cost WeightGradientInitial weight Incremental stepDerivative of cost Minimum cost Figure 2.30 Gradient descent takes incremental steps to descend the error function. Error WW 1280 300 250 200 150 100 5050 0100150200300 250 060 40 20Starting weight Goal weight 4 31 2 BA Figure 2.31 The random initial weight (starting weight) is at point A. We descend the error mountain to the w1 and w2 weight values that produce the minimum error value.
THE DIRECTION (GRADIENT)
Suppose you are standing on the top of the error mountain at point A. To get to the bottom, you need to determine the step direction that results in the deepest descent (has the steepest slope). And what is the slope, again? It is the derivative of the curve. So if you are standing on top of that mountain, you need to look at all the directions around you and find out which direction will result in the deepest descent (1, 2, 3, or 4, for example). Let's say it is direction 3; we choose that way. This brings us to point B, and we restart the process (calculate feedforward and error) and find the direction of deepest descent, and so forth, until we get to the bottom of the mountain. This process is called gradient descent. By taking the derivative of the error with respect to the weight (dE/dw), we get the direction that we should take. Now there's one thing left. The gradient only determines the direction. How large should the size of the step be? It could be a 1-foot step or a 100-foot jump. This is what we need to determine next.

THE STEP SIZE (LEARNING RATE α)
The learning rate is the size of each step the network takes when it descends the error mountain, and it is usually denoted by the Greek letter alpha (α). It is one of the most important hyperparameters that you tune when you train your neural network (more on that later). A larger learning rate means the network will learn faster (since it is descending the mountain with larger steps), and smaller steps mean slower learning. Well, this sounds simple enough. Let's use large learning rates and complete the neural network training in minutes instead of waiting for hours. Right? Not quite. Let's take a look at what could happen if we set a very large learning rate value.

In figure 2.32, you are starting at point A. When you take a large step in the direction of the arrow, instead of descending the error mountain, you end up at point B, on the other side. Then another large step takes you to C, and so forth. The error will keep oscillating and will never descend. We will talk more later about tuning the learning rate and how to determine if the error is oscillating. But for now, you need to know this: if you use a very small learning rate, the network will eventually descend the

Figure 2.32 Setting a very large learning rate causes the error to oscillate and never descend.
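Putting the direction and the step size together, here is a small sketch (not from the book) of gradient descent on the single-weight example from section 2.5.6. It uses the squared error E = (w · x – y)^2, so the gradient is dE/dw = 2x(w · x – y). With the learning rate shown, w gradually climbs toward its goal value of about 2.67; try a much larger learning rate, say 15.0, and you can watch the error oscillate and grow instead of shrinking, as described above.

x, y = 0.3, 0.8          # the single training point from section 2.5.6
w = 0.0                  # starting weight
learning_rate = 0.5      # the step size (alpha)

for step in range(50):
    y_hat = w * x                       # feedforward
    gradient = 2 * x * (y_hat - y)      # dE/dw for E = (w*x - y)^2
    w = w - learning_rate * gradient    # take one step down the slope
    if step % 10 == 0:
        print(f"step {step:2d}: w = {w:.3f}, error = {(w * x - y) ** 2:.4f}")

print(f"final: w = {w:.3f}")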
