Deep Learning With Python
Develop Deep Learning Models on Theano and TensorFlow Using Keras
Jason Brownlee
Deep Learning With Python. Copyright 2016 Jason Brownlee. All Rights Reserved. Edition: v1.7
Contents

Preface

Part I: Introduction
1 Welcome
  1.1 Deep Learning The Wrong Way
  1.2 Deep Learning With Python
  1.3 Book Organization
  1.4 Requirements For This Book
  1.5 Your Outcomes From Reading This Book
  1.6 What This Book is Not
  1.7 Summary

Part II: Background
2 Introduction to Theano
  2.1 What is Theano?
  2.2 How to Install Theano
  2.3 Simple Theano Example
  2.4 Extensions and Wrappers for Theano
  2.5 More Theano Resources
  2.6 Summary
3 Introduction to TensorFlow
  3.1 What is TensorFlow?
  3.2 How to Install TensorFlow
  3.3 Your First Examples in TensorFlow
  3.4 Simple TensorFlow Example
  3.5 More Deep Learning Models
  3.6 Summary
4 Introduction to Keras
  4.1 What is Keras?
  4.2 How to Install Keras
  4.3 Theano and TensorFlow Backends for Keras
  4.4 Build Deep Learning Models with Keras
  4.5 Summary
5 Project: Develop Large Models on GPUs Cheaply In the Cloud
  5.1 Project Overview
  5.2 Setup Your AWS Account
  5.3 Launch Your Server Instance
  5.4 Login, Configure and Run
  5.5 Build and Run Models on AWS
  5.6 Close Your EC2 Instance
  5.7 Tips and Tricks for Using Keras on AWS
  5.8 More Resources For Deep Learning on AWS
  5.9 Summary

Part III: Multilayer Perceptrons
6 Crash Course In Multilayer Perceptrons
  6.1 Crash Course Overview
  6.2 Multilayer Perceptrons
  6.3 Neurons
  6.4 Networks of Neurons
  6.5 Training Networks
  6.6 Summary
7 Develop Your First Neural Network With Keras
  7.1 Tutorial Overview
  7.2 Pima Indians Onset of Diabetes Dataset
  7.3 Load Data
  7.4 Define Model
  7.5 Compile Model
  7.6 Fit Model
  7.7 Evaluate Model
  7.8 Tie It All Together
  7.9 Summary
8 Evaluate The Performance of Deep Learning Models
  8.1 Empirically Evaluate Network Configurations
  8.2 Data Splitting
  8.3 Manual k-Fold Cross Validation
  8.4 Summary
9 Use Keras Models With Scikit-Learn For General Machine Learning
  9.1 Overview
  9.2 Evaluate Models with Cross Validation
  9.3 Grid Search Deep Learning Model Parameters
  9.4 Summary
10 Project: Multiclass Classification Of Flower Species
  10.1 Iris Flowers Classification Dataset
  10.2 Import Classes and Functions
  10.3 Initialize Random Number Generator
  10.4 Load The Dataset
  10.5 Encode The Output Variable
  10.6 Define The Neural Network Model
  10.7 Evaluate The Model with k-Fold Cross Validation
  10.8 Summary
11 Project: Binary Classification Of Sonar Returns
  11.1 Sonar Object Classification Dataset
  11.2 Baseline Neural Network Model Performance
  11.3 Improve Performance With Data Preparation
  11.4 Tuning Layers and Neurons in The Model
  11.5 Summary
12 Project: Regression Of Boston House Prices
  12.1 Boston House Price Dataset
  12.2 Develop a Baseline Neural Network Model
  12.3 Lift Performance By Standardizing The Dataset
  12.4 Tune The Neural Network Topology
  12.5 Summary

Part IV: Advanced Multilayer Perceptrons and Keras
13 Save Your Models For Later With Serialization
  13.1 Tutorial Overview
  13.2 Save Your Neural Network Model to JSON
  13.3 Save Your Neural Network Model to YAML
  13.4 Summary
14 Keep The Best Models During Training With Checkpointing
  14.1 Checkpointing Neural Network Models
  14.2 Checkpoint Neural Network Model Improvements
  14.3 Checkpoint Best Neural Network Model Only
  14.4 Loading a Saved Neural Network Model
  14.5 Summary
15 Understand Model Behavior During Training By Plotting History
  15.1 Access Model Training History in Keras
  15.2 Visualize Model Training History in Keras
  15.3 Summary
16 Reduce Overfitting With Dropout Regularization
  16.1 Dropout Regularization For Neural Networks
  16.2 Dropout Regularization in Keras
  16.3 Using Dropout on the Visible Layer
  16.4 Using Dropout on Hidden Layers
  16.5 Tips For Using Dropout
  16.6 Summary
17 Lift Performance With Learning Rate Schedules
  17.1 Learning Rate Schedule For Training Models
  17.2 Ionosphere Classification Dataset
  17.3 Time-Based Learning Rate Schedule
  17.4 Drop-Based Learning Rate Schedule
  17.5 Tips for Using Learning Rate Schedules
  17.6 Summary

Part V: Convolutional Neural Networks
18 Crash Course In Convolutional Neural Networks
  18.1 The Case for Convolutional Neural Networks
  18.2 Building Blocks of Convolutional Neural Networks
  18.3 Convolutional Layers
  18.4 Pooling Layers
  18.5 Fully Connected Layers
  18.6 Worked Example
  18.7 Convolutional Neural Networks Best Practices
  18.8 Summary
19 Project: Handwritten Digit Recognition
  19.1 Handwritten Digit Recognition Dataset
  19.2 Loading the MNIST dataset in Keras
  19.3 Baseline Model with Multilayer Perceptrons
  19.4 Simple Convolutional Neural Network for MNIST
  19.5 Larger Convolutional Neural Network for MNIST
  19.6 Summary
20 Improve Model Performance With Image Augmentation
  20.1 Keras Image Augmentation API
  20.2 Point of Comparison for Image Augmentation
  20.3 Feature Standardization
  20.4 ZCA Whitening
  20.5 Random Rotations
  20.6 Random Shifts
  20.7 Random Flips
  20.8 Saving Augmented Images to File
  20.9 Tips For Augmenting Image Data with Keras
  20.10 Summary
21 Project: Object Recognition in Photographs
  21.1 Photograph Object Recognition Dataset
  21.2 Loading The CIFAR-10 Dataset in Keras
  21.3 Simple CNN for CIFAR-10
  21.4 Larger CNN for CIFAR-10
  21.5 Extensions To Improve Model Performance
  21.6 Summary
22 Project: Predict Sentiment From Movie Reviews
  22.1 Movie Review Sentiment Classification Dataset
  22.2 Load the IMDB Dataset With Keras
  22.3 Word Embeddings
  22.4 Simple Multilayer Perceptron Model
  22.5 One-Dimensional Convolutional Neural Network
  22.6 Summary

Part VI: Recurrent Neural Networks
23 Crash Course In Recurrent Neural Networks
  23.1 Support For Sequences in Neural Networks
  23.2 Recurrent Neural Networks
  23.3 Long Short-Term Memory Networks
  23.4 Summary
24 Time Series Prediction with Multilayer Perceptrons
  24.1 Problem Description: Time Series Prediction
  24.2 Multilayer Perceptron Regression
  24.3 Multilayer Perceptron Using the Window Method
  24.4 Summary
25 Time Series Prediction with LSTM Recurrent Neural Networks
  25.1 LSTM Network For Regression
  25.2 LSTM For Regression Using the Window Method
  25.3 LSTM For Regression with Time Steps
  25.4 LSTM With Memory Between Batches
  25.5 Stacked LSTMs With Memory Between Batches
  25.6 Summary
26 Project: Sequence Classification of Movie Reviews
  26.1 Simple LSTM for Sequence Classification
  26.2 LSTM For Sequence Classification With Dropout
  26.3 LSTM and CNN For Sequence Classification
  26.4 Summary
27 Understanding Stateful LSTM Recurrent Neural Networks
  27.1 Problem Description: Learn the Alphabet
  27.2 LSTM for Learning One-Char to One-Char Mapping
  27.3 LSTM for a Feature Window to One-Char Mapping
  27.4 LSTM for a Time Step Window to One-Char Mapping
  27.5 LSTM State Maintained Between Samples Within A Batch
  27.6 Stateful LSTM for a One-Char to One-Char Mapping
  27.7 LSTM with Variable Length Input to One-Char Output
  27.8 Summary
28 Project: Text Generation With Alice in Wonderland
  28.1 Problem Description: Text Generation
  28.2 Develop a Small LSTM Recurrent Neural Network
  28.3 Generating Text with an LSTM Network
  28.4 Larger LSTM Recurrent Neural Network
  28.5 Extension Ideas to Improve the Model
  28.6 Summary

Part VII: Conclusions
29 How Far You Have Come
30 Getting More Help
  30.1 Artificial Neural Networks
  30.2 Deep Learning
  30.3 Python Machine Learning
  30.4 Keras Library
Preface

Deep learning is a fascinating field. Artificial neural networks have been around for a long time, but something special has happened in recent years. The mixture of new faster hardware, new techniques and highly optimized open source libraries allows very large networks to be created with frightening ease. This new wave of much larger and much deeper neural networks is also impressively skillful on a range of problems. I have watched over recent years as they tackle and handily become state-of-the-art across a range of difficult problem domains. Not least object recognition, speech recognition, sentiment classification, translation and more.

When a technique comes along that does so well on such a broad set of problems, you have to pay attention. The problem is where do you start with deep learning? I created this book because I thought that there was no gentle way for Python machine learning practitioners to quickly get started developing deep learning models.

In developing the lessons in this book, I chose the best of breed Python deep learning library called Keras that abstracted away all of the complexity, ruthlessly leaving you an API containing only what you need to know to efficiently develop and evaluate neural network models.

This is the guide that I wish I had when I started applying deep learning to machine learning problems. I hope that you find it useful on your own projects and have as much fun applying deep learning as I did in creating this book for you.

Jason Brownlee
Melbourne, Australia
2016
Part I
Introduction
Chapter 1
Welcome

Welcome to Deep Learning With Python. This book is your guide to deep learning in Python. You will discover the Keras Python library for deep learning and how to use it to develop and evaluate deep learning models. In this book you will discover the techniques, recipes and skills in deep learning that you can then bring to your own machine learning projects.

Deep learning does have a lot of fascinating math under the covers, but you do not need to know it to be able to pick it up as a tool and wield it on important projects and deliver real value. From the applied perspective, deep learning is quite a shallow field and a motivated developer can quickly pick it up and start making very real and impactful contributions. This is my goal for you and this book is your ticket to that outcome.

1.1 Deep Learning The Wrong Way

If you ask a deep learning practitioner how to get started with neural networks and deep learning, what do they say? They say things like:

- You must have a strong foundation in linear algebra.
- You must have a deep knowledge of traditional neural network techniques.
- You really must know about probability and statistics.
- You should really have a deep knowledge of machine learning.
- You probably need to be a PhD in computer science.
- You probably need 10 years of experience as a machine learning developer.

You can see that the "common sense" advice means that it is not until after you have completed years of study and experience that you are ready to actually start developing and evaluating machine learning models for your machine learning projects. I think this advice is dead wrong.
1.2 Deep Learning With Python

The approach taken with this book and with all of Machine Learning Mastery is to flip the traditional approach. If you are interested in deep learning, start by developing and evaluating deep learning models. Then if you discover you really like it or have a knack for it, later you can step deeper and deeper into the background and theory, as you need it in order to serve you in developing better and more valuable results. This book is your ticket to jumping in and making a ruckus with deep learning.

I have used many of the top deep learning platforms and libraries and I chose what I think is the best-of-breed platform for getting started and very quickly developing powerful and even state-of-the-art deep learning models: the Keras deep learning library for Python. Unlike R, Python is a fully featured programming language allowing you to use the same libraries and code for model development as you can use in production. Unlike Java, Python has the SciPy stack for scientific computing and scikit-learn, which is a professional grade machine learning library.

There are two top numerical platforms for developing deep learning models: Theano, developed by the University of Montreal, and TensorFlow, developed at Google. Both were developed for use in Python and both can be leveraged by the super simple to use Keras library. Keras wraps the numerical computing complexity of Theano and TensorFlow, providing a concise API that we will use to develop our own neural network and deep learning models.

You will develop your own and perhaps your first neural network and deep learning models while working through this book, and you will have the skills to bring this amazing new technology to your own projects. It is going to be a fun journey and I can't wait to start.

1.3 Book Organization

This book is broken down into three parts.

- Lessons, where you learn about specific features of neural network models and how to use specific aspects of the Keras API.
- Projects, where you will pull together multiple lessons into an end-to-end project and deliver a result, providing a template for your own projects.
- Recipes, where you can copy and paste the standalone code into your own project, including all of the code presented in this book.

1.3.1 Lessons and Projects

Lessons are discrete and are focused on one topic, designed for you to complete in one sitting. You can take as long as you need, from 20 minutes if you are racing through, to hours if you want to experiment with the code or ideas and improve upon the presented results. Your lessons are divided into five parts:

- Background.
- Multilayer Perceptrons.
- Advanced Multilayer Perceptrons and Keras.
- Convolutional Neural Networks.
- Recurrent Neural Networks.

1.3.2 Part 2: Background

In this part you will learn about the Theano, TensorFlow and Keras libraries that lay the foundation for your deep learning journey, and about how you can leverage very cheap Amazon Web Services computing in order to develop and evaluate your own large models in the cloud. This part of the book includes the following lessons:

- Introduction to the Theano Numerical Library.
- Introduction to the TensorFlow Numerical Library.
- Introduction to the Keras Deep Learning Library.

The lessons will introduce you to the important foundational libraries that you need to install and use on your workstation. This is taken one step further in a project that shows how you can cheaply harness GPU cloud computing to develop and evaluate very large deep learning models.

- Project: Develop Large Models on GPUs Cheaply In the Cloud.

At the end of this part you will be ready to start developing models in Keras on your workstation or in the cloud.

1.3.3 Part 3: Multilayer Perceptrons

In this part you will learn about feedforward neural networks that may be deep or not, and how to expertly develop your own networks and evaluate them efficiently using Keras. This part of the book includes the following lessons:

- Crash Course In Multilayer Perceptrons.
- Develop Your First Neural Network With Keras.
- Evaluate The Performance of Deep Learning Models.
- Use Keras Models With Scikit-Learn For General Machine Learning.

These important lessons are tied together with three foundation projects. These projects demonstrate how you can quickly and efficiently develop neural network models for tabular data and provide project templates that you can use on your own regression and classification machine learning problems. These projects include:

- Project: Multiclass Classification Problem.
- Project: Binary Classification Problem.
- Project: Regression Problem.

At the end of this part you will be ready to discover the finer points of deep learning using the Keras API.
1.3.4 Part 4: Advanced Multilayer Perceptrons

In this part you will learn about some of the finer points of the Keras library and API for practical machine learning projects, and some of the more important developments in applied neural networks that you need to know in order to deliver world class results. This part of the book includes the following lessons:

- Save Your Models For Later With Network Serialization.
- Keep The Best Models During Training With Checkpointing.
- Understand Model Behavior During Training By Plotting History.
- Reduce Overfitting With Dropout Regularization.
- Lift Performance With Learning Rate Schedules.

At the end of this part you will know how to confidently wield Keras on your own machine learning projects with a focus on the finer points of investigating model performance, persisting models for later use and gaining lifts in performance over baseline models.

1.3.5 Part 5: Convolutional Neural Networks

In this part you will receive a crash course in the dominant model for computer vision machine learning problems and some natural language problems, and in how you can best exploit the capabilities of the Keras API for your own projects. This part of the book includes the following lessons:

- Crash Course In Convolutional Neural Networks.
- Improve Model Performance With Image Augmentation.

The best way to learn about this impressive type of neural network model is to apply it. You will work through three larger projects and apply CNNs to image data for object recognition and text data for sentiment classification.

- Project: Handwritten Digit Recognition.
- Project: Object Recognition in Photographs.
- Project: Movie Review Sentiment Classification.

After completing the lessons and projects in this part you will have the skills, and the confidence of complete and working templates and recipes, to tackle your own deep learning projects using convolutional neural networks.
1.3.6 Part 6: Recurrent Neural Networks

In this part you will receive a crash course in the dominant model for data with a sequence or time component, and in how you can best exploit the capabilities of the Keras API for your own projects. This part of the book includes the following lessons:

- Crash Course In Recurrent Neural Networks.
- Multilayer Perceptron Models for Time Series Problems.
- LSTM Models for Time Series Problems.
- Understanding State in LSTM Models for Sequence Prediction.

The best way to learn about this complex type of neural network model is to apply it. You will work through two larger projects and apply RNNs to sequence classification and text generation.

- Project: Sequence Classification of Movie Reviews.
- Project: Text Generation With Alice in Wonderland.

After completing the lessons and projects in this part you will have the skills, and the confidence of complete and working templates and recipes, to tackle your own deep learning projects using recurrent neural networks.

1.3.7 Conclusions

The book concludes with some resources that you can use to learn more information about a specific topic or find help if you need it as you start to develop and evaluate your own deep learning models.

1.3.8 Recipes

Building up a catalog of code recipes is an important part of your deep learning journey. Each time you learn about a new technique or new problem type, you should write up a short code recipe that demonstrates it. This will give you a starting point to use on your next deep learning or machine learning project.

As part of this book you will receive a catalog of deep learning recipes. This includes recipes for all of the lessons presented in this book, as well as the complete code for all of the projects. You are strongly encouraged to add to and build upon this catalog of recipes as you expand your use and knowledge of deep learning in Python.

1.4 Requirements For This Book

1.4.1 Python and SciPy

You do not need to be a Python expert, but it would be helpful if you knew how to install and set up Python and SciPy. The lessons and projects assume that you have a Python and SciPy
environment available. This may be on your workstation or laptop, it may be in a VM or a Docker instance that you run, or it may be a server instance that you can configure in the cloud as taught in Part II of this book.

Technical Requirements: The technical requirements for the code and tutorials in this book are as follows:

- Python version 2 or 3 installed. This book was developed using Python version 2.7.11.
- SciPy and NumPy installed. This book was developed with SciPy version 0.17.0 and NumPy version 1.11.0.
- Matplotlib installed. This book was developed with Matplotlib version 1.5.1.
- Pandas installed. This book was developed with Pandas version 0.18.0.
- scikit-learn installed. This book was developed with scikit-learn 0.17.1.

You do not need to match the versions exactly, but if you are having problems running a specific code example, please ensure that you update to the same or higher version as the library specified. You will be guided as to how to install the deep learning libraries Theano, TensorFlow and Keras in Part II of the book.

1.4.2 Machine Learning

You do not need to be a machine learning expert, but it would be helpful if you knew how to navigate a small machine learning problem using scikit-learn. Basic concepts like cross validation and one hot encoding used in lessons and projects are described, but only briefly. There are resources to go into these topics in more detail at the end of the book, but some knowledge of these areas might make things easier for you.

1.4.3 Deep Learning

You do not need to know the math and theory of deep learning algorithms, but it would be helpful to have some basic idea of the field. You will get a crash course in neural network terminology and models, but we will not go into much detail. Again, there will be resources for more information at the end of the book, but it might be helpful if you can start with some idea about neural networks.

Note: All tutorials can be completed on standard workstation hardware with a CPU. A GPU is not required. Some tutorials later in the book can be sped up significantly by running on the GPU and a suggestion is provided to consider using GPU hardware at the beginning of those sections. You can access GPU hardware easily and cheaply in the cloud and a step-by-step procedure is taught on how to do this in Chapter 5.

1.5 Your Outcomes From Reading This Book

This book will lead you from being a developer who is interested in deep learning with Python to a developer who has the resources and capabilities to work through a new dataset end-to-end using Python and develop accurate deep learning models. Specifically, you will know:
- How to develop and evaluate neural network models end-to-end.
- How to use more advanced techniques required for developing state-of-the-art deep learning models.
- How to build larger models for image and text data.
- How to use advanced image augmentation techniques in order to lift model performance.
- How to get help with deep learning in Python.

From here you can start to dive into the specifics of the functions, techniques and algorithms used with the goal of learning how to use them better in order to deliver more accurate predictive models, more reliably in less time. There are a few ways you can read this book. You can dip into the lessons and projects as your need or interests motivate you. Alternatively, you can work through the book end-to-end and take advantage of how the lessons and projects build in complexity and range. I recommend the latter approach.

To get the very most from this book, I recommend taking each lesson and project and building upon them. Attempt to improve the results, apply the method to a similar but different problem, and so on. Write up what you tried or learned and share it on your blog, social media or send me an email at jason@MachineLearningMastery.com. This book is really what you make of it and by putting in a little extra, you can quickly become a true force in applied deep learning.

1.6 What This Book is Not

This book solves a specific problem of getting you, a developer, up to speed applying deep learning to your own machine learning projects in Python. As such, this book was not intended to be everything to everyone and it is very important to calibrate your expectations. Specifically:

- This is not a deep learning textbook. We will not be getting into the basic theory of artificial neural networks or deep learning algorithms. You are also expected to have some familiarity with machine learning basics, or be able to pick them up yourself.
- This is not an algorithm book. We will not be working through the details of how specific deep learning algorithms work. You are expected to have some basic knowledge of deep learning algorithms or how to pick up this knowledge yourself.
- This is not a Python programming book. We will not be spending a lot of time on Python syntax and programming (e.g. basic programming tasks in Python). You are expected to already be familiar with Python or be a developer who can pick up a new C-like language relatively quickly.

You can still get a lot out of this book if you are weak in one or two of these areas, but you may struggle picking up the language or require some more explanation of the techniques. If this is the case, see the Getting More Help chapter at the end of the book and seek out a good companion reference text.
1.7 Summary

It is a special time right now. The tools for applied deep learning have never been so good. The pace of change with neural networks and deep learning feels like it has never been so fast, spurred by the amazing results that the methods are showing in such a broad range of fields. This is the start of your journey into deep learning and I am excited for you. Take your time, have fun and I'm so excited to see where you can take this amazing new technology.

1.7.1 Next

Let's dive in. Next up is Part II where you will take a whirlwind tour of the foundation libraries for deep learning in Python, namely the numerical libraries Theano and TensorFlow and the library you will be using throughout this book called Keras.
Part II
Background
Chapter 2
Introduction to Theano

Theano is a Python library for fast numerical computation that can be run on the CPU or GPU. It is a key foundational library for deep learning in Python that you can use directly to create deep learning models. After completing this lesson, you will know:

- About the Theano library for Python.
- How a very simple symbolic expression can be defined, compiled and calculated.
- Where you can learn more about Theano.

Let's get started.

2.1 What is Theano?

Theano is an open source project released under the BSD license and was developed by the LISA (now MILA, http://mila.umontreal.ca/) group at the University of Montreal, Quebec, Canada (home of Yoshua Bengio). It is named after a Greek mathematician. At its heart, Theano is a compiler for mathematical expressions in Python. It knows how to take your structures and turn them into very efficient code that uses NumPy, efficient native libraries like BLAS and native code to run as fast as possible on CPUs or GPUs.

It uses a host of clever code optimizations to squeeze as much performance as possible from your hardware. If you are into the nitty-gritty of mathematical optimizations in code, check out this interesting list: http://deeplearning.net/software/theano/optimizations.html#optimizations. The actual syntax of Theano expressions is symbolic, which can be off-putting to beginners. Specifically, expressions are defined in the abstract sense, compiled and later actually used to make calculations.

Theano was specifically designed to handle the types of computation required for large neural network algorithms used in deep learning. It was one of the first libraries of its kind (development started in 2007) and is considered an industry standard for deep learning research and development.
2.2 How to Install Theano

Theano provides extensive installation instructions for the major operating systems: Windows, OS X and Linux. Read the Installing Theano guide for your platform at http://deeplearning.net/software/theano/install.html. Theano assumes a working Python 2 or Python 3 environment with SciPy. There are ways to make the installation easier, such as using Anaconda (https://www.continuum.io/downloads) to quickly set up Python and SciPy on your machine, as well as using Docker images. With a working Python and SciPy environment, it is relatively straightforward to install Theano using pip, for example:

sudo pip install Theano

Listing 2.1: Install Theano with pip.

New releases of Theano may be announced and you will want to update to get any bug fixes and efficiency improvements. You can upgrade Theano using pip as follows:

sudo pip install --upgrade --no-deps theano

Listing 2.2: Upgrade Theano with pip.

You may want to use the bleeding edge version of Theano checked directly out of GitHub. This may be required for some wrapper libraries that make use of bleeding edge API changes. You can install Theano directly from a GitHub checkout as follows:

sudo pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git

Listing 2.3: Upgrade Theano with pip from GitHub.

You are now ready to run Theano on your CPU, which is just fine for the development of small models. Large models may run slowly on the CPU. If you have an Nvidia GPU, you may want to look into configuring Theano to use your GPU. There is a wealth of documentation on the Theano homepage for further configuring the library. Theano v0.8.2 is the latest at the time of writing and is used in this book.

2.3 Simple Theano Example

In this section we demonstrate a simple Python script that gives you a flavor of Theano. In this example we define two symbolic floating point variables a and b. We define an expression that uses these variables (c = a + b). We then compile this symbolic expression into a function using Theano that we can use later. Finally, we use our compiled expression by plugging in some real values and performing the calculation using efficient compiled Theano code under the covers.

# Example of Theano library
import theano
from theano import tensor
# declare two symbolic floating-point scalars
a = tensor.dscalar()
b = tensor.dscalar()
# create a simple symbolic expression
c = a + b
# convert the expression into a callable object that takes (a,b) and computes c
f = theano.function([a, b], c)
# bind 1.5 to a, 2.5 to b, and evaluate c
result = f(1.5, 2.5)
print(result)

Listing 2.4: Example of Symbolic Arithmetic with Theano.

Running the example prints the output 4, which matches our expectation that 1.5 + 2.5 = 4.0. This is a useful example as it gives you a flavor for how a symbolic expression can be defined, compiled and used. Although we have only performed a basic introduction of adding two scalars, you can see how pre-defining computation to be compiled for efficiency may be scaled up to the large vector and matrix operations required for deep learning (a short sketch of this scaling appears at the end of this chapter, just before the summary).

2.4 Extensions and Wrappers for Theano

If you are new to deep learning you do not have to use Theano directly. In fact, you are highly encouraged to use one of many popular Python projects that make Theano a lot easier to use for deep learning. These projects provide data structures and behaviors in Python, specifically designed to quickly and reliably create deep learning models whilst ensuring that fast and efficient models are created and executed by Theano under the covers. The amount of Theano syntax exposed by the libraries varies.

Keras is a wrapper library that hides Theano completely and provides a very simple API to work with to create deep learning models. It hides Theano so well that it can in fact run as a wrapper for another popular foundation framework called TensorFlow (discussed next).

2.5 More Theano Resources

Looking for some more resources on Theano? Take a look at some of the following.

- Theano Official Homepage: http://deeplearning.net/software/theano/
- Theano GitHub Repository: https://github.com/Theano/Theano/
- Theano: A CPU and GPU Math Compiler in Python (2010): http://www.iro.umontreal.ca/~lisa/pointeurs/theano_scipy2010.pdf
- List of Libraries Built on Theano: https://github.com/Theano/Theano/wiki/Related-projects
- List of Theano configuration options: http://deeplearning.net/software/theano/library/config.html
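As promised in Section 2.3, here is a minimal sketch of the same define-compile-evaluate pattern applied to vectors rather than scalars. This example is my own illustration, not a listing from the book; the choice of a dot product and the input values are arbitrary assumptions made for demonstration only.

# A minimal sketch applying the define-compile-evaluate pattern to vectors:
# Theano compiles a symbolic dot product (illustrative example, not from the book).
import theano
from theano import tensor

# declare two symbolic double-precision vectors
v = tensor.dvector()
w = tensor.dvector()
# symbolic expression: the dot product of the two vectors
d = tensor.dot(v, w)
# compile the expression into a callable function
f = theano.function([v, w], d)
# evaluate with concrete values: 1*3 + 2*4 = 11.0
print(f([1.0, 2.0], [3.0, 4.0]))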
2.6 Summary

In this lesson you discovered the Theano Python library for efficient numerical computation. You learned:

- Theano is a foundation library used for deep learning research and development.
- Deep learning models can be developed directly in Theano if desired.
- The development and evaluation of deep learning models is easier with wrapper libraries like Keras.

2.6.1 Next

You now know about the Theano library for numerical computation in Python. In the next lesson you will discover the TensorFlow library released by Google that attempts to offer the same capabilities.
Chapter 3
Introduction to TensorFlow

TensorFlow is a Python library for fast numerical computing created and released by Google. It is a foundation library that can be used to create deep learning models directly, or by using wrapper libraries built on top of TensorFlow that simplify the process. After completing this lesson you will know:

- About the TensorFlow library for Python.
- How to define, compile and evaluate a simple symbolic expression in TensorFlow.
- Where to go to get more information on the library.

Let's get started.

Note: TensorFlow is not easily supported on Windows at the time of writing. It may be possible to get TensorFlow working on Windows with Docker. TensorFlow is not required to complete the rest of this book, and if you are on the Windows platform you can skip this lesson.

3.1 What is TensorFlow?

TensorFlow is an open source library for fast numerical computing. It was created and is maintained by Google and released under the Apache 2.0 open source license. The API is nominally for the Python programming language, although there is access to the underlying C++ API. Unlike other numerical libraries intended for use in deep learning like Theano, TensorFlow was designed for use both in research and development and in production systems, not least RankBrain in Google search (https://en.wikipedia.org/wiki/RankBrain) and the fun DeepDream project (https://en.wikipedia.org/wiki/DeepDream). It can run on single CPU systems and GPUs, as well as mobile devices and large scale distributed systems of hundreds of machines.

3.2 How to Install TensorFlow

Installation of TensorFlow is straightforward if you already have a Python SciPy environment. TensorFlow works with Python 2.7 and Python 3.3+. With a working Python and SciPy
environment, it is relatively straightforward to install TensorFlow using pip. There are a number of different distributions of TensorFlow, customized for different environments; to install TensorFlow you can follow the Download and Setup instructions on the TensorFlow website (https://www.tensorflow.org/versions/r0.9/get_started/os_setup.html). TensorFlow v0.10.0 is the latest at the time of writing and is used in this book.

3.3 Your First Examples in TensorFlow

Computation is described in terms of data flow and operations in the structure of a directed graph.

- Nodes: Nodes perform computation and have zero or more inputs and outputs. Data that moves between nodes are known as tensors, which are multi-dimensional arrays of real values.
- Edges: The graph defines the flow of data, branching, looping and updates to state. Special edges can be used to synchronize behavior within the graph, for example waiting for computation on a number of inputs to complete.
- Operation: An operation is a named abstract computation which can take input attributes and produce output attributes. For example, you could define an add or multiply operation.

3.4 Simple TensorFlow Example

In this section we demonstrate a simple Python script that gives you a flavor of TensorFlow. In this example we define two symbolic floating point variables a and b. We define an expression that uses these variables (c = a + b). This is the same example used in the previous chapter that introduced Theano. We then compile this symbolic expression into a function using TensorFlow that we can use later. Finally, we use our compiled expression by plugging in some real values and performing the calculation using efficient compiled TensorFlow code under the covers.

# Example of TensorFlow library
import tensorflow as tf
# declare two symbolic floating-point scalars
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
# create a simple symbolic expression using the add function
add = tf.add(a, b)
# bind 1.5 to a, 2.5 to b, and evaluate c
sess = tf.Session()
binding = {a: 1.5, b: 2.5}
c = sess.run(add, feed_dict=binding)
print(c)

Listing 3.1: Example of Symbolic Arithmetic with TensorFlow.
Running the example prints the output 4, which matches our expectation that 1.5 + 2.5 = 4.0. This is a useful example as it gives you a flavor for how a symbolic expression can be defined, compiled and used. Although we have only performed a basic introduction of adding two scalars, you can see how pre-defining computation to be compiled for efficiency may be scaled up to the large vector and matrix operations required for deep learning (a short matrix sketch appears at the end of this chapter, just before the summary).

3.5 More Deep Learning Models

Your TensorFlow installation comes with a number of deep learning models that you can use and experiment with directly. Firstly, you need to find out where TensorFlow was installed on your system. For example, you can use the following Python one-liner:

python -c 'import os; import inspect; import tensorflow; print(os.path.dirname(inspect.getfile(tensorflow)))'

Listing 3.2: Print Install Directory for TensorFlow.

Change to this directory and take note of the models/ subdirectory. Included are a number of deep learning models with tutorial-like comments, such as:

- Multi-threaded word2vec mini-batched skip-gram model.
- Multi-threaded word2vec unbatched skip-gram model.
- CNN for the CIFAR-10 network.
- Simple, end-to-end, LeNet-5-like convolutional MNIST model example.
- Sequence-to-sequence model with an attention mechanism.

Also check the examples directory as it contains an example using the MNIST dataset. There is also an excellent list of tutorials on the main TensorFlow website (https://www.tensorflow.org/versions/r0.9/tutorials/). They show how to use different network types and different datasets, and how to use the framework in various different ways. Finally, there is the TensorFlow playground (http://playground.tensorflow.org/) where you can experiment with small networks right in your web browser.
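To illustrate the scaling point from Section 3.4, here is a minimal sketch of the same placeholder-and-session pattern applied to matrices. This example is my own illustration, not a listing from the book; it is written against the old TensorFlow 0.x graph-and-session API used in this book, and the matrix values are arbitrary assumptions.

# A minimal sketch of the placeholder-and-session pattern on matrices,
# using the TensorFlow 0.x API (illustrative example, not from the book).
import tensorflow as tf

# declare two symbolic 2x2 matrices
A = tf.placeholder(tf.float32, shape=(2, 2))
B = tf.placeholder(tf.float32, shape=(2, 2))
# symbolic expression: the matrix product of A and B
C = tf.matmul(A, B)
# evaluate with concrete values
sess = tf.Session()
result = sess.run(C, feed_dict={A: [[1, 2], [3, 4]], B: [[5, 6], [7, 8]]})
print(result)  # [[19. 22.] [43. 50.]]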
3.6 Summary

In this lesson you discovered the TensorFlow Python library for deep learning. You learned:

- TensorFlow is another efficient numerical library like Theano.
- Like Theano, deep learning models can be developed directly in TensorFlow if desired.
- Also like Theano, TensorFlow may be better leveraged by a wrapper library that abstracts the complexity and lower level details.

3.6.1 Next

You now know about the Theano and TensorFlow libraries for efficient numerical computation in Python. In the next lesson you will discover the Keras library that wraps both libraries and gives you a clean and simple API for developing and evaluating deep learning models.
Chapter 4
Introduction to Keras

Two of the top numerical platforms in Python that provide the basis for deep learning research and development are Theano and TensorFlow. Both are very powerful libraries, but both can be difficult to use directly for creating deep learning models. In this lesson you will discover the Keras Python library that provides a clean and convenient way to create a range of deep learning models on top of Theano or TensorFlow. After completing this lesson you will know:

- About the Keras Python library for deep learning.
- How to configure Keras for Theano or TensorFlow.
- The standard idiom for creating models with Keras.

Let's get started.

4.1 What is Keras?

Keras is a minimalist Python library for deep learning that can run on top of Theano or TensorFlow. It was developed to make developing deep learning models as fast and easy as possible for research and development. It runs on Python 2.7 or 3.5 and can seamlessly execute on GPUs and CPUs given the underlying frameworks. It is released under the permissive MIT license. Keras was developed and is maintained by François Chollet, a Google engineer, using four guiding principles:

- Modularity: A model can be understood as a sequence or a graph alone. All the concerns of a deep learning model are discrete components that can be combined in arbitrary ways.
- Minimalism: The library provides just enough to achieve an outcome, no frills and maximizing readability.
- Extensibility: New components are intentionally easy to add and use within the framework, intended for developers to trial and explore new ideas.
- Python: No separate model files with custom file formats. Everything is native Python.
4.2 How to Install Keras

Keras is relatively straightforward to install if you already have a working Python and SciPy environment. You must also have an installation of Theano or TensorFlow on your system. Keras can be installed easily using pip, as follows:

sudo pip install keras

Listing 4.1: Install Keras With Pip.

You can check your version of Keras on the command line using the following script:

python -c "import keras; print(keras.__version__)"

Listing 4.2: Print Keras Version.

Running the above script you will see:

1.1.0

Listing 4.3: Output of Printing Keras Version.

You can upgrade your installation of Keras using the same method:

sudo pip install --upgrade keras

Listing 4.4: Upgrade Keras With Pip.

Keras v1.1.0 is the latest at the time of writing and is used in this book.
4.3 Theano and TensorFlow Backends for Keras

Keras is a lightweight API and rather than providing an implementation of the required mathematical operations needed for deep learning, it provides a consistent interface to efficient numerical libraries called backends. Assuming you have both Theano and TensorFlow installed, you can configure the backend used by Keras. The easiest way is by adding or editing the Keras configuration file in your home directory:

~/.keras/keras.json

Listing 4.5: Path to Keras Configuration File.

Which has the format:

{
  "image_dim_ordering": "tf",
  "epsilon": 1e-07,
  "floatx": "float32",
  "backend": "tensorflow"
}

Listing 4.6: Example Content of Keras Configuration File.

In this configuration file you can change the backend property from tensorflow (the default) to theano. Keras will then use the configuration the next time it is run. You can confirm the backend used by Keras using the following script on the command line:

python -c "from keras import backend; print(backend._BACKEND)"

Listing 4.7: Script to Print the Configured Keras Backend.

Running this with the default configuration you will see:

Using TensorFlow backend.
tensorflow

Listing 4.8: Sample Output of Script to Print the Configured Keras Backend.

You can also specify the backend to use by Keras on the command line by setting the KERAS_BACKEND environment variable, as follows:

KERAS_BACKEND=theano python -c "from keras import backend; print(backend._BACKEND)"

Listing 4.9: Example of Using the Environment Variable to Change the Keras Backend.

Running this example prints:

Using Theano backend.
theano

Listing 4.10: Sample Output of Using the Theano Backend.

4.4 Build Deep Learning Models with Keras

The focus of Keras is the idea of a model. The main type of model is a sequence of layers called a Sequential, which is a linear stack of layers. You create a Sequential and add layers to it in the order that you wish for the computation to be performed. Once defined, you compile the model, which makes use of the underlying framework to optimize the computation to be performed by your model. In this step you can specify the loss function and the optimizer to be used.

Once compiled, the model must be fit to data. This can be done one batch of data at a time or by firing off the entire model training regime. This is where all the compute happens. Once trained, you can use your model to make predictions on new data. We can summarize the construction of deep learning models in Keras as follows:

1. Define your model. Create a Sequential model and add configured layers.
2. Compile your model. Specify loss function and optimizers and call the compile() function on the model.
3. Fit your model. Train the model on a sample of data by calling the fit() function on the model.
4. Make predictions. Use the model to generate predictions on new data by calling functions such as evaluate() or predict() on the model.
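To make this idiom concrete, here is a minimal sketch that walks through the four steps on a small synthetic binary classification dataset. It is my own illustration, not a listing from the book: the layer sizes, the random data and the use of the Keras 1.x argument name nb_epoch are assumptions chosen only to demonstrate the pattern.

# A minimal sketch of the define/compile/fit/predict idiom, assuming
# the Keras 1.x API used in this book and synthetic illustrative data.
import numpy
from keras.models import Sequential
from keras.layers import Dense

# synthetic data: 100 samples, 8 input features, binary labels
X = numpy.random.rand(100, 8)
Y = numpy.random.randint(2, size=100)

# 1. Define your model: a linear stack of layers.
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# 2. Compile your model: specify the loss function and optimizer.
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# 3. Fit your model (nb_epoch is the Keras 1.x argument name).
model.fit(X, Y, nb_epoch=10, batch_size=10, verbose=0)

# 4. Make predictions and evaluate (here, on the same data for brevity).
print(model.predict(X[:5]))
print(model.evaluate(X, Y, verbose=0))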
4.5 Summary

In this lesson you discovered the Keras Python library for deep learning research and development. You learned:

- Keras wraps both the TensorFlow and Theano libraries, abstracting their capabilities and hiding their complexity.
- Keras is designed for minimalism and modularity, allowing you to very quickly define deep learning models.
- Keras deep learning models can be developed using an idiom of defining, compiling and fitting models that can then be evaluated or used to make predictions.

4.5.1 Next

You are now up to speed with the Python libraries for deep learning. In the next project you will discover step-by-step how you can develop and run very large deep learning models using these libraries in the cloud using GPU hardware, at a fraction of the cost of purchasing your own hardware.
Chapter 5
Project: Develop Large Models on GPUs Cheaply In the Cloud

Large deep learning models require a lot of compute time to run. You can run them on your CPU but it can take hours or days to get a result. If you have access to a GPU on your desktop, you can drastically speed up the training time of your deep learning models. In this project you will discover how you can get access to GPUs to speed up the training of your deep learning models by using the Amazon Web Services (AWS) infrastructure. For less than a dollar per hour, and often a lot cheaper, you can use this service from your workstation or laptop. After working through this project you will know:

- How to create an account and log in to Amazon Web Services.
- How to launch a server instance for deep learning.
- How to configure a server instance for faster deep learning on the GPU.

Let's get started.

5.1 Project Overview

The process is quite simple because most of the work has already been done for us. Below is an overview of the process.

- Setup Your AWS Account.
- Launch Your Server Instance.
- Login and Run Your Code.
- Close Your Server Instance.

Note: it costs money to use a virtual server instance on Amazon. The cost is low for ad hoc model development (e.g. less than one US dollar per hour), which is why this is so attractive, but it is not free. The server instance runs Linux. It is desirable although not required that you know how to navigate Linux or a Unix-like environment. We're just running our Python scripts, so no advanced skills are needed.
5.2 Setup Your AWS Account

You need an account on Amazon Web Services (https://aws.amazon.com).

1. You can create an account via the Amazon Web Services portal by clicking Sign in to the Console. From there you can sign in using an existing Amazon account or create a new account.

Figure 5.1: AWS Sign-in Button

2. You will need to provide your details as well as a valid credit card that Amazon can charge. The process is a lot quicker if you are already an Amazon customer and have your credit card on file.
Figure 5.2: AWS Sign-In Form

Once you have an account you can log into the Amazon Web Services console. You will see a range of different services that you can access.

5.3 Launch Your Server Instance

Now that you have an AWS account, you want to launch an EC2 virtual server instance on which you can run Keras. Launching an instance is as easy as selecting the image to load and starting the virtual server. Thankfully there is already an image available that has almost everything we need; it has the cryptic name ami-125b2c72 and was created for the Stanford CS231n class. Let's launch it as an instance.

1. Login to your AWS console (https://console.aws.amazon.com/console/home) if you have not already.
Figure 5.3: AWS Console

2. Click on EC2 for launching a new virtual server.

3. Select N. California from the drop-down in the top right hand corner. This is important, otherwise you will not be able to find the image we plan to use.

Figure 5.4: Select North California

4. Click the Launch Instance button.

5. Click Community AMIs. An AMI is an Amazon Machine Image. It is a frozen instance of a server that you can select and instantiate on a new virtual server.
Figure 5.5: Community AMIs

6. Enter ami-125b2c72 in the Search community AMIs search box and press enter. You should be presented with a single result.

Figure 5.6: Select a Specific AMI

7. Click Select to choose the AMI in the search result.

8. Now you need to select the hardware on which to run the image. Scroll down and select the g2.2xlarge hardware. This includes a GPU that we can use to significantly increase the training speed of our models.

Figure 5.7: Select g2.2xlarge Hardware

9. Click Review and Launch to finalize the configuration of your server instance.

10. Click the Launch button.

11. Select Your Key Pair. If you have a key pair because you have used EC2 before, select Choose an existing key pair and choose your key pair from the list. Then check I acknowledge.... If you do not have a key
pair, select the option Create a new key pair and enter a Key pair name such as keras-keypair. Click the Download Key Pair button.

Figure 5.8: Select Your Key Pair

12. Open a Terminal and change directory to where you downloaded your key pair.

13. If you have not already done so, restrict the access permissions on your key pair file. This is required as part of the SSH access to your server. For example, open a terminal on your workstation and type:

cd Downloads
chmod 600 keras-aws-keypair.pem

Listing 5.1: Change Permissions of Your Key Pair File.

14. Click Launch Instances. If this is your first time using AWS, Amazon may have to validate your request and this could take up to 2 hours (often just a few minutes).

15. Click View Instances to review the status of your instance.
Figure 5.9: Review Your Running Instance

Your server is now running and ready for you to log in.

5.4 Login, Configure and Run

Now that you have launched your server instance, it is time to log in and start using it.

1. Click View Instances in your Amazon EC2 console if you have not done so already.

2. Copy the Public IP (down the bottom of the screen in Description) to your clipboard. In this example my IP address is 52.53.186.1. Do not use this IP address; it will not work, as your server IP address will be different.

3. Open a Terminal and change directory to where you downloaded your key pair. Login to your server using SSH, for example:

ssh -i keras-aws-keypair.pem ubuntu@52.53.186.1

Listing 5.2: Log-in To Your AWS Instance.

4. If prompted, type yes and press enter. You are now logged into your server.
Figure 5.10: Log in Screen for Your AWS Server

We need to make two small changes before we can start using Keras. This will just take a minute. You will have to make these changes each time you start the instance.

5.4.1 Update Keras

Update to a specific version of Keras known to work on this configuration. At the time of writing, the latest version of Keras is version 1.1.0. We can specify this version as part of the upgrade of Keras via pip:

pip install --upgrade --no-deps keras==1.1.0

Listing 5.3: Update Keras Using Pip.

5.4.2 Configure Theano

Update your configuration of Theano (the Keras backend) to always use the GPU. First open the Theano configuration file in your favorite command line text editor, such as vi:

vi ~/.theanorc

Listing 5.4: Edit the Theano Configuration File.

Copy and paste the following configuration and save the file:

[global]
device = gpu
floatX = float32
optimizer_including = cudnn
allow_gc = False

[lib]
cnmem = .95

Listing 5.5: New Configuration For Theano.
We're done. You can confirm that Theano is working correctly by typing:

python -c "import theano; print(theano.sandbox.cuda.dnn.dnn_available())"

Listing 5.6: Script to Check Theano Configuration.

This command will output the Theano configuration. You should see:

Using gpu device 0: GRID K520 (CNMeM is enabled)
True

Listing 5.7: Sample Output of Script to Check Theano Configuration.

You can also confirm that Keras is installed and is working correctly by typing:

python -c "import keras; print(keras.__version__)"

Listing 5.8: Script To Check Keras Configuration.

You should see:

Using Theano backend.
Using gpu device 0: GRID K520 (CNMeM is enabled)
1.1.0

Listing 5.9: Sample Output of Script to Check Keras Configuration.

You are now free to run your code.

5.5 Build and Run Models on AWS

This section offers some tips for running your code on AWS.

5.5.1 Copy Scripts and Data to AWS

You can get started quickly by copying your files to your running AWS instance. For example, you can copy the examples provided with this book to your AWS instance using the scp command as follows:

scp -i keras-aws-keypair.pem -r src ubuntu@52.53.186.1:~/

Listing 5.10: Example for Copying Sample Code to AWS.

This will copy the entire src/ directory to your home directory on your AWS instance. You can easily adapt this example to get your larger datasets from your workstation onto your AWS instance. Note that Amazon may impose charges for moving very large amounts of data in and out of your AWS instance. Refer to Amazon documentation for relevant charges.
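Copying files in the other direction works the same way, by swapping the source and destination arguments. A sketch, assuming one of your scripts has produced a hypothetical results.csv in the remote src/ directory:

scp -i keras-aws-keypair.pem ubuntu@52.53.186.1:~/src/results.csv .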
5.5.2 Run Models on AWS

You can run your scripts on your AWS instance as per normal:

python filename.py

Listing 5.11: Example of Running a Python script on AWS.

You are using AWS to create large neural network models that may take hours or days to train. As such, it is a better idea to run your scripts as a background job. This allows you to close your terminal and your workstation while your AWS instance continues to run your script. You can easily run your script as a background process as follows:

nohup python /path/to/script.py >/path/to/script.log 2>&1 < /dev/null &

Listing 5.12: Run Script as a Background Process.

You can then check the status and results in your script.log file later.
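While a background job is running you can watch the log grow, or confirm that the process is still alive. A small sketch, assuming the script.log path used above:

tail -f /path/to/script.log
ps -ef | grep python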
5.6 Close Your EC2 Instance

When you are finished with your work you must close your instance. Remember, you are charged by the amount of time that you use the instance. It is cheap, but you do not want to leave an instance on if you are not using it.

1. Log out of your instance at the terminal, for example you can type:

exit

Listing 5.13: Log-out of Server Instance.

2. Log in to your AWS account with your web browser.

3. Click EC2.

4. Click Instances from the left-hand side menu.

Figure 5.11: Review Your List of Running Instances

5. Select your running instance from the list (it may already be selected if you only have one running instance).

Figure 5.12: Select Your Running AWS Instance

6. Click the Actions button, select Instance State and choose Terminate. Confirm that you want to terminate your running instance.

It may take a number of seconds for the instance to close and to be removed from your list of instances.
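If you prefer the command line and have the AWS CLI installed and configured, you can also terminate an instance without the web console. A sketch (the instance id shown is a hypothetical placeholder; substitute your own from the console):

aws ec2 terminate-instances --instance-ids i-0123456789abcdef0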
5.7 Tips and Tricks for Using Keras on AWS

Below are some tips and tricks for getting the most out of using Keras on AWS instances.

Design a suite of experiments to run beforehand. Experiments can take a long time to run and you are paying for the time you use. Make time to design a batch of experiments to run on AWS. Put each in a separate file and call them in turn from another script. This will allow you to answer multiple questions from one long run, perhaps overnight.

Always close your instance at the end of your experiments. You do not want to be surprised with a very large AWS bill.

Try spot instances for a cheaper but less reliable option. Amazon sells unused time on their hardware at a much cheaper price, but at the cost of potentially having your instance closed at any second. If you are learning or your experiments are not critical, this might be an ideal option for you. You can access spot instances from the Spot Instance option on the left hand side menu in your EC2 web console.

5.8 More Resources For Deep Learning on AWS

Below is a list of resources to learn more about AWS and developing deep learning models in the cloud.

An introduction to Amazon Elastic Compute Cloud (EC2) if you are new to all of this.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html

An introduction to Amazon Machine Images (AMI).
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html

AWS Tutorial for the Stanford CS231n Convolutional Neural Networks for Visual Recognition class (it is slightly out of date).
http://cs231n.github.io/aws-tutorial/

Learn more about how to configure Theano on the Theano Configuration page.
http://deeplearning.net/software/theano/library/config.html

5.9 Summary

In this lesson you discovered how you can develop and evaluate your large deep learning models in Keras using GPUs on the Amazon Web Service. You learned:

Amazon Web Services with their Elastic Compute Cloud offers an affordable way to run large deep learning models on GPU hardware.

How to setup and launch an EC2 server for deep learning experiments.

How to update the Keras version on the server and confirm that the system is working correctly.

How to run Keras experiments on AWS instances in batch as background tasks.
5.9.1 Next

This concludes Part II and gives you the capability to install, configure and use the Python deep learning libraries on your workstation or in the cloud, leveraging GPU hardware. Next in Part III you will learn how to use the Keras API and develop your own neural network models.
Part III

Multilayer Perceptrons
Chapter 6

Crash Course In Multilayer Perceptrons

Artificial neural networks are a fascinating area of study, although they can be intimidating when just getting started. There is a lot of specialized terminology used when describing the data structures and algorithms used in the field. In this lesson you will get a crash course in the terminology and processes used in the field of Multilayer Perceptron artificial neural networks. After completing this lesson you will know:

The building blocks of neural networks including neurons, weights and activation functions.

How the building blocks are used in layers to create networks.

How networks are trained from example data.

Let's get started.

6.1 Crash Course Overview

We are going to cover a lot of ground in this lesson. Here is an idea of what is ahead:

1. Multilayer Perceptrons.
2. Neurons, Weights and Activations.
3. Networks of Neurons.
4. Training Networks.

We will start off with an overview of Multilayer Perceptrons.
6.2 Multilayer Perceptrons

The field of artificial neural networks is often just called Neural Networks or Multilayer Perceptrons after perhaps the most useful type of neural network. A Perceptron is a single neuron model that was a precursor to larger neural networks. It is a field of study that investigates how simple models of biological brains can be used to solve difficult computational tasks like the predictive modeling tasks we see in machine learning. The goal is not to create realistic models of the brain, but instead to develop robust algorithms and data structures that we can use to model difficult problems.

The power of neural networks comes from their ability to learn the representation in your training data and how to best relate it to the output variable that you want to predict. In this sense neural networks learn a mapping. Mathematically, they are capable of learning any mapping function and have been proven to be a universal approximation algorithm. The predictive capability of neural networks comes from the hierarchical or multilayered structure of the networks. The data structure can pick out (learn to represent) features at different scales or resolutions and combine them into higher-order features. For example, from lines, to collections of lines, to shapes.

6.3 Neurons

The building blocks for neural networks are artificial neurons. These are simple computational units that have weighted input signals and produce an output signal using an activation function.

Figure 6.1: Model of a Simple Neuron
6.3.1 Neuron Weights

You may be familiar with linear regression, in which case the weights on the inputs are very much like the coefficients used in a regression equation. Like linear regression, each neuron also has a bias which can be thought of as an input that always has the value 1.0 and it too must be weighted. For example, a neuron may have two inputs, in which case it requires three weights: one for each input and one for the bias.

Weights are often initialized to small random values, such as values in the range 0 to 0.3, although more complex initialization schemes can be used. Like linear regression, larger weights indicate increased complexity and fragility of the model. It is desirable to keep weights in the network small, and regularization techniques can be used.

6.3.2 Activation

The weighted inputs are summed and passed through an activation function, sometimes called a transfer function. An activation function is a simple mapping of summed weighted input to the output of the neuron. It is called an activation function because it governs the threshold at which the neuron is activated and the strength of the output signal. Historically, simple step activation functions were used where if the summed input was above a threshold, for example 0.5, then the neuron would output a value of 1.0, otherwise it would output a 0.0.

Traditionally nonlinear activation functions are used. This allows the network to combine the inputs in more complex ways and in turn provide a richer capability in the functions they can model. Nonlinear functions like the logistic function, also called the sigmoid function, were used that output a value between 0 and 1 with an s-shaped distribution, and the hyperbolic tangent function, also called Tanh, that outputs the same distribution over the range -1 to +1. More recently the rectifier activation function has been shown to provide better results.
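To make the mechanics concrete, the following short sketch (plain NumPy, not part of the book's Keras code) computes the output of a single neuron: a weighted sum of the inputs plus a bias, passed through a sigmoid activation. The input and weight values are made up for illustration.

import numpy as np

def sigmoid(z):
    # squash the summed input into the range 0 to 1
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.8])        # two input signals
w = np.array([0.1, 0.25])       # one weight per input
b = 0.05                        # bias weight (its input is fixed at 1.0)
activation = np.dot(w, x) + b   # weighted sum of inputs plus bias
output = sigmoid(activation)    # the neuron's output signal
print(output)                   # approximately 0.574

Swapping sigmoid() for a step function or a rectifier (max(0, z)) reproduces the other activation functions described above.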
6.4 Networks of Neurons

Neurons are arranged into networks of neurons. A row of neurons is called a layer and one network can have multiple layers. The architecture of the neurons in the network is often called the network topology.

Figure 6.2: Model of a Simple Network

6.4.1 Input or Visible Layers

The bottom layer that takes input from your dataset is called the visible layer, because it is the exposed part of the network. Often a neural network is drawn with a visible layer with one neuron per input value or column in your dataset. These are not neurons as described above, but simply pass the input value through to the next layer.

6.4.2 Hidden Layers

Layers after the input layer are called hidden layers because they are not directly exposed to the input. The simplest network structure is to have a single neuron in the hidden layer that directly outputs the value. Given increases in computing power and efficient libraries, very deep neural networks can be constructed. Deep learning can refer to having many hidden layers in your neural network. They are deep because they would have been unimaginably slow to train historically, but may take seconds or minutes to train using modern techniques and hardware.

6.4.3 Output Layer

The final layer is called the output layer and it is responsible for outputting a value or vector of values that correspond to the format required for the problem. The choice of activation function in the output layer is strongly constrained by the type of problem that you are modeling. For example (see the sketch after this list):

A regression problem may have a single output neuron and the neuron may have no activation function.

A binary classification problem may have a single output neuron and use a sigmoid activation function to output a value between 0 and 1 to represent the probability of predicting a value for the primary class. This can be turned into a crisp class value by using a threshold of 0.5 and snapping values less than the threshold to 0, otherwise to 1.

A multiclass classification problem may have multiple neurons in the output layer, one for each class (e.g. three neurons for the three classes in the famous iris flowers classification problem). In this case a softmax activation function may be used to output a probability of the network predicting each of the class values. Selecting the output with the highest probability can be used to produce a crisp class classification value.
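A short sketch of how the output-layer values described above are turned into crisp class labels, using NumPy. The probability values are made up for illustration:

import numpy as np

# binary case: threshold a sigmoid output at 0.5
prob = 0.73
label = 1 if prob >= 0.5 else 0     # -> 1

# multiclass case: take the argmax of the per-class outputs
probs = np.array([0.2, 0.7, 0.1])   # one probability per class
class_index = np.argmax(probs)      # -> 1, the second class
print(label, class_index)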
6.5 Training Networks

Once configured, the neural network needs to be trained on your dataset.

6.5.1 Data Preparation

You must first prepare your data for training on a neural network. Data must be numerical, for example real values. If you have categorical data, such as a sex attribute with the values male and female, you can convert it to a real-valued representation called a one hot encoding. This is where one new column is added for each class value (two columns in the case of sex of male and female) and a 0 or 1 is added for each row depending on the class value for that row.

This same one hot encoding can be used on the output variable in classification problems with more than one class. This would create a binary vector from a single column that would be easy to directly compare to the output of the network's output layer, that, as described above, would output one value for each class.

Neural networks require the input to be scaled in a consistent way. You can rescale it to the range between 0 and 1, called normalization. Another popular technique is to standardize it so that the distribution of each column has a mean of zero and a standard deviation of 1. Scaling also applies to image pixel data. Data such as words can be converted to integers, such as the frequency rank of the word in the dataset, and other encoding techniques.
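A small sketch of the preparation steps described above, using plain NumPy. The toy values are made up for illustration:

import numpy as np

# one hot encode a categorical sex attribute: male -> [1, 0], female -> [0, 1]
sex = np.array([0, 1, 0])                  # 0 = male, 1 = female as integers
one_hot = np.eye(2)[sex]                   # one new column per class value

# normalization: rescale a numeric column to the range 0 to 1
col = np.array([50.0, 75.0, 100.0])
normalized = (col - col.min()) / (col.max() - col.min())

# standardization: zero mean and unit standard deviation
standardized = (col - col.mean()) / col.std()
print(one_hot, normalized, standardized)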
6.5.2 Stochastic Gradient Descent

The classical and still preferred training algorithm for neural networks is called stochastic gradient descent. This is where one row of data is exposed to the network at a time as input. The network processes the input upward, activating neurons as it goes, to finally produce an output value. This is called a forward pass on the network. It is the type of pass that is also used after the network is trained in order to make predictions on new data.

The output of the network is compared to the expected output and an error is calculated. This error is then propagated back through the network, one layer at a time, and the weights are updated according to the amount that they contributed to the error. This clever bit of math is called the Back Propagation algorithm. The process is repeated for all of the examples in your training data. One round of updating the network for the entire training dataset is called an epoch. A network may be trained for tens, hundreds or many thousands of epochs.

6.5.3 Weight Updates

The weights in the network can be updated from the errors calculated for each training example and this is called online learning. It can result in fast but also chaotic changes to the network. Alternatively, the errors can be saved up across all of the training examples and the network can be updated at the end. This is called batch learning and is often more stable.

Because datasets are so large and because of computational efficiencies, the size of the batch (the number of examples the network is shown before an update) is often reduced to a small number, such as tens or hundreds of examples. The amount that weights are updated is controlled by a configuration parameter called the learning rate. It is also called the step size and controls the step or change made to network weights for a given error. Often small learning rates are used such as 0.1 or 0.01 or smaller. The update equation can be complemented with additional configuration terms that you can set.

Momentum is a term that incorporates the properties from the previous weight update to allow the weights to continue to change in the same direction even when there is less error being calculated.

Learning Rate Decay is used to decrease the learning rate over epochs to allow the network to make large changes to the weights at the beginning and smaller fine tuning changes later in the training schedule.

6.5.4 Prediction

Once a neural network has been trained it can be used to make predictions. You can make predictions on test or validation data in order to estimate the skill of the model on unseen data. You can also deploy it operationally and use it to make predictions continuously. The network topology and the final set of weights is all that you need to save from the model. Predictions are made by providing the input to the network and performing a forward pass, allowing it to generate an output that you can use as a prediction.

6.6 Summary

In this lesson you discovered artificial neural networks for machine learning. You learned:

How neural networks are not models of the brain but are instead computational models for solving complex machine learning problems.

That neural networks are comprised of neurons that have weights and activation functions.

The networks are organized into layers of neurons and are trained using stochastic gradient descent.

That it is a good idea to prepare your data before training a neural network model.

6.6.1 Next

You now know the basics of neural network models. In the next section you will develop your very first Multilayer Perceptron model in Keras.
Chapter 7

Develop Your First Neural Network With Keras

Keras is a powerful and easy-to-use Python library for developing and evaluating deep learning models. It wraps the efficient numerical computation libraries Theano and TensorFlow and allows you to define and train neural network models in a few short lines of code. In this lesson you will discover how to create your first neural network model in Python using Keras. After completing this lesson you will know:

How to load a CSV dataset ready for use with Keras.

How to define and compile a Multilayer Perceptron model in Keras.

How to evaluate a Keras model on a validation dataset.

Let's get started.

7.1 Tutorial Overview

There is not a lot of code required, but we are going to step over it slowly so that you will know how to create your own models in the future. The steps you are going to cover in this tutorial are as follows:

1. Load Data.
2. Define Model.
3. Compile Model.
4. Fit Model.
5. Evaluate Model.
6. Tie It All Together.
7.2 Pima Indians Onset of Diabetes Dataset

In this tutorial we are going to use the Pima Indians onset of diabetes dataset. This is a standard machine learning dataset available for free download from the UCI Machine Learning repository. It describes patient medical record data for Pima Indians and whether they had an onset of diabetes within five years. It is a binary classification problem (onset of diabetes as 1 or not as 0). The input variables that describe each patient are numerical and have varying scales. Below lists the eight input attributes and the output class for the dataset:

1. Number of times pregnant.
2. Plasma glucose concentration at 2 hours in an oral glucose tolerance test.
3. Diastolic blood pressure (mm Hg).
4. Triceps skin fold thickness (mm).
5. 2-Hour serum insulin (mu U/ml).
6. Body mass index.
7. Diabetes pedigree function.
8. Age (years).
9. Class, onset of diabetes within five years.

Given that all attributes are numerical makes it easy to use directly with neural networks that expect numerical inputs and output values, and ideal for our first neural network in Keras. This dataset will also be used for a number of additional lessons coming up in this book, so keep it handy. Below is a sample of the dataset showing the first 5 rows of the 768 instances:

6,148,72,35,0,33.6,0.627,50,1
1,85,66,29,0,26.6,0.351,31,0
8,183,64,0,0,23.3,0.672,32,1
1,89,66,23,94,28.1,0.167,21,0
0,137,40,35,168,43.1,2.288,33,1

Listing 7.1: Sample of the Pima Indians Dataset.

The dataset file is available in your bundle of code recipes provided with this book. Alternatively, you can download the Pima Indians dataset from the UCI Machine Learning repository and place it in your local working directory, the same as your Python file1. Save it with the file name:

pima-indians-diabetes.csv

Listing 7.2: Pima Indians Dataset File.

1 http://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data
The baseline accuracy if all predictions are made as no onset of diabetes is 65.1%. Top results on the dataset are in the range of 77.7% accuracy using 10-fold cross validation2. You can learn more about the dataset on the dataset home page on the UCI Machine Learning Repository3.

7.3 Load Data

Whenever we work with machine learning algorithms that use a stochastic process (e.g. random numbers), it is a good idea to initialize the random number generator with a fixed seed value. This is so that you can run the same code again and again and get the same result. This is useful if you need to demonstrate a result, compare algorithms using the same source of randomness or to debug a part of your code. You can initialize the random number generator with any seed you like, for example:

from keras.models import Sequential
from keras.layers import Dense
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)

Listing 7.3: Load Libraries and Seed Random Number Generator.

Now we can load our Pima Indians dataset. You can now load the file directly using the NumPy function loadtxt(). There are eight input variables and one output variable (the last column). Once loaded we can split the dataset into input variables (X) and the output class variable (Y).

# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]

Listing 7.4: Load The Dataset Using NumPy.

We have initialized our random number generator to ensure our results are reproducible and loaded our data. We are now ready to define our neural network model.

7.4 Define Model

Models in Keras are defined as a sequence of layers. We create a Sequential model and add layers one at a time until we are happy with our network topology. The first thing to get right is to ensure the input layer has the right number of inputs. This can be specified when creating the first layer with the input_dim argument and setting it to 8 for the 8 input variables.

How do we know the number of layers to use and their types? This is a very hard question. There are heuristics that we can use and often the best network structure is found through a process of trial and error experimentation.

2 http://www.is.umk.pl/projects/datasets.html#Diabetes
3 http://archive.ics.uci.edu/ml/datasets/Pima+Indians+Diabetes
Generally, you need a network large enough to capture the structure of the problem. In this example we will use a fully-connected network structure with three layers.

Fully connected layers are defined using the Dense class. We can specify the number of neurons in the layer as the first argument, the initialization method as the second argument as init, and specify the activation function using the activation argument. In this case we initialize the network weights to a small random number generated from a uniform distribution (uniform), in this case between 0 and 0.05 because that is the default uniform weight initialization in Keras. Another traditional alternative would be normal for small random numbers generated from a Gaussian distribution.

We will use the rectifier (relu) activation function on the first two layers and the sigmoid activation function in the output layer. It used to be the case that sigmoid and tanh activation functions were preferred for all layers. These days, better performance is seen using the rectifier activation function. We use a sigmoid activation function on the output layer to ensure our network output is between 0 and 1 and easy to map to either a probability of class 1 or snap to a hard classification of either class with a default threshold of 0.5. We can piece it all together by adding each layer. The first hidden layer has 12 neurons and expects 8 input variables. The second hidden layer has 8 neurons and finally the output layer has 1 neuron to predict the class (onset of diabetes or not).

# create model
model = Sequential()
model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))

Listing 7.5: Define the Neural Network Model in Keras.

Below is a depiction of the network structure.
Figure 7.1: Visualization of Neural Network Structure.

7.5 Compile Model

Now that the model is defined, we can compile it. Compiling the model uses the efficient numerical libraries under the covers (the so-called backend) such as Theano or TensorFlow. The backend automatically chooses the best way to represent the network for training and making predictions to run on your hardware. When compiling, we must specify some additional properties required when training the network. Remember, training a network means finding the best set of weights to make predictions for this problem.

We must specify the loss function to use to evaluate a set of weights, the optimizer used to search through different weights for the network and any optional metrics we would like to collect and report during training. In this case we will use logarithmic loss, which for a binary classification problem is defined in Keras as binary_crossentropy. We will also use the efficient gradient descent algorithm adam, for no other reason than that it is an efficient default. Learn more about the Adam optimization algorithm in the paper Adam: A Method for Stochastic Optimization4. Finally, because it is a classification problem, we will collect and report the classification accuracy as the metric.

# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

Listing 7.6: Compile the Neural Network Model.

4 http://arxiv.org/abs/1412.6980
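Adam is used here as a sensible default, but the optimizer argument also accepts a configured optimizer object if you want control over parameters such as the learning rate. As a sketch (the values shown are illustrative, not recommendations from the book):

from keras.optimizers import SGD
# classical stochastic gradient descent with an explicit learning rate and momentum
sgd = SGD(lr=0.01, momentum=0.9)
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])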
7.6 Fit Model

We have defined our model and compiled it ready for efficient computation. Now it is time to execute the model on some data. We can train or fit our model on our loaded data by calling the fit() function on the model.

The training process will run for a fixed number of iterations through the dataset called epochs, that we must specify using the nb_epoch argument. We can also set the number of instances that are evaluated before a weight update in the network is performed, called the batch size, set using the batch_size argument. For this problem we will run for a small number of epochs (150) and use a relatively small batch size of 10. Again, these can be chosen experimentally by trial and error.

# Fit the model
model.fit(X, Y, nb_epoch=150, batch_size=10)

Listing 7.7: Fit the Neural Network Model to the Dataset.

This is where the work happens on your CPU or GPU.

7.7 Evaluate Model

We have trained our neural network on the entire dataset and we can evaluate the performance of the network on the same dataset. This will only give us an idea of how well we have modeled the dataset (e.g. train accuracy), but no idea of how well the algorithm might perform on new data. We have done this for simplicity, but ideally, you could separate your data into train and test datasets for the training and evaluation of your model.

You can evaluate your model on your training dataset using the evaluate() function on your model and pass it the same input and output used to train the model. This will generate a prediction for each input and output pair and collect scores, including the average loss and any metrics you have configured, such as accuracy.

# evaluate the model
scores = model.evaluate(X, Y)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

Listing 7.8: Evaluate the Neural Network Model on the Dataset.

7.8 Tie It All Together

You have just seen how you can easily create your first neural network model in Keras. Let's tie it all together into a complete code example.

# Create your first MLP in Keras
from keras.models import Sequential
from keras.layers import Dense
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, nb_epoch=150, batch_size=10)
# evaluate the model
scores = model.evaluate(X, Y)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

Listing 7.9: Complete Working Example of Your First Neural Network in Keras.

Running this example, you should see a message for each of the 150 epochs printing the loss and accuracy for each, followed by the final evaluation of the trained model on the training dataset. It takes about 10 seconds to execute on my workstation running on the CPU with a Theano backend.

...
Epoch 145/150
768/768 [==============================] - 0s - loss: 0.4574 - acc: 0.7786
Epoch 146/150
768/768 [==============================] - 0s - loss: 0.4620 - acc: 0.7734
Epoch 147/150
768/768 [==============================] - 0s - loss: 0.4633 - acc: 0.7760
Epoch 148/150
768/768 [==============================] - 0s - loss: 0.4554 - acc: 0.7812
Epoch 149/150
768/768 [==============================] - 0s - loss: 0.4656 - acc: 0.7643
Epoch 150/150
768/768 [==============================] - 0s - loss: 0.4618 - acc: 0.7878
448/768 [================>.............] - ETA: 0s
acc: 78.39%

Listing 7.10: Output of Running Your First Neural Network in Keras.
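Once fitted, the model can also be used to make predictions on new data with the predict() function, which for this network returns one sigmoid probability per row that you can round to a crisp class. A minimal sketch, reusing the X array from the listing above (in practice you would pass genuinely unseen data):

# make class predictions (probabilities from the sigmoid output)
predictions = model.predict(X)
# round the probabilities to crisp 0/1 class values
rounded = [round(p[0]) for p in predictions]
print(rounded[:5])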
7.9 Summary

In this lesson you discovered how to create your first neural network model using the powerful Keras Python library for deep learning. Specifically you learned the five key steps in using Keras to create a neural network or deep learning model, step-by-step including:

How to load data.

How to define a neural network model in Keras.

How to compile a Keras model using the efficient numerical backend.

How to train a model on data.

How to evaluate a model on data.

7.9.1 Next

You now know how to develop a Multilayer Perceptron model in Keras. In the next section you will discover different ways that you can evaluate your models and estimate their performance on unseen data.
Chapter 8

Evaluate The Performance of Deep Learning Models

There are a lot of decisions to make when designing and configuring your deep learning models. Most of these decisions must be resolved empirically through trial and error and evaluating them on real data. As such, it is critically important to have a robust way to evaluate the performance of your neural network and deep learning models. In this lesson you will discover a few ways that you can use to evaluate model performance using Keras. After completing this lesson, you will know:

How to evaluate a Keras model using an automatic verification dataset.

How to evaluate a Keras model using a manual verification dataset.

How to evaluate a Keras model using k-fold cross validation.

Let's get started.

8.1 Empirically Evaluate Network Configurations

There are a myriad of decisions you must make when designing and configuring your deep learning models. Many of these decisions can be resolved by copying the structure of other people's networks and using heuristics. Ultimately, the best technique is to actually design small experiments and empirically evaluate options using real data. This includes high-level decisions like the number, size and type of layers in your network. It also includes the lower level decisions like the choice of loss function, activation functions, optimization procedure and number of epochs.

Deep learning is often used on problems that have very large datasets. That is tens of thousands or hundreds of thousands of instances. As such, you need to have a robust test harness that allows you to estimate the performance of a given configuration on unseen data, and reliably compare the performance to other configurations.

8.2 Data Splitting

The large amount of data and the complexity of the models require very long training times. As such, it is typical to use a simple separation of data into training and test datasets or training and validation datasets.
Keras provides two convenient ways of evaluating your deep learning algorithms this way:

1. Use an automatic verification dataset.
2. Use a manual verification dataset.

8.2.1 Use an Automatic Verification Dataset

Keras can separate a portion of your training data into a validation dataset and evaluate the performance of your model on that validation dataset each epoch. You can do this by setting the validation_split argument on the fit() function to a percentage of the size of your training dataset. For example, a reasonable value might be 0.2 or 0.33 for 20% or 33% of your training data held back for validation. The example below demonstrates the use of an automatic validation dataset on the Pima Indians onset of diabetes dataset (see Section 7.2).

# MLP with automatic validation set
from keras.models import Sequential
from keras.layers import Dense
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, validation_split=0.33, nb_epoch=150, batch_size=10)

Listing 8.1: Evaluate A Neural Network Using an Automatic Validation Set.

Running the example, you can see that the verbose output on each epoch shows the loss and accuracy on both the training dataset and the validation dataset.

Epoch 145/150
514/514 [==============================] - 0s - loss: 0.4885 - acc: 0.7743 - val_loss: 0.5016 - val_acc: 0.7638
Epoch 146/150
514/514 [==============================] - 0s - loss: 0.4862 - acc: 0.7704 - val_loss: 0.5202 - val_acc: 0.7323
Epoch 147/150
514/514 [==============================] - 0s - loss: 0.4959 - acc: 0.7588 - val_loss: 0.5012 - val_acc: 0.7598
Epoch 148/150
514/514 [==============================] - 0s - loss: 0.4966 - acc: 0.7665 - val_loss: 0.5244 - val_acc: 0.7520
Epoch 149/150
514/514 [==============================] - 0s - loss: 0.4863 - acc: 0.7724 - val_loss: 0.5074 - val_acc: 0.7717
Epoch 150/150
514/514 [==============================] - 0s - loss: 0.4884 - acc: 0.7724 - val_loss: 0.5462 - val_acc: 0.7205

Listing 8.2: Output of Evaluating A Neural Network Using an Automatic Validation Set.

8.2.2 Use a Manual Verification Dataset

Keras also allows you to manually specify the dataset to use for validation during training. In this example we use the handy train_test_split() function from the Python scikit-learn machine learning library to separate our data into a training and test dataset. We use 67% for training and the remaining 33% of the data for validation. The validation dataset can be specified to the fit() function in Keras by the validation_data argument. It takes a tuple of the input and output datasets.

# MLP with manual validation set
from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import train_test_split
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# split into 67% for train and 33% for test
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=seed)
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=150, batch_size=10)

Listing 8.3: Evaluate A Neural Network Using a Manual Validation Set.

Like before, running the example provides verbose output of training that includes the loss and accuracy of the model on both the training and validation datasets for each epoch.

...
Epoch 145/150
514/514 [==============================] - 0s - loss: 0.5001 - acc: 0.7685 - val_loss: 0.5617 - val_acc: 0.7087
Epoch 146/150
514/514 [==============================] - 0s - loss: 0.5041 - acc: 0.7529 - val_loss: 0.5423 - val_acc: 0.7362
Epoch 147/150
514/514 [==============================] - 0s - loss: 0.4936 - acc: 0.7685 - val_loss: 0.5426 - val_acc: 0.7283
Epoch 148/150
514/514 [==============================] - 0s - loss: 0.4957 - acc: 0.7685 - val_loss: 0.5430 - val_acc: 0.7362
Epoch 149/150
514/514 [==============================] - 0s - loss: 0.4953 - acc: 0.7685 - val_loss: 0.5403 - val_acc: 0.7323
Epoch 150/150
514/514 [==============================] - 0s - loss: 0.4941 - acc: 0.7743 - val_loss: 0.5452 - val_acc: 0.7323

Listing 8.4: Output of Evaluating A Neural Network Using a Manual Validation Set.

8.3 Manual k-Fold Cross Validation

The gold standard for machine learning model evaluation is k-fold cross validation. It provides a robust estimate of the performance of a model on unseen data. It does this by splitting the training dataset into k subsets and takes turns training models on all subsets except one which is held out, and evaluating model performance on the held out validation dataset. The process is repeated until all subsets are given an opportunity to be the held out validation set. The performance measure is then averaged across all models that are created.

Cross validation is often not used for evaluating deep learning models because of the greater computational expense. For example, k-fold cross validation is often used with 5 or 10 folds. As such, 5 or 10 models must be constructed and evaluated, greatly adding to the evaluation time of a model. Nevertheless, when the problem is small enough or if you have sufficient compute resources, k-fold cross validation can give you a less biased estimate of the performance of your model.

In the example below we use the handy StratifiedKFold class1 from the scikit-learn Python machine learning library to split up the training dataset into 10 folds. The folds are stratified, meaning that the algorithm attempts to balance the number of instances of each class in each fold. The example creates and evaluates 10 models using the 10 splits of the data and collects all of the scores. The verbose output for each epoch is turned off by passing verbose=0 to the fit() and evaluate() functions on the model. The performance is printed for each model and it is stored. The average and standard deviation of the model performance is then printed at the end of the run to provide a robust estimate of model accuracy.

# MLP for Pima Indians Dataset with 10-fold cross validation
from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import StratifiedKFold
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")

1 http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# define 10-fold cross validation test harness
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
cvscores = []
for train, test in kfold.split(X, Y):
    # create model
    model = Sequential()
    model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
    model.add(Dense(8, init='uniform', activation='relu'))
    model.add(Dense(1, init='uniform', activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    # Fit the model
    model.fit(X[train], Y[train], nb_epoch=150, batch_size=10, verbose=0)
    # evaluate the model
    scores = model.evaluate(X[test], Y[test], verbose=0)
    print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
    cvscores.append(scores[1] * 100)
print("%.2f%% (+/- %.2f%%)" % (numpy.mean(cvscores), numpy.std(cvscores)))

Listing 8.5: Evaluate A Neural Network Using scikit-learn.

Running the example will take less than a minute and will produce the following output:

acc: 71.43%
acc: 71.43%
acc: 75.32%
acc: 79.22%
acc: 80.52%
acc: 68.83%
acc: 76.62%
acc: 67.53%
acc: 68.42%
acc: 72.37%
73.17% (+/- 4.33%)

Listing 8.6: Output of Evaluating A Neural Network Using scikit-learn.

Notice that we had to re-create the model each loop to then fit and evaluate it with the data for the fold. In the next lesson we will look at how we can use Keras models natively with the scikit-learn machine learning library.

8.4 Summary

In this lesson you discovered the importance of having a robust way to estimate the performance of your deep learning models on unseen data. You learned three ways that you can estimate the performance of your deep learning models in Python using the Keras library:

Automatically splitting a training dataset into train and validation datasets.

Manually and explicitly defining a training and validation dataset.

Evaluating performance using k-fold cross validation, the gold standard technique.
8.4.1 Next

You now know how to evaluate your models and estimate their performance. In the next lesson you will discover how you can best integrate your Keras models with the scikit-learn machine learning library.
Chapter 9

Use Keras Models With Scikit-Learn For General Machine Learning

The scikit-learn library is the most popular library for general machine learning in Python. In this lesson you will discover how you can use deep learning models from Keras with the scikit-learn library in Python. After completing this lesson you will know:

How to wrap a Keras model for use with the scikit-learn machine learning library.

How to easily evaluate Keras models using cross validation in scikit-learn.

How to tune Keras model hyperparameters using grid search in scikit-learn.

Let's get started.

9.1 Overview

Keras is a popular library for deep learning in Python, but the focus of the library is deep learning. In fact it strives for minimalism, focusing on only what you need to quickly and simply define and build deep learning models. The scikit-learn library in Python is built upon the SciPy stack for efficient numerical computation. It is a fully featured library for general purpose machine learning and provides many utilities that are useful in the development of deep learning models. Not least:

Evaluation of models using resampling methods like k-fold cross validation.

Efficient search and evaluation of model hyperparameters.

The Keras library provides a convenient wrapper for deep learning models to be used as classification or regression estimators in scikit-learn. In the next sections we will work through examples of using the KerasClassifier wrapper for a classification neural network created in Keras and used in the scikit-learn library. The test problem is the Pima Indians onset of diabetes classification dataset (see Section 7.2).
9.2 Evaluate Models with Cross Validation

The KerasClassifier and KerasRegressor classes in Keras take an argument build_fn, which is the name of the function to call to create your model. You must define a function, called whatever you like, that defines your model, compiles it and returns it. In the example below we define a function create_model() that creates a simple multilayer neural network for the problem.

We pass this function name to the KerasClassifier class by the build_fn argument. We also pass in additional arguments of nb_epoch=150 and batch_size=10. These are automatically bundled up and passed on to the fit() function which is called internally by the KerasClassifier class. In this example we use the scikit-learn StratifiedKFold to perform 10-fold stratified cross validation. This is a resampling technique that can provide a robust estimate of the performance of a machine learning model on unseen data. We use the scikit-learn function cross_val_score() to evaluate our model using the cross validation scheme and print the results.

# MLP for Pima Indians Dataset with 10-fold cross validation via sklearn
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
import numpy

# Function to create model, required for KerasClassifier
def create_model():
    # create model
    model = Sequential()
    model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
    model.add(Dense(8, init='uniform', activation='relu'))
    model.add(Dense(1, init='uniform', activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = KerasClassifier(build_fn=create_model, nb_epoch=150, batch_size=10, verbose=0)
# evaluate using 10-fold cross validation
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(model, X, Y, cv=kfold)
print(results.mean())

Listing 9.1: Evaluate A Neural Network Using scikit-learn.

Running the example, a total of 10 models are created and evaluated and the final average accuracy is displayed.
0.731715653237

Listing 9.2: Output of Evaluating A Neural Network Using scikit-learn.

You can see that when the Keras model is wrapped, estimating model accuracy can be greatly streamlined compared to the manual enumeration of cross validation folds performed in the previous lesson.

9.3 Grid Search Deep Learning Model Parameters

The previous example showed how easy it is to wrap your deep learning model from Keras and use it in functions from the scikit-learn library. In this example we go a step further. We already know we can provide arguments to the fit() function. The function that we specify to the build_fn argument when creating the KerasClassifier wrapper can also take arguments. We can use these arguments to further customize the construction of the model.

In this example we use a grid search to evaluate different configurations for our neural network model and report on the combination that provides the best estimated performance. The create_model() function is defined to take two arguments, optimizer and init, both of which must have default values. This will allow us to evaluate the effect of using different optimization algorithms and weight initialization schemes for our network. After creating our model, we define arrays of values for the parameters we wish to search, specifically:

Optimizers for searching different weight values.

Initializers for preparing the network weights using different schemes.

Number of epochs for training the model for different numbers of exposures to the training dataset.

Batches for varying the number of samples before weight updates.

The options are specified into a dictionary and passed to the configuration of the GridSearchCV scikit-learn class. This class will evaluate a version of our neural network model for each combination of parameters (2 x 3 x 3 x 3 combinations of optimizers, initializations, epochs and batches). Each combination is then evaluated using the default of 3-fold stratified cross validation.

That is a lot of models and a lot of computation. This is not a scheme that you want to use lightly because of the time it will take to compute. It may be useful for you to design small experiments with a smaller subset of your data that will complete in a reasonable time. This experiment is reasonable in this case because of the small network and the small dataset (less than 1,000 instances and 9 attributes). Finally, the performance and combination of configurations for the best model are displayed, followed by the performance of all combinations of parameters.

# MLP for Pima Indians Dataset with grid search via sklearn
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
import numpy

# Function to create model, required for KerasClassifier
def create_model(optimizer='rmsprop', init='glorot_uniform'):
    # create model
    model = Sequential()
    model.add(Dense(12, input_dim=8, init=init, activation='relu'))
    model.add(Dense(8, init=init, activation='relu'))
    model.add(Dense(1, init=init, activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model

# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = KerasClassifier(build_fn=create_model, verbose=0)
# grid search epochs, batch size and optimizer
optimizers = ['rmsprop', 'adam']
init = ['glorot_uniform', 'normal', 'uniform']
epochs = numpy.array([50, 100, 150])
batches = numpy.array([5, 10, 20])
param_grid = dict(optimizer=optimizers, nb_epoch=epochs, batch_size=batches, init=init)
grid = GridSearchCV(estimator=model, param_grid=param_grid)
grid_result = grid.fit(X, Y)
# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
for params, mean_score, scores in grid_result.grid_scores_:
    print("%f (%f) with: %r" % (scores.mean(), scores.std(), params))

Listing 9.3: Grid Search Neural Network Parameters Using scikit-learn.

This might take about 5 minutes to complete on your workstation executed on the CPU. Running the example shows the results below. We can see that the grid search discovered that using a normal initialization scheme, the rmsprop optimizer, 150 epochs and a batch size of 5 achieved the best cross validation score of approximately 75% on this problem.

Best: 0.750000 using {'init': 'normal', 'optimizer': 'rmsprop', 'nb_epoch': 150, 'batch_size': 5}
0.662760 (0.038450) with: {'init': 'glorot_uniform', 'optimizer': 'rmsprop', 'nb_epoch': 50, 'batch_size': 5}
0.665365 (0.004872) with: {'init': 'glorot_uniform', 'optimizer': 'adam', 'nb_epoch': 50, 'batch_size': 5}
0.669271 (0.028940) with: {'init': 'glorot_uniform', 'optimizer': 'rmsprop', 'nb_epoch': 100, 'batch_size': 5}
0.709635 (0.034987) with: {'init': 'glorot_uniform', 'optimizer': 'adam', 'nb_epoch': 100, 'batch_size': 5}
0.699219 (0.022097) with: {'init': 'glorot_uniform', 'optimizer': 'rmsprop', 'nb_epoch': 150, 'batch_size': 5}
0.725260 (0.008027) with: {'init': 'glorot_uniform', 'optimizer': 'adam', 'nb_epoch': 150, 'batch_size': 5}
...

Listing 9.4: Output of Grid Search Neural Network Parameters Using scikit-learn.

9.4 Summary

In this lesson you discovered how you can wrap your Keras deep learning models and use them in the scikit-learn general machine learning library. You learned:

Specifically how to wrap Keras models so that they can be used with the scikit-learn machine learning library.

How to use a wrapped Keras model as part of evaluating model performance in scikit-learn.

How to perform hyperparameter tuning in scikit-learn using a wrapped Keras model.

You can see that using scikit-learn for standard machine learning operations such as model evaluation and model hyperparameter optimization can save a lot of time over implementing these schemes yourself.

9.4.1 Next

You now know how to best integrate your Keras models into the scikit-learn machine learning library. Now it is time to put your new skills to the test. Over the next few chapters you will practice developing neural network models in Keras end-to-end, starting with a multiclass classification problem next.
Chapter 10

Project: Multiclass Classification Of Flower Species

In this project tutorial you will discover how you can use Keras to develop and evaluate neural network models for multiclass classification problems. After completing this step-by-step tutorial, you will know:

How to load data from CSV and make it available to Keras.

How to prepare multiclass classification data for modeling with neural networks.

How to evaluate Keras neural network models with scikit-learn.

Let's get started.

10.1 Iris Flowers Classification Dataset

In this tutorial we will use the standard machine learning problem called the iris flowers dataset. This dataset is well studied and is a good problem for practicing on neural networks because all of the 4 input variables are numeric and have the same scale in centimeters. Each instance describes the measurements of an observed flower and the output variable is the specific iris species. The attributes for this dataset can be summarized as follows:

1. Sepal length in centimeters.
2. Sepal width in centimeters.
3. Petal length in centimeters.
4. Petal width in centimeters.
5. Class.

This is a multiclass classification problem, meaning that there are more than two classes to be predicted; in fact there are three flower species. This is an important type of problem on which to practice with neural networks because the three class values require specialized handling. Below is a sample of the first five of the 150 instances:
5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa

Listing 10.1: Sample of the Iris Flowers Dataset.

The iris flower dataset is a well studied problem and as such we can expect to achieve a model accuracy in the range of 95% to 97%. This provides a good target to aim for when developing our models. The dataset is provided in the bundle of sample code provided with this book. You can also download the iris flowers dataset from the UCI Machine Learning repository1 and place it in your current working directory with the filename iris.csv. You can learn more about the iris flower classification dataset on the UCI Machine Learning Repository page2.

10.2 Import Classes and Functions

We can begin by importing all of the classes and functions we will need in this tutorial. This includes both the functionality we require from Keras, but also data loading from Pandas as well as data preparation and model evaluation from scikit-learn.

import numpy
import pandas
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import Pipeline

Listing 10.2: Import Classes and Functions.

10.3 Initialize Random Number Generator

Next we need to initialize the random number generator to a constant value. This is important to ensure that the results we achieve from this model can be achieved again precisely. It ensures that the stochastic process of training a neural network model can be reproduced.

# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)

Listing 10.3: Initialize Random Number Generator.

1 http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data
2 https://archive.ics.uci.edu/ml/datasets/Iris
10.4 Load The Dataset

The dataset can be loaded directly. Because the output variable contains strings, it is easiest to load the data using pandas. We can then split the attributes (columns) into input variables (X) and output variables (Y).

# load dataset
dataframe = pandas.read_csv("iris.csv", header=None)
dataset = dataframe.values
X = dataset[:,0:4].astype(float)
Y = dataset[:,4]

Listing 10.4: Load Dataset And Separate Into Input and Output Variables.

10.5 Encode The Output Variable

The output variable contains three different string values. When modeling multiclass classification problems using neural networks, it is good practice to reshape the output attribute from a vector that contains values for each class value to a matrix with a boolean for each class value, indicating whether or not a given instance has that class value. This is called one hot encoding or creating dummy variables from a categorical variable. For example, in this problem the three class values are Iris-setosa, Iris-versicolor and Iris-virginica. If we had the three observations:

Iris-setosa
Iris-versicolor
Iris-virginica

Listing 10.5: Three Classes In The Iris Dataset.

We can turn this into a one-hot encoded binary matrix for each data instance that would look as follows:

Iris-setosa, Iris-versicolor, Iris-virginica
1, 0, 0
0, 1, 0
0, 0, 1

Listing 10.6: One Hot Encoding of The Classes In The Iris Dataset.

We can do this by first encoding the strings consistently to integers using the scikit-learn class LabelEncoder. Then convert the vector of integers to a one hot encoding using the Keras function to_categorical().

# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
# convert integers to dummy variables (i.e. one hot encoded)
dummy_y = np_utils.to_categorical(encoded_Y)

Listing 10.7: One Hot Encoding Of Iris Dataset Output Variable.
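To see what these two steps produce, the following standalone sketch (not part of the tutorial's pipeline) encodes the three class names directly; the expected values are shown in the comments:

import numpy
from keras.utils import np_utils
from sklearn.preprocessing import LabelEncoder

labels = numpy.array(['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'])
encoder = LabelEncoder()
integers = encoder.fit_transform(labels)     # -> [0 1 2]
dummies = np_utils.to_categorical(integers)  # -> the 3x3 binary matrix shown above
print(integers)
print(dummies)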
10.6 Define The Neural Network Model

The Keras library provides wrapper classes to allow you to use neural network models developed with Keras in scikit-learn, as we saw in the previous lesson. There is a KerasClassifier class in Keras that can be used as an Estimator in scikit-learn, the base type of model in the library. The KerasClassifier takes the name of a function as an argument. This function must return the constructed neural network model, ready for training.

Below is a function that will create a baseline neural network for the iris classification problem. It creates a simple fully connected network with one hidden layer that contains 4 neurons, the same number as the inputs (it could be any number of neurons). The hidden layer uses a rectifier activation function, which is a good practice. Because we used a one hot encoding for our iris dataset, the output layer must create 3 output values, one for each class. The output value with the largest value will be taken as the class predicted by the model. The network topology of this simple one-layer neural network can be summarized as:

4 inputs -> [4 hidden nodes] -> 3 outputs

Listing 10.8: Example Network Structure.

Note that we use a sigmoid activation function in the output layer. This is to ensure the output values are in the range of 0 and 1 and may be used as predicted probabilities. Finally, the network uses the efficient ADAM gradient descent optimization algorithm with a logarithmic loss function, which is called categorical_crossentropy in Keras.

# define baseline model
def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(4, input_dim=4, init='normal', activation='relu'))
    model.add(Dense(3, init='normal', activation='sigmoid'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

Listing 10.9: Define and Compile the Neural Network Model.

We can now create our KerasClassifier for use in scikit-learn. We can also pass arguments in the construction of the KerasClassifier class that will be passed on to the fit() function internally used to train the neural network. Here, we pass the number of epochs nb_epoch as 200 and batch_size as 5 to use when training the model. Debugging is also turned off when training by setting verbose to 0.

estimator = KerasClassifier(build_fn=baseline_model, nb_epoch=200, batch_size=5, verbose=0)

Listing 10.10: Create Wrapper For Neural Network Model For Use in scikit-learn.

10.7 Evaluate The Model with k-Fold Cross Validation

We can now evaluate the neural network model on our training data. The scikit-learn library has excellent capability to evaluate models using a suite of techniques. The gold standard for evaluating machine learning models is k-fold cross validation. First we can define the model
evaluation procedure. Here, we set the number of folds to be 10 (an excellent default) and to shuffle the data before partitioning it.

kfold = KFold(n_splits=10, shuffle=True, random_state=seed)

Listing 10.11: Prepare Cross Validation.

Now we can evaluate our model (estimator) on our dataset (X and dummy_y) using a 10-fold cross validation procedure (kfold). Evaluating the model only takes approximately 10 seconds and returns an object that describes the evaluation of the 10 constructed models for each of the splits of the dataset.

results = cross_val_score(estimator, X, dummy_y, cv=kfold)
print("Accuracy: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))

Listing 10.12: Evaluate the Neural Network Model.

The full code listing is provided below for completeness.

# Multiclass Classification with the Iris Flowers Dataset
import numpy
import pandas
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import Pipeline
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load dataset
dataframe = pandas.read_csv("iris.csv", header=None)
dataset = dataframe.values
X = dataset[:,0:4].astype(float)
Y = dataset[:,4]
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
# convert integers to dummy variables (i.e. one hot encoded)
dummy_y = np_utils.to_categorical(encoded_Y)
# define baseline model
def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(4, input_dim=4, init='normal', activation='relu'))
    model.add(Dense(3, init='normal', activation='sigmoid'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
estimator = KerasClassifier(build_fn=baseline_model, nb_epoch=200, batch_size=5, verbose=0)
kfold = KFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(estimator, X, dummy_y, cv=kfold)
print("Accuracy: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))

Listing 10.13: Multilayer Perceptron Model for Iris Flowers Problem.
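As an aside, once you settle on a configuration you may want string class labels rather than probabilities from a fitted model. Below is a minimal sketch, assuming the objects from the listing above (estimator, X, dummy_y and encoder) are in scope; that the wrapper's predict() returns integer class indices is an assumption about this version of the Keras scikit-learn wrappers:

# fit the wrapper directly, then recover string labels from integer predictions
estimator.fit(X, dummy_y)
predictions = estimator.predict(X)  # assumed to be integer class indices
print(encoder.inverse_transform(predictions.astype(int)))  # e.g. 'Iris-setosa', ...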
The results are summarized as both the mean and standard deviation of the model accuracy on the dataset. This is a reasonable estimation of the performance of the model on unseen data. It is also within the realm of known top results for this problem.

Accuracy: 95.33% (4.27%)

Listing 10.14: Estimated Accuracy of Neural Network Model on the Iris Dataset.

10.8 Summary

In this lesson you discovered how to develop and evaluate a neural network using the Keras Python library for deep learning. By completing this tutorial, you learned:

- How to load data and make it available to Keras.
- How to prepare multiclass classification data for modeling using one hot encoding.
- How to use Keras neural network models with scikit-learn.
- How to define a neural network using Keras for multiclass classification.
- How to evaluate a Keras neural network model using scikit-learn with k-fold cross validation.

10.8.1 Next

This was your first end-to-end project using Keras on a standalone dataset. In the next tutorial you will develop neural network models for a binary classification problem and tune them to get increases in model performance.
Chapter 11

Project: Binary Classification Of Sonar Returns

In this project tutorial you will discover how to effectively use the Keras library in your machine learning project by working through a binary classification project step-by-step. After completing this step-by-step tutorial, you will know:

- How to load training data and make it available to Keras.
- How to design and train a neural network for tabular data.
- How to evaluate the performance of a neural network model in Keras on unseen data.
- How to perform data preparation to improve skill when using neural networks.
- How to tune the topology and configuration of neural networks in Keras.

Let's get started.

11.1 Sonar Object Classification Dataset

The dataset we will use in this tutorial is the Sonar dataset. This is a dataset that describes sonar chirp returns bouncing off different surfaces. The 60 input variables are the strength of the returns at different angles. It is a binary classification problem that requires a model to differentiate rocks from metal cylinders.

It is a well understood dataset. All of the variables are continuous and generally in the range of 0 to 1. The output variable is a string, M for mine and R for rock, which will need to be converted to integers 1 and 0. The dataset contains 208 observations. The dataset is in the bundle of source code provided with this book. Alternatively, you can download the dataset (https://archive.ics.uci.edu/ml/machine-learning-databases/undocumented/connectionist-bench/sonar/sonar.all-data) and place it in your working directory with the filename sonar.csv.

A benefit of using this dataset is that it is a standard benchmark problem. This means that we have some idea of the expected skill of a good model. Using cross validation, a neural network should be able to achieve performance around 84% with an upper bound on accuracy for custom models at around 88% (see http://www.is.umk.pl/projects/datasets.html#Sonar). You can learn more about this dataset on the UCI Machine Learning repository (https://archive.ics.uci.edu/ml/datasets/Connectionist+Bench+(Sonar,+Mines+vs.+Rocks)).
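Before modeling, it can help to confirm the file loads as expected. A small sketch, assuming sonar.csv is in your working directory as described:

import pandas

dataframe = pandas.read_csv("sonar.csv", header=None)
print(dataframe.shape)               # expect (208, 61): 60 inputs plus the class column
print(dataframe[60].value_counts())  # counts of the 'M' and 'R' class labels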
11.2 Baseline Neural Network Model Performance

Let's create a baseline model and result for this problem. We will start off by importing all of the classes and functions we will need.

import numpy
import pandas
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline

Listing 11.1: Import Classes and Functions.

Next, we can initialize the random number generator to ensure that we always get the same results when executing this code. This will help if we are debugging.

# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)

Listing 11.2: Initialize The Random Number Generator.

Now we can load the dataset using Pandas and split the columns into 60 input variables (X) and 1 output variable (Y). We use Pandas to load the data because it easily handles strings (the output variable), whereas attempting to load the data directly using NumPy would be more difficult.

# load dataset
dataframe = pandas.read_csv("sonar.csv", header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:60].astype(float)
Y = dataset[:,60]

Listing 11.3: Load The Dataset And Separate Into Input and Output Variables.

The output variable contains string values. We must convert them into integer values 0 and 1. We can do this using the LabelEncoder class from scikit-learn. This class will model the encoding required using the entire dataset via the fit() function, then apply the encoding to create a new output variable using the transform() function.

# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)

Listing 11.4: Label Encode Output Variable.

We are now ready to create our neural network model using Keras. We are going to use scikit-learn to evaluate the model using stratified k-fold cross validation. This is a resampling technique that will provide an estimate of the performance of the model. To use Keras models with scikit-learn, we must use the KerasClassifier wrapper. This class takes a function that creates and returns our neural network model. It also takes arguments that it will pass along to the call to fit(), such as the number of epochs and the batch size.

Let's start off by defining the function that creates our baseline model. Our model will have a single fully connected hidden layer with the same number of neurons as input variables. This is a good default starting point when creating neural networks on a new problem.

The weights are initialized using a small Gaussian random number. The rectifier activation function is used. The output layer contains a single neuron in order to make predictions. It uses the sigmoid activation function in order to produce a probability output in the range of 0 to 1 that can easily and automatically be converted to crisp class values. Finally, we are using the logarithmic loss function (binary_crossentropy) during training, the preferred loss function for binary classification problems. The model also uses the efficient Adam optimization algorithm for gradient descent, and accuracy metrics will be collected when the model is trained.

# baseline model
def create_baseline():
    # create model
    model = Sequential()
    model.add(Dense(60, input_dim=60, init='normal', activation='relu'))
    model.add(Dense(1, init='normal', activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

Listing 11.5: Define and Compile Baseline Model.

Now it is time to evaluate this model using stratified cross validation in the scikit-learn framework. We pass the number of training epochs to the KerasClassifier, again using reasonable default values. Verbose output is also turned off given that the model will be created 10 times for the 10-fold cross validation being performed.

# evaluate baseline model
estimator = KerasClassifier(build_fn=create_baseline, nb_epoch=100, batch_size=5, verbose=0)
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(estimator, X, encoded_Y, cv=kfold)
print("Baseline: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))

Listing 11.6: Fit And Evaluate Baseline Model.

The full code listing is provided below for completeness.

# Binary Classification with Sonar Dataset: Baseline
import numpy
import pandas
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load dataset
dataframe = pandas.read_csv("sonar.csv", header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:60].astype(float)
Y = dataset[:,60]
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
# baseline model
def create_baseline():
    # create model
    model = Sequential()
    model.add(Dense(60, input_dim=60, init='normal', activation='relu'))
    model.add(Dense(1, init='normal', activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
# evaluate baseline model
estimator = KerasClassifier(build_fn=create_baseline, nb_epoch=100, batch_size=5, verbose=0)
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(estimator, X, encoded_Y, cv=kfold)
print("Baseline: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))

Listing 11.7: Multilayer Perceptron Model for Sonar Problem.

Running this code produces the following output showing the mean and standard deviation of the estimated accuracy of the model on unseen data.

Baseline: 81.68% (5.67%)

Listing 11.8: Sample Output From Fitting And Evaluating The Baseline Model.

This is an excellent score without doing any hard work.
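The stratified cross validation used above keeps the ratio of mines to rocks roughly constant in every fold. Here is a standalone sketch of that behavior; the label counts below are hypothetical stand-ins for a dataset of this size:

import numpy
from sklearn.model_selection import StratifiedKFold

labels = numpy.array([1]*111 + [0]*97)  # hypothetical: 111 of one class, 97 of the other
features = numpy.zeros((208, 1))        # placeholder inputs; only the labels matter here
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)
for train_index, test_index in skf.split(features, labels):
    print(labels[test_index].mean())    # stays near 111/208 = 0.53 in every fold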
11.3 Improve Performance With Data Preparation

It is a good practice to prepare your data before modeling. Neural network models are especially suited to consistent input values, both in scale and distribution. An effective data preparation scheme for tabular data when building neural network models is standardization. This is where the data is rescaled such that the mean value for each attribute is 0 and the standard deviation is 1. This preserves Gaussian and Gaussian-like distributions whilst normalizing the central tendencies for each attribute.

We can use scikit-learn to perform the standardization of our Sonar dataset using the StandardScaler class. Rather than performing the standardization on the entire dataset, it is good practice to train the standardization procedure on the training data within the pass of a cross validation run and to use the trained standardization instance to prepare the unseen test fold. This makes standardization a step in model preparation in the cross validation process, and it prevents the algorithm having knowledge of unseen data during evaluation, knowledge that might be passed from the data preparation scheme, such as a crisper distribution.

We can achieve this in scikit-learn using a Pipeline class. The pipeline is a wrapper that executes one or more models within a pass of the cross validation procedure. Here, we can define a pipeline with the StandardScaler followed by our neural network model.

# Binary Classification with Sonar Dataset: Standardized
import numpy
import pandas
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load dataset
dataframe = pandas.read_csv("sonar.csv", header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:60].astype(float)
Y = dataset[:,60]
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
# baseline model
def create_baseline():
    # create model
    model = Sequential()
    model.add(Dense(60, input_dim=60, init='normal', activation='relu'))
    model.add(Dense(1, init='normal', activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
# evaluate baseline model with standardized dataset
numpy.random.seed(seed)
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasClassifier(build_fn=create_baseline, nb_epoch=100, batch_size=5, verbose=0)))
pipeline = Pipeline(estimators)
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(pipeline, X, encoded_Y, cv=kfold)
print("Standardized: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))

Listing 11.9: Update Experiment To Use Data Standardization.

Running this example provides the results below. We do see a small but very nice lift in the mean accuracy.

Standardized: 84.07% (6.23%)

Listing 11.10: Sample Output From Update Using Data Standardization.
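To see why fitting the scaler inside each fold matters, consider this minimal standalone sketch: the statistics come from the training rows only, so the held-out row contributes nothing to them. The numbers are hypothetical:

import numpy
from sklearn.preprocessing import StandardScaler

data = numpy.array([[1.0], [2.0], [3.0], [100.0]])  # hypothetical column; last row is "unseen"
train, test = data[:3], data[3:]
scaler = StandardScaler().fit(train)  # mean and std computed from the training rows only
print(scaler.transform(test))         # the unseen row is rescaled with training statistics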
11.4 Tuning Layers and Neurons in The Model

There are many things to tune on a neural network, such as the weight initialization, activation functions, optimization procedure and so on. One aspect that may have an outsized effect is the structure of the network itself, called the network topology. In this section we take a look at two experiments on the structure of the network: making it smaller and making it larger. These are good experiments to perform when tuning a neural network on your problem.

11.4.1 Evaluate a Smaller Network

I suspect that there is a lot of redundancy in the input variables for this problem. The data describes the same signal from different angles. Perhaps some of those angles are more relevant than others. We can force a type of feature extraction by the network by restricting the representational space in the first hidden layer.

In this experiment we take our baseline model with 60 neurons in the hidden layer and reduce it by half to 30. This will put pressure on the network during training to pick out the most important structure in the input data to model. We will also standardize the data as in the previous experiment with data preparation and try to take advantage of the small lift in performance.

# Binary Classification with Sonar Dataset: Standardized Smaller
import numpy
import pandas
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load dataset
dataframe = pandas.read_csv("sonar.csv", header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:60].astype(float)
Y = dataset[:,60]
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
# smaller model
def create_smaller():
    # create model
    model = Sequential()
    model.add(Dense(30, input_dim=60, init='normal', activation='relu'))
    model.add(Dense(1, init='normal', activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
numpy.random.seed(seed)
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasClassifier(build_fn=create_smaller, nb_epoch=100, batch_size=5, verbose=0)))
pipeline = Pipeline(estimators)
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(pipeline, X, encoded_Y, cv=kfold)
print("Smaller: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))

Listing 11.11: Update To Use a Smaller Network Topology.

Running this example provides the following result. We can see that we have a very slight boost in the mean estimated accuracy and an important reduction in the standard deviation (average spread) of the accuracy scores for the model. This is a great result because we are doing slightly better with a network half the size, which in turn takes half the time to train.

Smaller: 84.61% (4.65%)

Listing 11.12: Sample Output From Using A Smaller Network Topology.

11.4.2 Evaluate a Larger Network

A neural network topology with more layers offers more opportunity for the network to extract key features and recombine them in useful nonlinear ways. We can easily evaluate whether adding more layers to the network improves the performance by making another small tweak to the function used to create our model. Here, we add one new layer (one line) to the network that introduces another hidden layer with 30 neurons after the first hidden layer. Our network now has the topology:

60 inputs -> [60 -> 30] -> 1 output

Listing 11.13: Summary of New Network Topology.

The idea here is that the network is given the opportunity to model all input variables before being bottlenecked and forced to halve the representational capacity, much like we did in the experiment above with the smaller network. Instead of squeezing the representation of the inputs themselves, we have an additional hidden layer to aid in the process.

# Binary Classification with Sonar Dataset: Standardized Larger
import numpy
import pandas
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load dataset
dataframe = pandas.read_csv("sonar.csv", header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:60].astype(float)
Y = dataset[:,60]
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
# larger model
def create_larger():
    # create model
    model = Sequential()
    model.add(Dense(60, input_dim=60, init='normal', activation='relu'))
    model.add(Dense(30, init='normal', activation='relu'))
    model.add(Dense(1, init='normal', activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
numpy.random.seed(seed)
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasClassifier(build_fn=create_larger, nb_epoch=100, batch_size=5, verbose=0)))
pipeline = Pipeline(estimators)
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(pipeline, X, encoded_Y, cv=kfold)
print("Larger: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))

Listing 11.14: Update To Use a Larger Network Topology.

Running this example produces the results below. We can see that we do get a nice lift in the model performance, achieving near state-of-the-art results with very little effort indeed.

Larger: 86.47% (3.82%)

Listing 11.15: Sample Output From Using A Larger Network Topology.

With further tuning of aspects like the optimization algorithm and the number of training epochs, it is expected that further improvements are possible. What is the best score that you can achieve on this dataset?

11.5 Summary

In this lesson you discovered how you can work through a binary classification problem step-by-step with Keras, specifically:

- How to load and prepare data for use in Keras.
- How to create a baseline neural network model.
- How to evaluate a Keras model using scikit-learn and stratified k-fold cross validation.
- How data preparation schemes can lift the performance of your models.
- How experiments adjusting the network topology can lift model performance.

11.5.1 Next

You now know how to develop neural network models in Keras for multiclass and binary classification problems. In the next tutorial you will work through a project to develop neural network models for a regression problem.
Chapter 12

Project: Regression Of Boston House Prices

In this project tutorial you will discover how to develop and evaluate neural network models using Keras for a regression problem. After completing this step-by-step tutorial, you will know:

- How to load a CSV dataset and make it available to Keras.
- How to create a neural network model with Keras for a regression problem.
- How to use scikit-learn with Keras to evaluate models using cross validation.
- How to perform data preparation in order to improve skill with Keras models.
- How to tune the network topology of models with Keras.

Let's get started.

12.1 Boston House Price Dataset

The problem that we will look at in this tutorial is the Boston house price dataset. The dataset describes properties of houses in Boston suburbs and is concerned with modeling the price of houses in those suburbs in thousands of dollars. As such, this is a regression predictive modeling problem. There are 13 input variables that describe the properties of a given Boston suburb. The full list of attributes in this dataset is as follows:

1. CRIM: per capita crime rate by town.
2. ZN: proportion of residential land zoned for lots over 25,000 sq. ft.
3. INDUS: proportion of non-retail business acres per town.
4. CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise).
5. NOX: nitric oxides concentration (parts per 10 million).
6. RM: average number of rooms per dwelling.
7. AGE: proportion of owner-occupied units built prior to 1940.
8. DIS: weighted distances to five Boston employment centers.
9. RAD: index of accessibility to radial highways.
10. TAX: full-value property-tax rate per $10,000.
11. PTRATIO: pupil-teacher ratio by town.
12. B: 1000(Bk - 0.63)^2, where Bk is the proportion of blacks by town.
13. LSTAT: % lower status of the population.
14. MEDV: median value of owner-occupied homes in $1000s.

This is a well studied problem in machine learning. It is convenient to work with because all of the input and output attributes are numerical and there are 506 instances to work with. A sample of the first 5 rows of the 506 in the dataset is provided below:

0.00632 18.00 2.310 0 0.5380 6.5750 65.20 4.0900 1 296.0 15.30 396.90 4.98 24.00
0.02731 0.00 7.070 0 0.4690 6.4210 78.90 4.9671 2 242.0 17.80 396.90 9.14 21.60
0.02729 0.00 7.070 0 0.4690 7.1850 61.10 4.9671 2 242.0 17.80 392.83 4.03 34.70
0.03237 0.00 2.180 0 0.4580 6.9980 45.80 6.0622 3 222.0 18.70 394.63 2.94 33.40
0.06905 0.00 2.180 0 0.4580 7.1470 54.20 6.0622 3 222.0 18.70 396.90 5.33 36.20

Listing 12.1: Sample of the Boston House Price Dataset.

The dataset is available in the bundle of source code provided with this book. Alternatively, you can download this dataset (https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data) and save it to your current working directory with the file name housing.csv. Reasonable performance for models evaluated using Mean Squared Error (MSE) is around 20 in squared thousands of dollars (or $4,500 if you take the square root). This is a nice target to aim for with our neural network model. You can learn more about the Boston house price dataset on the UCI Machine Learning Repository (https://archive.ics.uci.edu/ml/datasets/Housing).

12.2 Develop a Baseline Neural Network Model

In this section we will create a baseline neural network model for the regression problem. Let's start off by importing all of the functions and objects we will need for this tutorial.

import numpy
import pandas
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline

Listing 12.2: Import Classes and Functions.
We can now load our dataset from a file in the local directory. The dataset is in fact not in CSV format on the UCI Machine Learning Repository; the attributes are instead separated by whitespace. We can load this easily using the Pandas library. We can then split the input (X) and output (Y) attributes so that they are easier to model with Keras and scikit-learn.

# load dataset
dataframe = pandas.read_csv("housing.csv", delim_whitespace=True, header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:13]
Y = dataset[:,13]

Listing 12.3: Load Dataset and Separate Into Input and Output Variables.

We can create Keras models and evaluate them with scikit-learn by using handy wrapper objects provided by the Keras library. This is desirable, because scikit-learn excels at evaluating models and will allow us to use powerful data preparation and model evaluation schemes with very few lines of code. The Keras wrapper class requires a function as an argument. This function, which we must define, is responsible for creating the neural network model to be evaluated.

Below we define the function to create the baseline model to be evaluated. It is a simple model that has a single fully connected hidden layer with the same number of neurons as input attributes (13). The network uses good practices such as the rectifier activation function for the hidden layer. No activation function is used for the output layer because it is a regression problem and we are interested in predicting numerical values directly without transform.

The efficient ADAM optimization algorithm is used and a mean squared error loss function is optimized. This will be the same metric that we will use to evaluate the performance of the model. It is a desirable metric because taking the square root of an error value gives us a result that we can directly understand in the context of the problem, with the units in thousands of dollars.

# define base model
def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(13, input_dim=13, init='normal', activation='relu'))
    model.add(Dense(1, init='normal'))
    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

Listing 12.4: Define and Compile a Baseline Neural Network Model.

The Keras wrapper object for use in scikit-learn as a regression estimator is called KerasRegressor. We create an instance and pass it both the name of the function to create the neural network model as well as some parameters to pass along to the fit() function of the model later, such as the number of epochs and batch size. Both of these are set to sensible defaults. We also initialize the random number generator with a constant random seed, a process we will repeat for each model evaluated in this tutorial. This is to ensure we compare models consistently and that the results are reproducible.

# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# evaluate model
estimator = KerasRegressor(build_fn=baseline_model, nb_epoch=100, batch_size=5, verbose=0)

Listing 12.5: Initialize Random Number Generator and Prepare Model Wrapper for scikit-learn.

The final step is to evaluate this baseline model. We will use 10-fold cross validation to evaluate the model.

kfold = KFold(n_splits=10, random_state=seed)
results = cross_val_score(estimator, X, Y, cv=kfold)
print("Baseline: %.2f (%.2f) MSE" % (results.mean(), results.std()))

Listing 12.6: Evaluate Baseline Model.

The full code listing is provided below for completeness.

# Regression Example With Boston Dataset: Baseline
import numpy
import pandas
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
# load dataset
dataframe = pandas.read_csv("housing.csv", delim_whitespace=True, header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:13]
Y = dataset[:,13]
# define base model
def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(13, input_dim=13, init='normal', activation='relu'))
    model.add(Dense(1, init='normal'))
    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# evaluate model
estimator = KerasRegressor(build_fn=baseline_model, nb_epoch=100, batch_size=5, verbose=0)
kfold = KFold(n_splits=10, random_state=seed)
results = cross_val_score(estimator, X, Y, cv=kfold)
print("Baseline: %.2f (%.2f) MSE" % (results.mean(), results.std()))

Listing 12.7: Multilayer Perceptron Model for Boston House Problem.

Running this code gives us an estimate of the model's performance on the problem for unseen data. The result reports the mean squared error, including the average and standard deviation (average variance) across all 10 folds of the cross validation evaluation.

Baseline: 38.04 (28.15) MSE

Listing 12.8: Sample Output From Evaluating the Baseline Model.
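Because the score is a mean squared error, taking its square root puts it back into thousands of dollars, as discussed earlier. A two-line sketch using the baseline figure above:

import numpy

mse = 38.04             # mean squared error from the baseline run above
print(numpy.sqrt(mse))  # about 6.17, i.e. roughly $6,200 in the units of the problem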
12.3 Lift Performance By Standardizing The Dataset

An important concern with the Boston house price dataset is that the input attributes all vary in their scales because they measure different quantities. It is almost always good practice to prepare your data before modeling it using a neural network model. Continuing on from the above baseline model, we can re-evaluate the same model using a standardized version of the input dataset.

We can use scikit-learn's Pipeline framework (http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) to perform the standardization during the model evaluation process, within each fold of the cross validation. This ensures that there is no data leakage from each test set cross validation fold into the training data. The code below creates a scikit-learn Pipeline that first standardizes the dataset and then creates and evaluates the baseline neural network model.

# Regression Example With Boston Dataset: Standardized
import numpy
import pandas
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
# load dataset
dataframe = pandas.read_csv("housing.csv", delim_whitespace=True, header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:13]
Y = dataset[:,13]
# define base model
def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(13, input_dim=13, init='normal', activation='relu'))
    model.add(Dense(1, init='normal'))
    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# evaluate model with standardized dataset
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasRegressor(build_fn=baseline_model, nb_epoch=50, batch_size=5, verbose=0)))
pipeline = Pipeline(estimators)
kfold = KFold(n_splits=10, random_state=seed)
results = cross_val_score(pipeline, X, Y, cv=kfold)
print("Standardized: %.2f (%.2f) MSE" % (results.mean(), results.std()))

Listing 12.9: Update To Use a Standardized Dataset.

Running the example provides an improved performance over the baseline model without standardized data, dropping the error by 10 thousand squared dollars.

Standardized: 28.24 (26.25) MSE

Listing 12.10: Sample Output From Evaluating the Model on The Standardized Dataset.

A further extension of this section would be to similarly apply a rescaling to the output variable, such as normalizing it to the range of 0 to 1 and using a sigmoid or similar activation function on the output layer to narrow output predictions to the same range.

12.4 Tune The Neural Network Topology

There are many concerns that can be optimized for a neural network model. Perhaps the point of biggest leverage is the structure of the network itself, including the number of layers and the number of neurons in each layer. In this section we will evaluate two additional network topologies in an effort to further improve the performance of the model. We will look at both a deeper and a wider network topology.

12.4.1 Evaluate a Deeper Network Topology

One way to improve the performance of a neural network is to add more layers. This might allow the model to extract and recombine higher order features embedded in the data. In this section we will evaluate the effect of adding one more hidden layer to the model. This is as easy as defining a new function that will create this deeper model, copied from our baseline model above. We can then insert a new line after the first hidden layer, in this case with about half the number of neurons. Our network topology now looks like:

13 inputs -> [13 -> 6] -> 1 output

Listing 12.11: Summary of Deeper Network Topology.

We can evaluate this network topology in the same way as above, whilst also using the standardization of the dataset that was shown above to improve performance.

# Regression Example With Boston Dataset: Standardized and Larger
import numpy
import pandas
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
# load dataset
dataframe = pandas.read_csv("housing.csv", delim_whitespace=True, header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:13]
Y = dataset[:,13]
# define the model
def larger_model():
    # create model
    model = Sequential()
    model.add(Dense(13, input_dim=13, init='normal', activation='relu'))
    model.add(Dense(6, init='normal', activation='relu'))
    model.add(Dense(1, init='normal'))
    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# evaluate model with standardized dataset
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasRegressor(build_fn=larger_model, nb_epoch=50, batch_size=5, verbose=0)))
pipeline = Pipeline(estimators)
kfold = KFold(n_splits=10, random_state=seed)
results = cross_val_score(pipeline, X, Y, cv=kfold)
print("Larger: %.2f (%.2f) MSE" % (results.mean(), results.std()))

Listing 12.12: Evaluate the Larger Neural Network Model.

Running this model does show a further improvement in performance, from 28 down to 24 thousand squared dollars.

Larger: 24.60 (25.65) MSE

Listing 12.13: Sample Output From Evaluating the Deeper Model.

12.4.2 Evaluate a Wider Network Topology

Another approach to increasing the representational capacity of the model is to create a wider network. In this section we evaluate the effect of keeping a shallow network architecture and nearly doubling the number of neurons in the one hidden layer. Again, all we need to do is define a new function that creates our neural network model. Here, we have increased the number of neurons in the hidden layer compared to the baseline model from 13 to 20. The topology for our wider network can be summarized as follows:

13 inputs -> [20] -> 1 output

Listing 12.14: Summary of Wider Network Topology.

We can evaluate the wider network topology using the same scheme as above.

# Regression Example With Boston Dataset: Standardized and Wider
import numpy
import pandas
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
# load dataset
dataframe = pandas.read_csv("housing.csv", delim_whitespace=True, header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:13]
Y = dataset[:,13]
# define wider model
def wider_model():
    # create model
    model = Sequential()
    model.add(Dense(20, input_dim=13, init='normal', activation='relu'))
    model.add(Dense(1, init='normal'))
    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# evaluate model with standardized dataset
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasRegressor(build_fn=wider_model, nb_epoch=100, batch_size=5, verbose=0)))
pipeline = Pipeline(estimators)
kfold = KFold(n_splits=10, random_state=seed)
results = cross_val_score(pipeline, X, Y, cv=kfold)
print("Wider: %.2f (%.2f) MSE" % (results.mean(), results.std()))

Listing 12.15: Evaluate the Wider Neural Network Model.

Building the model does see a further drop in error to about 21 thousand squared dollars. This is not a bad result for this problem.

Wider: 21.64 (23.75) MSE

Listing 12.16: Sample Output From Evaluating the Wider Model.

It would have been hard to guess that a wider network would outperform a deeper network on this problem. The results demonstrate the importance of empirical testing when it comes to developing neural network models.
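One way to build intuition here is to compare the number of trainable parameters in the two topologies. The arithmetic below (weights plus biases for each Dense layer) is a back-of-the-envelope sketch:

# deeper: 13 inputs -> 13 -> 6 -> 1
deeper = (13*13 + 13) + (13*6 + 6) + (6*1 + 1)
# wider: 13 inputs -> 20 -> 1
wider = (13*20 + 20) + (20*1 + 1)
print(deeper, wider)  # 273 vs 301: the wider net actually has slightly more parameters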
12.5 Summary

In this lesson you discovered the Keras deep learning library for modeling regression problems. Through this tutorial you learned how to develop and evaluate neural network models, including:

- How to load data and develop a baseline model.
- How to lift performance using data preparation techniques like standardization.
- How to design and evaluate networks with different topologies on a problem.

12.5.1 Next

This concludes Part III of the book and leaves you with the skills to develop neural network models on standard machine learning datasets. Next, in Part IV, you will learn how to get more from your neural network models with some advanced techniques and use some of the more advanced features of the Keras library.
Part IV

Advanced Multilayer Perceptrons and Keras
Chapter 13

Save Your Models For Later With Serialization

Given that deep learning models can take hours, days and even weeks to train, it is important to know how to save and load them from disk. In this lesson you will discover how you can save your Keras models to file and load them up again to make predictions. After completing this lesson you will know:

- How to save and load Keras model weights to HDF5 formatted files.
- How to save and load Keras model structure to JSON files.
- How to save and load Keras model structure to YAML files.

Let's get started.

13.1 Tutorial Overview

Keras separates the concerns of saving your model architecture and saving your model weights. Model weights are saved to HDF5 format. This is a grid format that is ideal for storing multi-dimensional arrays of numbers. The model structure can be described and saved (and loaded) using two different formats: JSON and YAML.

Each example will also demonstrate saving and loading your model weights to HDF5 formatted files. The examples will use the same simple network trained on the Pima Indians onset of diabetes binary classification dataset (see Section 7.2).

13.1.1 HDF5 Format

The Hierarchical Data Format, or HDF5 for short, is a flexible data storage format that is convenient for storing large arrays of real values, as we have in the weights of neural networks. You may need to install Python support for the HDF5 file format. You can do this using your preferred Python package management system, such as Pip:

sudo pip install h5py

Listing 13.1: Install Python Support For the HDF5 File Format via Pip.
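You can confirm that the installation worked with a quick import; any version string printed means HDF5 support is available:

import h5py
print(h5py.__version__)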
13.2 Save Your Neural Network Model to JSON

JSON is a simple file format for describing data hierarchically. Keras provides the ability to describe any model using JSON format with a to_json() function. This can be saved to file and later loaded via the model_from_json() function that will create a new model from the JSON specification.

The weights are saved directly from the model using the save_weights() function and later loaded using the symmetrical load_weights() function. The example below trains and evaluates a simple model on the Pima Indians dataset. The model structure is then converted to JSON format and written to model.json in the local directory. The network weights are written to model.h5 in the local directory.

The model and weight data is loaded from the saved files and a new model is created. It is important to compile the loaded model before it is used. This is so that predictions made using the model can use the appropriate efficient computation from the Keras backend. The model is evaluated in the same way, printing the same evaluation score.

# MLP for Pima Indians Dataset Serialize to JSON and HDF5
from keras.models import Sequential
from keras.layers import Dense
from keras.models import model_from_json
import numpy
import os
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, nb_epoch=150, batch_size=10, verbose=0)
# evaluate the model
scores = model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
# serialize model to JSON
model_json = model.to_json()
with open("model.json", "w") as json_file:
    json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")

# later...

# load json and create model
json_file = open('model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("model.h5")
print("Loaded model from disk")
# evaluate loaded model on test data
loaded_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
score = loaded_model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))

Listing 13.2: Serialize Model To JSON Format.

Running this example provides the output below. It shows first the accuracy of the trained model, the saving of the model to disk in JSON format, the loading of the model and finally the re-evaluation of the loaded model achieving the same accuracy.

acc: 79.56%
Saved model to disk
Loaded model from disk
acc: 79.56%

Listing 13.3: Sample Output From Serializing Model To JSON Format.

The JSON format of the model looks like the following:

{
  "class_name": "Sequential",
  "config": [
    {
      "class_name": "Dense",
      "config": {
        "W_constraint": null,
        "b_constraint": null,
        "name": "dense_1",
        "output_dim": 12,
        "activity_regularizer": null,
        "trainable": true,
        "init": "uniform",
        "input_dtype": "float32",
        "input_dim": 8,
        "b_regularizer": null,
        "W_regularizer": null,
        "activation": "relu",
        "batch_input_shape": [null, 8]
      }
    },
    {
      "class_name": "Dense",
      "config": {
        "W_constraint": null,
        "b_constraint": null,
        "name": "dense_2",
        "activity_regularizer": null,
        "trainable": true,
        "init": "uniform",
        "input_dim": null,
        "b_regularizer": null,
        "W_regularizer": null,
        "activation": "relu",
        "output_dim": 8
      }
    },
    {
      "class_name": "Dense",
      "config": {
        "W_constraint": null,
        "b_constraint": null,
        "name": "dense_3",
        "activity_regularizer": null,
        "trainable": true,
        "init": "uniform",
        "input_dim": null,
        "b_regularizer": null,
        "W_regularizer": null,
        "activation": "sigmoid",
        "output_dim": 1
      }
    }
  ]
}

Listing 13.4: Sample JSON Model File.

13.3 Save Your Neural Network Model to YAML

This example is much the same as the JSON example above, except the YAML format is used for the model specification. The model is described using YAML, saved to file model.yaml and later loaded into a new model via the model_from_yaml() function. Weights are handled in the same way as above, in HDF5 format as model.h5.

# MLP for Pima Indians Dataset serialize to YAML and HDF5
from keras.models import Sequential
from keras.layers import Dense
from keras.models import model_from_yaml
import numpy
import os
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model